How DRAM changed the world

(micron.com)

66 points | by sandwichsphinx 9 hours ago

34 comments

  • neom 8 hours ago

    I miss RAM. I feel like if you lived through that 90s RAM frenzy, you probably miss RAM too. It was crazy how quickly we moved through SDRAM/DDR: prices dropped and you could get real increases in performance year over year for not much money. I'm sure some of it was the software being able to capture the hardware improvements, but that was certainly my favorite period in tech so far.

    • gregmac 5 hours ago

      I am confused by this comment. You said "RAM" (in contrast to "DRAM" in the article title), but I think you are talking about DRAM sticks? Those have not gone away (other than in some laptops where the memory is soldered on and not upgradable).

      Going from 8MB to 32MB in the 90s is still comparable to going from 8GB to 32GB today.

      One difference is just that the price isn't dropping at the same rate anymore [1], so it doesn't make as much sense to buy small and re-buy next year when bigger chips are cheaper (they won't be much cheaper).

      Another is that DRAM speed is at the top of an S-curve [2], so there isn't that same increase in speed year over year, though arguably the early 2000s were when speeds increased most dramatically.

      [1] https://aiimpacts.org/trends-in-dram-price-per-gigabyte/

      [2] http://blog.logicalincrements.com/2016/03/ultimate-guide-com...

      • tomnipotent a minute ago

        Most RAM found in consumer PCs during the 90s was still DRAM, including SDRAM, EDO, and Rambus. I believe OP is just being nostalgic about the period when RAM upgrades were a very exciting thing, as hardware was changing very quickly in that era and each year felt considerably more capable than the last.

      • emptiestplace 3 hours ago

        > Going from 8MB to 32MB in the 90s is still comparable to going from 8GB to 32GB today.

        This statement makes it difficult to believe you were there.

        • CalRobert 2 hours ago

          How so?

          • phil21 2 hours ago

            8GB -> 32GB doesn't really give you a whole lot more than the ability to open more browser tabs or whatnot. Cache some more of the filesystem into memory, I suppose.

            8MB -> 32MB enabled entirely new categories of (consumer-level) software. It was a game changer for performance, as you were no longer swapping to (exceedingly slow) disk.

            They simply are not comparable, imo. 8MB to 32MB was a night-and-day difference, and you would drool over the idea of someday being able to afford such a luxury. 8GB to 32GB was, at least until very recently, a nice-to-have for power users.

            • winrid an hour ago

              Yeah, the equivalent nowadays would be like going from 256MB to 16GB.

              Even most AAA games will still run just fine on 8GB of RAM.

          • Dalewyn 12 minutes ago

            Going from an inflatable pool in your backyard to a community pool is world changing.

            Going from the Pacific Ocean to the Seven Seas is still lots of water.

          • tsobral an hour ago

            It's simple, and for context, I made both upgrades. 8GB -> 32GB: yeah, it feels a bit snappier, and I can finally load multiple VMs and a ton of containers - which rarely happens. 8MB -> 32MB: wow! I can finally play Age of Empires 2, and blue screen crashes dropped by half!

            The feeling isn't the same even remotely...

    • thr0w 6 hours ago

      Getting a new stick of RAM was so damn exciting in the 90s.

    • UltraSane 5 hours ago

      RAM speeds are still improving pretty fast. I'm running DDR5-6000, and DDR5-8300 is available. GDDR7 uses PAM3 signaling to get to 40Gbps.
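
      Back-of-envelope on the PAM3 part, as a sketch (three voltage levels per symbol instead of NRZ's two; the real GDDR7 encoding maps groups of bits to groups of symbols, but the per-symbol gain is the point):

        import math

        # NRZ: 2 levels -> 1 bit/symbol. PAM3: 3 levels -> log2(3) bits/symbol.
        nrz_bits = math.log2(2)
        pam3_bits = math.log2(3)
        print(f"PAM3 carries {pam3_bits:.2f} bits/symbol, "
              f"{pam3_bits / nrz_bits:.0%} of NRZ at the same symbol clock")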

      • malfist 5 hours ago

        How does that compare with the increased CAS latency, in real-world terms? (Actually asking, not being combative - I don't know.)

        • vbezhenar 5 hours ago

          Last time I checked, DDR5 and DDR4 latency was basically the same. Very little progress there. Maybe with DRAM and the CPU integrated on the same package, some latency wins will be available just because the wires will be shorter, but nothing groundbreaking.

          • sundvor an hour ago

            My DDR4 was CL16, but my DDR5 at CL30 makes up for that with sheer speed.

            Currently sporting a G.Skill Trident Z5 Neo RGB Black 64GB (2x32GB) PC5-48000 (6000MHz) kit with a 7800X3D.

            The previous kit was a G.Skill Trident Z Neo RGB 32GB (2x16GB) PC4-28800 (3600MHz) DDR4 kit at 16-16-16-36 [eventually x2, for 64GB total] with, you guessed it, the 5800X3D, upgraded from the 3900XT - my son loves it.

            • smolder 3 minutes ago

              To reiterate the GP's point, in case you didn't get it: DDR4-3200 CL16 is equivalent to DDR5-6000 CL30 or DDR5-6400 CL32 in terms of latency. Divide the frequency by the CAS latency and you get the same number for all of those. It was the same situation going from DDR3 to DDR4.
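
              Or in absolute terms, a quick sketch of the arithmetic (kit numbers are the ones above):

                def cas_ns(mts, cl):
                    # DDR transfers twice per clock, so clock period (ns) = 2000 / MT/s
                    return cl * 2000 / mts

                for kit, mts, cl in [("DDR4-3200", 3200, 16),
                                     ("DDR5-6000", 6000, 30),
                                     ("DDR5-6400", 6400, 32)]:
                    print(f"{kit} CL{cl}: {cas_ns(mts, cl):.1f} ns")  # all 10.0 ns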

    • IshKebab 7 hours ago

      Nah, the biggest jump in performance by far was SSDs. It was such a huge step that software had no chance to "catch up" initially.

      • szundi 2 hours ago

        It's happening, Windows gets slower every year to adapt to SSDs.

  • seunosewa 5 hours ago

    The article doesn't properly explain how DRAM is different from SRAM: DRAM has to constantly refresh itself in order not to 'forget' its contents.
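
    A toy model of the difference, if it helps (the leak rate is made up for illustration; the 64 ms refresh window is the usual JEDEC figure):

      LEAK_PER_MS = 0.005   # fraction of charge lost per ms (made up)
      THRESHOLD = 0.5       # below this, a stored 1 reads back as 0
      REFRESH_MS = 64       # refresh every cell within 64 ms

      def hold(ms, refresh):
          charge = 1.0
          for t in range(1, ms + 1):
              charge *= 1 - LEAK_PER_MS
              if refresh and t % REFRESH_MS == 0:
                  charge = 1.0          # sense amp reads and rewrites the cell
              if charge < THRESHOLD:
                  return f"bit lost at {t} ms"
          return f"bit intact after {ms} ms"

      print("no refresh:  ", hold(1000, False))   # decays past the threshold
      print("with refresh:", hold(1000, True))    # survives indefinitely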

  • MichaelZuo 8 hours ago

    Dennard scaling for SRAM has certainly halted, as demonstrated by TSMC’s 3nm process vs 5nm.

    What’s the likely ETA for DRAM?

    • hajile 4 hours ago

      Years ago.

      DRAM uses a capacitor, and those capacitors essentially hit a hard limit at around 400MHz for our traditional materials a very long time ago. This means that if you need to sequentially read random locations from RAM, you can't do it faster than 400MHz. Our only answer here is better AI prefetchers and less-random memory access patterns in our software (the penalty for not prefetching is so great that theoretically less efficient algorithms can suddenly become more efficient if they are simply more predictable).
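
      You can see the penalty from userland with numpy: the two passes below do the same work, but the random gather defeats prefetching (a sketch; exact ratios depend on the machine):

        import time
        import numpy as np

        n = 1 << 24                        # ~128 MB of int64, far bigger than cache
        arr = np.arange(n, dtype=np.int64)
        seq = np.arange(n)                 # sequential indices: prefetch-friendly
        rnd = np.random.permutation(n)     # random indices: every access a surprise

        for name, idx in [("sequential", seq), ("random", rnd)]:
            t0 = time.perf_counter()
            total = arr[idx].sum()         # same arithmetic, different access pattern
            print(f"{name}: {time.perf_counter() - t0:.3f} s")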

      As for capacitor sizes, we've been at the volume limit for quite a while. When the capacitor is discharged, we must amplify the charge, and that gets harder as the charge gets weaker; there's a fundamental limit to how small you can go. Right now, each capacitor has somewhere in the range of a mere 40,000 electrons holding the charge. Going lower dramatically increases the complexity of trying to tell the signal from the noise and of dealing with ever-increasing quantum effects.
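
      That figure falls out of Q = CV. A back-of-envelope check (the ~6.4 fF cell capacitance and 1 V storage voltage are assumptions picked to land in the right ballpark, not datasheet values):

        E_CHARGE = 1.602e-19   # elementary charge, coulombs
        cell_cap = 6.4e-15     # assumed cell capacitance, ~6.4 fF
        v_store = 1.0          # assumed storage voltage, volts

        electrons = cell_cap * v_store / E_CHARGE
        print(f"~{electrons:,.0f} electrons per cell")   # ~40,000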

      Packing capacitors closer together means a smaller diameter, but keeping the same volume then means making the cylinder taller. You quickly reach a point where even dramatic increases in height (something very complicated to do in silicon) buy only minuscule decreases in diameter.
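
      The geometry is unforgiving: for a fixed volume V = pi * r^2 * h, the radius only shrinks with the square root of the height, so halving the diameter costs 4x the height:

        import math

        V = 1.0                          # fixed capacitor volume (arbitrary units)
        for h in [1, 4, 16, 64]:
            r = math.sqrt(V / (math.pi * h))
            print(f"height {h:>2}x -> radius {r:.3f}")   # each 4x in height halves r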

      • aidenn0 2 hours ago

        If that's the case, why haven't we switched to SRAM? Isn't it only about 4x the price at any given process node?

        • RF_Savage 2 hours ago

          That 4x price difference also explains why it has not happened.

    • Salgat 4 hours ago

      5nm can hold roughly a gigabyte of SRAM on a CPU-sized die; that's around $130/GB, I believe. At some point 5nm will be cheap enough that we can start considering replacing DRAM with SRAM directly on the chip (i.e. an L4 cache). I wonder how big of a latency and bandwidth bonus that'd be. You could even go for a larger node size without losing much capacity, for half the price.
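
      Back-of-envelope for the gigabyte figure (the bit-cell area, die size, and array efficiency below are assumptions, not foundry numbers):

        bitcell_um2 = 0.021   # assumed ~5nm SRAM bit cell area, um^2
        die_mm2 = 600         # assumed large "CPU-sized" die
        efficiency = 0.5      # assumed array overhead: sense amps, decoders, wiring

        bits = die_mm2 * 1e6 / bitcell_um2 * efficiency
        print(f"~{bits / 8 / 2**30:.1f} GiB of SRAM")   # roughly a gigabyte-plus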

    • wmf 8 hours ago

      Now? Prices have been flat for 15 years and DRAM has been stuck on 10 nm for a while.

      • philipkglass 7 hours ago

        That's overstating the flatness of prices. In 2009, the best price recorded here was 10 dollars per gigabyte:

        https://jcmit.net/memoryprice.htm

        Recently, DDR4 RAM has been available at well under $2/GB, some of it closer to $1/GB.

        • jychang 5 hours ago

          $1/GB? That's around the price SSDs took over from HDDs...

    • ksec 8 hours ago

      Not soon, as DRAM is mostly on older nodes. But the overall cost reduction of DRAM is moving very, very slowly.

  • metta2uall 5 hours ago

    "8K video recording" - does anyone really need this? Seems like for negligible gain in quality people are pushed to sacrifice their storage & battery, and so upgrade their hardware sooner...

    • szundi 2 hours ago

      Yes: they record at higher resolutions, and then the director and the camera operator have greater flexibility later when they realize they need a different framing - or just to fix the cameraman's errors that cut parts of the picture out. They need the extra pixels/captured area to be able to do this.

    • scheme271 4 hours ago

      I think studios and anyone doing video production would probably use an 8K toolchain if possible. As others have pointed out, it lets you crop and modify video while still being able to output 4K without having to upscale.

    • Pet_Ant 4 hours ago

      Well, for starters, 8K video lets you zoom in and crop and still get 4K in the end.
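
      The arithmetic, using the standard UHD frame sizes:

        src_w, src_h = 7680, 4320   # 8K UHD
        out_w, out_h = 3840, 2160   # 4K UHD

        print(f"pan range: {src_w - out_w} px horizontal, {src_h - out_h} px vertical")
        print(f"max zoom with no upscaling: {src_w / out_w:.1f}x")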

      • metta2uall 3 hours ago

        I think 4K is also too much in the vast majority of cases...

        • aidenn0 2 hours ago

          It's insufficient for large theaters and barely sufficient for medium theaters. 1080p is plenty for most home theaters (though given the so-so quality of streaming, I wonder to what degree macroblocks are the new pixels).

    • lightedman 35 minutes ago

      I need more than 8K. I'm working at microscopic levels when I study minerals, and I need as much resolution as I can possibly get, up to the limit of optical diffraction.