The RAM shortage comes for us all

(jeffgeerling.com)

216 points | by speckx 2 hours ago

252 comments

  • MPSimmons 2 hours ago

    This reminds me of the recent LaurieWired video presenting a hypothetical of, "what if we stopped making CPUs": https://www.youtube.com/watch?v=L2OJFqs8bUk

    Spoiler, but the answer is basically that old hardware rules the day because it lasts longer and is more reliable over timespans of decades.

    DDR5 32GB is currently going for ~$330 on Amazon

    DDR4 32GB is currently going for ~$130 on Amazon

    DDR3 32GB is currently going for ~$50 on Amazon (4x8GB)

    For anyone where cost is a concern, using older hardware seems like a particularly easy choice, especially if a person is comfortable with a Linux environment, since the massive droves of recently retired Windows 10 incompatible hardware works great with your Linux distro of choice.

    • xboxnolifes 33 minutes ago

      Unfortunately, older RAM also means an older motherboard, which also means an older socket and older CPUs. It works, but it's not usually a drop-in replacement.

      • maximilianthe1 22 minutes ago

        Can't you use DDR3 in a DDR5-compatible board?

        • forthefuture 19 minutes ago

          Unfortunately not, each version of RAM uses a different physical slot.

    • fullstop 2 hours ago

      If everyone went for DDR4 and DDR3, surely the cost would go up. There is no additional supply there, as they are no longer being made.

      • phil21 an hour ago

        Currently trying to source a large amount of DDR4 to upgrade a 3 year old fleet of servers at a very unfortunate time.

        It's very difficult to source in quantity, and is going up in price more or less daily at this point. Vendor quotes are good for hours, not days when you can find it.

        • fullstop an hour ago

          We're looking to buy 1TB of DDR5 for a large database server. I'm pretty sure that we're just going to pay the premium and move on.

          • georgeburdell an hour ago

            And that’s why everybody’s B2B these days. The decision-making people at companies are not spending their own money

      • acdha 2 hours ago

        At some point that’s true, but don’t they run the n-1 or 2 generation production lines for years after the next generation launches? There’s a significant capital investment there and my understanding is that the cost goes down significantly over the lifetime as they dial in the process so even though the price is lower it’s still profitable.

        • fullstop an hour ago

          Unless plans have changed, the foundries making DDR4 are winding down, with the last shipments going out as we speak.

        • mlyle an hour ago

          This is only true as long as there's enough of a market left. You tend to end up with oversupply and excessive inventory during the transition, and that pushes profit margins negative and removes all the supply pretty quickly.

      • MPSimmons 2 hours ago

        Undoubtedly the cost would go up, but nobody is building out datacenters full of DDR4 either, so I don't figure it would go up nearly as much as DDR5 has right now.

        • fullstop an hour ago

          https://pcpartpicker.com/trends/price/memory/

          You can see the cost rise of DDR4 here.

          • MPSimmons an hour ago

            Awesome charts, thanks! I think it bears out that the multiplier for older hardware isn't as extreme as the newer hardware, right?

            • fullstop an hour ago

              ~2.8x for DDR4, ~3.6x for DDR5. DDR5 is still being made, though, so it will be interesting to see how it changes in the future.

              Either way, it's going to be a long few years at the least.

              • oskarkk an hour ago

                Unless the AI bubble pops.

                • bombcar an hour ago

                  One possible outcome is the remaining memory manufacturers have dedicated all their capacity for AI and when the bubble pops, they lose their customer and they go out of business too.

          • parineum an hour ago

            That's the average price for new DDR4, which has dwindling supply. Meanwhile, used DDR4 is being retired from both desktops and data centers.

            • fullstop an hour ago

              DDR4 production is winding down or done. Only "new old stock" will remain, or used DDR4 modules. Good luck buying that in quantity.

    • bullen an hour ago

      Yes, DDR3 has the lowest CAS latency and lasts a lot longer.

      Just like SSDs from 2010 have 100,000 writes per bit instead of below 10,000.

      CPUs might even follow the same durability pattern but that remains to be seen.

      Keep your old machines alive and backed up!

      • 0manrho 2 minutes ago

        > 100,000 writes per bit

        per cell*

        Also, that SSD example is wildly untrue, especially in the context of available capacity at the time. You CAN get SSDs with mind-boggling write endurance per cell AND multitudes more cells, resulting in vastly more durable media than what was available pre-2015. Not everything is a TLC/QLC 0.3 DWPD disposable drive like what has become standard in the consumer space.

        Regarding CPUs, they still follow that durability pattern if you unfuck what Intel and AMD are doing with boosting behavior and limit them to perform with the margins that they used to "back in the day". This is more of a problem on the consumer side (Core/Ryzen) than the enterprise side (Epyc/Xeon). It's also part of why the OC market is dying (save for maybe the XOC market that is having fun with LN2): those CPUs (especially consumer ones) come from the factory with much less margin for pushing things, because they're already close to their limit without exceedingly robust cooling.

        I have no idea what the relative durability of RAM is, tbh; it's been pretty bulletproof in my experience over the years, or at least bulletproof enough for my use cases that I haven't really noticed a difference. The notable exception is what I see in GPUs, but that is largely heat-death related and often a result of poor QA by the AIB that made it (e.g., thermal pads not making contact with the GDDR modules).

      • mrob an hour ago

        CAS latency is specified in cycles and clock rates are increasing, so despite the number getting bigger there's actually been a small improvement in latency with each generation.

        • bullen 24 minutes ago

          Not for small amounts of data.

          Bandwidth increases, but if you only need a few bytes, DDR3 is faster.

          Also slower speed means less heat and longer life.

          You can feel the speed advantage by just moving the mouse on a DDR3 PC...

          • mrob 14 minutes ago

            RAM latency doesn't affect mouse response in any perceptible way. The fastest gaming mice I know of run at 8000Hz, so that's 125000ns between samples, much bigger than any CAS latency. And most mice run substantially slower.

            Maybe your old PC used lower-latency GUI software, e.g. uncomposited Xorg instead of Wayland.
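
            (Back-of-the-envelope in Python on the latency point; the ~70 ns figure for a full DRAM random access is an assumption, and exact numbers vary by kit:)

              polling_rate_hz = 8000                        # fastest gaming mice
              polling_interval_ns = 1e9 / polling_rate_hz   # 125,000 ns between samples
              dram_random_access_ns = 70                    # assumed full random-access latency
              print(polling_interval_ns / dram_random_access_ns)  # ~1786x: RAM isn't the bottleneck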

      • hajile an hour ago

        CAS latency doesn't matter so much as total random-access latency in ns and the raw clock speed of the individual RAM cells. If you are accessing the same cell repeatedly, RAM hasn't gotten faster in years (since around DDR2, IIRC).

    • diabllicseagull 2 hours ago

      A 2x16GB DDR4 kit I bought in 2020 for $160 is now $220. Older memory is relatively cheap, but not cheaper than before at all.

      • shoo 21 minutes ago

        I wondered how much of this is inflation -- after adjusting for CPI inflation, $160 in 2020 is worth $200 in today's dollars [$], so the price of that ddr4 kit is 10% higher in real terms.

        [$] https://www.usinflationcalculator.com/
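
        (Rough check in Python; the ~25% cumulative CPI figure is my approximation of what that calculator reports:)

          price_2020 = 160.0                 # kit price in 2020 (USD)
          price_now = 220.0                  # kit price today (USD)
          cpi_inflation_2020_to_now = 0.25   # assumed ~25% cumulative CPI increase
          price_2020_in_todays_dollars = price_2020 * (1 + cpi_inflation_2020_to_now)  # ~$200
          print(f"{price_now / price_2020_in_todays_dollars - 1:+.0%} in real terms")  # ~+10%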

        • fullstop 14 minutes ago

          USD is also weaker than it was in the past.

    • thedangler an hour ago

      Nice. My current PC uses DDR4. Time to dust off my 2012 PC and put Linux on it.

    • christkv 2 hours ago

      A friend built a new rig and went with DDR4 and a 5800x3d just because of this as he needed a lot of ram.

  • Loic 2 hours ago

    I think the OpenAI deal to lock up wafers was a wonderful coup. OpenAI is increasingly losing ground against the regularity[0] of the improvements coming from Anthropic, Google, and even the open-weights models. By creating a choke point at the hardware level, OpenAI can prevent the competition from increasing their reach because of the lack of hardware.

    [0]: For me this is really an important part of working with Claude: the model improves with time but stays consistent. Its "personality", or whatever you want to call it, has been really stable over the past versions, which allows a very smooth transition from version N to N+1.

    • hodgehog11 2 hours ago

      I don't see this working for Google though, since they make their own custom hardware in the form of the TPUs. Unless those designs include components that are also susceptible?

      • jandrese an hour ago

        That was why OpenAI went after the wafers, not the finished products. By buying up the supply of the raw materials they bottleneck everybody, even unrelated fields. It's the kind of move that requires a true asshole to pull off, knowing it will give your company an advantage but screw up life for literally billions of people at the same time.

      • frankchn 2 hours ago

        TPUs use HBM, which is impacted.

      • UncleOxidant an hour ago

        Even their TPU-based systems need RAM.

      • bri3d 2 hours ago

        Still susceptible, TPUs need DRAM dies just as much as anything else that needs to process data. I think they use some form of HBM, so they basically have to compete alongside the DDR supply chain.

    • Grosvenor 2 hours ago

      Could this generate pressure to produce less memory hungry models?

      • hodgehog11 an hour ago

        There has always been pressure to do so, but there are fundamental bottlenecks in performance when it comes to model size.

        What I can think of is that there may be a push toward training for exclusively search-based rewards so that the model isn't required to compress a large proportion of the internet into their weights. But this is likely to be much slower and come with initial performance costs that frontier model developers will not want to incur.

        • thisrobot an hour ago

          I wonder if this maintains the natural language capabilities, which are what make LLMs magic to me. There is probably some middle ground, but not having to know which expressions or idiomatic speech an LLM will understand is really powerful from a user experience point of view.

        • Grosvenor an hour ago

          Yeah that was my unspoken assumption. The pressure here results in an entirely different approach or model architecture.

          If OpenAI is spending $500B, then someone can get ahead by spending $1B on something that improves the model by >0.2%.

          I bet there's a group or three that could improve results a lot more than 0.2% with $1B.

        • UncleOxidant an hour ago

          Or maybe models that are much more task-focused? Like models that are trained on just math & coding?

        • jiggawatts 33 minutes ago

          > exclusively search-based rewards so that the model isn't required to compress a large proportion of the internet into their weights.

          That just gave me an idea! I wonder how useful (and for what) a model would be if it was trained using a two-phase approach:

          1) Put the training data through an embedding model to create a giant vector index of the entire Internet.

          2) Train a transformer LLM, but instead of only utilising its weights, it can also do lookups against the index.

          It's like an MoE where one (or more) of the experts is a fuzzy Google search.

          The best thing is that adding up-to-date knowledge won’t require retraining the entire model!
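
          (A minimal sketch of the lookup half of that idea, with plain numpy standing in for the "fuzzy Google search"; the embedding model, the chunking, and how retrieved chunks get fed back into the transformer are all hand-waved assumptions here:)

            import numpy as np

            # Phase 1: pretend every document chunk has already been embedded.
            num_chunks, dim = 100_000, 256
            index = np.random.randn(num_chunks, dim).astype(np.float32)   # stand-in vector index
            index /= np.linalg.norm(index, axis=1, keepdims=True)         # normalise for cosine similarity

            def fuzzy_search(query_vec, k=5):
                """Return the indices of the k nearest chunks (the 'expert' that is a fuzzy search)."""
                scores = index @ (query_vec / np.linalg.norm(query_vec))
                return np.argsort(-scores)[:k]

            # Phase 2 (not shown): the transformer would call fuzzy_search() during its forward
            # pass and attend over the retrieved chunks instead of memorising them in its weights,
            # so adding up-to-date knowledge just means embedding new documents into the index.
            print(fuzzy_search(np.random.randn(dim).astype(np.float32)))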

        • parineum an hour ago

          > so that the model isn't required to compress a large proportion of the internet into their weights.

          The knowledge compressed into an LLM is a byproduct of training, not a goal. Training on internet data teaches the model to talk at all. The knowledge and ability to speak are intertwined.

      • lofaszvanitt an hour ago

        Of course, and then watch those companies get reined in.

    • Phelinofist 25 minutes ago

      > By creating a choke point at the hardware level, OpenAI can prevent the competition from increasing their reach because of the lack of hardware

      I already hate OpenAI, you don't have to convince me

    • hnuser123456 2 hours ago

      Sure, but if the price is being inflated by inflated demand, then the suppliers will just build more factories until they hit a new, higher optimal production level, and prices will come back down, and eventually process improvements will lead to price-per-GB resuming its overall downtrend.

      • malfist 2 hours ago

        Micron has said they're not scaling up production. Presumably they're afraid of being left holding the bag when the bubble does pop

        • fullstop an hour ago

          Why are they building a foundry in Idaho?

          https://www.micron.com/us-expansion/id

          • delfinom an hour ago

            Future demand aka DDR6.

            The 2027 timeline for the fab is when DDR6 is due to hit market.

          • roboror 25 minutes ago

            I mean it says on the page

            >help ensure U.S. leadership in memory development and manufacturing, underpinning a national supply chain and R&D ecosystem.

            It's more political than supply based

        • Analemma_ 2 hours ago

          Not just Micron, SK Hynix has made similar statements (unfortunately I can only find sources in Korean).

          DRAM manufacturers got burned multiple times in the past scaling up production during a price bubble, and it appears they've learned their lesson (to the detriment of the rest of us).

        • mindslight 40 minutes ago

          Hedging is understandable. But what I don't understand is why they didn't hedge by keeping Crucial around but more dormant (higher prices, fewer SKUs, etc.)

          • daemonologist 16 minutes ago

            The theory I've heard is built on the fact that China (CXMT) is starting to properly get into DRAM manufacturing - Micron might expect that to swamp the low end of the market, leaving Crucial unprofitable regardless, so they might as well throw in the towel now and make as much money as possible from AI/datacenter (which has bigger margins) while they can.

            But yeah even if that's true I don't know why they wouldn't hedge their bets a bit.

      • nutjob2 2 hours ago

        Memory fabs take billions of dollars and years to build, and the memory business is a tough one where losses are common, so no such relief is in sight.

        With a bit of luck OpenAI collapses under its own weight sooner rather than later; otherwise we're screwed for several years.

      • mholm 2 hours ago

        Chip factories need years of lead time, and manufacturers might be hesitant to take on new debt in a massive bubble that might pop before they ever see any returns.

    • lysace an hour ago

      Please explain to me like I am five: Why does OpenAI need so much RAM?

      2024 production was (according to openai/chatgpt) 120 billion gigabytes. With 8 billion humans that's about 15 GB per person.

      • GistNoesis 10 minutes ago

        What they need is not so much memory but memory bandwidth.

        For training, their models need a certain amount of memory to store the parameters, and this memory is touched for every example of every iteration. Big models have 10^12 (>1T) parameters, and with typical values of 10^3 examples per batch and 10^6 iterations, they need ~10^21 memory accesses per run. And they want to do multiple runs.

        DDR5 RAM bandwidth is ~100 GB/s = 10^11 B/s; graphics RAM (HBM) is ~1 TB/s = 10^12 B/s. By buying the wafers they get to choose which types of memory they get.

        10^21 / 10^12 = 10^9 s ≈ 30 years of memory accesses just to update the model weights; you also need to add a factor of 10^1-10^3 to account for the memory accesses needed for the model computation.

        But the good news is that it parallelizes extremely well. If you replicate your 1T parameters 10^3 times, your run time is brought down to 10^6 s ≈ 12 days. But then you need 10^3 * 10^12 = 10^15 bytes of RAM per run for weight updates and 10^18 for computation (your 120 billion gigabytes is 10^20, so not so far off).

        Are all these memory accesses technically required? No, if you use other algorithms, but more compute and memory is better if money is not a problem.

        Is it strategically good to deprive your competitors of access to memory? In a very short-sighted way, yes.

        It's a textbook cornering of the computing market to prevent the emergence of local models, because customers won't be able to buy the minimum RAM necessary to run the models locally, even just for the inference part (not the training). Basically a war on people, where little Timmy won't be able to get a RAM stick to play computer games at Xmas.
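
        (The same arithmetic as a Python sketch, with the order-of-magnitude assumptions above pulled out as variables:)

          params = 1e12          # ~1T parameters (assumed)
          batch = 1e3            # examples per batch (assumed)
          iterations = 1e6       # training iterations (assumed)
          accesses = params * batch * iterations        # ~1e21 memory touches per run

          hbm_bandwidth = 1e12   # bytes/s for HBM-class memory
          serial_seconds = accesses / hbm_bandwidth     # ~1e9 s
          parallelism = 1e3      # replicate the weights 1e3 ways
          print(serial_seconds / 3.15e7, "years serial")                       # ~30 years
          print(serial_seconds / parallelism / 86400, "days parallelized")     # ~12 days
          print(parallelism * params, "bytes of RAM just for weight copies")   # ~1e15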

      • mebassett an hour ago

        Large language models are large and must be loaded into memory to train, or to use for inference if we want to keep them fast. Older models like GPT-3 have around 175 billion parameters; at float32 that comes out to something like 700GB of memory. Newer models are even larger, and OpenAI wants to run them as consumer web services.
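
        (Quick sketch of that sizing, assuming the 175B figure above and standard precisions:)

          params = 175e9   # GPT-3-scale parameter count
          for precision, bytes_per_param in [("float32", 4), ("float16", 2), ("int8", 1)]:
              print(f"{precision}: ~{params * bytes_per_param / 1e9:,.0f} GB for the weights alone")
          # float32 -> ~700 GB; activations, KV caches and optimizer state add more on top.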

        • lysace an hour ago

          I mean, I know that much. The numbers still don't make sense to me. How is my internal model this wrong?

          For one, if this was about inference, wouldn't the bottleneck be the GPU computation part?

          • ssl-3 28 minutes ago

            Concurrency?

            Suppose some parallelized, distributed task requires 700GB of memory per node to accomplish (I don't know if it does or does not), and that speed is a concern.

            A singular pile of memory that is 700GB is insufficient not because it lacks capacity, but instead because it lacks scalability. That pile is only enough for 1 node.

            If more nodes were added to increase speed but they all used that same single 700GB pile, then RAM bandwidth (and latency) gets in the way.

      • daemonologist 27 minutes ago

        The conspiracy theory (which, to be clear, may be correct) is that they don't actually need so much RAM, but they know they and all their competitors do still need quite a bit of RAM. By buying up all the memory supply they can, for a while, keep everyone else from being able to add compute capacity/grow their business/compete.

    • codybontecou 2 hours ago

      This became very clear with the outrage, rather than excitement, at forcing users to upgrade to ChatGPT-5 over 4o.

  • radicality 2 hours ago

    Think the article should also mention how OpenAI is likely responsible for it. Good article I found from another thread here yesterday: https://www.mooreslawisdead.com/post/sam-altman-s-dirty-dram...

    • diabllicseagull 2 hours ago

      Yes. On the Moore's Law Is Dead podcast they were talking about rumors that some 'AI enterprise company's representatives' were trying to buy memory in bulk from brick-and-mortar stores. In some cases OpenAI was mentioned. Crazy if true. Also interesting considering none of those would be ECC certified, like what you would opt for in a commercial server.

  • RachelF 2 hours ago

    Perhaps we'll have to start optimizing software for performance and RAM usage again.

    I look at MS Teams currently using 1.5GB of RAM doing nothing.

    • alliao an hour ago

      Now that's a word I haven't heard in a while... optimising.

    • 650REDHAIR an hour ago

      I truly hate how bloated and inefficient MS/Windows is.

      My hope is that with the growing adoption of Linux that MS takes note...

      • vanviegen 38 minutes ago

        That's a very indirect way of enjoying the benefits of Linux.

      • bluGill 15 minutes ago

        Can we get web sites to optimize as well? I use a slower laptop and a lot of sites are terrible. My old SPARCstation (40MHz) had a better web experience in 1997, because back then people cared about this.

  • nish__ 2 hours ago

    Anyone want to start a fab with me? We can buy an ASML machine and figure out the rest as we go. Toronto area btw

    • Reason077 2 hours ago

      A dozen or so well-resourced tech titans in China are no doubt asking themselves this same question right now.

      Of course, it takes quite some time for a fab to go from an idea to mass production. Even in China. Expect prices to drop 2-3 years from now when all the new capacity comes online?

      • nish__ an hour ago

        My napkin math:

        According to my research, these machines can etch around 150 wafers per hour and each wafer can fit around 50 top-of-the-line GPUs. This means we can produce around 7500 AI chips per hour. Sell them for $1k a piece. That's $7.5 million per hour in revenue. Run the thing for 3 days and we recover costs.

        I'm sure there's more involved but that sounds like a pretty good ROI to me.
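
        (The napkin math with the big assumptions exposed as knobs, so you can see how fast it falls apart if yield or throughput is less than perfect:)

          wafers_per_hour = 150      # assumed litho throughput
          dies_per_wafer = 50        # assumed candidate chips per wafer
          price_per_chip = 1_000     # assumed selling price, USD
          yield_fraction = 1.0       # the optimistic part
          machine_cost = 400e6       # rough lithography machine price, USD

          revenue_per_hour = wafers_per_hour * dies_per_wafer * yield_fraction * price_per_chip
          print(f"${revenue_per_hour / 1e6:.1f}M/hour")                              # $7.5M/hour
          print(f"{machine_cost / revenue_per_hour / 24:.1f} days to pay off the machine")
          # Ignores the building, process development, packaging, test, staff, materials...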

        • q3k 15 minutes ago

          A photolithography machine doesn't etch anything (well, some EUV machines do it as an unwanted side effect because of plasma generation), it just patterns some resist material. Etching is happening elsewhere. Also, keep in mind, you'll need to do multiple passes through a photolithography machine to pattern different steps of the process - it's not a single pass thing.

        • bluGill 21 minutes ago

          The catch is that if you started today with plenty of money (billions of dollars!) and could hire the right experts as you need them (this is a big if!), there would still be a couple of years between today and producing 150 wafers per hour. So the question isn't what the math looks like today, it's what the math looks like in 2 years - and if you could answer that, why didn't you start two years ago so you could get the current prices?

        • shadowpho 34 minutes ago

          What about the $10b to build the facility (including clean air/water/chemicals/etc)?

          • nish__ 20 minutes ago

            Rent a warehouse.

            • ghc 11 minutes ago

              It would be cheaper to bulldoze the warehouse and start over.

        • Keyframe an hour ago

          that's 100% yield which ain't happening

          • warmwaffles 44 minutes ago

            Not with that attitude.

          • nish__ 44 minutes ago

            What would you expect yield to be?

            • danparsonson 27 minutes ago

              With no prior experience? 0%. Those machines are not just like printers :-)

              • nish__ 17 minutes ago

                We'll have to gain some experience then :)

                • danparsonson 4 minutes ago

                  Sure - once you have dozens of engineers and 5 years under your belt you'll be good to go!

                  This will get you started: https://youtu.be/B2482h_TNwg

                  Keep in mind that every wafer makes multiple trips around the fab, and on each trip it visits multiple machines. Broadly, one trip lays down one layer, and you may need 80-100 layers (although I guess DRAM will be fewer). Each layer must be aligned to nanometer precision with previous layers, otherwise the wafer is junk.

                  Then as others have said, once you finish the wafer, you still need to slice it, test the dies, and then package them.

                  Plus all the other stuff....

                  You'll need billions in investment, not millions - good luck!

              • vel0city 11 minutes ago

                Especially when the plan is to just run them in a random rented commercial warehouse.

                I drive by a large fab most days of the week. A few breweries I like are down the street from a few small boutique fabs. I got to play with some experimental fab equipment in college. These aren't just some quickly thrown together spaces in any random warehouse.

                And it's also ignoring the wafer manufacturing process, and having the right supply chain to receive and handle these ultra-clean discs without introducing lots of gunk into your space.

            • mindslight 38 minutes ago

              Sounds like the kind of question ChatGPT would be good at answering...

      • dylan604 2 hours ago

        At that point, it'll be the opposite problem as more capacity than demand will be available. These new fabs won't be able to pay for themselves. Every tic receives a tok.

      • Keyframe an hour ago

        it's just a bunch of melted sand. How hard can it be?

      • UncleOxidant an hour ago

        I think it would be more like 5-7 years from now if they started breaking ground on new fabs today.

      • umanwizard 2 hours ago

        China cannot buy ASML machines. All advanced semiconductor manufacturing in China is done with stockpiled ASML machines from before the ban.

        • tooltalk an hour ago

          That restriction is only for the most advanced systems. According to ASML's Q3 2025 filing, 42% of all system sales went to China.

          SK Hynix also has a significant memory manufacturing presence in China, about 40% of the company's entire DRAM capacity.

        • phil21 an hour ago

          Would you really need ASML machines to do DDR5 RAM? Honest question, but I figured there was competition for the non-bleeding edge - perhaps naively so.

          • nish__ 42 minutes ago

            Yes. You need 16nm or better for DDR5.

        • squigz an hour ago

          As someone who knows next to nothing about this space, why can China not build their own machines? Is ASML the only company making those machines? If so, why? Is it a matter of patents, or is the knowledge required for this so specialized only they've built it up?

          • bluGill 17 minutes ago

            They can - if they are willing to invest a lot of money over several years. The US got nuclear bombs in a few years during WWII with this thinking, and China (or anyone else) could too. This problem might be harder than a bomb, but the point remains: all it takes is a willingness to invest.

            Of course the problem is we don't see what would be missed by making this investment. If you put extra people into solving this problem, that means fewer people curing cancer or whatever. (China has a lot of people, but not unlimited.)

          • nish__ 38 minutes ago

            Yes. ASML is the only company making these machines. And it's both: they own thousands of patents and are also the only ones with the institutional knowledge required to build them anyway.

            • baq 16 minutes ago

              I thought the Fermi paradox was about nukes; I increasingly think it's about chips.

    • GeekFortyTwo 2 hours ago

      As someone who has no skills in the space, no money, and lives near Ottawa: I'd love to help start a fab in Ontario.

      • nish__ 2 hours ago

        Right on, partner. I think there's a ton of demand for it tbh

        I'll do the engineering so we're good on that front. Just need investors.

        • mindcrime an hour ago

          I think I have a few dollars left on a Starbucks gift card. I'm in!

    • panzagl 41 minutes ago

      Hear me out- artisanal DRAM.

    • asjir an hour ago

      But what if it's a bubble driven by speculation?

      It wouldn't pay off.

      Starting a futures exchange on RAM chips, on the other hand...

    • jacquesm 2 hours ago

      I hope you have very deep pockets. But I'm cheering you on from the sidelines.

      • nish__ 2 hours ago

        Just need a half billion in upfront investment. And thank you for the support :)

        • dylan604 2 hours ago

          So, playing the Mega Powerball are you?

          • nish__ an hour ago
            • jvdvegt 40 minutes ago

              I suppose it would help if I could read the whole page: I cannot see the left few characters on Firefox on Android. What did you make this with?

              Note that fixing the site won't increase my chances of donating, I'm from the ASML country ;)

              • nish__ 26 minutes ago

                That's annoying... I made it with Next.js and Tailwind CSS tho. Hosted on Vercel.

            • dylan604 an hour ago

              wonder which one will find a winner faster...

              • nish__ an hour ago

                It's a toss up.

      • tmaly 2 hours ago

        that is an understatement

    • bhhaskin 2 hours ago

      Only if you put up the 10 billion dollars.

      • nish__ 2 hours ago

        Machines are less than 400 million.

        • q3k 35 minutes ago

          You're just talking about a lithography machine. Patterning is one step out of thousands in a modern process (albeit an important one). There's plenty more stuff needed for a production line; this isn't a "3D printer but for chips". And that's just for the FEOL stuff, then you still need to do BEOL :). And packaging. And testing (accelerated/environmental, too). And failure analysis. And...

          Also, you know, there's a whole process you'll need to develop. So prepare to be not making money (but spending tons of it on running the lines) until you have a well tested PDK.

          • baq 15 minutes ago

            > 3D printer but for chips

            how about a farm of electron microscopes? these should work

        • pixl97 2 hours ago

          And the cost of the people to run those machines, and the factories that are required to run the machines?

          • nish__ an hour ago

            I'll do it for free. And I'm sure we could rent a facility.

          • waynesonfire an hour ago

            Don't forget RAM.

    • deuplonicus 2 hours ago

      Sure, we can work on bringing TinyTapeout to a modern fab

  • seanalltogether an hour ago

    I wonder if these RAM shortages are going to cause the Steam Machine to be dead on arrival. Valve is probably not a big enough player to have secured production guarantees like Sony or Nintendo would have. If they try to launch with a price tag over $750, they're probably not going to sell a lot.

    • pja an hour ago

      Yeah, I think (sadly) this kills the Steam Machine in the short term if the competition is current consoles.

      At least until the supply contracts Sony & Microsoft have signed come up for renewal, at which point they’re going to be getting the short end of the RAM stick too.

      Indeed, in the short term the RAM shortage is going to kill homebrew PC building & small PC builders stone dead - prebuilts from the larger suppliers will be able to outcompete them on price so much that it simply won’t make any sense to buy from anyone except HP, Dell, etc. Again, this only applies until the supply contracts those big PC firms have signed run out, or possibly until their suppliers find they can’t source DDR5 RAM chips for love nor money because the fabs are only making HBM chips, and so have to break the contracts themselves.

      It’s going to get bumpy.

      • fullstop an hour ago

        > At least until the supply contracts Sony & Microsoft have signed come up for renewal, at which point they’re going to be getting the short end of the RAM stick too.

        Allegedly Sony has an agreement for a number of years, but Microsoft does not: https://thegamepost.com/leaker-xbox-series-prices-increase-r...

        • pja an hour ago

          Eesh.

          The fight over RAM supply is going to upend a lot of product markets. Just random happenstance over whether a company decided to lock in supply for a couple of years is going to make or break individual products.

    • delfinom 44 minutes ago

      If anyone has hidden cash reserves that could buy out even Apple, it would probably be Valve.

      Lol, wacky reality if they say "hey we had spare cash so we bought out Micron to get DDR5 for our gaming systems"

  • mastax an hour ago

    > And those companies all realized they can make billions more dollars making RAM just for AI datacenter products, and neglect the rest of the market.

    > So they're shutting down their consumer memory lines, and devoting all production to AI.

    Okay this was the missing piece for me. I was wondering why AI demand, which should be mostly HBM, would have such an impact on DDR prices, which I’m quite sure are produced on separate lines. I’d appreciate a citation so I could read more.

    • loeg 13 minutes ago

      It's kind of a weird framing. Of course RAM companies are going to sell their limited supply to the highest bidder!

    • txdv an hour ago

      Just like the GPUs.

      NVIDIA started allocating most of their wafer capacity to 50k GPU chips. They are a business; it's a logical choice.

  • philsnow an hour ago

    Red chip supply problems in your factory are usually caused by insufficient plastic bars, which is usually caused by oil production backing up because you're not consuming your heavy oil and/or petroleum fast enough.

    Crack heavy oil to light, and turn excess petroleum into solid fuel. As a further refinement, you can put these latter conversions behind pumps, and use the circuit network to only turn the pumps on when the tank storage of the respective reagent is higher than ~80%.

    hth, glhf

    • wpm 12 minutes ago

      Yup, and stockpiling solid fuel is not a waste because you need it for rocket fuel later on. Just add more chests.

    • dbbr an hour ago

      You don't happen to play Foxhole [0] do you?

      Because if not, the logistics Collies in SOL could make good use of a person with your talents. :-)

      [0] https://store.steampowered.com/app/505460/Foxhole/

  • jl6 2 hours ago

    Can someone explain why OpenAI is buying DDR5 RAM specifically? I thought LLMs typically ran on GPUs with specialised VRAM, not on main system memory. Have they figured out how to scale using regular RAM?

    • fullstop 2 hours ago

      They're not. They are buying wafers / production capacity to make HBM so there is less DDR5 supply.

      • jl6 2 hours ago

        OK, fair enough, but what are OpenAI doing buying production capacity rather than, say, paying NVIDIA to do it? OpenAI aren’t the ones making the hardware?

        • snuxoll an hour ago

          Just because Nvidia happily sells people discrete GPUs, DGX systems, etc., doesn't mean they would turn down a company like OpenAI paying them $$$ for just the packaged chips and the technical documentation to build their own PCBs, or letting OpenAI provide their own DRAM supply for production on an existing line.

          If you have a potentially multi-billion dollar contract, most businesses will do things outside of their standard product offerings to take in that revenue.

          • jl6 an hour ago

            Got it, thank you.

        • fullstop 2 hours ago

          Because they can provide the materials to NVIDIA for production and prevent Google, Anthropic, etc from having them.

        • baq 44 minutes ago

          > OpenAI aren’t the ones making the hardware?

          how surprised would you be if they announced that they are?

    • Thegn 2 hours ago

      They didn't buy DDR5 - they bought raw wafer capacity and a ton of it at that.

  • jsheard 2 hours ago

    I wonder if Apple will budge. The margins on their RAM upgrades were so ludicrous before that they're probably still RAM-profitable even without raising their prices, but do they want to give up those fat margins?

    • rfmc 2 hours ago

      I know contract prices are not set in stone. But if there’s one company that probably has their contract prices set for some time in the future, that company is Apple, so I don’t think they will be giving up their margins anytime soon.

    • Night_Thastus 2 hours ago

      RAM upgrades are such a minor, insignificant part of Apple's income - and play no part in plans for future expansion/stock growth.

      They don't care. They'll pass the cost on to the consumers and not give it a second thought.

    • throw0101d 2 hours ago

      > I wonder if Apple will budge.

      Perhaps I don't understand something so clarification would be helpful:

      I was under the impression that Apple's RAM was on-die, and so baked in during chip manufacturing and not a 'stand alone' SKU that is grafted onto the die. So Apple does not go out to purchase third-party product, but rather self-makes it (via ASML) when the rest of the chip is made (CPU, GPU, I/O controller, etc).

      Is this not the case?

    • txdv 36 minutes ago

      On one hand they are losing profit; on the other hand they are gaining market share. They will probably wait a short while to assess how much profit they are willing to sacrifice for market share.

    • diabllicseagull 2 hours ago

      I'd like to believe that their pricing for RAM upgrades is like that so the base model can hit a low enough price. I don't believe they have the same margin on the base model as on the base model + memory upgrade.

    • suprnurd 2 hours ago

      I read online that Apple uses three different RAM suppliers supposedly? I wonder if Apple has the ability to just make their own RAM?

      • kayson 2 hours ago

        Apple doesn't own any foundries, so no. It's not trivial to spin up a DRAM foundry either. I do wonder if we'll see TSMC enter the market though. Maybe under pressure from Apple or nvidia...

      • FastFT 2 hours ago

        There are no large scale pure play DRAM fabs that I’m aware of, so Apple is (more or less) buying from the same 3 companies as everyone else.

      • umanwizard 2 hours ago

        Apple doesn't own semiconductor fabs. They're not capable of making their own RAM.

    • dcchambers 2 hours ago

      I am fully expecting a 20%+ price bump on new mac hardware next year.

      • Eric_WVGG 2 hours ago

        Not me. It’s wildly unusual for Apple to raise their prices on basically anything… in fact I'm not sure if it's ever happened. *

        It’s been pointed out by others that price is part of Apple's marketing strategy. You can see that in the trash can Mac Pro, which logically should have gotten cheaper over the ridiculous six years it was on sale with near-unchanged specs. But the marketing message was, "we're selling a $3000 computer."

        Those fat margins leave them with a nice buffer. Competing products will get more expensive; Apple's will sit still and look even better by comparison.

        We are fortunate that Apple picked last year to make 16gb the new floor, though! And I don't think we're going to see base SSDs get any more generous for a very, very long time.

        * okay I do remember that Macbook Airs could be had for $999 for a few years, that disappeared for a while, then came back

    • loloquwowndueo 2 hours ago

      It’s 4D chess my dude, they were just training people to accept those super high ram prices. They saw this coming I tell you!

  • tverbeure an hour ago

    I upgraded a new $330 HP laptop (it flexes like cardboard) from 8GB to 32GB in May. Cost back then: $44. Today, the same kit costs a ridiculous $180.

    https://tomverbeure.github.io/2025/03/12/HP-Laptop-17-RAM-Up...

  • zoobab 2 hours ago

    "dig into that pile of old projects you never finished instead of buying something new this year."

    You don't need a new PC. Just use the old one.

    • kevin_thibedeau 2 hours ago

      I just bought some 30pin SIMMs to rehab an old computer. That market is fine.

      • fullstop 2 hours ago

        I have a bag of SIMMs that I saved, no idea why, because I clearly wrote BAD on the mylar bag.

        At the time I was messing around with the "badram" patch for Linux.

  • stevenjgarner 2 hours ago

    My understanding is that this is primarily hitting DDR5 RAM (or better). With prices so inflated, is there an argument to revert and downgrade systems to DDR4 RAM technology in many use cases (which is not so inflated)?

    • cptnapalm 2 hours ago

      DDR4 shot up too. It was bad enough that instead of trying to put together a system with the AM4 motherboard I already have, I just bought a Legion Go S.

    • geerlingguy 2 hours ago

      As linked in the article, DDR4 and LPDDR4 are also 2-4x more expensive now, forcing smaller manufacturers to raise prices or cancel some products entirely.

    • tencentshill 2 hours ago

      It will be hit just as hard; they have stopped new DDR4 production to focus on DDR5 and HBM.

    • Numerlor 2 hours ago

      DDR4 manufacturing is mostly shut down, so if any real demand picks up there, the prices will shoot up

    • gizmo 2 hours ago

      No, DDR4 is affected too. It's a simple question of supply and demand, and the biggest memory manufacturers are all winding down their DDR4/DDR5 memory production for consumers (they still make some DDR5 for OEMs and servers).

    • segmondy 2 hours ago

      DDR4 prices have gone up 4x in the last 3 months.

  • levkk an hour ago

    Not a bad time for the secondary market to be created. We keep buying everything new, when the old stuff works just as well. There is a ton of e-waste. The enthusiast market can benefit, while the enterprise market can just eat the cost.

    Also, a great incentive to start writing efficient software. Does Chrome really need 5GB to run a few tabs?

  • almosthere 37 minutes ago

    I think I paid like $500 ($1300 today) in 1989ish to upgrade from 2MB to 5MB of ram (had to remove 1MB to add 4MB)

  • internet2000 25 minutes ago

    Hopefully Apple uses their volume as leverage to avoid getting affected by this for as long as possible. I can ride it out if they manage to.

  • notatoad an hour ago

    Anybody care to speculate on how long this is likely to last? Is this a blip that will resolve itself in six months, or is this demand sustainable, so that we are talking years of building new manufacturing facilities to meet it?

    • boxedemp 31 minutes ago

      Pure speculation, nobody can say for sure, but my guess is 2-3 years.

  • hathawsh 2 hours ago

    The article suggests that because the power and cooling are customized, it would take a ton of effort to run the new AI servers in a home environment, but I'm skeptical of that. Home-level power and cooling are not difficult these days. I think when the next generation of AI hardware comes out (in 3-5 years), there will be a large supply of used AI hardware that we'll probably be able to repurpose. Maybe we'll sell them as parts. It won't be plug-and-play at first, but companies will spring up to figure it out.

    If not, what would these AI companies do with the huge supply of hardware they're going to want to get rid of? I think a secondary market is sure to appear.

    • dist-epoch 33 minutes ago

      A single server is 20 kW. A rack is 200 kW.

      These are not the old CPU servers of yesterday.

  • SchwKatze 2 hours ago

    I just gathered enough money to build my new PC. I'll even go to another country to pay less in taxes, and this spike hit me hard. I'll buy anyway because I don't believe it will slow down soon. But yeah, for me it's a lot of money.

    • rolandog 2 hours ago

      Me too. I had saved up to build a small server. I guess I'll have to wait until 2027–2028 at this rate.

      • dylan604 2 hours ago

        Buy used gear and rip out the guts???

  • asdfman123 2 hours ago

    I can't help but take the pessimist angle. RAM production will need to increase to supply AI data centers. When the AI bubble bursts (and I do believe it will), the whole computing supply chain, which has been built around it, will take a huge hit too. Excess production capacity.

    Wonder what would happen if it really takes a dive. The impact on the SF tech scene will be brutal. Maybe I'll go escape on a sailboat for 3 years or something.

    Anyway, tangential, but something I think about occasionally.

    • XorNot an hour ago

      Prices are high because no one believes it's not a bubble. Nvidia's strategy has been to be careful with volume this whole time as well.

      The thing is, it's also not a conventional-looking bubble: what we're seeing here is cashed-up companies ploughing money into the only thing in their core business they could find to pour it into, rather than a lot of overexuberant public trading and debt financing.

  • f055 an hour ago

    The bullwhip effect on this will be funny. At least we are in for some cheap RAM in, like… a dozen months or so.

    • Forgeties79 21 minutes ago

      I’m really hoping this doesn’t become another GPU situation where every year we think prices are going to come down and they just stay the same or get worse.

  • phkahler an hour ago

    Called it! About a year ago (or more?) I thought Nvidia was overpriced, and that if AI was coming to PCs, RAM would be important and it might be good to invest in DRAM makers. As usual I didn't do anything with my insight, and here we are. Micron has more than doubled since summer.

  • AceJohnny2 2 hours ago

    > And those companies all realized they can make billions more dollars making RAM just for AI datacenter products, and neglect the rest of the market.

    I wouldn't ascribe that much intent. More simply, datacenter builders have bought up the entire supply (and likely future production for some time), hence the supply shortfall.

    This is a very simple supply-and-demand situation, nothing nefarious about it.

    • teamonkey an hour ago

      That makes it sound like they are powerless, which is not the case. They don’t have to have their capacity fully bought out; they could choose to keep a proportion of capacity for maintaining the existing PC market, which they would do if they thought it would benefit them in the long term.

      They’re not doing that, because it benefits them not to.

      • baq 12 minutes ago

        $20B, 5 years and you can have your own DDR5 fab to print money with.

        Jokes aside, if the AI demand actually materializes, somebody will look at the above calculation and say 'we're doing it in 12 months' with a completely straight face - incumbents' margin will be the upstart's opportunity.

  • JKCalhoun 2 hours ago

    > maybe it's a good time to dig into that pile of old projects you never finished instead of buying something new this year.

    Always good advice.

  • topaz0 2 hours ago

    I wonder how much of that RAM is sitting in GPUs in warehouses waiting for datacenters to be built or powered?

  • gizmo 2 hours ago

    The big 3 memory manufacturers (SK Hynix, Samsung, Micron) are essentially all moving upmarket. They have limited capacity and want to use it for high-margin HBM for GPUs and DDR5 for servers. At the same time CXMT, Winbond and Nanya are stepping in at the lower end of the market.

    I don't think there is a conspiracy or price fixing going on here. Demand for high-profit-margin memory is insatiable (at least until 2027, maybe beyond), and by the time extra capacity comes online and the memory crunch eases, the minor memory players will have captured such a large part of the legacy/consumer market that it makes little sense for the big 3 to get involved anymore.

    Add to that scars from overbuilding capacity during previous memory super cycles and you end up with this perfect storm.

  • mikelitoris 2 hours ago

    Companies are adamant about RAMming AI down our throats, it seems.

  • walterbell 2 hours ago

    If the OpenAI Hodling Company buys and warehouses 40% of global memory production or 900,000 memory wafers (i.e. not yet turned into DDR5/DDR6 DIMMs) per month at price X in October 2025, leading to supply shortages and tripling of price, they have the option of later un-holding the warehoused memory wafers for a profit.

    https://news.ycombinator.com/item?id=46142100#46143535

      Had Samsung known SK Hynix was about to commit a similar chunk of supply — or vice-versa — the pricing and terms would have likely been different. It’s entirely conceivable they wouldn’t have both agreed to supply such a substantial part of global supply if they had known more...but at the end of the day - OpenAI did succeed in keeping the circles tight, locking down the NDAs, and leveraging the fact that these companies assumed the other wasn’t giving up this much wafer volume simultaneously…in order to make a surgical strike on the global RAM supply chain..
    
    What's the economic value per warehoused and insured cubic inch of 900,000 memory wafers? Grok response:

    > As of late 2025, 900,000 finished 300 mm 3D NAND memory wafers (typical high-volume inventory for a major memory maker) are worth roughly $9 billion and occupy about 104–105 million cubic inches when properly warehoused in FOUPs. → Economic value ≈ $85–90 per warehoused cubic inch.
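
    (Sanity-checking the division behind that number, using the same rough estimates:)

      wafers_value_usd = 9e9           # ~$9B for 900,000 finished wafers (estimate)
      warehoused_volume_in3 = 104e6    # ~104-105 million cubic inches in FOUPs (estimate)
      print(wafers_value_usd / warehoused_volume_in3)   # ~$87 per warehoused cubic inch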

    • petre 2 hours ago

      Sounds like Silver Thursday all over again. I hope OpenAI ends up like the Hunt Brothers.

      https://en.wikipedia.org/wiki/Silver_Thursday

      • enlyth 21 minutes ago

        > To save the situation, a consortium of US banks provided a $1.1 billion line of credit to the brothers which allowed them to pay Bache which, in turn, survived the ordeal.

        It seems once you amass a certain amount of wealth, you just get automatically bailed out from your mistakes

  • otherayden 2 hours ago

    I think we kiss-of-death'd the article, haha. Here's an archive: https://archive.is/6QD8c

  • hyperhello 2 hours ago

    Is this a shortage of every type of RAM simultaneously?

    • jsheard 2 hours ago

      Every type of DRAM is ultimately made at the same fabs, so if one type is suddenly in high demand then the supply of everything else is going to suffer.

      • bee_rider 2 hours ago

        Wait, really? For CPUs each generation needs basically a whole new fab, I thought… are they more able to incrementally upgrade RAM fabs somehow?

        • selectodude 2 hours ago

          The old equipment is mothballed because China is the only buyer, and nobody wants to do anything that the Trump admin will at some point decide is tariff-worthy. So it all sits.

    • internetter 2 hours ago

      Essentially yes; not necessarily equally, but every type has increased substantially.

  • mindslight an hour ago

    > But I've already put off some projects I was gonna do for 2026, and I'm sure I'm not the only one.

    Let's be honest here - the projects I'm going to do in 2026, I bought the parts for those back in 2024. But this is definitely going to make me put off some projects that I might have finally gotten around to in 2028.

  • roadside_picnic 2 hours ago

    I know this is mostly paranoid thinking on my part, but it almost feels like this is a conscious effort to attempt to destroy "personal" computing.

    I've been a huge advocate for local, open, generative AI as the best resistance to massive take-over by large corporations controlling all of this content creation. But even as it is (or "was" I should say), running decent models at home is prohibitively expensive for most people.

    Micron has already decided to just eliminate the Crucial brand (as mentioned in the post). It feels like if this continues, once our nice home PCs start to break, we won't be able to repair them.

    The extreme version of this is that even dumb terminals (which still require some ram) will be as expensive as laptops today. In this world, our entire computing experience is connecting a dumb terminal to a ChatGPT interface where the only way we can interact with anything is through "agents" and prompts.

    In this world, OpenAI is not overvalued, and there is no bubble because the large LLM companies become computing.

    But again, I think this is mostly a dystopian sci-fi fiction... but it does sit a bit too close to the realm of possible for my tastes.

    • clusterhacks 30 minutes ago

      I share your paranoia.

      My kids use personal computing devices for school, but their primary platform (just like their friends) is locked-down phones. Combining that usage pattern with business incentives to lock users into walled gardens, I kind of worry we are backing into the destruction of personal computing.

    • SimianSci 2 hours ago

      Wouldn't the easy answer to this be increased efficiency of RAM usage?

      RAM being plentiful and cheap led to a lot of software development being very RAM-unaware, allowing the inefficiencies of programs to be mostly hidden from the user. If RAM prices continue rising, the semi-apocalyptic consumer fiction you've spun here would require that developers not change their behaviors when it comes to the software they write. There will be an equilibrium in the market that still allows for consumer PCs; it will just mean the devices people buy have less RAM than is typical. Demand will eventually match the change in supply, as is typical of supply/demand issues, and prices will not continuously rise toward an infinite horizon.

    • kalterdev an hour ago

      I believe that while centralized computing excels at specific tasks like consumer storage, it cannot compete with the unmatched diversity and unique intrinsic benefits of personal computing. Kindle cannot replace all e-readers. Even Apple’s closed ecosystem cannot permit it to replace macOS with iPadOS. These are not preferences but constraints of reality.

      The goal shouldn’t be to eliminate one side or the other, but to bridge the gap separating them. Let vscode.dev handle the most common cases, but preserve vscode.exe for the uncommon yet critical ones.

    • ben_w 2 hours ago

      The first "proper" "modern" computer I had initially came with 8 megabytes of RAM.

      It's not a lot, but it's enough for a dumb terminal.

      • rolandog 2 hours ago

        That's not disproving OP's comment; OpenAI is, in my opinion, making it untenable for a regular Joe to build a PC capable of running a local LLM model. It's an attack on all our wallets.

        • petre 2 hours ago

          Why do you need an LLM running locally so badly that the inflated RAM prices are an attack on your wallet? One can always opt not to play this losing game.

          I remember when the crypto miners rented a plane to deliver their precious GPUs.

    • dangus 2 hours ago

      It’s not a conspiracy, it’s just typical dumb short-term business decisions amplified and enabled by a cartel supply market.

      If Crucial screws up by closing their consumer business they won’t feel any pain from it because the idea of new competitors entering the space is basically impossible.

    • plufz 2 hours ago

      I don’t think you need a conspiracy theory to explain this. This is simply capitalism, a system that seems less and less like the way forward. I’m not against markets, but I believe most countries need more regulations targeted at the biggest companies and richest people. We need stronger welfare states, smaller income gaps and more democracy. But most countries seem to vote in the absolute opposite direction.

      • BizarroLand an hour ago

        The end goal of capitalism is the same as the end goal of Monopoly.

        1 person has all the money and all the power and everyone else is bankrupt forever and sad.

  • fithisux 2 hours ago

    32GB should be more than enough.

    You can go 16GB if you go native and throw some assembly in the mix. Use old school scripting languages. Debloat browsers.

    It has been long delayed.

    • Night_Thastus 2 hours ago

      16GB is more than fine if you're not doing high-end gaming, or heavy production workloads. No need for debloating.

      But it doesn't matter either way, because both 16 and 32GB have what, doubled, tripled? It's nuts. Even if you say "just buy less memory", now is a horrible time to be building a system.

      • Aardwolf 2 hours ago

        I found web browser tabs eating too much memory when you only have 16GB

        • prmoustache an hour ago

          Use an adblocker, stop visiting nasty websites, and open fewer tabs. Problem is easily solved.

        • Night_Thastus an hour ago

          That's strange. How many tabs are we talking? Are they running something intense or just ordinary webpages?

    • felixfurtak 2 hours ago

      “640K ought to be enough for anybody.” - Bill Gates

      • shawndumas 2 hours ago

        there's no reliable evidence he ever uttered that phrase

    • harvey9 2 hours ago

      Hey, hey, 16K. What does that get you today?

      https://m.youtube.com/watch?v=IagZIM9MtLo

    • franciscojs 2 hours ago

      I bought a motherboard to build a DIY NAS... it takes DDR5 SO-DIMM RAM, and just 16GB costs more than double the motherboard (which includes an Intel processor).

    • khannn 2 hours ago

      16GB is more than enough on Linux, but Win11 eats resources like crazy

    • Forgeties79 an hour ago

      Sure, but 32GB of DDR5 RAM has just jumped from ~$100 to $300+ in a flash. The 2x16GB kit in my recent build went from $105 for the pair to $250 each: $500 total!

      SSDs are also up. Hell, I am seeing refurbished enterprise HDDs at 2x right now. It’s sharp increases basically across the board, except for CPUs/GPUs.

      Every PC build basically just went up $400-$600, and that’s not accounting for the impact of inflation over the last few years weakening everyone’s wallets. The $1600 machine I spec’d out for my buddy 5 weeks ago, planning to buy parts this Black Friday, now runs $2k even.

    • anthk an hour ago

      I'm using 1GB with TWM, Dillo, TUI tools, XTerm, MuPDF and the like. Most of the tools I use, from https://t3x.org, https://luxferre.top and https://howerj.github.io/subleq.htm with EForth, are small (and I try to use cparser instead of clang), so my requirements are really tiny.

      You can achieve a lot by learning Klong and reading an intro to statistics. And xargs to parallelize stuff, as in the sketch below. Oh, and vidir to edit directories at crazy speeds with any editor, even nano or gedit if you like them.
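
      For the xargs part, here's a minimal Python sketch of the same fan-out idea; the *.txt file list and the per-file word count are just hypothetical placeholders, roughly what "ls *.txt | xargs -P 4" would hand to a worker:

        # Fan work out across processes, similar in spirit to xargs -P 4.
        from concurrent.futures import ProcessPoolExecutor
        from pathlib import Path

        def word_count(path: Path) -> int:
            # Placeholder work: count the words in one file.
            return len(path.read_text(errors="ignore").split())

        if __name__ == "__main__":
            files = list(Path(".").glob("*.txt"))           # the "argument list"
            with ProcessPoolExecutor(max_workers=4) as ex:  # like xargs -P 4
                for path, count in zip(files, ex.map(word_count, files)):
                    print(f"{count}\t{path}")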

  • phendrenad2 2 hours ago

    I'm way ahead of all of you, I'm hoarding DDR2.

  • leeoniya an hour ago

    1. Buy up 40% of global fab capacity

    2. Resell wafers at huge markup to competitors

    3. Profit

  • bibimsz 2 hours ago

    time to stop using python boys, it's zig from here on out

  • submeta 2 hours ago

    Very happy to have bought an MBP M4 with 64 GB of RAM last year.

    • wslh 2 hours ago

      And sharing the RAM between your CPU/GPU/NPU instead of using separate memories.

  • ok_dad 2 hours ago

    I am very excited for a few years from now, when the bubble bursts and all this hardware is on the market for cheap, like back in the early-to-mid 2000s after that bubble burst and you had tons of old servers available for homelabs. I can't wait to fill a room with 50kW of bulk GPUs on a pallet and run some cool shit.

  • QuadrupleA an hour ago

    Only a matter of time before supply catches up and then likely overshoots (maybe combined with AI / datacenter bubble popping), and RAM becomes dirt cheap. Sucks for those who need it now though.

  • christkv 2 hours ago

    I grabbed a Framework Desktop with 128GB because of this. I can't imagine they can keep the price down for the next batches. If you bought 128GB of RAM with specs "close" to what it uses, that alone would be 1200 EUR at retail (retailers are obviously taking advantage).

  • ogogmad 2 hours ago

    Every shortage is followed by a glut. Wait and see: RAM prices will go way down. This will happen because RAM makers are racing to produce units to reap profits from the higher prices. That overproduction will cause prices to crash.

    • barnas2 27 minutes ago

      They aren't overproducing consumer modules, they're actively cutting production of those. They're producing datacenter/AI specific form factors that won't be compatible with consumer hardware.

      • baq 10 minutes ago

        somebody will step up to pick up the free money if this continues.

  • carlCarlCarlCar 2 hours ago

    Pricing compute out of the average person's budget to prop up investment in data centers and stocks, and ultimately to control agency

    If RTX 5000-series prices had topped out at historical levels, no one would need hosted AI

    Then it came to be that models were on a path to run well enough loaded into RAM... uh oh

    This is in line with ISPs long ago banning running personal services and the long held desire to sell dumb thin clients that must work with a central service

    Web developers fell for the confidence games of their elders hook, line, and sinker. Nothing but the insane ego and vanity of some tech oligarchs is driving this. They cannot appear weak. Vain aura farming, projection of strength.

  • citizenpaul 2 hours ago

    > Micron's killing the Crucial brand of RAM and storage devices completely,

    More rot economy. Customers are such a drag. Let's just sell to other companies in billion-dollar deals all at once. These AI companies have bottomless wallets. No one has thought of this before; we will totally get rich.

  • haunter 2 hours ago

    panem et circenses (bread and circuses)

    But what will happen when people are priced out of the circus?

  • aynyc 2 hours ago

    Ha! Maybe JavaScript developers will finally reduce memory usage! You need to display the multiplication table? Please allocate 1GB of RAM. Oh, you want alternating row coloring? Here is another 100MB of CSS to do that.

    edit: this is a joke

    • phantasmish an hour ago

      I do sometimes reflect on how 64MB of memory was enough to browse the Web with two or three tabs open, and (if running BeOS) even play MP3s at the same time with no stutters. 128MB felt luxurious at that time, it was like having no (memory-imposed) limits on personal computing tasks at all.

      Now you can't even fit a browser doing nothing into that memory...

      • anthk an hour ago

        HN works under Dillo and you don't need JS at all. If some site needs JS, don't waste your time. Use mpv+yt-dlp where possible.

    • baal80spam 40 minutes ago

      Haha, the number of downvotes on your very true comment just proves how many web developers there are on HN.

  • mschuster91 2 hours ago

    > The reason for all this, of course, is AI datacenter buildouts. I have no clue if there's any price fixing going on like there was a few decades ago—that's something conspiracy theorists can debate—but the problem is there's only a few companies producing all the world's memory supplies.

    So it's the Bitcoin craze all over again. Sigh. The bubble will eventually collapse, it has to - but the markets can stay irrational longer than you can stay solvent... or, to use a more appropriate comparison, have a working computer.

    As for myself? I hope that once this bubble collapses, we see actual punishments again. Too-large-to-fail companies broken up, people prosecuted for the wash trading masquerading as "legitimate investments" throughout the bubble (which looks more like the family tree of the infamously incestuous Habsburgs), greedy executives jailed or, at least where national security is impacted by chip shortages, permanently removed. I'm sick and tired of large companies being able to just get away with gobbling up everything and killing off the economy at large. They are not just parasites; they are a cancer killing its host society.

  • altmanaltman 2 hours ago

    I mean what's the big deal, can't we just download more ram

  • comeonbro 2 hours ago

    This is ultimately the first stage of human economic obsolescence and extinction.

    This https://cdna.pcpartpicker.com/static/forever/images/trends/2... will happen to every class of thing (once it hits energy, everything is downstream of energy).

    • benlivengood 2 hours ago

      If your argument is that the value produced per CPU will increase so significantly that the value produced by AGI/ASI per unit cost exceeds what humans can produce for their upkeep in food and shelter, then yes, that seems to be one of the significant long-term risks if governments don't intervene.

      If the argument is that prices will skyrocket simply because of long-term AI demand, I think that ignores the fact that manufacturing vastly more product will stabilize prices, at least until raw materials start to become significantly more expensive, and that doing so is strongly incentivized for IC manufacturers over a ~10-year timeframe.

      • kace91 an hour ago

        >the value produced by AGI/ASI per unit cost exceeds what humans can produce for their upkeep in food and shelter

        The value of AGI/ASI is not defined only by its practical use; it is also bounded by the purchasing power of potential consumers.

        If humans aren’t worth paying, those humans won’t be paying anyone either. No business can function without customers, no matter how good the product.

        • benlivengood an hour ago

          Precisely the place where government intervention is required to distribute wealth equitably.

    • captainkrtek 2 hours ago

      I'm no economist, but if (when?) the AI bubble bursts and demand collapses at the price point memory and other related components are at, wouldn't prices recover?

      not trying to argue, just curious.

      • squidbeak 2 hours ago

        I'm no economist either, but I imagine the manufacturing processes for the two types of RAM are too different for supply to quickly bounce back.

      • reducesuffering 2 hours ago

        If a theoretical AI bubble bursts, sure. However, the most highly capitalized companies in the world and all the smartest people able to do cutting-edge AI research are betting otherwise. This is also what the start of a takeoff looks like.

    • kalterdev 2 hours ago

      Why should we believe in another apocalypse prediction?

      • IAmBroom 2 hours ago

        One of them has to be right, eventually!

      • gooseus 2 hours ago

        Because the collapse of complex societies is real - https://github.com/danielmkarlsson/library/blob/master/Josep...

        Unbounded increases in complexity lead to diminishing returns on energy investment and greater system fragility, both of which raise the likelihood of collapse: solutions to old problems generate new problems faster than new solutions can be created, because the energy that should go toward new solutions is needed to maintain the layers of complexity left behind by previous solutions.