Micron rolls out 276-layer SSD trio for speed, scale, and stability

(blocksandfiles.com)

113 points | by rbanffy 4 days ago

79 comments

  • ggm 15 hours ago

    I purchased 2TB ssd for a home nas. I watched prices for 18 months. They didn't move an inch.

    Prices for HDD do drop when the TB available rises but there seems to be a "floor" price.

    For SSD, there definitely appears to be a floor price.

    I am pretty convinced this is not cost btw. This is classic cost/price disjoint stuff.

    The price is tracking people's willingness to pay.

    • adastra22 14 hours ago

      Also the capacities for SSDs have barely budged. Many years ago I went all-out and equipped my PC with a 4TB SSD. Just last week I went SSD shopping for the first time in ages.. and 4TB was the largest drive anyone had. It's a few generations later and the new NVME/PCIe standards mean faster, lower-latency drives. But where are the 8TB, 12TB, or 24TB drives?

      • radicality 14 hours ago

        They exist but they're definitely more painful to use in a home setup since their formats are more server oriented, like U.2/U.3 or E1.S and a few others. And of course the prices - you can get 30 or 60 TB (and even more), but it’s gonna be >$4k.

        And versus normal M.2 drives, the larger server-grade ones are more annoying. For example, I recently got a 15.36TB Kioxia CD6-R in U.3 format for $1.3k, which is not bad for SSD prices, after getting the right adapters and fitting it inside a Minisforum MS-01. It’s working fine, but it immediately reached its “critical” temperature (while doing nothing), so I had to attach a big fan to cool it. All the larger SSDs which are meant for server rooms will expect lots of cooling.

        https://www.serversupply.com/SSD/PCI-E4.0/15.36TB/KIOXIA/KCD...

        • consp 11 hours ago

          You can also look up the rated idle power in the datasheets to know beforehand. Anything over ~5W needs cooling, and usually they are 10+W (at least the ten or so I have here at home). Some of my disks have a low power mode (still 7W) which you can enable with vendor tooling or the correct SAS command, but YMMV. Check the datasheet; it almost always states max 45°C ambient and continuous airflow required.

        • adastra22 13 hours ago

          Why not get 4x 4TB drives and stripe?

          • cm2187 2 hours ago

            RAID 0, ok (but that's a bit dangerous). As soon as you add some redundancy, you pay a massive price either in terms of capacity or in terms of performance, in pretty much any implementation you look at (ZFS, MD, Windows Storage Spaces, etc). Larger SSDs mean fewer components which can fail, and less power consumption per TB.
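
To put rough numbers on the capacity penalty described above, a minimal sketch (the RAID formulas are the standard textbook ones, not from the thread):

```python
# Usable capacity left after redundancy, for 4x 4TB drives.
def usable_tb(n_drives, drive_tb, level):
    """Usable capacity in TB for a few common RAID levels."""
    if level == "raid0":   # pure striping, no redundancy
        return n_drives * drive_tb
    if level == "raid10":  # striped mirrors: half the raw capacity
        return n_drives * drive_tb / 2
    if level == "raid5":   # one drive's worth of parity
        return (n_drives - 1) * drive_tb
    if level == "raid6":   # two drives' worth of parity
        return (n_drives - 2) * drive_tb
    raise ValueError(level)

for level in ("raid0", "raid10", "raid5", "raid6"):
    print(level, usable_tb(4, 4, level), "TB usable")
```

So 4x 4TB with any redundancy yields at most 12TB usable, versus a single large drive that carries no such penalty.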

          • LtdJorge 13 hours ago

            MS-01 doesn't have enough space

          • bbarnett 2 hours ago

            I agree this would work, and yes backups are important, but you're now at 1/4 the per-drive reliability, plus whatever reliability issues the RAID itself adds.

            The upthread poster may be able to handle that, but I like a quiet night in, not an unreliable wildcat.

      • justinclift 11 hours ago

        The capacities for SSDs have barely budged in the consumer space, but they're going upwards a lot for enterprise and 2nd hand enterprise drives.

        ie: https://www.ebay.com/itm/186355513308

        There are heaps on Ebay, but you'll be paying $$ for the larger sized drives.

      • Dylan16807 14 hours ago

        There's a couple 8TB options, but with the twin constraints of M.2 size and price nobody is really bothering to go bigger for consumer parts.

        And the technologies for fast connections to 2.5" drives keep failing to get a foothold in consumer products.

        • crote 8 hours ago

          > And the technologies for fast connections to 2.5" drives keep failing to get a foothold in consumer products.

          Not surprising, considering the vast majority of consumers don't need more storage than you can easily fit in an M.2 form factor. Why mess around with expensive enterprise U.2 drives when it gives you zero benefit and gains you the hassle of dealing with extra wires and finding space for an ugly bulky rectangle?

          Once you get to storage sizes where 2.5" becomes a need, you are well beyond the consumer price range. Very few people are willing to spend more on a single SSD than on the rest of their gaming computer combined.

          • mort96 4 hours ago

            With individual games being hundreds of gigabytes and growing, I'm not convinced that you're right.

      • mananaysiempre 9 hours ago

        There are 8TB drives, and (ignoring QLC ones) 18 months ago you’d have paid over $1200 for either of the two models available (Sabrent Rocket 4 Plus or Inland Performance Plus). Then WD released an 8TB version of their SN850X at an initial price of $850. These days I regularly see it for $650 or below (Amazon actually has it for $600 right this moment), which is a positive but tolerable premium over buying 2×4TB of the same (admittedly high-end) model. The other two options have also come down in price, though the Sabrent still hovers just below a thousand.

        • justinclift 5 hours ago

          Be extremely careful of buying WD (NVMe) SSDs if you're intending on using them with Linux: https://github.com/openzfs/zfs/discussions/14793

          Many _serious_ reports of problems there, across many models and firmware revisions. It's an ongoing problem, and WD is ignoring it entirely.

          • mananaysiempre 4 hours ago

            Thanks! I see that the people reporting problems with the (DRAMless) SN770 there also report no problems with the SN850X, but that might not be indicative either way—AFAIU the 8TB SN850X has somewhat different internals compared to its lower-capacity brethren. (It does put Framework’s decision to ship SN770—now SN7100—SSDs in their laptops as standard in an unfavourable light.)

          • jeffbee 4 hours ago

            It's just a brand. The SN770 shares nothing technical with, for example, the SN8100.

            • justinclift 4 hours ago

              Well, "the brand" is ignoring the problems across several SSD models.

              • wtallis 4 hours ago

                The reports in that thread you linked to are clustered around several models that do have substantial shared technology: a DRAMless design using an in-house SanDisk controller, and difficulties arise with the NVMe Host Memory Buffer feature that is only applicable to DRAMless drives. But the high-end SanDisk drives aren't DRAMless, and the SN8100 in particular doesn't even use an in-house SanDisk controller.

      • mschuster91 3 hours ago

        > But where are the 8TB, 12TB, or 24TB drives?

        Kioxia has them for servers [1]. You'll pay for that privilege though: 12.8 TB will set you back a healthy 1.600 € [2] and 30 TB 4.100 € [3].

        [1] https://europe.kioxia.com/de-de/business/ssd/enterprise-ssd....

        [2] https://www.primeline-solutions.com/de/kioxia-12-8-tb-cd8-v-...

        [3] https://www.primeline-solutions.com/de/kioxia-30-72-tb-cm7-r...

      • Culonavirus 11 hours ago

        I mean, isn't it obvious? There is no demand for massive SSDs. The current "average consumer" capacity has stabilized at 1-2TB for a system drive. It's also what you buy for your PS5s etc. There are just not that many (popular) use cases for a larger SSD. Even with gaming, which is probably the most widespread use case for "fast and big" drives, people found out that there are only so many AAA 150GB behemoths that they can play at the same time...

        • mjevans 4 hours ago

          If consumers were offered the same space for half the cost or same cost for double the space, I find it likely at least 30% would pick each of those bins.

          It's not that people don't want larger drives, it really is that this is what the market is willing to bear and there is NOT sufficient competition to keep prices low.

        • Uvix 6 hours ago

          I think you're confusing cause and effect - 1-2 TB is the most common for PS5s because it's still where the price-per-GB sweet spot is. I bought a 2 TB because it was the cheapest per GB at the time, figuring I'd come out ahead financially even if I replaced it with a 4 TB in 2-3 years.

          Alas, prices have not come down like I expected. And sure, there's only so many I can play at a time, but I also don't want to have to wait through a reinstall each time I change it up.

        • lostlogin 2 hours ago

          > There is no demand for massive SSDs.

          Then why can I buy 22TB spinning disks so easily?

          I’d love to go solid state, but the cost would be phenomenal.

          Judging by comments here, the server options also require a lot of cooling, and keeping the noise down is the point for me (along with keeping heat + power down, and speed).

        • bravesoul2 10 hours ago

          photo and video use case?

          • zozbot234 10 hours ago

            As an average consumer/data hoarder, why not store those on cheap spinning HDD's? They can stream the data fast enough, and modern NAS hardware gives you easy RAID too. You don't really need super-quick random access to multiple TB's of data, so HDD storage is probably good enough.

            • daymanstep 5 hours ago

              An HDD is really not fast enough if you're going to be doing a RAID rebuild with an 18TB drive.

              • leptons 10 minutes ago

                I use my HDD RAID storage as "a buffer" for my LTO Tape drive. Everything gets backed-up to tape. Most of the data doesn't need to be online 24/7/365. So I can use smaller RAIDs, and I pay about $3/TB for LTO5 tape storage. I use the cheapest 2TB, 4TB, and 6TB drives I can get in the RAIDs, and they've been fine. RAID10 and the LTO tape backups give me some peace of mind.

                RAID rebuilds are extremely infrequent. I think I've had to do it twice in about 10 years, so "fast enough" isn't really a problem. I also use RAID 10, which can have up to 2 drive failures, so the rebuild being lightning fast isn't all that necessary. And I have 3 RAID 10 setups, one is used as "warm storage", so the important and frequently used data is online in more than one RAID.

                I did have an LSI RAID card go bad once, but I put in the spare and the entire RAID just showed up without any config or any data loss. It was magical.

              • lostlogin 2 hours ago

                I’m running several that large or larger in a Synology.

                Adding a drive takes a week. Replacing a drive is slightly quicker, but rebuild times are no joke.

              • GauntletWizard 3 hours ago

                It's plenty fast enough for my N+2 ZFS Raid, but I'll admit that that time is still measured in days.

    • Dylan16807 14 hours ago

      SSD prices cratered in 2023, then shot way back up, and have barely dropped since then.

      Baseline name brand SSDs got down to about $75 for 2TB, and I'm not going to be impressed by anything until I see similar numbers again.

      • porphyra 6 hours ago

        I got an Intel 660P 2TB for $55 haha. Should have stocked up on bigger SSDs ugh.

    • toast0 2 hours ago

      > Prices for HDD do drop when the TB available rises but there seems to be a "floor" price.

      For SSD, there definitely appears to be a floor price.

      The floor price for new hard drives is somewhere around $50-$80. When a drive at a certain capacity is only sellable for $50, most of the options disappear. Looking at newegg, you can get a 1TB drive for $60, 2TB for $65, and 4TB for $75 ... The price floor is around there. But if you want the best $/TB, you've got to get larger drives.

      For SSDs, the floor is a lot lower. You can get a 128GB SSD for $22, 256GB for $25, 512GB for $33, 1TB for $50, and after that $/TB stays pretty consistent; there's not a lot of reduction in cost due to density like with hard drives.
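
The floor effect in those price points is easy to see as $/TB; a quick sketch using only the figures quoted in the comment above:

```python
# $/TB at the price points quoted above: small drives cost far more per
# TB because the fixed floor (controller, PCB, packaging) dominates.
hdd = {1: 60, 2: 65, 4: 75}                     # TB -> $ (from the comment)
ssd = {0.128: 22, 0.256: 25, 0.512: 33, 1: 50}  # TB -> $

for name, prices in (("HDD", hdd), ("SSD", ssd)):
    for tb, dollars in prices.items():
        print(f"{name} {tb} TB: ${dollars / tb:.2f}/TB")
```

The smallest SSD works out to over $170/TB while the 1TB model is $50/TB, which is the fixed-floor effect the comment describes.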

    • abdullahkhalids 14 hours ago

      Why do you think the price of SSDs should be close to cost?

      As there are many consumer level producers (who all buy from a smaller set of actual producers), it may seem like the market is close to perfect competition (which would justify price=cost).

      But actually there are many low quality producers that frequently burn those stupid enough to buy from them. And a few high quality producers who generally sell what they advertise. So if you are trying to buy a high quality SSD, you are buying from an oligopoly that have built their brands over the years. So they can charge significantly higher than cost due to this reason.

      And I imagine that others can't drop their prices much lower than this price because then people get suspicious and don't buy it at all.

      • ggm 13 hours ago

        > Why do you think the price of SSDs should be close to cost?

        Not close, but closeER and at least some evidence of tracking. That's what I'd expect if they were ubiquitous.

        Some goods track commodity prices closely. Shrinkflation happens when you can't easily alter the unit price; chocolate bars are a good example. Not that we pay anything like as little as the producers get: there's enough competition that, putting Valrhona to one side, chocolate prices reflect commodity prices. Same with fuel. Same with RoHS compliant resistors. SMD components. PCBs. Batteries, LED light bulbs. DDR memory.

        Not SSD. There is no reason it must, this isn't the laws of physics. I just observe it doesn't. If there was more visible competition in supply of inputs, they MIGHT. But it looks like at best a duopoly or tri-opoly of inputs, and prices reflect demand a lot more. Supply isn't even close to demand, there's no surplus.

        Those other things are somewhat peripheral. Not saying they don't have a role, but I don't think it's fundamental. I bought 6 "patriot" P210 and they get average to poor reviews for speed and reliability.

        • lazide 13 hours ago

          Eh, probably RAM’esque price fixing. Not that anyone is going to look too closely with all the geopolitical fuckery right now.

          • mananaysiempre 8 hours ago

            There have been reports that Micron did unusual amounts of lobbying around the time the US government introduced trade sanctions against YMTC and Apple dropped its plans to start using them. (Though that time period also coincides with the CHIPS Act frenzy, so take it with a grain of salt.)

    • ksec 4 hours ago

      HDD isn't any different. Price per TB has been flat for many years. You get higher capacity for hyperscalers, but those drives are not intended for consumers and you need to pre-book in advance to get them.

      SSD prices may have fluctuated over the last few years, but their cost model is now similar to HDD's. And so is DRAM's.

      Mainly, the cost to produce a GB of DRAM / NAND / HDD has stayed roughly the same for many years. You may get better performance or lower energy usage, but cost hasn't changed. All the fluctuations in the market are supply and demand dynamics, which is why many saw 4TB for $150 in 2023 and thought prices would drop soon. But that has not been the case.

      We need breakthroughs to further reduce the cost per GB. Although one could argue costs have already halved if we account for inflation over the past 20 years.

      • Jerry2 4 hours ago

        What's a fair price for a TB of HDD? SSD? Which high-capacity SSDs are worth buying?

        I'm not sure if there are any sites tracking this. Anyway, I need to buy 30TB of storage this year so I can upgrade my NAS and make it last a few years. Thanks for any replies from anyone who has an opinion!

        • wmf 3 hours ago

          HDD: $10/TB

          SSD: $50-100/TB

          • nixgeek 2 hours ago

            Most new drives are sold into consumer/end-user retail around $20/TB and only seem to dip a little during sales (with most sellers also quantity-limiting those sales). Getting to $10/TB is possible but typically involves buying manufacturer-recertified drives, and may involve going smaller (14-16TB drives) than you might prefer.

            I managed to find 64x Seagate Exos 20TB for $13/TB new about 2 years ago, on NewEgg of all places, but I’ve never seen that deal repeat. :(

            All the new 30TB+ HDDs using HAMR technology from Seagate and WD still feel like expensive unobtainium.

      • AtlasBarfed 4 hours ago

        The SSDs come from, very generally speaking, the same semiconductor segment that has been busted multiple times for price fixing in DRAM.

        Not that hyperconsolidation in HDDs isn't vulnerable to the same things, but the management playbook of these guys is to fix and inflate as much as possible.

    • wmf 14 hours ago

      The floor for hard disks is the cost of the case, motor, controller board, etc. The floor for SSDs is the controller and the PCB.

      • ggm 14 hours ago

        We're nowhere near floor cost in SSD because the cost of the PCB and controller is cents.

        We're not even tracking the chip cost for the storage. There's no linear function between them, in terms of numbers or die space.

        The price is just "the price"

        • elchananHaas 14 hours ago

          Not true. High end controllers need DRAM for caching indexes. That's at least a few dollars.

          Flash storage is a commodity; we are paying close to the amortized cost of manufacturing and sales.

          • ggm 13 hours ago

            Mate, if you think we're paying amortised costs you either work in American pharma or marketing. We're paying fat shareholder returns.

            "A few dollars" forsooth. My 2TB SSD cost $150 AUD and was (I believe) immensely profitable to everyone down the supply chain. The same spend gets you 16GB of packaged DDR ram and I think we can both see there is no linear relationship between the DDR chip cost, in GB and the 100x denser storage needed for SLC flash. This is not about vlsi density or number of chips. I'm not paying $15,000 more for my SSD.

            "The prices are the prices"

            • fh973 8 hours ago

              An SSD with 2TB flash typically contains 2GB RAM for lookup tables.

          • AtlasBarfed 3 hours ago

            We are talking about Samsung and Micron here?

      • dopa42365 14 hours ago

        Price/TB is nearly identical for ancient SATA and new "high-end" PCIe 4 M.2 SSDs. The cost is EVERYTHING except the controller and the PCB. Which would be the memory, shockingly.

        • ggm 13 hours ago

          Do you think they have a yield problem? The price of chips usually drops when the production tech matures. I could believe the fab lines are running smoking hot, but surely by now lower-density fabs in all kinds of economies could be making this tech.

          • esseph 12 hours ago

            Maybe spending time making more GPU memory instead?

            • wtallis 4 hours ago

              DRAM and NAND require entirely different fabs.

      • HarHarVeryFunny 5 hours ago

        The cost of manufacturing a 276-layer (!!!) NAND chip must also be significant, and I dare say it takes longer than the mechanical components, given that we're talking 276 sequential steps of applying photoresist, UV exposure, then etching ...

        • wtallis 4 hours ago

          The whole reason why 3D NAND happened and higher layer counts enable cheaper, higher density storage is that it doesn't require an etch step per layer to build the memory cells. It's just deposit several dozen layers then etch holes down through all of those layers in one step, followed by filling in those holes with different materials. (Though at 276 layers, it's a stack of more than one deck of layers built as separate batches. The aspect ratio of the holes is the limiting factor.)

          But aside from that, the cost of the NAND is the variable portion of the drive's cost, not part of the fixed floor of cost necessary for a drive of any capacity.

          • HarHarVeryFunny 4 hours ago

            Interesting - thanks. I wasn't aware 3D NAND was built multiple layers at a time in this way.

            How are the layers made different, without individual masking and etch?

            • wtallis 4 hours ago

              The layers largely aren't different. The end result is a vertical string of (theoretically identical) NAND memory cells filling the holes that were etched through the stack, with contacts at the top and bottom. At the edge of the stack, at the end of the fabrication process, they etch a staircase to expose one contact with each layer. See https://thememoryguy.com/3d-nand-how-do-you-access-the-contr...

    • kijin 13 hours ago

      There is no consumer market for 4TB+ SSDs. There never was, and there probably won't be for the foreseeable future. Most non-technical people have been conditioned to store their data on their phones and/or in the cloud these days. When they need more storage, their first thought is to upgrade their cloud plan, not to open up their device and void the warranty.

      Professionals like us know, of course, that the SSD is an easily upgradable component. But we also tend to know how to set up a NAS with 4x 18TB HDDs in a ZFS pool that can saturate the bandwidth of any reasonable home network when transferring large files. So the market for professionals and enthusiasts doesn't always translate into a market for large SSDs.

      • mystifyingpoi an hour ago

        > have been conditioned to store their data on their phones

        This is 100% true. I have a family member that literally has a decade of family photos stored in WhatsApp conversations. 10 gigs app storage used on some old iPhone, no backup.

      • asnyder 6 hours ago

        Opening up one's device does not void the warranty in the US. We have the Magnuson-Moss Warranty Act, which forbids this, despite manufacturers trying to convince us otherwise with those void-if-removed-or-broken stickers.

        Most recently, FTC started to raise awareness and crack down on some abuse: https://www.ftc.gov/news-events/news/press-releases/2024/07/....

        It will take a long time to de-program, and by that time nothing will be replaceable enough to matter, given the industry's march towards preventing repair altogether.

      • dontlaugh 11 hours ago

        I would like to be able to get rid of my hard drive NAS at some point, in favour of something smaller and quieter. The price really doesn’t make sense right now, though.

      • rbanffy 7 hours ago

        > and there probably won't be for the foreseeable future.

        There might be if we get to very large models that rely more on storage than compute, but that's a lot of "ifs". Maybe when people start capturing their birthdays in 16K HDR at 120 fps.

      • mschuster91 3 hours ago

        > There is no consumer market for 4TB+ SSDs. There never was, and there probably won't be for the foreseeable future.

        Gamers certainly are a market. Borderlands 3 for example, a 2019 game, clocks in at 75GB on Steam, GTA 5 at 105 GB, MSFS 150 GB. And that's all OLD games. CoD Black Ops wants 175 GB, GTA 6 is rumored to want anything from 100 to 300 GB. You don't want that on spinning rust.

  • userbinator 12 hours ago

    Absolutely zero mention of retention for a storage device is disturbing.

    The endurance figures seem to suggest anywhere between 6.6k and 11k cycles, which is both a wide range and unusually high for TLC flash - this is the normally expected range for decent MLC and 5 years of retention, so I suspect they're massaging the retention downwards to get those numbers.

    Related: https://news.ycombinator.com/item?id=43702193

    • devttyeu 9 hours ago

      I don't think most people grasp how absurdly high even 1 DWPD is compared to enterprise HDDs. On the enterprise side you'll often read that a hard drive is rated for maybe 550TB/year, translating to 0.05~0.1 DRWPD [1] (yes, combined read AND write), and you have to be fine with that. (Yeah, admittedly the workloads for each are quite different; you can realistically achieve >1 DWPD on an NVMe with e.g. a large LSM database.)

      What makes NVMe endurance ratings even better (though not for warranty purposes) is when your workload has sequential writes you can expect much higher effective endurance as most DWPD metrics are calculated for random 4k write, which is just about the worst case for flash with multi-megabyte erase blocks. It's my understanding that it's also in large part why there is some push for Zoned (hm-smr like) NVMe, where you can declare much higher DWPD.

      * [1] https://documents.westerndigital.com/content/dam/doc-library...
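
As a back-of-envelope check on the 0.05~0.1 figure, a quick sketch (the drive capacities below are my own examples, not from the linked datasheet):

```python
# Convert an HDD workload rating in TB/year into a DWPD-style figure.
def hdd_dwpd(tb_per_year, capacity_tb):
    return tb_per_year / 365 / capacity_tb

for cap in (16, 20, 28):
    print(f"{cap} TB drive at 550 TB/year: {hdd_dwpd(550, cap):.3f} DWPD")
```

For common enterprise capacities, 550 TB/year does indeed land in the 0.05-0.1 range the comment quotes.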

      • zozbot234 8 hours ago

        I assume that flash translation layers use LSM-like patterns underneath to cope with small random writes. The best case for flash with a good translation layer is data that can be erased/discarded in bulk, since this minimizes write amplification. This is close enough to the "large sequential writes" case but not necessarily equivalent.

    • jauntywundrkind 12 hours ago

      To ground these endurance figures a little more concretely: consumer drives will advertise 600TBW per TB, i.e. 600 cycles. Starting at 10x that is a very solid starting place!

      On the other hand, an enterprise drive like Kioxia CM7 will offer either 1 or 3 drive writes per day (for regular and write-intensive drive models, respectively), across the 5 year warranty. That's ~1800 cycles or just shy of 5500 cycles.
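
Those cycle counts follow directly from DWPD times warranty length; a quick sketch:

```python
# Full-drive write cycles implied by a DWPD rating over a warranty period.
def cycles(dwpd, warranty_years):
    return dwpd * 365 * warranty_years

print(cycles(1, 5))  # 1 DWPD over 5 years
print(cycles(3, 5))  # 3 DWPD over 5 years
```

1 DWPD over a 5-year warranty works out to 1825 cycles and 3 DWPD to 5475, matching the "~1800" and "just shy of 5500" figures above.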

      • justincormack 7 hours ago

        Generally they just chop off some space to allow more to be used as the flash wears out, which is why enterprise drives are 3.84TB not 4TB. If you partition the drive smaller and never use the rest, you can achieve similar endurance.
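
The 3.84TB-vs-4TB gap mentioned above can be expressed as an overprovisioning ratio; a minimal sketch (treating 4TB as the raw capacity, which glosses over binary-vs-decimal sizing):

```python
# Spare (overprovisioned) area relative to user-visible capacity.
raw_tb, user_tb = 4.0, 3.84
spare_fraction = (raw_tb - user_tb) / user_tb
print(f"{spare_fraction:.1%} overprovisioning")
```

That works out to roughly 4% of spare area, which is typical for read-optimized enterprise drives; write-intensive models reserve considerably more.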

      • userbinator 12 hours ago

        Enterprise drives have always been rated for lower retention and hence higher cycles than consumer drives, since those are inversely correlated.

        • daymanstep 11 hours ago

          Do you have a source for that claim? Everything I've read suggests that data retention is mostly about the write temperature vs storage temperature, and that enterprise drives have about equivalent retention to consumer drives.

          By retention I'm assuming you're referring to the amount of time it takes for data loss to occur on SSDs in cold unpowered storage.

          • userbinator 11 hours ago

            From the horse's mouth: https://americas.kioxia.com/content/dam/kioxia/en-us/busines... (See graph on page 2.)

            https://www.macronix.com/Lists/ApplicationNote/Attachments/1... (See graph on page 3.)

            Then look up the retention specs for enterprise drives and compare to consumer ones, and the conclusion is obvious.

            • daymanstep 11 hours ago

              Correct me if I'm mistaken but it looks to me like your graphs are talking about the number of P/E cycles accumulated, rather than the number of P/E cycles that a drive is rated for?

              What this seems to suggest is that as a drive gets more "worn out", its data retention gets worse.

              But I don't see how that can be taken to imply that enterprise drives have worse data retention than consumer drives. Nothing that I've seen suggests this.

              • zozbot234 11 hours ago

                > What this seems to suggest is that as a drive gets more "worn out", its data retention gets worse.

                This is what informal tests (i.e. via scrubbing/resilvering the drive after leaving it powered off for a long time) have found. Retention/data remanence is remarkably good for a drive that has been written over just once, and quite bad (i.e. you start seeing bit errors) for one that's almost worn out. This is actually very good news for the EEPROM-like use case where rewrites are quite rare.

                (Note that "almost worn out" in this case can mean going far beyond the formal total-data-written rating of the drive. We're talking the range where the hardware itself is about to croak.)

              • userbinator 10 hours ago

                But I don't see how that can be taken to imply that enterprise drives have worse data retention than consumer drives

                Look up the specs. Commonly quoted as 3 months for enterprise ratings but the whole picture is in these tables:

                https://www.legitreviews.com/wp-content/uploads/2015/05/ssd-...

                Page 22 of this "says the quiet part out loud": https://www.snia.org/sites/default/files/AnilVasudeva_Are_SS... ("Lower Data Retention allows for higher endurance")

                • daymanstep 8 hours ago

                  I'm not sure I fully understand what that page is saying.

                  It is my understanding that JEDEC standard tests the data retention of the "worst case" scenario where a drive is fully worn out, i.e the drive has reached its maximum rated P/E cycles.

                  If I'm understanding correctly, that page you linked is saying that enterprise drives have firmware that essentially allows more P/E cycles, which then means that at the end of those cycles, the drive will be more "worn out" and thus will have a worse data retention.

                  But in a real world usage scenario where we subject a consumer SSD and an enterprise SSD to the same number of P/E cycles, would they have different data retention? I thought the JEDEC data was only for end-of-life scenarios.

                  • wtallis 4 hours ago

                    > But in a real world usage scenario where we subject a consumer SSD and an enterprise SSD to the same number of P/E cycles, would they have different data retention?

                    Probably not, assuming they're using the same underlying media and same strength of ECC and that the amount of host data written was appropriately adjusted to account for the different capacities and overprovisioning ratios to ensure the actual P/E cycles seen by the NAND were the same.

                    As you write more data, the consumer drive would be out of warranty first, while the enterprise drive would still be under warranty but not spec'd to retain data for as long as the worn-out consumer drive. So for either drive, the manufacturer isn't guaranteeing 1 year retention past the rated endurance of the consumer drive.

    • drtgh 9 hours ago

      Very disturbing; the article talks about the number of bits to be read per cell as if it were only a matter of speed.

      In addition to your comment (related): when 3D-NAND cells are read, interference through the traces (charge-trap disturbs) requires the neighbouring cells to be refreshed with writes if the controller wants to preserve data integrity. This did not happen with 2D-NAND in the past.

      Reading data from one cell in 3D-NAND involves writing cells; reading data in 3D-NAND consumes drive endurance.

      (Not to mention, temperature/number of layers/endurance)

      • pkaye 2 hours ago

        I used to work in SSD controller firmware. This kind of issue existed more than 10 years ago. To achieve higher capacity with each generation of NAND you are trading off everything else (endurance, retention, read/program times, etc) little by little. 20 years ago it was so rare to see any kind of errors with SLC NAND that error detection and handling was fairly simple.

      • zozbot234 9 hours ago

        The read disturb effect happens with 2D-NAND, it's just rare.

  • kachapopopow an hour ago

    140tb of writes, so us-, wait, that's petabytes isn't it.