Kioxia and Dell cram 10 PB into slim 2RU server

(blocksandfiles.com)

84 points | by rbanffy 5 hours ago

57 comments

  • fancyfredbot 3 hours ago

    The very first sentence of this article mixes up terabytes and petabytes. I used to dismiss an entire article as poor quality on seeing a mistake like this, but these days it also feels like an indicator that the article was written by a human and might actually have something interesting to say.

    Sadly not in this case though - the Kioxia drives are interesting, but the fact that Dell has put some in a box is much less so.

  • Pallav123 2 hours ago

    At current enterprise NVMe prices, the drives alone for this must easily push past the $500k to $1M mark. It's fascinating to see this level of density, but it’s strictly going to be hyperscaler or high-end defense/research budget territory for a long time.

  • NitpickLawyer 3 hours ago

    There's been a lot of talk about orbital DCs lately, but with these levels of density, orbital CDNs might be a more obvious use case. It would be interesting to see whether something like Starlink could use drives like this to cache media content and reduce the overall data moving through the constellation. It could even be worth having some satellites in higher orbits (even GEO, if the ground hw can reach it) dedicated to streaming media content. You can tolerate higher RTT for content that doesn't need to be real time.

    • evil-olive 2 hours ago

      no, absolutely not. orbital datacenters are never going to happen, it doesn't matter whether you try to frame them as compute or storage or whatever else.

      the extreme density of these SSDs is actually an anti-feature in the context of spacecraft hardware.

      the RAD750 CPU [0] for example uses a 150nm process node. its successor the RAD5500 [1] is down to 45nm. that's an order of magnitude larger than chips currently made for terrestrial uses.

      radiation-hardening involves a lot of things, but in general the more tightly packed the transistors are, the more susceptible the chip is to damage. sending these SSDs to space would be an absurd waste of money because of how quickly they would degrade.

      and then there's the power consumption & heat dissipation. one of these drives draws 25W [2] and Dell is bragging about cramming 40 of them into one server. that's a full kilowatt of power - essentially a space heater in a 2U form factor.
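
      rough math, taking the 25 W-per-drive figure from the spec sheet [2] and dell's 40-drive count at face value:

        # back-of-envelope drive power for one fully loaded 2U box (assumed figures)
        drives_per_server = 40     # Dell's claimed drive count
        watts_per_drive = 25       # max power per drive, per the Kioxia spec sheet
        print(drives_per_server * watts_per_drive)   # 1000 W, before CPUs/NICs/fans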

      0: https://en.wikipedia.org/wiki/RAD750

      1: https://en.wikipedia.org/wiki/RAD5500

      2: https://americas.kioxia.com/content/dam/kioxia/en-us/busines...

      • wahern an hour ago

        AFAICT[1] the latest generation of SpaceX Starlink satellites use AMD Versal XQR SoCs, which are built on a 7nm process, with components like the main processor (dual-core ARM Cortex-A72) and memory (DDR4) clocked in the gigahertz, not megahertz, range.[2] At least some of these SoC models (presumably the lower-clocked ones) are certified for geosynchronous orbits, not just low Earth orbit.

        [1] https://www.pcmag.com/news/amd-chips-are-powering-newest-sta...

        [2] https://docs.amd.com/r/en-US/ds955-xqr-versal-ai-edge/Genera...

      • killerstorm 20 minutes ago

        It's possible to run a modern GPU on a satellite: https://www.starcloud.com/starcloud-1

        Some error rate is acceptable for uses which aren't "mission-critical".

      • dmurray 2 hours ago

        In the limit, packing transistors tighter should mean more radiation resistance, not less, because you can shield them with a smaller mass of water or lead or whatever.

      • manquer an hour ago

        > order of magnitude

        It is much worse than that. Even taking the node names at face value[1], that is just one dimension; there are two or three[2] dimensions to consider, so it would be more like 100x different.

        Nehalem (2008) was built on a 45nm node at roughly 3 MTr/mm2; in comparison, TSMC's 3nm-class nodes (N3E/P/X/C, 2023-24) are at about 220 MTr/mm2.

        Of course that is just one metric (transistor density); there are many other improvements to consider over the last two decades.
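
        A rough sketch of the comparison, using the rounded density figures above (exact numbers vary by node and cell library):

          # transistor-density ratio: a 45nm-era chip vs a current 3nm-class node (approximate figures)
          nehalem_mtr_per_mm2 = 3     # Nehalem (2008), 45nm
          n3_mtr_per_mm2 = 220        # TSMC N3-class (2023-24)
          print(n3_mtr_per_mm2 / nehalem_mtr_per_mm2)   # ~73x denser - closer to 100x than to 10x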

        [1] Processor node names after all haven't been tied to physical scale for 30 years https://www.eejournal.com/article/no-more-nanometers

        [2] The HBM that modern GPUs use already leverages 3D ICs.

      • fgfarben 2 hours ago

        i can write extremely confident things in all lowercase and include citations too. [1]

        doesn't mean i'm correct. [2]

        • hilariously 36 minutes ago

          It certainly looks more correct than a response like this; normally you'd expect a counterpoint instead of whatever it is you're doing.

    • tesdinger an hour ago

      For the sake of the generations that come after us, we really should not dump valuable material into space. I somehow doubt the electronics in space would be recovered and recycled properly.

      • 9dev 41 minutes ago

        Nothing is recycled properly. Recycling was a story told to ease consumers' minds so they keep on consuming. The stuff you throw away ends up in a landfill, in the sea, or on a ship to someplace else where it gets burned and then buried. Sending it to space makes absolutely no difference.

    • KaiserPro 2 hours ago

      Or you could use fibre, which has the advantage of not needing > 1 kW of concentrated microwave to get ~2 gig of throughput.

      Or even better, not yeeting it into an environment where it's cooked/cooled every 90 minutes.

      Or even better, where it's not absolutely pelted by cosmic rays, enough to obliterate a good GB of data a day.

      Or space data centre.

    • ssl-3 2 hours ago

      If I correctly understand what you're suggesting, then that could save on uplink bandwidth. Sending one copy into space, and then sending it back down over and over again sounds nice.

      But does it solve a problem that we actually have? Is uplink bandwidth a pressing limitation?

  • bombcar 3 hours ago

    At full NIC speed, it takes about 666 minutes to fill this thing.

    Satan’s NAS!
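
    For the curious, the rough math (assuming ~2 Tb/s of aggregate NIC bandwidth, i.e. the 5x 400GbE figure mentioned elsewhere in the thread):

      # time to fill 10 PB at full line rate (assumed NIC bandwidth)
      capacity_bytes = 10e15            # 10 PB
      nic_bytes_per_sec = 2e12 / 8      # 5x 400GbE = 2 Tb/s = 250 GB/s
      print(capacity_bytes / nic_bytes_per_sec / 60)   # ~666 minutes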

  • ksec 2 hours ago

    This is one of the cases limited by PCIe speed: the lanes are shared with the SSDs, so the network can only do 5x 400Gbps. This is on PCIe 5.0; luckily the 7.0 spec is ready and 8.0 is already at 0.5 draft status.

    If we could somehow increase the density further by 5x, we would be able to store 1EB in a single rack.
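
    A rough sketch of that rack math (assuming ~20 of these 2U servers fit in a 42U rack once you leave room for switches and power):

      # rack-level capacity, today and with a hypothetical 5x density bump
      servers_per_rack = 20
      pb_per_server = 10
      print(servers_per_rack * pb_per_server)        # 200 PB per rack today
      print(servers_per_rack * pb_per_server * 5)    # 1000 PB = 1 EB per rack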

    The most interesting part to me is the last sentence.

    >Scality tells us it’s working on supporting a future nearline-class SSD from Samsung, viewed as an HDD killer, with similar or even larger capacity and a roadmap out to a 1 PB drive.

    Finally, an HDD killer. Maybe in another 5-10 years' time. The day of everyone having an SSD NAS / AI Cloud at home will come.

    • loeg 18 minutes ago

      QLC already beats out HDD in power-constrained hyperscaler environments. Capex is not the only factor.

  • mmanfrin 39 minutes ago

    Time for my NAS to get an upgrade.

  • zeristor 2 hours ago

    Tell me about the thermals.

    • zamadatix 2 hours ago

      Max per drive is 25 W, so even a rack with 20 servers of 40 drives each probably draws less than the average GPU rack, even after the other overheads.
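
      A quick worst-case sketch (assuming every drive pegged at its 25 W max; real workloads will average lower):

        # drive power for a hypothetical full rack of these servers
        servers_per_rack = 20
        drives_per_server = 40
        watts_per_drive = 25
        print(servers_per_rack * drives_per_server * watts_per_drive)   # 20,000 W = 20 kW, before CPUs/NICs/fans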

  • amelius an hour ago

    If datacenters start buying these things, will we see consumer hard drives go up in price?

    • joezydeco 37 minutes ago

      Kioxia was my eMMC supplier until earlier this month, when they said they couldn't fill my orders anymore. They're sold out.

      So, yes.

  • nout an hour ago

    I could do some cool backups with this bad boy.

  • danhon 2 hours ago

    Someone please fix the title for this.

  • metadat an hour ago

    Now make it for consumers. Storage capacity per dollar has really stalled.

  • varispeed 2 hours ago

    10PB is probably the amount of data a medium-sized country can collect about all its citizens (basic details, work history, all taxes, all financial records, all medical records, all police records, all biometric records and more) over their lifetimes.

    I think developments like this might get many public-sector-focused firms sweating.

    • jandrewrogers an hour ago

      Those records are going to be pretty negligible in terms of storage. It is only a couple new records per day. Even if you add things like detailed mobile and tracking telemetry, it is a few MB per person per day.

  • a1o an hour ago

    Now you put this in a cruise ship and you can move a lot of data

  • joe_mamba 4 hours ago

    Can't wait to move my spinning rust NAS to this in 20 years.

    • loeg 3 hours ago

      I went to QLC for my NAS last cycle. The $/TB was worse, but not by a huge margin, and the performance is quite a bit better (not that it matters).

    • anonymousiam 2 hours ago

      I've been wanting to update my (100TB) NAS for over five years, but I haven't yet found anything that I feel is worth upgrading to. One of these with a QSFP56 interface would be nice, but I would need to sell one of my houses to pay for it, so I'll be waiting a little longer...

    • mx7zysuj4xew 3 hours ago

      Sadly none of that enterprise hardware will ever make it to you due to being wastefully shredded

      • theandrewbailey 2 hours ago

        I work in the refurb department of an e-waste recycling company. In my n=1 data point, some server drives are shredded/destroyed, some aren't (maybe half) before they reach my team. Of the ones that aren't, most are too small to sell, or have bad reads or reallocated sectors. Maybe 10% are fit to resell, not zero.

    • tempest_ 4 hours ago

      NVMe SSDs are consumable items, more so than HDDs are.

      These drives will arrive in the secondary market to be snapped up by businesses lower in the food chain. By the time you can find them they will have been ridden hard and put away wet, so you probably won't want them.

      • theandrewbailey 2 hours ago

        I work in the refurb department of an e-waste recycling company. Some SSD brands are more durable than others. In my experience, a greater proportion of Intel and Micron SSDs have failed (or are failing) than any other brand. It's as if sysadmins are like "Intel is a good brand, let's use these SSDs to cache our HDD storage array", then throw them out when they turn read-only.

  • louwrentius 4 hours ago

    What would this cost?

    • retired 15 minutes ago

      $500 on Facebook Marketplace in 20 years time.

    • bracketfocus 3 hours ago

      They are likely 200USD+ per TB, so one 250TB drive would be ~50,000USD.

      There's probably bulk pricing, but if you bought 40 drives separately that's 2,000,000USD in storage alone.
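
      The arithmetic behind that, taking the ~200USD/TB figure as an assumption (real bulk pricing is negotiated and likely lower):

        # list-price estimate for the flash alone
        usd_per_tb = 200
        tb_per_drive = 250
        drives = 40
        print(usd_per_tb * tb_per_drive)            # ~50,000 USD per drive
        print(usd_per_tb * tb_per_drive * drives)   # ~2,000,000 USD for 10 PB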

    • geerlingguy 3 hours ago

      I can't remember where I saw it, but I think each of these high-capacity drives is well into the $15-25k price range.

      So the full 10 PB will be $600-800k in drives alone, plus a server with enough high-speed PCIe lanes to serve the 40+ drives, definitely $1m+

    • cr125rider 3 hours ago

      More than you can afford cause you had to ask, ha

    • gosub100 3 hours ago

      You can't buy this stuff anymore. They are leased and rented through layers of middlemen.

      • lostlogin 3 hours ago

        > anymore

        Could you ever buy it?

  • reactordev 4 hours ago

    Remember that season of Silicon Valley on HBO that was all about “the box”?

    I feel like we’re in that season.

    • darknavi 3 hours ago

      Just waiting for the Gavin Belson edition box.

  • retired 3 hours ago

    Some wealthy techbro from /r/datahoarders is going to purchase this to store all episodes of Doctor Who in uncompressed 10-bit 4:2:2 FFV1 Matroska remuxes with redundant PAR2 recovery archives.

    • trvz 2 hours ago

      Not quite yet.

      The interesting thing here is ~256TB in a single drive, but it's in E3.L form factor.

      I have about 160TB on hard drives that I'm waiting to offload onto a single SSD.

      But that needs to come with a connector that has adapters to USB-C, so I can attach it to my Macbook Neo.

      Hopefully they get it a bit more dense soon and into the 2.5" NVMe form.

      • dijit 2 hours ago

        I've been waiting with bated breath for a SATA 3.5" SSD with high capacity.

        I might be waiting forever, because clearly there's nothing coming. Though I'm not sure if it's because it's technically difficult (high power consumption to keep the flash lit?) or something else.

        I'm aware that it leaves performance on the table for the chips, and that probably means the unit economics are such that, for the yield, OEMs would rather make high-performance drives which sell for more.

        But a 4-bay NAS with 3.5" SSDs would be silent and theoretically sip power, and there's so much space for chips that you could space them nicely and get 10+TiB in a drive...

        I don't need to touch every cell, I just want something silent and stateless and less power intensive for my time-capsule backups and linux ISOs.

        Alas.

      • TiredOfLife 33 minutes ago

        Attaching a $40k drive to a $600 Macbook

      • jauntywundrkind 2 hours ago

        There's a ton of different adapters already between the EDSFF connector used for E3/E2/E1 drives and everything else PCIe (PCIe slots, M.2, U.2). For example, this PCIe card. (Good luck tweaking your equalizer-settings jumpers by hand though, whew!!) https://www.microsatacables.com/pcie-x8-gen4-with-redriver-t...

        Drop that in one of the many USB4-to-PCIe docks and you should be good to go. Pretty fugly, but it ought to just work! I think there are some cheaper models still available under $90, but here's a listing. https://www.dfrobot.com/product-2835.html

        I believe a more focused, dedicated USB<->NVMe chip might also work if attached to an EDSFF connector. I didn't look hard and haven't seen any such products yet, but it's mostly mechanical/packaging plus some signal-integrity checks; in the end it wouldn't really be much different from any other NVMe adapter. Seems very doable.

        Build it! Someone could sell (to quote The Daily Show) literally dozens of said adapter! (Eventually probably many, many more, but there's not a huge second-hand market for EDSFF atm.)

    • nickstinemates 3 hours ago

      Hitting a little too close to home with this comment.

    • tliltocatl 2 hours ago

      Data retention is probably too poor to make these usable for archival purposes.

  • tesdinger an hour ago

    All the increases in density are impressive, but they come with downsides for repairability and recycling. I hope we can still repair this when parts of it break, or at least recycle it properly. No matter how high-tech it is, eventually this will break.

    • geerlingguy an hour ago

      These drives all use standard enterprise storage interconnects, and the server chassis is like other Dell server chassis. Not using ATX or EATX, but it's status quo for Dell, and many old Dell servers wind down their old age in homelabs.

      Hopefully one of these 10 PB monsters will be under $2,000 someday, at which point I will pop it in my homelab :)