38 comments

  • tanelpoder 3 hours ago

    Indeed, would be nice if there were a standardized API/naming for internal NVMe events, so you'd not have to look up the vendor-specific RAW counters and their offsets. Somewhat like the libpfm/PerfMon2 library for standardized naming for common CPU counters/events across architectures.

    The `nvme id-ctrl -H` (human readable) option does parse and explain some configuration settings and hardware capabilities in a more standardized, human-readable fashion, but the availability of internal activity counters and events varies greatly across vendors, products, firmware versions (and even your currently installed nvme & smartctl software package versions).
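
    As a concrete example of the part that is standardized, here is a minimal sketch that shells out to nvme-cli for the spec-defined SMART/health log in JSON; the exact JSON field names depend on your nvme-cli version:

        #!/usr/bin/env python3
        # Sketch: dump the standard NVMe SMART/health log as JSON via nvme-cli.
        # Assumes nvme-cli is installed, /dev/nvme0 exists, and we run as root.
        import json
        import subprocess

        out = subprocess.check_output(
            ["nvme", "smart-log", "/dev/nvme0", "--output-format=json"]
        )
        smart = json.loads(out)

        # A few of the spec-defined, vendor-neutral fields.
        for field in ("data_units_read", "data_units_written",
                      "host_read_commands", "host_write_commands",
                      "media_errors", "percent_used"):
            print(f"{field:22s} {smart.get(field)}")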

    Regarding eBPF (for an OS-level view), the `biolatency` tool supports the -F option to additionally break down I/Os by IORQ flags. I have added iorq_flags to my eBPF `xcapture` tool as well, so I can break down I/Os (and latencies) by submitter PID, user, program, etc., and see IORQ flags like "WRITE|SYNC|FUA" that help explain why some write operations are slower than others (especially on commodity SSDs without a power-loss-protected write cache).

    An example output of viewing IORQ flags in general is below:

    https://tanelpoder.com/posts/xcapture-xtop-beta/#disk-io-wai...
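
    And for the OS-level side, a rough bcc-style sketch of the same flag breakdown (counting block I/Os by their rwbs flag string on the block:block_rq_issue tracepoint) -- this is just a sketch, not xcapture itself, and the tracepoint layout can vary a bit by kernel version:

        #!/usr/bin/env python3
        # Sketch: count block I/Os by rwbs flags ("WS", "WFS", "R", ...).
        # Assumes the bcc Python bindings and the block:block_rq_issue tracepoint.
        from time import sleep
        from bcc import BPF

        prog = r"""
        struct key_t { char rwbs[8]; };
        BPF_HASH(counts, struct key_t, u64);

        TRACEPOINT_PROBE(block, block_rq_issue) {
            struct key_t key = {};
            bpf_probe_read_kernel(&key.rwbs, sizeof(key.rwbs), (void *)args->rwbs);
            counts.increment(key);
            return 0;
        }
        """

        b = BPF(text=prog)
        print("Counting block I/Os by flags... Ctrl-C to stop")
        try:
            sleep(999999)
        except KeyboardInterrupt:
            pass

        for k, v in sorted(b["counts"].items(), key=lambda kv: kv[1].value, reverse=True):
            print(f"{k.rwbs.decode('utf-8', 'replace'):8s} {v.value}")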

    • Avamander 3 hours ago

      It's not only NVMe/SSD that could use such standardization.

      If you want detailed Ryzen stats you have to use ryzen_monitor. If you want detailed Seagate HDD stats you have to use OpenSeaChest. If you want detailed NIC queue stats there's ethq. I'm sure there are other examples as well.

      Most hardware metrics are still really difficult to collect, understand and monitor.

  • andai 5 hours ago

    This reminds me of not too long ago when you could hear the sound of the spinny disk in action, and you'd know if there was an issue (e.g. low on RAM and swapping a lot, or the dreaded Windows search indexer).

    You get many of the same problems these days, but they're a bit harder to diagnose. You have to go looking at system monitors to see what's going on. Whereas, if the computer just communicated to you what it was doing, in an ambient way, this stuff would be immediately obvious.

    I've heard stories from people who worked on older computers that were loud enough that you could actually hear what the machine was doing. If it got stuck in an infinite loop, you'd literally hear it.

    That seems like very much a feature to me.

    • otras 4 hours ago

      I remember learning about the complex pumping machines running some of the reservoir pumps in Boston (https://en.wikipedia.org/wiki/Metropolitan_Waterworks_Museum), where they made such distinct noises when working (and malfunctioning) that an engineer could diagnose the problem by ear.

      I sometimes think about what a modern analogy would be for some of the operations work I do — translate a graph of status codes into a steady hum at 440 Hz for the 200s, then cacophonous jolts as the 500s start to arrive? As you mentioned, there's no perfect analogy as you get farther and farther from moving parts.

      • andai 23 minutes ago

        Sound is a good cue to problems. In one place I worked, we had a big board of dials showing what was happening to our web servers. The hands were moved by little servomotors that made a slight noise when they turned. I couldn't see the board from my desk, but I found that I could tell immediately, by the sound, when there was a problem with a server.

        https://www.paulgraham.com/popular.html

      • SoftTalker 3 hours ago

        Cars are a pretty common example. Any new noise or change in a noise is indicating something, usually a developing problem. E.g. a groaning or roaring noise that varies with speed, especially in turns, is likely a worn-out wheel bearing.

    • CaptainOfCoit 4 hours ago

      This is coming back now it seems, as the last three GPUs I've had all had coil whine which is distinct per activity. When I'm doing some processing sequentially across 3 different LLMs, I can hear based on the type of coil whine which LLM is currently doing the inference.

    • ssl-3 4 hours ago

      PCs used to be pretty noisy even in the 90s.

      The drives were numerous (hard, floppy, tape, optical), and the noises were too loud to avoid using diagnostically. Printers clacked and whooshed (and sometimes moved furniture). Scanners sang songs. Monitors produced clicks and pops and buzzes and sizzles, and the flyback transformer would continuously whine at different frequencies depending on mode. Modems made dialing and shrieking noises. Sound cards were anything but silent; a person could hear noises that varied based on the work the system was doing. And for a long while, CPUs and/or front side bus speeds put a lot of noise right in the middle of the FM dial.

      Computing is pretty quiet these days.

      • hulitu 3 hours ago

        > PCs used to be pretty noisy even in the 90s.

        They are still noisy when doing real work on them. Especially laptops.

      • mulmen 4 hours ago

        90s? I had all of the listed devices well into the 2010s.

        • ssl-3 3 hours ago

          During the 2010s: I was very done with floppy, and tape, and nearly done with optical media. My laptop no longer had a modem built-in and it took me months to notice this. I gave up on printing expensive color images at home (and began ordering inexpensive dye-sub or photographic prints), and laser printers (that could print any color desired as long as it was black) were cheap and quiet and most of the surviving "old" examples were new enough to no longer smell strongly of ozone; the reciprocating print mechanisms of yore were simply gone. The scanner no longer sang. Essentially-silent LCD monitors had replaced the CRTs. Internal sound cards had become quite good at being silent, and during that time also became excellent at being irrelevant. SSDs became common (and big/cheap enough to use) on most normal systems. Even cooling fans were getting quieter, probably thanks to the combined effects of the introduction of standardized PWM and Noctua's influence (both in 2005): By the 2010s, building a very quiet PC was no longer the dark art it had been in parts of the 90s.

          At least in my world, the sound of computing had changed quite a bit over the span of decades from the 90s to the 2010s.

          The only incidentally-noisy computing things I had left at the end of the teens were the hard drives of ever-increasing size that got used for storing Linux ISOs.

    • userbinator 3 hours ago

      Drive activity lights are also useful especially with an SSD, but they seem to be gone from most if not all laptops these days. Part of me wonders if that was a deliberate decision to hide activity which users may not want.

      • Bender 3 hours ago

        > Part of me wonders if that was a deliberate decision to hide activity which users may not want.

        Possibly. My first 386-DX40 had activity lights, and I tried out a CompuServe disk and saw my HD activity going nuts, so I killed the power and trashed the CD.

        There are programs that can show a virtual LED for HD and Network activity so all is not lost.

      • hulitu 3 hours ago

        > Part of me wonders if that was a deliberate decision to hide activity which users may not want.

        No. Just removal of parts to increase profits.

    • SoftTalker 3 hours ago

      My dad used to tell me that the first computers he programmed had a front panel with toggle switches and LEDs showing the binary contents of the program counter and some other status values, and he could tell by the activity on the LEDs whether the program was running normally.

    • eurleif 3 hours ago

      SSDs (many of them at least) actually do make little noises when they're busy! I noticed my PC was making a noise, and I went on a wild goose chase trying to track it down. (Was something wrong with one of the fans? Was it coil whine from the GPU?) I didn't immediately suspect the SSD, because everyone claims they're silent. Then I finally realized the noise corresponded with disk activity, and I found a YouTube video confirming it: https://www.youtube.com/watch?v=KS-BHI667po

      • nashashmi 3 hours ago

        I could hear those same sounds in my blackberry. It was surreal. I put it close to my ear and could hear it work.

    • buildbot 5 hours ago

      You can hear some modern GPU/CPUs (well really their power electronics) when they get heavily loaded!

      With training runs it makes a little beat and you can tell when it checkpoints because there’s a little skip. Or a GPU drops off the bus…

      • dataflow 4 hours ago

        > You can hear some modern GPU/CPUs (well really their power electronics) when they get heavily loaded!

        I'd hope you hear their fans too...

        • tomsmeding 3 hours ago

          Sure, but that has a resolution of seconds at best. The coil whine in the power electronics is milliseconds-accurate.

    • matsemann 2 hours ago

      I've experimented with sound for some debugging. Like, something that makes a sound every time a log line is emitted, or every time the browser repaints something. Like a Geiger counter: you can hear when something is off.
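
      A minimal sketch of that log-line Geiger counter idea (pipe a log into it; the terminal bell stands in for the click):

          #!/usr/bin/env python3
          # Sketch: emit one terminal bell per incoming log line, Geiger-counter style.
          # Hypothetical usage: tail -f app.log | python3 geiger.py
          import sys

          for line in sys.stdin:
              sys.stdout.write("\a")  # BEL character: one "click" per line
              sys.stdout.flush()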

    • dr_kiszonka 2 hours ago

      It is clearly not the same experience, but it is possible to emulate these sounds with software like DiskClick (Win, Linux, maybe MacOS).

    • antisthenes 4 hours ago

      > You get many of the same problems these days, but they're a bit harder to diagnose.

      Luckily, storage has also gotten incredibly cheap, so instead of diagnosing it's easier to just keep a full backup of your data and swap to it if something goes wrong.

      • bayindirh 3 hours ago

        Malfunctioning devices are a small part of the issue. One could tell how a workload was straining the system from the sounds it (used to) make.

        Graphs and logs provide a proxy for that data at best, and attaching a debugger, tracer, or perf tool is not always an option.

        Sounds and LEDs provided an overhead-free, real-time communication channel to the operation of the system.

  • srean 4 hours ago

    I have had a long-held, far, far simpler wish that has remained out of my reach forever: can SMART implementations by vendors be not so half-assed?

    I have had this wish since the days of spinning disks.

  • cjensen 4 hours ago

    This is asking too much. The management of trim, reallocation, wear leveling, and so much more is very complex. It's a full software stack hiding behind the abstraction of NVMe. Every manufacturer is running a different stack with different features and tradeoffs. The "stats" the author is asking for would be entirely different between manufacturers, and I doubt there is that much to be gained from peering behind the curtain.

    • zekrioca 3 hours ago

      It has been done previously for CPUs, which are much more complex than SSDs. Why couldn't each manufacturer expose whatever performance metrics there are, in whichever way they want (as the post argues, e.g., through SMART), and then let system engineers exploit this information to optimize their use cases?

      • jeffbee 3 hours ago

        Seems like a poor example, since CPU performance metrics differ not only between ISAs and between vendors of one ISA (AMD vs. Intel, for example), but also between items from a single vendor. There's a 1000-page PDF that tries to explain what all the Intel PMU counters mean on different CPUs, and it's full of errors and omissions as well.

        • zekrioca 2 hours ago

          Yes, but these differences don't really matter. There are multiple techniques system engineers can use for variable selection and regularization (to cope with differences across architectures) that help them select the counters that matter for their specific use cases.

          But then saying "it is too much to ask" is just another way to limit what users can do with the specific resources they paid for.
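
          As a toy illustration of that kind of counter selection, a sketch using L1 regularization (scikit-learn's Lasso); the counter names and data here are made up:

              #!/usr/bin/env python3
              # Toy sketch: pick the counters that explain observed latency via Lasso.
              # X = samples of hypothetical device counters, y = measured latency.
              import numpy as np
              from sklearn.linear_model import Lasso

              rng = np.random.default_rng(0)
              counters = ["host_writes", "gc_moves", "erase_count", "temp", "pcie_retries"]
              X = rng.random((200, len(counters)))
              y = 3.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.05, 200)

              model = Lasso(alpha=0.01).fit(X, y)
              for name, coef in zip(counters, model.coef_):
                  if abs(coef) > 1e-3:
                      print(f"{name:14s} {coef:+.3f}")  # the counters that survive selection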

    • jeffbee 3 hours ago

      The abstraction is the problem. Get rid of the translation layer, manage flash directly in the operating system, and suddenly the ambiguity dissolves. You would get meaningful, uniform statistics with semantics necessarily matching those used by your operating system.

      • ssl-3 2 hours ago

        Do I really want my relatively expensive general-purpose CPU to be burdened with the task of managing flash using software, when a relatively inexpensive ASIC does that job very quickly and efficiently?

        There's a lot of non-trivial stuff that goes on inside of a modern SSD. And to be sure, none of it is magic; all of it could certainly be implemented in software.

        But is that kind of drastic move strictly necessary in order to get meaningful statistics?

  • zokier 4 hours ago

    Doesn't NVMe have a lot of log capabilities, like the telemetry log etc.?

    • wmf 4 hours ago

      Consumer NVMe probably doesn't have that. It has SMART, which is hard to interpret.

  • jauntywundrkind 5 hours ago

    One of the big innovations of NVMe over SATA was giving us a bunch of separate command queues. It'd be lovely to get some per-queue information.

    I feel like maybe some of this info is already available; we just don't commonly look at it: knowing how deep the queue is and how many commands are outstanding at any given moment is probably a decent start. I haven't spent time digging into blk-mq to see what's available, to understand the hardware dispatch queue info (how the kernel represents the many hardware queues available). https://www.kernel.org/doc/html/v5.16/block/blk-mq.html
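
    For what it's worth, some of it is already visible from sysfs; a quick sketch (paths assumed, and the exact files vary by kernel version -- the inflight file gives reads/writes currently outstanding, and the mq/ directories describe the hardware queues):

        #!/usr/bin/env python3
        # Sketch: peek at blk-mq queue info for an NVMe disk via sysfs.
        # Assumes the disk shows up as /sys/block/nvme0n1.
        from pathlib import Path

        dev = Path("/sys/block/nvme0n1")

        reads, writes = (dev / "inflight").read_text().split()
        print(f"inflight: {reads} reads, {writes} writes")

        for hwq in sorted(dev.glob("mq/*")):
            nr_tags = (hwq / "nr_tags").read_text().strip()
            cpus = (hwq / "cpu_list").read_text().strip()
            print(f"hw queue {hwq.name}: {nr_tags} tags, mapped to CPUs {cpus}")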

    • adgjlsfhk1 5 hours ago

      I think a lot of the problem is that the SSD is fast enough, and far enough away, that by the time you get a response back it's fairly out of date.

      • wmf 4 hours ago

        This was solved in networking by having the device collect a histogram of queue occupancy.
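
        Roughly this idea, as a host-side approximation (the device would bump the bucket itself on each submission or completion):

            # Sketch: log2 histogram of queue occupancy, bumped on every sample.
            import math

            buckets = [0] * 16  # bucket i counts occupancies in [2**(i-1), 2**i)

            def record(occupancy: int) -> None:
                i = 0 if occupancy == 0 else min(int(math.log2(occupancy)) + 1, len(buckets) - 1)
                buckets[i] += 1

            for depth in (0, 1, 3, 7, 7, 120):  # e.g. sampled queue depths
                record(depth)
            print(buckets)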

      • dleary 4 hours ago

        That’s a separate concern.

        Every command that you issue to the SSD returns a response. It would be nice to have a bunch of performance counters that tell us where the time went for each of the commands we give it.

        GPUs have this already.

    • fh973 4 hours ago

      Queues tend to be always full or always empty (see queueing theory). There is no steady state with a half-full queue.

      For NVMe in particular, you will have a hard time filling its queues. Your perceived performance is mostly latency, as there is hardly any application that can submit enough concurrent requests.
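
      A tiny simulation of that effect (an M/M/1-style queue with made-up rates): at low utilization the queue is almost always empty, and near saturation it is almost always deep.

          #!/usr/bin/env python3
          # Sketch: event-driven simulation of queue depth at two utilization levels.
          import random

          def queue_depths(rho, n_events=100_000, seed=1):
              rng = random.Random(seed)
              t, q = 0.0, 0
              next_arrival = rng.expovariate(rho)   # arrival rate = rho
              next_departure = float("inf")         # service rate = 1.0
              samples = []
              for _ in range(n_events):
                  if next_arrival <= next_departure:
                      t, q = next_arrival, q + 1
                      if q == 1:
                          next_departure = t + rng.expovariate(1.0)
                      next_arrival = t + rng.expovariate(rho)
                  else:
                      t, q = next_departure, q - 1
                      next_departure = t + rng.expovariate(1.0) if q else float("inf")
                  samples.append(q)
              return samples

          for rho in (0.3, 0.95):
              s = queue_depths(rho)
              empty = sum(d == 0 for d in s) / len(s)
              print(f"utilization {rho}: mean depth {sum(s)/len(s):.1f}, empty at {empty:.0%} of events")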