It's time for operating systems to rediscover hardware

(usenix.org)

49 points | by fanf2 7 hours ago

12 comments

  • amelius 5 hours ago

    The problem is that hardware vendors (read: Nvidia) do not want the OS to have full control over the hardware. So instead they write a driver that the OS can talk to, and keep everything else behind closed doors.

    Meanwhile, companies like Apple, who integrate everything, can have full control and are likely to produce the better OSes in the future, but they are even more closed, and the only talks we'll see about them are keynote speeches by the CEO.

    • grisBeik 5 hours ago

      > The problem is that vendors of hardware [...] do not want the OS to have full control over the hardware

      I agree. At least the first half of the presentation blames the sordid status quo on Linux, when it is actually the responsibility of the hardware vendors. Linux not being the boot loader, Linux not being the firmware, Linux not being the secure firmware, and so on, is all the fault of the hardware vendors. They keep everything closed, even on totally mainstream architectures. On x86, whatever runs in SMM, whatever initializes the RAM chips, etc. is all highly guarded intellectual property. On the handful of boards where everything is open (Raptor Talos II?), or has been reverse engineered, you get LinuxBoot, Coreboot, and so on. Whoever owns the lowest levels of the architecture dictates everything, for example where Linux may run.

      > Meanwhile, companies like Apple who integrate everything can have full control

      Yes. Conway's law. As long as your SoC "congeals" from parts from a bunch of vendors, your operating system (in the broad sense in which the presenter uses the term) is going to be a hodge-podge too. At best, you will have formal interfaces/specifications between components, and open source code for each component, but the whole will still lack an overarching design.

      Edited to add: systems are incredibly overcomplicated too; they're perverse. To me, they've lost all appeal. They're unapproachable. I wish I had started my professional career twenty years earlier, when C (leading up to C89) still closely matched the hardware. (But I would have had to be born twenty years earlier for that :/)

      Edit#2: the suggestion to build our own hardware is completely impractical. That only raises the barrier to entry. (IIRC, Linus Torvalds at one point wrote that ARM64 in Linux wasn't getting many contributions because there were simply no ARM64 workstations or laptops for interested individuals to buy and play with.)

      • js8 5 hours ago

        While I largely agree, I think this is inaccurate:

        > Whoever owns the lowest levels of the architecture, dictates everything

        I think in IT, the people who can create the most complexity for others, while keeping things relatively simple for themselves, are the ones who can dictate, because they can then sell the expertise, since they "produce" it more cheaply than everyone else.

        Using HW barriers, or just closed-sourcing the stuff, happen to be quite effective ways to make things complex for others and simple for yourself. Another way is to create your own language, standard, or API. Yet another is a network barrier and data ownership (aka SaaS).

        My point is, it's possible to dictate on any level, not just the lowest.

        • grisBeik 4 hours ago

          Thanks; this is a great thought! Let me try to refine it: "create irreplaceable complexity for others".

      • rjsw 5 hours ago

        Another area of the operating system that could be open and cooperative is network controllers: most have an offload engine of some kind, but you can't extend what it does or fix bugs in it.

  • shadowpho an hour ago

    One problem with Linux on ARM is the lack of a well-supported discovery mechanism like UEFI on x86, which necessitates a lot of custom code and custom releases for many ARM chips.

  • linguae 4 hours ago

    I listened to this 2021 talk from Timothy Roscoe and found it interesting. In many ways it reminds me of Rob Pike's 2000 talk "Systems Software Research is Irrelevant," which also deplored the lack of OS research at the time. (For those of you who don't know, Rob Pike helped create the Plan 9 operating system at Bell Labs. He later moved to Google and helped create the Go programming language.)

    However, I wonder if the reason there are fewer OS papers describing radical departures from Unix/Linux, whether back in 2000 when Rob Pike spoke on this topic or in 2021, is that the incentive structures governing researchers' careers discourage this type of work. Writing an operating system requires a lot of effort. One could shrug this off, saying that the problem is worth the effort, but many researchers face career pressures that make taking on the task of writing an operating system difficult. In corporate environments, research activities must often be justified from a business standpoint, and the company's direction is often driven by short-term pressures. While Roscoe could argue that it's in a company's interest to invest in operating system infrastructure better equipped to deal with modern systems, it may be cheaper for the company, at least in the short term, to just modify Linux and call it a day. Pre-tenure academics such as grad students, postdocs, and assistant professors have to play the "publish or perish" game. Perhaps a professor who already has tenure could pursue an operating system project, but even with tenure there's still the matter of getting grant money, and the grad students who would contribute to it are often concerned about their own research careers; they are just starting the publication game.

    Maybe if we had corporate labs these days that functioned more like golden-era Bell Labs and Xerox PARC, and maybe if we had an academic environment with less pressure to publish steady results at top venues, there'd be more researchers willing to take risks and build operating systems with new designs rather than modifying Linux.

    • giantrobot 3 hours ago

      > there'd be more researchers willing to take risks and build operating systems with new designs rather than modifying Linux.

      I don't know if that would be the case. While Bell Labs and Xerox PARC produced a lot of very interesting and useful research, much of it was tied up in corporate licensing for decades. The corpse of AT&T Unix has haunted the industry ever since and cost many millions of dollars in lawsuits.

      Linux ate the world largely because anyone could do what they wanted with it. Modifying or building on top of Linux will get you a very long way on commodity hardware you can get at Best Buy down the street for $200. You can spend a lot more time on your target of research rather than having to build the whole underlying system.

      If you've got some genius idea for a process scheduler instead of writing a whole kernel and whatever hardware drivers you need you can just hack it into Linux. You can then distribute it easily to other researchers or testers since it's just patches on a kernel they've already got running.

      I'm not saying Linux is the be-all and end-all of OS design, or that systems research is pointless. It's just a pretty good starting point for a lot of research, since it is free and quite capable on its own. As a researcher you get a lot of capability out of the box and a whole ecosystem of development tools ready to use.

      • musicale 3 hours ago

        > Linux ate the world largely because anyone could do what they wanted with it

        Linux ate the server world because 1) it didn't have server licensing fees like Windows NT and proprietary Unix, 2) its closest competitor (BSD) was mired in lawsuits and uncertainty until 1994, 3) commodity x86 servers ended up competing very well on price/performance, and 4) there were possibly other factors like GPL vs. BSD, bazaar vs. cathedral, etc.

        On desktop and mobile, Linux did not exactly eat the world, though Android and ChromeOS do use the Linux kernel.

  • johnea 7 hours ago

    OS's forgot about h/w?

    • schmidtleonard 6 hours ago

      Yeah, it got buried and forgotten beneath three layers of pig lipstick and a Candy Crush ad.