Intel and AMD form advisory group to reshape x86 ISA

(theregister.com)

122 points | by LorenDB a day ago

118 comments

  • tliltocatl 19 hours ago

    Everyone is commenting on "let x86 die" and I would agree if it were just about the ISA. But the thing is that x86 also has a some-sorta-total-disaster-of-a-standard for peripherals and configuration (ACPI, UEFI and so on), and you can actually buy a computer that is compliant and will run Linux out of the box, even if with some glitches.

    ARM is a handful of totally incompatible SoCs, and you are totally dependent on the SoC integrator providing support (hopefully in the form of throw-over-the-fence-and-forget Linux kernel headers, but more common are just kernel binaries) to run it at all. In theory UEFI supports ARM, but can I buy a desktop ARM processor that does? And this is going to be worse with RISC-V, because hardware vendors are not interested in providing platform compatibility. So we would be back in the pre-PC era, platform-wise.

    There is no replacement for x86, not because it is impossible to replace, but because no vendor is interested in making one.

    • sidewndr46 18 hours ago

      I'm pretty fearful of the death of x86 because it probably implies the death of easily accessible hardware for general purpose computing. If I go to a big box store and buy any AMD or Intel laptop I can throw whatever kernel and userspace I want on it in a few minutes. The Chromebook movement has already been working to close this off.

      If x86 dies we will wind up with a bunch of devices where step 1 is "run this code to exploit a buffer overflow to bypass the locked bootloader. Only works on v1.2 PCB revision 7"

      • yjftsjthsd-h 18 hours ago

        > The Chromebook movement has already been working to close this off.

        ? Chromebooks are like one of two decent non-PC options (the other is Macs). AFAIK every Chromebook can be flipped to developer mode in <5min and then you can boot anything you want.

        • janice1999 16 hours ago

          That is not necessarily true, especially for older Chromebooks. Distros like GalliumOS existed for a reason - many Chromebooks needed drivers not available upstream for things like touchpads. Not all have UEFI firmware support either. There is a great resource, though, for people who want to boot regular Linux - MrChromebox[1].

          [1] https://docs.mrchromebox.tech/docs/known-issues.html

          • yjftsjthsd-h 16 hours ago

            It's true that you'd need drivers for your hardware, but that's not really special; I own plenty of normal x86 PCs with hardware that Linux doesn't have drivers for. That said, AIUI Google is generally pretty good at pushing drivers upstream these days.

            And the aftermarket UEFI is a great thing - it means that Chromebooks are probably the easiest+cheapest way to have a coreboot machine - but you don't actually need that; the default firmware will, in developer mode, boot any properly-formed image, which is a touch annoying but does in fact let you boot any OS that handles that boot system without replacing the firmware. One of my main machines right now is an ARM chromebook that runs postmarketos on the stock firmware.

    • yjftsjthsd-h 18 hours ago

      The annoying thing is that it doesn't need to be like this; as you note, ARM & RISC-V are perfectly capable of doing UEFI et al., but everyone apparently prefers to save a tiny bit of money and just not bother.

    • pjmlp 17 hours ago

      Yeah, the PC was also supposed to be like the other ecosystems, but IBM got fooled by Compaq in a clever way they failed to prevent.

      The PS/2 and its MCA architecture were a kind of attempt to recover control over the PC, but it was too late.

      • AnimalMuppet an hour ago

        Yup. Microchannel had higher performance than ISA, but you were locked in to IBM. People instead wanted lower cost and the freedom to be able to use all the existing cards.

        This was a fundamental misunderstanding by IBM. The PC exploded, not because it was from IBM, but because it was not locked down. IBM wanted to re-create their locked-down world; the market was never going to go where IBM wanted it to.

    • Sakos 2 hours ago

      I've recently discovered it isn't just per SoC. It's per device. You can't even buy a new laptop with an existing SoC without first checking how far along the manufacturer of that specific device is with mainlining the device tree for that specific device and providing DTBs and whatever else. And then they have to do that for every kernel version, or you'll be stuck with a device that can't be updated beyond 6.11 forever. Just what.

    • EasyMark 15 hours ago

      As a user, I don’t care about x86 bloat or “baggage”, all I care about is how fast is it relative to its competitors and how stable is it for what I need it for. A lot of hardware people will be angry with that but I say let the market decide; if RISC V or ARM supplant it, great!

    • snvzz 15 hours ago

      >you can actually buy a computer that is compliant and will run Linux out of the box, even if with some glitches.

      Note that neither RISC-V nor aarch64 is any worse in that regard.

      If anything, they're doing much better platform standardization wise.

      • tliltocatl 8 hours ago

        On paper, yes. But how many desktop-suitable systems out there actually implement it?

    • anthk 18 hours ago

      Don't RISC-V computers have something akin to ACPI? And yes, ARM is an all-in-one clusterfuck where every device is bound to a kernel version, and good luck upgrading the kernel with free devices. The device tree is a light form of Tivoization.

    • afr0ck 17 hours ago

      What you wrote doesn't make any sense. Arm has DTB [1]. Most SoCs re-use a lot of hardware IP blocks, and they require very few modifications to the DTB files in the kernel and to device drivers to get them working. PCIe and USB support discoverability, so no issue from that side.

      The Arm ecosystem is cleaner in my experience and has learned from the mistakes of the past. Arm CPUs are still not as fast as high-end x86 chips, but it's just a matter of time before that market is also eaten by Arm.

      [1] https://community.arm.com/oss-platforms/w/docs/525/device-tr...

      • tliltocatl 8 hours ago

        > require very little modifications to DTB files

        DTB only describes what blocks are present; if the kernel doesn't know what "crapvendor,lockinregulator" means, it will not work. Versus ACPI, which actually provides drivers, however crappy.

      • janice1999 16 hours ago

        x86 has a mature open source driver system, especially for graphics. Although there are great reverse engineering efforts (Collabora and others), with ARM SoCs you can find yourself dependent on blobs for graphics and locked into ancient and insecure Android and Linux images.

        • tliltocatl 7 hours ago

          Graphics is "fixable", in that you can stick a PCIe video card into an ARM or RISC-V system that has PCIe and it will work. Integrated graphics is a mess, that's true. But then so are NVIDIA's drivers (do they even care about graphics anymore, or is it just LLMs go brrr for them?).

        • snvzz 15 hours ago

          Yet people are running games[0] on their MILK-V Jupiter boards, using the same discrete GPUs that you would on an x86 system.

          Meanwhile, companies such as PowerVR and ARM are funding their own open source mesa3d drivers.

          0. https://box86.org/blog/

      • anthk 7 hours ago

        DTB is hell. A lot of devices today only work with obsolete kernel releases.

  • Remnant44 20 hours ago

    This is hopeful. Whatever the ARM enthusiasts would like, x86 is going to stick around for a long time, and working together to evolve the ISA extensions in a more cohesive manner would go a long way.

    In particular, I'd really like AMD and Intel to get on the same page in terms of avx10 / avx512 support.

    Many people correctly note that avx512 support is not super relevant today, but this can be laid heavily at the feet of Intel's process troubles and a terrible decision to use ISA for market segmentation purposes.

    Zen4/zen5 show that it is possible to implement wide vector units on a cpu in an extremely beneficial way - even if you're running reduced width execution units under the hood, the improved frontend performance is really useful - and also actually saves power, as the decoders and schedulers account for a fair chunk of power consumption these days.

    • emn13 19 hours ago

      Phoronix has a specific benchmark subset that tries to tease out (at least for zen5) a hint of how much the wider data-path matters vs. how much the frontend/ISA extension itself, by disabling those selectively on an epyc zen5 chip: https://www.phoronix.com/review/amd-epyc-9755-avx512

      It's pretty clear that the ISA/frontend is much more impactful than the data width on this set of workloads, and that also seems to jibe with more indirect evidence from performance patterns more broadly.

      i.e. it's the new instructions (and potentially register width) that matter significantly more than the actually wider data-path.

    • adgjlsfhk1 19 hours ago

      Honestly, as exciting as proper AVX-512 support will be, I am probably at least as excited for APX (16 -> 32 general-purpose registers). It brings x86 in line with ARM and RISC-V, and just generally makes it a lot easier for compilers to not spill registers to the stack.

      • Remnant44 19 hours ago

        Agreed.. in a lot of ways, APX is like AVX(512) for scalar code.

        Doubled integer register count, three-operand instructions, improved predication support including conditional loads/stores... If I remember correctly, they actually implement it by having the scalar instructions use the EVEX prefix introduced with AVX-512.

        I would really love to see them settle on APX + AVX10/512 as a next-generation x86 baseline.

    • snvzz 15 hours ago

      >x86 is going to stick around for a long time

      It won't. Due to its licensing, x86 does not stand a chance against RISC-V.

      Especially not after Apple, Microsoft and others (e.g. Box64) have demonstrated ability to run x86 code elsewhere reasonably well, thus providing a clear migration path.

      These actions by AMD and Intel are seen as a desperate attempt to keep x86-64 relevant. As non-x86 hardware increasingly shows up and runs legacy code with little issue, x86 is doomed.

      • netbsdusers 13 hours ago

        It's exactly because of licensing that the situation with RISC-V is a world of proprietary SoCs with no consistency, as opposed to the open system that is an x86 PC. (That the ISA is or isn't open is trifling and matters to neither user nor kernel developer.) Essentially every single board requires its very own custom port of your preferred OS.

        I will wait for x86's doom, but it will take patience. After all, people have been saying it's dying and will be replaced by some RISC alternative for over 30 years now.

        • snvzz 12 hours ago

          >It's exactly because of licensing that the situation with RISC-V is a world of proprietary SoCs with no consistency, as opposed to the open system that is an x86 PC. (That the ISA is or isn't open is trifling and matters to neither user nor kernel developer.) Essentially every single board requires its very own custom port of your preferred OS.

          Can you cite any sources for this? What makes you say this? Can you cite one (1) example?

          I ask because it is entirely inconsistent with my experience with the available SoCs and their boards. They all implement the specs that were available at the time of design. Sometimes drafts out of necessity. And all of them use OpenSBI as their machine mode firmware.

          This is false to the point it reads like FUD to me; It couldn't be further detached from reality.

  • sedatk 20 hours ago

    After 40+ years, finally :) I really want x86 and ARM to push each other for better power efficiency and performance instead of one side winning out. I’m using an ARM laptop nowadays and the battery life is so impressive.

    • pizza234 19 hours ago

      Very surprisingly, x86 can be competitive (and in some cases, even winning) in terms of power efficiency; see https://www.phoronix.com/review/amd-epyc-9965-ampereone/5.

      • MBCook 19 hours ago

        But isn’t that at the high end (when doing lots of work)?

        I thought the big advantage tended to be when not much was going on, either idle or simple tasks. And that’s where the big ARM advantage in power was.

      • hggigg 19 hours ago

        If you compare a good x86 to a crap ARM yes.

        Laptops are populated mostly with crap x86 and good ARM (apart from snapdragon which is a crap ARM)

        • emn13 19 hours ago

          That statement strongly suggests it's the implementation of the ISA that matters more than the ISA, whether that be due to process node, skill and investment by the manufacturer, or perhaps a bit of luck.

          By good ARM, do you mean specifically (and only) Apple? Given their budgets (both financial and transistor), their very long history and investment into their chips, their deep integration of stuff like memory and GPU, and their tendency to be a process node ahead of their competitors, Apple is a particularly hard example to interpret when it comes to ISA impact.

          Hard to imagine another company making all those choices, since they all have costs too, which many won't be willing or able to bear, and some of those costs (like the lack of flexibility due to the deep integration) are possibly a non-starter for many niches.

          • sedatk 18 hours ago

            I think parent means "Ampere" being a bad ARM implementation, and Snapdragon X or Apple M-series being the good ones.

            It's mentioned in the article too. Ampere performs really poorly on idle (101W vs EPYC's meager 19W). I haven't witnessed that problem on my Snapdragon X laptop.

            I'm not saying Ampere is bad myself though. It's obviously very competitive being a relatively new player and all. But, it's not as polished for desktop/laptop workloads as it seems.

            • MBCook 16 hours ago

              19W at idle seems extremely high to me.

    • Twirrim 19 hours ago

      I've been surprised by the lack of polish in Golang for Arm, given that all major clouds have server-class, high-performance Arm processors, and Google even has its own. There's stuff on their backlog that is still unaddressed around code generation, and if you compare the output in Godbolt, you can see the inefficiencies.

  • ChuckMcM 19 hours ago

    Feels like one of the signs of the apocalypse :-) And it is a pretty stunning "conclusion" to a war that started 20 years ago with the introduction of Sledgehammer and the AMD64 instruction set.

    For me though it really emphasizes how much of a threat ARM (and presumably RISCV) architectures are to these two companies.

  • bewaretheirs 19 hours ago

    Odd that the article makes no mention of intel's APX extensions (which add more integer registers, 3-operand variants of most 2-operand instructions, and assorted other tweaks).

  • bcrl 15 hours ago

    The prior rumours that Jim Keller had AMD develop the K12 core with the option of using an ARM front end would have created an extremely interesting way to compare the merits of the two instruction sets. x86 has constraints that make it far more programmer-friendly than ARM (weak memory ordering models make my brain hurt!), yet performance has been pushed ahead through techniques that were only dreamed of decades ago. I really wonder what Intel and AMD could come up with using the ARM64 instruction set combined with the knowledge gained from decades of pushing x86 and x86-64 to the limit.

  • ytch 14 hours ago

    As the news mentions, I hope this is the moment that pushes the X86S architecture[1] into the real world.

    [1] https://www.intel.com/content/www/us/en/developer/articles/t...

  • bloated5048 14 hours ago

    RISC is the future

  • ConanRus 20 hours ago

    [dead]

  • rasz 20 hours ago

    Start with optional fixed instruction size mode.

  • rwaksmunski 21 hours ago

    Just let it fade away with dignity.

  • dmitrygr 21 hours ago

    > "x86 is the de facto standard [...]" AMD EVP of datacenter solutions Forrest Norrod said

    "was", Forrest, not "is"

  • anthk 20 hours ago

    Let it die, adopt RISC-V. X86 is built on cruft over cruft.

    • bell-cot 19 hours ago

      The article noted that some massive cruft cut-backs could be in the cards.

    • blackeyeblitzar 19 hours ago

      I was wondering about this as well. What do Intel and AMD gain from protecting x86? Why not just adopt RISC-V themselves and make the best processors and process for that? Wouldn’t that get them what they want (in terms of their company’s strategy)? There’s not really value in the architecture itself as much as the real products they sell, right (the actual processors)?

      • layer8 19 hours ago

        The value of the ISA is the huge volume of existing software for it, including hardware drivers. The whole x86-using industry migrating away from x86 would be a multi-decade and costly process. In addition, this would just mean more competition for Intel and AMD, as they would lose the moat of their x86 know-how. Why would they do that?

        • anthk 18 hours ago

          For high-tier environments, GNU/Linux and *BSD are already ported.

          For the rest, office tasks either run in the browser, on Java or C#, or under fast-enough Win32 shims.

          • sph 9 hours ago

            What about the rest of the software?

            • Sakos 2 hours ago

              You're on Hacker News. We don't care about anybody except users of open source software.

              • anthk an hour ago

                Either fast userspace emulation or hooks. Adobe did it twice or thrice. Microsoft did it under DOS/Windows and then for x86 -> ARM.

            • anthk 8 hours ago

              The legacy one will run with library hooks and CPU traps.

        • tenebrisalietum 18 hours ago

          Nah. Not in this day and age. Most of that software can be recompiled - hopefully no one is writing stuff in assembly, at least not to the point where it can't easily be ported. With many-core CPUs cheap and common, everything is fast. Hardware drivers matter less in the 2020s - there is far less diversity of hardware than there was in the 1990s. x86 "know-how" is vendor lock-in, not a benefit.

          • immibis 18 hours ago

            Will you recompile the proprietary Nvidia driver for a GTX260? (not a typo)

            • anthk 18 hours ago

              Nvidia would. Most of the work is in the firmware blob, after all.

              • 13 hours ago
                [deleted]
              • IshKebab 17 hours ago

                Nvidia obviously wouldn't.

                • anthk 8 hours ago

                  Nvidia's drivers are semi-free today.

            • tenebrisalietum 17 hours ago

              No but Nvidia can with all their money.

              GPUs are really the only thing requiring their own hardware drivers that's commonly installed. Everything else common and meaningful to the masses is pretty much USB, an old-school serial port, a network interface, or a block device. Bus/interconnect drivers like NVMe, SATA and all that are very standardized and your x86 know-how doesn't buy you any advantage.

              Certainly there may be some issues with the trash heap that is ACPI/UEFI; and DTB can go ahead and steamroll that dumpster fire along with the rest of the x86 cruft - there was an x86 world before ACPI after all.

  • autoexecbat 21 hours ago

    Give us some riscv instruction decoders on the x86 cpus, let us pick per-process what ISA we use

    • Pet_Ant 21 hours ago

      Why would Intel and AMD put a hole in the wall that guards their duopoly? Understandably, it wouldn't be a sales booster in the short term, and in the long term it's a threat.

      • anthk 7 hours ago

        Intel used to sell some embedded ARM CPUs too, back in the day. They could adapt to RISC-V in a few years.

        • Pet_Ant 22 minutes ago

          That was a side hustle. That was expanding their markets. They also tried with the Atom and the Xeon Phi, but this is now other architectures moving into their heartland. Sure, if RISC-V takes over then of course Intel will adapt, but right now they have the duopoly that they share with AMD. If RISC-V becomes a standard, then IBM, for example, will get involved, and they definitely know how to design a chip.

          But this really looks like another industry shift like when businesses all moved to Unix, or when unixes all moved to business, or homes consolidated on Wintel, each time moving to a more open commodity platform. RISC-V and (to a lesser degree) ARM are much more commodity than the duopoly.

  • rdudek 21 hours ago

    I wish we could just get away from the x86 "standard" and move on. If there is something that still needs it, x86 emulation is very efficient nowadays. Just look what Apple has done with the ARM architecture. Even now, Qualcomm's ARM processors running Windows are doing a fantastic job emulating x86 as needed.

    • trynumber9 21 hours ago

      Apple is the only one making ARM chips fast enough to be competitive even with emulation.

      Qualcomm isn't on that level - they're only on par with AMD and Intel without emulation.

      The market won't move from the x86 duopoly to Apple's walled garden because they have a fast chip. It's on ARM to make a licensable core that's so much faster than the x86 options that people actually move to it.

      • pseudosavant 20 hours ago

        > Apple is the only one making ARM chips fast enough to be competitive even with emulation.

        I'd modify that to: Apple is the only one making ARM chips fast enough to be competitive, period. All of the other cores from ARM or Qualcomm aren't as fast as the top Intel and AMD x86 CPUs, just maybe more efficient. It is the reason Windows has continued to fail on ARM, because they have to use the same slow off-the-shelf cores as everyone else.

        I'm a big Surface fan, and wish they had a version with an Apple M-class SoC in it, but every ARM version (which used the fastest non-Apple ARM core at the time) has been a dog compared to the same model with Intel. Just give me the iPad Air SoC in a Surface...

        • marmaduke 20 hours ago

          > All of the other cores from ARM or Qualcomm aren't as fast as the top Intel and AMD x86

          I don’t think top cpu perf is relevant. I was working on some C code for science stuff inside Termux on a Pixel 7a, and would’ve been perfectly ok having that perf on a standard format laptop. I even noticed some branch prediction was better than x86. It’s more an issue that no one is making a decent arm in laptop format with nvme, enough ram etc.

          • MadnessASAP 17 hours ago

            > I was working on some C code for science stuff inside Termux on a Pixel 7a

            I don't want this to come across as rude or condescending, but who hurt you?

            • marmaduke 10 hours ago

              Ha, I was benchmarking arm really, not writing from scratch.

        • solarkraft 20 hours ago

          > just maybe more efficient

          Isn’t efficiency all that matters nowadays? It’s exactly what gives Apple the crazy battery life and allows them a lot of thermal headroom to drive the chips with high power.

          • layer8 19 hours ago

            It’s not clear to what extent this is necessitated by the ISA as opposed to caused by the implementation. Recent x86 chips have caught up a lot in terms of efficiency.

      • Almondsetat 21 hours ago

        Apple's emulation literally implements some x86 in hardware, and they will likely drop that silicon when the transition ends, so I wouldn't rely on that.

        • ArchOversight 21 hours ago

          It doesn't implement some x86 in hardware, it implements some memory ordering guarantees in hardware to match what x86 requires.

          However that is a very minor implementation, there is no actual x86 in Apple's ARM.

          https://www.sciencedirect.com/science/article/pii/S138376212...

          TSO could be implemented by other ARM processors easily as well, to provide the same memory ordering guarantees. Besides that the x86 code is translated by Rosetta to ARM instructions.

          • wmf 20 hours ago

            Besides TSO there are a bunch of ARM extensions designed to match x86 behavior. https://dougallj.wordpress.com/2022/11/09/why-is-rosetta-2-f... I'm curious whether Oryon and X925 implement these instructions and whether other emulators like FEX use them.

          • sqeaky 21 hours ago

            He didn't say it implemented any instructions. I would argue that the memory ordering is quite important and part of why other CPU architectures can sometimes be faster or more efficient than x86; it takes effort to guarantee atomicity and apparent read/write ordering. And when a CPU expends effort, that means either transistors or time.

          • Findecanor 21 hours ago

            Apple also implements a couple x86 CPU status register flags not present in AArch64.

      • skissane 19 hours ago

        > Apple is the only one making ARM chips fast enough to be competitive even with emulation.

        My big problem is Rosetta 2 doesn’t emulate AVX which more and more software uses.

        I work in an AI platform team. I’m not actually trying to do machine learning stuff under it, but I just want to start the Docker containers to test some unrelated functions on my laptop. And that happens to start Tensorflow and pgvector, even though I’m not really using either in anger in this case. And both try to use AVX, and then get a SIGILL, so those containers fail to start.

        Maybe we should just build Linux ARM Docker containers, but we're still trying to get some ARM CI machines to build them with (we could just use our laptops, but we want to do it properly).

        • cesarb 19 hours ago

          > My big problem is Rosetta 2 doesn’t emulate AVX which more and more software uses.

          You can probably blame that on patents. Base x86-64, which includes SSE2, is old enough that all relevant patents have already expired (the x86-64 ISA documentation was first published by AMD 24 years ago, see https://web.archive.org/web/20000829042324/http://www.x86-64...). Other ISA extensions are newer, and might still be threatened by patents.

          • kbolino 17 hours ago

            Maybe patents are involved, but there's a bigger issue too: the Apple chips don't have support for the equivalent Arm instructions (SVE/SVE2) nor wide enough vectors in their SIMD units. Any AVX/AVX2 emulation is going to be dog slow, even if it isn't encumbered by patents.

            • skissane 13 hours ago

              For my use case, I don’t really care much about AVX performance, since I am using it very minimally.

              Using QEMU instead of Rosetta 2 gets past this, since QEMU doesn’t seem to be afraid of those patents, but it makes everything else a lot slower

              Maybe, if Apple made available a plug-in API for Rosetta 2, to enable plugins to emulate additional instructions. Then some open-source plug-in could implement the missing AVX instructions, but if Intel tried to claim Apple was infringing on the AVX patent, Apple could (truthfully) say “we have nothing to do with that plug-in, we just created the API it calls”

              Another approach would be if Apple open-sourced Rosetta 2, and then a community fork could implement this stuff. I doubt Apple will do that though - I think they view Rosetta 2’s superior x86 emulation as a commercial advantage over other ARM laptop vendors (such as Qualcomm’s ARM Windows systems), and they’d likely view open sourcing it as giving away that commercial advantage

      • rubyn00bie 21 hours ago

        I think it’s more on NVidia, Qualcomm, or AMD to engineer their own ARM based chips which can outcompete x86 variants. Both NVidia and AMD are rumored to be working on general purpose ARM based CPUs. Right now NVidia is likely the one to do it thanks to their absolutely obscene margins giving them more than enough money for R&D.

        I personally don’t think there is a lot of incentive for ARM to make the fastest possible cores. They’d be undermining those who are currently paying the most to license their IP. ARM’s real incentive is power efficiency and then letting the licensees use and abuse that for performance gains.

    • Pet_Ant 21 hours ago

      They want to keep x86 going because they have cross-licensed patents which keep their market share a walled-garden. Anything else would invite competition.

      Mind you, it's not just them. MIPS was pretty shady with their patents: even if the instructions trapped and were implemented in software, they sued, so you could not compete without licensing and losing your margin. SPARC was open... until UltraSPARC, IIRC, and then they tried something similar.

      https://www.edn.com/mips-lexra-both-claim-victory-in-markman...

      > MIPS said the court’s ruling Friday rejected Lexra’s attempt to limit the claims of U.S. Patent No. 4,814,976 to hardware implementations of the unaligned load and store instructions (LWL, LWR, SWL, SWR) of the MIPS instruction set architecture. MIPS argues the claims should also cover software implementations like Lexra’s

      That is what is so important about RISC-V: being an open ISA, it creates a commodity with true competition instead of competing oligopolies.

      • throwaway48476 21 hours ago

        Multi-arch support has never been better, and yet x86 is still competitive. Maybe in 10 years everything will be RISC-V, but it doesn't seem to be happening very fast.

      • simcop2387 21 hours ago

        > That is what is so important about RISC-V: being an open ISA, it creates a commodity with true competition instead of competing oligopolies.

        Not just that, but also by being fairly vanilla/boring about a lot of things in the ISA too, letting the ISA itself be less of an impediment to compatibility with ARM and x86_64 in terms of behavior like memory ordering.

      • monocasa 14 hours ago

        Those patents are starting to expire. Original x86-64 is probably free and clear now, being released in 2001. The big sticking point for general code out there (cmpxchg16b) was released in 2008, so it only has a few more years left.

        If anything this x86 "reshaping" sounds like a way to get some new patent bricks for the wall of the garden.

    • umanwizard 21 hours ago

      IIRC Apple chips are particularly good at x86 emulation because they have a special option to run with an x86-compatible memory model which is stronger than the standard arm one. A generic arm chip is not necessarily good at x86 emulation.

      • saagarjha 21 hours ago

        It could be, if it has fast RCpc.

    • WithinReason 21 hours ago

      What's wrong with x86? According to Jim Keller the disadvantage is minor:

      https://www.youtube.com/watch?v=rfFuTgnvwgs&t=474s

      (I wish Linus let him finish though)

    • akira2501 21 hours ago

      No one "needs" a particular architecture. It's just that some of them are better suited to certain problems than others.

      If you really want to get rid of something and "move on" why not get rid of the wild class of devastating speculation bugs that every OOO processor in existence currently has?

      Far more valuable than worrying about how instruction bits are arranged in a stream.

      • zamadatix 21 hours ago

        The main problem with non-speculative options isn't hardware, it's performance. You can turn x86 CPUs into in-order machines without respinning hardware; nobody wants to, because it's slow as molasses. If there were an alternative design that was wholesale better than speculative execution, we'd be back to the original criterion: just go with the fastest thing.

      • Atotalnoob 20 hours ago

        Yes people need a particular architecture.

        There are many legacy systems that still rely on IBM chips, mainframes, etc.

        They can’t just move on without rebuilding the entire system from scratch, a monumental and risky proposition.

        Even if tomorrow everyone went all in on ARM, RISC-V, whatever, x86 is here to stay.

      • umanwizard 15 hours ago

        If anyone knew how to get rid of all the speculation bugs, don’t you think they’d do so? What concretely are you proposing Intel and AMD do?

      • CamperBob2 20 hours ago

        > why not get rid of the wild class of devastating speculation bugs that every OOO processor in existence currently has?

        Because almost no one but cloud-computing providers have a threat model that justifies the performance hit (and, not coincidentally, the carbon footprint) associated with crippled CPUs.

        • immibis 18 hours ago

          Laptops could do with the power use reduction.

          • umanwizard 15 hours ago

            I think you are underestimating how many times slower processors would be without any speculation at all. It would mean waiting dozens of cycles on every branch to know whether it was taken before executing the next instruction, for example.
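            A toy illustration of the cost: the same data-dependent branch over sorted vs. shuffled data. On real hardware the shuffled pass is typically several times slower because the predictor can't guess the branch (though at high optimization levels the compiler may emit a branchless cmov and hide the effect):

            ```cpp
            #include <algorithm>
            #include <cassert>
            #include <chrono>
            #include <cstdio>
            #include <random>
            #include <vector>

            // Count elements >= 128; the if is the branch under test.
            static long count_big(const std::vector<int>& v) {
                long n = 0;
                for (int x : v)
                    if (x >= 128) ++n;
                return n;
            }

            int main() {
                std::mt19937 rng(1);
                std::vector<int> shuffled(1 << 20);
                for (int& x : shuffled) x = rng() % 256;
                std::vector<int> sorted = shuffled;
                std::sort(sorted.begin(), sorted.end());

                auto bench = [](const std::vector<int>& v, long& out) {
                    auto t0 = std::chrono::steady_clock::now();
                    out = count_big(v);
                    auto t1 = std::chrono::steady_clock::now();
                    return std::chrono::duration<double, std::milli>(t1 - t0).count();
                };

                long a = 0, b = 0;
                double t_sorted = bench(sorted, a);
                double t_shuffled = bench(shuffled, b);
                assert(a == b);  // identical work; only the branch pattern differs
                std::printf("sorted: %.2f ms, shuffled: %.2f ms\n",
                            t_sorted, t_shuffled);
                return 0;
            }
            ```

            Speculation exists so the pipeline doesn't stall on every one of those branches; removing it entirely makes every branch cost the full pipeline latency.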

          • epcoa 16 hours ago

            Speculative execution, like almost every modern CPU performance trick, decreases power use.

            Mobile SoCs have OOO.

    • mey 21 hours ago

      Qualcomm's chips do a very good job, but it's not perfect. Not sure if it's the chip or the software, but if you need things to just work (on either Apple or Windows ARM), emulation still isn't suitable. Apple gets away with it because they wrote off the gaming market, and everything else was forced to quickly migrate to the new platform.

    • tester756 21 hours ago

      >I wish we could just get away from x86 "standard" and move on.

      But actually why? is there reason that isn't licensing?

      Because perf or perf/watt is not the reason

      • t-3 17 hours ago

        The best reason I can think of is aesthetic: x86 is the ugliest (extant?) ISA.

        Nobody wants to read, write, or think about x86 assembly. Most RISCs are easy and enjoyable to read, reason about, and write by hand.

    • maximilianroos 21 hours ago

      Move on to what? ARM? Is ARM better in every way than x86?

      (genuine question)

      • kimixa 19 hours ago

        Apple have shown that you can get x86-tier performance, but it generally costs a similar amount of silicon and engineering effort as those x86 devices.

        The lack of other competitors managing this suggests the ARM ISA isn't "fundamentally" better at delivering that performance; it doesn't seem easier in engineering effort or silicon cost. The Apple products tend to outperform in perf/watt, though that's hard to really compare, as they're focusing on a slightly different market that favors it over "Peak Server Performance".

        Is ARM better from a license POV? I also don't think so. Some people claim they want away from the "monopoly" of x86 copyright and license shenanigans from Intel and AMD, but I'd argue ARM control their ISA to a similar degree - you need to buy it off ARM to use it, and they have the right to revoke that license. See the Qualcomm/Nuvia legal mess, and that was when both companies were paying ARM already.

        So in many ways I see ARM vs x86 as the "Coke v Pepsi" of ISAs, they seem pretty similar from the outside, serving pretty much the same use case (even if how they go around serving that use case is different), but some people online have rather dramatic opinions they confuse with "Proven Objective Fact".

        RISC-V might be a good path away from just repeating the same "Single company controls all licensing" problem, but it's where ARM was 15 years ago - there aren't really any proven high-performance cores approaching common "desktop" use cases yet, and ARM had to do a pretty clean rewrite to go from then to now in ARMv8. Some of the things they "fixed" were very non-obvious until you actually tried to make a large, superscalar speculative implementation too - and who knows what pitfalls there may be in current ISA designs that trip over the "next" performance-increasing techniques. Maybe they've managed to avoid all of them for the near future, but like many things in R&D we don't really know until we get there.

        • MBCook 19 hours ago

          Well put.

          “Should we move off x86” is often really “should we leave Intel/AMD”.

          Maybe if there was another competitor making x86 chips, the discussion would sound different.

          As it is, if you don’t like the offerings that you’re being given you simply have to switch architectures.

      • saagarjha 21 hours ago

        It’s not. There are a lot of ways it’s cleaner but x86 handles some things (debugging and handling of traps, for example) better.

      • snvzz 15 hours ago

        To RISC-V, due to its licensing.

        ARM is yet another dead end.

    • nerdjon 20 hours ago

      I think for most 'general' computing, yeah, ARM is fine. It has been that way for a while, with smartphones and tablets being the primary devices many people use now. Either software runs through a compatibility layer like Rosetta 2 or it shifts to native, and the performance hit is unlikely to really be felt in something like Word.

      But the traditional gaming market (not mobile; I am not dismissing mobile, but it is not the traditional market relevant to this conversation) is likely not going to make that shift anytime soon.

      To the best of my knowledge I have yet to see any ability for a consumer to build their own ARM PC (someone correct me if I am wrong here?), and there are many gamers who will fight tooth and nail not to give that up.

      If consoles did it, it would most likely mean a break in backwards compatibility or a lot of investment in emulation.

      With consoles often being the devices that set a performance standard for PCs, I doubt they would be moving to ARM in the next generation, so it would not happen until the 10th generation. We are likely 2-3 years away from hearing about gen 9, and then another ~8 years until ARM becomes a conversation for any serious game console. There just would not be any incentive for PC to make a serious switch until around that point, since it would also further complicate game development.

      There just has not been much movement in this regard. We are seeing a few ports come to Mac (and iPhone), but those are the exceptions.

      Not saying that companies are not trying to push ARM hardware for gaming and claim that their compatibility layer is just fine for playing games on Windows. I am sure some people will do it. But I just don't see any serious effort to push ARM into gaming outside of mobile devices.

      • Phrodo_00 19 hours ago

        > If consoles did it, it would most likely mean a break in backwards compatibility or a lot of investment in emulation.

        Consoles tend to break compatibility every couple of generations anyway. The PS5 is only compatible with the PS4, for example, because the PS3 used the PowerPC-based Cell. Also, Nintendo has been using ARM since the Game Boy Advance (fun fact: in handhelds, Nintendo tended to use the previous generation's CPU as a sound chip, and would use it when running previous-generation carts). Nintendo home consoles before the Switch used PowerPC; the Nintendo 64 used MIPS.

        I agree with you on the custom PC market, but that has to be pretty small compared to consoles.

        • nerdjon 17 hours ago

          > Consoles tend to break compatibility every like couple of generations anyway.

          True, generally with an architecture change. (Nintendo being the exception because... well Nintendo).

          Recently, though, especially with digital stores, companies have been getting more flak for it. There was a lot of concern over Sony being non-committal before the PS5's release regarding PS4 compatibility.

          Xbox was celebrated when they started their initiative last gen to add support for Xbox 360 and OG Xbox games (while limited, it was something).

          I am just not convinced that gamers are going to be as forgiving of it happening again in the digital age as they were before.

          > I agree with you on the custom PC market, but that has to be pretty small compared to consoles.

          Oh for sure, but I think it also tends to be a very vocal group. There is a reason `pcmr` is a thing.

      • heraldgeezer 20 hours ago

        This. I understand web devs can just use whatever runs Google Chrome, but the real world is a bit more complex.

        CS2 has over 900k players online on Steam right now, where people push the game to 200fps plus. A PC that can do that can also do anything else: dev work, VMs, Docker, whatever. Why switch to ARM? Because their workplace laptop gets 2-3 more hours of battery life?

        But I have to disagree on

        >With consoles often being the devices that set a performance standard for PC's, I doubt they would be moving to ARM in the next generation so would not happen until the 10th generation. We are likely 2-3 years away from hearing about gen 9, and then another ~8 years until ARM becomes a conversation for any serious game console. There just would not be any incentive for PC to make a serious switch until that point (or around that time) since it would also fruther complicate game development.

        Consoles used to be MIPS, PowerPC and so on, while PCs were x86 "back then". The original Xbox was the first x86 console, so I don't follow your point here.

        The Switch is ARM, right? The Nintendo Switch.

        Sony and MS went from PowerPC to x86 because IBM could not make a PPC CPU fast enough and ARM was not good in a big form factor. Sony and MS use AMD hardware, but in PCs Nvidia GPUs are still the best; that's my personal experience, and the numbers back it up. The original Xbox used an Nvidia GeForce 3, but after pricing disputes everyone went AMD later. The PS3 was an 8-core Cell (PPC) with an Nvidia 7000-series GPU, the 360 was PPC with an AMD GPU, and the Wii was PPC with an AMD GPU.

        >There just have not been much movement in this regard. We are seeing a few ports come to Mac (and iPhone) but those are the exceptions.

        Nobody on Mac is actually a gamer.

        >(not mobile, I am not dismissing mobile but it is not the traditional market that is relevant to this conversation)

        I will, btw. I do not understand why people here love mobile games. No wait, I do. They are easy, quick, and P2W. Their players are busy adults or kids with no money.

        In mobile, what sells? Gacha games, pay-to-win trash. There are some gems, but few and far between. They are built in a predatory way to take your money.

        • nerdjon 17 hours ago

          > Consoles used to be MIPS, POWERPC and stuff and PCs where x86 "back then".

          True, but back then it was far more common for games to ship on one console or skip PC entirely, even during the Xbox 360 generation, which was still PowerPC-based.

          My point is less that they won't ever move consoles to ARM, just that at this point I would be shocked if they moved to ARM for the 9th gen without us starting to hear rumors about it now.

          And that doing so would have an impact on developers.

          > Nobody on Mac is actually a gamer.

          I always hated this generalization. Mac is my preferred OS, but because I can't really game on it (despite it being quite powerful hardware) I have my custom-built desktop. I would much prefer to have just my MBP.

          > I will, btw. I do not understand why people here love mobile games. No wait, I do. They are easy, quick, and P2W. Their players are busy adults or kids with no money.

          I agree with you mostly here. But the reason I mention this is that I have gotten into arguments about what "real" gaming is, and people love to point out how much mobile games make as if that were the important metric.

          I think there are just 2 different forms of gaming. Neither are necessarily wrong, but they are fundamentally different.

    • Sakos 21 hours ago

      I simply can't comprehend why people are so all-in on ARM. Snapdragon X Elite support in the Linux kernel is still a work in progress. There's still no GPU driver. It's been months. Is this going to happen every time there's a new Snapdragon? How is this sustainable long-term? If Intel or AMD release a new CPU architecture, it's basically supported out of the box, even if it might need some bug fixing (which actually happens fairly regularly).

      How does Qualcomm's commitment to Linux and an open platform actually stack up to x86? Because the biggest benefit of x86 is that it forces interoperability through heavy standardization and an established culture. Every ARM manufacturer is basically doing their own thing, and it takes either a manufacturer that gives a crap about Linux or months to years of volunteer effort to get things running (see Asahi).

      And what happens when an ARM-based device is EoL?

    • 2OEH8eoCRo0 21 hours ago

      What have they done with ARM? Cut out all I/O, put memory on package, and pay for the latest TSMC node?

      I think ARM is overhyped because people see Apple's success and just assume aarch64 > x86_64

      • tester756 21 hours ago

        >I think ARM is overhyped because people see Apple's success and just assume aarch64 > x86_64

        It seems like that's the case

        "ARM has bigger share in people's minds than actual numbers"

    • heraldgeezer 20 hours ago

      Why? AMD's new x86 chips are just as good as the M3.

      For gaming, emulation is shit.

      I use a desktop gaming PC and I can do anything with it, using it as a workstation too. Why should I switch to ARM?

      Why should our enterprise laptops be ARM?

      Just please, you are just spouting nonsense.

    • magic_hamster 21 hours ago

      Even though Apple supports x86 emulation, they do a subpar job when it comes to actually running things. I got a Mac recently and was very surprised to see Docker running as an underprivileged second-class citizen, literally breaking a lot of flags and features in tools like docker-compose. Come to think of it, it's not even limited to x86 containers (which do run with Rosetta) but Docker as a whole.

      So far, trying to emulate a full-blown x86 Linux distro has been excruciatingly slow for me, not even remotely usable.

      I don't know where this incredible x86 emulation on Mac that people talk about is hiding. Maybe it's some legacy apps I never used. Everything I have tried so far was simply not good enough.
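      For what it's worth, the fast path people usually mean is per-container emulation via the `--platform` flag, with Docker Desktop's "Use Rosetta for x86_64/amd64 emulation" option enabled (otherwise you get the much slower QEMU fallback):

      ```shell
      # Run an amd64 image on an Apple Silicon Mac; with the Rosetta option
      # enabled, the x86 binaries inside run under Rosetta 2.
      docker run --rm --platform linux/amd64 alpine uname -m
      # prints: x86_64

      # docker-compose equivalent: set "platform: linux/amd64" on the service.
      ```

      That covers individual x86 containers; it doesn't help with emulating a full x86 distro, which really is slow.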

      • ArchOversight 21 hours ago

        Docker on Mac is running a VM in a hypervisor to run Linux.

      • pantulis 21 hours ago

        > I don't know where this incredible x86 emulation people are talking about on Mac is hiding.

        It's in Rosetta, running x86 binaries compiled for older macOS versions.

        The state of Docker on macOS has never been good, with OrbStack being the best Docker-compatible container engine for desktop Macs.

      • spockz 21 hours ago

        What flags are you talking about? Everything here runs just fine out of the box, just slow for x86 images, and slow compared to running Docker Engine or Podman on Linux.