Hell Freezes Over as AMD and Intel Come Together for x86

(servethehome.com)

80 points | by rbanffy a day ago ago

103 comments

  • bhouston 19 hours ago

    This is mostly a recognition of the shared threat that ARM poses to both AMD and Intel.

    ARM took embedded first, then mobile, then gaming (on mobile and handhelds), then Macs, and now it is making real inroads into Windows laptops (e.g. Snapdragon X Elite) and servers (e.g. Graviton.)

    The next shoe to drop would be a high-end gaming PC that can take an NVIDIA or AMD graphics card powered by a Snapdragon X Elite-like ARM chip.

    Another shoe to drop would be a super-computer powered by ARM chips instead of x86. I don't think that has happened yet?

    After that, the last refuge of x86 is in legacy software that hasn't been natively ported to ARM. But there will be fewer and fewer cases of this as the years go by. For now I think it will be mostly games.

    x86 is under serious threat.

    • Wytwwww 19 hours ago

      > After that, the last refuge of x86

      It's not like ARM has taken over those markets yet. Snapdragon X is fine but nothing special compared to AMD/Intel chips. I think we just might have a pretty distorted view of ARM vs x86 because of Apple. They are just much better at designing ARM chips than anyone else (including both Qualcomm and especially ARM itself). Servers is kind of a mixed story as well.

      There is nothing wrong with other companies trying to disrupt the AMD/Intel duopoly, and hopefully we'll get lower prices and more innovation because of that, but calling x86 dead is a bit premature at this point.

      • bhouston 19 hours ago

        > Snapdragon X is fine but nothing special compared AMD/Intel chips.

        The issue is that ARM is now fine for Windows. Until Snapdragon, ARM was actually pretty crap outside of Apple. This is a huge step. Qualcomm acquired a lot of the original Apple team for the M1 via Nuvia and I expect them to be able to execute at least decently going forward.

        Apparently they are selling because they have better battery life than Intel/AMD laptops according to this: https://www.techpowerup.com/324301/battery-life-is-driving-s...

        > Servers is kind of a mixed story as well.

        Graviton is 30% cheaper for the performance (i.e. performance-adjusted price). It is a no-brainer for a lot of workflows, especially given so many developers are actually building on ARM machines in the first place. I want my servers to be 30% cheaper.

        > There is nothing wrong with other companies trying to disrupt the AMD/Intel duopoly, and hopefully we'll get lower prices and more innovation because of that, but calling x86 dead is a bit premature at this point.

        Qualcomm is exploring acquiring Intel. A company that got its value from ARM is looking to take over the historic leader in x86 - doesn't that tell you something? https://www.reuters.com/markets/deals/qualcomm-approached-in...

        The writing is on the wall. x86 isn't dead and it won't be dead anytime soon, but it is definitely in its twilight period.

        • jtc331 19 hours ago

          Graviton single thread performance is _not_ the same as the Intel or AMD offerings on AWS.

          • bhouston 19 hours ago

            Yeah, it seems whatever key improvements Qualcomm did for the Snapdragon X and Apple did for the Mx series haven't made it to the Graviton yet. When I said 30% earlier I was talking about value - my understanding is that Graviton is 30% cheaper once you factor in the performance differences.
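
For a rough sense of what "performance-adjusted price" means in the comment above, here is a sketch with made-up numbers (the instance prices and perf ratios below are hypothetical, not real AWS figures):

```python
# Hypothetical numbers, for illustration only -- not real AWS pricing.
x86_price_per_hour = 0.17   # sticker price of an x86 instance
arm_price_per_hour = 0.13   # a comparable ARM instance, ~24% lower sticker price
x86_perf = 1.00             # normalized throughput of the x86 instance on a workload
arm_perf = 1.10             # suppose the ARM instance is 10% faster on that workload

def price_per_unit_perf(price: float, perf: float) -> float:
    """Cost of one unit of normalized throughput."""
    return price / perf

savings = 1 - price_per_unit_perf(arm_price_per_hour, arm_perf) \
            / price_per_unit_perf(x86_price_per_hour, x86_perf)
print(f"performance-adjusted savings: {savings:.0%}")  # -> 30%
```

The point is that the sticker discount and any performance difference compound: under these assumed numbers, an instance that is ~24% cheaper and also 10% faster works out to about 30% cheaper per unit of work.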

            • Wytwwww 19 hours ago

              > Qualcomm did for the Snapdragon X and Apple did for the Mx series haven't made it to the Graviton yet

              Why would they, though?

              Neither Apple nor Qualcomm have any incentives to share their designs with ARM any more than they do with Intel, AMD or each other. I wonder how AmpereOne will look, assuming it actually ever comes out...

              • kmeisthax 16 hours ago

                Funny you say that, because ARM is actually suing Qualcomm specifically over this issue.

                The Nuvia designs Qualcomm bought were made under an ARM architectural license that specifically restricts all designs sold to only go in servers - not phones, laptops, or anything else. They also weren't allowed to sell the company without destroying the designs first, because Nuvia got a really sweet deal on the architectural license. Qualcomm thinks that doesn't matter, because they have a (much broader) ARM architectural license already - you can't force someone to buy the same license twice, under the principle of rights exhaustion.

                ARM furthermore has an incentive to keep architectural licensees from competing with ARM's in-house design licensing business. Apple Silicon isn't a threat to that business because Apple will never sell their own components to third-parties. In fact, that's why they hate right-to-repair so much[0]. They're better off licensing ARM patents and having Apple continue to work on LLVM than trying to squeeze them for more money. Qualcomm on the other hand sells phone chips to other companies, and currently spends a lot of money to package ARM's licensed designs into their own SoCs. Nuvia designs going into those phones would be a significant movement of money from ARM's pocket to Qualcomm's.

                [0] To be specific, in a right-to-repair world where individual components have to be sold at the same pricing arrangements available to the OEM, Apple would not be able to have exclusive parts in their phones that other vendors can't have. Apple can't say this, because it's hilariously self-serving and the general public correctly doesn't give a flying fuck about IP law, but you can infer it from conduct.

                Apple is perfectly willing to sell you assemblies, of course. Because no vendor is going to buy 100 unrelated parts to get the 1 they care about.

              • bhouston 19 hours ago

                > Neither Apple nor Qualcomm have any incentives to share their designs with ARM any more than they do with Intel, AMD or each other.

                Techniques leak out as people move around, etc.

                For example, AMD had a lead on chiplet designs but now Intel has them in the latest generation. I would expect that ARM will have chiplet designs by the end of the decade as well.

                Look at Anthropic, which seems to have an LLM on par with OpenAI's GPT-4.

        • Wytwwww 19 hours ago

          > I want my servers to be 30% cheaper.

          Sure, I mean I agree that ARM can outcompete x86 there, but mainly because Amazon can just cut out Intel/AMD and make their chips themselves, which can significantly reduce costs. Not because Neoverse is somehow inherently better or faster than its x86 equivalents. Of course from the perspective of most users that's effectively the same thing: why overpay for 1 fast core when you can just get 2 slightly slower ones?

          > Qualcomm is exploring acquiring Intel.

          That's far-fetched and hard to believe. IIRC there were talks about Qualcomm maybe buying some secondary subsidiaries/departments from Intel and somehow it got extrapolated to Qualcomm acquiring Intel (unless there is any credible information to back that up?)

          • bhouston 19 hours ago

            > That's far-fetched and hard to believe. IIRC there were talks about Qualcomm maybe buying some secondary subsidiaries/departments from Intel and somehow it got extrapolated to Qualcomm acquiring Intel (unless there is any credible information to back that up?)

            Also reported in WSJ: https://www.wsj.com/business/deals/qualcomm-approached-intel...

            That is Reuters and WSJ reporting it, not an X post by an anonymous rando. It mentions at least two sources.

            This is not a rumour, it was at least somewhat real. That doesn't mean it will happen; it seems to be very preliminary and exploratory.

            • not2b 19 hours ago

              It won't happen because neither the US nor the EU antitrust regulators will allow it. It isn't just up to Qualcomm and Intel.

              Some other deal, where Qualcomm buys some pieces of Intel, might be possible.

            • Wytwwww 18 hours ago

              > This is not a rumour, it was at least somewhat real

              How can you tell? I'm not denying that and obviously can't claim to know, but I just don't see any public information that would allow us to draw this conclusion.

              WSJ, Reuters and everyone else regularly report rumours that sound credible.

              Although if we're only talking about "exploring" rather than anything more, that's probably true; there is no reason for Qualcomm not to at least do the math on at what price point it might start making sense, and they obviously should be doing that.

              I just don't really see how that is particularly newsworthy, since it's just extremely improbable (due to factors both Intel and Qualcomm can't directly control).

              • bhouston 18 hours ago

                > WSJ, Reuters and everyone else regularly report rumours that sound credible.

                Huh? You live in a different reality than I do. These are reputable sources and I need to move onto something more productive.

                • Wytwwww 18 hours ago

                  > These are reputable sources

                  Yes? And? Did I imply that there is something wrong with that or that they shouldn't be reporting it? I assume our definitions of what is a "credible rumour" differ...

                  > You live in a different reality than I do.

                  If you can seriously believe that Qualcomm could actually acquire Intel (unless Intel's management is engaging in some extreme amount of fraud to hide the fact that the company is on the brink of imminent bankruptcy) then yes, that must be the case...

                  • pzo 17 hours ago

                    Qualcomm's market cap is 2x Intel's (~$200B vs ~$100B). Intel's current stock price is the same as in 1997, and it has lost more than 50% in the past year.

                    • Wytwwww 16 hours ago

                      I'm not sure money would be the real issue.

                      Nvidia wasn't even allowed to buy ARM even though they weren't competitors and had almost no real overlap. This would be considerably harder to pull off.

                      Also you have that whole AMD64/x86 patent thing. Not sure about the details (and maybe the original agreement has expired and that part wasn't renewed), but according to the original deal, the cross-licensing agreement would automatically expire if either party was acquired.

                      I think the first reason is more than enough, but if it's not, and they still have something similar covering newer x86-related features developed after 2009, Qualcomm would only be buying Intel for the fabs and other assets, or would have to pay off AMD if they want x86.

        • aspenmayer 18 hours ago
        • alluro2 19 hours ago

          "Qualcomm is exploring acquiring Intel"

          Huh, I didn't even realize they are sufficiently big for this to be potentially possible.

          • Wytwwww 18 hours ago

            That, but I also think they are at the same time much too big for this to be possible, since no government would approve it unless Intel was on the brink of bankruptcy.

      • ezst 15 hours ago

        > I think we just might have a pretty distorted view of ARM vs x86 because of Apple. They are just much better at designing ARM chips than anyone else

        and those chips become relatively unimpressive once everyone gets access to the same TSMC nodes a year and some down the road, with AMD's x86 beating Apple's wonder ARM once again…

      • knowitnone 17 hours ago

        "It's not like ARM has taken over those markets yet." Key word here is "yet", so you are basically in agreement with their post.

    • mat_epice 19 hours ago

      Arm has been in supercomputers for a while.

      Astra at Sandia Labs was the first Arm peta-scale supercomputer, and the first on the Top500. It debuted in 2018.

      Fugaku is the fastest Arm supercomputer, taking the #1 spot on the Top500 in 2020. It is currently #4.

      All NVIDIA Grace-Hopper systems will be Arm. There is one in the top 10 already, Alps at the Swiss CSCS, at #6. There are four more in the top 100.

      • bhouston 19 hours ago

        > Arm has been in supercomputers for a while.

        Apparently I am super wrong here. ARM has made serious inroads here as well.

        I think that ARM's willingness to allow their IP to be customized for their clients' needs has really given them a lot of competitive advantages.

        • knowitnone 18 hours ago

          That might be where Intel and AMD failed. With the hindsight of watching ARM gobble up a number of sectors, Intel and AMD should have allowed their IP to be customized, which may have slowed ARM's advance. Competition is good though. If this continues, AMD and Intel may have to merge. I wonder how RISC-V is going to shake things up? Intel and AMD should build RISC-V chips and chase ARM. That'll form a nice loop.

          • bhouston 18 hours ago

            > I wonder how RISC-V is going to shake things up?

            Right now RISC-V is ultra slow in all implementations I've seen. Like 30x slower than a top of the line Apple Mx series CPU. Maybe there is a high performing RISC-V chip out there but I haven't yet run into one.

            RISC-V benchmarks: https://browser.geekbench.com/search?q=RISC-V. Compare to an Apple M4 benchmark: https://browser.geekbench.com/v6/cpu/8224953

            That said, RISC-V is good for embedded applications where raw performance isn't a factor. I think no other markets are yet accessible to RISC-V chips until their performance massively improves.

        • 19 hours ago
          [deleted]
      • NortySpock 18 hours ago

        https://en.m.wikipedia.org/wiki/File:Processor_families_in_T...

        Sometimes useful to see how cpu architectures grow and then get crowded out by the next processor family -- at least among supercomputers.

      • snerbles 19 hours ago

        Grace is NVIDIA's Arm CPU for servers, so the same will apply to Grace-Blackwell.

    • sorenjan 19 hours ago

      I hope ARM doesn't win. I can run whatever I want on my x86 computer, but for some reason ARM based devices seem to need special device trees and custom drivers. I don't know exactly what the difference is, something about device enumeration at boot I think, but the last thing I want is for my computer to stop receiving update support after a few years like all Android phones. Or needing to use "custom ROMs" instead of just installing an OS.

      • seanw444 18 hours ago

        I agree. I don't want ARM to win, but I don't want x86 to keep winning either. As functional systems, they're fine. But I principally dislike both. Locked down, un-free, with spyware backdoors especially in x86's case. RISC-V really needs to gain some traction. Once that has most software compiled for it, and it's efficient enough, I'm gonna start using it.

      • not2b 18 hours ago

        Microsoft forced a lot of uniformity on x86 designs, with standard methods of probing to find all of the buses and devices, while historically ARM devices were embedded and the architecture was arbitrary, which is why you needed device trees. The kernel couldn't figure out the device configuration on its own.

        • M95D 3 hours ago

          But Microsoft didn't do that. The hardware industry did: ISA PnP, PCI, USB. Those are all hardware standards.

          ARM devices are embedded in the SoC. That's basically the definition of a SoC. Intel & co. at that time didn't produce SoCs. A CPU was just a CPU. It didn't have UARTs, GPU, SDHCI, I2C, SPI and other stuff in it.

          The only thing x86 (basically Intel, not Microsoft) did good was to standardize I/O addresses like framebuffer, IDE ports, serial I/O and later to make the rest discoverable via ACPI standard (which is a bad standard, btw, and UEFI is far worse).

          You may view ACPI as the x86 devicetree. The only difference is that x86 comes with ACPI written in a chip, while ARM firmware is a separate file you add into your bootable image next to the kernel, without the need for a separate EEPROM chip.

          You shouldn't be complaining about ARM device discovery (devicetree), but about the absolute jungle of devices that ARM includes. Just think how many different USB controllers are out there. Each manufacturer designed its own controller and every ARM chip needs a different driver. On x86 there are/were only 2: Intel (UHCI) and AMD (OHCI), and then they cooperated and made the universal EHCI and xHCI.
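
To make the devicetree point above concrete, here is a minimal, hypothetical fragment (the addresses, interrupt number, and compatible strings are invented for illustration, not from any real SoC). On most ARM boards the kernel cannot probe for a peripheral like this UART, so the bootloader hands the kernel this description instead:

```dts
/dts-v1/;
/ {
    compatible = "vendor,example-soc";   /* hypothetical SoC identifier */

    serial@10000000 {
        compatible = "vendor,example-uart"; /* tells the kernel which driver to bind */
        reg = <0x10000000 0x100>;           /* MMIO base + size: nothing to probe */
        interrupts = <23>;                  /* IRQ line, also not discoverable */
    };
};
```

This is roughly the same information ACPI tables encode on x86; the difference is just where it lives and who ships it.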

        • bubblesnort 18 hours ago

          The BIOS on an IBM PC was cloned by Compaq iirc. Back in the day it was normal for software (including OS kernels) to call into BIOS ROMs because storage was limited.

          This was in a different era where most PCs weren't networked.

          x86_64 has moved to UEFI and SecureBoot since, but is still mostly expected to boot any live distro you'd like. Replacing x86 could drastically reduce the chances of replacing non-free vendor software with free software.

          • kmeisthax 14 hours ago

            And to add onto this: pre-ACPI, the IBM PC platform wasn't that far off from, say, any one particular SoC vendor in the ARM ecosystem. In fact, Microsoft originally planned to license out MS-DOS to competing x86 computer vendors using incompatible platforms and firmware[0]. Which is close[1] to the state of non-UEFI/non-PSCI ARM now. It's only because Compaq was able to legally clone the PC and its BIOS that x86 standardized itself on one platform and firmware.

            The presence or absence of standardized interfaces for firmware, device configuration, or the underlying platform is orthogonal to issues regarding Secure Boot and trust management. Apple Silicon Macs use nonstandard boot firmware (iBoot) but booting a "fully untrusted OS" (or fuOS) is an explicitly supported[2] use case on them, gated only by the user needing to boot recoveryOS (OTR specifically) once and enter their password to sign the alternative kernel. They even support per-volume boot policies, so you can keep your macOS install fully locked down while your Asahi Linux does whatever you want.

            And likewise Intel isn't stopping you from building in whatever user-hostile nonsense you want into x86 firmware. There's actually a whole range of laptops that have BIOS rootkits preinstalled, specifically to force-install Computrace onto whatever Windows install gets booted for corporate IT management purposes. The thing is, corporate IT has a terrible habit of leaving this shit on laptops they've sold, either because the laptop was stolen internally or because IT couldn't give a shit to do the computer equivalent of signing the title, so people wind up buying laptops that will lock up and wipe themselves if you ever install Windows on them.

            [0] The most successful of these being the PC-98, which lasted all the way up until the Windows 9x era

            [1] ARM SoC vendors additionally commit the crime of not being compatible with themselves. It is common for new SoCs to have completely different memory and device layouts. Apple is the only exception, ironically because they make both the OS and the SoC, which is the one time where such crimes would be excusable.

            [2] I'm told Apple's original intent was Boot Camp with Windows on ARM, but Microsoft wouldn't license Windows on ARM on Macs because they have an exclusivity deal with Qualcomm.

      • FredFS456 16 hours ago

        ARM cores can be used with UEFI and other hardware autodiscovery mechanisms. Not a limitation of the instruction set but rather the willingness of vendors to do the work. Many ARM servers have UEFI implemented.

    • klelatti 19 hours ago

      > Another shoe to drop would be a super-computer powered by ARM chips instead of x86. I don't think that has happened yet.

      Fugaku?

      https://newsroom.arm.com/blog/fujitsu-a64fx-arm

      • bhouston 19 hours ago

        Huh. I missed that. That is cool. I wonder if it was cheaper on a FLOP basis to go with ARM versus buying from Intel/AMD?

        I guess a secondary factor is that it was made of custom/low volume CPUs, which would increase the unit price as compared to the higher volume providers.

    • tbenst 19 hours ago

      The Nvidia GH200/GB200 “superchips” are all ARM processors. Seems likely that some of the next generation of foundation models will be trained on ARM

    • night862 18 hours ago

      I agree. Especially that last bit.

      I've thought about this quite a bit since I responded, quite a while ago, to a commenter who was amazed at an ARM supercomputer. They had posited that ARM would take over, mostly pointing to the very low-power and fairly inexpensive ARM SoCs in phone handsets and routers.

      This thought didn't sit quite right with me. I considered the power requirements and architectural needs of a larger computer system. With many, many PCIe lanes or some other interconnect, rapid storage commitments and the very responsive performance required by a "serious" architecture will cause all of these subsystems to continually draw current and dissipate heat. This is in stark contrast to power efficient computing devices like iPhone or ARM macbooks, and in my mind seems likely to eat the "gains" that people generally associate with these devices.

      The piece missing in my understanding is about the semiconductor industry and the electronics field in general, and is particularly interesting to me, because many of the highest-tier operators outsource ~100% of their production to TSMC or the like. There isn't anything magic or even very good about x86 architecture. I'm fairly certain the instruction set no longer maps to hardware in most cases, rather more analogous to function calls in typical software applications.

      Every now and then I like to reflect that iPhone came out in 2007. My car is older than that, and we were well on our way during the election of Barack Obama. And while legends hold that Power still exists, I had long forgotten about ALPHA or SPARC by then.

      Imagine for yourself this earth-shattering shift: What happens when the basic lithography processes, assembly practices, and validation procedures become such that for similar trade offs, any advanced hobbyist could order a wafer from future fab houses similar to JLPCB, Scaleway, OSH Park, TSMCWAY, or whatever on-demand? Like, what if you could just pay $$$$ to buy a wafer that you designed and then sell 14nm UltraSPARC on tindie?

    • 16 hours ago
      [deleted]
    • jitl 19 hours ago

      At least Apple’s x86-64 on ARM64 emulation is quite fast enough for workstation use. If ARM improvements in both performance and price continue to outpace x86, it may become economically advantageous to run x86 “legacy” software under emulation even in data center environments. So outside of high-performance computing on x86, I don’t really see legacy software as an advantage in the medium/long term.

    • RiverCrochet 18 hours ago

      Trivia:

      - The first game console with an ARM CPU was the 3DO (1993 - it used an ARM60).

      - The first game console with an Intel CPU was the Magnavox Odyssey 2 with an Intel 8048 (it also had an Intel graphics chip) (1978).

      - The first game console with an x86 Intel CPU I believe was the FM Towns Marty (1993 - it had a 386).

      - The first successful game console with an x86 Intel CPU was the original Xbox (2001 or 2002) - and actually it was an AMD CPU (edit: nope it was an Intel).

      There were mobile phones with Intel CPUs in them for a short time - I think it was 2010-2011. I always wanted to see one (was like super curious how the boot firmware looked if it was user accessible). I wonder what happened.

      • alexjplant 18 hours ago

        > The first successful game console with an x86 Intel CPU was the original Xbox (2001 or 2002) - and actually it was an AMD CPU.

        I think it was actually a Pentium III with less cache, i.e. basically a Celeron. Only third-gen Microsoft consoles onward use AMD parts.

    • jamesaross 19 hours ago

      An ARM supercomputer was already #1 in 2020 [0]. The Japanese Fugaku is also notable because it doesn't use GPUs to achieve high performance, but rather wide vector units on the CPU.

      [0] https://top500.org/system/179807/

      • knowitnone 17 hours ago

        I'm sorry, but being #1 or even in the top 500 matters very little, because most of it is due to how many units are installed in the cluster, including networking, memory, architecture, cooling, etc. I mean, if you didn't reach the top 500, just add more units to the cluster until you do. Don't get me wrong, there is a lot of tech involved, but we are talking about distributed processing rather than the performance and efficiency of a single processor.

    • jvanderbot 19 hours ago

      I'm excited to enter an era of serious competing CPU architectures again!

    • short_sells_poo 19 hours ago

      And here I thought windows on arm was increasingly sliding into irrelevance as nobody is buying it.

  • rob74 20 hours ago

    To me it looks like the two finally decided to cooperate because it's in their common interest to stop (or rather, delay) the x86 architecture's slide into irrelevance?

    • burnte 20 hours ago

      People have been talking about the irrelevance of x86 since the 80s. I remember the early 90s when "Intel is doomed" was daily news and opinion fodder.

      • Moto7451 19 hours ago

        Yup, and instead 68k, PA-RISC, Alpha, PPC, and others all fell to x86.

        ARM is clearly here to stay and took some of the 68k, PPC, MIPS, and low end/embedded x86 use cases.

        Since the industry loves the idea of displaced incumbents we’ll see a lot of RISC-V articles for years to come.

      • joshdavham 20 hours ago

        Don’t you think this time things may be different though? Many new computers, cloud platforms and even the iPad I’m typing on right now have switched or are switching to ARM and away from x86.

        • burnte 10 hours ago

          No, I don't think this time is different. We've seen ARM CPUs outselling x86 CPUs for a decade or more, the problem isn't the hardware, it's the software. x86 has better backwards compatibility than any other CPU that has ever been created. It runs so much software that it's got another decade of life in it at a minimum. I do think this is actually the first time we'll see really strong sales of competing ISAs in the non-mobile space and that's exciting, but the inertia of the excellently supported x86 ecosphere is massive and cannot turn on a dime.

          Plus, 20 years ago people said x86 could never hit power and performance targets that today are the norm. Everyone talks about x86 like the actual hardware is the same year after year. It's not. The underlying CPUs are amazing and both Intel and AMD have done amazing work keeping the ISA competitive. I don't see that changing any time soon.

        • bigstrat2003 18 hours ago

          Of course there will come a time when things are different. No king rules forever and all that. But the people predicting x86 death in the past would've said that those were the times when it was different too, and they would've been wrong. These kinds of change are hard to predict in advance.

        • Wytwwww 19 hours ago

          > even the iPad I’m typing

          When did it switch to ARM from x86?

          I don't really see any reason why x86 and ARM can't just coexist long-term. Porting software is relatively easy or not even necessary these days, and ARM's main advantage (that companies other than Intel/AMD can use it) is entirely non-technical (development + licensing costs), so as long as Intel/AMD can keep up they have no incentives at all to switch.

          Amazon's and Ampere's ARM server chips are significantly slower and less power efficient than the equivalent AMD chips (for Ampere at least; not sure if Amazon is sharing their power usage data, but I doubt it's massively lower). They are also 30-40% cheaper because you don't have to pay for AMD/Intel's oligopoly margins.

          • jitl 19 hours ago

            At least as an AWS customer, switching from x86 to ARM looks quite appealing on the balance sheet! Same perf, many less $$$. Certainly there’s a large financial discount incentive for many workloads. A lot of stacks already have good support for ARM64 since a fair amount of developers run on macOS, so it’s pretty much just inertia keeping many on x86.

            • Wytwwww 19 hours ago

              Yeah, but presumably mainly because it's just much cheaper for Amazon to make their own (technically inferior, but that doesn't matter if you don't care about single-core perf) chips than having to pay for Intel/AMD's margins. Of course from the end-user perspective it doesn't matter. But long-term it might lead to less competition, allowing them to hike up prices in the future.

              Of course other hyperscalers can and presumably are doing the same. However everyone else will be left out unless Ampere (or somebody else?) steps up.

          • fluoridation 19 hours ago

            Isn't ARM's main and most visible advantage that its implementations are usually energy-efficient?

            • Wytwwww 19 hours ago

              Is it? e.g. the EPYC 9754, the x86 equivalent to Neoverse ARM servers, seems to have significantly better performance/power than Ampere's chips. Graviton 4 seems to be slightly faster than the AmpereOne A192-32X; I don't know how much power it uses but I doubt it's significantly different.

              https://www.phoronix.com/review/ampereone-a192-32x/12#:~:tex....

              Of course the Ampere chip is significantly cheaper which I think is the main and most visible advantage ARM has for servers at least.

              • fluoridation 19 hours ago

                My bad, I thought you meant in general, not specifically in the DC space.

        • Dalewyn 20 hours ago

          ARM should be more worried about getting usurped by China!RISC-V than x86 getting usurped by, well, anything.

          x86 still powers most of how we actually get stuff done.

          • echelon 20 hours ago

            Both should worry about getting usurped by RISC-V.

            x86 is getting cornered by ARM in consumer and data center. RISC-V is gunning for ARM.

            While ARM is open-ish, RISC-V is more open than both. When you're not the market leader, that's the strategy to become market leader.

            And it certainly helps that a major world power is putting in enormous energy to make their architecture the dominant alternative to "Western" architectures.

        • leptons 20 hours ago

          There still aren't ARM chips that can compete with the processing power of the highest-end x86 chips, if power consumption isn't an issue. It's nice that your iPad has an ARM chip in it, but nobody is going to be using an iPad for real workloads. And when it comes to available software, x86 just laughs at ARM.

          • FeepingCreature 19 hours ago

            This is outdated. The M3 Max is right there with Intel i9 on singlethreaded PassMark, and does so while pulling a lot less power.

          • Moto7451 19 hours ago

            You’re comparing the wrong computer to the high end of x86. The right comparison is Apple’s desktop-class M2 Ultra/M3 Max chips, and AWS’s Graviton against server chips.

            Apple has already caught Intel in single core. If power didn’t matter to Apple they could surely optimize for pure performance, but they’re not competitive in the spaces where power does not matter, and we will never see them bring that chip to market.

            There are always trade-offs at the exotic tier and someone will always win one benchmark over the other, but you can’t say that ARM is being laughed at by Intel. If they were laughing they wouldn’t be partnering with AMD. While I don’t think Intel is super worried about Apple, since it’s an Apples to Windows/Linux comparison, I do think they’re worried about Qualcomm and AWS catching Apple and actually eating into their market share.

            • Wytwwww 19 hours ago

              > Apple has already caught Intel in single core. If power didn’t matter to Apple they could surely optimize for pure performance

              But that's Apple, not ARM. AFAIK Neoverse/Graviton's main advantage is price (not having to pay Intel/AMD's "inflated" duopoly prices) rather than actual performance.

      • LeFantome 18 hours ago

        RISC-V is not going anywhere as it is not a single player. If it disappears, it will be because it has been replaced by a better alternative.

        ARM is not going anywhere for a while. It dominates the mobile and embedded markets and is making in-roads in others. If it eventually gets displaced, it will be by RISC-V or its successor.

        x86 is vulnerable, more vulnerable than the other two, but taking it down is going to take a lot; I would not bet against it. The biggest problem is that it is master of an ever-shrinking empire: even if they never lose it, they may matter less over time.

      • usrusr 19 hours ago

        Many deaths have been preceded by a large number of premature announcements. We tend to overestimate the predictive value of extrapolating from a series of wrong predictions in the past.

        • burnte 10 hours ago

          Oh absolutely, I agree and yes x86 will eventually die. When? No one has any idea but it's not anytime in the next decade.

    • throwaway48476 20 hours ago

      It's more that computers aren't getting faster the same way they did in the 90s and the advancements come from new accelerators and instruction sets which fragment the platform.

  • vegadw 19 hours ago

    I hope this means better ISA and extension consistency going forward, and a push to build tools that adopt the extensions too. One of the bigger issues with x86-64 as an ISA is that nobody wants to compile for features that not everyone will have, but picking what to use at run time isn't easy either, so everything ends up slower than it needs to be.
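    As a minimal sketch of what run-time selection looks like (GCC/Clang on x86; the kernel names here are hypothetical), a program typically probes the CPUID-backed feature flags once and dispatches to the widest implementation available:

    ```c
    #include <stdio.h>

    /* Probe the running CPU and pick the best available code path.
     * __builtin_cpu_init()/__builtin_cpu_supports() are GCC/Clang
     * builtins that read the x86 CPUID feature flags. */
    static const char *pick_kernel(void) {
        __builtin_cpu_init();                  /* populate feature flags     */
        if (__builtin_cpu_supports("avx2"))    /* 256-bit vectors available? */
            return "avx2";
        if (__builtin_cpu_supports("sse4.2"))  /* older but widely deployed  */
            return "sse4.2";
        return "scalar";                       /* baseline x86-64 fallback   */
    }

    int main(void) {
        /* In a real program each name would map to a function pointer
         * chosen once at startup; here we just report the choice. */
        printf("dispatching to %s kernel\n", pick_kernel());
        return 0;
    }
    ```

    GCC and newer Clang can also generate and dispatch such variants automatically via `__attribute__((target_clones("avx2","default")))`, but every extra variant grows the binary, which is part of why so much software still targets the lowest common denominator.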

  • tapanjk 5 hours ago

    > The initial advisory list includes Broadcom, Dell, Google, HP, Lenovo, Meta, Microsoft, Oracle, Red Hat, Linus Torvalds, and Tim Sweeney.

    Two individuals along with multi-billion dollar corporations. Curious why the organizations that these individuals represent were not included instead?

  • MrHamburger 17 hours ago

    To disrupt any kind of x86 dominance, ARM motherboard manufacturers will first need to get their UEFI and ACPI support working; otherwise their boards are nothing more than single-purpose toys running only one specific build of a system.

    • M95D 3 hours ago

      But ACPI and UEFI won't solve anything. We already have device trees, which are open and free, versus ACPI and UEFI, which are just binary blobs. It would be even worse than the Broadcom blobs on the Raspberry Pi.

      What you probably want is less variability in SoC peripherals. Right now each chip has a different SDHCI, a different USB controller, a different UART, and a different I2C, each needing its own driver. x86 effectively has only two of each: the Intel variant or the AMD variant.

  • ChrisArchitect 18 hours ago
  • dielll 3 hours ago

    Rooting for RISC-V here.

  • dlojudice 18 hours ago

    Pat Gelsinger and Lisa Su interview:

    https://www.youtube.com/watch?v=7y32wpDhIGM

    • fluoridation 18 hours ago

      God, Su is barely intelligible. Her noise gate is turned up to hell.

  • jbverschoor 19 hours ago

    Intel is worth roughly its assets.

    An Intel-AMD merger would make sense, but it only delays extinction if they don’t migrate from x86.

    • knowitnone 17 hours ago

      You're calling it over before it's over. Is there a chance? Sure, just like there's a chance ARM could become extinct.

      • jbverschoor 11 hours ago

        It was already over 4 years ago. No CPU, no GPU, no 5G, no ultra low power, or anything else that’s interesting.

  • mikece 19 hours ago

    Is there any chance that Intel and AMD could merge to face the threats from ARM and Nvidia?

    • jitl 19 hours ago

      I’m not sure regulators would look too favorably on that merger, but also what would that give AMD? Enormous R&D costs and a mid-tier foundry business that’s losing money and unable to support their product lines.

  • fsflover 20 hours ago

    When will we be able to completely disable the Intel ME and AMD PSP on modern devices?

    • transpute 20 hours ago
      • fsflover 19 hours ago

        "Disabled" is far from completely removed. Closed, proprietary software with unlimited system access says it's disabled. Do you trust that?

        • tredre3 19 hours ago

          Why are you moving the goalposts? Your question was specifically how to completely disable it, not how to lobby AMD/Intel into removing the feature from the silicon...

          • BoingBoomTschak 19 hours ago

            His point is still valid: how do you know it's disabled other than "just trust me"? If I were the NSA, I'd work very closely with OEMs to bug the rare order with "Intel vPro™ - ME Inoperable, Custom Order" checked; a bit like https://arstechnica.com/tech-policy/2014/05/photos-of-an-nsa...

            • fluoridation 19 hours ago

              But by that measure, how do you know the platform doesn't have yet another remote control feature that you don't know about? Do you trust the platform or don't you?

              • snerbles 18 hours ago

                Ultimately, you can't. From Ken Thompson's "Reflections on Trusting Trust" [0], written in 1984:

                > The moral is obvious. You can't trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code. In demonstrating the possibility of this kind of attack, I picked on the C compiler. I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well-installed microcode bug will be almost impossible to detect.

                [0] https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_Ref...

                • fluoridation 18 hours ago

                  "Trust" in the security sense. https://en.wikipedia.org/wiki/Trusted_system

                  You can trust a system by accepting that if it is owned, you are owned. If you don't trust it you instead protect it with a system that you do trust. For example, if you don't trust any x86 system you would not make them face the Internet. You would put them behind a trusted firewall.

                  But you have to decide whether you trust a system or not. If you trust it then you have to believe it when it reports a certain feature is disabled. If you don't trust it to begin with then it doesn't matter if it's disabled or not. Your security shouldn't be relying on it anyway.

                  • M95D 2 hours ago

                    A firewall, as usually set up, only blocks incoming connections; it doesn't stop a chip from calling home and downloading commands.

                • fsflover 17 hours ago

                  > You can't trust code that you did not totally create yourself.

                  Which is a hint to the solution: https://news.ycombinator.com/item?id=41368835

          • fsflover 19 hours ago

            I said completely for a reason. How do I cut the power to it? Or let me see the source code and compile it myself, then I'll be sure.

    • throwaway48476 20 hours ago

      Why do you want to remove the backdoor?

      • mysterydip 20 hours ago

        They must have something to hide.

      • froh 19 hours ago

        backdoor schmackdoor --- it's a little BSD for your safety and security how dare you slander the poor thing with such a heinous label. tsk.