Oh my, allow me to reminisce.
When the Intel 80386-33 came out, we thought it was the pinnacle of CPUs, running our Novell servers! We now had a justification to switch from ARCNET to Token Ring. Our servers could push things way faster!
Then, in mid-1991, the AMD 80386-40 CPU came out. Mind completely blown! We ordered some (I think) Twinhead motherboards. They were so fast we could only use Hercules mono cards in them; all other video cards were fried. 16Mbps Token Ring was out by then, so some of my clients moved to it along with the fantastic CPU.
I have seen some closet servers running Novell NetWare 3.14 (?) with that AMD CPU in the late '90s. There was a QIC tape and tape drive in the machine that went unchanged for maybe a decade. The machine never went down (or got properly backed up).
Some AMD 80386DX-40 drama:
> While the AM386 CPU was essentially ready to be released prior to 1991, Intel kept it tied up in court.[2] Intel learned of the Am386 when both companies hired employees with the same name who coincidentally stayed at the same hotel, which accidentally forwarded a package for AMD to Intel's employee.[3]
Wonder if the hotel had a liability problem from that?
After all, it sounds like they directly caused a "billion dollar" type of problem for AMD through their mistake.
Far out LOL
That's amazing!
NW 3.12 was the final version, I think. I recall patching a couple for Y2K. NetWare would crash a lot (abend) until you'd fixed all the issues, and then it would run forever, unless it didn't.
I once had a bloke writing a patch for eDirectory in real time in his basement, whilst running our data on his home lab gear, on a weekend. I'm in the UK and he was in Utah. He'd upload an effort and I'd FTP it down, put it in place, reboot the cluster and test. Two iterations and job done. That was quite impressive support for a customer with roughly 5,000 users.
For me the CPU wasn't that important, per se. NWFS ate RAM: when the volumes were mounted, the system built all sorts of funky caches, which meant you could apply and evaluate trustee assignments (ACLs) really fast. The RAID controller and the discs were the important thing for file serving, and ideally you had the wires, switches and NICs to dole the data out at a reasonable rate.
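To illustrate the shape of that trick, here's a toy Python sketch (the structures and names are invented for illustration, not NetWare internals). Trustee assignments flow down the directory tree, so an uncached rights check on a deep path means walking every parent; a cache built once collapses that walk into a single lookup:

```python
# Toy sketch of mount-time ACL caching (invented structures; this is
# not how NWFS actually stored trustee assignments).
from functools import lru_cache

TRUSTEES = {  # explicit assignments: (path, user) -> rights
    ("/", "alice"): {"R", "F"},
    ("/apps", "bob"): {"R", "W", "C", "E", "F"},
}

def parent(path):
    if path == "/":
        return None
    head = path.rsplit("/", 1)[0]
    return head or "/"

@lru_cache(maxsize=None)  # stands in for the cache built at mount time
def effective_rights(path, user):
    explicit = TRUSTEES.get((path, user))
    if explicit is not None:
        return frozenset(explicit)
    up = parent(path)
    return effective_rights(up, user) if up else frozenset()

print(effective_rights("/apps/data/reports", "bob"))  # inherited from /apps
```

Pay the tree walk once, then answer every subsequent rights check from RAM, which is exactly why the box wanted so much of it.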
Token ring networks! So glad we moved on from that.
> So glad we moved on from that.
Don't look too closely at the collision-avoidance mechanism in 10BASE-T1S, standardized in 2020. It sure looks like a virtual token-passing ring if you squint...
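For the curious, the rotation looks roughly like this toy simulation (heavily simplified; real PLCA lives in the PHY with a BEACON from node 0 and per-node timers, none of which is modelled here):

```python
# Toy simulation of PLCA-style transmit opportunities, the 10BASE-T1S
# collision-avoidance scheme (heavily simplified for illustration).
from collections import deque

def plca_cycle(tx_queues, node_count):
    """One rotation: each node ID in turn gets a transmit opportunity,
    much like a token being passed around a ring."""
    events = ["node 0: BEACON (cycle start)"]
    for node_id in range(node_count):
        queue = tx_queues.get(node_id)
        if queue:
            events.append(f"node {node_id}: transmits {queue.popleft()!r}")
        else:
            events.append(f"node {node_id}: yields its slot")
    return events

tx_queues = {1: deque(["frame-A"]), 3: deque(["frame-B", "frame-C"])}
for event in plca_cycle(tx_queues, node_count=4):
    print(event)
```

No collisions, because only the node holding the current opportunity may transmit. Squint and it's a token.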
Quick! Everyone! Someone dropped the token. Get up and look behind your desks.
I remember that 386-40. That was a great time.
In 1996 we set up a rack (department store surplus) of Cyrix 5x86s (running on 486 Socket 3 motherboards) at 75MHz with 16MB of RAM, and it could serve 100 concurrent users with CGI scripts and image maps, doing web serving and VoIP, with over 1 million requests a month on a single T1 line.
Good luck doing that on a load-balanced rack of 96-core AMD servers today.
Damn, first Intel missed out on mobile, then it fumbled AI, and now it's being seriously challenged on its home turf. Pat has his work cut out for him.
They didn't miss out. They owned the most desirable mobile platform in StrongARM and cast it aside. They are the footgun masters.
They killed StrongARM because they believed the x86 Atom design could compete. Turns out that it couldn't and most of the phones with it weren't that great.
Intel should be focused on an x86+RISC-V hybrid chip design where they can control an upcoming ecosystem while also offering a migration path for businesses that will pay the bills for decades to come.
I'd argue that the Atom core itself could compete - it hit pretty much the same perf/watt targets as its performance-competitive ARM equivalents.
But having worked with Intel on some of those SoCs, it's everything else that fell down. They were late, they were the "disfavored" teams in the eyes of execs, they were the engineers' last priority, they had stupid hardware bugs Intel refused to fix and respin; it was everything you could do to set a project up to fail.
> They were late
This was the main thing: by that point, all native code was being compiled for Arm and not x86, so using x86 meant that some apps, libraries, etc. just didn't work.
Intel and Google developed libhoudini to do binary translation of the native code to solve that problem. https://github.com/SGNight/Arm-NativeBridge, https://www.anandtech.com/show/5770/lava-xolo-x900-review-th..., http://blog.apedroid.com/2013/05/binary-translation-vs-nativ...
Medfield was faster than the A9 and Qualcomm's Krait, but not better on power (see the Motorola RAZR i vs the RAZR M, where the dual-core ARM version got basically the same battery life as the single-core x86 version).
Shortly after, though, ARM launched the A15 and the game was over. The A15 was faster per clock while using less power too. Intel's later Atom generations never came close after that.
Maybe the Atom core itself was performant, but I doubt they could slim down all the x86 crap around it enough for a phone.
> Intel should be focused on an x86+RISC-V hybrid chip design where they can control an upcoming ecosystem while also offering a migration path for businesses that will pay the bills for decades to come.
First I've heard of this. Is this actually a possibility?
Maybe I'm just spitting out random BS, but if I understood Keller correctly when he spoke about Zen, it's not really a problem (for it) to change the frontend ISA, as a large chunk of the work is in the backend anyway. If that's the case for modern processors in general, it would be cool to see a hybrid that can be switched from x86_64 to RISC-V and, to add even more avant-garde to it, pair it with a core or few of FPGA on the same die. Intel, get on it!
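Something like this toy sketch, just to illustrate the claim that the frontend is the swappable part (the encodings and micro-ops are entirely invented; no resemblance to real x86 or RISC-V decoding):

```python
# Toy illustration of a swappable frontend: two decoders for two
# invented ISAs feed one shared, ISA-agnostic backend.

def decode_isa_a(insn):
    # invented ISA A syntax: "add r1, r2, r3"
    op, dst, a, b = insn.replace(",", "").split()
    return [(op.upper(), int(dst[1:]), int(a[1:]), int(b[1:]))]

def decode_isa_b(insn):
    # invented ISA B syntax: "r1 = r2 + r3"; different ISA, same micro-op
    dst, _, a, _, b = insn.split()
    return [("ADD", int(dst[1:]), int(a[1:]), int(b[1:]))]

def backend(uops, regs):
    # the expensive machinery (schedulers, ALUs, caches) only ever sees
    # micro-ops, so it never knows which frontend fed it
    for op, dst, a, b in uops:
        if op == "ADD":
            regs[dst] = regs[a] + regs[b]

regs = [0, 0, 5, 7]
backend(decode_isa_a("add r1, r2, r3"), regs)  # via frontend A
backend(decode_isa_b("r0 = r1 + r3"), regs)    # via frontend B
print(regs)  # [19, 12, 5, 7]
```

Whether that holds for the hairy parts (memory models, flags, privileged state) is of course the multi-billion-dollar question.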
There were consumer devices with a processor designed to be flexible on its instruction set presented to the user.
https://en.wikipedia.org/wiki/Transmeta_Crusoe
https://youtu.be/xtuKqd-LWog?t=332
aka the company where Linus worked!
That also kinda failed to reach its goals, unfortunately.
If you think about it, that's what Thumb mode on ARM is.
Plus the original Jazelle mode.
> and, to add even more avangarde to it, associate a core or few of FPGA on the same die
The use cases for FPGAs in consumer devices are ... close to zero, unless you're talking about implementing copy protection, since reverse engineering FPGA bitstreams is pretty much impossible unless you're the NSA, MI6 or Mossad, with infinite brains to throw at the problem (and, more likely than not, insider knowledge from the vendors).
"not really a problem to change" in the context and scope of a multi-billion dollar project employing thousands of people full time.
Sounds like Intel has a big boomer problem
They had a second attempt with x86 smartphone chips and bungled that too: https://www.pcworld.com/article/414673/intel-is-on-the-verge...
Yeah, Otellini disclosed that Jobs asked them for a CPU for the iPhone, and he turned the request down because Jobs was adamant about a certain price and Otellini just couldn't see it working.
Even if it was hard to foresee the success of the iPhone, he surely had the Core Duo in his hands when this happened, even if it hadn't launched yet, so the company had just found its footing again and they should've attempted this moonshot: if the volume is low, the losses are low; if the volume is high, economies of scale make it a win. This is not hindsight 20/20; it was true even if no one could have foreseen just how high the volume would be.
Not to mention that ARM keeps eroding their ISA moat via Apple, Ampere, Graviton and so on. Their last bastion is the fact that Microsoft keeps botching Windows on ARM every time they try to make it happen.
Intel has come back recently with a new series of "Lunar Lake" CPUs for laptops. They are actually very good. For now, Intel has regained the crown for Windows laptops.
Maybe Pat has lit the much needed fire under them.
Worth noting,
> Future Intel generations of chips, including Panther Lake and Nova Lake, won’t have baked-on memory. “It’s not a good way to run the business, so it really is for us a one-off with Lunar Lake,” said Gelsinger on Intel’s Q3 2024 earnings call, as spotted by VideoCardz.[0]
[0]: https://www.theverge.com/2024/11/1/24285513/intel-ceo-lunar-...
“It’s not a good way to run the business, so it really is for us a one-off with Lunar Lake,”
When you prioritize yourself (way to run the business) over delivering what customers want, you're finished. Some companies can get that wrong for a long time, but Intel has a competitor giving the customers much more of what they want. I want a great chip and honestly don't know, care, or give a fuck what's best for Intel.
> When you prioritize yourself
Unless “way to run the business” means “delivering what the customer wants.”
Customer being the OEMs.
I thought the OEMs liked the idea of being able to demand high profit margins on RAM upgrades at checkout, which is especially easy to justify when the RAM is on-package with the CPU. That way no one can claim the OEM was the one choosing to be anti-consumer by soldering the RAM to the motherboard, and they can just blame Intel.
Intel would definitely try to directly profit from stratified pricing rather than letting the OEM keep that extra margin (competition from AMD permitting).
The only ugly detail (for Intel) being that they're fabbed by TSMC.
Snapdragon X Plus/Elite is still faster and has better battery life. Lunar Lake does have a better GPU and of course better compatibility.
X Elite is faster, but not enough to offset the software incompatibility or dealing with the GPU absolutely sucking.
Unfortunately for Intel, X Elite was a bad CPU, and it has been fixed with the Snapdragon 8 Elite's update. The core uses a tiny fraction of the power of X Elite (far less than the N3 node shrink alone would account for). The core also got a bigger frontend and a few other changes which seem to have improved IPC.
Qualcomm said they are leading in performance per area, and I believe it's true. Lunar Lake's P-core is over 2x as large (2.2 mm² vs 4.5 mm²) and Zen 5 is nearly 2x as large too at 4.2 mm² (even Zen 5c is massively bigger at 3.1 mm²).
X Elite 2 will launch with either the 8 Elite's core or an even better variant, and it'll launch quite a while before Panther Lake.
Yeah, but can they run any modern OS well? The last N Intel laptops and desktops I've used were incapable of stably running Windows, macOS or Linux. (As in, the Windows and Apple ones couldn't run their preloaded operating systems well, and loading Linux didn't fix it.)
Very strange. Enough bad things can be said about Intel CPUs, but I have never had any doubts about their stability, except for that one recent generation that could age to death in a couple of months (I didn't have any of those).
In my experience AMD is more finicky with RAM, chipset/UEFI/built-in peripheral controller quality and so on. Not prohibitively so, but it's more work to get an AMD build to run great.
No trouble with any AMD or Intel Thinkpad T models, Lenovo has taken care of that.
LNL is a great paper launch, but I have yet to see a reasonably priced LNL laptop. Nowadays I can find 16GB Airs and X Elite laptops for 700-900 bucks, and once you get into $1,400 territory, just pay a bit more for an M4 MBP, which is a far superior machine.
Also, they compete in the same price bracket as Zen 5, which is more performant with not that much worse battery life.
LNL is too little too late.
An M4 MacBook Pro 14 with 32 GB of RAM and 1 TB of storage is $2,199... a Lunar Lake laptop with the same specs is $1,199. [0]
[0] https://www.bestbuy.com/site/asus-vivobook-s-14-14-oled-lapt...
Lunarrow Lake is a big L for Intel because it's all Made by TSMC. A big reason I buy Intel is because they're Made by Intel.
We will see whatever they come out with for 17th gen onwards, but for now Intel needs to fucking pay back their CHIPS money.
Are they being fabbed by TSMC in the US, or overseas?
> seriously challenged on its home turf.
Is it? I presume that a large chunk of AMD's $3.5B is MI3XX chips, and very little of Intel's $3.5B is AI, so doesn't that mean that Xeon likely still substantially outsells EPYC?
You forgot the 10 nm / 7 nm node troubles that continued for years and held back their CPU architectures (which honestly kept improving).
His work now boils down to prepping Intel for an acquisition.
By whom, though? I don't see how any company directly competing with Intel (or even an orthogonal one, e.g. Nvidia or ARM) would be allowed to buy Intel (they'd need approval in the US/EU and presumably a few other places) unless it's actually on the brink of bankruptcy.
>unless it's actually on the brink of bankruptcy?
This may be in the cards.
IIRC Intel and AMD have a patent sharing agreement that dissolves if either is purchased.
I'm not a HW guy, but my HW friends have been designing HCI (hyperconverged infrastructure) solutions with AMD for maximum I/O throughput, because AMD CPUs have more PCIe lanes.
I think for _most_ people it comes down to this: how much can I cram into the platform? More lanes means more high-speed storage, special-purpose processing, and networking interfaces.
VMware users are starting to say that EPYC is too powerful for one server, because they don't want to lose too much capacity to a single server failure. Tangentially related: network switch ASICs also have too much capacity for a single rack.
Surprising it took so long given how dominant the EPYC CPUs were for years.
I don't agree that this is surprising. To be "dominant" in this space means more than raw performance or value. One must also dominate the details. It has taken AMD a long time to iron out a large number of these details, including drivers, firmware, chipsets and other matters, to reach real parity with Intel.
The good news is that AMD has, finally, mostly achieved that, and in some ways they are now superior. But that has taken time: far longer than it took AMD to beat Intel at benchmarks.
One thing to remember is that the enterprise space is very conservative: AMD needed to have server-grade CPUs with all of the security and management features on the market long enough for the vendors to certify them, promise support periods, etc., and they needed to get the enterprise software vendors to commit as well.
The public clouds help a lot here by trivializing testing and locking in enough volume to get all of the basic stuff supported, and I think that's why AMD has been more successful now than they were in the Opteron era.
Server companies have long-term agreements in place... waiting for those to expire before moving to AMD is not unexpected. This was the final outcome expected by many.
Intel did an amazing job of holding on to what they had: first through enterprise sales connections, which AMD had very little of from 2017 to 2020; then by bundling other items, essentially a discount without lowering the list price; and finally with some heavy discounting.
On the other hand, AMD has been very conservative with their EPYC sales forecasts.
Upgrade cycles at datacenters are really long.
AMD has been ahead for 5 years and upgrade cycles are 4-6 years so AMD should have ~80% market share by now.
Servers are used for a long time and then Dell/HP/Lenovo/Supermicro has to deliver them and then customers have to buy them. This is a space with very long lead times. Not surprising.
Nobody ever got fired for buying Intel.
But some caught on fire by standing too close.
That’s not a thing.
They should be.
Complicated. Performance per watt was better for Intel, which matters way more when you're running a large fleet; it doesn't matter so much for workstations or gamers, where all that matters is raw performance. Also, the certification, enterprise management story, etc. was not there.
Maybe recent EPYC has caught up? I haven't been following too closely since it hasn't mattered to me. But both companies' numbers were suggesting AMD would pass Intel by.
Not surprising at all though, anyone who's been following roadmaps knew it was only a matter of time. AMD is /hungry/.
> Performance per watt was better for Intel
No, and it's not even close. AMD is miles ahead.
This is a Phoronix review for Turin (current generation): https://www.phoronix.com/review/amd-epyc-9965-9755-benchmark...
You can similarly search for Phoronix reviews of the Genoa, Bergamo, and Milan generations (the previous two generations).
You're thinking strictly about core performance per watt. Intel has been offering a number of accelerators and other features that make perf/watt look a lot better when you can take advantage of them.
AMD is still going to win a lot of the time, but Intel is better than it seems.
Are generic web server workloads going to use these features? I would assume the bulk of e.g. EC2 spends its time doing boring non-accelerated “stuff”.
Intel does a lot of work developing SDKs to take advantage of its extra CPU features, and works with the open source community to integrate them so they actually get used.
Their acceleration primitives work with many TLS implementations, nginx, and SSH, among many others.
Possibly AMD is doing similar but I'm not aware.
ICC, IPP, QAT, etc. are definitely an edge.
In the AI world they have OpenVINO, Intel Neural Compressor, and a slew of other implementations that typically offer dramatic performance improvements.
As we see with AMD trying to compete with Nvidia, software matters. A lot.
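For example, the barrier on the OpenVINO side is genuinely low. A minimal sketch (assumes the `openvino` Python package is installed; "model.xml" is a placeholder for whatever IR model you've exported):

```python
# Minimal OpenVINO inference sketch; "model.xml" is a placeholder path.
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")         # load the exported IR graph
compiled = core.compile_model(model, "CPU")  # the CPU plugin picks up whatever
                                             # the host offers (AVX-512, AMX, ...)

shape = tuple(compiled.input(0).shape)       # compiled models have static shapes
dummy = np.zeros(shape, dtype=np.float32)
result = compiled([dummy])[compiled.output(0)]
print(result.shape)
```

The point is you never hand-write AVX-512 or AMX code; the runtime selects the accelerated kernels for you.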
AMD is not doing similar stuff yet.
But those accelerators are also available for AMD platforms - even if how they're provided is a bit different (often on add-in cards instead of a CPU "tile").
And things like the MI300A mean that isn't really a requirement now either.
Intel lost performance per watt with the introduction of the original EPYC in 2017. AMD overtook it in outright performance with Zen 2 in 2019 and hasn't looked back.
Outdated info. AMD/TSMC has beaten Intel at efficiency for years. Intel has fallen behind. We need them to catch up and provide strong competition.
Intel has just been removed from the Dow index. They are underperforming on multiple levels.
https://apnews.com/article/dow-intel-nvidia-sherwinwilliams-...
Care to post any proof?
idk, go look at the Xeon versus AMD equivalent benchmarks. They've been converging, although AMD's datacenter offerings were always a little behind their consumer ones.
This is one of those things where there's a lot of money on the line, and people are willing to do the math.
The fact that it took this long should tell you everything you need to know about the reality of the situation.
Are you looking at UserBenchmark? They are not even slightly reliable.
Oh thanks for the reminder! I gotta go read their 9800x3d review, I'm always up for a good laugh.
Edit: awww no trash talking it yet, unlike the 7800x3d :)
He is so biased against AMD that PC-building communities and even Intel forums have banned that site.
Sorry, but everything about this is wrong.
AMD has had the power efficiency crown in data center since Rome, released in 2019. And their data center CPUs became the best in the world years before they beat Intel in client.
The people who care deeply about power efficiency could and did do the math, and went with AMD. It is notable that they sell much better to the hyperscalers than they sell to small and medium businesses.
> idk, go look at the Xeon versus AMD equivalent benchmarks.
They all show AMD with a strong lead in power efficiency for the past 5 years.
I know what the benchmarks are like; I wish you would go and update your knowledge. If we take cloud pricing as a comparison, it's cheaper to use AMD. Think they're doing some math?
I'd still like a decent first FPGA. Guys? I'm still here, guys! Please make me some FPGAs!
Sorry, you can have a cheap-ish FPGA that came out 10 years ago, or a new FPGA that costs more than your car and requires a $3000 software license to even program. Those are the only options allowed.
Nah, the hobby strat is to buy a chunk o' circuit board and learn BGA soldering. "Chip Recovery," they call it.
https://www.ebay.com/itm/235469964291
Best start believin' in the crazy cyberpunk stories. You're in one!
Virtex UltraScales require Vivado EE so you'd still need the $3000 license to do anything with it :(
edit: legally that is, assuming there's even enough demand for these tools for anyone to bother cracking them
This is software written by hardware guys. Cracking it is the easy part. Then you have to make it work...
(legally)
Is Vivado easy to pirate? Now I'm interested
Yes, or you can just keep getting a trial license.
Buy a Xilinx U50C or U55 (C1100) - neither requires a Vivado license, and both have HBM and plenty of LUTs (VU35P chips). Neither will exceed $1500.
The new COP (Cost-Optimized Portfolio) FPGAs are in the $100-400 range. Not cheap, but nothing compared to the high-end parts.
So Intel has abandoned the sub-$100 segment to AMD/Xilinx, Lattice, Efinix and Microchip?
The COP is AMD/Xilinx's. I have no idea what Agilex 3 and 5 cost; I'm not an Altera user. I will note, though, having used Lattice, Microchip, and (admittedly at the start of Titanium) Efinix, none of their tools come close to Vivado/Vitis. I'm on Lattice at the moment and I've lost countless hours to the tools not working, or working poorly on Linux relative to Xilinx. Hobbyist me doesn't care; I'll sink the hours in. Employee me does care, though.
Luckily they are spinning off the FPGA business to be Altera again
There's also Cologne Chip.
I'd look at whatever nextpnr supports.
Therefore, AMD stock is down 17.1% in the past month.
Please, don't talk about how well AMD is doing! You'll only make the stock price slide another 10%, as night follows day... [irrational market grumbling intensifies]
The market can hardly be called irrational on this. AMD's market value has pretty much already priced in that they will take over Intel's place in the datacenter: their valuation is more than double Intel's, with a P/E of 125, despite them being fabless and ARM gaining ground in the server space. That's why you're seeing big swings in the price; anything short of "we are bankrupting Intel and fighting Nvidia in the AI accelerator space" is seen as a loss.
> despite them being fabless
That's not how it works. You need to pump money into fabs to get them working, and Intel doesn't have money. If AMD had fabs to pour their money into, they would also have a much lower valuation.
The market is completely irrational on AMD. Their 52-week high is ~$225 and their 52-week low is ~$90. $225 was hit when AMD was guiding for ~$3.5B in datacenter GPU revenue. Now they're guiding to end the year at $5B+ in datacenter GPU revenue, but the stock is at ~$140?
I think it's because of how early Nvidia announced Blackwell (it isn't shipping in any meaningful volume yet), and the market thinks AMD needs to compete with GB200 while they're actually competing with H200 this quarter. And for whatever reason the market thinks that AMD will get zero AI growth next year? I don't know how to explain the stock price.
Anyway, they hit record quarterly revenue this Q3 and are guiding to beat that record by ~$1B next quarter. The price might move a lot based on how AMD guides for Q1 2025.
> That's not how it works.
Being fabless does have an impact, because it caps AMD's margins and makes x86 their only moat. They can only extract value if they remain competitive on price. Sure, that doesn't impact Nvidia, but Nvidia gets to have fat margins because they have virtually no competition.
> The market is completely irrational on AMD. Their 52-week high is ~$225 and their 52-week low is ~$90.
That's volatility, not irrationality. As I wrote, AMD's valuation is built on the expectation that they will keep executing in the DC space, that Intel will keep shitting the bed, and that their MI series will eventually be competitive with Nvidia. Those assumptions make investors skittish, and any news about AMD causes the stock to move.
> the market thinks AMD needs to compete with GB200 while they're actually competing with H200 this quarter. And for whatever reason the market thinks that AMD will get zero AI growth next year?
The only hyperscaler that picked up the MI300X is Azure, and they GA'ed it 2 weeks ago; both GCP and AWS are holding off. The uncertainty about when (if) it will catch on is a factor, but the growing competition from those same hyperscalers building their own chips means the opportunity window could be closing.
It's ok to be bullish on AMD the same way that I am bearish on it, but I would maintain that the swings have nothing to do with irrationality.
Isn't that ahead of schedule?
Everyone, I think, knew AMD was catching up, but thought this was still a year or two out.
I am sure AMD has been delivering more value for even longer. I bet the currently deployed AMD exaflops are significantly higher than Intel's. It was a huge consideration for me when shopping between the two: as much as 50% more compute per dollar.
Interpretation notes: this is the first time in the era during which these companies have broken out "datacenter" as a reporting category. The last time AMD was clearly on top in terms of product quality, they reported 2006 revenue of $5.3 billion for microprocessors while Intel reported $9.2 billion in the same category. In those years the companies incompletely or inconsistently reported separate sales for "server" or "enterprise".
Still, there were always clearly defined product lines, like Athlon vs. Opteron.