The Phoronix benchmarks show exactly the expected results.
However, there are some sites where the interpretation of similar results is ridiculous, for instance TechSpot.
On that site, the author complains that, e.g., the 9700X is only a few percent faster than the 7700X in the multi-threaded benchmarks. Yet the author fails to mention that the 7700X is configured for a 105 W TDP, while the 9700X is configured for a 65 W TDP.
For most of the new series, AMD has chosen a default configuration that favors energy efficiency over speed. Therefore, in the default configuration most of the improvements show up as much higher energy efficiency and only slightly higher speed in multi-threaded benchmarks. The results measured by the reviewer were exactly as expected, and the complaints make absolutely no sense.
Later, the reviewer notices that the new series draws a few tens of watts less, but then complains that, relative to the power consumption of the entire computer system, this amounts to only around 10%, so it is not impressive.
However, the reviewer fails to point out that this computer system includes an RTX 4090, which alone draws as much power as a couple of normal desktops. The reviewed computer consumes around 500 W, many times more than a normal desktop computer. A power saving that would be huge for a normal desktop therefore appears modest for such a behemoth. Again, the complaints are ridiculous.
Moreover, the reviewer fails to notice that in single-threaded benchmarks the new Zen 5 CPUs match or even exceed the fastest Raptor Lake CPUs, despite the latter's much higher clock frequencies. That means that when the faster Zen 5 models launch next week, they will easily beat any Intel CPU in single-threaded benchmarks.
These results are excellent, not a failure, as the reviewer weirdly concludes.
Moreover, that bad review has not tested the domains where Zen 5 offers up to double the performance of any previous desktop CPU, such as floating-point computations, cryptography or ML/AI. These use cases alone are reason enough to upgrade for many of those who use such CPUs for professional purposes and not for games.
Steve's reviews are focused on gaming, and he probably isn't wrong there. The 9700X seems to bring very little for gaming compared to the 7700X, which offers almost the same performance, sometimes even better, and is cheaper to buy. Most gamers do not care about 10% lower power consumption, especially when they have to pay more upfront for it.
Yes, the better energy efficiency allows the 9700X to boost higher and get somewhat higher multi-threaded performance, but for some reason gaming does not benefit noticeably. Zen 5 seems interesting for laptops and low-power client devices, but so does the new Intel Lunar Lake (for now, at least), so we have to wait for reviews and comparisons there.
> has not tested... floating-point computations, cryptography or ML/AI.
That is true, but only a small fraction of people care about these specialized workloads; they're overhyped in marketing. But if you do care about them, I think you may be right that Zen 5 is more interesting there.
I agree that upgrading from Zen 4 to Zen 5 would make very little sense for a gamer.
However, this fact was already well known and has been discussed for some months.
It was not a surprise, and verifying it is certainly a stupid justification for calling Zen 5 a flop.
Zen 5 does exactly what AMD announced it would do. It provides much greater energy efficiency than any previous x86 CPU.
It has greater single-thread performance at a given clock frequency than any previous x86 CPU, though Arrow Lake S, which is expected in October or November, will have about the same IPC in its big cores, so about the same single-thread performance.
For any application that can use AVX-512 instructions, the desktop variant of Zen 5, i.e. Granite Ridge, can deliver double the throughput of any previous desktop CPU. For some people this will not matter at all, but for others it will be decisive.
The same happened at the previous SIMD throughput doublings that kept the core count unchanged, e.g. Sandy Bridge after Nehalem, or Haswell after Sandy Bridge.
For some people this did not matter, while for others it was a great improvement.
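To make the AVX-512 point concrete, here is a minimal sketch (my own illustration, not something from the review) of where the doubling comes from: the same reduction consumes 8 floats per iteration with AVX2 but 16 with AVX-512, and Granite Ridge executes the 512-bit operations at full width. The build flags are an assumption; any compiler with AVX-512 support will do.

    // Minimal sketch: the same loop processes 8 floats per iteration with
    // AVX2 but 16 with AVX-512, so peak throughput can double on a core
    // with full-width 512-bit datapaths, as Granite Ridge has.
    // Build (assumed): g++ -O2 -mavx512f sum.cpp
    #include <immintrin.h>
    #include <cstddef>

    float sum_avx2(const float* a, std::size_t n) {   // n: multiple of 8
        __m256 acc = _mm256_setzero_ps();
        for (std::size_t i = 0; i < n; i += 8)
            acc = _mm256_add_ps(acc, _mm256_loadu_ps(a + i));
        __m128 lo = _mm256_castps256_ps128(acc);      // horizontal sum
        __m128 hi = _mm256_extractf128_ps(acc, 1);
        lo = _mm_add_ps(lo, hi);
        lo = _mm_hadd_ps(lo, lo);
        lo = _mm_hadd_ps(lo, lo);
        return _mm_cvtss_f32(lo);
    }

    float sum_avx512(const float* a, std::size_t n) { // n: multiple of 16
        __m512 acc = _mm512_setzero_ps();
        for (std::size_t i = 0; i < n; i += 16)
            acc = _mm512_add_ps(acc, _mm512_loadu_ps(a + i));
        return _mm512_reduce_add_ps(acc);             // built-in reduction
    }

On Zen 4 the 512-bit operations are split into two 256-bit halves internally, so the second loop gains little there; Zen 5's full-width datapath is what turns the wider vectors into actual doubled throughput.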
It's a flop for people who expect that, after 20 months, they'll get 13%-18% more. Perhaps Steve did not know this "already well known fact" and thus wrote about a flop and Intel 11th-gen vibes.
How did you know before the benchmarks were out? Did AMD say gaming performance would stagnate? (That would be a very stupid thing for them to say.)
In which applications is AVX-512 performance decisive? Video editing / 3D modelling?
There have been a lot of comments on many Internet forums over the last months, based on various benchmarks of engineering samples, saying that the gaming performance of these new models would be only marginally better than that of the existing Zen 4 models, and sometimes even lower than that of the parts with 3D cache.
Only when the Zen 5 models with the big 3D cache are launched are they expected to be noticeably faster for gaming.
When a 5.5 GHz 9700X matches or exceeds a 6.0 GHz 14900K in single-thread performance, that is almost 10% over the older competition (matching at a 6.0/5.5 clock deficit implies roughly 9% higher IPC), and it certainly is 13%-18% over the corresponding model of the same clock frequency from AMD's previous generation.
There are many professional applications where AVX-512 performance is decisive. There would have been many more, had Intel not prevented this through its market segmentation policies, which force most software developers to target only the weakest and most obsolete CPUs. I am myself interested in certain CAD/EDA engineering applications, where I expect a good speed-up from a 9950X, at a much lower price than any previous solution. This is a nice change at a time when most computing solutions increase in price instead of decreasing, as in the old days.
> There have been a lot of comments on many Internet forums
Still, the non-improvement at default settings surprised people; see, e.g., the embarrassing confession by PCWorld: they did not believe the performance increase was so minuscule and asked AMD whether it was for real.
> it certainly is 13%-18% over the corresponding model of the same clock frequency from AMD's previous generation.
More like 10%. And you have to overclock for that. Overclocking has become a fool's errand; you can expect it to cause problems, crashes, etc. Granted, if crashes are rare, gamers may go for it.
> It's a flop for people who expect that, after 20 months, they'll get 13%-18% more.
If you have unrealistic expectations, everything, everywhere is always going to be a flop.
You're not getting 18% more IPC at 30% energy savings in a single generation. That kind of uplift probably hasn't been seen since the Pentium 3 vs Pentium 4 era, or maybe Nehalem vs Core Duo.
Regardless, if you run the Zen 5 CPUs at the same TDPs as the 7000 series, you can still easily get 15-20% uplift. It's just that AMD has chosen conservative defaults for energy efficiency.
And purely for gaming, you should be waiting for X3D versions.
The gamer/enthusiast segment expects a performance increase, not energy savings. CPU consumption is not even the greatest power hog in a gaming PC.
Zen 3 brought 20% more performance at much better power consumption than Zen 2, and that set expectations. Zen 4 was a weaker improvement, and some people hoped that was a one-time thing and that Zen 5 would get back to Zen 3-level improvements or better. But the improvement is even smaller this time.
That's why, in this consumer segment, the 9700X is like Intel's 11th gen: a token increase in performance (and sometimes a decrease) compared to the previous gen, and thus a meh product. In other segments, like desktops for work, or laptops, the focus is different, and the same performance at lower consumption is a great new feature. So it's not all bad; it's just meh for gamers and enthusiasts.
Yes, you can overclock, and expect either to win the lottery or to get problems like Intel has. If AMD did not clock these higher by default, there is a good reason for that, and it is not green politics. AMD has every incentive to clock as high as possible, to look and sell better. Most probably, the current batches of 4 nm chips out of TSMC aren't rock-solid at higher clocks.
Re X3D: yes, those should be better. But this is marketed as a 9700X, not as a 9700, so it's a flop. PCWorld was so surprised by the non-improvement that they postponed their review and checked with AMD whether their poor bench results really were what AMD intended them to see.
> Zen 5 does exactly what AMD announced it would do. It provides much greater energy efficiency than any previous x86 CPU.
I buy my computers to use them, not to be "efficient".
I don't give a shit how much my laptop or desktop uses, but I need power when I use it.
> I buy my computers to use them, not to be "efficient"
Not every product is meant for you. I bet folk who cool their computers with HVAC and draw power by the MW do care about efficiency a lot.
Also, Zen 5 beats Intel in single-thread performance while being efficient (by default). You can always change your BIOS settings to get even more perf using more power.
He also does a test with PBO enabled, which brings the power draw in line with the 7700X, and it is only 10% better or less.
> Moreover, that bad review has not tested the domains where Zen 5 offers up to double the performance of any previous desktop CPU, such as floating-point computations, cryptography or ML/AI. These use cases alone are reason enough to upgrade for many of those who use such CPUs for professional purposes and not for games.
I hope AVX512 dies a painful death, and that AMD starts fixing real problems instead of trying to provide magic instructions to then create benchmarks that they can look good on.
Because absolutely nobody cares outside of benchmarks.
The same is largely true of AVX512 now - and in the future. Yes, you can find things that care. No, those things don't sell machines in the big picture.
I hope AMD gets back to basics: gets their design process working again and concentrates more on regular code that isn't HPC or some other pointless special case.
I want my power limits to be reached with regular integer code, not with some AVX512 power virus that takes away top frequency (because people ended up using it for memcpy!) and takes away cores (because those useless garbage units take up space).
Stop with the special-case garbage, and make all the core common stuff that everybody cares about run as well as you humanly can. Then do a FPU that is barely good enough on the side, and people will be happy. AVX2 is much more than enough.
In desktop chips I really could not care less if the TDP is 65W or 105W.
AMD has been doing really well on the CPU side the last few years, and with Intel's recent issues landing at the same time as Zen 5, I have to imagine this is going to be a good time for them going forward.
I remember when AMD was the budget choice that threw raw speed and cores at the problem with CPUs like the FX 83XX chips and Intel was the real player (memories of building my first computer in high school). I love the switch up. I hope Intel can get their comeback as well (without just throwing 240W at the processor). I love some competition in the x86 market.
AMD being good was really jarring when I got back into building PCs a few years ago. I was so used to the Phenom and Bulldozer era, when the consensus was that AMD was pretty much a joke. How times have changed...
Phenom was really good. Ran mine for 10 years before getting into Ryzen. But Bulldozer was a letdown.
For non-gaming builds, the Phenom processors were great value. I had a Phenom II X6 for running virtualization labs, no issues whatsoever. Bulldozer was a flop, with no real performance gains over the Phenom IIs.
Anybody else finding that the need for upgrades slowed down dramatically?
Did a 3700X -> 5800X3D upgrade as a last gasp of AM4, and at this pace I'll probably wait till what, Zen 6? 7?
Outside of niche areas, I think programmers can easily go at least 5 years without needing an upgrade. The average office worker can probably use recent hardware until it breaks. My 2016 MacBook would be fine for day-to-day work if the battery wasn't completely dead. My current laptop (M1) and desktop (3600) are both 2020 models and fine for most programming tasks.
My wife's PC is an i7-2600, circa 2011. I have the RAM maxed out, an SSD, and an Nvidia card, and for day-to-day stuff you'd never know it wasn't brand new-- seriously. It even runs Windows 11 perfectly with the Rufus-modified installer.
If you're chasing FPS in AAA games it's not going to cut it, but it boots in seconds, loads apps quickly, plays streaming video perfectly...
Also my experience. My MacBook Air M1 with 16 GB of RAM is still all I need for my programming work.
Depends on what you do; I found that without active cooling it throttles a lot. The similar MBP worked great.
I'm in a similar boat. I'm rocking a 3600 that I purchased in Feb 2020. I'd love a reason to upgrade, but I really can't justify it for my personal machine. It still does everything I want faster than I need. I'm just going to hold off until I have to upgrade, so I can enjoy a massive improvement in a few years.
Depends on what you are working on.
For Rust and C++ compilation, a fast CPU/RAM/SSD makes a massive difference, so I'm always eager to upgrade to the latest and greatest.
> my language doesn't care about compilation speed, so I'm generating e-waste at a significant personal expense to compensate
That isn't a personal critique; it even sounds like a perfectly rational decision. But if anyone here is working on a language (or any other software) of any popularity, it's worth keeping those sorts of negative second-order effects in mind.
It's unclear who you're criticizing. The developer who uses a compiled language (even though, all things considered, compiled languages are generally more efficient)? The compiler developer who doesn't optimize the compiler (even though it's unclear compilers could be much faster)? Also, why do you think upgrading a computer intrinsically leads to e-waste? I've kept or given away every computer I've bought that still works. Why wouldn't I, when it's still perfectly capable of doing useful work?
> who I'm criticizing
Developers of wasteful software, such as unoptimized compilers. It's not an absolute criticism, just a call-out to keep performance in mind because of second-order effects.
> even though it's unclear compilers could be much faster
You've used languages with substantially faster compilers, right? The majority of the Rust compiler's runtime isn't the borrow checker. Standard C++ compilers don't fundamentally do anything beyond what other systems languages do.
> intrinsically leads to e-waste
It's not _intrinsic_, but somewhere around 1/3 of computers are re-used or recycled. Most people, when upgrading, have done so because they perceive their previous computer as unsatisfactory for their normal tasks, and most people don't have background work or servers or whatnot to make use of old computers. Toss in a resale value of $100-$200, and it's not worth the time and effort for a lot of people to re-sell or recycle. Yes, they should do better, but with that reality in mind you can easily construct a _correct_ causal model of "if I write slow, popular software then there will be much more e-waste than if I had not done so."
Sure, but developers, and especially C++ and Rust developers, make up a small minority of the total market. The general statement "most people will throw in the garbage their old computer when buying a new one" may not apply in general to such an unusual subset. Like I said, it doesn't apply to me, and I've upgraded specifically to get better build times.
Sure. It might apply more, though, with the cost/benefit analysis strongly favoring fast actions rather than good actions for high-paid professionals too. We have some data suggesting e-waste is a problem. Maybe it's 10% in that demographic. That's still 10k computers in the dump, purely because of slow Rust and C++ compile times, even if only 1% of people upgraded because of compile times and 10% of that demographic aren't responsible with their devices.
It's also (very roughly) 1.3 million kWh per year extra for Rust and C++ developers to use those slower compilers once a month, relative to faster alternatives. A few core maintainers have the power to fix that and have chosen not to.
And maybe that's fine. But it's a real cost, quite large relative to the other impact those individuals might have, and it's worth acknowledging rather than dismissing.
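For what it's worth, a figure of that order is easy to reach with a back-of-envelope model. Every input below is a hypothetical assumption of mine, not data from this thread:

    // Back-of-envelope check of the ~1.3M kWh/yr order of magnitude.
    // Every number here is an illustrative assumption, not a measurement.
    #include <cstdio>

    int main() {
        double developers   = 3.0e6;  // assumed Rust + C++ developers worldwide
        double extra_hours  = 0.5;    // assumed extra compile time per month, in hours
        double cpu_watts    = 75.0;   // assumed average package draw while compiling
        double kwh_per_year = developers * extra_hours * 12 * cpu_watts / 1000.0;
        std::printf("~%.2e kWh/yr\n", kwh_per_year);  // prints ~1.35e+06
        return 0;
    }

Change any input by 2x and the answer moves by 2x, which is why such estimates are only good for orders of magnitude.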
>It's also (very roughly) 1.3 million kWh per year extra for Rust and C++ developers to use those slower compilers once a month, relative to faster alternatives.
What do you mean by "faster alternatives"? There is only one Rust compiler, and all the C++ compilers are about equally fast. Do you mean languages that compile faster? Do you really think most projects can just be made in one language or another and it doesn't matter?
>A few core maintainers have the power to fix that and have chosen not to.
Nobody chooses to make inefficient software. First, there are only so many man-months in the month. Second, compilers are constantly evolving creatures. Optimizations, generally speaking, make software more difficult to maintain. What would you rather have? A somewhat slow compiler that can optimize your code really well, or a compiler that runs really fast but can only compile the language version from ten years ago and generates slow binaries because the maintainers can't work around the optimization redesign from fifteen years ago?
>And maybe that's fine. But it's a real cost, quite large relative to the other impact those individuals might have, and it's worth acknowledging rather than dismissing.
I think you're underestimating how complex modern compilers are. If people are dismissing your concerns it's because they understand that the price they pay for efficient programs is slow compilers.
> Standard C++ compilers don't fundamentally do anything beyond what other systems languages do.
C++ is an incredibly complex and difficult-to-parse language. This is by design, because the language has evolved over time while preserving compatibility.
Also, templates are incredibly expensive because of:
1. How compilation units work (modules fix this, but it's taken decades to get here)
2. How complex they are, i.e. they're Turing complete and recursively defined (see the sketch below)
3. Platform compatibility (read some STL code to see what I mean)
These things ARE fundamentally different from other systems languages. C doesn't have templates, and its macros are glorified find-and-replace. Go barely has generics, let alone higher-kinded types. Rust doesn't even have higher-kinded types (template templates).
And Rust and Go are NEW, so they can go willy-nilly with whatever syntax they want.
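A tiny illustration of point 2, my own example rather than anything from this thread: the compiler itself must recurse to evaluate a template like this before any code runs, and heavy template code makes it do enormous amounts of such work at build time.

    // Point 2 in action: the compiler recursively instantiates
    // Fib<30>, Fib<29>, ..., Fib<0> at compile time; the result
    // exists before the program ever runs.
    template <int N>
    struct Fib {
        static constexpr long value = Fib<N - 1>::value + Fib<N - 2>::value;
    };
    template <> struct Fib<1> { static constexpr long value = 1; };
    template <> struct Fib<0> { static constexpr long value = 0; };

    static_assert(Fib<30>::value == 832040, "evaluated entirely by the compiler");

Each instantiation is memoized, so this toy stays cheap, but real template-heavy code multiplies such instantiations across every translation unit that includes the header, which ties back to the compilation-unit cost in point 1.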
This is an odd critique.
Many interpreted languages consume easily upwards of 50x the energy at runtime...
> Compiled language authors shouldn't write faster compilers because interpreted languages exist
My point is that the costs of those particular compiled languages are high enough that some additional investment in compile speed is worth it, regardless of the existence of interpreted languages.
> the costs of those particular compiled languages are high enough
Only if we are talking about developer experience.
Even if you are working on something almost no one uses, spending a bit of time, or even hours, compiling will be dwarfed by running more efficient code.
Just remember that there are billions of devices constantly (re)JITing JavaScript code...
I'm on an 11-year-old CPU :)
The slowest things are videos that lack hardware acceleration. These fancy new video encodings are a form of forced obsolescence IMO.
I'm a huge fan of 11-year-old CPUs. 2013 was the last year AMD released processors without the PSP. Not fit for every task, but for most general computing it's indistinguishable from the latest hardware. I like having a computer where everything is accessible. 2013 was also the last year NVIDIA released GPUs without signed firmware.
> The slowest things are ...
and code using C++ templates and new C++1x/2x features. And compilation in general, because the code base grows over time.
AV1 really consumes CPU, but x265 and VP9 are easy on the CPU with software decoding.
A shot in the dark - i5-3570K? This is the one I use!
A 1680v2, and before that a 4930K.
It's the same across almost all tech. Phones, tablets, laptops, CPUs, GPUs, displays, TVs. Unless you have specific needs at work or chase the most recent games, the need to update almost went away. I'm still on i5-9300 in my laptop and will happily use it till it dies. (Then look at the battery life in the replacement before considering performance at all)
Yep, after Zen 3-4/Alder Lake, we're now in another stagnation cycle like Intel Sandy Bridge brought in 2010-2015.
For casual users, a good test of whether it's time to upgrade is: have the CPU speed and the memory speed doubled? If not, for most people it's better to keep the old device.
For competitive gamers, smaller gains make sense, but it gets expensive fast.
What was your opinion on the 5800X3D upgrade? I'm thinking of doing the same transition but haven't really had any performance complaints with my 3700X. I feel like the supply of 5800X3Ds will probably disappear pretty soon, and the close-out prices with it. I'm too lazy to do a whole platform upgrade.
I went from a 2700 to a 5800X3D and the difference is absolutely dramatic and well worth it. Originally I had a 1440p monitor with a GeForce 1080, which was fine. Then I bought a 4K monitor, and it brought the system to its knees. Then I upgraded to a 3080 and there definitely was an improvement, but FPS was still unacceptably low, so I upgraded to that X3D processor. It's a beast and I play most games at 140+ fps.
> I feel like the supply of 5800X3Ds will probably disappear pretty soon, and the close-out prices with it.
AMD has introduced several new 5000-series processors this year, including the 5700X3D, which is basically a 5800X3D with slightly lower clock speed. They're not cutting off production of these parts any time soon.
Same here; at this point I'll probably only upgrade when CAMM2 becomes more widespread
> Anybody else finding that the need for upgrades slowed down dramatically?
It's kinda random for me: at some point I used a 6th-gen Core i7 (a 6700, then replaced the 6700 with a 6700K: long story) for seven years, something insane like that.
Then I got a 3700X, upgraded after a year or so to a 7700X, and I'm now seriously considering the 9700X (for the lower TDP, the +12% single-core performance, and maybe, at long last, some ECC).
But then I put a process in place: when I buy a new machine, my wife gets my old one (so she now has the 3700X) and my mother-in-law gets the old-old one (so she's got the 6700K atm).
Basically: I feel good about upgrading my machine because everybody gets a faster machine when I upgrade.
I'll probably add my daughter into the mix too.
I upgraded my 3900 to a 5900, and it is quite adequate for my needs. If the next upgrade means a new motherboard, RAM, etc., I can wait until the difference is worth it.
Yeah, that has me a little spooked too. Ideally I want a 4K HDR gaming setup... that means a fancy monitor and a fancy GPU... which equals many thousands in cost.
I'll wait for AM6, in 2 years maybe
I use a 3700X and have had no reason to upgrade yet.
I'm still on 2700, crickets. My last upgrade was so I could have 32GB RAM, not better CPU performance.
OK. The power consumption of the Ryzen 7 9700X threw me off a bit at the end. I wasn't expecting such a gap between it and the i9-14900K.
Competitive price, leading in most performance metrics, with very good power consumption numbers. That really is a home run.
It would be nice to see the idle power numbers, including the motherboard. A lot of the time I'm using my home PC for light tasks (browsing, listening to music, etc). The idle power is likely to dominate unless doing heavy processing tasks or gaming.
Also would be great if a Linux-focused outlet would mention whether the power saving features of the platform work at all. The difference between the idle power of a platform that reaches deep package sleep states and one that doesn't is very large.
Power consumption can only really be compared at the same performance level (IMO), though I guess it depends on whether you plan to tweak the processor defaults.
One of my biggest bugbears is people using maximally clocked processors (such as the i9) as indicative of power efficiency in general. Processors use way more power for those last few hundred megahertz.
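That last point follows from how dynamic power scales: roughly with f*V^2, and voltage has to rise with frequency near the top of the V/f curve, so power grows close to f^3 there. A rough sketch of that scaling; the cubic exponent and the 65 W at 5.0 GHz baseline are assumptions for illustration, not measured data:

    // Why the last few hundred MHz are so expensive: dynamic power is
    // roughly C * f * V^2, and V must rise roughly with f near the top
    // of the V/f curve, so power grows close to f^3 there.
    // All numbers are illustrative assumptions, not measurements.
    #include <cstdio>
    #include <cmath>

    int main() {
        const double base_f = 5.0, base_p = 65.0;  // assumed: 65 W at 5.0 GHz
        for (double f = 5.0; f <= 6.0; f += 0.25) {
            double p = base_p * std::pow(f / base_f, 3.0);
            std::printf("%.2f GHz -> ~%5.1f W (+%4.1f%% perf, +%5.1f%% power)\n",
                        f, p, (f / base_f - 1) * 100, (p / base_p - 1) * 100);
        }
        return 0;
    }

Under this model the last 500 MHz (5.5 to 6.0 GHz) buys about 9% more clock for roughly 30% more power, which is why efficiency comparisons against a maximally clocked part are so misleading.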
I don't think this release is all that impressive, tbh. A fairly minor upgrade all told, and not an upgrade at all for those of us who value performance more than efficiency (admittedly, you can eke out some small improvement via PBO).
The 14900K is a 24-core CPU. Don't doubt that "efficiency" cores draw current: the e-core cluster alone can draw over 135 W if you push it.
A 9700X system has materially the same performance and only slightly lower consumption than a similar 7700X system, which was released 21 months ago. It's stagnation, like Intel in 2010-2015.
Nice benchmarks. One caveat: why not publish which motherboard/RAM combination you were using for the review? It looks like an existing AM5 motherboard was used, but I don't see it in the review.
It should be in the system table on the 2nd page. It's a bit small, but the SVG can be zoomed. It was an ASUS ROG STRIX X670E with the latest BIOS.
Edit: but yeah, I need to find a way to scale that table better to make it easier to read.
Any suggestions for ECC?
Would you suggest going with an ASRock Rack motherboard, even for desktop use, like you used here? https://www.phoronix.com/review/amd-ryzen9-ddr5-ecc
I'm strongly tempted to get a Zen5 CPU, but am unsure of the motherboard.
I haven't yet tested ECC with any Zen 5 desktop CPU. But yes, in general with Zen 4, ASRock Rack and Supermicro boards have worked out well. In time I will try out ECC on the Ryzen 9000 series.
Zen 5 appears to officially support up to DDR5-5600, but unfortunately all of the ASRock Rack and Supermicro boards I looked at only support DDR5-5200.
I may wait for new Zen 5 boards, or maybe take a gamble on something like the ASUS ProArt, where I saw comments online indicating that ECC is (unofficially?) supported.
Looking forward to Ryzen 9000 ECC benchmarks.
Or other ASUS mainboards. For now, ASUS seems to be the only desktop mainboard manufacturer whose docs officially mention support for "ECC and Non-ECC, Un-buffered Memory".
Yes, I see now that while it isn't advertised on sellers' websites, ASUS's product pages do indeed say that.
If you wanted to upgrade from Zen3/4, don't get your hopes up...
https://www.youtube.com/watch?v=OF_bMt9fVm0
On average, the 9700X is a few percent faster than the 7700X and has slightly lower consumption. Upgrading from Zen 3/Zen 4 is not warranted.
Depends on the workload. It certainly looks like Zen 5 will be meh for gaming build upgrades. On the other hand, these are great options for productivity and workstation builders.
Check Level1Techs' video [1]. This would be a great generation for Epyc too.
[1]: https://youtu.be/JZuV35LgjxU
Yeah, Moore's law seems to be culminating, and the rising tide with it. We seem to be entering an era where new tech improvements are mostly about integrated coprocessors and specialized workloads, and sometimes lower power.
I'm not sure I'd call it "excellent". It's nearly identical to last generation's parts. The only pro so far is the much lower power usage, which is nice.
They are faster than the previous generation in nearly all of the benchmarks while using much less power. I wouldn't call that 'nearly identical'.
By maybe 3% on average, if that. Some benchmarks elsewhere show cases where they're actually a tiny bit slower, due to the lower clock speeds.
I guess thank you, Apple, for finally allowing someone else to use the 3nm process. Everyone is applauding Apple for their incredible silicon, but IMO it's 70% just thanks to TSMC.
These are 4nm.
And 30% soldering RAM on-package.
A bit of a personal question, but can someone tell me if upgrading from a 1800X to a 9700X is worth it? Or should I just upgrade the whole enchilada (mobo, RAM)?
You can't upgrade across CPU sockets: the 1800X is AM4 and the 9700X is AM5. You could upgrade to something like a 5800X if needed.
I went 1700X -> 5700X on a lark, because it was really cheap.
I wasn't expecting much and wasn't sure why I was doing it (I do some occasional gaming, but my RX 580 is almost certainly the bottleneck). I was pleasantly surprised to find my computer noticeably snappier.
There are used 5800Xs under $100 on eBay. I would absolutely recommend grabbing one of those. At $200+ (what I paid) it's nice and I was pleasantly surprised, but man, these things were already pretty fast!
I believe the 9700X would require a new board with an AM5 socket, and it only accepts DDR5 RAM, so: whole enchilada.
You can upgrade to a much better AM4 CPU for very cheap (used)
Buy a 5700X3D for a cheap upgrade.