Well, firstly, it isn't. There are higher Geekbench 6 CPU scores, even forgetting the ones that appear to be fake or at least broken.
But secondly, that would absolutely not indicate that it is the "fastest single-core performer in consumer computing". That would indicate that it is the highest scoring Geekbench 6 CPU in consumer computing.
Whether or not that's actually a good proxy for the former statement is up to taste, but in my opinion it's not. It gives you a rough idea of where the performance stands, but what you really need in order to compare CPUs is a healthy mix of synthetic benchmarks and real-world workloads: things like the time it takes to compile some software, scores in video game benchmarks, running different kinds of computations, time to render videos in Premiere or scenes in Blender, etc.
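To make that concrete, here's a minimal sketch of the kind of "real-world" number I mean: median wall time for a repeatable workload on each machine you want to compare. The build command below is just a placeholder; substitute whatever workload you actually care about.

```python
import statistics
import subprocess
import time

def median_wall_time(cmd, runs=5):
    """Run cmd several times and return the median wall-clock seconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, shell=True, check=True,
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

if __name__ == "__main__":
    # Placeholder workload: swap in your own build, render, or transcode command.
    print(f"median: {median_wall_time('make clean && make -j8'):.1f} s")
```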
In practice though, it's hard to make a good Apples-to-Intels performance comparison, since it will wind up crossing both OS boundaries and CPU architecture boundaries, which adds a lot of variables. At least real-world tests will give an idea of what it would be like day-to-day, even if they don't necessarily reveal which CPU is the absolute best design.
Of course it's reasonable to use Geekbench numbers to get an idea of where a processor stands, especially relative to similar processors, but making a strong claim like this based off of Geekbench numbers is pretty silly, all things considered.
Still... these results are truly quite excellent. Suffice it to say that if you did take the time to benchmark these processors, you would find the M4 performs extremely well against other processors, including ones that suck up a lot more juice, but this isn't too surprising overall. Apple is already on the TSMC N3E process, whereas AMD is currently using TSMC N4P and Intel is currently using TSMC N3B on their most cutting-edge chips. So on top of any advantages they might have for other reasons (like jamming the RAM onto the CPU package, or simply better processor design), they also have a process node advantage.
SPEC has been the industry standard benchmark for comparing the performance of systems using different instruction sets for decades now.
Traditionally, Anandtech would have been the first media outlet to publish the single-core and multi-core integer and floating point SPEC test results for a new architecture, but with Anandtech now shuttered, hopefully some other trusted outlet will take up the burden.
For instance, Anandtech's Zen 5 laptop SKU results vs the M3 from the end of July:
> Even Apple's M3 SoC gets edged out here in terms of floating point performance, which, given that Apple is on a newer process node (TSMC N3B), is no small feat. Still, there is a sizable deficit in integer performance versus the M3, so while AMD has narrowed the gap with Apple overall, they haven't closed it with the Ryzen AI 300 series.
Zen 5 beat Core Ultra, but given that Zen 5 only edged out the M3 in floating point workloads, I wouldn't be so quick to claim the M4 doesn't outperform Zen 5 single core scores before the test results come out.
Just considering the number of simple calculations a CPU can compute isn't a very good comparison. Apple's chips use an ARM architecture, which is a Reduced Instruction Set Computer (RISC) design, vs x86-64, which is a Complex Instruction Set Computer (CISC).
The only good comparison is to judge a variety of real world programs compiled for each architecture, and run them.
> The only good comparison is to judge a variety of real world programs compiled for each architecture, and run them.
I'm guessing that you don't realize that you are describing SPEC?
It's been around since the days when every workstation vendor had their own bespoke CPU design and it literally takes hours to run the full set of workloads.
From the same Anandtech article linked above:
> SPEC CPU 2017 is a series of standardized tests used to probe the overall performance between different systems, different architectures, different microarchitectures, and setups. The code has to be compiled, and then the results can be submitted to an online database for comparison. It covers a range of integer and floating point workloads, and can be very optimized for each CPU, so it is important to check how the benchmarks are being compiled and run.
More info:
> SPEC is the Standard Performance Evaluation Corporation, a non-profit organization founded in 1988 to establish standardized performance benchmarks that are objective, meaningful, clearly defined, and readily available. SPEC members include hardware and software vendors, universities, and researchers.
SPEC was founded on the realization that "An ounce of honest data is worth a pound of marketing hype".
Took a look at these benchmarks. They appear to be using some extremely antiquated code and workloads that do not take advantage of any of the advanced features and instructions introduced over the past 15-20 years in the x86-64 architecture.
Additionally, the only updates they appear to have made in the last 5+ years involve optimizing the suite for Apple chips.
Thus, it leaves out massive parts of modern computing, and the (many) additions to x86-64 that have been introduced since the 00s.
I'd encourage you to look into the advancements that have occurred in SIMD instructions since the olden days, and the way in which various programs, and compilers, have changed to take advantage of them.
ARM is nice and all, but the benchmark you've linked appears to be some extremely outdated schlock that is peddled for $1000 a pop from a web page out of the history books. Really. Take a look through what the benchmarks on that page are actually using for tooling.
I'd consider the results valid if they were calculated using an up to date, and maintained, toolset, like that provided by openbenchmarking.org (the owner of which has been producing some excellent ARM64 vs Intel benchmarks on various workloads, particularly recently).
https://en.wikipedia.org/wiki/SPECint talks about a dozen programs they selected for single-core logic and discrete math, including gcc and bzip2 (there are more than a dozen others using floats).
The higher single-thread GB6 scores are from overclocked Intel or AMD CPUs.
The M4 core @ 4.5 GHz has significantly higher ST GB6 performance than Lion Cove @ 5.7 GHz or Zen 5 @ 5.7 GHz (which are almost equal at the same clock frequency, with at most a 2% advantage for Lion Cove).
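As a rough way to see the per-clock gap, normalize the score by frequency. The scores below are round, illustrative placeholder numbers rather than measured results:

```python
# Illustrative only: placeholder GB6 single-core scores, not measured results.
# Dividing by clock frequency gives a crude points-per-GHz (per-clock) comparison.
chips = {
    "M4 @ 4.5 GHz":        (4050, 4.5),
    "Lion Cove @ 5.7 GHz":  (3400, 5.7),
    "Zen 5 @ 5.7 GHz":      (3350, 5.7),
}
for name, (score, ghz) in chips.items():
    print(f"{name}: {score / ghz:.0f} points/GHz")
```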
Having higher GB6 scores should be representative for general-purpose computing, but there are application domains where the performance of the Apple cores has been poor in the past and where M4 is unlikely to have changed anything, e.g. computations with big integers or array operations done on the CPU cores rather than on the AMX/SME accelerators.
Nevertheless, I believe that your point about the existence of higher ST GB6 scores is not weakened by the fact that those CPUs are overclocked.
For a majority of the computer users in the world, the existence of higher ST performance that can be provided by either overclocked Intel/AMD CPUs or by CPUs made by Apple is equally irrelevant, because those users would never choose any of these 2 kinds of CPUs for their work.
Remember too, back in the day, when you were looking at a Mac with 1/4 the power of a PC at 4x the price. I think we're starting to see those ratios reversed completely. And at the same time, the power draw, heat, etc. are just sitting at the floor.
Yeah, this rings true. I'm not an Apple customer, but I certainly remember the days when Mac users had to justify the much worse bang-for-the-buck of Apple hardware (then) with it being the only ticket to their preferred software experience.
These days it appears more that the hardware is fantastic, especially in the laptop form factor and thermal envelope, and perhaps the downside is a languishing macOS.
I use macOS daily for dev and office work, and to me it doesn’t feel languishing at all. Add Homebrew and Docker or Podman, and we’re off.
The only places I can see there could be features missing are:
- IT management type stuff, where it looks like Apple is happy just delegating to Microsoft (e.g. my workstation is managed with Intune and runs Microsoft Defender pushed by IT),
- CUDA support if you’re into AI on NVIDIA
- Gaming I hear, but I don’t have time for that anyway :)
Of course this is biased, because I also generally just _like_ the look and feel of macOS
> you really need to be able to compare CPUs is a healthy mix of synthetic benchmarks and real-world workloads
I like how Apple got roasted on every forum for using real-world workloads to compare M series processors to other processors. The moment there's a statistic pointing to "theoretical" numbers, we're back to using real-world workload comparisons.
What's crazy is that the M4 Pro is in the Mac mini, something so tiny can handle that chip. The Mac Studio with the M4 Max will be awesome but the Mini is remarkable.
Back in the late 80s, when the Mac product line was still young, Apple came out with their second big honking powerhouse Mac: the Macintosh IIx. It blew everything out of the water. Then they came out with their next budget all-in-one machine. But computing was improving so fast, with prices for components dropping so quickly, that the Macintosh SE/30 ended up as impressive as the Macintosh IIx at a much lower price. That's how the legend of the SE/30 was born, turning it into the best Mac ever for most people.
With how fast and impressive the improvements are coming with the M-series processors, it often feels like we're back in the early 90s. I thought the M1 Macbook Air would be the epitome of Apple's processor renaissance, but it sure feels like that was only the beginning. When we look historically at these machines in 20 years, we'll think of a specific machine as the best early Apple Silicon Mac. I don't think that machine is even out yet.
In the 90s, you probably wouldn't want to be using a desktop from 4 years ago, but the M1 is already 4 years old and will probably be fine for most people for years yet.
No kidding. The M1 MacBook Pro I got from work is the first time I've ever subjectively considered a computer to be just as fast as it was the day I got it.
I think by the time my work-provided M1 MacBook Pro arrived, the M2s were already out, but of course I simply didn't care. I actually wonder when it will be worth the hassle of transferring all my stuff over to a new machine. Could easily be another 4 years.
Maybe the desktops, but the laptops were always nigh-unusable for my workloads (nothing special, just iOS dev in Xcode). The fans would spin up to jet takeoff status, it would thermal throttle, and performance would nosedive.
There was a really annoying issue with a lot of the Intel MacBooks where, due to the board design, charging from the ports on one side would cause them to run quite a bit hotter.
Yeah I remember that, I posted a YouTube video complaining about it 6 years ago, before I could find any other references to the issue online. https://www.youtube.com/watch?v=Rox2IfViJLg
That would cause it to throttle even when idle! But even on battery or using the right-hand ports, under continuous load (edit-build-test cycles) it would quickly throttle.
Or your lap gets hot. Or the fans drive you mad. Good luck with the available ports. Oh, it’s slow AF too, but if you get the right model you can use that stupid Touch Bar.
I bought an M1 MacBook Pro just to use it for net and watching movies when in bed or traveling. I got the Mac because of its 20 hours battery life.
Since Snapdragon X laptops caught up to Apple on battery life, I might as well buy one of those when I need to replace it. I don't need the fastest mobile CPU for watching movies and browsing the internet. But I like to have a decent amount of memory to keep a hundred tabs open.
Apple's marketing is comparing this season's M4s to M1s and even to Intel models from a couple of generations back. The 2x or 4x numbers suggest they are targeting and catering to this longer upgrade cycle, where the implied update is remarkably better, rather than suggesting an annual treadmill, even though each release is "our best ever".
I mean, most people don't buy a new phone each year, let alone something as expensive as a laptop. They are probably still targeting Intel Mac, or M1 users for the most part.
To be fair, MOST computers are like that nowadays, regardless of brand. I'm using a Intel desktop that is ~8 years old and runs fine with an upgraded GPU.
Sure, Apple isn't the only one making good laptops, though they do make some of the best. My point was just that we definitely aren't back at 90s levels of progress. Frequency has barely been scaling since node shrinks stopped helping power density much, and the node shrinks are fewer and farther between.
So long as Apple is willing to keep operating system updates available for the platform. This is by far the most frustrating thing. Apple hardware, amazing and can last for years and even decades. Supported operating system updates, only a couple of years.
I'm typing this from my mid-2012 Retina MacBook Pro. I'm on Mojave and I'm well out of support for operating system patches. But the hardware keeps running like a champ.
> Apple hardware, amazing and can last for years and even decades. Supported operating system updates, only a couple of years.
That’s not accurate.
Just yesterday, my 2017 Retina 4k iMac got a security update to macOS Ventura 13.7.1 and Safari even though it’s listed as “vintage.”
Now that Apple makes their own processors and GPUs, there’s really no reason in the foreseeable future that Apple would need to stop supporting any Mac with an M-series chip.
The first M1 Macs shipped in November 2020—four years ago but they can run the latest macOS Sequoia with Apple Intelligence.
Unless Apple makes some major changes to the Mac’s architecture, I don’t expect Apple to stop supporting any M series Mac anytime soon.
I owned an SE/30. I watched my first computer video on that thing, marveling that it was able to rasterize (not the right word) the color video real-time. I wish I had hung onto that computer.
Agreed. It might share the title with the M1 Air, which was incredible for an ultraportable, but the M1 MBP was just incredible, period. Three generations later it's still more machine than most people need. The M2/3/4 sped things up, but the M1 set the bar.
I was the same with the M1 Air until a couple months ago when I decided I wanted more screen real estate. That plus the 120Hz miniLED and better battery and sound make the 16" a great upgrade as long as the size and weight aren't an issue. I just use it at home so it's fine but the Air really is remarkable for portability.
I have the M1 Air, too. I just plug in to a nice big Thunderbolt display when I need more screen!
I'll likely upgrade to the M4 Air when it comes out. The M4 MacBook Pro is tempting, but I value portability and they're just so chunky and heavy compared to the Air.
It’s not a server, so it’s not a crime to not always be using all of it, and it’s not upgradable, so it needs to be right the first time. I should have gotten 32GB just to be sure.
Apple's sky-high RAM prices and strong resale values make this a tough call, though. It might just about be better to buy only the RAM you need and upgrade earlier, considering you can often get 50% or more of the price of a new one back by selling your old one.
Thankfully, Apple recently made 16GB the base RAM in all Macs (including the M2/M3 MacBook Airs) anyway. 8GB was becoming a bad joke and it could add 40% to the price of some models to upgrade it!
Yep, that's definitely a thing I'm proud of correctly foreseeing. I was upgrading from an old machine with 8GB, but I figured, especially with memory being non-upgradable, it was better to be safe than sorry, and if I kept the machine a decade it would come out to sandwich money in the end.
>the Macintosh IIx. It blew everything out of the water.
Naa... the Amiga had the A2500 around the same time, and the Mac IIx wasn't better spec-wise in most ways. And at about $4,500 more (the Amiga 2500 was around $3,300, the Mac IIx $7,769), it was vastly overpriced, as is typical for Apple products.
Worth remembering that Commodore, the company behind the Amiga, went out of business just a few years later, while Apple today is the largest company in the world by market capitalisation. It doesn't matter how good the product is: if you're not selling it for a profit, you don't have a sustainable business. Apple products aren't overpriced as long as consumers are still buying them and coming back for more.
I've got one and it's really not that impressive. I use it as a "desktop" though and not as a laptop (as in: it's on my desk hooked to a monitor, never on my lap).
I'm probably gonna replace it with a Mini with that M4 chip anyway but...
My AMD 7700X running Linux is simply a much better machine/OS than that MacBook M1 Air. I don't know if it's the RAM on the 7700X or the WD-SN850X SSD or Linux but everything is simply quicker, snappier, faster on the 7700X than on the M1.
I hope the M4 Mini doesn't disappoint me as much as the M1 Air.
Yes, but I suspect the 64GB of memory in the Studio, compared to 24GB in the Mini, is going to make the Studio a lot faster in many real-world scenarios.
It would be $2,199 for the highest end CPU and the 64GB of memory, but I think your point remains: the Studio is not a great buy until it receives the M4 upgrades.
I’m already planning on swapping mine for an M4 Ultra.
I love my M1 Studio. It’s the Mac I always wanted - a desktop Mac with no integrated peripherals and a ton of ports - although I still use a high end hub to plug in… a lot more. Two big external SSDs, my input peripherals (I’m a wired mouse and keyboard kind of guy), then a bunch of audio and USB MIDI devices.
It’s even a surprisingly capable gaming machine for what it is. Crossover is pretty darn good these days, and there are ARM native Factorio and World of Warcraft ports that run super well.
I like it in every way except price. It just works, comes back online after a power outage, etc. I don't recall any unscheduled disconnects.
--
Additional thoughts:
I think there are complaints about the fan being loud so I swapped it out when I first got it. I also have it in my basement so I don't hear anything anyway -- HDDs are loud, especially the gold ones
I am still astounded by the huge change that moving from an Intel Mac to an Apple Silicon Mac (M1) has made in terms of battery performance and heat. I don't think I've heard the fans a single time I've had this machine, and it's been several years.
I never thought I'd see a processor that was 50% faster single-core and 80% faster multi-core and just shrug. My M1 Pro still feels so magically fast.
I'm really happy that Apple keeps pushing things and I'll be grateful when I do decide to upgrade, but my M1 Pro has just been such a magical machine. Every other laptop I've ever bought (Mac or PC) has run its fan regularly. I did finally get fan noise on my M1 Pro when pegging the CPU at 800% for a while (doing batch conversion of tons of PDFs to images) - and to be fair, it was sitting on a blanket which was insulating it. Still, it didn't get hot, unlike every other laptop I've ever owned, which would get hot even under normal usage.
It's just been such a joyful machine.
I do look forward to an OLED MacBook Pro and I know how great a future Apple Silicon processor will be.
My best Apple purchases in 20 years of being their customer: The Macbook M1 Pro 16 inch and the Pro Display XDR. When Steve Jobs died I really thought Apple was done, but their most flawless products (imho) came much later.
Yeah, don’t forget the dark years from the butterfly-keyboard MacBook Pro of 2016 and the Emoji MacBook Air until the restoration of the USB ports… around 2021.
I had the 2015 MBP and I held onto it until the M1 came out…I still have it and tbh it’s still kind of a great laptop. The two best laptops of the past decade for sure.
Yeah, I have an M1 Max 64GB and don't feel any need to upgrade. I think I'll hit the need for more ram before a processor increase with my current workload.
I've got a coworker who still has an Intel MacBook Pro, 8-core i9 and all that, and I've been on M chips since they launched. The other day he was building some Docker images while we were screensharing and I was dumbfounded at how long it took. I don't think I can even remember a recent time when building images, even ones pushed via CDK etc., took more than a minute or so. We waited and waited and finally after 8 minutes it was done.
He told me his fans were going crazy and his entire desk was hot after that. Apple silicon is just a game changer.
Sounds like they were building an aarch64 image; building an x86_64 image on Apple Silicon will also be slow - unless you are saying the M* builds x86_64 images faster than an i9?
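A minimal sketch of the sanity check I mean, assuming the Docker CLI is installed; the image name and target platform below are made up for illustration. If the image platform doesn't match the host CPU, the build goes through emulation and gets much slower.

```python
import platform
import subprocess

target = "linux/amd64"             # hypothetical platform you want the image built for
host = platform.machine().lower()  # 'arm64' on Apple Silicon, 'x86_64' on the i9

host_is_arm = "arm" in host or "aarch64" in host
target_is_arm = "arm64" in target
if host_is_arm != target_is_arm:
    print(f"Warning: building {target} on a {host} host goes through emulation and will be slow.")

# docker buildx's --platform flag is the switch for cross-platform builds.
subprocess.run(["docker", "buildx", "build", "--platform", target, "-t", "myimage", "."])
```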
That was what finally got me to spend the cash and go with Apple Silicon - we switched to a Docker workflow and it was just doooooog slow on the Intel Macs.
But this M1 Max MBP is just insane. I'm nearly 50 and it's the best machine I've ever owned; nothing is even close.
For sure. I had one of those for work when I got my personal M1 Air and I couldn't believe how much faster it was. A fanless ultraportable faster than an 8-core i9!
I was so happy when I finally got an M1 MBP for work because as you say Docker is so much faster on it. I feel like I don't wait for anything anymore. Can't even imagine these new chips.
> I am still astounded by the huge change that moving from an Intel Mac to an Apple Silicon Mac (M1) has made in terms of battery performance and heat
The battery life improvements are great. Apple really did a terrible job with the thermal management on their last few Intel laptops. My M1 Max can consume (and therefore dissipate) more power than my Intel MBP did, but the M1 thermal solution handles it quietly.
The thermal solution on those Intel MacBooks was really bad.
Those MacBooks were designed when Intel was promising new, more efficient chips and they didn’t materialize. Apple was forced to use the older and hotter chips. It was not a good combination.
Another factor might be that Intel MacBook Pros got thinner and thinner. The M1 MBP was quite a bit thicker than its Intel predecessors, and I think the form factor has remained the same since then.
> My M1 Max can consume (and therefore dissipate) more power than my Intel MBP did, but the M1 thermal solution handles it quietly.
You have to really, REALLY put in effort to make it operate at rated power. My M2 MBA idles at around 5 watts, my work 2019 16-inch i9 is around 30 watts in idle.
Lol... you were not around for the PPC -> Intel change. The same thing happened then: a remarkable performance uplift over the last instruction set, and we had Rosetta, which allowed compatibility. The M1 and ARM took power efficiency to another level. But yeah, what has happened before will happen again.
The thing then was that it was just Apple catching up with Windows computers, which had had a considerable performance lead for a while. It didn't really seem magical to just see it finally matched. (Yes, Intel Macs got better than Windows computers, but that was later. At launch it was just matching.)
It's very different this time because you can't match the performance/battery trade-off in any way.
Intel chips had better integer performance and PowerPC chips had better floating point performance, which is why Apple always used Photoshop performance tests to compare the two platforms.
Apple adopted Intel chips only after Intel replaced the Pentium 4 with the much cooler running Core Solo and Core Duo chips, which were more suitable for laptops.
Apple dropped Intel for ARM for the exact same reason. The Intel chips ran too hot for laptops, and the promised improvements never shipped.
The G5 in desktops was more competitive but laptops were stuck on G4s that were pretty easy to beat by lots of things in the Windows world by the time of the Intel switch. And Photoshop was largely about vectorized instructions, as I recall, not just general purpose floating point.
Yes, and when it became clear that laptop sales would one day outpace desktop sales, Apple made the Intel switch, despite it meaning they had to downgrade from 64 bit CPUs to 32 bit CPUs until Core2 launched.
The Apple ecosystem was most popular in the publishing industry at the time, and most publishing software used floating point math on tower computers with huge cooling systems.
Since IBM originally designed the POWER architecture for scientific computing, it makes sense that floating point performance would be what they optimized for.
If it's a M1 Macbook Air there's a very good reason you've never heard a fan!
Blows my mind how it doesn't even have a fan and is still rarely anything above body temperature. My 2015 MBP was still going strong for work when I bailed on it late last year, but the difference purely in the heat/sound emitted has been colossal.
It's not just that; at times I pushed all CPU cores to 100% on the M1 Mini and even after 30+ minutes I couldn't hear the fan. Too bad the MacBook Airs got nothing but a literal aluminium sheet as a cooling solution.
Factorio: Space Age is the first piece of software that my M1 shows performance issues with. I'm not building Xcode projects or anything, but it is a great Mac. Maybe even the greatest.
There's a known issue on ARM Macs with external monitors that messes with the framerates. Hopefully it gets fixed soon, because pre-Space Age Factorio was near flawless in performance on my M2.
I do wonder if PC desktops will eventually move to a similar design. I have a 7800x3d on my desktop, and the thing is a beast but between it and the 3090 I basically have a space heater in my room
A game I play with friends introduced a Mac version. I thought it would be great to use my Apple Silicon MacBook Pro for some quiet, low-power gaming.
The frame rate wasn’t even close to my desktop (which is less powerful than yours). I switched back to the PC.
Last time I looked, the energy efficiency of Nvidia GPUs in the lower TDP regions wasn't actually that different from Apple's hardware. The main difference is that Apple hardware isn't scaled up to the level of big Nvidia GPUs.
I sincerely believe that the market for desktop PCs has been completely co-opted by gaming machines. Those buyers do not care one whit about machine size or energy efficiency, with only one concern in mind: raw performance. This means they buy ginormous machines and incredibly inefficient CPUs and GPUs, with cavernous internals to chuck heat out with no care for decibels.
But they spend voraciously. And so the desktop PC market is theirs and theirs alone.
Desktop PCs have become the Big Block V8 Muscle Cars of the computing world. Inefficient dinosaur technology that you pour gasoline through and the output is heat and massive raw power.
Desktops are actually pickup trucks. Very powerful and capable of everyday tasks, but less efficient at them. Unbeatable at their specialty, though.
Yeah. It's been the case for a while now that if someone just wants a general computer, they buy a laptop (even commonly a mac).
That's why the default advice if you're looking for 'value' is to buy a gaming console to complement your laptop. Both will excel at their separate roles for a decade without requiring much in the way of upgrades.
The desktop pc market these days is a luxury 'prosumer' market that doesn't really care about value as much. It feels like we're going back to the late 90's, early 2000's.
The price of a high end gaming pc (7800x3d and 4080) is around 2k USD. That's comparable to the MacBook Pro.
Yeah sure, if you start buying unnecessary luxury cases, fans and custom water loops it can jump up high, but that's more for clueless rich kids or enthusiasts. So I wouldn't call PC gaming an expensive hobby today, especially considering Nvidia's money-grubbing practices, which won't last forever.
I just bought a Beelink SER9 mini pc, about the same size as the Mac Mini. It's got the ridiculously named AMD Ryzen AI 9 HX 370 processor, a laptop CPU that is decently fast for an X64 chip (2634/12927 Geekbench 6 scores) but isn't really competition for the M4. The GPU isn't up to desktop performance levels either but it does have a USB4 port capable of running eGPUs.
It would make sense, but it depends heavily on Windows/Linux support, compatibility with Nvidia/AMD graphics cards, and exclusivity contracts with Intel/AMD. Apple is not likely to make their chips available to OEMs at any rate, and I haven't heard of anyone else working on a powerful desktop ARM-based CPU in recent years.
It would be nice. Similarly have a 5950X/3080Ti tower and it’s a great machine, but if it were an option for it to be as small and low-noise as the new Mini (or even the previous mini or Studio), I’d happily take it.
For what it is worth, I'm running that with open loop water cooling. If your chassis has the space for it, my rig won't even need to turn on fans for large amounts of the day. (Loop was sized for a threadripper, which were not really around for home builders) Size is an issue, however :)
ARM processors have always been good at handling heat and low power (like AWS Graviton), but what laptop did you have before that would overheat that much during normal usage? That seems like a very poor design.
I had a 2019 MBP which, with default fan settings, got really hot from a one-hour video call in Bluejeans. Or five minutes navigating Google Maps and Street View in Chrome.
The only solution was to set the fan speed profile to max RPM.
On my 2019, if a single process hits 100% of one core the fan becomes quite noticeable (not hairdryer though) and the top of the keyboard area where the CPU is gets rather toasty.
Anything that pegged the CPU for extended periods of time caused many Apple laptop models to overheat. There is some design tradeoff between power specs, cooling, "typical workloads" and other things. A common and not-new example of a heat-death workload was video editing.
Not for everyone. It turns out by following standard ergonomic guidelines I was doing more damage. I have to actually look way down at monitors, even on my desk. It has to be well below eye height, basically slammed.
Sometimes even not opening any apps is not enough if Spotlight decides that now is the time to update its index or something similar. Honestly nuts looking back at it.
I remember when macOS switched to an evented way of handling input and for some reason decided that dropping keyboard events was okay... anyway, if Spotlight was updating its index, unlocking your laptop with a password was impossible.
Last year I bought an M1 Pro used, but the last MBP I had was an early 2015. I just didn't bother upgrading; in fact the later Intel models were markedly worse (keyboard, battery life, quality control). The Apple Silicon era is going to be the PowerPC prime all over again.
> I don't think I've heard the fans a single time I've had this machine and it's been several years.
Yes I agree. I sometimes compile LLVM just to check whether it all still works. (And of course to have the latest LLVM from main ready in case I need it. Obviously.)
On extremely heavy workloads the fans do engage on my M1 Max, but I need to get my ear close to the machine to hear them.
Recently my friend bought a laptop with an Intel Ultra 9 185H. Its fans roared even when opening Word. That was extraordinary, and if it had been me making the purchase I would have sent it back straight away.
My friend did fiddle a lot with settings and had to update the BIOS, and eventually the fan situation was somewhat contained, but man, I am never going to buy an Intel/AMD laptop. You don't know how annoying fan noise is until you get a laptop that is fast and doesn't make any noise. With Intel it's like having a drill pointed at your head that can go off at any moment, and let's not mention phantom fan noise, where it gets so imprinted in your head that your brain makes you think the fans are on when they are not.
Apple has achieved something extraordinary. I don't like MacOS, but I am getting used to it. I hope one day this Asahi effort will let us replace it.
When I play Baldur's Gate 3 on my M2 Max, the fans get loud. You need a workload that is both CPU-heavy and GPU-heavy for that. When you are stressing only the CPU or the GPU but not both, the fans stay quiet.
16” M1 still perfectly good machine for my work (web dev). Got a battery replacement which also replaced top cover and keyboard - it’s basically new. Extended applecare for another year which will turn it into fully covered 4 year device.
Is it crazy? The chip itself is small. I'm not up on the subject but is it unusual? Are we talking power draw and cooling adding greatly to the size? I guess the M4 Pro must have great specs when it comes to running cool.
Yes, but you only really encounter that when pushing the CPU to 100% for more than a few minutes. The cooling is objectively terrible, but still easily enough for most users, that's the crazy thing.
Maybe? As local LLM/SD etc. get more common, it might become common to push it. I've had my fans come on and the machine get burning hot quite often lately because of new tech. I get that I'm a geek, but with Apple, Google and everyone else trying to run local ML it's only a matter of time.
Apple's chips already have AI accelerators for things like content-based image search. They would never retroactively worsen battery life and performance just for a few more AI features when they could instead use it as selling point for the next hardware generation.
And if you regularly use local generative AI models the Pro model is the more reasonable choice. At that point you can forget battery life either way.
After posting this I thought of a few possible use cases. They might never come to pass, but... Some tech similar to DLSS might come along that lets streaming services like YouTube and Netflix send 1/10th the data and get twice as good an image, but requires extreme processing on the client. It would certainly be in their interest (less storage, less bandwidth, decompression/upscaling costs pushed to the client). Whether that will ever happen I have no idea. I was just trying to think of an example of something that might need lots of compute power at home for the masses.
Another could be realtime video modification. People like to stream and facetime. They might like it even more if they could change their appearance more than they already can using realtime ML based image processing. We already have some of that in the various video conferencing / facetime apps but it's possible it could jump up in usage and needed compute power with the right application.
Hopefully not? I honestly don't know. It's been around three years (whichever year it was they replaced Target Disk Mode) since I followed Apple news very closely.
It might be different post-Intel? I'm too lazy to dig up sources for Apple's past lost class action lawsuits, etc.
That Rossmann guy, the internet-famous repairman, built his YouTube channel on videos about Apple's inadequate thermal management. They're probably still archived on his channel.
Hell, I haven't owned a Mac post the year 2000 that didn't regularly hit temperatures above 90 Celsius.
Why would you, or anyone, ever compare a line of Intel machines with a line of machines that have a vastly different architecture and power usage? It'd be like comparing Lamborghini's tractors and cars and asking if the tractors will scrape on steep driveways because you know the cars do.
On the other hand, it is comparing Apples to Apples.
The Gods didn't deliver specs to Apple for Intel machines locking the company to placement/grades/design/brands/sizes of chassis, fans, logic board, paste etc. Apple, in the Intel years, just prioritized small form factor, at the expense of longevity.
And Apple's priorities are likely still the same.
My concern is that, given cooler-running chips, Apple will decrease form factor until even the cooler-running chips overheat. The question, in my mind, is only whether the team at Apple who design chips can improve them to a point where the chips run so coolly that the rest of Apple can't screw it up (ie: with inadequate thermal design).
If that has happened, then... fantastic, that's good for consumers.
Jony Ive left and Apple decided thinness wasn’t the only value.
100%. Apple Silicon is that for computers. Very rarely do my fans whizz up. It’s noticeable when someone is using an x64 machine and you’re working with them, because you will hear their computer’s fans.
The work Apple has done to create a computer with good thermals is outrageous. Minimising distances for charges to be induced over.
I run Linux on my box. It’s great for what it does but these laptops are just the slickest computers I have ever used.
Never gets hot. Fans only come on during heavy compilation tasks or graphic intensive workloads.
That is encouraging to read, and hopefully it truly is the case that Apple has weaned itself off its obsession with thinness.
Some of the choices Apple made after SJ's death left such an unpleasant taste in my mouth that I now have knee-jerk reactions to certain Apple announcements. One of those is that I experience nausea when Apple shrinks the form factor of a product. Hopefully that has clouded my judgement here, and in fact these Mac Minis have sufficient airflow to survive several years.
110 celsius heat... not good for lead-free solder... not good for computer.
This whole thread is starting to feel surreal to me. Pretty soon everyone will have me believing I dreamt up Apple's reputation for bad thermal management.
Well, when you don’t appear to know or care about the actual issues stemming from poor thermals (Intel relying too much on turbo clocks, toasty crotches, low battery life, noisy fans) and instead complain about made-up issues, yeah.
My frustration was with the totality of comments in the thread, not yours exclusively. I'd have no problem with any one reply in this thread, on its own. Apologies if I came across as rude.
There's nothing in a comment thread so cringeworthy and boring as a person trumpeting their own expertise, so I'll refrain, and leave off here.
Does anyone know if this Mac Mini can survive longer than a year? Apple's approach to hardware design doesn't prioritize thermal issues.
I've had an M1 Mac Mini inside a hot dresser drawer with a TV on top since 2020.
It doesn't do much other than act as a media server. But it's jammed pretty tight in there with an eero wifi router, an OTA ATSC DVR, a box that records HDMI, a 4K AppleTV, a couple of external drives, and a full power strip. That's why it's hot.
So far, no problems. Except for once when I moved, it's been completely hands-off. Software updates are done over VNC.
I'll admit to some reflexive skepticism here. I know Geekbench at least used to be considered an entirely unserious indicator of performance, and any discussion relating to its scores used to be drowned out by people explaining why it was so bad.
Do those criticisms still hold? Are serious people nowadays taking Geekbench to be a reasonably okay (though obviously imperfect) performance metric?
I verified Geekbench results to be very tightly correlated with my use case and workloads (JVM, Clojure development and compilation) as measured by my wall times. So yes, I consider it to be a very reliable indicator of performance.
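For what it's worth, that sanity check is simple enough to sketch: correlate GB6 single-core scores with measured build wall times across machines. The numbers below are placeholders, not my actual data, and `statistics.correlation` needs Python 3.10+.

```python
from statistics import correlation  # Python 3.10+

gb6_single = [2300, 2750, 3100, 3900]  # hypothetical scores, one per machine
build_secs = [118, 97, 86, 69]         # hypothetical clean-build wall times (s)

# A good proxy should show a strong negative correlation:
# higher score -> lower wall time.
print(f"r = {correlation(gb6_single, build_secs):.2f}")
```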
> Apple has now made four generations of chips and each one was class leading upon release.
Buying up most of TSMC's latest node capacity certainly helps. Zen chips on the same node turn out to be very competitive, but AMD doesn't get first dibs.
It’s more like Apple fronts the cash for TSMC’s latest node. But regardless, in what way does that detract from their chips being class-leading at release?
Because the others can't use that node, there are no others in that same class. If there was a race, but one person is on foot, and the other is in a car, it's not surprising if the person in the car finishes first.
Eventually Apple moves off a node and the others move on.
People pretend like this isn’t a thing that’s already happened, and that there aren’t fair comparisons. But there are. And even when you compare like for like Apple Silicon tends to win.
Line up the node, check the wattages, compare the parts. I trust you can handle the assignment.
I don’t think a lot of people fully understand how closely Apple works with TSMC on this, too. Both in funding them, holding them accountable, and providing the capital needed for the big foundry bets. It’s kind of one of those IYKYK things, but Apple is a big reason TSMC actually is the market leader.
If that's all we cared about we wouldn't be discussing a Geekbench score in the first place. The OP could have just posted the statement without ever mentioning a benchmark.
I was just curious if people had experience with how reliable Geekbench has been at showing relative performance of CPUs lately.
I blame it on the PC crowd being unconsciously salty the most prestigious CPU is not available to them. You heard the same stuff when talking about Android performance versus iPhone.
There is a lot to criticize about Apple's silicon design, but they are leading the CPU market in terms of mindshare and attention. All the other chipmakers feel like they're just trying to follow Apple's lead. It's wild.
I was surprised and disappointed to see that the industry didn’t start prioritizing heat output more after the M1 generation came out. That was absolutely my favorite thing about it, it made my laptop silent and cool.
But anyway, what is it you see to criticize about Apple‘s Apple Silicon design? The way RAM is locked on package so it’s not upgradable, or something else?
I’m kind of surprised, I don’t hear a lot of people suggesting it has a lot to be criticized for.
It was wild to see the still-ongoing overclocking GHz competition, while suddenly one could use a laptop with good performance, no fans, and no noise, even while using it on the go.
As demonstrated by the M1-M3 series of chips, essentially all of that lead was due to being the first chips on a smaller process, rather than to anything inherent to the chip design. Indeed, the Mx series of chips tend to be on the slower side of chips for their process sizes.
Most people who say things like this tend to deeply misunderstand TDP and end up making really weird comparisons. Like high wattage desktop towers compared to fan-less MacBook Airs.
The process lead Apple tends to enjoy no doubt plays a huge role in their success. But you could also turn around and say that’s the only reason AMD has gained so much ground against Intel. Spoiler: it’s not. Process node and design work together for the results you see. People tend to get very stingy with credit for this though if there’s an Apple logo involved.
The performance-per-watt isn’t necessarily the best. These scores are achieved when boosted and allowed to draw significantly more power. Apple CPUs may seem efficient because, most of the time, computers don’t require peak performance. Modern ARM microarchitectures have been optimized for standby and light usage, largely due to their extensive use in mobile devices. Some of MediaTek and Qualcomm's CPUs can offer better performance-per-watt, especially at lower than peak performance. The issue with these benchmarks is that they overlook these nuances in favor of a single number. Even worse, people just accept these numbers without thinking about what they mean.
Geekbench is an excellent benchmark, and has a pretty good correlation with the performance people see in the real world where there aren't other limitations like storage speed.
There is a sort of whack-a-mole thing where adherents of particular makers or even instruction sets dismiss evidence that benefits their alternatives, and you find that at the root of almost all of the "my choice doesn't win in a given benchmark means the benchmark is bad" rhetoric. Then they demand you only respect some oddball benchmark where their favoured choice wins.
AMD fans long claimed that Geekbench was in cahoots with Intel. Then when Apple started dominating, that it was in cahoots with ARM, or favoured ARM instruction sets. It's endless.
Any proprietary benchmark that's compiled with the mystery meat equivalent of compiler/flags isn't "excellent" in any way.
SPECint compiled with either the vendor compiler (ICC, AOCC) or the latest gcc/clang would be a good neutral standard, though I'd also want to compare SIMD units more closely with x265 and Highway based stuff (vips, libjxl).
And how do you handle the fact that you can't really (yet) use the same OS for both platforms? Scheduler and power management counts, even for dumb number crunching.
I'm not skeptical of Apple's M-series chips. They have proven themselves to be quite impressive and indeed quite competitive with traditional desktop CPUs even at very low wattages.
I'm skeptical of Geekbench being able to indicate that this specific new processor is robustly faster than say a 9950x in single-core workloads.
It's robustly faster at the things that Geekbench is measuring. You can find issue with the test criteria (measures meaningless things or is easy to game) but the tests themselves are certainly sound.
It'll still be at the top of SPECint 2017 which is the real industry standard.
Geekbench 6.3 slightly boosted Apple Silicon scores by adding SME - a very niche instruction set extension which is never used in SPECint workloads. So the gap may not be as wide as GB6.3 implies.
It's by no means a be all end all "read this number and know everything you need to know" benchmark but it tends to be good enough to give you a decent idea of how fast a device will be for a typical consumer.
If I could pick 1 "generic" benchmark to base things off of I'd pick PassMark though. It tends to agree with Geekbench on Apple Silicon performance but it is a bit more useful when comparing non-typical corner cases (high core count CPUs and the like).
Best of all is to look at a full test suite and compare for the specific workload types that matter to you... but that can often be overkill if all you want to know is "yep, Apple is pulling ahead on single thread performance".
I'm confused. They're claiming "Apple’s M4 Max is the first production CPU to pass 4000 Single-Core score in Geekbench 6." yet I can see hundreds of other test results for single core performance above 4000 in the last 2 years?
They've been announced, within the past two weeks, and as far as I can tell aren't actually available for purchase from retailers yet: the only thing I've seen actually purchasable is Crucial's 6400MT/s CUDIMMs, and Newegg has an out-of-stock listing for a G.Skill kit rated for 9600MT/s.
The linked Geekbench result from August running at 7614 MT/s clearly wasn't using CUDIMMs; it was a highly-overclocked system running the memory almost 20% faster than the typical overclocked memory speeds available from reasonably-priced modules.
That result is completely different from pretty much every other 13700k result and it is definitely not reflective of how a 13700k performs out of the box.
Geekbench doesn't really give accurate information (or enough of it) in the summary report to make that kind of conclusion for an individual result. The one bit of information it does reliably give, memory frequency, says the CPU's memory controller was OC'd to 7600 MT/s from the stock 5600 MT/s, so it feels safe to say that a result with 42% more performance than the entry in the processor chart also had some other tweaks going on (if not actual frequency OCs/static frequency locks, then exotic cooling or the like). The main processor chart, https://browser.geekbench.com/processor-benchmarks, will give you a solid idea of where stock CPUs rank - if a result has double-digit differences from that number, assume it's not a stock result.
E.g. this is one of the top single core benchmark results for any Intel CPU https://browser.geekbench.com/v6/cpu/5568973 and it claims the maximum frequency was stock as well (actually 300 MHz less than thermal velocity boost limits, if you count those).
Could those be overclockers? I often see strange results on there that look like either overclockers or prototypes. Maybe they mean this is the fastest general-purpose single core you can buy off the shelf with no tinkering.
I'm driving a 2022 XPS. Lots of people will (and should) disagree, but I've completely shifted over from Thinkpads to Dell XPS (or Precision) for my laptops.
Running a 2024 xps 13 with Ubuntu for work and it's been solid. Had a Lenovo before this which was great bang for the buck but occasional issues with heating up during sleep. Would consider trying a Framework next.
The M4 is almost 1/3rd faster than the top Intel (on this benchmark)?
I had no idea the difference was that big. I don't know what a normal Geekbench score is, so I just sort of assumed that the top-of-the-line Intel part would be something like 3700 or 3800. Enough that Apple clearly took a lead, but nothing crazy.
I have been out of the PC world for a long time, but in terms of performance efficiency, is Apple running away from the competition? Or are AMD and Intel producing similar performing chips at the same wattage?
Apple is slightly pulling away. AMD's top desktop chips were on par with M1/M2/M3 1T but now they cannot match even M4 despite releasing a new design (Zen 5) this year.
It's partially because AMD is on a two year cadence while Apple is on approximately a yearly cadence. And AMD has no plans to increase the cadence of their Zen releases.
2020 - M1, Zen 3
2021 - ...
2022 - M2, Zen 4
2023 - M3
2024 - M4, Zen 5
Edit: I am looking at peak 1T performance, not efficiency. In that regard I don't think anyone has been close.
> Edit: I am looking at peak 1T performance, not efficiency. In that regard I don't think anyone has been close.
Indeed. Anything that approaches Apple performance does so at a much higher power consumption. Which is no biggie for a large-ish desktop (I often recommend getting middle-of-the-road tower servers for workstations).
Don’t thermals basically explode non-linearly with speed?
It’s possible Apple’s chips could be dramatically faster if they were willing to use up 300W.
I remember seeing an anecdote where Johny Srouji, the chief Apple Silicon designer, said something like the efficiency cores get 90% of the performance of the performance cores at like 10% of the power.
I don’t remember the exact numbers but it was staggering. While the single core wouldn’t be as high, it sounded as if they could (theoretically) make a chip of only 32 efficiency cores and just sip power.
Their margins tend to allow them to always use the latest TSMC process so they will often be pretty good just based on that. They are also ARM chips which obviously have been more focused on efficiency historically.
They actually work with TSMC to develop the latest nodes. They also fund the bulk of the development. It's not as if without Apple's funds someone else will get the same leading edge node.
Oh how the mighty have fallen. For decades, when comparing Mac versus PCs, it was always about performance, with any other consideration always derided.
Yet here we are, with the excuses of margins and silicon processes generations. But you haven't answered the question. Is Apple pulling ahead or is the x86 cabal able to keep up?
My assessment is that ARM is running away from the competition. Apple is indeed designing the chip, but without the ARM architecture, Apple would have nothing to work with. This is not to diminish the incredible work of Apple’s VLSI team who put the chip architecture together and deftly navigated the Wild West of the fabrication landscape, but if you look at the specialized server chip side, it’s now dominated by ARM IP. I think ARM is the real winner here.
They have a good silicon design team, but having so much money that they can just buy out exclusive access to TSMCs most advanced processes doesn't hurt either. The closest ARM competitor to the M4, the Snapdragon X Elite, is a full node behind on 4nm while Apple is already using 2nd generation 3nm.
For some benchmarks the Snapdragon is on par with the M3. But the weirdo tests I found online did not say which device they compared, since the M3 is available in fan-less machines, which limits its potential.
That’s a really fair point. I think it’s tough for anyone else to break into the consumer / desktop segment with ARM chips. Apple can do it because they control the whole stack.
Is this really about the architecture itself, or about the licensing of it? AMD and Intel are, afaik, the only ones legally allowed to use x86, and likely have no plans to allow anyone else.
For many workloads I think they are definitely pulling ahead. However, I think there is still much to gain in software. For example, my Linux/Fedora desktop with a 5900X is many times more responsive than my 16” M1 Pro.
Java runs faster. GraalVM-generated native images run way faster. Golang runs faster. x86_64 has seen more love from optimizations than aarch64 has. One of the things I hit was different GC/memory performance due to different page sizes. Moreover, Docker runs natively on Linux, and the network stack itself is faster.
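(For the curious, the page-size difference is easy to see for yourself; a minimal check:)

```python
# Apple Silicon macOS reports 16384 (16 KiB) here; typical x86_64 Linux reports 4096.
import os
print(os.sysconf("SC_PAGE_SIZE"), "bytes per page")
```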
But even given all of that, the 16” M1 Pro edges close to the desktop. (When it is not constrained by antivirus.) And it does this in a portable form factor, with way less power consumption. My 5900X tops out at about 180W.
So yes, I would definitely say they are pulling ahead.
I suspect that’s an OS issue. Linux is simply more optimized and faster at equivalent OS stuff.
Which isn’t too surprising given a lot of the biggest companies in the world have been optimizing the hell out of it for their servers for the last 25+ years.
On the flipside of the coin though Apple also clearly optimizes their OS for power efficiency. Which is likely paying some good dividends.
I was looking into this recently as my M1 Max screen suddenly died out of the blue within warranty and Apple are complete crooks wrt honouring warranties.
The AMD mobile chips are right there with the M3 for battery life and have excellent performance; only I couldn't find a complete system that shipped with the same size battery as the MBP16. They're all either half or two-thirds of the capacity.
> and Apple are complete crooks wrt honouring warranties
Huh? I've used AC for both MBP and iPhones a number of times over the years, and never had an issue. They are known for some of the highest customer ratings in the industry.
They claimed that it wasn't covered because the machine wasn't bought in Germany. I live in The Netherlands and bought it here. Also, I contacted Apple Support, who checked my serial number and then gave me the address to take it to. Which I did.
They charged me $100 to get my machine back without repair.
Also bear in mind that the EU is a single market, warranties etc are, by law, required to be honoured over the ENTIRE single market. Not just one country.
Especially when the closest Apple Store to me is IN GERMANY.
I have since returned it to Amazon who will refund it (they're taking their sweet time though, I need to call them next week as they should have transferred already).
So you haven't purchased it from Apple, but instead you've purchased it from Amazon. This may change things. In Europe you have two ways of dealing with it: either the manufacturer warranty (purely goodwill, on terms set by the manufacturer) or your consumer rights (guaranteed to you by law, overruling any warranty restrictions).
Sellers will often try to steer you toward the warranty, as it removes their responsibility; Amazon is certainly shady here. Apple will often straight up give you a full refund or a new device (often a newer model); that has happened to me with quite a few iPhones and MacBooks.
I had a MacBook Pro 2018 that a friend of mine bought for me in Moscow because it was much cheaper there (due to grey import, I think). I didn't have AppleCare or anything. When its Touch Bar stopped working in 2020, I brought it to the Apple Store in Amsterdam and complained about it, and also about faulty butterfly keys (one keycap fell off, and the "t" and "e" keys were registering double presses each time). So the guys at the Apple Store simply took it and replaced the whole top case, so I got a new keyboard, a new Touch Bar, and - the best part - a new battery.
10 years ago, the Genius Bar would fix my problem to my satisfaction in almost no time -- whether or not I had Apple Care. They'd send it off for repair immediately or fix it in less than an hour. 2 out of 3 iPhone 6 that had an issue, they just handed me a new device.
Today, Apple wastes my time.
Instead of the old experience of common sense, today the repair people apparently must do whatever the diagnostic app on the iPad says. My most recent experience was spending 1.5 hours to convince the guy to give me a replacement Airpods case. Time before that was a screen repair where they broke a working FaceID ... but then told me the diagnostics app told them it didn't work, so they wouldn't fix it.
I'm due for another year of AppleCare on my MBP M1, and I'm leaning towards not re-upping it. Even though it'd be 1/20th of the cost of a new one, I don't want to waste time arguing with them anymore.
On the same node, the performance is quite similar. Apple's previous CPU (M3) was already a 3nm part, while AMD's latest and greatest Zen 5 is still on TSMC's 4nm.
There have always been higher performing x64 chips than the M series but they use several times more power to get that.
Apple seems to be reliably annihilating everyone on performance per watt at the high end of the performance curve. It makes sense since the M series are mobile CPUs on ‘roids.
How impressed should I be, in terms of Apple's history of manufacturing chips compared to, say, Intel's? This is their 4th generation of the M chip and it seems to be so far ahead of Intel, a company with a significantly longer history of chip production.
They were in the PowerPC consortium starting in 1991, co-developed ARM6 starting in the late 80s and the M series chips are part of the Apple Silicon family that goes back to at least 2010's Apple A4 (with non-Apple branded chips before then).
They've been in the chip designing business for a while.
Actually difficult to know if it was Keller. Apple bought PA Semi which is where he came from. But he came on as a VP after it was founded by other engineers who had worked on the Alpha chips a year before that. Did he make the difference? Who knows.
What does seem to be constant is that the best CPU designs have been touched by the hands of people who can trace their roots to North Eastern US. Maybe the correlation doesn't exist and the industry is small and incestuous enough that most designs are worked on by people from everywhere, but sometimes it seems like some group at DEC or Multiflow stumbled on the holy grail of CPU design and all took a drink from the cup.
It is impressive, but it is also important to remember that Intel, AMD, and Qualcomm make dozens of different chips while Apple makes a handful. That means they can't be as focused as Apple.
It’s not quite a fair comparison, given Intel has their own fab, while Apple uses TSMC—and pays them a lot to get exclusive access to new nodes before anyone else.
I have a M1 MacBook Air that I use for Docker, VSCode, etc. And it still runs very smoothly. Interestingly, the only times it slows down is when opening Microsoft Excel.
And the port assortment is overall nicer in terms of not requiring an External TB4 hub for production environments (I literally have something plugged into every port on my M1 Max Mac Studio, even on the front!)
> Mac Mini (fastest CPU, 64 GB ram, 1 TB SSD, 10 GbE): $2500
> Mac Studio (fastest CPU, 64 GB ram, 1 TB SSD, 10 GbE): $5000
In those configurations, the Studio would have roughly 2x the GPU power of the Mini, with equivalent CPU power. It also has twice as many Thunderbolt ports (albeit TB4 instead of TB5), and can support more monitors.
AFAIK, memory bandwidth. M2 Ultra 800GB/s, whereas M4 Max is just 546GB/s. For example, local LLM inference has a big bottleneck on bandwidth. 50% extra is significant.
I wish the Studio received an upgrade, with a new M4 Ultra potentially going over 1TB/s. It also offers better cooling for long computations.
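As a rough back-of-the-envelope sketch of why bandwidth dominates here: if single-stream generation is memory-bound, the ceiling on tokens/sec is roughly bandwidth divided by the bytes of weights streamed per token. The model size below is a hypothetical example, not a measurement:

    package main

    import "fmt"

    // Crude ceiling: bandwidth / bytes of weights read per generated token.
    func maxTokensPerSec(bandwidthGBs, modelSizeGB float64) float64 {
        return bandwidthGBs / modelSizeGB
    }

    func main() {
        const modelGB = 40.0 // hypothetical ~70B-parameter model at 4-bit quantization
        fmt.Printf("M4 Max, 546 GB/s:  ~%.0f tok/s ceiling\n", maxTokensPerSec(546, modelGB))
        fmt.Printf("M2 Ultra, 800 GB/s: ~%.0f tok/s ceiling\n", maxTokensPerSec(800, modelGB))
    }

Under those assumptions the extra bandwidth translates almost directly into extra tokens per second, which is why the older Ultra can still win for this workload.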
This confuses me because I thought all of the Mx series chips in the same generation ran at the same speed and has the same single-core capabilities?
The main thing that caused differential single-core CPU performance was just throttling under load for the devices that didn't have active cooling, such as the MacBook Air and iPad Pros.
Based on this reasoning, the M4, M4 Pro and M4 Max in active cooled devices, the MacBook Pro and Mac Mini, should have the same single-core performance ratings, no?
It might be down to the memory latency, the base M4 uses LPDDR5X-7500 while the bigger models use LPDDR5X-8533. I think that split is new this generation, and the past gens used the same memory across the whole stack.
The Pro and Max have more cache and more memory bandwidth. Apple also appears to be crippling the frequency by a percent or two so that the Max can be the top.
It depends on the game. If there are a lot of game simulation calculations to do for every frame, then you're going to be CPU constrained. If it's a storybook that you're walking through and every pixel is raytraced using tons of 8000x8000 texture maps, then it's going to be GPU constrained.
Most games are combinations of the two, and so some people are going to be CPU limited and some people are going to be GPU limited. For games I play, I'm often CPU limited; I can set the graphics to low at 1280 x 720, or ultra at 3840 x 2160 and get the same FPS. That's CPU limiting.
I recently swapped out my AMD 3800X with a 5900X as an alternative to a full platform upgrade. I got it mostly for non-gaming workloads, however I do enjoy games.
Paired with my aging but still chugging 2080Ti, the max framerates in games I play did not significantly increase.
However I did get a significant improvement in 99-percentile framerate, and the games feel much smoother. YMMV, but it surprised me a bit.
How many people does this actually affect? Gamers are better off with AMD X3D chips, and most productivity workloads need good multicore performance. Obviously the M4 is great silicon and I don't want to downplay that, but I'm not sure that the best single-core performance is an overly useful metric for the people who need performance.
Single core performance is what I need as a developer for quick compilation or updates of Javascript in the browser, when working on a Clojure/ClojureScript app. This affects me a lot.
Usually when I see advances, it's less about future proofing and more about obsolescence of old hardware. A more exaggerated case of this was in the 90s: people would upgrade to a 200 MHz Pentium thinking they were future proofing, but in a couple of years you had 500 MHz Pentium IIs.
Or users of Slack, Spotify, Teams... you name it. But I don't want to suggest that Electron-like frameworks should be used even more just because we have super-fast single-core computers available.
The problem with the M chips is that you have to use macOS (or tinker with Asahi two generations behind). They are great hardware, but just not an option at all for me for that reason.
Mac OS was awful. OS X was amazing. macOS feels like increasingly typical design-by-committee rudderless crapware by a company who wishes it didn't have to exist alongside iOS.
How is it amazing? In my experience it is full of bugs and bad design choices if you ever dare to steer from the path Apple expects masses to take. If you try to use workspaces/desktops to the full extent, you know.
Sir, you are not a real gamer(tm) either. Use a puny alternative OS and lose 3 fps, Valve support or not? Unacceptable!
As for Linux, I abandoned it as the main GUI OS for Macs about 10 years ago. I have linux and windows boxes but the only ones with displays/keyboards are the macs and it will stay that way.
It is definitely not a given you lose FPS on Linux. It is not uncommon for games to give better FPS on Linux. It will all end up depending on the exact games you want to play.
That explains your comment then, lots of things changed from 10 years ago and gaming on Linux is pretty good now. The last games you can't play are the ones with strong anti-cheats basically. You can't compare that to the Mac situation where you can't play anything.
Though a MacBook Pro 16" with the M4 Max (that's the chip that achieved this Geekbench score), with the same amount of memory (64GB) and the same amount of storage (4TB) as my PC, would cost 6079€. That is roughly twice as much as my whole PC build cost, and I'm able to expand storage and upgrade my CPU and GPU in the future (for way less than buying a new Mac in a few years).
If they release a Studio without increasing prices next year, these specs will cost you 4500€. That's more comparable to your build (sans the upgrade options of course).
There's a 20x spread in Speedometer results on OpenBenchmarking, just including modern Intel Core CPUs, so yeah I would not be surprised if an M4 outran a 68030 by anywhere from 50x to 1000x
Apple has gotten into Windows and PC territory with their naming for chips and models. Kind of funny to see the evolution of a compact product line and naming convention slowly turn into spreadsheet worthy comparison charts.
That all said, I only have an M1 and it's still impressive to me.
I think they're still keeping it somewhat together, agree it got ever more confusing with the introduction of more performance tiers but after 3 generations it's been easy to keep track of: Mx (base) -> Pro -> Max -> Ultra.
Think it's still quite far away from naming conventions of PC territory.
Now I got curious on what naming scheme could be clearer for Apple's performance tiers.
I agree it’s kind of weird. I do wonder if the ultra was part of their original M1 plan, or it came along somewhere in the middle of development and they just had to come up with a name to put it above the Max.
That said it’s far better than any PC scheme. It used to be easy enough when everything was megahertz. But I left the Wintel world around 2006 or so and stopped paying attention.
I’ve been watching performance reviews of some video game things recently and to my ears it’s just total gobbledygook now. The 13900KS, 14900K, 7900X, 7950X3D, all sorts of random letters and numbers. I know there’s a method to the madness but if you don’t know it it’s a mess. At least AMD puts a generation in their names. Ryzen 9 is newer than Ryzen 7.
Intel has been using i3, i5, i7, and i9 forever. But the problem is you can’t tell what generation they are just from that. Making them meaningless without knowing a bunch more.
At least as far as I know they didn't renumber everything. I remember when graphics cards were easy because a higher number meant better, until the numbers got too big, so they released the best new ones with a lower number for a while.
At least I find Apple’s name tractable both between generations and within a generation.
yeah, as I was jokingly implying, the names themselves aren't what I would have gone with, but overall sticking to generation + t-shirt size + 2 bins is about as simple as it gets.
People are using Thunderbolt clustering for AI inference. Historically Thunderbolt networking has been much slower than you'd expect so people didn't bother trying HPC.
That depends on what you want to play and what other things that suck that you’re willing to tolerate.
The GPUs in previous M chips aren’t beating AMD or NVidia’s top offerings on anything except VRAM, but you can definitely play games with them. Apple released their Game Porting Toolkit a couple of years ago, which is basically like Wine/Proton on Linux, and if you’re comfortable with Wine and with approximately what a Steam Deck can run, then that’s about what you can expect to run on a newer Mac.
Installing Steam or GOG Galaxy with something like Whisky.app (which leverages the Game Porting Toolkit) opens up a large number of games on macOS. Games that need Windows rootkits are probably a pain point, and you’re probably not going to push all those video setting sliders to the far right for Ultra graphics on a 4K screen, but there’s a lot of games that are very playable on macOS and M chips.
In addition to Whisky, it seems to not be well known that VMWare Fusion is free for personal use and can run the ARM version of Windows 11 with GPU acceleration. I tried it on my M1 Pro MBP and mid-range Steam games ran surprisingly well; an M4 should be even better.
Wow, had no idea this worked as well as it does. I remember the initial hype when this showed up but didn't follow along. Looks like I don't have to regard my Steam library as entirely gone.
Steam Deck-level performance is quite fine, I mainly just want to replay the older FromSoft games and my favorite indies every now and then.
Fair warning, I haven't dug that deep into compatibility issues or 32 bit gaming compatibility but it's definitely something to experiment with and for the most part you can find out for free before making a purchasing decision.
First and foremost, it's just worth checking if your game has a native port: https://store.steampowered.com/macos People might be surprised what's already available.
With Wine syscall and Rosetta x86 code translation, issues do pop up from time to time, like games with cutscenes encoded in Windows Media Player-specific formats, or other media codecs which aren't immediately available. It's not like games advertise those technology requirements anywhere, and you may encounter video stuttering or artifacts, since the hardware is obviously dramatically different from what the game developers were originally developing against and there are things happening in the background that an x86 Windows system never does. This isn't stuff that's overly Mac specific, since it usually impacts Linux equally, but it's a hurdle to jump that you don't have to deal with in native Windows. Like I said, playing Windows games outside of Windows is just a different set of pain points and you have to be able to tolerate it. Some people think it's worth it and some people would rather have higher game availability and keep the pain of Windows. Kudos to Valve for creating a Linux-based handheld, and to the Wine and Proton projects for improving this situation dramatically.
Besides the Game Porting Toolkit (which was originally intended for game developers to create native application bundles that could be put on the App Store), there's also Crossover for Mac that does their own work towards resolving a lot of these issues and they have a compatibility list you can view on their site: https://www.codeweavers.com/ and alternatively, some games run acceptably inside virtualization if you're willing to still deal with Windows in a sandboxed way. Parallels is able to run many games with better compatibility since you're actually running Windows, though last I checked DX12 was a problem.
Gaming isn't just about hardware, it's also about software, economics, trust and relationships.
Apple has quite impressive hardware (though their GPUs are still not close to high-end discrete GPUs), but they're also fast enough. The problem now is that Apple systematically does not have a culture that respects gaming or is interested in courting gamers. Games also rely on OS stability, but Apple has famously short and severe deprecation periods.
They occasionally make pushes in that direction, but I think they lack the will to make a concerted effort, and I also think they lack the restraint to not try and force everything through their own payment processors and distribution systems, causing sour relations with developers.
They’ve always been good enough for gaming. The problem has just been whether or not publishers would bother releasing the games. It’s unfortunate that Apple can’t seem to really build enough momentum here to become a gaming destination.
Apple's aggressive deprecation policies haven't done them any favors when it comes to games, they expect software to be updated to their latest ISAs and APIs in perpetuity but games are rarely supported forever. In many cases the developers don't even exist anymore. A lot of native Mac game ports got wiped out by 32bit being EOL'ed, and it'll probably happen again when they inevitably phase out Rosetta 2 and OpenGL support.
It has always baffled me why Apple doesn't take gaming seriously. It's another revenue stream, it would sell more Macs. It's profit.
Is it just some weird cultural thing? Or is there some kind of genuine technical reason for it, like it would involve some kind of tradeoffs around security or limiting architecture changes or something?
Especially with the M-series chips, it feels like they had the opportunity to make a major gaming push and bring publishers on board... but just nothing, at least with AAA games. They're content with cartoony content in Apple Arcade solely on mobile.
I always assumed it was the nature of the gaming workload on the hardware that explains why they don't ever promote it. AAA games pegging the CPU/GPU at near max for long periods of time goes against what they optimise their machines for. I just think they don't want to promote that sort of stress on the system. On top of that, Apple take themselves very seriously and see gaming as below them.
Apple has one of the, if not the biggest gaming platforms in existence (the iphone and ipad), but everyone seems to have a blind spot for that and disregards it. Sure, the Mac isn't a big gaming platform for them because their systems are mostly used professionally (assumption), but there too, the Mac represents only 1/10th of the sales they get from the iPhone, and that's only on the hardware.
Mac gaming is a nice-to-have; it's possible, there's tools, there's Steam for Mac there's toolkits to port PC games to Mac, there's a games category in the Mac app store, but it isn't a major point in their marketing / development.
But don't claim that Apple doesn't take gaming seriously, gaming for them it's a market worth tens of billions, they're embroiled in a huge lawsuit with Epic about it, etc. Finally, AAA games get ported to mobile as well and once again earn hundreds of millions in revenue (e.g. CoD mobile).
I feel like for myself at least, mobile gaming is more akin to casino gaming than video gaming. Sure, iOS has loads of gaming revenue but the games just ain't fun and are centred way too heavily on getting microtransactions out of people.
If you look at things like Game Porting Toolkit, Apple actually is investing resources here.
It just feels like they came along so late to really trying that it’s going to be a minute for things to actually happen.
I would love to buy the new Mac Mini and sit it under my TV as a mini console. But it just feels like we’re not quite there yet for that purpose, even though the horse power is there.
Apple owns the second largest gaming platform by users and games, and first by profit: iPhone.
In terms of gaming that happens only on PCs and consoles, I didn't understand Apple's blasé attitude until I discovered this eye-opening fact: there are around 300 million people who are PC and console gamers, and that number is NOT growing. It's stagnant.
Turns out Apple is uninterested by a stagnant market, and dedicates all its gaming effort where growth is: mobile.
But they are. They need to subsidize porting AAA games to solve the chicken-and-egg problem.
Gaming platforms don't just arise organically. They require partnership between platform and publishers, organized by the platform and with financial investment by the platform.
Apple does take gaming seriously. They've built out comprehensive Graphics APIs and things like the GPTK to make migrating games to Apple's ecosystem actually not too bad for developers. The problem is that a lot of game devs just target Windows because every "serious" gamer has a windows PC. It's a chicken-and-egg problem that results from Apple always having a serious minority share of the desktop market. So historically Apple has focused on the segments of the market that they can more easily break into.
They do take gaming seriously, that's likely the bulk of their AppStore revenue after all.
They just don't care about desktop gaming, which is somewhat understandable. While the m-series chips have a GPU, it's about as performant for games as a dedicated GPU from 10-14 years ago (It only needs a fraction of the electricity though, but very few desktop gamers care about that).
The games you can play have to run at silly low resolution (fullHD at most) and rarely even reach 60fps.
I think they will get there in time. They like to focus on things and not spread themselves thin. They always wanted to get the gaming market share but AI is taking all their time now.
Given that a Mac mini with an M4 is basically the same size and shape as an Apple TV, they could make a new Apple TV that was a gaming console as well.
Why is the Apple TV only focused on passive entertainment?
Is this one of those cases where "flop" means "this product would have a billion dollar market cap if it was a company, but since it's Apple, it's a flop".
No they haven't. For years the best you could get was "meh" to terrible GPUs at high price points. Like $2000+ was where getting a discrete GPU began. The M series stuff finally allows the entry level to have decent GPUs but they have less storage out of the box than a $300 Xbox Series S. Apple's priorities just don't align well with gamers. They prioritize resolution over refresh rate and response time, make mice unusable for basically any FPS made in the past 20 years and way overcharge for storage and RAM.
Valve has/continues to do way more to make Linux a viable gaming platform than Apple will likely ever do for mac
I get it, you want to leave windows by way of mac. But your options are to either bite the bullet and expend a tiny bit of your professional skill on setting up a machine with linux, or stay on windows for the foreseeable future.
Well we're about to find out now that CDPR have announced Cyberpunk 2077 will get a native Metal port. I for one am extremely curious with the result. Apple have made very lofty claims about their GPU performance, but without any high-end games running natively, it's been hard to evaluate those claims.
That said, expectations should be kept at a realistic level. Even if the M4 has the fastest embedded GPU (it probably does), it's still an embedded GPU. They aren't going to be topping any absolute performance charts.
Was this verified independently? Because people can submit all sorts of results for Geekbench scores. Look at all these top scorers (most of which are obviously fake or overclocked chips): https://browser.geekbench.com/v6/cpu/singlecore
For how long? There are a lot of superlatives ("simply incredible" etc) - when some new AMD or Intel CPU beats this score, will that be "simply incredible" too?
New chips are slightly faster than previous ones. I am not incredulous about this. Were it a 2x or 3x or 4x improvement or something, sure. But it ain't - it's incremental. I note how even in the Apple marketing they compare it to generations 3 or 4 chips ago (e.g. comparing increases against i7 performance from years ago etc, not against the M3 from a year or so ago because then it is "only" 12% - still good, but not "simply incredible" in my eyes).
Why is it so hard for people to understand why Apple did that?
They want the people who are still clinging to Intel Macs to finally convert. And as for the M1 comparisons, people are not changing laptops every year, and that is the cohort of M users most likely to upgrade. It's smart to do what Apple did.
I get that argument, but it comes across as hugely disingenuous to me, especially when couched in so much glitz and glamour and showmanship. Their aim is to present these things as huge quantum leaps in performance, and it's only if you look into the details that it's clear they're not, and that they're fudging the figures to make them look better than they are.
"New Car 2025 has a simply incredible top speed 30x greater than previous forms of transport!* (* - previous form of transport slow walk at 4mph)"
It's marketing bullshit really let's be honest. I don't accept that their highly-polished entire marketing spiel and song and dance is aimed 100% only at people who have 3 or 4 generation old Mac already. They're not spending all this time and money and effort just to try and get people to upgrade. If you believe that, then you are in the distortion field.
No one in the industry uses Apple's marketing in any real sense. The marketing is not for you - its sole purpose is to sell more Macs to their target market.
That you are distracted by it is not Apple's problem - and most other industry players don't GAF about Apple's self-comparisons either.
shrug I just upgraded an M1-ultra studio to an M4-Max MBP. I'm not going to splash that much cash every year on an upgrade, and I don't think that's uncommon.
Just like the phone comparisons are from more than one year ago, the computer comparisons (which are even more expensive) make more sense to be from more than one year ago. I don't see why you wouldn't target the exact people you're trying to get to upgrade...
Yet you do not propose an alternative theory that makes sense.
Our point: Apple is laser-focused on comparing with laptops that are 4-5 years old. That's usually when Mac users start thinking about upgrading. They're building their marketing for them. It causes issues when directly trying to compare with the last generation.
Your point: Apple shouldn't be glamorous and a good showman when marketing their products because they know the only true marketing is comparing directly with your very last chip. Any other type of marketing is bullshit.
> I note how even in the Apple marketing they compare it to generations 3 or 4 chips ago
Apple is just marketing to the biggest buyer group (2 generation upgrades) in their marketing material?
This isn’t like iPhones, where people buy them every 1-2 years (because they break or you lose them etc); laptops have a longer shelf life, you usually run them into the ground over 2+ years and then begrudgingly upgrade.
And my 2019 Intel MBP is still working too. Use it every day.
The idea of a 6x (or whatever) performance jump is certainly tempting. Exactly as they intend it to be. If I was in charge of replacing it I would be far more likely to buy than if I had an M3.
We're on a perpetual upgrade treadmill. Even if the latest increment means an uncharacteristically good performance or longevity improvements... I can't bring myself to care.
There are a LOT of corporate Macs out there that are still on Intel.
The replacement cycle may just be that long. Or maybe they chose to stick with Intel, maybe because that’s what they were used to or maybe because they had specific software needs. So they were still buying them after Apple Silicon machines had been released.
Yeah it’s not a big deal for the enthusiast crowd. But for some of their customers it’s absolutely a consideration.
Different chips though, and different links. (Also, it’d be nice if we stopped linking directly to social media posts and instead used an intermediary that didn’t require access or accounts just to follow discussions here.)
Well, firstly, it isn't. There are higher Geekbench 6 CPU scores, even forgetting the ones that appear to be fake or at least broken.
But secondly, that would absolutely not indicate that it is the "fastest single-core performer in consumer computing". That would indicate that it is the highest scoring Geekbench 6 CPU in consumer computing.
Whether or not that's actually a good proxy for the former statement is up to taste, but in my opinion it's not. It gives you a rough idea of where the performance stands, but what you really need to be able to compare CPUs is a healthy mix of synthetic benchmarks and real-world workloads. Things like the time it takes to compile some software, scores in video game benchmarks, running different kinds of computations, time to render videos in Premiere or scenes in Blender, etc.
In practice though, it's hard to make a good Apples-to-Intels performance comparison, since it will wind up crossing both OS boundaries and CPU architecture boundaries, which adds a lot of variables. At least real world tests will give an idea of what it would be like day-to-day even if it doesn't necessarily reveal truisms about which CPU is the absolute best design.
Of course it's reasonable to use Geekbench numbers to get an idea of where a processor stands, especially relative to similar processors, but making a strong claim like this based off of Geekbench numbers is pretty silly, all things considered.
Still... these results are truly quite excellent. It would suffice to say that if you did take the time to benchmark these processors you would find the M4 processor performs extremely well against other processors, including ones that suck up more juice for sure, but this isn't too surprising overall. Apple is already on the TSMC N3E process, whereas AMD is currently using TSMC N4P and Intel is currently using TSMC N3B on their most cutting edge chips. So on top of any advantages they might have for other reasons (like jamming the RAM onto the CPU die, or simply better processor design) they also have a process node advantage.
SPEC has been the industry standard benchmark for comparing the performance of systems using different instruction sets for decades now.
Traditionally, Anandtech would have been the first media outlet to publish the single core and multicore integer and floating point SPEC test results for a new architecture, but hopefully some trusted outlet will take up the burden.
For instance, Anandtech's Zen 5 laptop SKU results vs the M3 from the end of July:
> Even Apple's M3 SoC gets edged out here in terms of floating point performance, which, given that Apple is on a newer process node (TSMC N3B), is no small feat. Still, there is a sizable deficit in integer performance versus the M3, so while AMD has narrowed the gap with Apple overall, they haven't closed it with the Ryzen AI 300 series.
https://www.anandtech.com/show/21485/the-amd-ryzen-ai-hx-370...
Zen 5 beat Core Ultra, but given that Zen 5 only edged out the M3 in floating point workloads, I wouldn't be so quick to claim the M4 doesn't outperform Zen 5 single core scores before the test results come out.
just considering the number of simple calculations a CPU can compute isn't a very good comparison. apple's chips are using an ARM architecture, which is a Reduced Instruction Set Computer (RISC) setup, vs x86-64, which is a Complex Instruction Set Computer (CISC).
The only good comparison is to judge a variety of real world programs compiled for each architecture, and run them.
> The only good comparison is to judge a variety of real world programs compiled for each architecture, and run them.
I'm guessing that you don't realize that you are describing SPEC?
It's been around since the days when every workstation vendor had their own bespoke CPU design and it literally takes hours to run the full set of workloads.
From the same Anandtech article linked above:
> SPEC CPU 2017 is a series of standardized tests used to probe the overall performance between different systems, different architectures, different microarchitectures, and setups. The code has to be compiled, and then the results can be submitted to an online database for comparison. It covers a range of integer and floating point workloads, and can be very optimized for each CPU, so it is important to check how the benchmarks are being compiled and run.
More info:
> SPEC is the Standard Performance Evaluation Corporation, a non-profit organization founded in 1988 to establish standardized performance benchmarks that are objective, meaningful, clearly defined, and readily available. SPEC members include hardware and software vendors, universities, and researchers.
SPEC was founded on the realization that "An ounce of honest data is worth a pound of marketing hype".
https://www.spec.org/cpu2017/Docs/overview.html
took a look at these benchmarks. they appear to be using some extremely antiquated code, and workloads, that do not take advantage of any of the advanced features and instructions introduced over the past 15-20 years in the x86-64 architecture.
additionally, the only updates they appear to have made in the last 5+ years involve optimizing the suite for Apple chips.
thus, it leaves out massive parts of modern computing, and the (many) additions to x86-64 that have been introduced since the 00s.
i'd encourage you to look into the advancements that have occurred in SIMD instructions since the olden days, and the way in which various programs, and compilers, are changed to take advantage of them
ARM is nice and all, but the benchmark you've linked appears to be some extremely outdated schlock that is peddled for $1000 a pop from a web page out of the history books. Really. Take a look through what the benchmarks on that page are actually using for tooling.
I'd consider the results valid if they were calculated using an up to date, and maintained, toolset, like that provided by openbenchmarking.org (the owner of which has been producing some excellent ARM64 vs Intel benchmarks on various workloads, particularly recently).
> the only updates they appear to have made in the last 5+ years involve optimizing the suite for Apple chip
How do you theorize that generic C or C++ code that you compile using GCC has been "optimized for an Apple chip"?
Frankly, it's impossible to take any of this comment seriously.
Does optimising the tests towards one architecture seem like a fair way of testing?
It's optimizing as in previously it had far less attention.
https://en.wikipedia.org/wiki/SPECint talks about a dozen programs they selected for single-core logic and discrete math, including gcc and bzip2 (there are more than a dozen others using floats).
Over time, RISC and CISC borrowed from each other: https://cs.stanford.edu/people/eroberts/courses/soco/project...
The higher single-thread GB6 scores are from overclocked Intel or AMD CPUs.
The M4 core @ 4.5 GHz has a significantly higher ST GB6 performance than Lion Cove @ 5.7 GHz or Zen 5 @ 5.7 GHz (which are almost equal at the same clock frequency, with at most a 2% advantage for Lion Cove).
Having higher GB6 scores should be representative for general-purpose computing, but there are application domains where the performance of the Apple cores has been poor in the past and M4 is unlikely to have changed anything, i.e. the computations with big integer numbers or the array operations done on the CPU cores, not on the AMX/SME accelerators.
Nevertheless, I believe that your point about the existence of higher ST GB6 scores is not weakened by the fact that those CPUs are overclocked.
For a majority of the computer users in the world, the existence of higher ST performance that can be provided by either overclocked Intel/AMD CPUs or by CPUs made by Apple is equally irrelevant, because those users would never choose any of these 2 kinds of CPUs for their work.
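To make the clock-for-clock framing above concrete, the normalization is just score divided by boost frequency. A trivial sketch, with placeholder scores rather than real GB6 results:

    package main

    import "fmt"

    // Normalize a benchmark score by clock frequency.
    func perGHz(score, ghz float64) float64 { return score / ghz }

    func main() {
        // Placeholder numbers only, not measured Geekbench scores.
        fmt.Printf("core at 4.5 GHz scoring 3800: %.0f pts/GHz\n", perGHz(3800, 4.5))
        fmt.Printf("core at 5.7 GHz scoring 3400: %.0f pts/GHz\n", perGHz(3400, 5.7))
    }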
Remember too, back in the day, when you were looking at a Mac with 1/4 the power of a PC, at 4x the price. I think we're starting to see those ratios reversed completely. And at the same time, the power draw, heat, etc. are just sitting at the floor.
Yeah, this rings true. I'm not an Apple customer, but I certainly remember the days when Mac users had to justify the much worse bang-for-the-buck of Apple hardware (then) with it being the only ticket to their preferred software experience.
These days it appears more that the hardware is fantastic, especially in the laptop form factor and thermal envelope, and perhaps the downside is a languishing macOS.
I use macOS daily for dev and office work, and to me it doesn’t feel languishing at all. Add Homebrew and Docker or Podman, and we’re off.
The only places I can see there could be features missing are:
- IT management type stuff where it looks like Apple are happy just delegating to Microsoft (e.g. my workstation is managed with Intune and runs Microsoft Defender pushed by IT),
- CUDA support if you’re into AI on NVIDIA
- Gaming I hear, but I don’t have time for that anyway :)
Of course this is biased, because I also generally just _like_ the look and feel of macOS
> you really need to be able to compare CPUs is a healthy mix of synthetic benchmarks and real-world workloads
I like how Apple got roasted on every forum for using real-world workloads to compare M series processors to other processors. The moment there’s a statistic pointing to “theoretical” numbers, we’re back to demanding real-world workload comparisons.
What's crazy is that the M4 Pro is in the Mac mini, something so tiny can handle that chip. The Mac Studio with the M4 Max will be awesome but the Mini is remarkable.
Indeed, the $1599 M4 Pro Mac mini beats the current $3999 M2 Ultra Mac Studio on GeekBench 6 multi-core: https://www.macrumors.com/2024/10/31/m4-pro-chip-benchmark-r...
Back in the early 90s, when Apple was literally building their first product line with the Mac, they would come out with their second big honking powerhouse Mac: the Macintosh IIx. It blew everything out of the water. Then they would come out with their next budget all-in-one machine. But computing was improving so fast, with prices for components dropping so quickly, that the Macintosh SE/30 ended up as impressive as the Macintosh IIx with a much lower price. That's how the legend of the SE/30 was born, turning it into the best Mac ever for most people.
With how fast and impressive the improvements are coming with the M-series processors, it often feels like we're back in the early 90s. I thought the M1 Macbook Air would be the epitome of Apple's processor renaissance, but it sure feels like that was only the beginning. When we look historically at these machines in 20 years, we'll think of a specific machine as the best early Apple Silicon Mac. I don't think that machine is even out yet.
In the 90s, you probably wouldn't want to be using a desktop from 4 years ago, but the M1 is already 4 years old and will probably be fine for most people for years yet.
No kidding. The M1 MacBook Pro I got from work is the first time I've ever subjectively considered a computer to be just as fast as it was the day I got it.
I think by the time my work-provided M1 MacBook Pro arrived, the M2s were already out, but of course I simply didn't care. I actually wonder when it will be worth the hassle of transferring all my stuff over to a new machine. Could easily be another 4 years.
Funny that we can buy renewed Intel Macs for less than $200 and do exactly the same.
Maybe the desktops, but the laptops were always nigh-unusable for my workloads (nothing special, just iOS dev in Xcode). The fans would spin up to jet takeoff status, it would thermal throttle, and performance would nosedive.
The M1 Pro was a revelation.
There was a really annoying issue with a lot of the Intel MacBooks where, due to the board design, one of the two power sockets would cause them to run quite a bit hotter.
Yeah I remember that, I posted a YouTube video complaining about it 6 years ago, before I could find any other references to the issue online. https://www.youtube.com/watch?v=Rox2IfViJLg
That would cause it to throttle even when idle! But even on battery or using the right-hand ports, under continuous load (edit-build-test cycles) it would quickly throttle.
Oh yeah, I'm very aware. My work machine was a 2015 MBP until about 6 months ago. It was really bad towards the end.
...until the battery runs out hours earlier.
Or your lap gets hot. Or the fans drive you mad. Good luck with the available ports. Oh, it’s slow AF too, but if you get the right model you can use that stupid Touch Bar.
Where can you get Intel Macs for $200?
I bought an Intel 13-inch MacBook Pro for a friend that I’m working with for $200-$250 from woot.com.
I bought an M1 MacBook Pro just to use it for net and watching movies when in bed or traveling. I got the Mac because of its 20 hours battery life.
Since Snapdragon X laptops caught up to Apple on battery life I might as well buy one of those when I'll need to change. I don't need the fastest mobile CPU for watching movies and browsing the internet. But I like to have a decent amount of memory to keep a hundred tabs open.
Apple's marketing is comparing this season's M4s to M1s and even to Intel chips from two generations of Macs ago. The 2x or 4x numbers suggest they are targeting and catering to this longer cycle, where the subliminally suggested upgrade is remarkably better, rather than pushing an annual treadmill, even though each release is "our best ever".
I think it's also part of the sales pitch, tho – a lot of folks are sitting on M1s and pretty happy but wondering if they should upgrade.
Yeah they did this last year and before. It’s super annoying. I’d say it’s super stupid, but I’m sure from a marketing point of view it isn’t.
I was going to say why not compare it to something older! 100000x faster than a pc-xt!
I mean, most people don't buy a new phone each year, let alone something as expensive as a laptop. They are probably still targeting Intel Mac, or M1 users for the most part.
To be fair, MOST computers are like that nowadays, regardless of brand. I'm using a Intel desktop that is ~8 years old and runs fine with an upgraded GPU.
Sure, Apple isn't the only one making good laptops, though they do make some of the best. My point was just that we definitely aren't back at 90s levels of progress. Frequency has barely been scaling since node shrinks stopped helping power density much, and the node shrinks are fewer and farther between.
So long as Apple is willing to keep operating system updates available for the platform. This is by far the most frustrating thing. Apple hardware, amazing and can last for years and even decades. Supported operating system updates, only a couple of years.
I'm typing this from my mid-2012 Retina MacBook Pro. I'm on Mojave and I'm well out of support for operating system patches. But the hardware keeps running like a champ.
> Apple hardware, amazing and can last for years and even decades. Supported operating system updates, only a couple of years.
That’s not accurate.
Just yesterday, my 2017 Retina 4k iMac got a security update to macOS Ventura 13.7.1 and Safari even though it’s listed as “vintage.”
Now that Apple makes their own processors and GPUs, there’s really no reason in the foreseeable future that Apple would need to stop supporting any Mac with an M-series chip.
The first M1 Macs shipped in November 2020—four years ago but they can run the latest macOS Sequoia with Apple Intelligence.
Unless Apple makes some major changes to the Mac’s architecture, I don’t expect Apple to stop supporting any M series Mac anytime soon.
Have you tried OpenCore Patcher? It allows newer macOS to be installed on unsupported macs.
Luckily m1 has Linux.
Playing Chuck Yeager’s air combat or Glider Pro on the SE/30 was great.
I owned an SE/30. I watched my first computer video on that thing, marveling that it was able to rasterize (not the right word) the color video real-time. I wish I had hung onto that computer.
> Back in the early 90s, when Apple was literally building their first product line with the Mac,
cough
like saying, "Back in the 70s with Paul McCartney's first band, Wings (...)"
kids? get off my lawn
Getting older comes faster than you think. I was an adult, blinked, and decades had passed seemingly in a moment.
I think that machine is this M1 Pro MBP I'm using right now, and will probably still be using in 20 years.
I look forward to seeing you at a conference in 20 years' time where we'll both be running Fedora 100 on our M1 MacBook Pros.
Agreed. It might share the title with the M1 Air, which was incredible for an ultraportable, but the M1 MBP was just incredible period. Three generations later it's still more machine than most people need. M2/3/4 sped things up but the M1 set the bar.
The airs are incredible. I’ve been doing all my personal dev work on an M2 Air for years, it’s the best laptop I’ve ever owned.
I’m only compelled to upgrade for more ram, but I only feel the pressure of 8gb in rare cases. (I do wish I could swap the ram)
I was the same with the M1 Air until a couple months ago when I decided I wanted more screen real estate. That plus the 120Hz miniLED and better battery and sound make the 16" a great upgrade as long as the size and weight aren't an issue. I just use it at home so it's fine but the Air really is remarkable for portability.
I have the M1 Air, too. I just plug in to a nice big Thunderbolt display when I need more screen!
I'll likely upgrade to the M4 Air when it comes out. The M4 MacBook Pro is tempting, but I value portability and they're just so chunky and heavy compared to the Air.
My only regret is getting base RAM.
It’s not a server so it’s not a crime to not always be using all of it and it’s not upgradable so it needs to be right the first time. I should have got 32GB to just be sure.
Apple's sky-high RAM prices and strong resale values make this a tough call, though. It might just about be better to buy only the RAM you need and upgrade earlier, considering you can often get 50% or more of the price of a new one back by selling your old one.
Thankfully, Apple recently made 16GB the base RAM in all Macs (including the M2/M3 MacBook Airs) anyway. 8GB was becoming a bad joke and it could add 40% to the price of some models to upgrade it!
Yep, that's definitely a thing I'm proud of correctly foreseeing. I was upgrading from an old machine with 8GB, but I figured, especially with memory being non-upgradable, it was better to be safe than sorry, and if I kept the machine a decade it would come out to sandwich money in the end.
>the Macintosh IIx. It blew everything out of the water.
naa... Amiga had the A2500 around the same time, the Mac IIx wasn't better with regards to specs in most ways. And at about $4500 more expensive (Amiga 2500 was around $3300, Mac IIx was $7769), it was vastly overpriced as is typical for Apple products.
Worth remembering that Amiga went out of business just a few years later, while Apple today is the largest company in the world by market capitalisation. Doesn't matter how good the product is: if you're not selling it for a profit, you don't have a sustainable business. Apple products aren't overpriced as long as consumers are still buying them and coming back for more.
The NuBus in the IIx was great.
I'm reminded of the Intel dominance. Whatever happened?
> I thought the M1 Macbook Air
I've got one and it's really not that impressive. I use it as a "desktop" though and not as a laptop (as in: it's on my desk hooked to a monitor, never on my laps).
I'm probably gonna replace it with a Mini with that M4 chip anyway but...
My AMD 7700X running Linux is simply a much better machine/OS than that MacBook M1 Air. I don't know if it's the RAM on the 7700X or the WD-SN850X SSD or Linux but everything is simply quicker, snappier, faster on the 7700X than on the M1.
I hope the M4 Mini doesn't disappoint me as much as the M1 Air.
Yes, but I suspect the 64GB of memory in the Studio compared to 24GB in the Mini is going to make that Studio a lot faster in many real-world scenarios.
In that case, you can get the mini with 64GB of memory for $1999.
It would be $2,199 for the highest end CPU and the 64GB of memory, but I think your point remains: the Studio is not a great buy until it receives the M4 upgrades.
And the bandwidth. A M4 Ultra would be a nice upgrade for large LLM inference on a budget.
this is crazy, i'm more than happy with the current performance of my M1 Max Studio but an M4 Max or Ultra might actually be too good to pass up
I’m already planning on swapping mine for an M4 Ultra.
I love my M1 Studio. It’s the Mac I always wanted - a desktop Mac with no integrated peripherals and a ton of ports - although I still use a high-end hub to plug in… a lot more. Two big external SSDs, my input peripherals (I’m a wired mouse and keyboard kind of guy), then a bunch of audio and USB MIDI devices.
It’s even a surprisingly capable gaming machine for what it is. Crossover is pretty darn good these days, and there are ARM native Factorio and World of Warcraft ports that run super well.
I've been thinking about getting a Mac mini as a small server box due to how powerful it is.
It's my plex server and nas (m1). I've abandoned all the complicated complications and just have an 8 bay thunderbolt enclosure full of drives (JBOD)
And pg server. And a few web sites server. And something running in orb stack.
It's the 8gb model and I have around 2gb free most of the time
What enclosure do you have? Do you like it?
Owc thunderbay 8
I like it in every way except price. It just works, comes back online after a power outage, etc. I don't recall any unscheduled disconnects.
--
Additional thoughts: I think there are complaints about the fan being loud so I swapped it out when I first got it. I also have it in my basement so I don't hear anything anyway -- HDDs are loud, especially the gold ones
What would you be running on it?
I’d like a few VMs for a media server and the associated apps. Pihole too ideally, but I keep that separate as that VM going bad is never good.
And both those machines are way faster than the last Intel Mac Pro which started at around $7000 iirc
I am still astounded by the huge change moving from an Intel Mac to an Apple Silicon Mac (M1) has had in terms of battery performance and heat. I don't think I've heard the fans a single time since I've had this machine, and it's been several years.
Nor have I had any desire to upgrade.
> Nor have I had any desire to upgrade
I never thought I'd see a processor that was 50% faster single-core and 80% faster multi-core and just shrug. My M1 Pro still feels so magically fast.
I'm really happy that Apple keeps pushing things and I'll be grateful when I do decide to upgrade, but my M1 Pro has just been such a magical machine. Every other laptop I've ever bought (Mac or PC) has run its fan regularly. I did finally get fan noise on my M1 Pro when pegging the CPU at 800% for a while (doing batch conversion of tons of PDFs to images) - and to be fair, it was sitting on a blanket which was insulating it. Still, it didn't get hot, unlike every other laptop I've ever owned did even under normal usage.
It's just been such a joyful machine.
I do look forward to an OLED MacBook Pro and I know how great a future Apple Silicon processor will be.
My best Apple purchases in 20 years of being their customer: The Macbook M1 Pro 16 inch and the Pro Display XDR. When Steve Jobs died I really thought Apple was done, but their most flawless products (imho) came much later.
What do you like about the Pro Display XDR?
Yeah, don’t forget the 10 dark years between the Butterfly Keyboard Macbook Pro 2016, the Emoji Macbook Air, until the restoration of the USB ports… around 2022.
That was truly the dark age of Apple.
Those were the Ive Unleashed years.
Unbridled control over all design vested in one hyper-opinionated guy was an error, now well resolved.
The first guy did alright.
I had the 2015 MBP and I held onto it until the M1 came out…I still have it and tbh it’s still kind of a great laptop. The two best laptops of the past decade for sure.
2008 mbp was my last apple laptop. Still feel quite slighted by Apple's treatment on that one.
I have the M4 iPad with the new OLED. That screen would be great in a Macbook Air.
Yeah, I have an M1 Max 64GB and don't feel any need to upgrade. I think I'll hit the need for more ram before a processor increase with my current workload.
Same for me, in a Mac Studio. It's only 2 years old so it's not like I would expect it to suck, but it's a rocket.
Folding/rolling screen would be awesome. 16” form factor that turns into 20” screen.
I've got a coworker who still has an Intel MacBook Pro, 8-core i9 and all that, and I've been on M chips since they launched. The other day he was building some Docker images while we were screensharing and I was dumbfounded at how long it took. I don't think I can even remember a recent time when building images, even ones pushed to CDK etc., takes more than a minute or so. We waited and waited and finally after 8 minutes it was done.
He told me his fans were going crazy and his entire desk was hot after that. Apple silicon is just a game changer.
Sounds like they were building an aarch64 image, building an x86_64 image on Apple Silicon will also be slow - unless you are saying the M* builds x86_64 faster than an i9?
Are you saying it's faster to build a binary with native architecture? Why is that?
Because there are two ways to get to the same result:
- use native toolchain to produce artifacts for a different architecture
- use emulation to run different arch toolchain to produce different arch artifacts
The first one is fast, the second one is slow. In Docker, only the second variant is possible.
That doesn't sound right. It's not like you need to run in WebAssembly to produce a WebAssembly binary.
Why would you need to emulate x86 to produce x86-shaped bytes?
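For what it's worth, the "native toolchain emitting a foreign-arch artifact" case is easy to see with Go; a minimal sketch (file and output names are just examples):

    // Building this on an arm64 Mac with:
    //   GOOS=linux GOARCH=amd64 go build -o hello-amd64 .
    // produces an x86_64 Linux binary using the native toolchain, no emulation involved.
    // Running an x86_64 compiler under QEMU (what a foreign-platform docker build
    // typically does) gets to the same artifact much more slowly.
    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        // Reports the target the binary was compiled for, not the build host.
        fmt.Printf("built for %s/%s\n", runtime.GOOS, runtime.GOARCH)
    }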
That was what finally got me to spend the cash and go with Apple Silicon - we switched to a Docker workflow and it was just doooooog slow on the Intel Macs.
But this M1 Max MBP is just insane. I'm nearly 50 and it's the best machine I've ever owned; nothing is even close.
For sure. I had one of those for work when I got my personal M1 Air and I couldn't believe how much faster it was. A fanless ultraportable faster than an 8-core i9!
I was so happy when I finally got an M1 MBP for work because as you say Docker is so much faster on it. I feel like I don't wait for anything anymore. Can't even imagine these new chips.
Same situation for me.
I’m going to be very happy when it’s time to replace my Intel MBP at work.
It’s rarely loud, but boy it likes to be warm/toasty at the stupidest things.
Oh I know the feeling… when using ICT-managed Windows laptops at work
> I am still astounded by the huge change moving from an Intel Mac to an Apple Silicon Mac (M1) has had in terms of battery performance and heat
The battery life improvements are great. Apple really did a terrible job with the thermal management on their last few Intel laptops. My M1 Max can consume (and therefore dissipate) more power than my Intel MBP did, but the M1 thermal solution handles it quietly.
The thermal solution on those Intel MacBooks was really bad.
Those MacBooks were designed when Intel was promising new, more efficient chips and they didn’t materialize. Apple was forced to use the older and hotter chips. It was not a good combination.
Another factor might be that Intel MacBook Pros got thinner and thinner. The M1 MBP was quite a bit thicker than its Intel predecessors, and I think the form factor has remained the same since then.
Yes, but I had about every generation of intel MPB - it never was as good at it as M1 MBP.
> My M1 Max can consume (and therefore dissipate) more power than my Intel MBP did, but the M1 thermal solution handles it quietly.
You have to really, REALLY put in effort to make it operate at rated power. My M2 MBA idles at around 5 watts, my work 2019 16-inch i9 is around 30 watts in idle.
Lol ... You were not around for the ppc -> Intel change ... Same thing happened then ... Remarkable performance uplift from the last instruction set ... And we had Rosetta which allowed compatibility... The m1 and arm took power efficiency to another level .... But yeah what has happened before will happen again
I was around for that.
The thing then was it was just Apple catching up with Windows computers, which had had a considerable performance lead for a while. It didn't really seem magical to just see it finally matched. (Yes, Intel Macs got better than Windows computers, but that was later. At launch it was just matching.)
It's very different this time because you can't match the performance/battery trade-off in any way.
I owned a g4 power Mac ... Yes moving to intel at the time was magical... Maybe not for you but for me it was....
Intel chips had better integer performance and PowerPC chips had better floating point performance, which is why Apple always used Photoshop performance tests to compare the two platforms.
Apple adopted Intel chips only after Intel replaced the Pentium 4 with the much cooler running Core Solo and Core Duo chips, which were more suitable for laptops.
Apple dropped Intel for ARM for the exact same reason. The Intel chips ran too hot for laptops, and the promised improvements never shipped.
The G5 in desktops was more competitive but laptops were stuck on G4s that were pretty easy to beat by lots of things in the Windows world by the time of the Intel switch. And Photoshop was largely about vectorized instructions, as I recall, not just general purpose floating point.
Yes, and when it became clear that laptop sales would one day outpace desktop sales, Apple made the Intel switch, despite it meaning they had to downgrade from 64 bit CPUs to 32 bit CPUs until Core2 launched.
The Apple ecosystem was most popular in the publishing industry at the time, and most publishing software used floating point math on tower computers with huge cooling systems.
Since IBM originally designed the POWER architecture for scientific computing, it makes sense that floating point performance would be what they optimized for.
Yeah, but that Rosetta was usually delivering "I guess it works?" results. It was so slow.
If it's a M1 Macbook Air there's a very good reason you've never heard a fan!
Blows my mind how it doesn't even have a fan and is still rarely even anything above body temperature. My 2015 MBP was still going strong for work when I bailed on it late last year but the transition purely on the heat/sound emitted has been colossal.
It's not just that; at times I pushed all CPU cores to 100% on the M1 Mini and even after 30+ minutes I couldn't hear the fan. Too bad the MacBook Airs got nothing but a literal aluminium sheet as a cooling solution.
Factorio: Space Age is the first piece of software that my M1 shows performance issues with. I'm not building Xcode projects or anything, but it is a great Mac. Maybe even the greatest.
There's a known issue on ARM Macs with external monitors that messes with the framerates. Hopefully it gets fixed soon, because pre-Space Age Factorio was near flawless in performance on my M2.
What! That's exactly what I'm doing. Woah, I can't wait for the inevitable fix, those guys have been releasing patches like crazy.
I do wonder if PC desktops will eventually move to a similar design. I have a 7800X3D in my desktop, and the thing is a beast, but between it and the 3090 I basically have a space heater in my room.
A game I play with friends introduced a Mac version. I thought it would be great to use my Apple Silicon MacBook Pro for some quiet, low-power gaming.
The frame rate wasn’t even close to my desktop (which is less powerful than yours). I switched back to the PC.
Last time I looked, the energy efficiency of Nvidia GPUs in the lower TDP regions wasn't actually that different from Apple's hardware. The main difference is that Apple hardware isn't scaled up to the level of big Nvidia GPUs.
I sincerely believe that the market for desktop PCs is completely co-opted by the gaming machines. They do not care one whit about machine size or energy efficiency, with only one concern in mind: bare performance. This means they buy ginormous machines, incredibly inefficient CPUs and GPUs, with cavernous internals to chuck heat out with no care for decibels.
But they spend voraciously. And so the desktop PC market is theirs and theirs alone.
Desktop PCs have become the Big Block V8 Muscle Cars of the computing world. Inefficient dinosaur technology that you pour gasoline through and the output is heat and massive raw power.
Desktops are actually pickup trucks. Very powerful and capable, capable of everyday tasks, but less efficient at them. Unbeatable at their specialty, though.
Well, because that's the audience that upgrades before something breaks, and it also lets you capture the high-end market of professionals.
Yeah. It's been the case for a while now that if someone just wants a general computer, they buy a laptop (even commonly a mac).
That's why the default advice if you're looking for 'value' is to buy a gaming console to complement your laptop. Both will excel at their separate roles for a decade without requiring much in the way of upgrades.
The desktop PC market these days is a luxury 'prosumer' market that doesn't really care about value as much. It feels like we're going back to the late 90s, early 2000s.
Unless you play games where you stare at the map while balancing Excel spreadsheets.
That's okay, Factorio has awesome Apple Silicon support.
What about Paradox games? genuinely curious about that.
Stellaris is great on my M2
The price of a high-end gaming PC (7800X3D and 4080) is around 2k USD. That's comparable to the MacBook Pro.
Yeah sure, if you start buying unnecessary luxury cases, fans and custom water loops it can jump up high, but that's more for clueless rich kids or enthusiasts. So I wouldn't call PC gaming an expensive hobby today, especially considering Nvidia's money-grubbing practices, which won't last forever.
After having my PC for (almost) 4 years, I can say that this beast is the last large form computer I will buy.
I just bought a Beelink SER9 mini pc, about the same size as the Mac Mini. It's got the ridiculously named AMD Ryzen AI 9 HX 370 processor, a laptop CPU that is decently fast for an X64 chip (2634/12927 Geekbench 6 scores) but isn't really competition for the M4. The GPU isn't up to desktop performance levels either but it does have a USB4 port capable of running eGPUs.
It would make sense, but it depends heavily on Windows / Linux support, compatibility with Nvidia / AMD graphics cards, and exclusivity contracts with Intel / AMD. Apple is not likely to make their chips available to OEMs at any rate, and I haven't heard of any third party working on a powerful desktop ARM-based CPU in recent years.
It would be nice. Similarly have a 5950X/3080Ti tower and it’s a great machine, but if it were an option for it to be as small and low-noise as the new Mini (or even the previous mini or Studio), I’d happily take it.
For what it is worth, I'm running that with open-loop water cooling. If your chassis has the space for it, a rig like mine won't even need to turn on its fans for large parts of the day. (The loop was sized for a Threadripper, which wasn't really around for home builders yet.) Size is an issue, however :)
That 3090 uses about 5x more power than the 7800X3D.
M3 Pro user here, agree with the same points. It's nice to be able to actually have your laptop on your lap, without burning your legs.
ARM processors have always been good at handling heat and low power (like AWS Graviton), but what laptop did you have before that would overheat that much during normal usage? That seems like a very poor design.
My 2010 i7 MBP would do that under heavy loads. All aluminum body, with fans, and when that CPU really had to work it put out a lot of heat.
Compiling GCC multi-threaded a few times would be enough.
Not only that, they seemed to get hotter with each subsequent generation.
My 2015 could get hotter than my 2010. I think my work 2019 gets hotter still.
I think the Intels were hotter than my G4, but it’s been too long and the performance jump was worth it.
Got an M1 Air; it blows them all out of the water (even the 2019, the others aren't a surprise). And it does it fanless, as opposed to emulating a jet engine.
And if you really push it, it gets pleasantly warm. Not possibly-burn-inducingly hot.
I had a 2019 MBP which, with default fan settings, would get really hot from a 1-hour video call in BlueJeans. Or 5 minutes navigating Google Maps and Street View in Chrome.
The only solution was to increase the fan speed profile to max RPM.
On my 2019, if a single process hits 100% of one core the fan becomes quite noticeable (not hairdryer though) and the top of the keyboard area where the CPU is gets rather toasty.
It’s way too easy to heat up.
I’d place my top of the line Intel Mac on my feet to warm them and then bend over and write code while sitting on my chair.
Anything that pegged the CPU for extended periods of time caused many Apple laptop models to overheat. There is some design trade-off between power specs, cooling, "typical workloads" and other things. A common and not-new example of a heat-death workload was video editing.
You still shouldn't have it on your lap though. It's bad for your posture.
Not for everyone. It turns out by following standard ergonomic guidelines I was doing more damage. I have to actually look way down at monitors, even on my desk. It has to be well below eye height, basically slammed all the way down.
Hey, I can have my Intel MBP on my lap without burning my legs (or making me feel nauseated).
As long as I don't open Chrome, Safari, Apple Notes, or really any other app...
Sometimes even not opening any apps is not enough if Spotlight decides that now is the time to update its index or something similar. Honestly nuts looking back at it.
I remember when macOS switched to an evented way of handling input and for some reason decided that dropping keyboard events was okay... anyway, if Spotlight was updating its index, then unlocking your laptop with a password was impossible.
It still blows my mind how infrequently I have to charge my M3 Pro MacBook Pro. It is a complete game changer.
Last year I bought a used M1 Pro, but the last MBP I had was an early 2015. I just didn't bother upgrading; in fact the later Intel models were markedly worse (keyboard, battery life, quality control). The Apple Silicon era is going to be the PowerPC prime all over again.
> I don't think i've heard the fans a single time I've had this machine and it's been several years.
Yes I agree. I sometimes compile LLVM just to check whether it all still works. (And of course to have the latest LLVM from main ready in case I need it. Obviously.)
On extremely heavy workloads the fans do engage on my M1 Max, but I need to get my ear close to the machine to hear them.
Recently my friend bought a laptop with an Intel Core Ultra 9 185H. Its fans roared even when opening Word. That was extraordinary, and if it was me making the purchase I would have sent it back straight away.
My friend did fiddle a lot with settings and had to update the BIOS, and eventually the fan situation was somewhat contained, but man, I am never going to buy an Intel / AMD laptop. You don't know how annoying fan noise is until you get a laptop that is fast and doesn't make any noise. With Intel it's like having a drill pointed at your head that can go off at any moment, and let's not mention phantom fan noise, where it gets so imprinted in your head that your brain makes you think the fans are on when they are not.
Apple has achieved something extraordinary. I don't like MacOS, but I am getting used to it. I hope one day this Asahi effort will let us replace it.
When I play Baldur's Gate 3 on my M2 Max, the fans get loud. You need a workload that is both CPU-heavy and GPU-heavy for that. When you are stressing only the CPU or the GPU but not both, the fans stay quiet.
I went from an i9 MBP to an M1 Max earlier this year. I can't even describe it. Blows my mind.
Do you have an Air or Pro?
The 16” M1 is still a perfectly good machine for my work (web dev). Got a battery replacement which also replaced the top cover and keyboard - it's basically new. Extended AppleCare for another year, which will turn it into a fully covered 4-year device.
Is it crazy? The chip itself is small. I'm not up on the subject but is it unusual? Are we talking power draw and cooling adding greatly to the size? I guess the M4 Pro must have great specs when it comes to running cool.
Here's the geekbench link https://browser.geekbench.com/v6/cpu/8593555
How/where are they getting 128 GB of RAM? I don't see that as an option on any of the pre-order pages.
Still pretty impressive; I get 1217/10097 with dual Xeon Gold 6136s that double as a space heater in the winter.
Switch to the M4 Max 16-core CPU; it will unlock the 64 and 128 GB memory options.
Does anyone know if this Mac Mini can survive longer than a year? Apple's approach to hardware design doesn't prioritize thermal issues*.
In fact, the form factor is why I'm leaning toward taking a pass - I don't want a Mac Mini I would have to replace every 12 months.
* or rather, Apple doesn't target low enough temperatures to keep machines healthy beyond warranty
I'm not sure why you think it would be worse than a MacBook Air, which literally has no fan.
Are the new MacBook Airs the ones that have throttling issues due to heat?
Yes, but you only really encounter that when pushing the CPU to 100% for more than a few minutes. The cooling is objectively terrible, but still easily enough for most users, that's the crazy thing.
Maybe? As local LLM/SD etc. get more common, it might become common to push it. I've been getting my fans to come on and the machine burning hot quite often lately because of new tech. I get that I'm a geek, but with Apple, Google and everyone else trying to run local ML it's only a matter of time.
Apple's chips already have AI accelerators for things like content-based image search. They would never retroactively worsen battery life and performance just for a few more AI features when they could instead use it as selling point for the next hardware generation.
And if you regularly use local generative AI models the Pro model is the more reasonable choice. At that point you can forget battery life either way.
After posting this I thought of a few possible use cases. They might never come to pass, but... Some tech similar to DLSS might come along that lets streaming services like YouTube and Netflix send 1/10th the data and get twice as good an image, but require extreme processing on the client. It would certainly be in their interest (less storage, less bandwidth, decompression/upscaling costs pushed to the client). Whether that will ever happen I have no idea. I was just trying to think of an example of something that might need lots of compute power at home for the masses.
Another could be realtime video modification. People like to stream and facetime. They might like it even more if they could change their appearance more than they already can using realtime ML based image processing. We already have some of that in the various video conferencing / facetime apps but it's possible it could jump up in usage and needed compute power with the right application.
Hopefully not? I honestly don't know. It's been around three years (whichever year it was they replaced Target Disk Mode) since I followed Apple news very closely.
Do you have any source on this?
It might be different post-Intel? I'm too lazy to dig up sources for Apple's past lost class action lawsuits, etc.
That Rossmann guy, the internet-famous repairman, built his YouTube channel on videos about Apple's inadequate thermal management. They're probably still archived on his channel.
Hell, I haven't owned a Mac post the year 2000 that didn't regularly hit temperatures above 90 Celsius.
Why would you, or anyone, ever compare a line of Intel machines with a line of machines that have a vastly different architecture and power usage? It'd be like comparing Lamborghini's tractors and cars and asking if the tractors will scrape on steep driveways because you know the cars do.
On the other hand, it is comparing Apples to Apples.
The Gods didn't deliver specs to Apple for Intel machines locking the company to placement/grades/design/brands/sizes of chassis, fans, logic board, paste etc. Apple, in the Intel years, just prioritized small form factor, at the expense of longevity.
And Apple's priorities are likely still the same.
My concern is that, given cooler-running chips, Apple will decrease the form factor until even the cooler-running chips overheat. The question, in my mind, is only whether the team at Apple that designs chips can improve them to a point where the chips run so coolly that the rest of Apple can't screw it up (i.e., with inadequate thermal design).
If that has happened, then... fantastic, that's good for consumers.
Jony Ive left and Apple decided thinness wasn't the only value.
100%, Apple Silicon is that for computers. Very rarely do my fans whizz up. It's noticeable when someone is using an x64 machine and you're working with them, because you will hear their computer's fans.
The work Apple has done to create a computer with good thermals is outrageous. Minimising the distances that charges have to be induced over.
I run Linux on my box. It’s great for what it does but these laptops are just the slickest computers I have ever used.
Never gets hot. Fans only come on during heavy compilation tasks or graphic intensive workloads.
That is encouraging to read, and hopefully it truly is the case that Apple has weaned itself from its obsession with thinness.
Some of the choices Apple made after SJ's death left such an unpleasant taste in my mouth that I now have knee-jerk reactions to certain Apple announcements. One of those is that I experience nausea when Apple shrinks the form factor of a product. Hopefully that has clouded my judgement here, and in fact these Mac Minis have sufficient airflow to survive several years.
Do you disagree with Intel's stated Tjunction, or disagree that Intel is capable of controlling clocks to remain within its stated thermal limits?
Like even with Intel chips that actually died early en masse (13th and 14th gen), the issue wasn't temperature.
2x the correct amount of thermal paste... not good.
Insufficient airflow from blowers... not good.
110 Celsius heat... not good for lead-free solder... not good for the computer.
This whole thread is starting to feel surreal to me. Pretty soon everyone will have me believing I dreamt up Apple's reputation for bad thermal management.
Well, when you don’t appear to know or care about the actual issues stemming from poor thermals (Intel relying too much on turbo clocks, toasty crotches, low battery life, noisy fans) and instead complain about made-up issues, yeah.
My frustration was with the totality of comments in the thread, not yours exclusively. I'd have no problem with any one reply in this thread, on its own. Apologies if I came across as rude.
There's nothing in a comment thread so cringeworthy and boring as a person trumpeting their own expertise, so I'll refrain, and leave off here.
I've had a mac mini m1 on my desk with nearly 100% uptime since launch.
It only gets powered off when there's a power outage or when I do an update.
> Does anyone know if this Mac Mini can survive longer than a year? Apple's approach to hardware design doesn't prioritize thermal issues.
I've had an M1 Mac Mini inside a hot dresser drawer with a TV on top since 2020.
It doesn't do much other than act as a media server. But it's jammed pretty tight in there with an eero wifi router, an OTA ATSC DVR, a box that records HDMI, a 4K AppleTV, a couple of external drives, and a full power strip. That's why it's hot.
So far, no problems. Except for once when I moved, it's been completely hands-off. Software updates are done over VNC.
I'll admit to some reflexive skepticism here. I know GeekBench at least used to be considered an entirely unserious indicator of performance and any discussion relating to its scores used to be drowned out by people explaining why it was so bad.
Do those criticisms still hold? Are serious people nowadays taking Geekbench to be a reasonably okay (though obviously imperfect) performance metric?
I verified Geekbench results to be very tightly correlated with my use case and workloads (JVM, Clojure development and compilation) as measured by my wall times. So yes, I consider it to be a very reliable indicator of performance.
Curious how you verified that? I should possibly do the same.
Run Geekbench on a sample of hardware. Run your workload along same hardware. Regress.
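For instance, a minimal sketch of that regression, assuming you've already collected Geekbench 6 single-core scores and wall-clock times for the same build task on each machine (the numbers below are made-up placeholders):

    import numpy as np
    from scipy import stats

    gb6_single = np.array([1700, 2300, 2900, 3200, 3800])  # placeholder scores
    wall_times = np.array([410, 305, 245, 221, 188])       # placeholder seconds

    # If the benchmark is a good proxy, wall time should scale roughly with
    # 1/score, so regress time against the reciprocal of the score.
    slope, intercept, r, p, stderr = stats.linregress(1.0 / gb6_single, wall_times)
    print(f"R^2 = {r**2:.3f}")  # close to 1.0 means Geekbench tracks this workload well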
That's not very scientific at all. With how close the CPUs are, how would you compare the tiny differences?
You’re not the only one — but just curious where this skepticism comes from.
This is M4 — Apple has now made four generations of chips and each one was class-leading upon release. What more do you need to see?
> Apple has now made four generations of chips and each one was class-leading upon release.
Buying up most of TSMC's latest node capacity certainly helps. Zen chips on the same node turn out to be very competitive, but AMD don't get first dibs.
It’s more like Apple fronts the cash for TSMC’s latest node. But regardless, in what way does that detract from their chips being class-leading at release?
Because the others can't use that node, there are no others in that same class. If there was a race, but one person is on foot, and the other is in a car, it's not surprising if the person in the car finishes first.
I have some sympathy with this view because it's in no way mass market.
Nevertheless, product delivery is a combination of multiple things of which the basic tech is just one component.
Eventually Apple moves off a node and the others move on.
People pretend like this isn’t a thing that’s already happened, and that there aren’t fair comparisons. But there are. And even when you compare like for like Apple Silicon tends to win.
Line up the node, check the wattages, compare the parts. I trust you can handle the assignment.
I disagree with the wording "AMD don't get first dibs". It's more like "AMD won't pay for first dibs"
I don’t think a lot of people fully understand how closely Apple works with TSMC on this, too. Both in funding them, holding them accountable, and providing the capital needed for the big foundry bets. It’s kind of one of those IYKYK things, but Apple is a big reason TSMC actually is the market leader.
If that's all we cared about we wouldn't be discussing a Geekbench score in the first place. The OP could have just posted the statement without ever mentioning a benchmark.
I was just curious if people had experience with how reliable Geekbench has been at showing relative performance of CPUs lately.
I don’t think they are skeptical of the chip itself. Just asking about the benchmark used.
If I was reviewing cars and used the number of doors as a benchmark for speed, surely I’d get laughed at.
Right, but we keep repeating this cycle. A new M-series chip comes out, the Geekbench scores leak, and it's class-leading.
Immediately people “but geEk BeNcH”
And then actual people get their hands on the machines for their real workloads and essentially confirm the geekbench results.
If this was the first time, then fair enough. But it’s a Groundhog Day style sketch comedy at this point with M4.
I blame it on the PC crowd being unconsciously salty the most prestigious CPU is not available to them. You heard the same stuff when talking about Android performance versus iPhone.
There is a lot to criticize about Apple's silicon design, but they are leading the CPU market in terms of mindshare and attention. All the other chipmakers feel like they're just trying to follow Apple's lead. It's wild.
I was surprised and disappointed to see that the industry didn’t start prioritizing heat output more after the M1 generation came out. That was absolutely my favorite thing about it, it made my laptop silent and cool.
But anyway, what is it you see to criticize about Apple's Apple Silicon design? The way RAM is locked on package so it's not upgradable, or something else?
I’m kind of surprised, I don’t hear a lot of people suggesting it has a lot to be criticized for.
It was wild to see the still-ongoing GHz overclocking competition while suddenly one could use a laptop with good performance, no fans, and no noise, even while mobile.
By the way, have you heard about the recent Xiaomi SU7 being the fastest four-door car on the Nürburgring Nordschleife?
It has 4 doors! It's all over the shitty car news media. The car is a prototype with only one seat though.
As demonstrated by the M1-M3 series of chips, essentially all of that lead was due to being the first chips on a smaller process, rather than to anything inherent to the chip design. Indeed, the Mx series of chips tend to be on the slower side of chips for their process sizes.
Show your work.
Most people who say things like this tend to deeply misunderstand TDP and end up making really weird comparisons. Like high wattage desktop towers compared to fan-less MacBook Airs.
The process lead Apple tends to enjoy no doubt plays a huge role in their success. But you could also turn around and say that’s the only reason AMD has gained so much ground against Intel. Spoiler: it’s not. Process node and design work together for the results you see. People tend to get very stingy with credit for this though if there’s an Apple logo involved.
In power efficiency maybe, but not top performance
Literally yes top single core performance. (And incidentally also efficiency)
I stand corrected, that's incredible.
https://www.cpubenchmark.net/singleThread.html
https://www.cpu-monkey.com/en/cpu_benchmark-cinebench_r23_si...
https://www.cpubenchmark.net/high_end_cpus.html
Did no one check the scores? They're not the top consumer CPU by quite a range. It's probably the best power per watt, but not the most powerful CPU.
The performance-per-watt isn’t necessarily the best. These scores are achieved when boosted and allowed to draw significantly more power. Apple CPUs may seem efficient because, most of the time, computers don’t require peak performance. Modern ARM microarchitectures have been optimized for standby and light usage, largely due to their extensive use in mobile devices. Some of MediaTek and Qualcomm's CPUs can offer better performance-per-watt, especially at lower than peak performance. The issue with these benchmarks is that they overlook these nuances in favor of a single number. Even worse, people just accept these numbers without thinking about what they mean.
Geekbench is an excellent benchmark, and has a pretty good correlation with the performance people see in the real world where there aren't other limitations like storage speed.
There is a sort of whack-a-mole thing where adherents of particular makers or even instruction sets dismiss evidence that benefits their alternatives, and you find that at the root of almost all of the "my choice doesn't win in a given benchmark means the benchmark is bad" rhetoric. Then they demand you only respect some oddball benchmark where their favoured choice wins.
AMD fans long claimed that Geekbench was in cahoots with Intel. Then when Apple started dominating, that it was in cahoots with ARM, or favoured ARM instruction sets. It's endless.
Any proprietary benchmark that's compiled with the mystery meat equivalent of compiler/flags isn't "excellent" in any way.
SPECint compiled with either the vendor compiler (ICC, AOCC) or the latest gcc/clang would be a good neutral standard, though I'd also want to compare SIMD units more closely with x265 and Highway based stuff (vips, libjxl).
And how do you handle the fact that you can't really (yet) use the same OS for both platforms? Scheduler and power management counts, even for dumb number crunching.
I'd be reflexively skeptical if I didn't have a M1 Mac. It really is something.
I'm not skeptical of Apple's M-series chips. They have proven themselves to be quite impressive and indeed quite competitive with traditional desktop CPUs even at very low wattages.
I'm skeptical of Geekbench being able to indicate that this specific new processor is robustly faster than say a 9950x in single-core workloads.
It's robustly faster at the things that Geekbench is measuring. You can find issue with the test criteria (measures meaningless things or is easy to game) but the tests themselves are certainly sound.
> You can find issue with the test criteria (measures meaningless things or is easy to game).
That's exactly their point.
On the other hand, I have yet to see any benchmark where people didn’t crawl out of the woodwork to complain about it.
It'll still be at the top of SPECint 2017 which is the real industry standard. Geekbench 6.3 slightly boosted Apple Silicon scores by adding SME - a very niche instruction set extension which is never used in SPECint workloads. So the gap may not be as wide as GB6.3 implies.
Does SPECint cover heavily memory bound pointer chasing stuff? Not up to date.
Yes, yes it does.
https://www.spec.org/cpu2017/Docs/overview.html#Q13
GB6 is great. Older versions weren’t always very representative of real workloads. Mostly because their working data sets were way too small.
But GB6 aligns pretty well with SPEC2017.
It's by no means a be all end all "read this number and know everything you need to know" benchmark but it tends to be good enough to give you a decent idea of how fast a device will be for a typical consumer.
If I could pick 1 "generic" benchmark to base things off of I'd pick PassMark though. It tends to agree with Geekbench on Apple Silicon performance but it is a bit more useful when comparing non-typical corner cases (high core count CPUs and the like).
Best of all is to look at a full test suite and compare for the specific workload types that matter to you... but that can often be overkill if all you want to know is "yep, Apple is pulling ahead on single thread performance".
You are thinking of AnTuTu.
If it shows a good result for Apple then it's perfectly accurate, otherwise it's flawed.
I'm confused. They're claiming "Apple’s M4 Max is the first production CPU to pass 4000 Single-Core score in Geekbench 6." yet I can see hundreds of other test results for single core performance above 4000 in the last 2 years?
Are those production results?
https://browser.geekbench.com/v6/cpu/1962935 says it was running at 13.54 GHz. https://browser.geekbench.com/v6/cpu/4913899 looks... questionable.
https://browser.geekbench.com/v6/cpu/7531877 seems fine?
7614 MT/s on the RAM is a pretty large overclock for desktop DDR5.
There are 8000MT/s CUDIMMs for the new Intel Chips now...
They've been announced, within the past two weeks, and as far as I can tell aren't actually available for purchase from retailers yet: the only thing I've seen actually purchasable is Crucial's 6400MT/s CUDIMMs, and Newegg has an out-of-stock listing for a G.Skill kit rated for 9600MT/s.
The linked Geekbench result from August running at 7614 MT/s clearly wasn't using CUDIMMs; it was a highly-overclocked system running the memory almost 20% faster than the typical overclocked memory speeds available from reasonably-priced modules.
Geekbench is run pre-release by the manufacturers.
But by definition that means it’s not a production machine yet.
So it doesn’t invalidate Apple‘s chip being the fastest in single core for a production machine.
It seems to be a pretty large outlier. https://browser.geekbench.com/processors/intel-core-i7-13700...
Yeah that's fair lol
As far as I can tell those are all scores from overclocked CPUs.
https://browser.geekbench.com/v6/cpu/7531877 doesn't seem to be
That result is completely different from pretty much every other 13700k result and it is definitely not reflective of how a 13700k performs out of the box.
Geekbench doesn't really give accurate information (or enough of it) in the summary report to make that kind of conclusion for an individual result. The one bit of information it does reliably give, memory frequency, says the CPU's memory controller was OC'd to 7600 MT/s from the stock 5600 MT/s, so it feels safe to say that a number with 42% more performance than the entry in the processor chart also had some other tweaks going on (if not actual frequency OCs/static frequency locks then exotic cooling or the like). The main processor chart https://browser.geekbench.com/processor-benchmarks will give you a solid idea of where stock CPUs rank - if a result has double-digit differences from that number, assume it's not a stock result.
E.g. this is one of the top single-core benchmark results for any Intel CPU https://browser.geekbench.com/v6/cpu/5568973 and it claims the maximum frequency was stock as well (actually 300 MHz less than the thermal velocity boost limit, if you count that).
AMD's upcoming flagship desktop CPU (9800X3D) reaches about 3300 points in single-core (the previous X3D hit 2700ish).
Are you saying a product that has not been released yet will be faster than a product that is?
And that a desktop part is going to outperform a laptop part?
No, neither of those.
I think he was backing up Apple's claim.
Could those be overclockers? I often see strange results on there that look like either overclockers or prototypes. Maybe they mean this is the fastest general-purpose single core you can buy off the shelf with no tinkering.
All I want is a top-of-the-line MBP, with all its performance and insane battery life, but running a Linux distro of my choice :(
These things are so fast that you can run a virtual Linux without even noticing performance issues.
Asahi Linux
Doesn't run on M3 or M4 yet.
I'm guessing "of my choice" was key there. Though I suppose you could use asahi just as a virtualizer.
Agreed, but probably getting a Lenovo Legion will be your best bet in the near term.
I'm driving a 2022 XPS. Lots of people will (and should) disagree, but I've completely shifted over from Thinkpads to Dell XPS (or Precision) for my laptops.
Running a 2024 xps 13 with Ubuntu for work and it's been solid. Had a Lenovo before this which was great bang for the buck but occasional issues with heating up during sleep. Would consider trying a Framework next.
Am I missing something? I don't know where this information came from, but you can check out the Geekbench v6 single-core benchmarks here.
https://browser.geekbench.com/v6/cpu/singlecore
Something is fishy when the top two claim to be Android phones running Ryzen chips, and the third is allegedly a 13GHz Core i3.
Followed by an i9 with 5 cores and a multicore benchmark score of zero.
Probably crashed before it could complete the test.
Gosh, back in my day, we were lucky to squeeze out a few extra MHz. The overclockers today must be really skilled.
This is a short and fascinating peek into that world: https://youtu.be/qr26jxPIDm0
The better chart is https://browser.geekbench.com/processor-benchmarks/ which tries to discount outliers that might be liquid nitrogen cooled and/or faked
The M4 is almost 1/3rd faster than the top Intel (on this benchmark)?
I had no idea the difference was that big. I don't know what a normal Geekbench score is, so I just sort of assumed that the top-of-the-line Intel part would be something like 3700 or 3800. Enough that Apple clearly took a lead, but nothing crazy.
No wonder it’s such a big deal.
Even though that was updated hours ago, it doesn't list Zen 5 or Arrow Lake.
"Do not trust any benchmarks you did not fake yourself"
I have been out of the PC world for a long time, but in terms of performance efficiency, is Apple running away from the competition? Or are AMD and Intel producing similar performing chips at the same wattage?
Apple is slightly pulling away. AMD's top desktop chips were on par with M1/M2/M3 in 1T, but now they cannot even match the M4 despite releasing a new design (Zen 5) this year.
It's partially because AMD is on a two year cadence while Apple is on approximately a yearly cadence. And AMD has no plans to increase the cadence of their Zen releases.
2020 - M1, Zen 3
2021 - ...
2022 - M2, Zen 4
2023 - M3
2024 - M4, Zen 5
Edit: I am looking at peak 1T performance, not efficiency. In that regard I don't think anyone has been close.
> Edit: I am looking at peak 1T performance, not efficiency. In that regard I don't think anyone has been close.
Indeed. Anything that approaches Apple performance does so at a much higher power consumption. Which is no biggie for a large-ish desktop (I often recommend getting middle-of-the-road tower servers for workstations).
Don’t thermals basically explode non-linearly with speed?
It’s possible Apple’s chips could be dramatically faster if they were willing to use up 300W.
I remember seeing an anecdote where Johny Srouji, the chief Apple Silicon designer, said something like the efficiency cores get 90% of the performance of the performance cores at like 10% of the power.
I don’t remember the exact numbers but it was staggering. While the single core wouldn’t be as high, it sounded as if they could (theoretically) make a chip of only 32 efficiency cores and just sip power.
Their margins tend to allow them to always use the latest TSMC process so they will often be pretty good just based on that. They are also ARM chips which obviously have been more focused on efficiency historically.
They actually work with TSMC to develop the latest nodes. They also fund the bulk of the development. It's not as if without Apple's funds someone else will get the same leading edge node.
To a greater or lesser extent, Apple funds TSMC's latest nodes.
Oh how the mighty have fallen. For decades, when comparing Mac versus PCs, it was always about performance, with any other consideration always derided.
Yet here we are, with the excuses of margins and silicon processes generations. But you haven't answered the question. Is Apple pulling ahead or is the x86 cabal able to keep up?
Apple is ahead. The fab stuff is an explanation of why they are ahead, not an excuse for being behind.
My assessment is that ARM is running away from the competition. Apple is indeed designing the chip, but without the ARM architecture, Apple would have nothing to work with. This is not to diminish the incredible work of Apple’s VLSI team who put the chip architecture together and deftly navigated the Wild West of the fabrication landscape, but if you look at the specialized server chip side, it’s now dominated by ARM IP. I think ARM is the real winner here.
Even compared to other ARM cores, Apple is in a league of its own.
They have a good silicon design team, but having so much money that they can just buy out exclusive access to TSMCs most advanced processes doesn't hurt either. The closest ARM competitor to the M4, the Snapdragon X Elite, is a full node behind on 4nm while Apple is already using 2nd generation 3nm.
So then it should be comparable to the M1 or M2? Which isn't bad at all, if true.
But is it, for the same power consumption?
For some benchmarks the Snapdragon is on par with the M3. But the weirdo tests I found online did not say which device they compared, since the M3 is available in fan-less machines, which limits its potential.
Outside of Ampere(who are really more server focused) who else is designing desktop/laptop ARM cpus?
That’s a really fair point. I think it’s tough for anyone else to break into the consumer / desktop segment with ARM chips. Apple can do it because they control the whole stack.
Qualcomm and Nvidia.
NVIDIA don’t have custom cores
What if I told you they don't need them.
What if I told you that the rest of the context was about custom cores.
They also have the advantage that they could break software compatibility with the M1, e.g. using 16 kB pages and 128 byte cache blocks.
They do mixed page mode depending on the program running.
Is this really about the architecture itself, or about the licensing of it? AMD and Intel are, afaik, the only ones legally allowed to use x86, and likely have no plans to allow anyone else.
For many workloads I think they are definitely pulling ahead. However, I think there is still much to gain in software. For example, my Linux/Fedora desktop with a 5900X is many times more responsive than my 16” M1 Pro.
Java runs faster. GraalVM-generated native images run way faster. Golang runs faster. x86_64 has seen more love from optimizations than aarch64 has. One of the things I hit was different GC/memory performance due to different page sizes. Moreover, Docker runs natively on Linux, and the network stack itself is faster.
But even given all of that, the 16” M1 Pro edges close to the desktop (when it is not constrained by antivirus). And it does this in a portable form factor, with way less power consumption. My 5900X tops out at about 180W.
So yes, I would definitely say they are pulling ahead.
I suspect that’s an OS issue. Linux is simply more optimized and faster at equivalent OS stuff.
Which isn’t too surprising given a lot of the biggest companies in the world have been optimizing the hell out of it for their servers for the last 25+ years.
On the flipside of the coin though Apple also clearly optimizes their OS for power efficiency. Which is likely paying some good dividends.
AMD's latest Strix Point mobile chips are on par with M3 silicon: https://youtube.com/watch?v=Z8WKR0VHfJw
I was looking into this recently as my M1 Max screen suddenly died out of the blue within warranty and Apple are complete crooks wrt honouring warranties.
The AMD mobile chips are right there with M3 for battery life and have excellent performance only I couldn't find a complete system which shipped with the same size battery as the MBP16. They're either half or 66% of the capacity.
> and Apple are complete crooks wrt honouring warranties
Huh? I've used AC for both MBP and iPhones a number of times over the years, and never had an issue. They are known for some of the highest customer ratings in the industry.
They claimed that it wasn't covered because the machine was bought in Germany. I live in the Netherlands and brought it here. Also, I contacted Apple Support to check my serial number, and they gave me the address to take it to. Which I did.
They charged me $100 to get my machine back without repair.
Also bear in mind that the EU is a single market, warranties etc are, by law, required to be honoured over the ENTIRE single market. Not just one country.
Especially when the closest Apple Store to me is IN GERMANY.
I have since returned it to Amazon who will refund it (they're taking their sweet time though, I need to call them next week as they should have transferred already).
So you haven't purchased it from Apple but instead you've purchased it from Amazon. This may change things. In Europe you have two ways of dealing with it: either the manufacturer warranty (completely goodwill and on terms set by the manufacturer) or your consumer rights (granted to you by law, overruling any warranty restrictions).
Sellers will often try to steer you toward the warranty, as it removes their responsibility; Amazon is certainly shady here. Apple will often straight up give you a full refund or a new device (often a newer model); that happened to me with quite a few iPhones and MacBooks.
Know your rights.
I had a MacBook Pro 2018 that a friend of mine bought for me in Moscow because it was much cheaper there (due to grey imports, I think). I didn't have AppleCare or anything. When its Touch Bar stopped working in 2020, I brought it to the Apple Store in Amsterdam and complained about it and also about the faulty butterfly keys (one keycap fell off, and the "t" and "e" keys were registering double presses each time). So the guys at the Apple Store simply took it and replaced the whole top case, so I got a new keyboard, new Touch Bar, and - the best part - a new battery.
10 years ago, the Genius Bar would fix my problem to my satisfaction in almost no time -- whether or not I had Apple Care. They'd send it off for repair immediately or fix it in less than an hour. 2 out of 3 iPhone 6 that had an issue, they just handed me a new device.
Today, Apple wastes my time.
Instead of the old experience of common sense, today the repair people apparently must do whatever the diagnostic app on the iPad says. My most recent experience was spending 1.5 hours convincing the guy to give me a replacement AirPods case. The time before that was a screen repair where they broke a working Face ID... and then told me the diagnostics app said it didn't work, so they wouldn't fix it.
I'm due for another year of AppleCare on my MBP M1, and I'm leaning towards not re-upping it. Even though it'd be 1/20th of the cost of a new one, I don't want to waste time arguing with them anymore.
I would go multiple routes with Apple if you're able. They tend to be pretty good with in warranty and even out of warranty.
Is that in performance, or performance while matching thermals?
If a competing laptop has to run significantly hotter/louder than in my mind that’s not on par.
The M1 was a complete surprise, it was so far ahead that it was ridiculous.
The M2-4 are still ahead (in their niche), but since the M1, Intel and AMD have been playing catchup.
Or more accurately AMD is playing catch up with the Strix series, while Intel seems too busy shooting themselves in the foot to bother.
> is Apple running away from the competition?
No.
On the same node, the performance is quite similar. Apple's previous CPU (M3) has been a 3nm part, while AMD's latest and greatest Zen 5 is still on TSMC's 4nm.
There have always been higher performing x64 chips than the M series but they use several times more power to get that.
Apple seems to be reliably annihilating everyone on performance per watt at the high end of the performance curve. It makes sense since the M series are mobile CPUs on ‘roids.
Irony being that’s the same thing Intel learned from the P4.
They gave it up for Core, grown from their low power chips, which took them far further with far less power than the P4 would have used.
TSMC is running away from the competition
How impressed should I be, in terms of Apple's history of making chips compared to, say, Intel's? This is their 4th generation of the M chip and it seems to be so far ahead of Intel, a company with a significantly bigger history of chip production.
They were in the PowerPC consortium starting in 1991, co-developed ARM6 starting in the late 80s and the M series chips are part of the Apple Silicon family that goes back to at least 2010's Apple A4 (with non-Apple branded chips before then).
They've been in the chip designing business for a while.
> back to at least 2010's Apple A4
Basically Jim Keller happened; I think they are still riding on that architecture.
Actually difficult to know if it was Keller. Apple bought PA Semi which is where he came from. But he came on as a VP after it was founded by other engineers who had worked on the Alpha chips a year before that. Did he make the difference? Who knows.
What does seem to be constant is that the best CPU designs have been touched by the hands of people who can trace their roots to North Eastern US. Maybe the correlation doesn't exist and the industry is small and incestuous enough that most designs are worked on by people from everywhere, but sometimes it seems like some group at DEC or Multiflow stumbled on the holy grail of CPU design and all took a drink from the cup.
It is impressive, but it is also important to remember that Intel, AMD, and Qualcomm make dozens of different chips while Apple makes a handful. That means they can't be as focused as Apple.
The majority of those different chips use the same cores though as each other, and vary mostly in packaging or binning.
It’s not quite a fair comparison, given Intel has their own fab, while Apple uses TSMC—and pays them a lot to get exclusive access to new nodes before anyone else.
Yet Intel is using TSMC for their latest chips now too.
The perf and power use is nice but I don't need the dual architecture stuff in my professional life.
Been extremely happy with Windows and WSL last couple years, so happy to be a node or two behind on AMD laptops.
Otherwise I use a workstation primarily anyway.
I have a M1 MacBook Air that I use for Docker, VSCode, etc. And it still runs very smoothly. Interestingly, the only times it slows down is when opening Microsoft Excel.
Same, BUT my 16 GB of RAM is not enough and the disk keeps filling up.
Excel performance on Mac is a disaster, and I don't understand why.
Every time I paste something it lags for 1-2 seconds… so infuriating!!
Don’t worry, it’s bad on Windows too
Whenever it's open, system animations are very janky.
It was originally a Mac app before Microsoft bought it too, wasn’t it?
So what is the role of the Mac Studio now?
It only has:
- faster memory and up to 192 GB.
- 1 extra Thunderbolt port.
That is not much for such a large price difference:
Mac Mini (fastest CPU, 64 GB ram, 1 TB SSD, 10 GbE): $2500
Mac Studio (fastest CPU, 64 GB ram, 1 TB SSD, 10 GbE): $5000
The Mac Studio hasn't been updated yet. The equation changes once it's also on the M4 Max and Ultra.
Does it? What can it do better than M4 / 128GB…
Well, judging by the M1 and M2, the M4 Ultra will support 256GB of memory, so there's that. And it will have 2x the GPU and 2x the CPU cores...
And the port assortment is overall nicer in terms of not requiring an External TB4 hub for production environments (I literally have something plugged into every port on my M1 Max Mac Studio, even on the front!)
> Mac Mini (fastest CPU, 64 GB ram, 1 TB SSD, 10 GbE): $2500
> Mac Studio (fastest CPU, 64 GB ram, 1 TB SSD, 10 GbE): $5000
In those configurations, the Studio would have roughly 2x the GPU power of the Mini, with equivalent CPU power. It also has twice as many Thunderbolt ports (albeit TB4 instead of TB5), and can support more monitors.
It's probably also got better cooling. And you get some ordinary USB sockets as well!
AFAIK, memory bandwidth. M2 Ultra 800GB/s, whereas M4 Max is just 546GB/s. For example, local LLM inference has a big bottleneck on bandwidth. 50% extra is significant.
I wish the Studio would receive an upgrade, with a new M4 Ultra potentially going over 1 TB/s. It also offers better cooling for long computations.
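As a rough back-of-the-envelope for why the bandwidth gap matters (the model size is a purely hypothetical assumption, and this ignores KV cache, batching and compute limits):

    # Crude upper bound: token rate <= memory bandwidth / bytes streamed per token.
    def max_tokens_per_sec(bandwidth_gb_s: float, weights_gb: float) -> float:
        return bandwidth_gb_s / weights_gb

    weights_gb = 40.0  # hypothetical ~70B-parameter model at ~4-bit quantization
    print(max_tokens_per_sec(800.0, weights_gb))  # M2 Ultra-class: ~20 tok/s ceiling
    print(max_tokens_per_sec(546.0, weights_gb))  # M4 Max-class: ~13.7 tok/s ceiling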
The GPU difference might be material.
But it is obviously a bad time to invest in a Mac Studio.
This confuses me, because I thought all of the Mx-series chips in the same generation ran at the same speed and had the same single-core capabilities?
The main thing that caused differential single-core CPU performance was just throttling under load for the devices that didn't have active cooling, such as the MacBook Air and iPad Pros.
Based on this reasoning, the M4, M4 Pro and M4 Max in active cooled devices, the MacBook Pro and Mac Mini, should have the same single-core performance ratings, no?
It might be down to the memory latency, the base M4 uses LPDDR5X-7500 while the bigger models use LPDDR5X-8533. I think that split is new this generation, and the past gens used the same memory across the whole stack.
Ah, interesting. I didn't catch that change.
The Pro and Max have more cache and more memory bandwidth. Apple also appears to be crippling the frequency by a percent or two so that the Max can be the top.
Too bad it's still sluggish for latest tech game dev with engines like UE :( It'd be great to ditch the Windows ecosystem, at least at dev time.
Game performance is often GPU bound, not CPU bound.
It depends on the game. If there are a lot of game simulation calculations to do for every frame, then you're going to be CPU constrained. If it's a storybook that you're walking through and every pixel is raytraced using tons of 8000x8000 texture maps, then it's going to be GPU constrained.
Most games are combinations of the two, and so some people are going to be CPU limited and some people are going to be GPU limited. For games I play, I'm often CPU limited; I can set the graphics to low at 1280 x 720, or ultra at 3840 x 2160 and get the same FPS. That's CPU limiting.
I recently swapped out my AMD 3800X with a 5900X as an alternative to a full platform upgrade. I got it mostly for non-gaming workloads, however I do enjoy games.
Paired with my aging but still chugging 2080Ti, the max framerates in games I play did not significantly increase.
However I did get a significant improvement in 99-percentile framerate, and the games feel much smoother. YMMV, but it surprised me a bit.
> If there are a lot of game simulation calculations
Why not move at least some of that into the GPU as well? Lots of different branchy code paths for the in-game objects?
Latency of communication with GPU is too large, and GPUs suck at things CPUs are good at.
Do you have a source? My experience is the opposite is often true
It's called Cyberpunk 2077
Good news! It's coming to Apple Silicon next year...
https://www.cyberpunk.net/en/news/50947/just-announced-cyber...
1. parent is talking about Unreal Editor, not playing games
2. yes, different pieces of software have different bottlenecks under different configurations... what is the point of a comment like this?
Even if it's a relatively niche tool, if that's the tool your job depends on, then that's what gates whether you can use the computer.
And priced as the rest of them together.
Can't wait to see if / when they release the M4 Ultra.
I bet the Studio and the Pro will have that option. I'm hoping the Pro has more versatile PCIe slots as well.
I don't trust Geekbench.
How many people does this actually affect? Gamers are better off with AMD X3D chips, and most productivity workloads need good multicore performance. Obviously the M4 is great silicon and I don't want to downplay that, but I'm not sure that best single-core performance is an overly useful metric for the people who need performance.
Single core performance is what I need as a developer for quick compilation or updates of Javascript in the browser, when working on a Clojure/ClojureScript app. This affects me a lot.
It affects the millions of people that buy the machine by way of longevity.
Usually when I see advances, it's less about future-proofing and more about the obsolescence of old hardware. A more exaggerated case of this was in the 90s: people would upgrade to a 200 MHz P1 thinking they were future-proofing, but in a couple of years you had 500 MHz P2s.
People who browse the web and want the fastest JavaScript performance they can get.
Or users of Slack, Spotify, Teams... you name it. But I don't want that to become an excuse for Electron-like frameworks to be used even more just because we have super single-core computers available.
Even if we ignore them, most tasks people do on a computer end up being heavily influenced by single threaded performance.
Amdahl's law is still in control. For a great Mini, users' single-threaded performance is extremely important.
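To put a number on the Amdahl's law point, a quick sketch (the 80% parallel fraction is just an illustrative assumption):

    # Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
    # parallelizable fraction of the work and n is the number of cores.
    def speedup(p: float, n: int) -> float:
        return 1.0 / ((1.0 - p) + p / n)

    p = 0.80  # assume 80% of the work parallelizes
    for n in (1, 4, 16, 1024):
        print(n, round(speedup(p, n), 2))
    # Even with 1024 cores the speedup caps out near 5x, so the serial 20%
    # (i.e. single-core speed) dominates quickly.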
But for a JavaScript (browser or Electron) workload, the new 16 GB of starting RAM still isn't enough :)
Based on my limited knowledge. Most applications aren’t great at using all cores, so single-core performance is really important most of the time.
Yep. And even if they can use multiple threads, faster single threaded performance means each of those multiple threads gets done faster.
There’s a reason consumer CPUs aren’t slower with 1024 cores instead.
> Gamers are better off with AMD X3D chips
Yeah but then you'd have to use Windows. I'd rather just play whatever games can be emulated and take the performance penalty.
It helps that most AAAs put me to sleep...
The problem with the M chips is that you have to use macOS (or tinker with Asahi two generations behind). They are great hardware, but just not an option at all for me for that reason.
Mac OS is amazing; using a Mac for games is not a good idea. Most AAA, AA, and indie games don't run on Mac.
Mac OS was awful. OS X was amazing. macOS feels like increasingly typical design-by-committee rudderless crapware by a company who wishes it didn't have to exist alongside iOS.
How is it amazing? In my experience it is full of bugs and bad design choices if you ever dare to steer from the path Apple expects masses to take. If you try to use workspaces/desktops to the full extent, you know.
There's nothing stopping you from using Linux.
> Yeah but then you'd have to use Windows.
Why? Linux gaming has been great since Wine.
Even better now with Valve investment.
Surely leagues better than gaming with macOS.
Sir, you are not a real gamer(tm) either. Use a puny alternative OS and lose 3 fps, Valve support or not? Unacceptable!
As for Linux, I abandoned it as the main GUI OS for Macs about 10 years ago. I have linux and windows boxes but the only ones with displays/keyboards are the macs and it will stay that way.
I have all 3 OSs each on their own hardware:)
4 if you count steamdeck.
I do the real gaming, not some subpar emulated crap or anemic macOS steam library.
It is definitely not a given you lose FPS on Linux. It is not uncommon for games to give better FPS on Linux. It will all end up depending on the exact games you want to play.
That explains your comment then, lots of things changed from 10 years ago and gaming on Linux is pretty good now. The last games you can't play are the ones with strong anti-cheats basically. You can't compare that to the Mac situation where you can't play anything.
Intel should be utterly embarrassed.
Can’t wait for the Mac Studio/Pro to be released.
Wild. It absolutely shits on my 13900K - https://browser.geekbench.com/v6/cpu/compare/7692643?baselin...
Same, I have a Ryzen 9 7950X and it has 130-140% better performance (according to Geekbench)
https://browser.geekbench.com/v6/cpu/compare/8593555?baselin...
Though a MacBook Pro 16" with the M4 Max (that's what achieved this Geekbench score) and the same amount of memory (64 GB) and storage (4 TB) as my PC would cost 6079€. That is roughly twice as much as my whole PC build cost, and I'm able to expand storage and upgrade my CPU and GPU in the future (for way less than buying a new Mac in a few years).
Apple's upgrades are famously expensive; it's how they get their giant margins. As an Apple enthusiast, yeah, it sucks.
Anyway, they made an Ultra version of the M1 and M2 that was even better than the Max versions by having significantly more cores.
If they do that again (Mac Pro?) it will be one hell of a chip.
If they release a Studio without increasing prices next year, these specs will cost you 4500€. That's more comparable to your build (sans the upgrade options of course).
It’s so refreshing after the normal “here’s today’s enshitification of this thing you used to love” threads to read threads like this.
Anybody got a Speedometer 3.0 result from the M4 Max? It seems more relevant to "consumer computing".
It has to be at least 50 times the speed of a fast m68030 ;)
There's a 20x spread in Speedometer results on OpenBenchmarking, just including modern Intel Core CPUs, so yeah I would not be surprised if an M4 outran a 68030 by anywhere from 50x to 1000x
So Ultra used to be the max but now Max is max… until Ultra goes past the max and Max is no longer the max.
Until the next Max that goes beyond Ultra!
Apple has gotten into Windows and PC territory with their naming for chips and models. Kind of funny to see the evolution of a compact product line and naming convention slowly turn into spreadsheet worthy comparison charts.
That all said, I only have an M1 and it's still impressive to me.
I think they're still keeping it somewhat together, agree it got ever more confusing with the introduction of more performance tiers but after 3 generations it's been easy to keep track of: Mx (base) -> Pro -> Max -> Ultra.
Think it's still quite far away from naming conventions of PC territory.
Now I got curious on what naming scheme could be clearer for Apple's performance tiers.
I agree it’s kind of weird. I do wonder if the ultra was part of their original M1 plan, or it came along somewhere in the middle of development and they just had to come up with a name to put it above the Max.
That said it’s far better than any PC scheme. It used to be easy enough when everything was megahertz. But I left the Wintel world around 2006 or so and stopped paying attention.
I’ve been watching performance reviews of some video game things recently and to my ears it’s just total gobbledygook now. The 13900KS, 14900K, 7900X, 7950X3D, all sorts of random letters and numbers. I know there’s a method to the madness but if you don’t know it it’s a mess. At least AMD puts a generation in their names. Ryzen 9 is newer than Ryzen 7.
Intel has been using i3, i5, i7, and i9 forever. But the problem is you can’t tell what generation they are just from that, making them meaningless without knowing a bunch more.
At least as far as I know they didn’t renumber everything. I remember when graphics cards were easy because a higher number meant better, until the numbers got too big and they released the best new ones with a lower number for a while.
At least I find Apple’s name tractable both between generations and within a generation.
Ryzen 9 refers to the flagship tier, not the generation. It's the same scheme as i3...i9. The first digit of the 4-digit model number is the generation, though they've been incrementing it by 2.
This year's Zen 5 lineup consists of the R9 9950X (16 cores), R9 9900X (12c), R7 9800X3D (8c with 3D V-Cache), R7 9700X (8c), and R5 9600X (6c).
Max implies it's the top. There shouldn't be anything above max
What do you believe the word ultra means?
“Ultra” means going beyond others.
yeah, as I was jokingly implying, the names themselves aren't what I would have gone with, but overall sticking to generation + t-shirt size + 2 bins is about as simple as it gets.
Same. I have an M1 Air and it is an amazing machine!
Can't wait until there's an M4 Ultramax too!
And these CPUs will be available in… max!
Until there's an Ultramax+ Pro 2
Personally, I'm waiting for Panamax edition.
Will that be the maximum Ultra or the Ultimate Max?
This is almost certainly a dumb question, but has anybody tried using these for scientific computing/HPC type stuff?
I mean no Infiniband of course, but how bad would a cluster of these guys using Thunderbolt 5 for networking be? 80Gbps is not terrible…
People are using Thunderbolt clustering for AI inference. Historically Thunderbolt networking has been much slower than you'd expect so people didn't bother trying HPC.
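If you're curious what that link actually delivers before building anything, a quick single-stream test over the bridge is easy to run. Below is a minimal sketch in plain Python (hypothetical port and addresses; run "server" on one Mac, then "client <bridge-ip>" on the other); historically a single TCP stream over a Thunderbolt bridge lands well below the nominal link rate, which is exactly the "slower than you'd expect" problem:

    # Rough single-stream throughput check over a Thunderbolt Bridge link.
    # Hypothetical port; pass the other machine's bridge IP to the client.
    import socket, sys, time

    PORT = 5201                  # arbitrary, not tied to any real service
    CHUNK = 4 * 1024 * 1024      # 4 MiB send/receive buffer
    SECONDS = 10                 # how long the client transmits

    def server():
        with socket.create_server(("0.0.0.0", PORT)) as srv:
            conn, _addr = srv.accept()
            with conn:
                total, start = 0, time.time()
                while True:
                    data = conn.recv(CHUNK)
                    if not data:
                        break
                    total += len(data)
                elapsed = time.time() - start
                print(f"received {total / 1e9:.2f} GB in {elapsed:.1f}s "
                      f"= {total * 8 / elapsed / 1e9:.2f} Gbit/s")

    def client(host):
        buf = b"\0" * CHUNK
        with socket.create_connection((host, PORT)) as conn:
            end = time.time() + SECONDS
            while time.time() < end:
                conn.sendall(buf)

    if __name__ == "__main__":
        server() if sys.argv[1] == "server" else client(sys.argv[1])

One stream won't tell you everything; for anything MPI-shaped you'd also want multiple streams and, more importantly, a latency measurement, since latency tends to hurt HPC codes far more than raw bandwidth over a hop like this.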
Good enough to be used for gaming? Really want Apple to get into that because dealing with Windows sucks.
That depends on what you want to play and which other things that suck you’re willing to tolerate.
The GPUs in previous M chips aren’t beating AMD’s or Nvidia’s top offerings on anything except VRAM, but you can definitely play games with them. Apple released their Game Porting Toolkit a couple of years ago, which is basically like Wine/Proton on Linux; if you’re comfortable with Wine and with roughly what a Steam Deck can run, then that’s about what you can expect to run on a newer Mac.
Installing Steam or GOG Galaxy with something like Whisky.app (which leverages the Game Porting Toolkit) opens up a large number of games on macOS. Games that need Windows kernel-level anti-cheat (rootkits) are probably a pain point, and you’re probably not going to push all those video settings sliders to the far right for Ultra graphics on a 4K screen, but there are a lot of games that are very playable on macOS and M chips.
In addition to Whisky, it seems to not be well known that VMWare Fusion is free for personal use and can run the ARM version of Windows 11 with GPU acceleration. I tried it on my M1 Pro MBP and mid-range Steam games ran surprisingly well; an M4 should be even better.
Wow, had no idea this worked as well as it does. I remember the initial hype when this showed up but didn't follow along. Looks like I don't have to regard my Steam library as entirely gone.
Steam Deck-level performance is quite fine, I mainly just want to replay the older FromSoft games and my favorite indies every now and then.
Fair warning, I haven't dug that deep into compatibility issues or 32 bit gaming compatibility but it's definitely something to experiment with and for the most part you can find out for free before making a purchasing decision.
First and foremost, it's worth checking if your game has a native port (https://store.steampowered.com/macos). People might be surprised what's already available.
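If you'd rather check a whole backlog than click through the store, something like this works against Steam's storefront appdetails endpoint (unofficial and undocumented, and the field names here are from memory, so treat the whole thing as an assumption to verify):

    # Ask Steam's (unofficial) storefront API whether apps have a macOS build.
    import json
    import urllib.request

    def has_mac_port(appid: int) -> bool:
        url = f"https://store.steampowered.com/api/appdetails?appids={appid}"
        with urllib.request.urlopen(url) as resp:
            payload = json.load(resp)
        entry = payload.get(str(appid), {})
        if not entry.get("success"):
            return False
        return entry.get("data", {}).get("platforms", {}).get("mac", False)

    if __name__ == "__main__":
        # Example app IDs (Dota 2, Elden Ring); swap in your own library.
        for appid in (570, 1245620):
            print(appid, "native macOS build:", has_mac_port(appid))

Anything that comes back false is where the Wine-flavoured options below come in.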
With Wine syscall translation and Rosetta x86 code translation, issues do pop up from time to time, though: games with cutscenes encoded in Windows Media Player-specific formats, or other media codecs that aren't immediately available (games don't advertise those requirements anywhere). You may also encounter video stuttering or artifacts, since the hardware is dramatically different from what the developers were originally targeting and there are things happening in the background that an x86 Windows system never does. This isn't overly Mac-specific, since it usually impacts Linux equally, but it's a hurdle you don't have to deal with on native Windows. Like I said, playing Windows games outside of Windows is just a different set of pain points, and you have to be able to tolerate them. Some people think it's worth it; some would rather have higher game availability and keep the pain of Windows. Kudos to Valve for creating a Linux-based handheld, and to the Wine and Proton projects for improving this situation dramatically.
Besides the Game Porting Toolkit (which was originally intended for game developers to create native application bundles that could be put on the App Store), there's also Crossover for Mac that does their own work towards resolving a lot of these issues and they have a compatibility list you can view on their site: https://www.codeweavers.com/ and alternatively, some games run acceptably inside virtualization if you're willing to still deal with Windows in a sandboxed way. Parallels is able to run many games with better compatibility since you're actually running Windows, though last I checked DX12 was a problem.
With Thunderbolt 5 it should be fairly reasonable to use an external GPU for more power.
Apple Silicon Macs don't have support for eGPUs: https://support.apple.com/en-us/102363
Maybe with future TB5 support they will include that feature.
Apple no longer has drivers for anything newer than AMD RDNA2 and have completely dismantled the driver team.
Unless you're running bootcamp you're extremely limited by driver support.
Gaming isn't just about hardware, it's also about software, economics, trust and relationships.
Apple has quite impressive hardware (though their GPUs are still not close to high-end discrete GPUs), and it's fast enough. The problem is that Apple systematically does not have a culture that respects gaming or is interested in courting gamers. Games also rely on OS stability, but Apple has famously short and severe deprecation periods.
They occasionally make pushes in that direction, but I think they lack the will to make a concerted effort, and I also think they lack the restraint not to force everything through their own payment processors and distribution systems, which sours relations with developers.
Incidentally, CD Projekt announced Cyberpunk 2077 Ultimate Edition for Mac yesterday. There is hope! :)
https://www.cyberpunk.net/en/news/50947/just-announced-cyber...
They’ve always been good enough for gaming. The problem has just been whether or not publishers would bother releasing the games. It’s unfortunate that Apple can’t seem to really build enough momentum here to become a gaming destination.
Apple's aggressive deprecation policies haven't done them any favors when it comes to games, they expect software to be updated to their latest ISAs and APIs in perpetuity but games are rarely supported forever. In many cases the developers don't even exist anymore. A lot of native Mac game ports got wiped out by 32bit being EOL'ed, and it'll probably happen again when they inevitably phase out Rosetta 2 and OpenGL support.
It has always baffled me why Apple doesn't take gaming seriously. It's another revenue stream, it would sell more Macs. It's profit.
Is it just some weird cultural thing? Or is there some kind of genuine technical reason for it, like it would involve some kind of tradeoffs around security or limiting architecture changes or something?
Especially with the M-series chips, it feels like they had the opportunity to make a major gaming push and bring publishers on board... but just nothing, at least with AAA games. They're content with cartoony content in Apple Arcade solely on mobile.
I always assumed the nature of the gaming workload on the hardware is why they never promote it. AAA games pegging the CPU/GPU at near max for long periods goes against what they optimise their machines for; I just think they don't want to promote that sort of stress on the system. On top of that, Apple takes themselves very seriously and sees gaming as beneath them.
Apple has one of the biggest, if not the biggest, gaming platforms in existence (the iPhone and iPad), but everyone seems to have a blind spot for that and disregards it. Sure, the Mac isn't a big gaming platform for them because their systems are mostly used professionally (assumption), but there too, the Mac represents only 1/10th of the sales they get from the iPhone, and that's only on the hardware.
Mac gaming is a nice-to-have; it's possible, there are tools, there's Steam for Mac, there are toolkits to port PC games to Mac, there's a games category in the Mac App Store, but it isn't a major point in their marketing / development.
But don't claim that Apple doesn't take gaming seriously; gaming for them is a market worth tens of billions, they're embroiled in a huge lawsuit with Epic about it, etc. Finally, AAA games get ported to mobile as well and once again earn hundreds of millions in revenue (e.g. CoD Mobile).
The iPhone gaming market is abusive and predatory, essentially a mass exploit on human psychology. Certainly not something to be proud of.
I feel like for myself at least, mobile gaming is more akin to casino gaming than video gaming. Sure, iOS has loads of gaming revenue but the games just ain't fun and are centred way too heavily on getting microtransactions out of people.
If you look at things like Game Porting Toolkit, Apple actually is investing resources here.
It just feels like they came along so late to really trying that it’s going to be a minute for things to actually happen.
I would love to buy the new Mac Mini and sit it under my TV as a mini console. But it just feels like we’re not quite there yet for that purpose, even though the horsepower is there.
Apple owns the second largest gaming platform by users and games, and first by profit: iPhone.
In terms of gaming that's only on PC and consoles, I didn't understand Apple's blasé attitude until I discovered this eye-opening fact: there are around 300 million PC and console gamers, and that number is NOT growing. It's stagnant.
Turns out Apple is uninterested by a stagnant market, and dedicates all its gaming effort where growth is: mobile.
> Is it just some weird cultural thing?
I think so. I think no one in apple management has ever played computer games for fun so they simply do not understand what customers would want.
> It has always baffled me why Apple doesn't take gaming seriously.
They aren't really the ones that have to.
But they are. They need to subsidize porting AAA games to solve the chicken-and-egg problem.
Gaming platforms don't just arise organically. They require partnership between platform and publishers, organized by the platform and with financial investment by the platform.
> They need to subsidize porting AAA games to solve the chicken-and-egg problem.
glances at the Steam Machine
And how long do they have to fail at that before trying a new approach?
Apple does take gaming seriously. They've built out comprehensive Graphics APIs and things like the GPTK to make migrating games to Apple's ecosystem actually not too bad for developers. The problem is that a lot of game devs just target Windows because every "serious" gamer has a windows PC. It's a chicken-and-egg problem that results from Apple always having a serious minority share of the desktop market. So historically Apple has focused on the segments of the market that they can more easily break into.
They do take gaming seriously, that's likely the bulk of their AppStore revenue after all.
They just don't care about desktop gaming, which is somewhat understandable. While the m-series chips have a GPU, it's about as performant for games as a dedicated GPU from 10-14 years ago (It only needs a fraction of the electricity though, but very few desktop gamers care about that).
The games you can play have to run at silly low resolution (fullHD at most) and rarely even reach 60fps.
> They do take gaming seriously
They do take gambling seriously.
I think they will get there in time. They like to focus on things and not spread themselves thin. They always wanted to get the gaming market share but AI is taking all their time now.
Given that a Mac mini with an M4 is basically the same size and shape as an Apple TV, they could make a new Apple TV that was a gaming console as well.
Why is the Apple TV only focused on passive entertainment?
Apple TV is a gaming console.
https://www.apple.com/apple-arcade/
I'm not sure how many chances they'll get to persuade developers that this time they really mean it. It sounds like Apple Arcade is a flop.
Is this one of those cases where "flop" means "this product would have a billion dollar market cap if it were a company, but since it's Apple, it's a flop"?
It is, isn't it.
https://forums.appleinsider.com/discussion/234969/apple-arca...
No they haven't. For years the best you could get was "meh" to terrible GPUs at high price points. Like $2000+ was where getting a discrete GPU began. The M series stuff finally allows the entry level to have decent GPUs but they have less storage out of the box than a $300 Xbox Series S. Apple's priorities just don't align well with gamers. They prioritize resolution over refresh rate and response time, make mice unusable for basically any FPS made in the past 20 years and way overcharge for storage and RAM.
Valve has/continues to do way more to make Linux a viable gaming platform than Apple will likely ever do for mac
I get it, you want to leave windows by way of mac. But your options are to either bite the bullet and expend a tiny bit of your professional skill on setting up a machine with linux, or stay on windows for the foreseeable future.
Well we're about to find out now that CDPR have announced Cyberpunk 2077 will get a native Metal port. I for one am extremely curious with the result. Apple have made very lofty claims about their GPU performance, but without any high-end games running natively, it's been hard to evaluate those claims.
That said, expectations should be kept at a realistic level. Even if the M4 has the fastest embedded GPU (it probably does), it's still an embedded GPU. They aren't going to be topping any absolute performance charts.
you can game on linux though. almost all games work just fine. (well, almost)
Was this verified independently? Because people can submit all sorts of results for Geekbench scores. Look at all these top scorers (most of which are obviously fake or overclocked chips): https://browser.geekbench.com/v6/cpu/singlecore
"in Geekbench 6."
"8 too long", says HN.
unfortunately I could only afford the M4 Pro model MBP lol
For how long? There are a lot of superlatives ("simply incredible" etc) - when some new AMD or Intel CPU beats this score, will that be "simply incredible" too?
New chips are slightly faster than previous ones; that doesn't strike me as incredible. Were it a 2x or 3x or 4x improvement or something, sure. But it ain't - it's incremental. I note how even Apple's marketing compares it to chips from 3 or 4 generations ago (e.g. against i7 performance from years ago, not against the M3 from a year or so ago, because then it's "only" 12% - still good, but not "simply incredible" in my eyes).
Why is it so hard for people to understand why Apple did that?
They want the people who are still clinging to Intel Macs to finally convert. And as for M1 comparisons, people are not changing laptops every year, and that is the cohort of M users most likely to upgrade. It's smart to do what Apple did.
I get that argument, but it comes across as hugely disingenuous to me, especially when couched in so much glitz and glamour and showmanship. Their aim is to present these things as huge quantum leaps in performance, and it's only when you look into the details that it becomes clear they're not, and that the figures have been fudged to look better than they are.
"New Car 2025 has a simply incredible top speed 30x greater than previous forms of transport!* (* - previous form of transport slow walk at 4mph)"
It's marketing bullshit, really, let's be honest. I don't accept that their entire highly polished marketing spiel and song and dance is aimed 100% at people who already have a 3- or 4-generation-old Mac. They're not spending all this time and money and effort just to try to get those people to upgrade. If you believe that, then you are in the distortion field.
No one in the industry uses Apple's marketing in any real sense. The marketing is not for you - its sole purpose is to sell more Macs to their target market.
That you are distracted by it is not Apple's problem - and most other industry players don't GAF about Apple's self-comparisons either.
shrug I just upgraded an M1-ultra studio to an M4-Max MBP. I'm not going to splash that much cash every year on an upgrade, and I don't think that's uncommon.
Just like the phone comparisons are from more than one year ago, the computer comparisons (which are even more expensive) make more sense to be from more than one year ago. I don't see why you wouldn't target the exact people you're trying to get to upgrade...
Yet you do not propose an alternative theory that makes sense.
Our point: Apple is laser-focused on comparing with laptops that are 4-5 years old. That's usually when Mac users start thinking about upgrading, and they're building their marketing for them. It causes issues when you try to compare directly with the last generation.
Your point: Apple shouldn't be glamorous and a good showman when marketing their products because they know the only true marketing is comparing directly with your very last chip. Any other type of marketing is bullshit.
The alternative theory is they are trumping up the numbers in a disingenuous way to make it sound better than it is.
But they're not "trumping" anything up (which makes it sound as if they're making it up). They're just:
- looking at who is likely to upgrade, and
- targeting their advertising at those people.
Seems eminently sensible to me.
> I note how even in the Apple marketing they compare it to generations 3 or 4 chips ago
Apple is just marketing to the biggest buyer group (2 generation upgrades) in their marketing material?
This isn’t like iPhones, where people buy them every 1-2 years (because they break or you lose them, etc.). Laptops have a longer shelf life; you usually run them into the ground over 2+ years and then begrudgingly upgrade.
The + is doing some heavy lifting there. I’m on a 2019 XPS running Fedora with niri. It doesn’t feel like it’s kicking the bucket any time soon.
And my 2019 Intel MBP is still working too. Use it every day.
The idea of a 6x (or whatever) performance jump is certainly tempting. Exactly as they intend it to be. If I was in charge of replacing it I would be far more likely to buy than if I had an M3.
They’re trying to entice likely buyers.
Incremental progress gonna increment.
We're on a perpetual upgrade treadmill. Even if the latest increment means an uncharacteristically good performance or longevity improvements... I can't bring myself to care.
There are a LOT of corporate Macs out there that are still on Intel.
The replacement cycle may just be that long. Or maybe they chose to stick with Intel, whether because that’s what they were used to or because they had specific software needs. Either way, they were still buying Intel Macs after Apple Silicon machines had been released.
Yeah it’s not a big deal for the enthusiast crowd. But for some of their customers it’s absolutely a consideration.
> Apple's M4 Max chip is the fastest single-core performer in consumer computing
Single-tasking OSs are long gone. Single-core performance is irrelevant in the world of multitasking/multithreading/preemptible threads.
There are lots of apps that only run in a single thread. If you want them to run fast, you need fast single-core performance.
Amdahl's law has not actually been overturned.
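If it's been a while since you ran the numbers, here's a minimal sketch of what Amdahl's law implies, where p is the fraction of the work that actually parallelizes:

    # Amdahl's law: overall speedup on n cores when a fraction p of the
    # work parallelizes; the serial remainder (1 - p) bounds everything.
    def amdahl_speedup(p: float, n: int) -> float:
        return 1.0 / ((1.0 - p) + p / n)

    if __name__ == "__main__":
        for p in (0.50, 0.90, 0.99):
            print(f"p={p:.2f}: 16 cores -> {amdahl_speedup(p, 16):.1f}x, "
                  f"1024 cores -> {amdahl_speedup(p, 1024):.1f}x")

At p=0.5 the jump from 16 to 1024 cores is roughly 1.9x to 2.0x, i.e. almost nothing, while any improvement to the serial part shows up in full. That's the case for fast single cores in a nutshell.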
If that were true, why isn't my GPU running my UI loop or running my JS event-loop?
Single-core performance is still king for UI latency and CPU-bound tasks.
[dupe] https://news.ycombinator.com/item?id=42014791
Different chips though, and different links. (Also, it’d be nice if we stopped linking directly to social media posts and instead used an intermediary that didn’t require access or accounts just to follow discussions here.)
whoops, Related:
Mac Mini with M4 Pro is the fastest Mac ever benchmarked
https://news.ycombinator.com/item?id=42014791