Here's my big question: are there datasheets/programmers manuals available or is this yet another proprietary mess of a SoC that ships undocumented Linux drivers with binary blobs? No thanks.
I will not spend money on hardware no one can reliably patch or write drivers for. I also want other operating system maintainers to be able to write drivers and get booting.
With them only merging upstream now, it'll be a while before you can actually use Linux on these devices. You can build your own kernel from upstream, but it's probably a better idea to wait until Arch or Gentoo package the necessary pre-configured kernels.
From what I can tell, the Elite SoCs are a lot less outdated-semi-proprietary-Linux-fork-y than many other Qualcomm chips.
That means nothing for the community, who may need or want to fix and patch issues on their own. Instead we're beholden to Qualcomm to fix major issues on an OS it may or may not care about supporting. It also excludes other open source operating systems such as the BSDs, which then have to reverse-engineer the undocumented Linux drivers.
A better question: can a small company like Framework or even MNT Research build and support an open laptop around this chip?
While not this chip, MNT Research has been working on a processor module for Qualcomm Dragonwing QCS6490 and is manufacturing the first wave of test PCBs now:
Framework doesn't even develop their own firmware; most of the engineering in PCs is done by Intel/AMD/ODMs/IBVs. The whole ecosystem is based on vendor support, not datasheets.
Firmware is not preventing Framework or anyone from offering a repairable laptop. Firmware also doesn't matter once the kernel is loaded. We need the datasheets.
I was under the impression that the firmware is responsible for loading the ACPI tables but the OS takes over and runs the code in its ACPI VM once running.
This article seems to be about Qualcomm adding the device trees for the X2 CPUs, which aren't the first gen.
As someone with a first gen, the device trees are, as I understand it, one of the issues with trying to just install any distro other than that special Ubuntu one.
I can't just (for example) grab the latest Fedora and try to run that.
As someone who has used the Snapdragon X Elite (12 core Oryon) Dev Kit as a daily driver for the past year, I find this exciting. The X Elite performance still blows my mind today - so the new X2 Elite with 18 cores is likely going to be even more impressive from a performance perspective!
I can't speak to the battery life, however, since it is dismal on my Dev Kit ;-)
Surface Pro 11 owner here. SQL Server won't install on ARM without hacks. Hyper-V does not support nested virtualization on ARM. Most games are broken with unplayable graphical glitches with Qualcomm video drivers, but fortunately not all. Most Windows recovery tools do not support ARM: no Media Creation Tool, no Installation Assistant, and recovery drives created on x64 machines aren't compatible [EDIT: see reply, I might be mistaken on this]. Creation of a recovery drive for a Snapdragon-based Surface (which you have to do from a working Snapdragon-based Surface) requires typing your serial code into a Microsoft website, then downloading a .zip of drivers that you manually overwrite onto the recovery media that Windows 11 creates for you.
Day-to-day, it's all fine, but I may be returning to x64 next time around. I'm not sure that I'm receiving an offsetting benefit for these downsides. Battery life isn't something that matters for me.
You ABSOLUTELY do not have to create a recovery drive from a Snapdragon based device. I've done it multiple times from x64 Windows for both a SPX and 11.
Hmm, thank you, that's good to know. Did you just apply the Snapdragon driver zip over the x64 recovery drive? It didn't work for me when my OS killed itself but I could easily have done something wrong in my panic over the machine not working. Since I only have the one Snapdragon device, I was making the assumption that it would have worked if I had a second one, but I didn't actually know that.
Thanks again for this. Honestly, it may sway my choice on returning to x64 vs. sticking with ARM64 next time. The other issues are relatively minor and can be dealt with, but I didn't like thinking that I was one OS failure away from a bricked machine that I couldn't recover.
>Creation of a recovery drive for a Snapdragon-based Surface (which you have to do from a working Snapdragon-based Surface) requires typing your serial code into a Microsoft website, then downloading a .zip of drivers that you manually overwrite onto the recovery media that Windows 11 creates for you.
That's just creation of a recovery drive for anything that Microsoft itself makes. It's the same process for the Intel Surface devices too.
>no Media Creation Tool
Why would anyone care about that? Most actively avoid Microsoft's media creation tool and use Rufus instead.
One reason is that Apple sold subsidized devkits to developers starting around 6 months before Apple Silicon launched, while the X Elite devkit was not subsidized, came with Windows 11 Home (meaning that you had to pay another $100 to upgrade to Pro if you were an actual professional developer who needed to join the computer to your work domain), and didn't ship until months after X Elite laptops started shipping. As a result, when the X Elite launched, basically everything had to run under emulation.
I think another reason is Apple's control over the platform vs Microsoft's. Apple has the ability to say "we're not going to make any more x86 computers, you're gonna have to port your software to ARM", while Microsoft doesn't have that ability. This means that Snapdragon has to compete against Intel/AMD on its own merits. A couple months after X Elite launched, Intel started shipping laptops with the Lunar Lake architecture. This low-power x86 architecture managed to beat X Elite on battery life and thermals without having to deal with x86 emulation or poor driver support. Of course it didn't solve Intel's problems (especially since it's fabricated at TSMC rather than by Intel), but it demonstrated that you could get comparable battery life without having to switch architectures, which took a lot of wind out of X Elite's sails.
Apple had a great translation layer (Rosetta) that allows you to run x64 code, and it's very fast. However, Apple being Apple, they are going to discontinue this feature in 2026; that's when we'll see some Apple users really struggling to go fully ARM, or just ditch their MacBooks. I know if Apple does follow through with killing Rosetta, I'll do the latter.
It's a transpiler that takes the x86-64 binary assembly and spits out the aarch64 assembly only on the first run AFAIK. This is then cached on storage for consecutive runs.
Apple silicon also has special hardware support for x86-64's "TSO" memory order (important for multithreaded code) and half-carry status flag.
BTW. A more common term for what Rosetta does is "binary translation". A "transpiler" typically compiles from one high-level language to another, never touching machine code.
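The translate-once-then-cache pattern described above can be sketched in a few lines. This is a toy illustration, not Rosetta's actual design: the "translation" is a stand-in function and all names are invented; the real translator consumes and emits machine code.

```python
import hashlib

# Toy sketch of translate-once, cache-forever binary translation.
# Real translators key the cache on the binary's content and persist
# it to disk so later launches skip translation entirely.

class TranslationCache:
    def __init__(self):
        self._cache = {}        # digest -> translated artifact
        self.translations = 0   # how many times we actually translated

    def _digest(self, binary: bytes) -> str:
        return hashlib.sha256(binary).hexdigest()

    def run(self, binary: bytes) -> str:
        key = self._digest(binary)
        if key not in self._cache:              # first run: translate
            self._cache[key] = self._translate(binary)
            self.translations += 1
        return self._cache[key]                 # later runs: cached

    def _translate(self, binary: bytes) -> str:
        # Placeholder for x86-64 -> aarch64 code generation.
        return f"aarch64-code-for-{self._digest(binary)[:8]}"

cache = TranslationCache()
cache.run(b"\x48\x89\xd8")   # first launch: translates
cache.run(b"\x48\x89\xd8")   # second launch: served from cache
assert cache.translations == 1
```

The point of the cache is that the (expensive) translation cost is paid once per binary, which is why first launches of x86 apps under Rosetta feel slower than subsequent ones.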
Did it? From that list: SQL Server doesn't work on Mac and there's no Apple equivalent; virtualisation is built into the system, so that kind of worked but with restrictions; games barely exist on Mac, so the few that cared did the ports, but it's still minimal. There's basically no installation media for Macs in the same way as Windows in general.
What I'm trying to say is - the scope is very different / smaller there. There's a tonne of things that didn't work on Macs both before and after and the migration was not that perfect either.
Out of the gate, Apple silicon lacked nested virtualization, too. They added it in the M3 chip and macOS 15. Macs have different needs than Windows though; I think it's less of a big deal there. On Windows we need it for running WSL2 inside a VM.
I'd guess the M3 features aren't required for nested virtualization, and it was more of a sw design decision to only add the support when some helpful hardware features were shipped too. Eg here's nested virtualization support for ARM on Linux in 2017: https://lwn.net/Articles/728193/
Nested virt does need hardware support to implement efficiently and securely. The Apple chips added that over time, eg M2 actually had somewhat workable support but still incomplete and hacky https://lwn.net/Articles/928426/ - the GIC (interrupt controller) was a mess to virtualise in older versions, which is different from the instruction set of the CPU.
On Windows, nested virtualization already existed before WSL: all the kernel and device driver security features introduced in Windows 10, and made always-on in Windows 11, require running Hyper-V, which is a type 1 hypervisor.
So it is rather common to have to deal with nested virtualization, even for those of us who seldom use WSL.
Yes, nested virtualization has existed for a long time... on Intel. On Windows, it is not supported on ARM. For a long time it wasn't even supported on AMD! They added AMD nested virtualization support in Windows Server 2022!
Note that when the Windows host is invisibly running under Hyper-V, your other Hyper-V VMs are its "siblings" and not nested children. You're not using nested virtualization in that situation. It's only when running a Hyper-V VM inside another Hyper-V VM. WSL2 is a Hyper-V VM, so if you want to run WSL2 inside a Windows Hyper-V VM which is inside your Windows host, it ends up needing to nest.
Nested virtualization is not required for WSL2 or Hyper-V VMs. It's only required if you want to run VMs from within WSL2 (Windows 11 only) or Hyper-V VMs within Hyper-V VMs.
Yeah, I understand this and said it correctly in my post. We need nested virtualization to run WSL2 inside a VM: this is a Linux VM inside a Windows VM inside a Windows host. WSL2 is already a VM, so if you want to run that inside a VM, it requires nested virtualization. Nested virtualization is one of those features that people don't know about unless they need it, and they find out for the first time when they get an error message from Hyper-V. If you have a development VM on a system without nested virtualization, you're stuck with WSL1 inside that VM, or using a "sibling" Linux VM that you set up manually (the latter was my actual solution to this issue).
Actually, the Mach-O file format was multi-arch by design (on Windows we're still stuck with Program Files (x86)).
Anyway, before dropping 32bit, they've dropped PowerPC.
Another consideration: Apple is the king of dylibs; you're usually dynamically linking to the OS frameworks/libs, so they could plan their glue smarter and keep the frameworks working in the native arch.
(that was really important with PPC->Intel where you also had big endian...)
Having a narrow product line helped Apple a lot. Similarly being able to deprecate things faster than business-oriented Microsoft. Apple also controls silicon implementation. So they could design hardware features that enabled low to zero overhead x86 emulation. All in all Rosetta 2 was a pretty good implementation.
Microsoft is trying to retain binary compatibility across architectures with ARM64EC stuff which is intriguing and horrifying. They, however, didn't put any effort into ensuring Qualcomm is implementing the hardware side well. Unlike Apple, Qualcomm has no experience in making good desktop systems and it shows.
I didn't say otherwise. They probably realized they can pull a complete desktop CPU design off at the latest with iPad, probably earlier. They were probably not happy using Intel chips and their business strategy has always been controlling and limiting HW capabilities as much as possible.
Linux is different. Decades of being tied to x86 made the OS way more coupled with the processor family than one might think.
Decades of bugfixes, optimizations and workarounds were made assuming a standard BIOS and ACPI standards.
Especially on the desktop side.
That, and the fact that SoC vendors are decades behind on driver quality. They remind me of the NDiswrapper era.
Also, a personal theory I have is that people have unfair expectations of ARM Linux. Back then, when x86 Linux had similar compatibility problems, there was nothing to compare it with, so people just accepted that Linux was going to be a pain and that was it.
Now the bar is higher. People expect Linux to work the way it does in x86, in 2025.
Linux runs perfectly on MIPS, Power, SPARC, obviously ARM (cue the millions of phones running Linux today), RISC-V, and at least a dozen other architectures with few to no users. It's absolutely not tied to x86.
This doesn't pass the smell test when Linux powers so many smart or integrated devices and IoT on architectures like ARM, MIPS, Xtensa, and has done so for decades.
I didn't even count Android here which is Linux kernel as first class citizen on billions of mostly ARM-based phones.
Apple already went through this before with PowerPC -> x86. They had universal binaries, Rosetta, etc. to build off of. And they got to do it with their own hardware, which includes some special instructions intended to help with emulation.
> Apple already went through this before with PowerPC -> x86
Not to mention 68K -> PowerPC.
Rhapsody supported x86, and I think during the PowerPC era Apple kept creating x86 builds of OS X just in case. This may have helped to keep things like byte order dependencies from creeping in.
Every Mac transitioned to ARM, while only a very small share of Windows PCs run ARM. So right now there's not a large user base to incentivise software to be written for it.
You are right that Windows on ARM cannot be called a success. But if you make Windows/macOS cross platform software then your software needs to be written for ARM anyway.
So if you support macOS/x86, macOS/ARM, and Windows/x86, then the additional work to add Windows/ARM is rather small, unless you do low-level stuff (I remember the Fortnite WoA port taking almost a year from announcement to release due to anticheat).
I have a similar Windows Arm64 machine (Lenovo "IdeaPad 5 Slim"), RDP into it works OK.
There is one issue I ran into that I haven't on my (self-built) Windows desktops: when Windows Hello (fingerprint lock) is enabled, and neither machine is on a Windows domain, the RDP client will just refuse to authenticate.
On the bright side, there's a good chance that Windows on ARM is not well supported by malware. There's a situation where you benefit from things being broken.
Yeah, I too was surprised to find the dev experience very good: all JetBrains IDEs work well, Visual Studio appears to work fine, and most language toolchains seem well supported.
Have I had any app compatibility issues?
To quote Hamlet, Act 3, Scene 3, Line 87: "No."
The Prism binary emulation for x86 apps that don't have an ARM equivalent has been stellar with near-native performance (better than Rosetta in macOS). And I've tried some really obscure stuff!
I suspect that's due to the GPU and not Prism, because they basically just took a mobile GPU and stuffed it into a laptop chip. Generally, performance seems to be on par with whatever a typical flagship Android device can do.
Desktop games that have mobile ports generally seem to run well, emulation is pretty solid too (e.g. Dolphin). Warcraft III runs OK-ish.
The GPUs don't go toe-to-toe with current-gen desktop GPUs, but they should be significantly better than the GTX 650, a mid-range desktop GPU from 2012 that the game (2019) lists as recommended. It does sound like something odd is going on beyond just lack of hardware.
That something odd is called GPU drivers. Even Intel struggled to get games running on their iGPUs (they recently announced that they are dropping driver development for all GPUs older than Alchemist).
Ironically, the app I've had the most trouble with is Visual Studio 2022. Since it has a native ARM64 build and installation of the x64 version is blocked, there are a bunch of IDE extensions that are unavailable.
Today Qualcomm's CEO stated[0] that the combination of Android and ChromeOS, i.e. Android Computers, will be available on Snapdragon laptops. Maybe these X2 CPUs will be in those laptops.
For people complaining about battery control and android emulation on linux, ChromeOS is a boon.
You effectively get an actual Linux distro + most of Android, with a side of Chrome. It's way closer to "a real computer" than an iPad, for instance, and only loses to the Surface Pro/Z13 line in terms of versatility IMHO.
It really wasn't bad; my only deal breakers were keyboard remapping being nonexistent and the Bluetooth stack being flaky.
I got a ChromeOS device a few years ago and it was great. I think they get an undeserved bad reputation from being the locked-down devices you're forced to use in schools, but a personal ChromeOS device is a capable computer that can run any Android app or desktop Linux app.
Though having said that, in the past year I've replaced ChromeOS with desktop Linux (postmarketOS) and I love it even more now. 4GB of RAM was a bit slim for running everything in micro-VMs for "security," which is what ChromeOS does. I've had no trouble with battery life or Android emulation (Waydroid) since switching.
I've used VS Code on ChromeOS with the GPU acceleration flag for many, many years without any issues on a couple different devices (x64 and more recently, arm64). It can even hide the window chrome so looks 1:1 with VS Code on any other platform. And many other GUI Linux apps where the Android version feels too much like a toy in comparison, it's an incredibly versatile feature for dev work.
Sorry, but "CLI stuff" is not "as far as it goes" with desktop Linux apps on ChromeOS. ChromeOS provides Wayland and PulseAudio servers to the apps as well so GUI and audio works too. It even synchronises file associations and installs a ChromeOS-like GTK theme into the container. The Linux GUI apps I had installed back when I used it felt completely native.
It worked on my device. The page you linked looks very outdated and doesn't have my device's board or any device made in the past 5 years. The lists of unsupported devices also look pretty reasonable - old kernels, CPUs that don't support virtualisation and 32-bit ARM. Since modern ChromeOS uses the same virtualisation to run Android apps, I doubt there's a modern device where it doesn't work.
If you look at the verified hardware list for ChromeOS Flex[0], you can get an idea of what ChromeOS devices are being deployed for. Apart from education and companies that use Google Workspace, there's a lot of ChromeOS devices deployed as kiosks and call center computers. This is reflected not only in obscure documentation, but also in the marketing material[1].
The "enterprise" managability and reduced attack surface is driving Google to jack up Chromebook prices. The "Chromebook Plus" models are nearing the same price as a midrange Dell Inspiron, HP OmniBook, or Lenovo IdeaPad. You may have also noticed M4 MacBook Airs can be bought for the price of an iPhone 17, and I suspect that's partially a response from Apple to the Chromebook price increases. Buying a $600 Chromebook might have been sane for someone tired of Microsoft and not interested in a $1000 Macbook Air, but in 2025, with the Macbook Air prices going down significantly[2], Chromebooks are not as appealing to regular consumers (different story for businesses).
We’ve been using X Elite Snapdragon laptops (Thinkpad T14s and Yoga Slim running Ubuntu’s concept images) to build large amounts of ARM software without the need for cross-compiling. The hardware peripheral support isn’t 100% yet (good enough) but I’ve been impressed with the performance.
ARM seems to be popular in the server space and it’s nice to see it trickling down to the PC market.
To be fair, they did say "PC" specifically. It's not uncommon to consider that a category that doesn't include Apple (e.g. the "I'm a Mac" "I'm a PC" ads from years ago)
People just need to quit using the term "PC" to refer to desktop or laptop hardware that happens to be running Windows. Laptops running MacOS are "personal computers," as are desktops running Linux, or effing phones running Android, for that matter.
I think the issue is that there's clearly a need for a term for the category of "things that usually run Windows (but you can also probably put Linux on it, and even one of the BSDs if you're feeling adventurous)". "PC" isn't a great one from a linguistic perspective, but there's no alternative I've heard that seems likely to catch on. There should probably also be a better term for "laptop/desktop", since as you mention "computer" itself is not really narrow enough if you're being pedantic. At the end of the day, though, the only real differentiator we have is context, and in the context here it was honestly more clear what was meant by "PC" in the top-level comment than it was whether the person responding actually misunderstood or was trying to make a point.
Did qualcomm ever get its act together with firmware/drivers and linux? It's a dead end to me if they aren't at an Intel/AMD level of openness on this front.
Does anybody know if the X2 supports the x86 Total Store Ordering (TSO) memory model? That's how Apple Silicon does such efficient emulation of x86. I'd think that would be even MORE important for a Windows ARM64 laptop, where there is so much more legacy x86 software going back decades.
Does anyone have benchmarks for Rosetta with TSO vs the Linux version with no-TSO? I guess it might be a bit challenging to achieve apples to apples, although you could run a test benchmark on OSX and then Asahi on the same hardware, I think?
I've always been curious about just how much Rosetta magic is the implementation and how much is TSO; Prism in Windows 24H2 is also no slouch. If the recompiler is decent at tracing data dependencies it might not have to fence that much on a lot of workloads even without hardware TSO.
People who have worked on the Windows x64 emulator claim that TSO isn't as much of a deal as claimed, other factors like enhanced hardware flag conversion support and function call optimizations play a significant role too:
> People who have worked on the Windows x64 emulator claim that TSO isn't as much of a deal as claimed
This is a misinterpretation of what the author wrote! There is a real and significant performance impact in emulating x86 TSO semantics on non-TSO hardware. What the author argues is that enabling TSO process-wide (like macOS does with Rosetta) resolves this impact but it carries counteracting overhead in non-emulated code (such as the emulator itself or in ARM64EC).
The claimed conclusion is that it's better to optimize TSO emulation itself rather than bruteforce it on the hardware level. The way Microsoft achieved this is by having their compiler generate metadata about code that requires TSO and by using ARM64EC, which forwards any API calls to x86 system libraries to native ARM64 builds of the same libraries. Note how the latter in particular will shift the balance in favor of software-based TSO emulation since a hardware-based feature would slow down the native system libraries.
Without ecosystem control, this isn't feasible to implement in other x86 emulators. We have a library forwarding feature in FEX, but adding libraries is much more involved (and hence currently limited to OpenGL and Vulkan). We're also working on detecting code that needs TSO using heuristics, but even that will only ever get us so far. FEX is mainly used for gaming though, where we have a ton of x86 code that may require TSO (e.g. mono/Unity) but wouldn't be handled by ARM64EC, so the balance may be in favor of hardware TSO either way here.
For reference, this is the paragraph (I think) you were referring to:
> Another common misconception about Rosetta is that it is fast because the hardware enforces Intel memory ordering, something called Total Store Ordering. I will make the argument that TSO is the last thing you want, since I know from experience the emulator has to access its own private memory and none of those memory accesses needs to be ordered. In my opinion, TSO is a red herring that isn't really improving performance, but it sounds nice on paper.
How is it a misinterpretation? To re-quote that last sentence:
> In my opinion, TSO is a red herring that isn't really improving performance, but it sounds nice on paper.
That's the author directly saying that TSO isn't the major emulation performance gain that people think it is. You're correct that there are countering effects between TSO's benefits to the emulated code vs. the negative effects on the emulator and other non-emulated code in the same process that are fine running non-TSO, but to users, this distinction doesn't matter. All that matters is the performance of emulated program as a whole.
As for the volatile metadata, you're correct that MSVC inserts additional data to aid the emulation. What's not so great is that:
- It was basically an almost undocumented, silent addition to MSVC.
- In some cases, it will slow down the generated x64 code slightly by adding NOPs where necessary to disambiguate the volatile access metadata.
- It only affects code statically compiled with a recent version of MSVC (late VS2019 or later). It doesn't help executables compiled with non-MSVC compilers like Clang, nor any JIT code, nor is there any documentation indicating how to support either of these cases.
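The role of that metadata can be sketched as a lowering decision the emulator makes per memory access. This is a hypothetical toy, not Prism's implementation: the instruction format and function names are invented, and real emulators work on machine code, but the trade-off is the same one discussed above (barriers only where flagged vs. conservatively everywhere).

```python
# Hypothetical sketch: an x86-on-ARM emulator consulting per-access
# metadata (in the spirit of MSVC's volatile metadata) to decide which
# emulated accesses need x86 ordering. On ARM64, ordered accesses can
# use acquire loads (ldar) / release stores (stlr); the rest can use
# plain ldr/str, which are cheaper.

def emit_arm64(instrs, volatile_offsets, have_metadata=True):
    """Lower toy x86 memory ops (offset, 'load'|'store') to toy ARM64 ops.

    Without metadata, every access must be conservatively ordered --
    the slow fallback path for binaries from other compilers or JITs.
    """
    out = []
    for offset, op in instrs:
        needs_order = (not have_metadata) or (offset in volatile_offsets)
        if op == "load":
            out.append("ldar" if needs_order else "ldr")
        elif op == "store":
            out.append("stlr" if needs_order else "str")
    return out

instrs = [(0, "load"), (4, "store"), (8, "load")]
# Metadata flags only the store at offset 4 as needing x86 ordering.
fast = emit_arm64(instrs, volatile_offsets={4})
slow = emit_arm64(instrs, volatile_offsets=set(), have_metadata=False)
assert fast == ["ldr", "stlr", "ldr"]    # barriers only where flagged
assert slow == ["ldar", "stlr", "ldar"]  # everything ordered
```

The `slow` path is what non-MSVC and JIT-generated code falls back to, which is why the lack of documentation for those cases matters.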
> How is it a misinterpretation? To re-quote that last sentence:
I think we agree in our understanding, but condensing it down to "TSO isn't as much of a deal as claimed" is misleading:
* Efficient TSO emulation is crucial (both on Windows and elsewhere)
* The blog claims hardware TSO is non-ideal on Windows only (because Microsoft adapted the ecosystem to facilitate software-based TSO emulation). (Even then, it's unclear if the author quantified the concrete impact)
* Hardware TSO is still of tremendous value on systems that don't have ecosystem support
> [volatile metadata] doesn't help executables compiled with non-MSVC compilers like Clang, nor any JIT code, nor is there any documentation indicating how to support either of these cases.
That's funny, I hadn't considered third party compilers. Those applications would still benefit from ARM64EC (i.e. native system libraries), but the actual application code would be affected quite badly by the TSO impact then, depending on how good their fallback heuristics are. (Same for older titles that were compiled before volatile metadata was added)
Following up that last part -- I recompiled my x64 codebase with /volatileMetadata-, which reduced the volatile metadata by ~20K (the remainder most likely from the statically linked CRT). The profiling results were negligible, under noise level between the builds and both about 15-30% below the native ARM64 build.
The interesting part is when the compatibility settings for the executables are modified to change the default multi-core setting from Fast to Strict Multi-Core Operation. In that mode, the build without volatile metadata runs about 20% slower than the default build. That indicates that the x64 emulator may be taking some liberties with memory ordering by default. Note that while this application is multithreaded, the worker threads do little and it is very highly single thread bottlenecked.
20% is about the general order of magnitude we observed in FEX a while ago, though if you enable all TSO compatibility settings (including those rarely needed) it'll be much higher. As people elsewhere in the thread mentioned, it'd be interesting to see how FEX fares on Asahi with hardware TSO enabled vs disabled (but with conservative TSO emulation as set up by default), since it's less of a black box.
> Efficient TSO emulation is crucial (both on Windows and elsewhere)
Yes, but this is not in contention...? No one is disputing that TSO semantics in the emulated x86 code need to be preserved and that it needs to be done fast, we're talking about the tradeoffs of also having TSO support on the host platform.
> The blog claims hardware TSO is non-ideal on Windows only (because Microsoft adapted the ecosystem to facilitate software-based TSO emulation). (Even then, it's unclear if the author quantified the concrete impact)
> Hardware TSO is still of tremendous value on systems that don't have ecosystem support
That isn't what the author said. From the article:
> Another common misconception about Rosetta is that it is fast because the hardware enforces Intel memory ordering, something called Total Store Ordering. I will make the argument that TSO is the last thing you want, since I know from experience the emulator has to access its own private memory and none of those memory accesses needs to be ordered. In my opinion, TSO is a red herring that isn't really improving performance, but it sounds nice on paper.
That is a direct statement on Rosetta/macOS and does not mention Prism/Windows. How correct that assessment may be is another matter, but it is not talking about Windows only.
> Those applications would still benefit from ARM64EC (i.e. native system libraries), but the actual application code would be affected quite badly by the TSO impact then, depending on how good their fallback heuristics are.
I will have to check this, I don't think it's that bad. JITted programs run much, much better on my Snapdragon X device than the older Snapdragon 835, but there are a lot of variables there (CPU much faster/wider, Windows 11 Prism vs. Windows 10 emulator, x86 vs x64 emulation). I have a program with native x64/ARM64 builds that runs at -25% speed in emulated x64 vs native ARM64, I'm curious myself to see how it runs with volatile metadata disabled.
Really old software tends not to make good use of multiple cores anyway, so you can simply emulate a single core and get total store ordering for free.
Anything modern and popular and you can probably get it recompiled to ARM64
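The single-core trick above can be sketched with CPU affinity: if the emulated process only ever runs on one core, its stores can never be observed out of order by another thread. This is a Linux-only sketch (`sched_setaffinity`); on Windows the rough equivalent would be `SetProcessAffinityMask`.

```python
import os

# Pin the current process to one CPU. With no true parallelism, x86
# store ordering is trivially preserved by the emulator, at the cost
# of all multicore performance.

def pin_to_single_core(pid=0):
    cpus = os.sched_getaffinity(pid)   # CPUs we're currently allowed on
    target = min(cpus)                 # pick any one of them
    os.sched_setaffinity(pid, {target})
    return target

core = pin_to_single_core()
assert os.sched_getaffinity(0) == {core}
```

This is essentially the degenerate case of TSO emulation: correctness via serialization, which is why it's only attractive for old, single-thread-bound software.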
Unfortunately games are the most common demanding multithread applications. Studios throw a binary over the fence and then get dissolved. Seems to be the way the entire industry operates.
Maybe more ISA diversity will incentivize publishers to improve long-term software support but I have little hope.
Their top model still only has "Up to 228 GB/s" bandwidth, which places it in the low-end category for anything AI-related. For comparison, Apple Silicon goes up to 800 GB/s and Nvidia cards around 1800 GB/s, and there's no word on whether it supports 256-512 GB of memory.
> Their top model still only has "Up to 228 GB/s" bandwidth, which places it in the low-end category for anything AI-related. For comparison, Apple Silicon goes up to 800 GB/s
Most Apple Silicon is much less than 800 GB/s.
The base M4 is only 120GB/s and the next step up M4 Pro is 273GB/s. That’s in the same range as this part.
It’s not until you step up to the high end M4 Max parts that Apple’s memory bandwidth starts to diverge.
For the target market with long battery life as a high priority target, this memory bandwidth is reasonable. Buying one of these as a local LLM machine isn’t a good idea.
This, and always check benchmarks instead of assuming memory bandwidth is the only possible bottleneck. Apple Silicon definitely does not fully use its advertised memory bandwidth when running LLMs.
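For a back-of-envelope sense of why bandwidth dominates these comparisons: in the decode phase, generating each token requires streaming roughly the whole set of weights through memory once, so tokens/sec is bounded by bandwidth divided by model size. The 50% efficiency factor below is an assumption for illustration; as noted above, real numbers come from benchmarks.

```python
# Rough decode-speed ceiling for a memory-bound LLM:
#   tokens/sec <= bandwidth / model_size, scaled by an assumed
#   efficiency factor (real chips don't hit peak bandwidth).

def rough_tokens_per_sec(bandwidth_gb_s, model_size_gb, efficiency=0.5):
    return bandwidth_gb_s * efficiency / model_size_gb

x2_elite = rough_tokens_per_sec(228, 8)   # 8 GB of weights (e.g. 8B @ 8-bit)
m4_max   = rough_tokens_per_sec(546, 8)   # M4 Max-class bandwidth
assert x2_elite == 14.25                  # ~14 tok/s ceiling
assert m4_max > x2_elite
```

Even this crude model shows why 228 GB/s reads as "low end" for local LLM use while still being perfectly reasonable for the laptop workloads these chips target.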
The base model X2 Elite has memory bandwidth of 152 GB/s. M4 Pro is a modest win against the Extreme as mentioned, and Qualcomm has no M4 Max competitor that I'm aware of.
I think the pure hardware specs compare reasonably against AS, aside from the lack of a Max of course. Apple's vertical integration and power efficiency make their product much more compelling though, at least to me. (Qualcomm, call me when the Linux support is good.)
Yet the apps top the App Store charts. Considering that these are not upgradable I think the specs are relevant. Just as I thought Apple shipping systems with 8 GB minimums was not good future proofing.
These all have nightmarish support. They're not a big deal for Qualcomm so the driver support is garbage. And you're stuck on their kernel like one of those Raspberry Pi knock offs. It's just really hard to take them seriously.
> And you're stuck on their kernel like one of those Raspberry Pi knock offs. It's just really hard to take them seriously.
Qualcomm has been mainlining Snapdragon X drivers to the 6.x kernel tree for over a year now. There have been multiple frontpage HN posts about this in the past 12 months.
Webcam/mic/speaker support may be a WIP depending on your model, but Snapdragon X Elite has been booting Linux for months now, using only drivers in Linus' tree. The budget chips (Snapdragon X Plus) have far less direct support from Qualcomm, but some independent hackers have put in heroic effort to make those run Linux too.
If you're willing to go back a few generations, Asahi Linux supports the Mac Mini (M1, M2 and M2 Pro). Support is missing for USB-C displays (it has HDMI) and Thunderbolt, but other than that you can have an awesome experience on these (and probably get yourself a good deal these days).
If Snapdragon (or ARM players in general) wanted to challenge x86 and Apple dominance, do they need to compete in the exact same arena? Could they carve out a niche (example: ultra-efficient always-on machines) and then expand?
Exactly! That makes this move all the more interesting. The smartphone SoC market is saturated, and margins are shrinking. Laptops/PCs give Qualcomm a chance to leverage its IP in a higher-ASP segment. Expanding is logical, but the competitive bar is way higher.
“ARM chip” is a pretty broad umbrella. Apple’s M-series is based on the ARM ISA, the microarchitecture is Apple’s own design, and the SoCs are built with very different cache hierarchies, memory bandwidth, and custom accelerators. I was simply using Apple as an example of another big player.
“Multi-day” battery life sounds wild! That’s probably the biggest thing for users. It would be good for Apple to get some competition because their M-chips seemed so far away from everything else.
Still, even if someone uses it for two hours a day and then just closes it being able to run for multiple days without charging the way Macs can is fantastic.
I agree it seems incredibly unlikely that you’re doing multiple days of eight hours of work without charging.
Longer is always better, so if it’s true at all great for them.
Any battery life claim needs to be aligned with the consumer-class operating system and application layer (iOS, Android, etc). Multi-day battery life on a non-Google-Pixel Android device with typical usage would be interesting.
FOSS support for Windows ARM has been hampered by Github (owned by MS) not supporting free Windows ARM runners. They may be finally getting their act together but are years late to the game.
AFAIK Windows on ARM is completely pushed by Microsoft (obviously they're limited by their own competence) and Qualcomm has been kind of phoning it in.
I trust MS in this. NT has been multi-arch since day one. x86 wasn’t even the original lead architecture.
They also know the score. Intel is not in a good place, and Apple has been showing them up in lower power segments like laptops, which happen to be the #1 non-server segment by far.
They don’t want to risk getting stuck the way Apple did three times (68k, PPC, Intel) where someone else was limiting their sales.
So they’re laying groundwork. If it’s a backup plan, they’re ready. If ARM takes off and x86 keeps going well, they’re even better off.
When laptop OEMs stop catering to the lowest common denominator corporate IT purchasers (departments which don't care about screen quality, speaker quality, or much of anything else outside of does the spec sheet on paper match our requirements and is it cheap).
I have a Yoga Slim 7x, which has the ARM chip. Screen quality is fantastic along with build quality, touchpad and keyboard feel :shrug:
It really depends on what Laptop line you buy. Dells have overwhelmingly become garbage, right next to HP.
Speaker quality on a laptop, otoh? Couldn't care less. I use headphones/earbuds 99% of the time, because if I'm on a portable computer, I'm traveling and I don't want to be an inconsiderate arse.
The Yoga Slim 7x is a rather unique outlier. I was on the market for a non-Mac laptop a little while ago, and it was literally the only one that met my standards.
> departments which don't care about screen quality, speaker quality, or much of anything else outside of does the spec sheet on paper match our requirements and is it cheap)
Translation: departments which don't care about worker's wellbeing.
Looking at the SOCs used, only Dell, Microsoft, and Samsung used the 2nd fastest SoC, the X1E-80-100 - the Dell and Microsoft laptops could be configured with 64GB soldered.
Samsung also used the fastest SoC (the only OEM to do so), the X1E-84-100. From a search of their USA website, you're stuck with only 16GB on any of their Snapdragon laptops. :(
I'd hope whichever OEM(s) uses the Snapdragon X2 Elite Extreme SoC (X2E-96-100) allows users to configure RAM up to 64GB or 128GB.
I'm not holding my breath though. I have a Samsung Edge 4 laptop and I didn't find the battery life impressive - probably got around 6 hours under coding/programming tasks. GPU performance is terrible too.
I feel like I'm constantly charger-tending all my non-Apple silicon laptops.
M-series instant wake from sleep is also years ahead of the Windows wakeup roulette, so even if this new processor helps with time away from chargers... we still have the Windows sleep/hibernate experience.
You can probably pretty easily just say Prime==Performance and Performance==Efficiency, but I think the "Prime" branding is kind of a carry over from Snapdragon mobile chips where they commonly use three tiers of core designs rather than the two. They still want to advertise the tier 2 cores as fast so T3 is efficiency, T2 is performance, T1 is Prime.
As an example, the Snapdragon 700-series had Prime, Gold, and Silver branding on its cores.
I’m a huge Framework fan: preordered the 13 and Desktop, have done mainboard + LCD upgrades on personal and work machines, etc. Likewise, I’ve used ARM machines as general-purpose Linux workstations, starting with the PineBook Pro up to my current Radxa Orion. It seems like a great combo!
Unfortunately, firmware and OS support are hard for any vendor, especially one as small (compared to, say, Lenovo or HP) and fast-moving as Framework. Spreading that to yet another ISA and driver ecosystem seems like it would drag down quality and pace of updates on every other system, which IMHO would be a bad trade.
I wonder how much the intended audience for these chips cares about "elite" and "ultra-premium" buzzwords. I'm sure it's a good chip but cmon, it's not for TikTok watching..
Linux support is still basically non-existent for the first gen, and they made a big deal about supporting Linux and the open source community. Which is to say: don't trust them.
That'd definitely fit the Qualcomm pattern of trying to force you to update by not upstreaming their Linux drivers.
This is one place where Windows has an advantage over Linux. Windows' long-term support for device drivers is generally really good. A driver written for Vista is likely to run on 11.
Old situation: "Android drivers" are technically Linux drivers in that they are drivers which are built for a specific, usually ancient, version of Linux with no effort to upstream, minimal effort to rebase against newer kernels, and such poor quality that there's a reason they're not upstreamed.
New situation: "Android drivers" are largely moved to userspace, which does have the benefit of allowing Google to give them a stable ABI so they might work against newer kernels with little to no porting effort. But now they're not really Linux drivers.
In neither case does it really help as much as you'd hope.
Not surprising considering I haven't seen a programming manual or actual datasheet for these things in the first place. Usually helps if you tell the community how to interact with your hardware ..
Not even true: Arm, Intel, AMD, and most other hardware vendors (who are actively making an effort to support Linux on their parts) actually publish useful[^1] documentation.
edit: Also, not knocking the Qualcomm folks working on Linux here, just observing that the lack of hardware documentation doesn't exactly help reeling in contributors.
[^1]: Maybe in some cases not as useful as it could be when bringing up some OS on hardware, but certainly better than nothing
The Snapdragon X2 Elite Extreme (X2E-96-100) SoC supports "128GB+", but Qualcomm hasn't specified what the max limit is. This SoC also has higher memory bandwidth (228 GB/s over a 192-bit bus) than the X2 Elite.
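As a sanity check, the quoted bandwidth follows directly from bus width times transfer rate. The LPDDR5X speed grade below is an assumption inferred from the headline 228 GB/s figure, not an official spec:

```python
# Peak DRAM bandwidth = bus width (in bytes) x transfer rate (in MT/s).
# The 9523 MT/s data rate (LPDDR5X-9523) is inferred from the quoted
# "228 GB/s over a 192-bit bus" -- an assumption, not an official figure.
def peak_bandwidth_gbs(bus_width_bits: int, mega_transfers: int) -> float:
    return bus_width_bits / 8 * mega_transfers / 1000

x2_extreme = peak_bandwidth_gbs(192, 9523)  # ~228.6 GB/s, matches the quoted spec
x2_elite   = peak_bandwidth_gbs(128, 9523)  # ~152.4 GB/s, consistent with the
                                            # 152 GB/s base-model figure if it
                                            # uses a 128-bit bus (also a guess)
```

The same arithmetic reproduces Apple's tiers (128-bit for M4, 256-bit for M4 Pro), which is why the base parts from both vendors land in the same range.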
Here's my big question: are there datasheets/programmers manuals available or is this yet another proprietary mess of a SoC that ships undocumented Linux drivers with binary blobs? No thanks.
I will not spend money on hardware no one can reliably patch or write drivers for. I also want other operating system maintainers to be able to write drivers and get booting.
I haven't dug deep, but it looks like Qualcomm has been working on merging code into the Linux kernel: https://www.phoronix.com/news/Qualcomm-X2-Elite-Linux-8EG5
With them only merging upstream now, it'll be a while before you can actually use Linux on these devices. You can build your own kernel from upstream, but it's probably a better idea to wait until Arch or Gentoo package the necessary pre-configured kernels.
From what I can tell, the Elite SoCs are a lot less outdated-semi-proprietary-Linux-fork-y than many other Qualcomm chips.
That means nothing for the community who may need or want to fix and patch issues on their own. Instead we're beholden to Qualcomm to fix major issues on an OS it may or may not care about supporting. It also excludes other open source operating systems such as the BSD's who have to then reverse engineer the undocumented Linux drivers.
A better question: can a small company like Framework or even MNT Research build and support an open laptop around this chip?
While not this chip, MNT Research has been working on a processor module for Qualcomm Dragonwing QCS6490 and is manufacturing the first wave of test PCBs now:
https://source.mnt.re/reform/reform-qcs6490
Framework doesn't even develop their own firmware; most of the engineering in PCs is done by Intel/AMD/ODMs/IBVs. The whole ecosystem is based on vendor support, not datasheets.
Firmware is not preventing Framework or anyone from offering a repairable laptop. Firmware also doesn't matter once the kernel is loaded. We need the datasheets.
"Firmware doesn't matter once the kernel is loaded".
ACPI enters the chat... It can send pieces of code interpreted by the kernel on any hardware event.
I have a Framework laptop and yeah the ACPI firmware is totally buggy and the Linux kernel fails at interpreting it in various cases.
I was under the impression that the firmware is responsible for loading the ACPI tables but the OS takes over and runs the code in its ACPI VM once running.
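Right — the firmware's job is to publish the tables, and the OS's AML interpreter takes over from there. Every ACPI table shares the same 36-byte header defined by the ACPI spec; here's a minimal sketch of parsing one (on Linux the raw tables are exposed under /sys/firmware/acpi/tables, readable as root; the OEM values below are made up for illustration):

```python
import struct

# Standard 36-byte ACPI table header: signature, total length, revision,
# checksum, OEM ID (6 bytes), OEM table ID (8 bytes), OEM revision,
# creator (compiler) ID, creator revision.
ACPI_HEADER = struct.Struct("<4sIBB6s8sI4sI")

def parse_acpi_header(blob: bytes) -> dict:
    sig, length, rev, _csum, oem_id, oem_table_id, _oem_rev, creator, _crev = \
        ACPI_HEADER.unpack_from(blob)
    return {
        "signature": sig.decode("ascii"),
        "length": length,                      # total table size, header included
        "revision": rev,
        "oem_id": oem_id.decode("ascii").strip(),
        "compiler": creator.decode("ascii"),
    }

# Synthetic header with hypothetical OEM values, just to exercise the parser:
hdr = struct.pack("<4sIBB6s8sI4sI",
                  b"DSDT", 36, 2, 0, b"QCOM  ", b"X2LAPTOP", 1, b"INTL", 0x20230628)
print(parse_acpi_header(hdr)["signature"])  # -> DSDT
```

The DSDT parsed this way is exactly the table whose AML bytecode the kernel's interpreter then runs, which is where buggy vendor firmware bites.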
This article seems to be about Qualcomm adding the device trees for the X2 CPUs. Which isn't the first gen.
As someone with a first gen, the device trees are, as I understand it, one of the issues with trying to just install any distro, except for that special Ubuntu one.
I can't just (for example) grab the latest fedora, and try and run that.
Or Fedora, which tracks kernels pretty closely (e.g. this Fedora install, on the normal update channel, is on 6.16)
Last time I checked the Fedora ISOs didn't include the device trees necessary to even begin installation.
Correct, you kind of have to do a bit of work to it to get it to boot.
Now, I haven't tried the latest beta of Fedora 43, but my guess is this won't change.
This is exactly the issue with Qualcomm. If they actually had datasheets/ref manuals/open drivers... it'd be a no brainer. The part looks great.
The reality is this company is notoriously a law firm with a small technical staff on the side.
As someone who has used the Snapdragon X Elite (12 core Oryon) Dev Kit as a daily driver for the past year, I find this exciting. The X Elite performance still blows my mind today - so the new X2 Elite with 18 cores is likely going to be even more impressive from a performance perspective!
I can't speak to the battery life, however, since it is dismal on my Dev Kit ;-)
Unless they added low-power cores to it, it probably isn't great. The chip design was originally for datacenters.
Didn't laptops with Snapdragon X Elite CPUs have pretty good battery life?
https://www.pcworld.com/article/2375677/surface-laptop-2024-...
X2 Elite shouldn't be that different I think.
they do but not extraordinary either.
ive a x elite and a bunch of other laptops
i like the mba 13 (but barely) and the zbook 395+
the x elite is just a bit slow,.incompatible and newer x86 battery life isnt far off
Looks like the shift key isn't too reliable either.
If you read anything online, you'll realize that the battery life 'is' great. For example, LTT: https://www.youtube.com/watch?v=zFMTJm3vmh0
Reading youtube videos are ya?
They were joking. The dev kit didn’t have a battery.
They did add E-cores in X2.
Wait, you got one of those Dev kits? How? I thought they were all cancelled.
Edit: apparently they did end up shipping.
They got cancelled after they started shipping, and even people who received the hardware got refunded.
How's the compatibility? Are there any apps that don't work that are critical?
Surface Pro 11 owner here. SQL Server won't install on ARM without hacks. Hyper-V does not support nested virtualization on ARM. Most games are broken with unplayable graphical glitches with Qualcomm video drivers, but fortunately not all. Most Windows recovery tools do not support ARM: no Media Creation Tool, no Installation Assistant, and recovery drives created on x64 machines aren't compatible [EDIT: see reply, I might be mistaken on this]. Creation of a recovery drive for a Snapdragon-based Surface (which you have to do from a working Snapdragon-based Surface) requires typing your serial code into a Microsoft website, then downloading a .zip of drivers that you manually overwrite onto the recovery media that Windows 11 creates for you.
Day-to-day, it's all fine, but I may be returning to x64 next time around. I'm not sure that I'm receiving an offsetting benefit for these downsides. Battery life isn't something that matters for me.
You ABSOLUTELY do not have to create a recovery drive from a Snapdragon based device. I've done it multiple times from x64 Windows for both a SPX and 11.
Hmm, thank you, that's good to know. Did you just apply the Snapdragon driver zip over the x64 recovery drive? It didn't work for me when my OS killed itself but I could easily have done something wrong in my panic over the machine not working. Since I only have the one Snapdragon device, I was making the assumption that it would have worked if I had a second one, but I didn't actually know that.
Yes, just copy the zip over like the instructions say.
Thanks again for this. Honestly, it may sway my choice on returning to x64 vs. sticking with ARM64 next time. The other issues are relatively minor and can be dealt with, but I didn't like thinking that I was one OS failure away from a bricked machine that I couldn't recover.
>Creation of a recovery drive for a Snapdragon-based Surface (which you have to do from a working Snapdragon-based Surface) requires typing your serial code into a Microsoft website, then downloading a .zip of drivers that you manually overwrite onto the recovery media that Windows 11 creates for you.
That's just creation of a recovery drive for anything that Microsoft itself makes. It's the same process for the Intel Surface devices too.
>no Media Creation Tool
Why would anyone care about that? Most actively avoid Microsoft's media creation tool and use Rufus instead.
That’s brutal.. I wonder why the Apple Silicon transition seemed so much smoother in comparison.
One reason is that Apple sold subsidized devkits to developers starting around 6 months before Apple Silicon launched, while the X Elite devkit was not subsidized, came with Windows 11 Home (meaning that you had to pay another $100 to upgrade to Pro if you were an actual professional developer who needed to join the computer to your work domain), and didn't ship until months after X Elite laptops started shipping. As a result, when the X Elite launched basically everything had to run under emulation.
I think another reason is Apple's control over the platform vs Microsoft's. Apple has the ability to say "we're not going to make any more x86 computers, you're gonna have to port your software to ARM", while Microsoft doesn't have that ability. This means that Snapdragon has to compete against Intel/AMD on its own merits. A couple months after X Elite launched, Intel started shipping laptops with the Lunar Lake architecture. This low-power x86 architecture managed to beat X Elite on battery life and thermals without having to deal with x86 emulation or poor driver support. Of course it didn't solve Intel's problems (especially since it's fabricated at TSMC rather than by Intel), but it demonstrated that you could get comparable battery life without having to switch architectures, which took a lot of wind out of X Elite's sails.
Apple had a great translation layer (Rosetta) that allows you to run x64 code, and it's very fast. However, Apple being Apple, they are going to discontinue this feature in 2026, that's when we'll see some Apple users really struggling to go fully arm, or just ditch their MacBook. I know if Apple does follow through with killing Rosetta, I'll do the latter.
It's a transpiler that takes the x86-64 binary assembly and spits out the aarch64 assembly only on the first run AFAIK. This is then cached on storage for consecutive runs.
Apple silicon also has special hardware support for x86-64's "TSO" memory order (important for multithreaded code) and half-carry status flag.
BTW. A more common term for what Rosetta does is "binary translation". A "transpiler" typically compiles from one high-level language to another, never touching machine code.
Apple also implemented x86 memory semantics for aarch64 to allow for simpler translation and faster execution.
In HW?
Yup! See here: https://www.sciencedirect.com/science/article/pii/S138376212...
Not OP, but I don’t think so. Rosetta inserts ARM barrier instructions in its generated code to emulate x86 memory ordering.
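The translate-once, cache-forever behaviour described above is the key performance trick. A toy sketch of the idea (purely illustrative — Rosetta's actual translation pipeline and on-disk cache format are Apple internals):

```python
import hashlib

class TranslationCache:
    """Toy model of ahead-of-time binary translation caching: translate
    each binary once, keyed by a hash of its bytes, then reuse the
    cached result on every subsequent launch."""

    def __init__(self):
        self._cache = {}
        self.translations_performed = 0

    def _translate(self, x86_code: bytes) -> str:
        # Stand-in for the expensive x86-64 -> arm64 translation step.
        self.translations_performed += 1
        return "arm64:" + hashlib.sha256(x86_code).hexdigest()[:12]

    def run(self, x86_code: bytes) -> str:
        key = hashlib.sha256(x86_code).digest()
        if key not in self._cache:       # first launch: translate and persist
            self._cache[key] = self._translate(x86_code)
        return self._cache[key]          # later launches: straight cache hit

cache = TranslationCache()
binary = b"\x48\x89\xe5\xc3"             # pretend x86-64 machine code
cache.run(binary)
cache.run(binary)
print(cache.translations_performed)      # -> 1 (translated only once)
```

The hardware TSO mode discussed in the sibling comments is orthogonal to this: it removes the need to inject memory barriers into the translated output, which is where a lot of the runtime cost would otherwise go.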
Did it? From that list: SQL Server doesn't work on Mac and there's no Apple equivalent, virtualisation is built into the system so that kind of worked but with restrictions, games barely exist on Mac so the few that cared did the ports but it's still minimal. There's basically no installation media for Macs in the same way as Windows in general.
What I'm trying to say is - the scope is very different / smaller there. There's a tonne of things that didn't work on Macs both before and after and the migration was not that perfect either.
Out of the gate, Apple silicon lacked nested virtualization, too. They added it in the M3 chip and macOS 15. Macs have different needs than Windows though; I think it's less of a big deal there. On Windows we need it for running WSL2 inside a VM.
I'd guess the M3 features aren't strictly required for nested virtualization, and it was more of a software design decision to only add the support when some helpful hardware features were shipped too. E.g. here's nested virtualization support for ARM on Linux in 2017: https://lwn.net/Articles/728193/
Nested virt does need hardware support to implement efficiently and securely. The Apple chips added that over time, eg M2 actually had somewhat workable support but still incomplete and hacky https://lwn.net/Articles/928426/ - the GIC (interrupt controller) was a mess to virtualise in older versions, which is different from the instruction set of the CPU.
On Windows, nested virtualization already existed before WSL; all the kernel and device driver security features introduced in Windows 10, and made always-enabled in Windows 11, require running Hyper-V, which is a type 1 hypervisor.
So it is rather common to have to deal with nested virtualization, even for those of us who seldom use WSL.
Yes, nested virtualization has existed for a long time... on Intel. On Windows, it is not supported on ARM. For a long time it wasn't even supported on AMD! They added AMD nested virtualization support in Windows Server 2022!
Note that when the Windows host is invisibly running under Hyper-V, your other Hyper-V VMs are its "siblings" and not nested children. You're not using nested virtualization in that situation. It's only when running a Hyper-V VM inside another Hyper-V VM. WSL2 is a Hyper-V VM, so if you want to run WSL2 inside a Windows Hyper-V VM which is inside your Windows host, it ends up needing to nest.
Nested virtualization is not required for WSL2 or Hyper-V VMs. It's only required if you want to run VMs from within WSL2 (Windows 11 only) or Hyper-V VMs within Hyper-V VMs.
Yeah, I understand this and said it correctly in my post. We need nested virtualization to run WSL2 inside a VM: this is a Linux VM inside a Windows VM inside a Windows host. WSL2 is already a VM, so if you want to run that inside a VM, it requires nested virtualization. Nested virtualization is one of those features that people don't know about unless they need it, and they find out for the first time when they get an error message from Hyper-V. If you have a development VM on a system without nested virtualization, you're stuck with WSL1 inside that VM, or using a "sibling" Linux VM that you set up manually (the latter was my actual solution to this issue).
For one thing Apple dropped 32-bit before they transitioned to ARM while Windows compatibility goes back 30 years.
Actually, the Mach-O file format was multi-arch by design (on Windows we're still stuck with "Program Files (x86)")..
Anyway, before dropping 32-bit, they dropped PowerPC.
Another consideration: Apple is the king of dylibs; you're usually dynamically linking against the OS frameworks/libs, so they can plan their glue smarter and keep the frameworks working in the native arch. (That was really important with PPC->Intel, where you also had big-endian to deal with...)
You also get "Program Files (ARM)" (including a complementary "SysArm32") on older arm64 systems too.
Because it was handled by the only tech company left that actually cares about the end user. Not exactly a mystery.
Having a narrow product line helped Apple a lot. Similarly being able to deprecate things faster than business-oriented Microsoft. Apple also controls silicon implementation. So they could design hardware features that enabled low to zero overhead x86 emulation. All in all Rosetta 2 was a pretty good implementation.
Microsoft is trying to retain binary compatibility across architectures with ARM64EC stuff which is intriguing and horrifying. They, however, didn't put any effort into ensuring Qualcomm is implementing the hardware side well. Unlike Apple, Qualcomm has no experience in making good desktop systems and it shows.
> Apple also controls silicon implementation.
People sometimes say that as if it came without foresight or cost or other complexities in their business.
No, in the end they are hyper strategic and it pays off.
I didn't say otherwise. They probably realized they could pull off a complete desktop CPU design by the iPad at the latest, probably earlier. They were probably not happy using Intel chips, and their business strategy has always been controlling and limiting HW capabilities as much as possible.
Given how Apple makes it maintenance hostile and secures against their end customers, no.
Because Apple controls everything, vs the Windows/Linux world where hundreds (thousands?) of OEMs create things?
I agree with you on the Windows side.
Linux is different. Decades of being tied to x86 made the OS way more coupled with the processor family than one might think.
Decades of bugfixes, optimizations and workarounds were made assuming a standard BIOS and the ACPI standards.
Especially on the desktop side.
That, and the fact that SoC vendors are decades behind on driver quality. They remind me of the NDiswrapper era.
Also, a personal theory I have is that people have unfair expectations of ARM Linux. Back then, when x86 Linux had similar compatibility problems, there was nothing to compare it with, so people just accepted that Linux was going to be a pain and that was it.
Now the bar is higher. People expect Linux to work the way it does in x86, in 2025.
And manpower in FOSS is always limited.
Linux runs perfectly on MIPS, Power, SPARC, obviously ARM (cue the millions of phones running Linux today), RISC-V, and at least a dozen other architectures with little to no users. It's absolutely not tied to x86.
> Decades of being tied to x86
This doesn't pass the smell test when Linux powers so many smart or integrated devices and IoT on architectures like ARM, MIPS, Xtensa, and has done so for decades.
I didn't even count Android here which is Linux kernel as first class citizen on billions of mostly ARM-based phones.
You are talking out of your ass here. If you make bold statements like this you need to provide evidence. Linux works fine on many platforms...
My Asahi Linux M1 MacBook Air would disagree with you.
Apple already went through this before with PowerPC -> x86. They had universal binaries, Rosetta, etc. to build off of. And they got to do it with their own hardware, which includes some special instructions intended to help with emulation.
> Apple already went through this before with PowerPC -> x86
Not to mention 68K -> PowerPC.
Rhapsody supported x86, and I think during the PowerPC era Apple kept creating x86 builds of OS X just in case. This may have helped to keep things like byte order dependencies from creeping in.
Every Mac transitioned to ARM; only a very small share of Windows PCs are running ARM. So right now there's not a large user base to incentivise software to be written for it.
You are right that Windows on ARM cannot be called a success. But if you make Windows/macOS cross platform software then your software needs to be written for ARM anyway.
So if you support macOS/x86, macos/ARM, and Windows/x86, then the additional work to add Windows/ARM is rather small, unless you do low-level stuff (I remember Fortnite WoA port taking almost a year from announcement to release due to anticheat).
The first few months were a little tricky depending on what software you needed, but it did smooth out pretty quickly.
Does Remote Desktop into the Surface work well?
When I'm home, I often just remote desktop into my laptop.
I'm wondering if remoting into ARM Windows is as good?
Yes everything in user space works as expected. Note that NT has supported non-x86 processors since 1992.
According to some accounts, even the name NT was a reference to the Intel i860, which was the original target processor.
I have a similar Windows Arm64 machine (Lenovo "IdeaPad 5 Slim"), RDP into it works OK.
There is one issue I ran into that I haven't on my (self-built) Windows desktops: when Windows Hello (fingerprint lock) is enabled, and neither machine is on a Windows domain, the RDP client will just refuse to authenticate.
I had to use a trick to "cache" the password on the "server" end first, see https://superuser.com/questions/1715525/how-to-login-windows...
On the bright side, there's a good chance that Windows on ARM is not well supported by malware. There's a situation where you benefit from things being broken.
Most apps for dev work actually work:
- RStudio
- VS Code
- WSL2
- Fusion 360
- Docker
The only major exception is:
- Android Studio's Emulator (although the IDE itself does work)
Yeah, I too was surprised to find the dev experience very good: all JetBrains IDEs work well, Visual Studio appears to work fine, and most language toolchains seem well supported.
JetBrains stuff (love it!) is built on Java, so I’m not terribly surprised. I don’t know how much native code there is though.
Plus they’ve been through the Apple Silicon change, so it’s not the first time they’ve been on non-x86 either.
Have I had any app compatibility issues? To quote Hamlet, Act 3, Scene 3, Line 87: "No."
The Prism binary emulation for x86 apps that don't have an ARM equivalent has been stellar with near-native performance (better than Rosetta in macOS). And I've tried some really obscure stuff!
For me it is too slow to run Age of Empires 2: DE multiplayer. Ten-plus-year-old laptops with Intel chips are faster there.
I suspect that's due to the GPU and not due to Prism, because they basically just took a mobile GPU and stuffed it into a laptop chip. Generally performance seems to be on par with whatever a typical flagship Android devices can do.
Desktop games that have mobile ports generally seem to run well, emulation is pretty solid too (e.g. Dolphin). Warcraft III runs OK-ish.
The GPUs don't go toe-to-toe with current gen desktop GPUs, but they should be significantly better than the GTX 650, a mid-range desktop GPU from 2012, which the game (2019) lists as recommended. It does sound like something odd is going on beyond just lack of hardware.
https://www.videocardbenchmark.net/gpu.php?gpu=Snapdragon+X+...
https://www.videocardbenchmark.net/gpu.php?gpu=GeForce+GTX+6...
That something odd is called GPU drivers. Even Intel struggled to get games running on their iGPUs (they recently announced that they are dropping driver development for all GPUs older than Alchemist).
There are also some architectural differences between mobile & desktop GPUs which may impact games that are not optimized for the platform: https://chipsandcheese.com/p/the-snapdragon-x-elites-adreno-...
That's certainly not what the reviews say.
Adobe apps that ran fine on Rosetta didn't work at all on Prism.
https://www.pcmag.com/articles/how-well-does-windows-on-arms...
Same here. I've not had any issues with my Surface Pro 11.
Ironically, the app I've had the most trouble with is Visual Studio 2022. Since it has a native ARM64 build and installation of the x64 version is blocked, there are a bunch of IDE extensions that are unavailable.
X Elite does not have AVX instructions (they are emulated instead)
Today the Qualcomm CEO stated[0] that the combination of Android and ChromeOS, i.e. Android Computers, will be available on Snapdragon laptops. Maybe these X2 CPUs will be in those laptops.
[0] https://www.techradar.com/phones/android/ive-seen-it-its-inc...
Does anyone buy these?
For people complaining about battery control and Android emulation on Linux, ChromeOS is a boon.
You effectively get an actual Linux distro + most of Android, with a side of Chrome. It's way closer to "a real computer" than an iPad for instance, and only loses to the Surface Pro/Z13 line in terms of versatility IMHO.
It really wasn't bad; my only deal breakers were keyboard remapping being non-existent and the Bluetooth stack being flaky.
I got a ChromeOS device a few years ago and it was great. I think they get an undeserved bad reputation from being the locked-down devices you're forced to use in schools, but a personal ChromeOS device is a capable computer that can run any Android app or desktop Linux app.
Though having said that, in the past year I've replaced ChromeOS with desktop Linux (postmarketOS) and I love it even more now. 4GB of RAM was a bit slim for running everything in micro-VMs for "security," which is what ChromeOS does. I've had no trouble with battery life or Android emulation (Waydroid) since switching.
Let's hope pKVM and other Android virtualization stuff can fill in the gap here.
Not really any, Crostini has plenty of restrictions.
Cool if one wants to do CLI stuff alongside web and Android apps, but that is as far as it goes for GNU/Linux, with many a "yes, but".
https://chromium.googlesource.com/chromiumos/docs/+/1792b43f...
I've used VS Code on ChromeOS with the GPU acceleration flag for many, many years without any issues on a couple different devices (x64 and more recently, arm64). It can even hide the window chrome so it looks 1:1 with VS Code on any other platform. Same goes for many other GUI Linux apps where the Android version feels too much like a toy in comparison; it's an incredibly versatile feature for dev work.
Sorry, but "CLI stuff" is not "as far as it goes" with desktop Linux apps on ChromeOS. ChromeOS provides Wayland and PulseAudio servers to the apps as well so GUI and audio works too. It even synchronises file associations and installs a ChromeOS-like GTK theme into the container. The Linux GUI apps I had installed back when I used it felt completely native.
Without hardware acceleration and sound issues depending on the model, that is why I linked the page, as I was expecting such reply.
It worked on my device. The page you linked looks very outdated and doesn't have my device's board or any device made in the past 5 years. The lists of unsupported devices also look pretty reasonable - old kernels, CPUs that don't support virtualisation and 32-bit ARM. Since modern ChromeOS uses the same virtualisation to run Android apps, I doubt there's a modern device where it doesn't work.
Yes, looking at the FAQ, for example, it claims that USB is flat out unsupported on Linux which hasn't been true for 4+ years so it's very outdated.
ChromeOS (at least Flex) supports keyboard remapping now.
If you look at the verified hardware list for ChromeOS Flex[0], you can get an idea of what ChromeOS devices are being deployed for. Apart from education and companies that use Google Workspace, there's a lot of ChromeOS devices deployed as kiosks and call center computers. This is reflected not only in obscure documentation, but also in the marketing material[1].
The "enterprise" manageability and reduced attack surface is driving Google to jack up Chromebook prices. The "Chromebook Plus" models are nearing the same price as a midrange Dell Inspiron, HP OmniBook, or Lenovo IdeaPad. You may have also noticed M4 MacBook Airs can be bought for the price of an iPhone 17, and I suspect that's partially a response from Apple to the Chromebook price increases. Buying a $600 Chromebook might have been sane for someone tired of Microsoft and not interested in a $1000 MacBook Air, but in 2025, with MacBook Air prices going down significantly[2], Chromebooks are not as appealing to regular consumers (different story for businesses).
[0] https://support.google.com/chromeosflex/answer/11513094?sjid...
[1] https://chromeos.google/business-solutions/use-case/contact-...
[2] https://www.zdnet.com/article/the-m4-macbook-air-is-selling-...
ChromeOS is popular in schools and for extremely locked down, managed corporate devices.
[dead]
We’ve been using X Elite Snapdragon laptops (Thinkpad T14s and Yoga Slim running Ubuntu’s concept images) to build large amounts of ARM software without the need for cross-compiling. The hardware peripheral support isn’t 100% yet (good enough) but I’ve been impressed with the performance.
ARM seems to be popular in the server space and it’s nice to see it trickling down to the PC market.
Trickling? Apple’s been on ARM for five years with great results.
To be fair, they did say "PC" specifically. It's not uncommon to consider that a category that doesn't include Apple (e.g. the "I'm a Mac" "I'm a PC" ads from years ago)
People just need to quit using the term "PC" to refer to desktop or laptop hardware that happens to be running Windows. Laptops running MacOS are "personal computers," as are desktops running Linux, or effing phones running Android, for that matter.
I think the issue is that there's clearly a need for a term for the category of "things that usually run Windows (but you can also probably put Linux on it, and like, even one of the BSDs if you're feeling adventurous)". PC isn't a great one from a linguistic perspective, but there's not an alternative I've heard that seems likely to catch on. There probably also should be a better term for "laptop/desktop", since as you mention "computer" itself is not really narrow enough if you're being pedantic, but at the end of the day, right now the only real differentiator we have is context. In the context here, it was honestly more clear what was meant by "PC" in the top-level comment than it was whether the person responding to it actually misunderstood or was trying to make a point.
How's the battery life?
Hard to say, we keep them on the chargers most of the time. I haven’t measured it. No one’s complained yet, for what it’s worth.
Did qualcomm ever get its act together with firmware/drivers and linux? It's a dead end to me if they aren't at an Intel/AMD level of openness on this front.
Does anybody know if the X2 supports the x86 Total store ordering (TSO) memory ordering model? That's how Apple silicon does such efficient emulation of x86. I'd think that would be even MORE important for a Windows ARM64 laptop where there is so much more legacy x86 software going back decades.
Does anyone have benchmarks for Rosetta with TSO vs the Linux version with no-TSO? I guess it might be a bit challenging to achieve apples to apples, although you could run a test benchmark on OSX and then Asahi on the same hardware, I think?
I've always been curious about just how much Rosetta magic is the implementation and how much is TSO; Prism in Windows 24H2 is also no slouch. If the recompiler is decent at tracing data dependencies it might not have to fence that much on a lot of workloads even without hardware TSO.
People who have worked on the Windows x64 emulator claim that TSO isn't as much of a deal as claimed, other factors like enhanced hardware flag conversion support and function call optimizations play a significant role too:
http://www.emulators.com/docs/abc_exit_xta.htm
> People who have worked on the Windows x64 emulator claim that TSO isn't as much of a deal as claimed
This is a misinterpretation of what the author wrote! There is a real and significant performance impact in emulating x86 TSO semantics on non-TSO hardware. What the author argues is that enabling TSO process-wide (like macOS does with Rosetta) resolves this impact but it carries counteracting overhead in non-emulated code (such as the emulator itself or in ARM64EC).
The claimed conclusion is that it's better to optimize TSO emulation itself rather than bruteforce it on the hardware level. The way Microsoft achieved this is by having their compiler generate metadata about code that requires TSO and by using ARM64EC, which forwards any API calls to x86 system libraries to native ARM64 builds of the same libraries. Note how the latter in particular will shift the balance in favor of software-based TSO emulation since a hardware-based feature would slow down the native system libraries.
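The trade-off is easy to see in a toy translator. This is a hypothetical sketch, not Prism's or FEX's actual design: under software TSO emulation only guest x86 accesses need ordered ARM64 instructions, while with process-wide hardware TSO every access, including the emulator's private bookkeeping, pays the ordering cost.

```python
def translate_access(kind, guest_memory, hardware_tso=False):
    """Map one guest memory access to an ARM64 instruction mnemonic.

    kind: "load" or "store"
    guest_memory: True if the access targets emulated x86 memory
    hardware_tso: True if the CPU orders all accesses for us (Apple-style)
    """
    if hardware_tso or not guest_memory:
        # Plain, unordered instructions: either the hardware already
        # guarantees TSO, or ordering isn't needed (emulator-private data).
        return "ldr" if kind == "load" else "str"
    # Software TSO emulation: acquire loads / release stores so the guest
    # still observes x86-like ordering on a weakly ordered CPU.
    return "ldapr" if kind == "load" else "stlr"
```

With `hardware_tso=True` even the emulator's own `str`s are (implicitly) ordered by the CPU, which is the counteracting overhead in non-emulated code the parent describes.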
Without ecosystem control, this isn't feasible to implement in other x86 emulators. We have a library forwarding feature in FEX, but adding libraries is much more involved (and hence currently limited to OpenGL and Vulkan). We're also working on detecting code that needs TSO using heuristics, but even that will only ever get us so far. FEX is mainly used for gaming though, where we have a ton of x86 code that may require TSO (e.g. mono/Unity) but wouldn't be handled by ARM64EC, so the balance may be in favor of hardware TSO either way here.
For reference, this is the paragraph (I think) you were referring to:
> Another common misconception about Rosetta is that it is fast because the hardware enforces Intel memory ordering, something called Total Store Ordering. I will make the argument that TSO is the last thing you want, since I know from experience the emulator has to access its own private memory and none of those memory accesses needs to be ordered. In my opinion, TSO is a red herring that isn't really improving performance, but it sounds nice on paper.
How is it a misinterpretation? To re-quote that last sentence:
> In my opinion, TSO is a red herring that isn't really improving performance, but it sounds nice on paper.
That's the author directly saying that TSO isn't the major emulation performance gain that people think it is. You're correct that there are countering effects between TSO's benefits to the emulated code vs. the negative effects on the emulator and other non-emulated code in the same process that are fine running non-TSO, but to users, this distinction doesn't matter. All that matters is the performance of emulated program as a whole.
As for the volatile metadata, you're correct that MSVC inserts additional data to aid the emulation. What's not so great is that:
- It was basically an almost undocumented, silent addition to MSVC.
- In some cases, it will slow down the generated x64 code slightly by adding NOPs where necessary to disambiguate the volatile access metadata.
- It only affects code statically compiled with a recent version of MSVC (late VS2019 or later). It doesn't help executables compiled with non-MSVC compilers like Clang, nor any JIT code, nor is there any documentation indicating how to support either of these cases.
> How is it a misinterpretation? To re-quote that last sentence:
I think we agree in our understanding, but condensing it down to "TSO isn't as much of a deal as claimed" is misleading:
* Efficient TSO emulation is crucial (both on Windows and elsewhere)
* The blog claims hardware TSO is non-ideal on Windows only (because Microsoft adapted the ecosystem to facilitate software-based TSO emulation). (Even then, it's unclear if the author quantified the concrete impact)
* Hardware TSO is still of tremendous value on systems that don't have ecosystem support
> [volatile metadata] doesn't help executables compiled with non-MSVC compilers like Clang, nor any JIT code, nor is there any documentation indicating how to support either of these cases.
That's funny, I hadn't considered third party compilers. Those applications would still benefit from ARM64EC (i.e. native system libraries), but the actual application code would be affected quite badly by the TSO impact then, depending on how good their fallback heuristics are. (Same for older titles that were compiled before volatile metadata was added)
Following up that last part -- I recompiled my x64 codebase with /volatileMetadata-, which reduced the volatile metadata by ~20K (the remainder most likely from the statically linked CRT). The profiling results were negligible, under noise level between the builds and both about 15-30% below the native ARM64 build.
The interesting part is when the compatibility settings for the executables are modified to change the default multi-core setting from Fast to Strict Multi-Core Operation. In that mode, the build without volatile metadata runs about 20% slower than the default build. That indicates that the x64 emulator may be taking some liberties with memory ordering by default. Note that while this application is multithreaded, the worker threads do little and it is very highly single thread bottlenecked.
20% is about the general order of magnitude we observed in FEX a while ago, though as you enable all TSO compatibility settings (including those rarely needed) it'll be much higher even. As people elsewhere in the thread mentioned it'd be interesting to see how FEX fares on Asahi with hardware TSO enabled vs disabled (but with conservative TSO emulation as set up by default) since it's less of a blackbox.
> Efficient TSO emulation is crucial (both on Windows and elsewhere)
Yes, but this is not in contention...? No one is disputing that TSO semantics in the emulated x86 code need to be preserved and that it needs to be done fast, we're talking about the tradeoffs of also having TSO support on the host platform.
> The blog claims hardware TSO is non-ideal on Windows only (because Microsoft adapted the ecosystem to facilitate software-based TSO emulation). (Even then, it's unclear if the author quantified the concrete impact)
> Hardware TSO is still of tremendous value on systems that don't have ecosystem support
That isn't what the author said. From the article:
> Another common misconception about Rosetta is that it is fast because the hardware enforces Intel memory ordering, something called Total Store Ordering. I will make the argument that TSO is the last thing you want, since I know from experience the emulator has to access its own private memory and none of those memory accesses needs to be ordered. In my opinion, TSO is a red herring that isn't really improving performance, but it sounds nice on paper.
That is a direct statement on Rosetta/macOS and does not mention Prism/Windows. How correct that assessment may be is another matter, but it is not talking about Windows only.
> Those applications would still benefit from ARM64EC (i.e. native system libraries), but the actual application code would be affected quite badly by the TSO impact then, depending on how good their fallback heuristics are.
I will have to check this, I don't think it's that bad. JITted programs run much, much better on my Snapdragon X device than the older Snapdragon 835, but there are a lot of variables there (CPU much faster/wider, Windows 11 Prism vs. Windows 10 emulator, x86 vs x64 emulation). I have a program with native x64/ARM64 builds that runs at -25% speed in emulated x64 vs native ARM64, I'm curious myself to see how it runs with volatile metadata disabled.
This is more like what I’d expect! This is a great article too, thank you, this is the kind of thing I come to HN for :)
There was a paper with benchmarks posted recently here but I cant find it immediately. I think it was 6-10% from memory.
i mean, FEX runs on a linux host both with and without TSO, can be compared directly
(the downstream asahi kernel supports TSO)
For really old software, it tends not to make good use of multiple cores anyway and you can simply emulate just a single core to achieve total store ordering.
Anything modern and popular and you can probably get it recompiled to ARM64
Unfortunately games are the most common demanding multithread applications. Studios throw a binary over the fence and then get dissolved. Seems to be the way the entire industry operates.
Maybe more ISA diversity will incentivize publishers to improve long-term software support but I have little hope.
Their top model still only has "Up to 228 GB/s" bandwidth, which places it in the low-end category for anything AI-related. For comparison, Apple Silicon goes up to 800 GB/s and Nvidia cards around 1800 GB/s, and there's no word on whether it supports 256-512 GB of memory.
> Their top model still only has "Up to 228 GB/s" bandwidth which places it in the low end category for anything AI related, for comparison Apple Silicon is up to 800GB/s
Most Apple Silicon is much less than 800 GB/s.
The base M4 is only 120GB/s and the next step up M4 Pro is 273GB/s. That’s in the same range as this part.
It’s not until you step up to the high end M4 Max parts that Apple’s memory bandwidth starts to diverge.
For the target market with long battery life as a high priority target, this memory bandwidth is reasonable. Buying one of these as a local LLM machine isn’t a good idea.
This, and always check benchmarks instead of assuming memory bandwidth is the only possible bottleneck. Apple Silicon definitely does not fully use its advertised memory bandwidth when running LLMs.
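To put numbers on why bandwidth matters here: a common back-of-the-envelope ceiling for token generation is memory bandwidth divided by the bytes of weights streamed per token. A rough sketch (the model size and quantization below are illustrative assumptions, and real throughput lands well below this bound, as the benchmark caveat above says):

```python
def max_tokens_per_sec(bandwidth_gb_s, params_billions, bytes_per_param):
    """Crude upper bound on LLM decode speed: each generated token has to
    stream (roughly) all model weights through memory once."""
    model_bytes = params_billions * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / model_bytes

# A hypothetical 8B model quantized to ~4 bits (0.5 bytes/param):
#   228 GB/s -> ceiling of ~57 tokens/s
#   800 GB/s -> ceiling of ~200 tokens/s
```

So the gap between 228 GB/s and 800 GB/s translates roughly linearly into the decode-speed ceiling, which is why the spec draws so much attention for local LLM use.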
As I stated this is the top Qualcomm model we're talking about, not the base which is significantly lower.
Given that their top model underperforms the most common M4 chip and the M5 is about to be released, it's not very impressive at all.
Even the old M2 Max in my early 2023 MacBook Pro has 400GB/s.
The base model X2 Elite has memory bandwidth of 152 GB/s. M4 Pro is a modest win against the Extreme as mentioned, and Qualcomm has no M4 Max competitor that I'm aware of.
https://www.qualcomm.com/content/dam/qcomm-martech/dm-assets...
I think the pure hardware specs compare reasonably against AS, aside from the lack of a Max of course. Apple's vertical integration and power efficiency make their product much more compelling though, at least to me. (Qualcomm, call me when the Linux support is good.)
Most consumers don’t care about local LLMs anyway.
Yet the apps top the App Store charts. Considering that these are not upgradable I think the specs are relevant. Just as I thought Apple shipping systems with 8 GB minimums was not good future proofing.
Looking at the Mac App Store in the US, no they don't. There's not an LLM app in sight (local or otherwise).
What apps with local llm top app store charts?
They asked ChatGPT.
These all have nightmarish support. They're not a big deal for Qualcomm so the driver support is garbage. And you're stuck on their kernel like one of those Raspberry Pi knock offs. It's just really hard to take them seriously.
Ironically M1 chip is better supported on Linux.
> And you're stuck on their kernel like one of those Raspberry Pi knock offs. It's just really hard to take them seriously.
Qualcomm has been mainlining Snapdragon X drivers to the 6.x kernel tree for over a year now. There have been multiple frontpage HN posts about this in the past 12 months.
Webcam/mic/speaker support may be a WIP depending on your model, but Snapdragon X Elite has been booting Linux for months now, using only drivers in Linus' tree. The budget chips (Snapdragon X Plus) have far less direct support from Qualcomm, but some independent hackers have put in heroic effort to make those run Linux too.
Yes, but the M1/M2 only…
Really hope they sort out Linux support on these. Seems like it would make a great travel laptop
I just want an ARM Linux MiniMac equivalent. At a reasonable price.
If you're willing to go back a few generations, Asahi Linux supports the Mac Mini (M1, M2 and M2 Pro). Support is missing for USB-C displays (it has HDMI) and Thunderbolt, but other than that you can have an awesome experience on these (and probably get yourself a good deal these days)
Why do you want an ARM mini PC?
If Snapdragon (or ARM players in general) wanted to challenge x86 and Apple dominance, do they need to compete in the exact same arena? Could they carve out a niche (example: ultra-efficient always-on machines) and then expand?
Are you aware of countless SoCs meant for use in smartphones and below? This is them expanding.
Exactly! That makes this move all the more interesting. The smartphone SoC market is saturated, and margins are shrinking. Laptops/PCs give Qualcomm a chance to leverage its IP in a higher-ASP segment. Expanding is logical, but the competitive bar is way higher.
Also a bunch of Chromebooks with MediaTek chips.
Apple chips are ARM chips.
“ARM chip” is a pretty broad umbrella. Apple’s M-series is based on the ARM ISA, the microarchitecture is Apple’s own design, and the SoCs are built with very different cache hierarchies, memory bandwidth, and custom accelerators. I was simply using Apple as an example of another big player.
Well so is the snapdragon X elite, including the older snapdragons (anyone remember scorpion cores on QSD8x50?)
“Multi-day” battery life sounds wild! That’s probably the biggest thing for users. It would be good for Apple to get some competition because their M-chips seemed so far away from everything else.
Careful; the multi-day claims may depend on having an unrealistically huge battery, or being active only sporadically across the time period.
Still, even if someone uses it for two hours a day and then just closes it being able to run for multiple days without charging the way Macs can is fantastic.
I agree it seems incredibly unlikely that you’re doing multiple days of eight hours of work without charging.
Longer is always better, so if it’s true at all great for them.
Any battery life claim needs to be aligned with the consumer-class operating system and application layer (iOS, Android, etc). Multi-day battery life on a non-Google-Pixel Android device with typical usage would be interesting.
Any thermal design power data? It's difficult to evaluate their efficiency claims (work per watt) without it.
It looks like Lenovo and others are starting to get NUCs/MiniPCs out with these. I'd love to have one of these for Proxmox.
> "The platform is capable of booting kernel at EL2 with kvm-unit tests performed on it for sanity."
https://lore.kernel.org/lkml/20250925-v3_glymur_introduction...
EL2 support is huge, means virtualization will work on non-Windows OSes (e.g: Linux KVM), unlike with previous gen.
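A quick userspace sanity check, in case it helps: on Linux, usable KVM shows up as /dev/kvm, and on arm64 the kernel can only offer it if it got control at EL2. A trivial sketch:

```python
import os

def kvm_available(dev_path="/dev/kvm"):
    """True if the kernel exposes KVM at the given device path; on arm64
    this implies the kernel was entered at EL2 at boot."""
    return os.path.exists(dev_path)
```

`ls /dev/kvm` from a shell tells you the same thing, of course.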
It doesn't say which generation of core is it. Are they the same as the one in Elite Gen 5?
Has Microsoft actually pushed for the ARM changes? Because I don't believe Qualcomm can do it alone.
FOSS support for Windows ARM has been hampered by Github (owned by MS) not supporting free Windows ARM runners. They may be finally getting their act together but are years late to the game.
Yes, it's the same Oryon V3.
AFAIK Windows on ARM is completely pushed by Microsoft (obviously they're limited by their own competence) and Qualcomm has been kind of phoning it in.
I trust MS in this. NT has been multi-arch since day one. x86 wasn’t even the original lead architecture.
They also know the score. Intel is not in a good place, and Apple has been showing them up in lower power segments like laptops, which happen to be the #1 non-server segment by far.
They don’t want to risk getting stuck the way Apple did three times (68k, PowerPC, Intel) where someone else was limiting their sales.
So they’re laying groundwork. If it’s a backup plan, they’re ready. If ARM takes off and x86 keeps going well, they’re even better off.
Not a single benchmark even against the previous generation. Just a "legendary leap in performance".
Bigly fast, trust them!
Blazingly fast, even
They showed benchmarks in the video but it's probably best to wait for independent reviews anyway.
Phoronix!
why is it so hard for these companies to do any kind of decent marketing? more importantly, when do we get decent MacBook Air competitors?
> when do we get decent MacBook Air competitors
When laptop OEMs stop catering to the lowest common denominator corporate IT purchasers (departments which don't care about screen quality, speaker quality, or much of anything else outside of does the spec sheet on paper match our requirements and is it cheap).
I have a Yoga Slim 7x, which has the ARM. Screen quality is fantastic along with build quality, touchpad and keyboard feel :shrug:
It really depends on what Laptop line you buy. Dells have overwhelmingly become garbage, right next to HP.
Speaker quality on a laptop, otoh? Couldn't care less, I use headphones/earbuds 99% of the time because if I'm on a portable computer, I'm traveling and I don't want to be an inconsiderate arse.
The Yoga Slim 7x is a rather unique outlier. I was on the market for a non-Mac laptop a little while ago, and it was literally the only one that met my standards.
> departments which don't care about screen quality, speaker quality, or much of anything else outside of does the spec sheet on paper match our requirements and is it cheap)
Translation: departments which don't care about workers' wellbeing.
This is just a laptop cpu, not an end consumer product…
They’re not marketing to consumers, or even really enthusiasts though right?
They’re marketing to OEMs.
Who is likely to package this into existing lines, from the majors? Is this a future lenovo/thinkpad carbon?
I would assume it'll follow the path as the first X Elite.
MS put out surface & surface laptop with it, Lenovo did do the ThinkPad X1 with it, and Dell put it in the XPS line.
It's likely to be in Thinkpads (unless Lenovo lost so much money on the X Elite that they ragequit ARM). They also had a testimonial from HP.
the OEMs who used the Snapdragon X1 Elite in windows laptops, from https://en.wikipedia.org/wiki/List_of_devices_using_Qualcomm... :
Acer, Asus, Dell, HP, Lenovo, Microsoft, Samsung
Looking at the SOCs used, only Dell, Microsoft, and Samsung used the 2nd fastest SoC, the X1E-80-100 - the Dell and Microsoft laptops could be configured with 64GB soldered.
Samsung also used the fastest SoC (the only OEM to do so), the X1E-84-100. From a search of their USA website, you're stuck with only 16GB on any of their Snapdragon laptops. :(
I'd hope whichever OEM(s) uses the Snapdragon X2 Elite Extreme SoC (X2E-96-100) allows users to configure RAM up to 64GB or 128GB.
X1 Carbon is part of the Intel Evo Platform. These are co-developed with Intel and therefore this line is exclusive to them.
X13s was confirmed to be sunset, another T14s is the most likely candidate among the ThinkPads.
Damn. They sunset the x13s? That's been my daily driver for a few months now. I was really hoping we'd see another one based around the Snapdragon X2.
Those memory bandwidth numbers are making me proud of being a LPDDR4 holdout.
I'm not holding my breath though. I have a Samsung Edge 4 laptop and I didn't find the battery life impressive - prob got around 6 hours under coding / programming tasks. GPU performance is terrible too.
I feel like I'm constantly charger-tending all my non-Apple silicon laptops.
M-series instant wake from sleep is also years ahead of the Windows wakeup roulette, so even if this new processor helps with time away from chargers... we still have the Windows sleep/hibernate experience.
Seems to be the first Arm CPU to hit 5 GHz. I couldn’t find the ISA details, and curious if they will support SME, like the M-series Apple chips?
It does have SME.
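If you want to check for SME (or any other ISA extension) on a running Linux box, /proc/cpuinfo lists it in the `Features` line on arm64 (`flags` on x86). A small helper sketch, with the path parameterized so it can be pointed at any cpuinfo-format file:

```python
def cpu_has_feature(flag, cpuinfo_path="/proc/cpuinfo"):
    """Check whether a CPU feature token ('sme' on arm64, 'avx' on x86)
    appears in the kernel-reported feature list."""
    try:
        with open(cpuinfo_path) as f:
            text = f.read()
    except OSError:
        return False
    for line in text.splitlines():
        # arm64 uses "Features : ...", x86 uses "flags : ..."
        if line.lower().startswith(("features", "flags")):
            if flag in line.split(":", 1)[1].split():
                return True
    return False
```

E.g. `cpu_has_feature("sme")` on the device itself; the same call with `"avx"` relates to the emulation note earlier in the thread.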
Single core only @turbo-boost.
18 cores = 12 Prime and 6 Performance Cores
Not sure what a prime core is.
For comparison the M4 Pro can go as high as 10 performance cores and 4 efficiency cores.
Looks like some benchmarks have started leaking: https://www.notebookcheck.net/Snapdragon-8-Elite-Gen-5-perfo...
Mind you, Geekerwan managed to push the A19 Pro to 4019 in Geekbench 6 by using active cooling. https://youtu.be/Y9SwluJ9qPI
Today I learned that people are overclocking phone CPUs/SoCs
Active cooling just means adding a fan.
You can probably pretty easily just say Prime==Performance and Performance==Efficiency, but I think the "Prime" branding is kind of a carry over from Snapdragon mobile chips where they commonly use three tiers of core designs rather than the two. They still want to advertise the tier 2 cores as fast so T3 is efficiency, T2 is performance, T1 is Prime.
As an example, the Snapdragon 700-series had Prime, Gold, and Silver branding on its cores.
Framework wen?
I’m a huge Framework fan: preordered the 13 and Desktop, have done mainboard + LCD upgrades on personal and work machines, etc. Likewise, I’ve used ARM machines as general-purpose Linux workstations, starting with the PineBook Pro up to my current Radxa Orion. It seems like a great combo!
Unfortunately, firmware and OS support are hard for any vendor, especially one as small (compared to, say, Lenovo or HP) and fast-moving as Framework. Spreading that to yet another ISA and driver ecosystem seems like it would drag down quality and pace of updates on every other system, which IMHO would be a bad trade.
yes plz
Yum. If it had decent hardware security maybe we could get GrapheneOS on it
i wonder if intel and nvidia will catch up before they manage to deliver decent linux support...
Why can't I scroll on this page with the trackpad? Mouse scroll and arrow scroll both work fine.
I wonder how much intended audience for these chips cares about “elite” and “ultra-premium” buzz wording. I’m sure it’s a good chip but cmon, it’s not for TikTok watching..
Linux support is still basically non-existent for the first gen, and they made all this deal about supporting Linux and the open source community. This is to say, don't trust them
The truth is much more subtle than "nonexistent" IMO [1].
Clearly it's a priority, because ChromeOS/Android support is a big headline this year.
[1] https://discourse.ubuntu.com/t/ubuntu-24-10-concept-snapdrag...
Also worth noting that not all the bits needing support are inside of the Snapdragon, so specific vendor support from Dell, Lenovo etc is required.
My (admittedly cynical) interpretation is that they are dropping support for desktop Linux completely and shipping Android drivers instead.
That'd definitely fit the Qualcomm pattern of trying to force you to update by not upstreaming their Linux drivers.
This is one place where Windows has an advantage over Linux. Windows' long-term support for device drivers is generally really good. A driver written for Vista is likely to run on 11.
A stable driver ABI will do that. And a couple billion in revenue to fund bending over backwards to make sure stuff doesn't break.
I thought “Android drivers” were Linux drivers?
I think the situation is:
Old situation: "Android drivers" are technically Linux drivers in that they are drivers which are built for a specific, usually ancient, version of Linux with no effort to upstream, minimal effort to rebase against newer kernels, and such poor quality that there's a reason they're not upstreamed.
New situation: "Android drivers" are largely moved to userspace, which does have the benefit of allowing Google to give them a stable ABI so they might work against newer kernels with little to no porting effort. But now they're not really Linux drivers.
In neither case does it really help as much as you'd hope.
Old Android also had a bunch of weird kernel drivers that were not upstream; they mostly are now so Android kernel is converging on Linux finally.
Android drivers don't support Wayland etc.
They “supported Linux” by putting it in a virtual machine guarded by the hardware against the machine’s owner. No thank you.
Not surprising considering I haven't seen a programming manual or actual datasheet for these things in the first place. Usually helps if you tell the community how to interact with your hardware ..
That ended 10-20 years ago. The best you can hope for now is vendor-provided drivers.
Not even true: Arm, Intel, AMD, and most other hardware vendors (who are actively making an effort to support Linux on their parts) actually publish useful[^1] documentation.
edit: Also, not knocking the Qualcomm folks working on Linux here, just observing that the lack of hardware documentation doesn't exactly help reeling in contributors.
[^1]: Maybe in some cases not as useful as it could be when bringing up some OS on hardware, but certainly better than nothing
How's the WSL2 support on these Aarch64 Windows systems?
I'm not a huge fan of working in WSL, because I actively dislike the Windows GUI.
I have both Ubuntu and Docker Desktop set up in WSL2 on my X Elite laptop, they both work great, no issues (at least none that I have run into).
They expected linux devs to build it for free
In some cases the linux devs want to build it for free, but they still need enough information to work with
how much RAM can these support?
Supposedly 128 GB although I doubt vendors will ship that much.
the snapdragon x2 elite extreme (X2E-96-100) SoC supports "128GB+" but qualcomm hasn't specified what the max limit is. this soc also has higher memory bandwidth (228GB/s over 192-bit bus) than the x2 elite.
also see https://wccftech.com/snapdragon-x2-elite-extreme-die-package...
128GB is what they can ship using RAM chips available today, but the SoC supports more.
128GB can't be the current or future limit for a 192-bit bus. There's a missing factor of 3.
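The parent's factor-of-three point can be checked with quick arithmetic: a 192-bit bus is three 64-bit channels, so peak bandwidth is 24 bytes per transfer and natural capacities are multiples of three. A sketch (the 9.5 GT/s figure is back-calculated from 228 GB/s here, not a published spec, and non-power-of-two DRAM densities could complicate the capacity picture):

```python
def bandwidth_gb_s(bus_width_bits, transfer_rate_gt_s):
    """Peak bandwidth = bus width in bytes x transfers per second."""
    return bus_width_bits / 8 * transfer_rate_gt_s

def capacity_options_gb(channels=3, per_channel_sizes_gb=(16, 32, 64)):
    """192-bit bus = 3 x 64-bit channels; with power-of-two capacity per
    channel, totals come out as multiples of 3 (48, 96, 192), not 128."""
    return [channels * size for size in per_channel_sizes_gb]
```

`bandwidth_gb_s(192, 9.5)` gives 228.0, matching the spec sheet, and `capacity_options_gb()` gives [48, 96, 192]; hitting exactly 128 GB would need asymmetric channel population.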