Most SoCs on the market today have a mix of various CPU cores. It's common to see designs with a few big ARM Cortex-A cores running an OS like Linux or Android, and then some smaller Cortex-M microcontroller cores that do housekeeping things like security checks, power management, real-time features, peripheral management, etc.
If I were to guess, Qualcomm wants to replace its various Cortex-M cores with RISC-V equivalents. This saves them money on licensing, reduces their dependency on ARM, and doesn't break customer-facing compatibility. Ventana is probably more of an acquihire to get their design team.
"We will add your biological and technological distinctiveness to our own. Your culture will adapt to service us. Resistance is futile." -Qualcomm, probably
Ventana's cores were 15-instruction-wide, massively out-of-order designs that on paper compete with the application cores in Apple's M-series SoCs.
They're in a totally different gate-count niche from a Cortex-M equivalent.
Yea, to me this signals that Qualcomm is starting to hedge its ARM bets. Given all the kerfuffle they have already had with ARM over licensing, I suspect they are signaling to ARM that they have options, so ARM's leverage is a lot lower than it might be. That said, there are also huge switching costs for Qualcomm's customers, so this is not a move they take lightly. In the meantime, I'm sure those Ventana engineers can help them improve their ARM designs, too.
My guess is that this was mostly an acquihire. I had heard that Ventana had a lot of people who were laid off from Intel, for instance.
I would guess the same. That said, Android is adding support for RISC-V, so I could potentially see them looking into RISC-V Android phones.
Feels kind of unlikely though. Ventana probably ran out of money.
Maybe Ventana's software engineers can also help Qualcomm fix its BSPs.
I can dream, right?
Fully agree - Ventana's cores are more like Cortex A76 kinds of things, and are on a completely different scale from typical Cortex-M cores.
But switching to RISC-V would shut Qualcomm out of QNX and would limit its Android compatibility. And on the Qualcomm chips that I've seen so far, they're really bought in on both QNX and Android. That's why I think this is probably an acquihire more than a desire to ship Ventana's CPU cores.
> Ventana's cores are more like Cortex A76 kinds of things
More like Neoverse-V3: https://www.ventanamicro.com/technology/risc-v-cpu-ip/
BTW: "Silicon platforms launching in early 2026."
I wonder if this will be delayed due to the acquisition.
Doubtful. To have silicon in early 2026 would mean tapeout happened months ago.
Porting QNX would be very possible.
bad, bad, bad sign when a company starts to penny-pinch like that.
but unfortunately very in line with the thesis that Qualcomm is getting squeezed by a commoditizing market where the value-add opportunity is shifting outside of the SoC platform.
Could be good if a large firm stabilized RISC-V's version fragmentation by pushing a massive standard SoC product in the Android space.
But more likely, the early product line will meet the same fate as the dog in "Old Yeller" (1957) in a market consolidation push. =3
I'd be surprised if Qualcomm replaces their application processors (the cores that typically run Android/Linux or QNX) with RISC-V any time soon. AArch64's ecosystem is huge, and Qualcomm would cut their customers off from it by moving fully to RISC-V.
They're more likely to replace the smaller CPU cores imo.
Agreed, at $5/pc for an ARM64 7/8/9 SoC that can run a real OS, AArch64 is likely the minimum now for most designs. =3
It may be a while off yet, but it's pretty clear that companies, Qualcomm chief among them, are ready to replace Arm as soon as possible.
If it happens, Arm will have only themselves to blame. Suing your own customers is not the smartest move.
Qualcomm acquired Nuvia in order to bypass the licence fees charged by ARM, which I'd guess ARM first tried to block on good terms, and later on bad terms, without success, as we saw. It would make sense now that ARM is refusing to license them the newer designs.
Qualcomm may have only themselves to blame, as they now have to invest, quickly, in researching and developing an underdeveloped architecture while their competitors - including Chinese ones - take advantage of newer ARM designs (when perhaps they could have developed their own alternatives peacefully in the meantime).
Now they're getting counter-sued by Qualcomm because, it turns out, they allegedly violated their own TLA (the license for off-the-shelf cores) and their ALA (architecture license).
Qualcomm is claiming that Arm is refusing to license the v10 architecture to them and refused to license some other TLA cores, requiring them to get the Nuvia custom CPU team to build cores for those products instead.
This explains their expansion into RISC-V; it's a hedge against Arm interfering with QC's business.
It'll turn out OK. They'll just be acquired by Apple, who will continue putting out the most powerful CPUs on the market with AArch64 architecture.
Why does Qualcomm need this? They don't need to license RISC-V.
Is all the IP they acquired with Nuvia[1] tainted? Or were they just using ARM-derived internals?
From my understanding, just slapping on a different instruction decoder isn't a big technical hurdle. Actually, I wonder if it would be possible to design a chip with both an ARM and a RISC-V decoder on the same die and just fuse off the ARM decoder on select units to avoid any fees...
[1] https://en.wikipedia.org/wiki/Qualcomm#2015%E2%80%932024:_NX...
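To make the decoder idea concrete, here's a toy Python sketch of "two decoders, one shared backend". Everything in it is invented for illustration (real front-ends crack instructions into proprietary micro-ops, and the fuse would be an OTP or mask decision, not a constructor flag):

    # Toy model: two ISA-specific decoders feeding one shared backend.
    UOPS = {"add", "load", "store", "branch"}  # shared micro-op vocabulary

    def decode_arm(word: int) -> str:
        # Hypothetical AArch64-ish decoder keyed off a top opcode field.
        return {0x0: "add", 0x1: "load", 0x2: "store"}.get(word >> 28, "branch")

    def decode_riscv(word: int) -> str:
        # RISC-V really does key off the low 7 bits; 0x33/0x03/0x23 are the
        # RV32I OP/LOAD/STORE major opcodes, though the mapping is simplified.
        return {0x33: "add", 0x03: "load", 0x23: "store"}.get(word & 0x7F, "branch")

    class Core:
        def __init__(self, arm_fused_off: bool):
            # The "fuse" just selects which decoder feeds the shared backend.
            self.decode = decode_riscv if arm_fused_off else decode_arm

        def step(self, word: int) -> str:
            uop = self.decode(word)
            assert uop in UOPS  # the backend only sees the shared vocabulary
            return f"executed {uop}"

    print(Core(arm_fused_off=True).step(0x00000033))   # executed add (RV path)
    print(Core(arm_fused_off=False).step(0x10000000))  # executed load (ARM path)

The replies below get at why this is less free in real silicon than it looks in software: both paths still have to be designed, verified, and carried as die area.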
ARM cancelled their architecture license and sued them, Qualcomm won, but with a threat like that to your core business it's best to have an escape hatch.
They'll need to license future versions of the ARM ISA and now they know the licensor is hostile.
They are basically acquiring talent and/or preexisting IP. RISC-V is free, but implementations are the sole IP of the implementing company.
Implementing both ARM and RISC-V decoders might depend on the licensing fine print for each licensee.
This. SiFive's cores, for example, are proprietary designs based on the open-source RISC-V spec. Hazard3 [0], on the other hand, is an open-source core design.
[0] https://github.com/Wren6991/Hazard3
Another open-source core design is XiangShan: https://xiangshan.cc/en/
Another open-source one is VexRiscv: https://github.com/SpinalHDL/VexRiscv
Eating the competitor is one way to win. If you're scared of them, just buy them out.
Doesn't have to be fear, it can be simple greed, too. "Hey, look, .05% revenue boost, nomnomnom".
No big company would bother with an acquisition if the top result is 0.05% increase in revenue.
> They don't need to license RISC-V.
Correct. However, you need circuitry on silicon to implement said architecture, which is the expensive and time-consuming part.
There are a lot of little cores in phones doing little-core things. Having a first-rate design team experienced in a royalty-free ISA probably makes sense. They'll be able to expand the use of RISC-V up the value chain over time.
Buying a team that's already working on RISC-V also reduces the chances of ARM lawyers getting involved.
Why would you acquire a massive out-of-order, super-fast CPU team if you wanted a bunch of small cores? There are much cheaper teams and cores you could use for that.
Some modems and radios need more than reference-implementation performance, and everything in a phone benefits from power efficiency.
https://patents.justia.com/assignee/ventana-micro-systems-in...
RISC-V being freely available does not mean that implementations of it will not be patented from here to the Orion nebula and back.
> Actually, I wonder if it would be possible to design a chip with both an ARM and a RISC-V decoder on the same die and just fuse off the ARM decoder on select units to avoid any fees...
That's not quite what Raspberry Pi did with the RP2350 (the ARM and RV cores are wholly separate) but they did include the ability to fuse off one side or the other, so I wonder if they'll release a cheaper RV-only version at some point.
It's probably just for IP and talent acquisition, if I had to guess. People who can design high performance server-class CPU microarchitectures are rare.
Frankly, Ventana seemed like an interesting entry in the space, but I have no idea who would have actually bought their servers at the end of the day. They taped out multiple designs, but none actually seem to exist outside their labs. I don't really see any path to meaningful RISC-V server adoption for at least several more years and by that time Qualcomm could design something on their own, assuming they are serious about re-entering the market. Grabbing the talent and any useful IP/core design components makes the most sense to me, anyway.
QC likely use a lot of Arm IP, Nuvia notwithstanding, and want a way out of the general Arm monopoly. Seems to be a growing trend.
A dual-ISA decoder with fuse-off options will likely have unwelcome power-performance-area and yield consequences.
Fused off silicon consumes power? I assumed it just went dark.
You’re right. But consider that in order to be useful when not fused off, the design would need to have a bunch of additional logic (interconnect ports, power control machinery etc) at the periphery of the to-eventually-be-fused-off area that would likely remain even when things were fused off. That may impact power.
Apart from that there’s the other usual angles: The very fact that there’s additional logic in the compute path (eventually fused off) means additional design and verification complexity. The additional area, although dark, eats into the silicon yield at the fab.
Not saying it’s not possible.
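To put a rough number on the yield point, here's a minimal sketch using the classic Poisson defect model, yield = exp(-area x defect density). The numbers are invented, and it pessimistically treats every defect in the dark area as fatal:

    from math import exp

    # Poisson yield model: yield = exp(-die_area * defect_density).
    # All numbers are illustrative, not from any real process.
    defect_density = 0.1   # fatal defects per cm^2 (assumed)
    base_area = 1.00       # cm^2, baseline die (assumed)
    extra_area = 0.05      # cm^2 of dark, fused-off decoder logic (assumed)

    baseline = exp(-base_area * defect_density)
    dual_isa = exp(-(base_area + extra_area) * defect_density)

    print(f"baseline yield: {baseline:.1%}")  # ~90.5%
    print(f"dual-ISA yield: {dual_isa:.1%}")  # ~90.0%

Half a point of yield sounds small until it's multiplied across every wafer, and the larger die also means fewer die candidates per wafer in the first place.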
Acquihire and hedging bets.
I wonder why SiFive wasn't the acquisition target
SiFive have apparently been shopping themselves around for a while. But they've been around for a long time, taken loads of investment, had a huge number of employees at one point (not now), and don't have very competitive products. My speculation is they're just not a very attractive acquisition with a complex ownership structure, and are demanding too much money to compensate their earlier investors.
A perfect target for Intel then, followed by a rapid exodus of the employees and destruction of the IP (like every other Intel acquisition).
They almost got bought by Intel, but then even Intel noped out.
https://www.tomshardware.com/news/intel-failed-to-buy-sifive
Does anyone know or have they leaked potential cost of acquisition?
The $2B deal with Intel fell through, though they were arguably worth more on paper then. My guess is that they're in a weird place where a fair offer at the moment is less than the investment they've gotten so far.
Note that the $2 billion deal story was always "according to people with knowledge of the matter", and I wonder if it was nothing more than Intel taking a peek at SiFive's technology and books.
https://archive.is/FVMLI#selection-3331.81-3331.129
Might be worth more than Qualcomm is willing to spend and/or introduce antitrust concerns. This feels like a hedging of bets, no need for Qualcomm to buy the biggest name in the RISC-V space.
SiFive have had a very long time to create competitive CPUs and they haven't really managed it. I dunno what's going on there but I'm not sure I'd buy them either.
Their P870-D looks plenty competitive.
What they might have issues with is finding clients to license it to.
Is this just Qualcomm buying itself another vote on the RISC-V foundation board?
No, it's one company == one vote. There's a similar situation with IBM & Red Hat. Since IBM owns Red Hat, Red Hatters (like myself) may participate in meetings but where individuals from both companies are present "there can be only one."
2025 and counting. Apple launched the M1 in 2020. I am an Apple user but not a fanboy, yet every day I wonder about the unique magic in Apple, because even established competitors with virtually infinite money and incredible processes can't catch up. Another incredible aspect is Apple's early addition of an NPU in a SoC.
I would love to resurrect my XPS 13s with a durable battery, running Linux without triggering the fan. The same for my Lenovo Xs.
In my imagination I am waiting for the billionaire geeks to do their part for fun (e.g. energy management in Linux).
> Apple launched the M1 in 2020.
which means the M1 was being worked on since at least 2018. I'd bet much earlier than that, and certainly much earlier if you count silicon which never left the lab.
Reminder: iPhones have run on Apple silicon since 2010, which means they had to be working on it since at least 2008. They have a lot of experience in silicon design by now.
My point holds even if they started earlier: companies such as Samsung have their own chips and could also put notebooks on the market.
Why would Samsung do that? They have no sweetheart ARM licensing deal, they make more money selling their fab space to other customers.
Softbank could extend more generous architectural licenses to these businesses if they wanted to stimulate ARM PC sales. But they don't, so now we're here.
Qualcomm has had DSPs in its chips for a long time, providing a lot of NPU-like functionality before the term NPU had been coined. What Qualcomm currently calls its NPUs are just Hexagon DSP cores with specific instructions and abilities for matrix math and common inferencing datatypes.
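For a sense of what "matrix math and common inferencing datatypes" boils down to, here's a minimal NumPy sketch of the primitive such units accelerate: an int8 matmul accumulated in int32, then requantized. Illustrative only, not Hexagon's actual instruction semantics:

    import numpy as np

    # The workhorse inference op: low-precision matmul with wide accumulation.
    # int8 inputs keep memory traffic low; int32 accumulators avoid overflow.
    rng = np.random.default_rng(0)
    activations = rng.integers(-128, 128, size=(64, 256), dtype=np.int8)
    weights = rng.integers(-128, 128, size=(256, 32), dtype=np.int8)

    acc = activations.astype(np.int32) @ weights.astype(np.int32)

    # Requantize back to int8 for the next layer (scale chosen arbitrarily).
    out = np.clip(np.round(acc * 0.001), -128, 127).astype(np.int8)
    print(out.shape, out.dtype)  # (64, 32) int8

The value of a DSP or NPU is doing thousands of those multiply-accumulates per cycle at phone power budgets, which a general-purpose core can't match.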
The original Apple M1's performance per Watt and physical battery size may have been special when it first came out, but nowadays there's nothing special about its hardware specs relative to a modern x86 laptop.
The difference you perceive is mostly software. Windows and Linux are really just designed for desktop machines first and foremost. macOS was too, but when they transitioned to Apple Silicon, they replaced a lot of the internals with stuff taken from iOS, and iOS is designed with battery life first and foremost.
Getting that level of battery life out of non-Apple laptops is just going to be a long, hard slog of going through the operating systems and auditing *everything*, every design decision, for how it affects battery life and how many resources it's using.
Interesting, I thought Apple Silicon was still ahead on raw numbers. Would you mind pointing me at any resources to learn more?
Is that still true when you consider whole-system power consumption vs performance? I was under the impression that Apple's RAM and storage solutions give them a small edge here (at the cost of upgradability / repairability).
Apple Silicon has a lead in performance per watt over the competition (not a gigantic one, but a real one nonetheless), but we were talking about the M1, which is 5 years old now and has no appreciable hardware advantages compared to an AMD or Intel laptop made in the last few years.
The reason an old M1 laptop gets better battery life is almost entirely a software difference.
"raw numbers" always means a lot of things. Apple's CPU benchmarks are neck-and-neck in multicore and usually top-of-class in single-core performance compared to other desktop chips. x86 will draw more power when idling and during bursty workloads, but is typically more efficient during sustained SIMD-style workloads.
If you want an example of where Apple's design chops are pretty weak, look at their GPUs: https://browser.geekbench.com/opencl-benchmarks
The M3 Ultra is putting up some of the saddest OpenCL benches I've ever seen from a 200-300 W GPU. The entry-level RTX 5060 Ti runs circles around it with a $400 MSRP and 180 W TDP. I truly feel bad for anyone that bought a Mac Studio for AI inference.
> Another incredible aspect is the early addition of an NPU by Apple in a SoC.
I'm going to go out on a limb and guess that you've not used CUDA yet. NPUs are a lot of things, but "incredible" is the last word an engineer would use to describe them these days.
Incredible means they follow a SoC approach where the RAM is shared between CPU, GPU, and NPU instead of separate, as with a typical discrete GPU such as Nvidia's.
I consider the Tegra chip several times more incredible. What's so special about Apple's architecture to you?
Tegra was interesting for its time but saying it’s “several times more incredible” than Apple’s architecture is just opinion. Apple builds custom high-performance CPU/GPU designs with industry-leading perf-per-watt and tight OS integration. Tegra and Apple SoCs were built for very different goals, so the comparison only makes sense with concrete metrics, not broad claims.
So much for the $1.4B spent on NUVIA.
https://www.allaboutcircuits.com/news/startup-key-apple-goog...
I imagine this is mostly an acquihire to bolster the same teams that the Nuvia acquisition did.
What are you talking about? Qualcomm has been shipping cores from the Nuvia team it acquired for 2 years now.