RISC-V is currently slow compared to modern CPUs

(benhouston3d.com)

110 points | by bhouston a day ago

122 comments

  • camel-cdr a day ago

    One thing to keep in mind when looking at Geekbench is that it lacks Vector/SIMD optimizations for RISC-V. Most current RISC-V processors lack RVV support as well, but it still makes the comparison a bit meaningless. Geekbench also probably doesn't enable extensions beyond rv64gc, so no bitmanip extensions (the kind you'd get from building with something like -march=rv64gc_zba_zbb).

    Let's take two very similar cores, the SiFive P550 @1.4GHz and a Cortex-A72 @1.5GHz: https://browser.geekbench.com/v6/cpu/compare/123?baseline=74...

    Notice that the A72 is a lot faster in some of the benchmarks. The manual suggests that all of them are, to varying degrees, SIMD-optimized for Arm Neon: https://www.geekbench.com/doc/geekbench6-benchmark-internals...

    The most ISA-agnostic benchmark to compare is the clang one, as it just compiles clang, which doesn't benefit much from dedicated instructions that aren't enabled for RISC-V. Notice how the P550 outperforms the A72 there.

    ---

    That compared un-optimized code; now let's compare hand-optimized ARM Neon against hand-optimized RISC-V RVV:

    This time I'm comparing a Cortex-A55 with the SpacemiT X60:

    * Cortex-A55: in-order, 1.8GHz, dual-issue Neon (two 128-bit execution units)

    * SpacemiT X60: in-order, 1.6GHz. Has RVV with 256-bit vectors, but two 128-bit execution units with a weird layout:

    only EX1: shift, bitwise, compare, mask-ops, merge, gather, compress

    EX1&EX2: int/float arithmetic, including mul&div

    The cores should be quite comparable, although the A55 is in a much better SoC (my phone) than the X60, so it should be slightly faster, all else being equal.

    The hand-optimized code is from simdutf and the gnuradio kernel library, both of which I ported to RVV:

                      gnuradio kernels
                    A55 Neon vs X60 RVV
        i16c mul:        674 vs  634 ms
        i16c dot:        361 vs  415 ms
        f32c conj dot:  1043 vs  900 ms
        rotator2:        763 vs  918 ms
        min index u32:   404 vs  156 ms
        f32 interleave:  308 vs  742 ms
        f32 log2:       1208 vs  789 ms
        f32 sin:        2155 vs 2152 ms
        f32 tan:        2962 vs 3152 ms
        f32 stddev&mean: 609 vs  277 ms
        f32 poly sum:    667 vs  266 ms
        u8 conv k2:    10413 vs 7523 ms
    
        simdutf:               utf8 to utf16         utf16 to utf8
                           Neon A55 vs RVV X60    Neon A55 vs RVV X60
        arabic.utf8.txt        0.25 vs 0.34 b/c       0.66 vs 0.95 b/c
        chinese.utf8.txt       0.23 vs 0.28 b/c       0.51 vs 0.56 b/c
        czech.utf8.txt         0.21 vs 0.35 b/c       0.67 vs 0.90 b/c
        english.utf8.txt       0.81 vs 0.85 b/c       1.14 vs 1.63 b/c
        esperanto.utf8.txt     0.34 vs 0.45 b/c       0.91 vs 1.12 b/c
        french.utf8.txt        0.28 vs 0.35 b/c       0.86 vs 0.98 b/c
        german.utf8.txt        0.35 vs 0.41 b/c       0.94 vs 1.10 b/c
        greek.utf8.txt         0.24 vs 0.37 b/c       0.64 vs 1.03 b/c
        hebrew.utf8.txt        0.21 vs 0.33 b/c       0.58 vs 0.92 b/c
        hindi.utf8.txt         0.24 vs 0.29 b/c       0.48 vs 0.57 b/c
        japanese.utf8.txt      0.23 vs 0.29 b/c       0.49 vs 0.57 b/c
        korean.utf8.txt        0.21 vs 0.28 b/c       0.47 vs 0.55 b/c
        persan.utf8.txt        0.23 vs 0.33 b/c       0.60 vs 0.80 b/c
        portuguese.utf8.txt    0.29 vs 0.36 b/c       0.88 vs 1.02 b/c
        russian.utf8.txt       0.23 vs 0.31 b/c       0.60 vs 0.84 b/c
        thai.utf8.txt          0.29 vs 0.30 b/c       0.52 vs 0.60 b/c
        turkish.utf8.txt       0.25 vs 0.38 b/c       0.73 vs 0.99 b/c
        vietnamese.utf8.txt    0.18 vs 0.28 b/c       0.46 vs 0.54 b/c
        Note: b/c is bytes/cycle, so bigger is better
    
    As you can see, the performance is very competitive between processors of a similar design point.
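
    For anyone curious what the hand-optimized RVV side looks like, here is a minimal sketch of the strip-mining pattern these kernels share, written against the ratified v1.0 C intrinsics (so it needs a recent GCC or Clang). It's an illustrative element-wise f32 multiply, not the actual simdutf/gnuradio port:

        #include <stddef.h>
        #include <riscv_vector.h>

        /* vsetvl asks the hardware how many elements fit this iteration,
           so the same binary runs on the X60's 256-bit vectors or any
           other VLEN without recompiling. */
        void mul_f32(float *dst, const float *a, const float *b, size_t n) {
            while (n > 0) {
                size_t vl = __riscv_vsetvl_e32m8(n);
                vfloat32m8_t va = __riscv_vle32_v_f32m8(a, vl);
                vfloat32m8_t vb = __riscv_vle32_v_f32m8(b, vl);
                __riscv_vse32_v_f32m8(dst, __riscv_vfmul_vv_f32m8(va, vb, vl), vl);
                a += vl; b += vl; dst += vl; n -= vl;
            }
        }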
  • emmet a day ago

    > RISC-V is 25x slower than a top of the line Apple M-series chip

    I don't think anyone has put the kind of money into a RISC-V processor that Apple has in order to develop the 3nm M4.

    I was going to say it isn't an apples to apples comparison but I will restrain myself.

    • sgerenser a day ago

      True, but who is going to put in the money? It's not a foregone conclusion that RISC-V will get enough investment to ever be competitive with state-of-the-art Arm or x86 chips.

      • AnthonyMouse a day ago

        This question is sort of like, how is Linux ever going to be competitive with state-of-the-art proprietary Unix?

        Suppose Facebook are tired of paying a premium to Cisco et al and decide to commission their own network equipment. That stuff doesn't have to be competitive with x86 on single thread performance, it just has to be reasonably power efficient. So they take some existing free RISC-V core and make a few improvements to it and use that. But they publish the improvements, because they're not actually trying to be a hardware OEM and if someone else takes their design and does the same thing, they know they get those improvements for their next generation.

        So then that happens. Google want the same thing and make more improvements. Netgear use it in a consumer router, and they're not big enough to improve the chip, but they ship it in a product that sells a million units, so widespread use causes the community to optimize software for it and fix bugs. At this point Samsung or Qualcomm realize they only have to improve the SIMD support a little and they can stop paying ARM for their low and mid range phone SoCs. But if half of Android devices are now RISC-V and Qualcomm are already designing the high end cores themselves, why pay ARM for that either? So now it's in the high end phones, and someone starts putting the same chip into laptops.

        All it really takes is for enough people to not want to pay ARM to create an ecosystem that allows everybody else to do the same thing. The free designs eat the low end of the market and then the high end uses the same architecture because why wouldn't it?

      • bee_rider a day ago

        I’d imagine anyone who doesn’t have Apple’s super-duper special ARM license from the ’90s (or whenever) will be better served by RISC-V in the long run, right? Why deal with license issues?

        I don’t know if it will happen, but it would be extremely funny if Intel cut off Arm and went with RISC-V. (False reports of the death of x86 have been around for decades, but it is bound to happen eventually, right?)

        • 20 hours ago
          [deleted]
      • emmet a day ago

        Absolutely fair point. I'm only pointing out that it hasn't yet been proven to be a limitation of the architecture.

        You can't write off the first car for only doing 15 km/h when your horse can do 40 km/h.

      • o_m a day ago

        Right now it looks like China will be the one dominating RISC-V.

        • brucehoult 18 hours ago

          Right now they are the most enthusiastic and putting in a lot of work, yes.

          That's other people being short-sighted, not China doing anything wrong or sinister.

          There are in fact quite a lot of exciting non-Chinese developments being announced recently, including at the RISC-V Summit that is on now, but those will take several years to make their way into the market.

      • gjsman-1000 a day ago

        Well, the good news for RISC-V (I say this with half honesty, half sarcasm) is that most of the RISC-V investment is happening in America and China. Their access to venture capital, talented engineers, and a decent economy makes the UK (where ARM's fighting from) look like Mississippi backwaters. ARM is disadvantaged against RISC-V geographically, economically, and politically; and judging by their scare tactics a few years ago, I think they know it. Perfect conditions for a possible quick erosion of their technological lead.

        • throwway120385 a day ago

          Yeah, I came here to say something similar. This is ARM's game to lose, and they need to remember that their architecture was once in the same situation RISC-V is in now. The only thing stopping RISC-V right now is that everyone is focused on ARM. If ARM gives its IP users a reason to switch, by raising IP costs or making bad architecture decisions, RISC-V will take advantage of that and make inroads. History is full of incumbents that lose their entire market when they get complacent.

        • talldayo a day ago

          There's also the fact that ARM has a total of like 2 architecture licensees, and everyone else has to use piss-slow Cortex designs. If there were competition between ARM cores it would be a more interesting story, but right now the ISA has taken a backseat while OEMs fight over TSMC access.

          • wmf a day ago

            There are more like a dozen architectural licenses, but they're mostly used for server chips that were canceled. The Cortex X925 is getting close to Apple/Nuvia, BTW.

          • gjsman-1000 a day ago

            > has a total of like 2 architecture licensees

            And that probably only happened because Apple co-founded ARM.

            • brucehoult 18 hours ago

              That was before the breakup of the USSR, let it go.

        • alephnerd a day ago

          > makes the UK (where ARM's fighting from) look like Mississippi backwaters

          Most of ARM's design work is done in the US (Austin), India (Bangalore, Noida), and China (Beijing), though ARM China should basically be treated as a separate company at this point due to corporate shenanigans.

          That said, in the chip design space (which tends to be concentrated in the US, Israel, India, and China), RISC-V has become much more popular for commodity embedded use cases because its less restrictive licensing means better profit margins, which is allowing fabless chip startups to potentially leap ahead of ARM.

        • panick21_ a day ago

          The European supercomputer project is well funded, and they are investing quite a bit. Large European industrials are also getting into RISC-V because they are building things like trains that they will have to maintain for 50+ years.

    • amelius a day ago

      > it isn't an apples to apples comparison

      It's not about how great the teams behind these CPUs are.

      It's about how great the CPUs are.

      • AnimalMuppet a day ago

        Fine, but the CPU with more money put into its architecture often winds up being the better CPU.

      • exe34 a day ago

        right now, because one of them had a lot of investment and the other less so.

    • perihelions a day ago

      - "I was going to say it isn't an apples to apples comparison but I will restrain myself"

      That's the ignoble rhetorical device of applephasis

  • VyseofArcadia a day ago

    > ARM began in low-power embedded systems, initially facing similar performance limitations.

    No it didn't. ARM began on Acorn desktop computers, the Acorn Archimedes. ARM originally stood for Acorn RISC Machine.

    • bhouston a day ago

      Author here: I was intending to refer to where ARM first got traction in the market.

      • NikkiA a day ago

        Even there, they first gained traction in set-top boxes (STBs), which weren't particularly 'low power', although they were embedded.

    • f1shy a day ago

      I was once at a conference where one of the first developers of ARM told the history of the company. And I distinctly remember him saying: "at the beginning it was all about money, every little transistor was important; every transistor less was a cheaper system. The same savings in money were later the key to being able to do low-power processors, because it is basically the same optimization."

      So sorry, but I have to partly disagree, they did.

      Edit: Interesting... sharing what the designer of the first ARM core said leads to downvoting on HN... HN is a very different place than it used to be...

      • VyseofArcadia a day ago

        This is true, but it's a coincidence. He didn't say they started out with the goal of a low-power processor. He's saying that they optimized for low cost, and luckily that led to later success in low power.

        • f1shy 20 hours ago

          He clearly said that power soon became more of a priority than money.

      • undersuit a day ago

        This is RISC versus CISC again. ARM1's 25k transistors beat the CISCy Intel 80286's 134k transistors. Transistors had a cost at micrometer lithography sizes.

  • camel-cdr a day ago

    Meanwhile, right now I can git clone an open-source high-performance out-of-order RISC-V implementation, simulate the RTL, and have it outperform my current desktop CPU (Zen 1) on a per-cycle basis:

    https://camel-cdr.github.io/rvv-bench-results/articles/xperm... (scroll to bottom and compare scalar performance between Ryzen 1600x and XiangShanV3)

    You may notice that while scalar performance is faster, vector performance is slower; this is because their vector implementation is still quite new, and they are still missing a few optimizations.

    XiangShan repo: https://github.com/OpenXiangShan/XiangShan

    More microarchitectural details: https://www.servethehome.com/xiangshan-high-performance-risc...

    BTW, XiangShanV2 has already been taped out and will be available in a laptop in the future: https://milkv.io/ruyibook

    • adrian_b a day ago

      If what you say about the comparison with Zen 1 is correct, you should keep in mind that Zen 1 (of which I also still have one) has an IPC equivalent to that of Intel Broadwell, a CPU launched in 2014, i.e. 10 years ago.

      (In the following years AMD reduced and then eliminated their initial 3-year handicap, with Zen 2 matching Skylake in IPC, and from Zen 3 until now they have always matched the IPC of the best contemporaneous Intel cores.)

      So even such an unusually fast RISC-V core has 10 years of handicap to make up before it can match modern CPU cores, like the Apple and Qualcomm Arm cores or the current AMD and Intel cores.

      Moreover, unless your RTL simulation includes the cache hierarchy and slow DRAM, the simulated IPC will be far too optimistic. In any real CPU the IPC is reduced severalfold from its ideal value by cache misses that stall the CPU until data is loaded from main memory.

      • phkahler a day ago

        >> If what you say about the comparison with Zen 1 is correct, you should keep in mind that Zen 1 (of which I also still have one) has an IPC equivalent to that of Intel Broadwell, a CPU launched in 2014, i.e. 10 years ago.

        So I just upgraded my Zen1+ processor (2400G) to a Zen3 (5700G), which doubled the core count and upped single-core performance by about 50 percent. My favorite benchmark runs 3x as fast. Now Zen5, AFAICT, is no more than 2x the performance of Zen3. So per-core, state-of-the-art AMD is maybe 3x Zen1 (1.5x from Zen1+ to Zen3, times 2x from Zen3 to Zen5).

        In the last 10+ years, the only significant jump due to an advanced node was the switch to EUV lithography (TSMC 7nm), and that jump was included between Zen1 and Zen3. All the other node advancements seem to give 10-15 percent more performance, with new CPUs getting a modest IPC increase on top of that.

        If there really is a RISC-V chip with Zen1 performance, that's quite good and I'd be happy to have it. I'm not sure what node it's on either, so there's probably room to "buy" more performance.

      • brucehoult 18 hours ago

        > even such an unusually fast RISC-V core has 10 years of handicap to make up before it can match modern CPU cores

        Yes.

        But why do you say this as if it's news, or even bad news?

        Two years ago the best RISC-V core in the market had 30 years of handicap to x86/PowerPC in IPC.

        Catching up by 20 years in 2 years is pretty impressive, don't you think?

        Parity is coming well before 2030, even allowing for x86 advances in that time.

  • binary132 a day ago

    IMHO the comments and TFA are discussing the wrong question. It’s not really about what the best, fastest, or most power-efficient CPU or ISA is. I don’t know about anyone else, but I fully grasp that RISC-V is slow. In the context of RISC-V, I am entirely and only interested in whether a free ISA can be developed that is adequate for developing and operating software, for the sake of decentralizing and unencumbering the long-term future of free computing. RISC-V seems possibly promising for that purpose. I would like to see the development of even modest graphics coprocessors using it.

    • amelius a day ago

      Yes, but this time let's not call them graphics coprocessors if we're going to use them for something else 99% of the time.

      • tmtvl a day ago

        If they're gonna be doing processing of arrays of numbers we could call them Group Processing Units to keep the GPU shorthand.

      • binary132 a day ago

        but I don't want LLMs

        I just want some graphics

        is that really so much to ask

    • freedomben a day ago

      That is my interest in RISC-V as well. However, experience watching adoption has shown me that the vast majority of consumers couldn't care less about the ideals that we care about. They will go with whatever is cheapest, fastest, or both. If we want RISC-V to win, which I very much do, then it will need to be at least competitive, if not superlative.

      • giantrobot a day ago

        > If we want risc v to win, which I very much do, then it will need to be at least competitive, if not superlative.

        The whole RISC-V "winning" meme is so weird. What do you gain if RISC-V "wins"?

        Even with an open source CPU core you're not getting away from binary blobs. Any wireless baseband is going to be pretty much a sealed system for regulatory reasons. Manufacturers will still lock down firmwares. Attestation chains will still be required for security. HDCP won't magically open up.

        High performance is largely divorced from the ISA and more related to the low level chip design and process node/chemistry. If RISC-V were to "win", the decent chips will still be manufactured by TSMC. Existing chip designers won't commit mass seppuku, they'll just start working on RISC-V designs. Compiler toolchains will just target RISC-V.

        So to you, what do you expect to change with RISC-V? Unless you've shorted ARM and just want them to go out of business, there's no magic upside for you as an end user or even as a device designer. Maybe your next phone has a RISC-V cellular baseband? You're still not going to be able to tweak the EIRP of the radio any more than you can today. It isn't ARM at the core of the baseband that controls that; it's the regulatory licensing.

        • binary132 a day ago

          That sounds an awful lot like a bunch of problems that need the same solution, rather than a bunch of reasons we shouldn't try to solve one of them, if not the most significant of them.

          • giantrobot 21 hours ago

            > That sounds an awful lot like a bunch of problems that need the same solution, rather than a bunch of reasons we shouldn't try to solve one of them, if not the most significant of them

            What problem or problems do you think exist? Which of those problems do you think RISC-V somehow solves? An ARM laptop can run some code you write; a RISC-V laptop can as well. The difference is immaterial unless you really love writing RISC-V assembly.

            RISC-V doesn't change physics so it doesn't obviate radio emission regulation. RISC-V doesn't change licensing to industry SIGs for protocol compliance badging. RISC-V doesn't change security postures so it's going to still use a signed bootloader to make enterprise sales. A decent performing RISC-V chip still requires a factory costing billions of dollars so you're not going to be manufacturing your own.

            • binary132 19 hours ago

              You say all that, I just see “it’s fine if only one or two companies own all of the IP and manufacturing capabilities required for the whole world’s computing infrastructure”

              • giantrobot 18 hours ago

                The issue of fabs is completely orthogonal to the instruction set or core designs of a chip. If RISC-V completely took over microcontrollers tomorrow and every new dishwasher, hard drive, and teledildonic device was RISC-V powered it would do nothing to change your life. You won't get cleaner dishes because a microcontroller was executing RISC-V instructions instead of ARM.

                The open (or closed) nature of a CPU core isn't really changing the dynamics of electronics in general or computing specifically.

      • a day ago
        [deleted]
    • brucehoult 18 hours ago

      > I fully grasp that RISC-V is slow

      RISC-V is not slow.

      The RISC-V chips currently available in off-the-shelf hardware, with CPU cores released in 2018/2019, at the same time the original specs were formally frozen (ratified), are slow.

      Big money started to be invested into RISC-V designs in 2021 and 2022. The results of that will be seen in hardware in the market in 2026 or 2027 or so.

  • mrpippy a day ago

    > Through years of architectural improvements and ecosystem development, ARM gradually expanded into mobile devices, then servers, and now even high-performance desktop systems

    Also, a complete re-boot of the ISA with AArch64.

    This is a mostly-uninformed theory, but I'd love to hear thoughts on it: AArch64 was a substantial break from AArch32, with lots of design changes intended to ease superscalar OoO implementations. Conditional execution mostly gone, Thumb gone, PC no longer a GPR, etc. Clearly, AArch64 has excelled for this. There are even rumors that Apple basically commissioned AArch64 for the types of cores they wanted to build: https://news.ycombinator.com/item?id=31368489

    RISC-V is quite similar to MIPS, an ISA which hasn't had a high-performance leading-edge implementation in 20+ years (dating back to SGI's last parts). Will this heritage make it harder to build high-performance OoO implementations? Does RISC-V need an AArch64-style reboot? Maybe this can be mostly done through extensions?

    • bhouston a day ago

      That is a really good question. I wish we had Jim Kelly here to answer that. :)

      • tromp a day ago

        Or Jim Keller even:-)

    • panick21_ a day ago

      The people who did RISC-V knew about AArch64 and designed it with high performance in mind: almost no conditional execution, a compressed mode that doesn't cause issues and actually improves performance, no branch delay slots, no condition codes, no register windows, and so on.

      They have a whole book where they go through each instruction and explain why they added it.

      It's not optimized solely for high performance, but that was certainly a major factor.

      The architecture has been evaluated by people like Dave Ditzel and Jim Keller. The lead designer at Jim Keller's company worked on the M1. And they all seem to think that it's a good design.

  • arp242 a day ago

    Another aspect is software optimisation. For example, the last time I looked at ffmpeg there was far more hand-crafted x86 assembly than hand-crafted ARM assembly (it's been a while, so I'm not sure what the current situation is).

    In general, ARM optimisation has caught up quite a bit, although it still lags behind in some places. RISC-V still has some way to go. For example, for Go:

      [/usr/lib/go]% (for f in **/*.s; print ${${(s:_:)f:t}[-1]}) | sort | uniq -c | sort -hr | head -n10
         86 amd64.s
         72 arm64.s
         63 s390x.s
         56 s
         48 arm.s
         47 386.s
         35 riscv64.s
         32 ppc64x.s
         22 loong64.s
         20 mips64x.s
    
      [/usr/lib/go]% wc -l **/*amd64.s | tail -n1
       28801 total
    
      [/usr/lib/go]% wc -l **/*arm64.s | tail -n1
       21956 total
    
      [/usr/lib/go]% wc -l **/*riscv64.s | tail -n1
        7804 total
    
    Rough measurement of course, but at least some code paths on RISC-V will be slower simply because they're not optimised (yet).

  • johnklos a day ago

    Geekbench really should not be used to compare CPUs like this. It seems to be geared towards comparing mainstream machines and devices with other mainstream machines and devices.

    There are many, many applications where performance isn't as important as cost and/or power draw. For instance, these days it can be cheaper to run an additional microcontroller close to where it's needed than to fabricate a wiring harness to bring sensor data all the way to a centralized location. RISC-V excels here.

    It's a mature enough ecosystem that people can compile whatever software they want and run it without fuss on RISC-V. Nobody is going to buy a RISC-V laptop with a full GUI and be disappointed when it doesn't perform like a MacBook.

    So if I can buy, for a handful of dollars, a small SBC with two ethernet ports and a few RISC-V cores that draws few enough watts to count on a single hand, and I can download and compile most software on it to make a hardware VPN device, that interests me. Processors like Intel's N200 have their uses and are definitely more performant than RISC-V, but they take way too much power and therefore generate way too much heat. So why would I even bother comparing? They're in different leagues.

  • kibwen a day ago

    > Those that say that RISC-V is a viable replacement for x86 or ARM in the near term are kidding themselves.

    I strongly disagree with how this frames the conversation. For most applications our desktop machines could be 10x faster if application programmers were incentivized to care about performance. I'd happily take a 2.5x slowdown for cheaper, simpler, non-proprietary hardware. And that leads into the next point:

    > RISC-V implementations in the wild lack advanced features that modern CPUs rely on for speed, including sophisticated pipelining mechanisms, out-of-order execution capabilities, advanced branch prediction, and multi-tiered cache hierarchies. Most commercial RISC-V chips remain in-order processors, meaning they execute instructions sequentially rather than optimizing their order for performance.

    Get your pitchforks out, because I consider this a feature. Spectre should have been a wakeup call that these performance optimizations are incompatible with secure computing. "Look how much faster our new minivan careens off the nearest cliff and explodes in midair!" is not the selling point people seem to think it is. I'm eagerly awaiting a RISC-V mainboard for the Framework for this reason. If I want performance, I'll use a burner PC. If I want security, I want a CPU design where it's actually tractable to make it secure.

    • stackskipton a day ago

      >I strongly disagree with how this frames the conversation. For most applications our desktop machines could be 10x faster if application programmers were incentivized to care about performance.

      But they are not going to be. Google and Slack are going to chomp RAM and burn my CPU cycles, and my desire to use RISC-V isn't going to change their behavior. If you want people to use your hardware, you have to meet them where they are.

    • rangestransform a day ago

      You're in the excruciatingly tiny minority of users who care about Spectre; most non-enterprise customers aren't significant enough targets for a Spectre-type attack vs. less sophisticated attacks like phishing.

    • gjsman-1000 a day ago

      > non-proprietary hardware

      RISC-V does not guarantee that the CPU core designs are open-source, that the chip designs are open-source, that your computer doesn't use Secure Boot, that your computer doesn't have proprietary drivers, or that your computer doesn't use DRM at the hardware level.

      It's only an instruction set. A good first step, but only that. Don't get people's hopes up.

  • jl6 a day ago

    Is it more that current RISC-V implementations are slow, rather than the ISA being inherently slow?

    • voxadam a day ago

      The article addresses this question:

      "RISC-V implementations in the wild lack advanced features that modern CPUs rely on for speed, including sophisticated pipelining mechanisms, out-of-order execution capabilities, advanced branch prediction, and multi-tiered cache hierarchies. Most commercial RISC-V chips remain in-order processors, meaning they execute instructions sequentially rather than optimizing their order for performance. This architectural simplicity creates a fundamental performance ceiling that's difficult to overcome without significant architectural changes."

      • rwmj a day ago

        This ignores two major server vendors, Rivos and Ventana (three if you include Qualcomm), who do have all those features.

        • arp242 a day ago

          As I understand it, Rivos hasn't actually shipped anything; from https://www.rivosinc.com/technology:

          "Rivos has not yet revealed details of its products or technology publicly."

          Great that they're working on that, but when discussing the _current_ state of the RISC-V ecosystem, I think they can be safely ignored until such a time that they actually start shipping stuff.

          It seems Ventana has shipped the Veyron V1, but I'm having a hard time finding concrete information on it in a quick search (other than from Ventana themselves), so I'm not entirely sure what its status is. Their V2 chip is planned for 2025.

        • Rochus a day ago

          Are they included in https://browser.geekbench.com/search?q=RISC-V? I didn't get any hits when searching e.g. for Rivos.

        • bhouston a day ago

          I am excited to see innovation in this area. I wasn't saying that there won't be improvements or more innovation, just that the currently released chips are not performant.

      • bee_rider a day ago

        That’s an odd match for the title. Who is thinking about RISC-V in 2024 without knowing that those features are typically missing? It is early days.

      • jauntywundrkind a day ago

        But these aren't Instruction Set Architecture (ISA) details.

        As mentioned in the article, Berkeley's SonicBOOM is out-of-order. And you could certainly enhance the memory architecture with multi-level caches easily; the ISA is blind to this (consider how little the x86 ISA has changed to tackle ever-improving cache/memory strategies since the i386).

        RISC-V is also extensible, so you can keep improving things! The incredibly awesome efficiency-focused PULP group is working on Occamy, a massive many-core research chip. It has huge vector processing units, many many many of them. Ok, so, promising. https://pulp-platform.org/occamy/

        And they have their own extension, Stream Semantic Registers, which adds a very CISC-y set of instructions that combine real work with loads/stores (and incrementing the data pointer for the next loop iteration), allowing DSP-like performance in loops. A super slick extension that massively increases throughput and reduces the ISA's footprint in loops. https://arxiv.org/abs/1911.08356
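
        To make the SSR idea concrete, here is a hedged sketch (my illustration, not code from the paper): in a plain dot-product loop like the one below, the two loads plus the index and branch bookkeeping dominate the instruction stream; with stream semantic registers, reading the streaming registers performs the loads and pointer increments implicitly, leaving essentially just the multiply-accumulate.

            #include <stddef.h>

            /* Plain scalar dot product: per element, a normal RISC-V build
               issues two loads plus index/branch overhead around one
               multiply-add. With SSRs the loads and increments become
               implicit reads of streaming registers, so the loop body
               shrinks to roughly just the fmadd. */
            float dot(const float *a, const float *b, size_t n) {
                float acc = 0.0f;
                for (size_t i = 0; i < n; i++)
                    acc += a[i] * b[i];   /* the only "real work" here */
                return acc;
            }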

    • CoastalCoder a day ago

      I agree with your point.

      Some ISAs are more, or less, amenable to implementations that are fast for modern workloads.

      E.g., a really bad ISA could make SIMD ops, floating-point math, prompt interrupt handling, 64-bit addressing, etc. really hard to implement efficiently.

      So based on the novelty of RISC-V, that's a plausible interpretation of the title.

      • phkahler a day ago

        One of the things making x86 hard to implement is the flags register. Flags are set as a side effect of many instructions, and I'd be really interested to see how Intel or AMD handle them in a modern processor. RISC-V gets rid of ISA-defined flags entirely, so that problem goes away. In some cases the lack of flags can lead to slower software, but 99.9 percent of code won't miss them.
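
        To illustrate what "no flags" means in practice, a minimal sketch (my example, not phkahler's): on a flags ISA, multi-word addition chains through the carry flag (x86 ADD then ADC), while RISC-V materializes the carry in an ordinary register with a compare.

            #include <stdint.h>

            /* Deriving a carry without a flags register: RISC-V compilers
               emit add + sltu here, keeping the carry in a plain register
               instead of hidden CPU state. */
            uint64_t add_lo(uint64_t a, uint64_t b, uint64_t *carry) {
                uint64_t sum = a + b;   /* RISC-V: add  */
                *carry = sum < a;       /* RISC-V: sltu (unsigned overflow) */
                return sum;
            }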

    • goodpoint a day ago

      Phrasing it as "RISC-V is slow" really reads like clickbait. It implies that the ISA is inherently inefficient, when it's actually really good.

      Various low-power rv64 CPUs actually outperform x86 when you compare them in terms of die area and energy usage.

    • pengaru a day ago

      > Is it more that current RISC-V implementations are slow than the ISA being inherently slow?

      Obviously

      I'll defer to Jim Keller (DEC Alpha, AMD Zen, now Tenstorrent...):

      https://www.youtube.com/watch?v=yTMRGERZrQE

  • PreInternet01 a day ago

    Nah, 'RISC-V is slow' is not exactly an uncommon sentiment: see, e.g., https://news.ycombinator.com/item?id=41920766

    Not surprising, since the few physical RISC-V implementations that are available under-perform a decade-old RasPi by a significant margin, and that platform is not a rocket ship to begin with.

    • dagmx a day ago

      You linked to the author's own comment, FWIW.

      • PreInternet01 a day ago

        Oops. Well, since recursive acronyms are widely-accepted in our industry as well, I'll just leave that one up.

    • a day ago
      [deleted]
    • tredre3 a day ago

      You are out of date if you think there are no currently available commercial RISC-V cores that can outperform the decade-old Raspberry Pi 1 B+. Frankly, a Pentium II from almost 30 years ago could outperform the first Raspberry Pi. It's a very low bar, and RISC-V crossed it many years ago.

  • rwmj a day ago

    This is missing the two main server vendors who are taping out at the moment, Rivos and Ventana. Rivos at least are targeting the highest end performance.

    • classichasclass a day ago

      Yes, but "taping out" is still a ways from "actually exists and you can bench it."

      • rwmj a day ago

        The claim in the article is that RISC-V is somehow inherently unable to compete with Arm because of all sorts of missing architectural features. Yet there are server vendors who already have those architectural features and will make chips available fairly soon. The claim in the article is just wrong.

    • ta988 a day ago

      are there benchmarks?

      • rwmj a day ago

        If you sign an NDA with them, I guess. Eventually you'll be able to buy the chips & benchmark them yourself.

        • kergonath a day ago

          So, eventually we’ll be able to say that these chips are actually available. But right now, they are not proof of anything.

  • amelius a day ago

    > RISC-V implementations in the wild lack advanced features that modern CPUs rely on for speed, including sophisticated pipelining mechanisms, out-of-order execution capabilities, advanced branch prediction, and multi-tiered cache hierarchies. Most commercial RISC-V chips remain in-order processors, meaning they execute instructions sequentially rather than optimizing their order for performance. This architectural simplicity creates a fundamental performance ceiling that's difficult to overcome without significant architectural changes.

    This is a bit surprising given that all these techniques have been in computer architecture textbooks since at least the 90s.

    • librasteve a day ago

      I would guess that over 50% of the design investment in modern OoO CPUs goes into branch prediction, cache tuning and so on, with multiple speed/cost points available to fine-tune and balance prefetch, decode, instruction units, blah blah to max out benchmark scores (the bible according to Hennessy et al.). I would be more interested to see how fast you can make the literally 1000s of "snitch" cores go with the right software.

    • undersuit a day ago

      You don't need these tricks; you can always increase clocks and speed up memory interfaces. It was a lot harder for Intel to widen the memory interface of the P5 or increase its clocks sufficiently, so they made it superscalar.

      RISC-V gets to take advantage of being produced in 2024, absorbing all the clock speed and transistor advantages we get for free today thanks to six decades of transistor production.

      • amelius a day ago

        But how do you explain the 25x performance gap then?

        • undersuit a day ago

          RISC-V hasn't maxed out. A new part clocks at 2.4GHz on 7nm (the dev board actually runs at 1.4GHz), while my old Ryzen 7 5800X, also on 7nm, hits 4.5GHz. https://www.theregister.com/2024/04/09/sifive_riscv_hifive/

          • amelius 18 hours ago

            But increasing the clock speed might give you a 5x speed improvement at most.

            • undersuit 14 hours ago

              That's not the only thing you can do.

        • phkahler a day ago

          It costs a lot of money to develop an advanced CPU and deploy it on an advanced node. You need a market for the chips to justify that, so it's a bit of a chicken-and-egg problem. The gap is slowly closing, and I'm looking forward to a RISC-V Linux laptop in the next few years.

  • dagmx a day ago

    I very much agree with you, and I think you’ll hit the hornets’ nest on this one, unfortunately.

    I’m all for more architectures but RISC-V has an absurd fandom behind it now. It feels like the fandom behind Linux on the home desktop, or Vulkan (which makes sense given the open nature), in the way that they’re trying to manifest its success as reality by just saying that it’ll inevitably be used, while ignoring the hurdles in the way.

    That’s not to say those aren’t successes when used, but I often feel the comments that put them on a pedestal don’t acknowledge the immense delta between today and their imagined future, and have very little interest when it’s pointed out.

    For all three it comes down to Software Compatibility and experience of use. The proponents seem to have a “the underlying tech is built and people will flock to it when they open their eyes” mindset, but the first step in fixing the software compatibility gulf is acknowledging it exists and acting on it. FOSS isn’t alone here; Apple makes the same mistake with desktop gaming, and Microsoft with almost any physical product they release that’s not running Windows.

    I really do hope that more open platforms and technologies happen, but I feel like the people who unabashedly push them without acknowledging how it needs to happen are doing them a disservice.

    • wink 19 hours ago

      > It feels like the fandom behind Linux on the home desktop

      I don't agree with this comparison (or I am misjudging your intention): 90% of the people I know who run any sort of Linux desktop (usually developers, at work) don't switch their home desktops only because of games. I know we're a tiny minority (and I am typing this from a Windows machine), but it's nothing like 10x worse in objective terms (e.g. speed).

      • dagmx 19 hours ago

        But that’s precisely it. You don’t switch at home because of games; others don’t for other compatibility reasons either. I’m not specifically talking about games but the whole user experience.

        That’s not a knock against Linux. It’s great, but it’s also disingenuous when people push it as the year of Linux on the home desktop.

        • talldayo 17 hours ago

          > I’m not specifically talking about games but the whole user experience.

          If you are capable of using an iPad as a gaming device, I literally do not understand how you wouldn't be able to use a GNOME desktop to achieve literally the exact same outcome.

          Am I wrong? Getting Steam to use Proton is literally one click in the Steam settings; using an app like Bottles just has you open exes like normal. This is no worse than the Crossover Wine support from the Mac days of yore, if not more streamlined, and it's not fighting against System Integrity Protection. And your fucking settings app doesn't give you a notification pip for not logging in.

    • giantrobot 21 hours ago

      > I’m all for more architectures but RISC-V has an absurd fandom behind it now.

      I don't understand what the fandom thinks they're going to get out of RISC-V "winning". You're not going to be able to download a new CPU like you can a new Linux kernel. An open source CPU core is useless without a factory to manufacture it.

      There's not going to be a GNU equivalent to semiconductor manufacturing. The baseline cost to build a factory is billions of dollars. You also can't just slap any design on any manufacturing node or chemistry. There's a lot work to get a chip design working on a particular node.

      A CPU is a very small part of a functional computing device. It's magical thinking to assume that just because a device is built on an open source CPU core, the overall system will somehow be more open.

      Most of the stuff people bitch about being binary blobs will remain binary blobs even with a RISC-V CPU core. Anything with a radio will remain a black box for regulatory reasons, even if the baseband core is a RISC-V chip. GPUs will remain black boxes only accessible via their drivers' interface. Peripheral controllers covered by patent pools or branding licenses won't cease to be covered even if the controllers are RISC-V.

      • dagmx 21 hours ago

        The win is ideological, not focused on pragmatism, IMHO. Which is fair, but IMHO one should be honest with themselves if that’s where they’re coming from.

        If I can use that premise, then all the incongruity makes much more sense.

        • calf 10 hours ago

          Serious, non-rhetorical question: are you suggesting that Turing Award recipient Dave Patterson, who is currently on the RISC-V board, is somehow dishonest about the philosophy and approach being taken here?

      • eternityforest 21 hours ago

        Having only one architecture for almost all devices seems like it would be nice, although probably not a super big advantage.

        • Manabu-eo 20 hours ago

          Obligatory xkcd: https://xkcd.com/927/

          I do recognize that, as a royalty-free and well-supported architecture, very flexible with all its optional extensions, it does have a better shot than others at becoming the standard architecture. But the sheer amount of closed-source software written for architectures that keep track of arithmetic flags, which would need to be emulated, is daunting.
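
          A hedged sketch of why that's daunting (illustrative, not from any real emulator): for every arithmetic instruction, a translator from a flags ISA has to synthesize flags that RISC-V never computes, e.g. for an 8-bit add:

              #include <stdint.h>
              #include <stdbool.h>

              /* Flag synthesis a binary translator might emit (or defer
                 lazily, the way QEMU does) after an 8-bit add. */
              typedef struct { bool cf, zf, sf, of; } flags_t;

              uint8_t add8(uint8_t a, uint8_t b, flags_t *f) {
                  uint8_t r = a + b;
                  f->cf = r < a;                           /* unsigned carry */
                  f->zf = (r == 0);                        /* zero */
                  f->sf = (r >> 7) & 1;                    /* sign */
                  f->of = ((~(a ^ b) & (a ^ r)) >> 7) & 1; /* signed overflow */
                  return r;
              }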

      • panick21_ 21 hours ago

        RISC-V isn't just the RISC-V standard; it's also a larger movement. What RISC-V critically changes is that open core designs can now be openly shared. This is something that wasn't the case before.

        Universities and businesses can now cooperate much better, and people can work on a project at university and commercialize it far faster.

        > You're not going to be able to download a new CPU like you can a new Linux kernel. An open source CPU core is useless without a factory to manufacture it.

        A huge number of well-designed FPGA cores can actually be downloaded. Not so long ago, good open FPGA cores weren't that common; now there is a wealth of options. And you can get very high quality stuff like OpenTitan and use it yourself.

        And designing your own CPU and having it manufactured, more like a PCB, isn't as crazy as it used to be. It used to cost many millions; now you can get it done for much less. 30 years ago, hobbyists designing their own boards and getting them back within a few days wasn't a thing; now it's common. Almost every hacker conference now has its own PCB. This wasn't a thing not that long ago.

        Google worked with SkyWater Technology to actually open source most of their process. You can use a fully open flow and order your own custom chips.

        Contract manufacturing like that barely existed in the 80s, and now it's the standard.

        > A CPU is a very small part of a functional computing device. It's magical thinking to assume that just because a device is built on an open source CPU core, the overall system will somehow be more open.

        Nobody has this 'magical' thinking; that's a straw man. It's about observing a longer-term trend. Before RISC-V, even open cores weren't much of a thing. Now that there are quality cores, people can spend their time on other things. Open implementations of different IPs are increasingly being designed, and university research on new ideas is being done by adding something to an open core.

        Companies like Tenstorrent added a cool vector ISA to an open core. CHIPS Alliance is investing in tools like Verilator, and in other needed things like TileLink interconnects.

        Yes, it's not a guarantee that whoever uses RISC-V makes other things open, but at the very least it doesn't hurt. And the other side of the coin is that it makes it very EASY for people who want to make things open.

        > GPUs will remain black boxes only accessible via their drivers' interface.

        There are already early attempts at building open RISC-V GPUs that you can interact with however you want. Sure, you're not going to get that from Nvidia, but that doesn't mean it's not valuable.

        > Peripheral controllers covered by patent pools or branding licenses won't cease to be covered even if the controllers are RISC-V.

        Nobody is disagreeing with that, but it's still the case that a lot of things can actually be open. A position that says 'it's bad because it's not absolutely good' is dumb.

        The point is that RISC-V existing and being successful is just one building block in the idea of information and technology being free and sharable; it moves cooperation and competition to a higher level.

        It has lots of practical benefits that have already been demonstrated, and RISC-V as a movement has already had a huge impact on everything from tooling to peripherals. Just look at the RISC-V Foundation, CHIPS Alliance, CORE-V, the PULP project, and so on.

        > I don't understand what the fandom thinks they're going to get

        A better world mostly, and many practical benefits along the way.

        • giantrobot 16 hours ago

          >> I don't understand what the fandom thinks they're going to get

          > A better world mostly, and many practical benefits along the way.

          That's the magical thinking. There are logical leaps required to get from the current status quo to the proposed future state. A CPU architecture and core design is orthogonal to almost every other market force in the industry.

          Let me make sure I'm explicit since people tend to get very tribal when anyone says anything about their "team". I have no problem at all with RISC-V as an architecture. I do not care if my next phone or laptop has an ARM chip or RISC-V chip. As long as my laptop does laptop stuff and my phone does phone stuff the CPU instructions executing do not matter to me. I'm also writing zero assembler for any non-trivial personal or professional project. So long as compilers and toolchains exist for my system the ISA is an academic discussion for me. I have no problem with RISC-V existing or "winning".

          In terms of the world being "better" with RISC-V, that's just a weird statement. The architecture doesn't offer anything actually new to the industry. There's nothing fundamental about the architecture that makes anything better. The ISA has implementation gotchas that make for problematic or complicated compiler implementations. Its extensible nature also provides a huge surface area for minor implementation specific incompatibilities. Two similar RISC-V chips may not be drop-in replacements for one another. So it's not like the overall RISC-V design is objectively better than any other ISA.

          The open source nature of RISC-V is an academic improvement over closed core designs. I'm a random guy and I have the same access to most of the same compilers as Google, Amazon, or anyone else. I can compile Linux on any commodity computer I own. I don't need a clean room or expensive equipment to do software development or even deployment. If I write something you want to use the marginal cost of acquiring it for you is effectively zero. Open source software is unreasonably effective because of that trivial marginal cost of reproduction.

          Unlike the Linux kernel you can't compile a CPU core and reboot and get some performance gain. You might be able to build a 4004[0] in your garage, but you're not going to be building a CPU you can drop into your laptop or phone. At least not one that would be able to run at any reasonable speed.

          Open source hardware is not bad. It just doesn't solve any of the very real problems of producing hardware. It doesn't obviate the challenges or costs of developing new process nodes or chemistries. It doesn't help the marginal costs of producing hardware. If you're just buying fully finished chips it's not like you're getting a discount because the manufacturer saved some money on the core design. They'll still charge whatever the market will bear and pocket the savings.

          The idea that open hardware will make the world better does not seem like a supportable statement. You're not getting a discount on an Android phone because someone patched a bug in the Linux kernel in their spare time.

          [0] https://spectrum.ieee.org/the-high-school-student-whos-build...

          • panick21_ 15 hours ago

            > The architecture doesn't offer anything actually new to the industry.

            There is more to life than a technical specification. The change in the license, the development pattern, and the business model actually matters.

            I suggest you watch some talks by Krste Asanovic, who created the RISC-V project; he explains exactly this point.

            > Open source software is unreasonably effective because of that trivial marginal cost of reproduction.

            Yes, open hardware isn't as good as open software, we know. Sadly we don't have a universal 3D printer. But that doesn't mean it's worthless.

            > Unlike the Linux kernel you can't compile a CPU core and reboot and get some performance gain.

            Have you never used a modern FPGA?

            > It just doesn't solve any of the very real problems of producing hardware.

            That's a narrow point of view that only considers manufacturing. If you broaden it and look at the whole value chain, it absolutely does. And you know who agrees with me? The many companies that have invested in RISC-V and its ecosystem.

            If you don't believe me, I suggest you watch this video from Google where they explain why they are doing what they are doing:

            https://www.youtube.com/watch?v=EczW2IWdnOM

            You can find videos like that from other companies, including hardware companies.

            > It doesn't obviate the challenges or costs of developing new process nodes or chemistries.

            I didn't know that producing hardware was the same as developing new nodes. In your mind, the only thing that matters is the cost of new node development? Nothing else in the whole world matters to producing computer hardware?

            > It doesn't help the marginal costs of producing hardware.

            It helps with the fixed costs, and the smaller your run is, the more important that is. And it actually does improve marginal cost in many cases, if you no longer pay a license fee. Again, go watch the talks by Krste; he explains a lot of the other points that matter around this question, and why RISC-V took off with so many companies, both producers and consumers.

            As I pointed out, PCBs went through a similar progression (and are still going through it), and this has been incredibly helpful to the whole industry.

            > If you're just buying fully finished chips it's not like you're getting a discount because the manufacturer saved some money on the core design. They'll still charge whatever the market will bear and pocket the savings.

            If there is a high-quality core in the class you are looking for, and each manufacturer has access to that same IP, then competition will drive the IP value of that core to zero. That's basic economics. And that's exactly what groups like CORE-V are trying to do.

            This exact same thing happened with software. You used to pay for things like a compiler, because it was an important value-add. But once there is an open source compiler, you can't charge money for one anymore.

            Funny how plenty of companies that buy a lot of chips, like industrial manufacturing companies, have invested in the CORE-V project. Yet you claim it provides no value. Do these people just hate money? Are they doing it out of the goodness of their hearts? Or do they understand something you missed? Consider watching the presentation from Thales on that topic, for example.

            > The idea that open hardware will make the world better does not seem like a supportable statement. You're not getting a discount on an Android phone because someone patched a bug in the Linux kernel in their spare time.

            Again, you seem to lack a basic understanding of the economics. I am not getting a discount for a bug fixed in Linux because Linux is already free: the discount has already happened. You are missing the whole point of open source. The whole point is that bugs get fixed DESPITE ME NOT PAYING ANYTHING.

            As long as all competitors have access to the same code, none of them can extract value from it, but they are all still 'forced' to provide it.

            Open cooperation has produced a system where the costs of improvements are spread incredibly wide and the benefits are spread even wider, to the point where almost nobody can actually demand money for it. And thanks to this economic reality, we all benefit from the process.

            The literal exact same process works for hardware designs as well. Chips, before going into manufacturing, are literally just code and configuration, and the value of that can be driven down. We are just not as far along, and of course manufacturing has costs.

            • calf 10 hours ago

              Have any of the presenters discussed the relevance of Moore's Law to RISC-V essentially being an open standard for commodity hardware (I think Dave Patterson has argued for this, akin to USB or internet protocol standards)? That is, in the last decade (prior to LLMs, etc.) people thought hardware would become a mature market because Moore's Law was flatlining, hence it made sense then to have an open ISA instead of ARM's IP-based economics. I'm just wondering aloud here.

    • panick21_ 21 hours ago

      > just saying that it’ll inevitably be used, while ignoring the hurdles in the way.

      That statement makes no sense. These things aren't in conflict; saying something is 'inevitable' doesn't mean you can't see the hurdles or that you are ignoring them. It's just born out of an understanding of what the hurdles are and how, in time, they can be overcome.

      > and have very little interest when it’s pointed out.

      Maybe they aren't interested because they know it's a long road, and any time there is a success it's better to just be excited for a moment than to have somebody come in with a 'but actually'. This is a totally normal social dynamic.

      This concept is best addressed in The Big Lebowski: "You're not wrong, you're just an a*hole."

      > For all three it comes down to Software Compatibility

      And as an industry we have some understanding of how standards with open protocols work and how the situation improves. RISC-V isn't new, and we have some idea of how that process is going and how it is organized. It's reasonable to make assumptions about how this will continue.

      We also have a large and strong open software community that is going to focus on these standards. Even for a very large corporation, maintaining its own standard is a huge pain in the ass.

      There is a pretty good standards process with lots of people involved that has been making very good progress.

      There is also real money and effort being put into improving all the upstreams. The RISE project, for example, brings together tons of major companies, universities, distros and so on. And this isn't just about RISC-V; lots of effort is also put into tools, open designs and so on.

      RISC-V went from no support to being comparable to long-established ISAs in a pretty short time. It's not unreasonable to project that forward.

      > and experience of use

      RISC-V has been adopted by a huge number of universities; it's the default now any time somebody upgrades those courses. Anybody doing things on an FPGA is almost certainly going to use some open RISC-V core. Some FPGA manufacturers are even pushing RISC-V as their example cores. RISC-V is also designed to be easy to learn and get into. It already has a lot of adoption all across the industry, far more and far faster than any other ISA ever.

      Experience is gained by people working on projects, and there are lots of them.

      > Apple does the same mistake with desktop gaming and Microsoft with almost any physical product they release that’s not running windows.

      That's different, because Apple just clearly doesn't care very much. That's a very different situation.

      > "the underlying tech is built and people will flock to it when they open their eyes"

      No, they are only observing that large corporations all over the world are already moving in that direction. That China and India see the advantage as well. That there are major chip makers who have made it the core of their business. That Europe is making a major investment. That a whole boatload of AI companies are adopting it. A huge number of people have already seen it; it has been growing fast for a while now, and that growth can be measured in a number of ways.

      > I really do hope that more open platforms and technologies happen, but I feel like the people who unabashedly push them without acknowledging how it needs to happen are doing them a disservice.

      I feel like those people mostly exist in your imagination. When speaking in support of something, you are not required to follow it up with a 50-page development plan. You can just like something and be optimistic, and that is totally fine; it doesn't mean that person is naive or unaware.

      If you have an actual counterargument for why optimism isn't warranted, then you can say that. Some people believe RISC-V is badly designed, for example. Some people believe fragmentation will kill it. But to outright state that 'people who are optimistic are a problem' is a silly position.

      • dagmx 21 hours ago

        And in this long rant, you basically just did every single thing that I mentioned.

        You focused on the academic and niche use cases to try to counter an argument about standard user use cases. Something Linux hasn't solved, precisely because of people like yourself who keep talking about things a regular user does not care about.

        It’s also cute that in doing this, you have to resort to name calling to feel a sense of intellectual superiority.

        • panick21_ 20 hours ago

          Wow, what an attitude to have. Sorry if I have this totally crazy belief that 'academic', 'science', 'education' and many other 'niche' things matter. Sorry that it matters to me that my friends made a PCB with a RISC-V chip on it and we had fun with it. My bad. I'm sorry, I have less than zero time for people with an attitude like yours. Have fun eating at McDonald's.

          And btw, it's fucking hilarious to call the OS that literally runs on almost every device in the world 'niche'.

          > It’s also cute that in doing this, you have to resort to name calling to feel a sense of intellectual superiority.

          I was not name calling, I was explaining a concept.

          And given this second comment of yours, now I think the concept actually applies.

          • dagmx 20 hours ago

            With all due respect, nobody but you cares that you had fun doing it, nor should they. Nobody is saying you can't. Literally, it's not even part of the discussion.

            None of that is relevant to the discussion point at hand which is mass adoption of a new arch and what stands in the way.

            And again, the intended derision of saying "Have fun eating at McDonald's"? WTF is that even supposed to mean? I'll stop responding to you because I think you are incredibly hostile.

  • rkagerer a day ago

    The article mentions out-of-order execution capabilities.

    How desirable is this vs. the complexity it introduces, and can similar benefits be achieved more cleanly at the compiler or software-architecture level?

    • cmpxchg8b a day ago

      Ask the Itanium team how putting faith in software optimizations to overcome hardware issues saved their bacon.

    • Arnavion a day ago

      The point of out-of-order execution is effectively to be able to run parallel computations that don't use the same underlying hardware, eg you can run an integer operation and a floating-point operation in parallel because they use different functional units, or even two integer operations in parallel if you have redundant integer functional units.

      Branch prediction / speculative execution then takes that further by guessing which arm of a branch will be taken and executing the instructions in that arm too. When the branch instruction completes and the prediction turns out to be correct, the processor continues as normal. If the prediction turns out to be wrong, the processor throws away the pending changes from the incorrectly speculated instructions and starts again from the correct arm.

      An in-order CPU waits for each instruction to finish before it processes the next one. There's no way to work around that in software. At best the hardware can insert latches into the ALU etc so that instructions take the smallest number of clock cycles that they need (eg multiply takes six cycles but add takes only one).

      The alternative is VLIW ISAs that rely on the compiler to encode multiple instructions that can execute in parallel into a single instruction. That didn't work out well in the past for general computing as demonstrated by Itanium etc, though it's still used for restricted domains like DSPs.
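
      As a toy illustration (a plain C sketch of my own, nothing RISC-V-specific): the loop body below contains two independent dependency chains, so an out-of-order (or dual-issue) core can overlap the integer and floating-point work, while a strict single-issue in-order core must execute them one after the other.

          #include <stddef.h>

          /* Two independent dependency chains in one loop body: an OoO
           * core can run the integer add and the FP multiply-add side by
           * side, since they use different functional units and share no
           * data. A single-issue in-order core serializes them. */
          void mixed_sums(const long *ints, const double *floats, size_t n,
                          long *isum_out, double *fsum_out) {
              long isum = 0;
              double fsum = 0.0;
              for (size_t i = 0; i < n; i++) {
                  isum += ints[i];         /* integer unit */
                  fsum += floats[i] * 2.0; /* FP unit      */
              }
              *isum_out = isum;
              *fsum_out = fsum;
          }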

  • xyst a day ago

    The thinly veiled threats of switching to RISC-V are unfounded, a mere bluff. ARM can get Qualcomm to bend so easily; I just wonder if they have the stomach and the wallet to do so.

  • a day ago
    [deleted]
  • the_jeremy a day ago

    > I haven't see any single threaded scores about 150 and no multi-threaded scores higher than 1500.

    s/about/above, s/see/seen

  • newpavlov a day ago

    I was very enthusiastic about RISC-V once upon a time, but after I encountered a number of its "quirks" (e.g. see https://www.reddit.com/r/RISCV/comments/1frrai9) my enthusiasm cooled significantly. Dealing with RISC-V fanboys and the lukewarm reaction from ISA maintainers to raised issues certainly has not helped. It looks like everyone is fine with piles of hacks, like the runtime measurement of misaligned-access performance implemented in Linux. Seriously? And I have only scratched the surface; who knows what else lurks in the deeper reaches of the ISA spec...
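
    For concreteness, that Linux mechanism is exposed to userspace roughly like this (a minimal sketch, assuming the riscv_hwprobe syscall and the RISCV_HWPROBE_KEY_CPUPERF_0 key from <asm/hwprobe.h>, available on RISC-V Linux 6.4+; it won't compile elsewhere):

        #include <stdio.h>
        #include <unistd.h>
        #include <sys/syscall.h>
        #include <asm/hwprobe.h>  /* RISC-V Linux only */

        int main(void) {
            /* Ask the kernel how this CPU handles misaligned scalar
             * accesses; the kernel may have measured this at boot. */
            struct riscv_hwprobe pair = { .key = RISCV_HWPROBE_KEY_CPUPERF_0 };

            /* args: pairs, pair_count, cpusetsize, cpus (NULL = all), flags */
            if (syscall(__NR_riscv_hwprobe, &pair, 1, 0, NULL, 0) != 0)
                return 1;

            if ((pair.value & RISCV_HWPROBE_MISALIGNED_MASK)
                    == RISCV_HWPROBE_MISALIGNED_FAST)
                puts("misaligned scalar accesses are fast on this core");
            else
                puts("avoid misaligned accesses on this core");
            return 0;
        }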

    So even after modern CPU features like OoO execution are implemented in hardware, I absolutely will not be surprised that RISC-V hardware will be slow on certain workloads without extensive software tuning. And let's be honest, most software developers will not bother to implement RISC-V-specific paths.

    I still wish for RISC-V to be successful and to displace the market dominance of x86 and ARM, but I think in the end it will be the Linux of the hardware world: a great advancement, created at just the right time, that eventually becomes a roadblock for further development of alternatives while itself saddled with a lot of tech debt.

    • fuhsnn a day ago

      I'm porting a small C compiler to 64-bit RISC-V and just learned of a quirk: uint32_t is represented in-register as sign-extended to 64 bits. Without the Zba extension, the compiler has to insert a bit-clearing sequence every time the value is used in an operation that is not agnostic to the upper bits in two's complement. The quirk has enough impact on benchmark scores that SiFive typedef'ed a u32 type to i32 in their Coremark repo[1]. The Coremark team later updated their rules to address this trick[2].

      [1] https://github.com/sifive/benchmark-coremark/blob/4486de1f0a... [2] https://github.com/riscv/riscv-isa-manual/issues/353
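
      To make the cost visible, a small sketch (function names mine; see the linked repo for SiFive's actual typedef):

          #include <stdint.h>

          /* On RV64, 32-bit values are kept sign-extended in 64-bit
           * registers. A uint32_t index must be zero-extended before the
           * address computation; without Zba that's typically two shifts
           * (slli/srli by 32), with Zba a single zext.w/add.uw. A signed
           * index is already a valid 64-bit value, so it costs nothing. */
          int32_t load_u32_index(const int32_t *a, uint32_t i) { return a[i]; }
          int32_t load_i32_index(const int32_t *a, int32_t  i) { return a[i]; }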

      • brucehoult 17 hours ago

        It's hardly a quirk when it's mentioned and explained in paragraphs three and four of the 64-bit spec. It's literally the first non-trivial [1] thing in that spec.

        As explained, it is the only way to use a single set of 64-bit conditional branches (and SLT/SLTU) for both 32-bit and 64-bit values, signed and unsigned.
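
        A C sketch of why that works (my example, not from the spec): sign extension is order-preserving for unsigned 32-bit values too, because every value >= 2^31 gets the same upper-bit bias, so the plain 64-bit unsigned compare still gives the right answer.

            #include <stdint.h>

            /* Per the invariant, both operands arrive sign-extended
             * (0x80000000 is held as 0xFFFFFFFF80000000). The ordinary
             * 64-bit SLTU/BLTU therefore compares unsigned 32-bit values
             * correctly, and no 32-bit compare/branch variants are needed;
             * this compiles to a single sltu on RV64. */
            int u32_less(uint32_t a, uint32_t b) { return a < b; }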

        The Coremark restriction is stupid and explicitly ARM-centric. Any sensible person would either lazily use "int" or use a 64-bit type ("long", "size_t", etc.) for indexes, all of which work fine on RISC-V. The only reason to use "uint32_t" is to prematurely optimise for CPUs that zero-extend 32-bit values.

        Furthermore, any sensible real-world software will allow per-architecture #if'd typedefs.

        ----

        Most integer computational instructions operate on XLEN-bit values. Additional instruction variants are provided to manipulate 32-bit values in RV64I, indicated by a ‘W’ suffix to the opcode. These “*W” instructions ignore the upper 32 bits of their inputs and always produce 32-bit signed values, i.e. bits XLEN-1 through 31 are equal.

        The compiler and calling convention maintain an invariant that all 32-bit values are held in a sign-extended format in 64-bit registers. Even 32-bit unsigned integers extend bit 31 into bits 63 through 32. Consequently, conversion between unsigned and signed 32-bit integers is a no-op, as is conversion from a signed 32-bit integer to a signed 64-bit integer. Existing 64-bit wide SLTU and unsigned branch compares still operate correctly on unsigned 32-bit integers under this invariant. Similarly, existing 64-bit wide logical operations on 32-bit sign-extended integers preserve the sign-extension property. A few new instructions (ADD[I]W/SUBW/SxxW) are required for addition and shifts to ensure reasonable performance for 32-bit values.

        ----

        [1] i.e. preceded only by "This chapter describes RV64I [...] widens the integer registers and supported user address space to 64 bits"

      • camel-cdr a day ago

        Interesting, although using u32 for indices was always a problem on x86-64 as well, because you can't use it directly, without extension, in the more complex addressing modes. Why can't programmers use size_t for indices?
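
        E.g. the usual zero-cost idiom on both x86-64 and RV64 (a trivial sketch):

            #include <stddef.h>

            /* size_t matches the register width, so the index needs no
             * widening before the address computation on either ISA. */
            long sum(const long *a, size_t n) {
                long s = 0;
                for (size_t i = 0; i < n; i++)
                    s += a[i];
                return s;
            }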

  • jauntywundrkind 19 hours ago

    The timing of this post, coming mere hours after industry giant Microchip introduced a monster of a chip, is delicious:

    > The integrated 240Gb/s [Ed: 30GB/s] TSN-enabled Ethernet switch is far from the chips' only feature: the PIC64HX has no fewer than eight 64-bit SiFive Intelligence X280 RISC-V cores,

    https://www.hackster.io/news/microchip-unveils-the-high-perf...

    The actual speed of those X280s is TBD, but this seems like a huge bump in what's available on the RISC-V market.

    Most of the chips we've been looking at use the open-sourced C906, a not particularly fancy early core, opened up in 2021. It wasn't even the highest offering in that release! https://riscv.org/news/2021/10/alibaba-open-sources-four-ris...

    Google talked about using the X280 two years ago, and about what they were able to get from its vector unit: https://www.sifive.com/blog/sifive-intelligence-x280-as-ai-c...

    • camel-cdr 18 hours ago

      Keep in mind that the 8 cores run at 1GHz and have simple scalar execution. From the product page:

      - 5.75 CoreMarks/MHz

      - 3.25 DMIPS/MHz

      - 4.6 SpecINT2k6/GHz

      This SOC is mostly interesting for its 512-bit-wide vector extension support (probably a single 512-bit vector issue per cycle).

      If the price is right, this might be usable as a nice little low-power number cruncher.

      The lockstep mode is also interesting for industrial uses.

      While possible, it's certainly nothing you'd want to run a desktop on.

  • bhouston a day ago

    This was the number 1 story on Hacker News for the first 25 minutes after I posted this, and then it dropped immediately to the second page.

    It seems that my first blog post submitted to Hacker News got de-promoted?

    That sucks.

    • arp242 a day ago

      Lots of comments in a short time can trigger the "flamewar detector", which downranks a story.

      Also wouldn't be surprised if some of the extremely aggressive RISC-V fanboys I've seen around have flagged your post for your blasphemy against their Lord and Saviour of Chips, which also downranks the story.

      You can email HN about it to get a more detailed explanation if you want.

      • bhouston a day ago

        Thanks for the suggestion! Apparently a mod had down-weighted it, but they reconsidered after I emailed them! So thanks!

        • BenjiWiebe 18 hours ago

          I've emailed the mods several times, and always got a good (and quick) response. The mods here are awesome.

  • talldayo a day ago

    > Today, ARM's success has contributed to Intel's market plateau

    I don't buy this?

    ARM is used in embedded devices and mobile hardware, neither of which Intel really "owned" in the first place. x86 is still the first-class datacenter option, and I frankly don't see ARM taking it over on the desktop or in the server room. The people with architectural licenses aren't interested in competing, which leaves Apple as really the only flexible customer, and we all know Apple was going to replace Intel eventually. So... Intel persists. And with DXVK running fine on RISC-V, I'm well within my rights to say it's faster than I think: https://youtu.be/5UMUEM0gd34

    Honestly AMD has contributed more to Intel's demise than ARM ever did. When I read blog posts like this I really wonder how proximal the author actually is to the industry - it's an assertion that lacks evidence.

  • a day ago
    [deleted]