Reflections on 2 years of CPython's JIT Compiler

(fidget-spinner.github.io)

71 points | by bratao 3 days ago

79 comments

  • eigenspace 7 hours ago

    It turns out that if you have language semantics that make optimizations hard, making a fast optimizing compiler is hard. Who woulda thunk?

    To be clear, this seems like a cool project and I don't want to be too negative about it, but I just think this was an entirely foreseeable outcome, and the number of people excited about this JIT project when it was announced shows how poorly a lot of people understand what goes into making a language fast.

    • jerf 7 hours ago

      I was active in the Python community in the 200x timeframe, and I daresay the common consensus was that language didn't matter and a sufficiently smart compiler/JIT/whatever would eventually make dynamic scripting languages as fast as C, so there was no reason to learn static languages rather than just waiting for this to happen.

      It was not universal. But it was very common and at least plausibly a majority view, so this idea wasn't just some tiny minority view either.

      I consider this idea falsified now, pending someone actually coming up with a JIT/compiler/whatever that achieves this goal. We've poured millions upon millions of dollars into the task and the scripting languages still are not as fast as C or static languages in general. These millions were not wasted; there were real speedups worth having, even if they are somewhat hard on RAM. But they have clearly plateaued well below "C speed" and there is currently no realistic chance of that happening anytime soon.

      Some people still have not noticed that the idea has been falsified and I even occasionally run into someone who thinks Javascript actually is as fast as C in general usage. But it's not and it's not going to be.

      • amval 6 hours ago

        > I was active in the Python community in the 200x timeframe, and I daresay the common consensus was that language didn't matter and a sufficiently smart compiler/JIT/whatever would eventually make dynamic scripting languages as fast as C, so there was no reason to learn static languages rather than just waiting for this to happen.

        To be very pedantic, the problem is not that these are dynamic languages _per se_, but that they were designed with semantics unconcerned with performance. As such, retrofitting performance can be extremely challenging.

        As a counterexample of fast and dynamic: https://julialang.org/ (of course, you pay the price in other places)

        I agree with your comment overall, though.

        • jerf 3 hours ago

          I'm sort of surprised I'm not seeing any modernized dynamic scripting languages coming out lately, despite the general trend towards static languages. A fast dynamic language, with a day-one concurrency story, and some other key feature that pushes it ahead seems possible to me. (I dunno, maybe a nice story for binding to Rust instead of binding to C at this point could be enough to lift a language off?) I don't see any reason why dynamic scripting languages as a category couldn't do that. The ones we have now don't, not because the category makes it impossible, but because by the time that was desirable they just had too much baggage, and are all still struggling with it even a decade after they started.

        • throw10920 6 hours ago

          What are examples of those semantics? I'm guessing rebindable functions (and a single function/variable namespace), eval(), and object members available as a dict.

          • hmry 6 hours ago

            Some examples that come to mind: you can inspect the call stack and get a list of the local variables of your callers; you can assign to object.__class__ to dynamically change an existing object's class at runtime; and you can overwrite every operator, including obj.field access, dynamically at runtime (including on an existing class).
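
            A minimal sketch of the first two (the class names are made up; the frame APIs are CPython-specific):

                import inspect

                def peek_at_caller():
                    # Walk one frame up and read the caller's local variables.
                    caller = inspect.stack()[1].frame
                    print(caller.f_locals)

                def f():
                    secret = 42
                    peek_at_caller()  # prints {'secret': 42}

                class A: pass

                class B:
                    def hello(self):
                        return "I'm a B now"

                f()
                obj = A()
                obj.__class__ = B  # retype a live object in place
                print(obj.hello())

            Any of this can happen at any time, so a compiler can rarely prove it won't.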

      • emtel 3 hours ago

        This is all true, but there's another angle that often gets missed:

        JITs are really only ideal for request-processing systems, in which a) memory is abundant b) the same code paths run over and over and over again, and c) good p99 latency is usually the bar.

        In contrast, in user facing apps, you usually find that a) memory is constrained b) lots of code runs rarely or in some cases once (e.g. the whole start-up path) c) what would be considered good p99 latency for a server can translate to pretty bad levels of jank.

        JITs can't do anything if the code you care about runs rarely and causes a frame skip every time you hit it, either because the JIT hasn't triggered yet due to too-few samples, or the generated code has been evicted from the JIT cache because you don't have memory to spare. And if you have code that needs to run fast _every_ time it runs, the easiest way to do that is to start with fast code already compiled and ready to execute.

        We saw this play out when Android moved from Dalvik (JIT) to ART (AOT compilation). Apple figured this out years earlier.

        Of course it's not that there are no highly performant apps built on JIT runtimes. But it's a significant headwind.

        (Most of the above applies equally to tracing GC, btw)

      • yxhuvud 6 hours ago

        While what you say is true, there is still a huge gap between the performance of JavaScript (and even Ruby) and that of Python. The efforts to optimize Python are lagging behind, so there are a lot of things that can still be made faster.

        • Sesse__ 6 hours ago

          Python is also choosing to play with one hand behind its back; e.g., the “no extension API changes” rule, which means any hope of a faster value representation (one of the most important factors in making a dynamic language fast!) goes out the window; the refusal to change the iterator API, which means that throwing and handling exceptions is something the fast path of basically everything needs to deal with; and so on.
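
          To illustrate the iterator point: every for loop in Python is defined in terms of an exception, so something like this sits on the fast path of all iteration:

              it = iter(range(3))
              while True:
                  try:
                      x = next(it)  # every step may raise
                  except StopIteration:
                      break         # normal loop exit is signalled by an exception
                  print(x)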

          • pbronez 5 hours ago

            Those changes are big enough that they’d need to be a Python 4, don’t you think? The community is still gun-shy after the 2-to-3 transition pain.

            • Sesse__ 3 hours ago

              The biggest problem of the Python 3 release was that it broke a bunch of _Python_ code. That's pretty different from changing the C API. But sure, it has ups and it has downs. One of the downs is that Python, despite a lot of announcements over the years, is still struggling to become significantly faster.

            • adgjlsfhk1 4 hours ago

                Yeah, Python is somewhat trapped here. Because Python is slow, real work gets moved to C, and the C API makes it almost impossible to speed Python up. If Python had made the API changes needed for speed 20 years ago, there would be way less C code, so further changes would be easier, but that ship has now basically sailed.

              • gpderetta 4 hours ago

                It could have made the changes in the 2->3 transition.

                Instead we got parentheses around print.

        • umanwizard 6 hours ago

          Google has been pouring huge amounts of effort into making their JS engine fast for many years at this point. They have a lot more resources than the Python Software Foundation.

          • no_wizard 5 hours ago

            At one point, Google was also interested in pouring lots of money into making Python faster, and they shifted those resources away.

            I think what always ends up failing here is that, as others have stated, they won't make breaking API changes; in particular, those in charge of driving Python forward are extremely hesitant to break the C API for fear of losing the packages that have made Python so popular.

            I would imagine that if the leadership were willing to put in the elbow grease to help those key packages through the changes when they happen, they could do it, but I understand that it's not always that simple.

          • dehrmann 4 hours ago

            Meta does some work for this with Cinder, but Meta has a history of language forks, and it's far enough off the beaten track that I wouldn't use it.

          • azhenley 6 hours ago

            Microsoft had the Faster CPython team for several years, and then recently laid off some of the core devs and the team lead.

      • PaulHoule 6 hours ago

        Javascript and Common Lisp aren't as fast as C but they are faster than Python.

      • sevensor 4 hours ago

        Python is definitely slower for programs that do the same thing. What I see is that the users of fast languages often write programs that do the wrong thing, 30x faster than Python. No free lunch either way

      • beebmam 5 hours ago

        I don’t understand the sentiment of not wanting to learn a language. LLMs make learning and understanding trivial if the user wants that. I think many of those complaining about strongly typed languages (etc.) are lazy. In this new world of AI-generated code, strongly typed languages are king.

      • morkalork 6 hours ago

        I remember this from the early 2010s: "compilation of a dynamic language is a superset of compilation of static languages ergo we should be able to achieve both optimizations static languages can do and more because there are opportunities that only become apparent at runtime". When really it's all about the constraints you can put on the user that set you up for better optimization.

        • senkora 4 hours ago

          And profile-guided optimization (PGO) for static languages turned out to be pretty good at revealing those “only apparent at runtime” optimizations.

        • dehrmann 4 hours ago

          That's like the joke that dynamic languages are static languages, but with only one type: hash table.

    • pjmlp 6 hours ago

      Especially when one keeps ignoring the JITs of dynamic languages, which were at the genesis of all the high-end production JITs in use nowadays, tracing back to Smalltalk, Self, Lisp, and Prolog.

      All those languages are just as dynamic as Python, more so given the dynamic loading of code with image systems, across the network, with break-into-debugger/conditional-breakpoint and redo workflows.

    • ngrilly 5 hours ago

      Agreed. I'd like CPython to offer the possibility to opt in to semantics that are more amenable to optimization, similar to what Cinder is enabling with its opt-in strict modules and static classes: https://github.com/facebookincubator/cinder.

    • pizlonator 3 hours ago

      The semantics of Python-the-language aren’t any worse than JavaScript’s for optimization.

      Something else is going on.

    • manypineapples 6 hours ago

      PyPy manages

      • pjmlp 6 hours ago

        The black swan of Python JITs, mostly ignored by the community, unfortunately.

      • 6 hours ago
        [deleted]
    • almostgotcaught 7 hours ago

      > It turns out that if you have language semantics that make optimizations hard, making a fast optimizing compiler is hard. Who woulda thunk?

      Is this in the article? I don't see Python's semantics mentioned anywhere as a symptom (but I only skimmed).

      > shows how poorly a lot of people understand what goes into making a language fast.

      ...I'm sorry but are you sure you're not one of these people? Some facts:

      1. JS is just as dynamic and spaghetti as Python and I hope we're all aware that it has some of the best JITs out there;

      2. Conversely, C++ has many "optimizing compiler[s]" and they're not all magically great by virtue of compiling a statically typed, rigid language like C++.

      • o11c 7 hours ago

        JS is absolutely not as dynamic as Python. It supports `const`ness, and uses it by default for classes and functions.

        • dontlaugh 6 hours ago

          More importantly, there's nothing like locals[] or __getattribute__.
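
          For anyone unfamiliar, a small sketch of why __getattribute__ is brutal for a JIT (Flaky is a made-up toy class):

              class Flaky:
                  def __getattribute__(self, name):
                      # Every read, even plain obj.field, funnels through here,
                      # so attribute lookups can't simply be cached away.
                      print("intercepted", name)
                      return object.__getattribute__(self, name)

              f = Flaky()
              f.x = 1
              print(f.x)  # "intercepted x", then 1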

          • pjmlp 6 hours ago

            Smalltalk has them, and its JIT research eventually became HotSpot.

            Anything can change at any time in Smalltalk.

            • igouy 4 hours ago

              And then we find out that Smalltalk implementations might choose to optimize instead of allowing anything to change at any time.

                  ifFalse: alternativeBlock 
                      "Answer the value of alternativeBlock. Execution does not actually
                      reach here because the expression is compiled in-line."
              
                      ^alternativeBlock value
            • dontlaugh 6 hours ago

              Strongtalk limited dynamic features.

              But you’re not wrong in general. Even for Python there’s PyPy, with a JIT ~3x faster than CPython.

              • pjmlp 6 hours ago

                Strongtalk was the transition step between Smalltalk JITs and what became Sun's HotSpot, but that wasn't the main point I was making.

                Also to note that, even in that regard, Java happens to be more dynamic than people think: while the syntax is C++-like, the platform semantics are more akin to Smalltalk/Objective-C, hence why a JIT with such a background was a great addition.

            • hyperpape 4 hours ago

              There's a pretty big gap between "its JIT research eventually became HotSpot" and "Smalltalk can be made to perform on a par with HotSpot."

          • almostgotcaught 6 hours ago

            yes there is: https://wiki.python.org/moin/UsingSlots

            people really don't know enough about this to be talking about it with such confidence...
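
            For reference, a minimal sketch of what __slots__ buys you:

                class Point:
                    __slots__ = ("x", "y")  # fixed layout, no per-instance __dict__

                    def __init__(self, x, y):
                        self.x = x
                        self.y = y

                p = Point(1, 2)
                p.x = 10  # fine: a declared slot
                try:
                    p.z = 3  # not a declared slot
                except AttributeError as e:
                    print(e)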

            • dontlaugh 6 hours ago

              I was pointing out examples of the opposite: that JavaScript is less dynamic than Python.

              There's lots of Python code out there that relies on not using slots. If you're making a JIT, you can't assume that all code is using slots.

  • pizlonator 3 hours ago

    JIT and VM writer here. I’m also pretty clued in on how CPython works because I ported it to Fil-C.

    I think if I was being paid to make CPython faster I’d spend at least a year changing how objects work internally. The object model innards are simply too heavy as it stands. Therefore, eliminating the kinds of overheads that JITs eliminate (the opcode dispatch, mainly) won’t help since that isn’t the thing the CPU spends much time on when running CPython (or so I would bet).
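
    To give a rough feel for that heft (illustrative only; the exact numbers vary by CPython version and platform):

        import sys

        class Point:
            def __init__(self):
                self.x = 1.0
                self.y = 2.0

        p = Point()
        print(sys.getsizeof(p))           # the instance object itself
        print(sys.getsizeof(p.__dict__))  # plus its attribute dict
        print(sys.getsizeof(p.x))         # plus a heap-allocated float per field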

    • kzrdude 3 hours ago

      Many changes of that kind have been made by the Faster CPython team, I believe; Mark Shannon was rather focused on it (and had a decade of experience with that kind of tweak to Python).

      But I'm trying to find/recall a blog post that detailed the different steps in shrinking the CPython object struct...

      If you say that's not enough and more radical changes are needed, I would understand.

    • cs_throwaway 3 hours ago

      Do you think it may be feasible to do this and maintain the FFI?

      • pizlonator 3 hours ago

        That's the hard part!

        I think that the FFI makes it super hard to do most of the optimizations I'd want to do. Maybe it makes them impossible, even. The game is to find any chance for size reduction and fast-path simplification that doesn't upset the FFI.

  • serjester 5 hours ago

    This article doesn't do the best job explaining the broader picture - stability has been their number one priority up to this point.

    - Most of the work has just been plumbing. Int/float unboxing, smarter register allocation, free-threaded safety land in 3.15+.

    - Most JIT optimizations are currently off by default or only trigger after a few thousand hits, and skip any bytecodes that look risky (profiling hooks, rare ops, etc.).

    I really recommend this talk with one of the Microsoft Faster CPython developers for more details: https://www.youtube.com/watch?v=abNY_RcO-BU

    • kenjin4096 2 hours ago

      Hi, author of the post here, stability indeed has been a priority. There are some points which are not exactly the case though:

      > - Most of the work has just been plumbing. Int/float unboxing, smarter register allocation, free-threaded safety land in 3.15+.

      The first part is true, but for the second sentence: none of that is guaranteed to land in 3.15+. We proposed to land them, that doesn't mean they will. Landing a PR in CPython is subject to maintainer time and reviewer approval, which doesn't always happen. I proposed a few optimizations for 3.14 that never landed.

      > Most JIT optimizations are currently off by default or only trigger after a few thousand hits

      It is indeed true we only trigger after a few thousand hits, but all optimizations that we currently have are always enabled. We don't sandbag the JIT on purpose.

  • ecshafer 7 hours ago

    Does anyone know why, for example, the Ruby team is able to create JITs that are performant with comparative ease relative to Python? They are in many ways similar languages, but Python has 10x the developers at this point.

    • dfox 6 hours ago

      Ruby, in both its semantics and implementation, is very close to Smalltalk and does not really use Python's object model, which can be summarized as "everything is a dict with string keys". That makes all the tricks discovered over the last 40 years of making Smalltalk and Lisp fast much more directly applicable to Ruby.
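
      A rough sketch of that dict-ness (real lookup also involves descriptors and the MRO):

          class Dog:
              def bark(self):
                  return "woof"

          d = Dog()
          d.name = "Rex"

          print(d.__dict__)            # {'name': 'Rex'}: the per-instance dict
          print(Dog.__dict__["bark"])  # methods live in the class's dict
          print(Dog.__mro__)           # lookup walks these dicts in MRO order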

      • maxime_cb 4 hours ago

        Instigator of YJIT, the CRuby JIT here.

        It's easy to dismiss our efforts, but Ruby is just as dynamic as Python, if not more so. It's also a very difficult language to optimize. I think we could have done the same for Python. In fact, the Python JIT people reached out to me when they were starting this project. They probably felt encouraged seeing our success. However, they decided to ignore my advice and go with their own unproven approach.

        This is probably going to be an unpopular take, but building a good JIT compiler is hard and leadership matters. I started the YJIT project with 10+ years of JIT compiler experience and a team of skilled engineers, whereas AFAIK the Python JIT project was led by a student. It was an uphill battle getting YJIT to work well at first. We needed grit, and I pushed for a very data-driven approach so we could learn from our early failures and make informed decisions. Make of that what you will.

        Yes, Python is hard to optimize. I still believe that a good JIT for CPython is very possible, but it needs to be done right. Hire me if you want that done :)

        Several talks about YJIT on YouTube for those who want to know more: https://youtu.be/X0JRhh8w_4I

        • kenjin4096 an hour ago

          Hey Maxime!

          > whereas AFAIK the Python JIT project was led by a student.

          I am definitely not leading the team! I am frankly unqualified to do so lol. The team is mostly led by Mark Shannon, who has 10+ years of compiler/static analysis experience as well. The only thing I initially led was the optimizer implementation for the JIT. The overall design choices, to use tracing, to use copy-and-patch, etc., were made by other people.

          > However, they decided to ignore my advice and go with their own unproven approach.

          Your advice was very much appreciated, and I definitely didn't ignore it. I just don't have much say over the initial architectural choices we make. We're slowly changing the JIT based on data, but it is an uphill battle like you said. If you're interested, it's slowly becoming more like lazy basic block versioning: https://github.com/python/cpython/issues/128939

          You did great work on YJIT, and I am quite thankful for that.

        • josalhor 3 hours ago

          Having had no experience in JIT development, but having followed the Faster CPython JIT progress on a weekly basis, I do find their JIT strategy a bit weird. The entire decision seemed to revolve around not wanting to embed an external JIT/compiler, with all that entails...

          At first I thought their solution was really elegant. I have an appreciation for their approach, and I could have been captivated myself into choosing it. But at this point I think this is a sunk-cost fallacy. The JIT is not close to providing significant improvements, and no one in the Faster CPython community seems able to make the call that the foundational approach may not deliver optimal results.

          I either hope to be wrong or hope that the Faster CPython management has a better vision for the JIT than I do.

        • ecshafer 2 hours ago

          Thanks for the response Maxime, your work on YJIT is astounding. The speedup from YJIT was a huge improvement over CRuby or MJIT, and the work was done relatively quickly, compared to Python, which always seems to be talking about this JIT while we never see a comparable release.

      • Qem an hour ago

        > Python's object model, which can be summarized as "everything is a dict with string keys".

        Given this "it's dicts all the way down" nature of CPython, I'm curious whether the recent theoretical hash-table breakthrough[1] discussed here[2] a few months ago may eventually help make it much faster, given the compounding of dict upon dict?

        [1] https://www.quantamagazine.org/undergraduate-upends-a-40-yea...

        [2] https://news.ycombinator.com/item?id=43002511

    • abhorrence 6 hours ago

      My complete _guess_ (in which I make a bunch of assumptions!) is that generally it seems like the Ruby team has been more willing to make small breaking changes, whereas it seems a lot like the Python folks have become timid in those regards after the decade of transition from 2 -> 3.

      • gkbrk 5 hours ago

        Python has made many breaking changes after 2->3 as well. They don't even bother to increment the major version number any more.

        I haven't checked, but I wouldn't be surprised if more Python versions contained breaking changes than not.

        • zahlman 5 hours ago

          > Python has made many breaking changes after 2->3 as well.

          Aside from the `async` keyword (experience with which seems like it may have driven the design of "soft keywords" for `match` etc.), what do you have in mind that's a language feature as opposed to a standard library deprecation or removal?

          Yes, the bytecode changes with every minor version, but that's part of their attempts to improve performance, not a hindrance.

          • gkbrk an hour ago

            Why do you exclude the standard library like it's a small thing? If it's not part of the language, why do they host the documentation on the same website and ship it with the same package?

            In C, .NET, Rust, or even JavaScript, stdlib breakage is treated basically the same as language breakage. Python is an outlier in this.

    • pjmlp 6 hours ago

      Community.

      Smalltalk, Self, and Lisp are highly dynamic; their JIT research is the genesis of modern JIT engines.

      For some strange reason, the Python community would rather learn C and call it "Python", instead of looking at how languages that are just as dynamic managed this a few decades ago.

    • adgjlsfhk1 6 hours ago

      I think a major factor is C API prevalence. The Python C API is bad and widely used, so it's very difficult to improve.

      • maxime_cb 4 hours ago

        Ruby has the same unfortunate problem.

    • cuchoi 7 hours ago

      Funding?

      Seems like the development was funded by Shopify and they got a ~20% performance improvement. https://shopify.engineering/ruby-yjit-is-production-ready

      A similar experience in the Python community: Microsoft funded "Faster CPython" and they made Python 20-40% faster.

      • ecshafer 7 hours ago

        The funding is one angle, but the Shopify Ruby team isn't that big (<10 people iirc). Python is used extensively at just about every tech company, and Meta, Apple, Microsoft, Alphabet, and Amazon each have at least 10x as many engineers as Shopify. This makes me think that there must be some kind of language/ecosystem reason that makes Python much harder than Ruby to optimize.

        • UncleEntity 6 hours ago

          Probably the methods they use as well.

          I may not be completely accurate on this because there's not a whole lot of information on how Python is doing their thing so...

          The way (I believe) Python is doing it is to take code templates and stitch them together (copy & patch compilation) to create an executable chunk of code. If, for example, one were to take the py-bytecode and just stitch all the code chunks together, all you can realistically expect to save is the instruction dispatch operations, which the compiler should make really fast anyway. That leaves you at parity with the interpreter, since each code chunk is inherently independent, so the compiler can't do its magic on the entire sequence. Basically this is just inlining the bytecode operations.

          To make a JIT compiler really excel you'd need to do something like take all the individual operations of each individual opcode and lower that to an IR and then optimize over the entire method using all the bells and whistles of modern compilers. As you can imagine this is a lot more work than 'hacking' the compiler into producing code fragments which can be patched together. Modern compilers are really good at these sorts of things and people have been trying to make the Python interpreter loop as efficient as possible for a long time so there's a big hurdle to overcome here.

          I (or more accurately, Claude) have been writing a bytecode VM, and the dispatch loop is basically just a pointer dereference and a function call, which is about as fast as you can get. OK, theoretically this is how it works; there's also a check to make sure the opcode is within range, since the compiler part is still being worked on and it's good for debugging, but foundationally this is how it works.

          From what I've gleaned from the literature, the real key to making something like copy & patch work is super-instructions. You take common patterns, like MULT+ADD, and mash them together so the C compiler can do its magic. This was maybe mentioned in the copy & patch paper, or perhaps they only talked about specialization based on types; I don't actually remember.
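
          A toy illustration of the fusion idea (made-up opcodes, nothing like CPython's actual set); the fused handler saves a dispatch and hands the C compiler one bigger unit to optimize:

              # Toy stack VM with one fused "superinstruction".
              MUL, ADD, MUL_ADD, HALT = range(4)

              def run(code, stack):
                  pc = 0
                  while True:
                      op = code[pc]; pc += 1
                      if op == MUL:
                          b, a = stack.pop(), stack.pop()
                          stack.append(a * b)
                      elif op == ADD:
                          b, a = stack.pop(), stack.pop()
                          stack.append(a + b)
                      elif op == MUL_ADD:
                          # fused MUL-then-ADD: one dispatch instead of two
                          c, b, a = stack.pop(), stack.pop(), stack.pop()
                          stack.append(a + b * c)
                      elif op == HALT:
                          return stack.pop()

              print(run([MUL, ADD, HALT], [2, 3, 4]))  # 2 + 3*4 = 14
              print(run([MUL_ADD, HALT], [2, 3, 4]))   # same answer, fewer dispatches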

          So, yeah, if you were just competing against a basic tree-walking interpreter then copy & patch would blow it out of the water, but C compilers and the Python interpreter have both had millions of person-hours put into them, so that's really tough competition.

  • ggm 3 days ago

    What fundamentals would make the JIT, this specific JIT, faster? Because if it's demonstrably slower, that raises the question of whether it can be faster, or is inherently slower than a decent optimisation path through a compiler.

    At this point it's a great didactic tool and a passion project, surely? Or it has advantages in other dimensions, like runtime size, debugging, and .pyc coverage, or in thread-safe code, or ...

    • teruakohatu 7 hours ago

      The article points out they have only begun adding optimisers to the JIT compiler.

      Unoptimised JIT < optimised interpreter (at least in this instance).

      They are working on it presumably because they think there will eventually be speedups in general, or at least for certain popular workloads.

      • taeric 7 hours ago

        The article also specifically calls out machine code generation as a separate thing. I confess that somewhat surprises me, as I would expect getting machine code generated would be a main source of speed-up for a JIT? That, and counter-based choices on what optimizations to perform?

        Still, to directly answer the first question, I would hope even if there wasn't obvious performance improvements immediately, if folks want to work on this, I see no reason not to explore it. If we are lucky, we find improvements we didn't expect.

        • adrian17 6 hours ago

          > I confess that somewhat surprises me, as I would expect getting machine code generated would be a main source of speed-up for a JIT?

          My understanding is that the basic copy-and-patch approach without any other optimizations doesn’t actually give that much. The difference between an interpreter running opcodes A, B, C and a JIT emitting machine code for the opcode sequence A, B, C is very little: the CPU will execute roughly the same instructions for both; the only difference is that the JIT avoids doing an op dispatch between each op, but that’s already not that expensive due to jump threading in the interpreter. Meanwhile, the JIT adds a possible extra cost of more work if you ever need to jump from the JIT back to the fallback interpreter.

          But what the JIT allows is to codegen machine code corresponding to more specialized ops that wouldn’t be that beneficial in the interpreter (as more and smaller ops make it much worse for icaches and branch predictors). For example standard CPython interpreter ops do very frequent refcount updates, while the JIT can relatively easily remove some sequences of refcount increments followed by immediate decrements in the next op.

          Or maybe I misunderstood the question; in other words: in principle copy-and-patch’s code generation is quite simple, and the true benefits come from the optimized opcode stream you feed it, which wouldn’t have been as good for the interpreter.
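
          (You can watch the specializing side of this in the plain interpreter, by the way; since 3.11, dis can show the adaptive instructions once a function has warmed up:)

              import dis

              def add_floats(a, b):
                  return a + b

              for _ in range(10_000):  # warm it up so it specializes
                  add_floats(1.0, 2.0)

              # shows specialized opcodes such as BINARY_OP_ADD_FLOAT
              dis.dis(add_floats, adaptive=True)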

          • taeric 6 hours ago

            Right, that is basically what I was asking. Essentially, I expected the machine code to be a bit of an unrolling of the interpreter over the opcodes that a piece of code is executing.

            That my intuition is wrong here doesn't shock me, I should add. It was still a surprise and it will get me to update my idea on what the interpreter is doing.

        • MobiusHorizons 5 hours ago

          The way I understand it, the machine code generator emits machine code for some particular piece of bytecode (or whatever the JIT IR is). This is almost like an assembler and probably has templates that it expands. It is important for this machine code to be fast, but each template is at a pretty low level and lacks the context for structural optimizations. The optimizer works at a higher level of abstraction, and can make these structural optimizations. You can get very large speed-ups when you can remove code that isn't necessary, or emit equivalent code that has lower complexity or memory overhead. Typical examples of things optimizers do:

          - use registers instead of memory for function arguments

          - constant folding

          - function inlining

          - loop unrolling

          I don't know if that's exactly how it works for this particular effort, but that would be my expectation.

        • moregrist 6 hours ago

          A bytecode interpreter is, very approximately, a lookup table of bytecode instructions that dispatches each instruction to highly optimized assembly.

          This will almost certainly outperform a straight translation to poorly optimized machine code.

          Compilers are structured in conceptual (and sometimes distinct) layers. In a classic statically typed language with only compile-time optimizations, the compiler front-end will parse the language into an abstract syntax tree (AST), via a parse tree or directly, and then convert the AST into the first of what may be several intermediate representations (IRs). This is where a lot of optimization is done.

          Finally, the last IR is lowered to assembly, which includes register allocation and some other (peephole) optimization techniques. This is separate from the IR manipulation, so you don’t have to write separate optimizers for different architectures.

          There are aspects of a tracing JIT compiler that are quite different, but it will still use IR layers to optimize and have architecture-dependent layers for generating machine code.
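
          You can see a sliver of that pipeline in CPython itself; its compiler folds constant expressions before the bytecode ever runs (exact output varies by version):

              import dis

              def f(x):
                  return 2 * 3 + x

              dis.dis(f)  # the bytecode loads the pre-folded constant 6; no runtime 2*3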

          • taeric 6 hours ago

            Right, I guess my main surprise is that the PyPy bytecode interpreter is as fast as it is. My understanding is obviously outdated on how it is implemented; I thought its claim to fame was that it was written purely in Python. I'm assuming the subset of Python it is implemented in is fairly restricted? That, or my understanding was wrong in other ways. :D

    • pizlonator 3 hours ago

      In JavaScript, an unoptimizing JIT (no regalloc, no optimizations that look at patterns of ops, no analysis) is faster than the interpreter because it eliminates opcode dispatch.

      Adding more optimizations improves things from there.

      But the point is, a JIT can be a speedup just because it isn’t an interpreter (it doesn’t dynamically dispatch ops).

  • bgwalter 7 hours ago

    According to the promises of the Faster CPython Team, the JIT with a >50% speedup should have happened two years ago.

    Everyone knows Python is hard to optimize; that's why Mojo also gave up on generality. These claimed 20-30% speedups, apparently made by one of the chief liars who canceled Tim Peters, are not worth it. Please leave Python alone.

    • notatallshaw 4 hours ago

      Two years ago was Python 3.11, my real world workloads did see a ~15-20% improvement in performance with that release.

      I don't remember the Faster CPython Team claiming JIT with a >50% speedup should have happened two years ago, can you provide a source?

      I do remember Mark Shannon proposed an aggressive timeline for improving performance, but I don't remember him attributing it to a JIT, and also the Faster CPython Team didn't exist when that was proposed.

      > apparently made by one of the chief liars who canceled Tim Peters

      Tim Peters still regularly posts on DPO so calling him "cancelled" is a choice: https://discuss.python.org/u/tim.one/activity.

      Also, I really cannot think who you would be referring to on the Faster CPython Team; all the former members I am aware of largely stayed out of the discussions on DPO.

    • 5 hours ago
      [deleted]
  • throwaway032023 7 hours ago

    I remember when PyPy was only 25x slower than CPython.

  • 7 hours ago
    [deleted]
  • firesteelrain 7 hours ago

    We have had really good success using Cython, which compiles to C code that makes many calls into the CPython interpreter and the CPython standard library.
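
    For anyone curious, a minimal sketch of what that looks like (a hypothetical fib.pyx; the typed declarations let Cython lower the loop to plain C arithmetic):

        # fib.pyx -- build in place with: cythonize -i fib.pyx
        def fib(int n):
            cdef int i
            cdef long a = 0, b = 1
            for i in range(n):
                a, b = b, a + b
            return a

    Untyped code still compiles, but falls back to CPython C API calls, which is why the speedup depends heavily on how much you annotate.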

  • gjvc 4 hours ago

    Not so long ago, some people were saying that PyPy should be the de facto reference implementation because of its speed.