> Using types in a prototyping language is madness.
It's not a prototyping language or a scripting language or whatever. It's just a language. And types are useful, especially when you can opt out of type checking when you need to. Most of the time you don't want to be reassigning variables to be different types anyway, even though occasionally an escape hatch is nice.
Well, it's just a documentation suggestion for the user. For me it has about the same value as if it were written in pydoc. I'd really love to see such a study as well.
> Not quite, static typing is used at runtime, python type annotations are not
No, static typing is usually used AOT (most frequently at compile time), not usually at runtime (types may or may not exist at runtime; they don't in Haskell, for instance.)
Python type checking is also AOT, but (unlike in languages where it is inextricably tied to compilation because types are not only checked but also used for code generation) it is optional to actually do that step.
Python type annotations exist and are sometimes used at runtime, but not usually at that point for type checking in the usual sense.
> > Not quite, static typing is used at runtime, python type annotations are not
> No, static typing is usually used AOT (most frequently at compile time), not usually at runtime (types may or may not exist at runtime; they don't in Haskell, for instance.)
In fact, Haskell then allows you to add back in runtime types using Typeable!
Not impressed, because when I tried ruff I discovered that it doesn't replace (basic) pylint checks https://github.com/astral-sh/ruff/issues/970, so we run ruff and then pylint (and looking at the number of pending ruff PRs doesn't feel great).
Ruff is incredible, replacing a mountain of tools and rules with a single extremely fast linter/formatter. Given that it is updated and improved frequently, I’m curious if you have tried it recently, and if so what pylint rules are you using that it doesn’t cover?
The codebase has none of the Rust code. In fact, even the Python code in the codebase is mostly just scripts for updating version tags, etc.
Seems like the code isn't actually open source, which to me is a bit concerning. At the very least, if y'all want to release it like this, please be clear that you're not open source. The MIT license in the repo gives the wrong impression.
The ty repo contains the ruff repo[1] as a submodule, where the remainder of the code is. It is indeed open source, the layout is just indirect at the moment because of code-sharing between the tools.
At least as of a couple months ago, `ty` was actually being developed in the `ruff` repo (per a podcast interview the devs did on Talk Python), so that might be why the `ty` repo looks empty (and pulls in ruff as a git submodule).
Hopefully it gets added to this comparison:
https://htmlpreview.github.io/?https://github.com/python/typ...
If that table is anything to go by, Pyright is not to be underestimated.
I have briefly tried ty (LSP) in Emacs and it seems to work well so far. The only questionable thing I've encountered is that when the signature of a method is shown, the type annotations of some parameters seem to be presented in a particularly verbose form compared to what I'm used to - maybe they're technically correct but it can be a bit much to look at.
Anyway, odds are pretty good that ty is what I will end up using long-term, so thanks and congrats on releasing the first beta!
Note: while spec conformance is important, I don't recommend using it as the basis for choosing a type checker. It is not representative of the things that most users actually care about (and is not meant to be).
(I was on the Python Typing Council and helped put together the spec, the conformance test suite, etc)
Can you add some examples of the things users care about that aren't well covered by this? I empathize with everyone who wants a feature comparison chart so they can be confident switching without unknowingly losing important safety checks.
The conformance test suite is currently mostly focused on “what does an explicit type annotation mean”
A shared spec for this is important because if you write a Python library, you don’t want to have to write a different set of types for each Python type checker
Here are some things the spec has nothing to say about:
Inference
You don’t want to annotate every expression in your program. Type checkers have a lot of leeway here and this makes a huge difference to what it feels like to use a type checker.
For instance, mypy will complain about the following, but pyright will not (because it infers the types of unannotated collections as having Any):
    x = []
    x.append(1)
    x[0] + "oops"
The spec has nothing to say about this.
Diagnostics
The spec has very little to say about what a type checker should do with all the information it has. Should it complain about unreachable code? Should it complain if you did `if foo` instead of `if foo()` because it’s always true? The line between type checker and linter is murky. Decisions here have nothing to do with “what does this annotation mean”, so are mostly out of scope from the spec.
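To make the `if foo` vs `if foo()` point concrete, here is a minimal sketch (the `is_ready` function is just illustrative) of the kind of always-true condition that some checkers flag and others leave to a linter:
    def is_ready() -> bool:
        return False

    # Bug: the function object itself is always truthy, so this branch always runs.
    if is_ready:        # meant: if is_ready()
        print("ready")  # e.g. mypy's truthy-function check flags this; whether that's a type checker's job is the debate above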
Configuration
This makes a huge difference when adapting huge codebases to different levels of type checking. Also the defaults really matter, which can be tricky when Python type checkers serve so many different audiences.
Other things the spec doesn’t say anything about: error message quality, editor integration, speed, the long tail of UX issues, implementation of new type system features, plugins, type system extensions or special casing
And then of course there are things we would like to spec but haven’t yet!
> but pyright will not (because it infers the types of unannotated collections as having Any)
This is incorrect. pyright will infer the type of x as list[Unknown].
Unknown has the exact same type system semantics as Any.
Unknown is a pyright specific term for inferred Any that is used as the basis for enabling additional diagnostics prohibiting gradual typing.
Notably, this is quite different from TypeScript’s unknown, which is type safe.
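Concretely (a small sketch; the comments paraphrase what the tools report):
    x = []
    reveal_type(x)  # pyright: list[Unknown]; mypy instead asks for an annotation ("Need type annotation for 'x'")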
In case you’re not well versed in Python typecheckers, in the mypy vs Pyright example, Pyright can be configured to complain about not annotating the collection (and so both typecheckers will yell at the code as written).
TypeScript takes the same approach in this scenario, and I assume this helps both be fast.
They were "on the Python Typing Council and helped put together the spec, the conformance test suite, etc" so I assume they are an expert on Python typecheckers
I didn’t write it for parent lol. I guess I should be more careful with “you”.
TypeScript will use flow typing to determine the type as number[] in this code:
I think the idea is not that there are features that aren’t listed, but rather that if a typechecker supports 10 features people care about and is missing 10 that people don’t really use, it will look a lot worse on a list like this than a typechecker with 100% compliance, when in practice it may not really be worse at all.
Edit: Based on this other comment, the point was also about things not covered by the spec. “The spec mostly concerns itself with the semantics of annotations, not diagnostics or inference.” https://news.ycombinator.com/item?id=46296360
The chart does not describe speed (either in general or in any particular case). Speed/performance/latency is a thing users care about that is not included in the feature list.
I can't recall a single time that type-checker speed was the limiting factor for me.
Yea that one is fine and well covered in the blog post, and pretty easy to spot in light testing. I'm much more worried about the ones that are harder to spot until you have a false negative that turns into a real bug which would be caught by 1 tool and not another.
We'll be adding ourselves to that table soon. We'll have some work to catch up with pyright on conformance, but that's what the time between now and stable release is for.
pyright is very good, but there is also https://docs.basedpyright.com/latest/ which improves on it further.
That said, I'm a very happy user of uv, so once Ty becomes ready enough I will be happy to migrate.
Basedpyright plus any AI generated python is a hellscape unless you use hooks and have a lot of patience.
Not sure where the AI generated python is coming from?
AI generated anything is a hellscape.
Do you mind elaborating?
And what do you use instead?
Pyright has been great. But it’s slow. Speed of an LSP does matter for UX. Excited to see how much ty improves on this.
Is it wrong to say that I don't like pyright on principle because it requires node.js and npm to install and run?
I feel the same way.
Pyright is a type checker, not an LSP per se, in my opinion. ty is both.
Pyright is also an LSP; it implements the LSP spec, it's just slow.
I think it is way too slow, too. The one from Microsoft (Pylance, IIRC) is better in my opinion.
Pylance's type checker is Pyright, so in that particular respect they're exactly the same.
https://github.com/python/typing/pull/2137
PR is somewhat WIP-ish but I needed some motivation to do OSS work again :)
For those interested: the results page in this PR looks like this:
https://htmlpreview.github.io/?https://github.com/SimonSchic...
Thanks.
I really need better generics support before ty becomes useful. Currently decorators just make all return types unknown. I need something like this to work:
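For instance, a generic-preserving decorator along these lines (a minimal ParamSpec sketch; the `logged` name is just illustrative):
    from typing import Callable, ParamSpec, TypeVar

    P = ParamSpec("P")
    R = TypeVar("R")

    def logged(func: Callable[P, R]) -> Callable[P, R]:
        # Keeps the wrapped function's parameters and return type visible to the checker.
        def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
            print(f"calling {func.__name__}")
            return func(*args, **kwargs)
        return wrapper

    @logged
    def add(a: int, b: int) -> int:
        return a + b

    reveal_type(add(1, 2))  # should stay `int` rather than becoming Unknown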
Also, I use a lot of TypedDicts and there's not much support yet.
Pyright is really, really good. Anyone who doubts that 10x engineers exist, just go and look at Eric Traut. He's pretty much written it single-handedly. Absolute machine.
Mypy is trash. Nice to have a table to point to to prove it.
Oh my, I just looked him up. He is the developer of Virtual Game Station - a PS1 emulator that I used in the past to play PS Isos on my Windows ME PC! What a legend.
Wat
Unbelievable
I really hope astral can monetize without a highly destructive rugpull, because they are building great tools and solving real problems.
"pyx" is their first commercial offering: https://astral.sh/pyx
I agree though. Hope this is successful and they keep building awesome open-source tools.
We're paying for pyx. Wouldn't have if we didn't enjoy uv and ruff.
It's definitely a narrow path for them to tread. Feels like the best case is something like Hashicorp, great until the founders don't want to do it anymore.
> Feels like the best case is something like Hashicorp
Wow, that's probably my go-to case of things going south, not "best case scenario". They sold to IBM, a famous graveyard for software, and on the way there changed from FOSS licensing to their own proprietary ones for software the community started to rely on.
Why does the “y” look so wrong in the special font?
Yeah their work thus far has been an incredible public service to the Python community.
Feels like they’re headed in the direction of bun.
In zero revenue or acquisition direction
Thankfully all these LLM labs are heavily invested in python so this seems like the likely route IMO
Just need to book a long nice walk with one of the CEOs
My issue with them is that they claim their tools replace existing tools, but they don't bother to actually replicate all of the functionality. So if you want to use the full functionality of existing tools, you need to fall back on them instead of using Astral's "replacements". It's like one step forward and one step back. For me personally, speed of the tooling is not as important as what the tooling can check, which is very important for a language like Python that is very easy to get wrong.
If there are specific incompatibilities or rough edges you're running into, we're always interested in hearing about them. We try pretty hard to provide a pip compatibility layer[1], but Python packaging is non-trivial and has a lot of layers and caveats.
[1]: https://docs.astral.sh/uv/pip/
Is there any plan for a non-“compatibility layer” way to do anything manual or nontrivial? uv sync and uv run are sort of fine for developing a distribution/package, but they’re not exactly replacements for anything else one might want to do with the pip and venv commands.
As a very basic example I ran into last week, Python tooling, even the nice Astral tooling, seems to be almost completely lacking any good detection of what source changes need to trigger what rebuild steps. Unless I’ve missed something, if I make a change to a source tree that uv sync doesn’t notice, I’m stuck with uv pip install -e ., which is a wee bit disappointing and feels a bit gross. I suppose I could try to put something correct into cache-keys, but this is fundamentally wrong. The list of files in my source tree that need to trigger a refresh is something that my build system determines when it builds. Maybe there should be a way to either plumb that into uv’s cache or to tell uv that at least “uv sync” should run the designated command to (incrementally) rebuild my source tree?
(Not that I can blame uv for failing to magically exfiltrate metadata from the black box that is hatchling plus its plugins.)
> Is there any plan for a non-“compatibility layer” way to do anything manual or nontrivial?
It's really helpful to have examples for this, like the one you provide below (which I'll respond to!). I've been a maintainer and contributor to the PyPA standard tooling for years, and once uv "clicked" for me I didn't find myself having to leave the imperative layer (of uv add/sync/etc) at all.
> As a very basic example I ran into last week, Python tooling, even the nice Astral tooling, seems to be almost completely lacking any good detection of what source changes need to trigger what rebuild steps.
Could you say more about your setup here? By "rebuild steps" I'm inferring you mean an editable install (versus a sdist/bdist build) -- in general `uv sync` should work in that scenario, including for non-trivial things where e.g. an extension build has to be re-run. In other words, if you do `uv sync` instead of `uv pip install -e .`, that should generally work.
However, to take a step back from that: IMO the nicer way to use uv is to not run `uv sync` that much. Instead, you can generally use `uv run ...` to auto-sync and run your development tooling within an environment that includes your editable installation.
By way of example, here's what I would traditionally do:
    python -m venv .env
    source .env/bin/activate
    python -m pip install -e .[dev] # editable install with the 'dev' extra
    pytest ...
    # re-run install if there are things a normal editable install can't transparently sync, like extension builds
Whereas with uv:
    uv run --dev pytest ... # uses pytest from the 'dev' dependency group
That single command does everything pip and venv would normally do to prep an editable environment and run pytest. It also works across re-runs, since it'll run `uv sync` as needed under the hood.
uv needs to support creation of zipapps, like pdm does (what pex does standalone).
Various tickets asking for it, but they also want to bundle in the python interpreter itself, which is out of scope for a pyproject.toml manager: https://github.com/astral-sh/uv/issues/5802
Their integration with existing tools seems to be generally pretty good.
For example, uv-build is rather lacking in any sort of features (and its documentation barely exists AFAICT, which is a bit disappointing), but uv works just fine with hatchling, using configuration mechanisms that predate uv.
(I spent some time last week porting a project from an old, entirely unsupportable build system to uv + hatchling, and I came out of it every bit as unimpressed by the general state of Python packaging as ever, but I had no real complaints about uv. It would be nice if there was a build system that could go even slightly off the beaten path without writing custom hooks and mostly inferring how they’re supposed to work, though. I’m pretty sure that even the major LLMs only know how to write a Python package configuration because they’ve trained on random blog posts and some GitHub packages that mostly work — they’re certainly not figuring anything out directly from the documentation, nor could they.)
Getting from 95% compatible to 100% compatible may not only take a lot of time, but also result in worse performance. Sometimes it's good to drop some of the less frequently used features in order to make the tool better (or allow for making the tool better).
Got any examples in mind?
Damn it, this unicorn farting rainbows and crapping gold is not yet capable of towing another car. I don't know why they advertise it as a replacement for my current mode of transportation.
Thanks Astral team! We use Pydantic heavily, and it looks like first class support from Ty is slated for the stable release, we'd love to try it.
While we wait... what's everyone's type checking setup? We run both Pyright and Mypy... they catch different errors so we've kept both, but it feels redundant.
https://htmlpreview.github.io/?https://github.com/python/typ... suggests that Pyright is a superset, which hasn't matched our experience.
Though our analysis was ~2 years ago. Anyone with a large Python codebase successfully consolidated to just Pyright?
I appreciate the even tempered question. I’ve been using mypy since its early days, and when pyright was added to vs code I was forced to reckon with their differences. For the most part I found mypy was able to infer more accurately and flexibly. At various times I had to turn pyright off entirely because of false positives. But perhaps someone else would say that I’m leaning on weaknesses of mypy; I think I’m pretty strict but who knows. And like yourself, mine is a rather dated opinion. It used to be that every mypy release was an event, where I’d have a bunch of new errors to fix, but that lessened over the years.
I suspect pyright has caught up a lot but I turned it off again rather recently.
For what it’s worth I did give up on cursor mostly because basedpyright was very counterproductive for me.
I will say that I’ve seen a lot more vehement trash talking about mypy and gushing about pyright than vice versa for quite a few years. It doesn’t quite add up in my mind.
I’ve added ecosystem regression checks to every Python type checker and typeshed via https://github.com/hauntsaninja/mypy_primer. This helped a tonne with preventing unintended or overly burdensome regressions in mypy, so glad to hear upgrades are less of an Event for you
> I will say that I’ve seen a lot more vehement trash talking about mypy and gushing about pyright than vice versa for quite a few years. It doesn’t quite add up in my mind.
agreed! mypy's been good to us over the years.
The biggest problem we're looking to solve now is raw speed; type checking is by far the slowest part of our precommit stack, which is what got us interested in Ty.
Mentioned this in another comment, but the spec conformance suite is not representative of the things users care about (nor is it meant to be).
The spec mostly concerns itself with the semantics of annotations, not diagnostics or inference. I don't really recommend using it as the basis for choosing a type checker.
(I was on the Python Typing Council and helped put together the spec, the conformance test suite, etc)
The title of this story should be "Announcing the Beta release of ty". A lot of people have been waiting for the beta specifically.
I've been using Pyrefly and loving it compared to Pyright, but they recently shipped some updates with crash bugs that forced me to pin to a previous version, which is annoying. Unfortunately my first impression of ty isn't great either. Trying to install the ty extension on the current version of Cursor says "Can't install 'astral-sh.ty' extension because it is not compatible with the current version of Cursor (version 2.2.20, VSCode version 1.105.1)."
(pyrefly maintainer here) If you haven't already, please file an issue for that crash on the [Pyrefly repo](https://github.com/facebook/pyrefly) as well :)
If there's anything else accompanying the error, do you mind filing an issue? I've been using the ty extension with Cursor for weeks and am having trouble reproducing right now.
That's the full error. It shows up in a dialog box when I press the install button. I'm on macOS, connected with the Anysphere Remote SSH extension to a Linux machine.
If I choose "install previous version" I am able to install the pre-release version from 12 hours ago without issue. Then on the extension page I get a button labeled "Switch to Release Version" and when I press it I get an error that says "Can't install release version of 'ty' extension because it has no release version." Filed a GitHub issue with these details.
In the meantime, the previous version appears to be working well! I like that it worked without any configuration. The Pyrefly extension needed a config tweak to work.
https://forum.cursor.com/t/newly-published-extensions-appear... suggests that there's some kind of delayed daily update for new VSCode extension versions to become available to Cursor? It seems likely that's what is happening here, since ty-vscode 0.0.2 was only published an hour or two ago.
Oh, huh, and since there's no previous release version it just fails to install completely? That's unfortunate.
I can reproduce this; we're looking into it.
Apart from installation problems/crash issues, do you have some feedback about type checking with ty vs. pyrefly? Which is stricter, soundness issues, etc?
Both are rust/open-source/new/fast so it's difficult to understand why I should choose one over the other.
I still don’t understand how a single language can have multiple (what is it now, half a dozen?) different type checkers, all with different behaviour.
Do library authors have to test against every type checker to ensure maximum compatibility? Do application developers need to limit their use of libraries to ones that support their particular choice of type checker?
Users of a library will generally instruct their type-checker not to check the library.
So only the outer API surface of the library matters. That’s mostly explicitly typed functions and classes so the room for different interpretations is lower (but not gone).
This is obviously out the window for libraries like Pydantic, Django etc with type semantics that aren’t really covered by the spec.
You’re talking about a duck typed language with optional type annotations. I love python but that’s a combination that should explain a bit why there are so many different implementations.
It doesn't. Either the optional type annotations have precise semantics or they don't.
The annotations have fairly well-defined semantics; the behavior of typecheckers in the absence of annotations, where types are ambiguous, is less defined (a common case being a generic collection type whose defining position is an assignment to an empty collection, so the correct specialization of the generic is ambiguous).
What should a type checker say about this code?
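(Roughly the following; reconstructed from the replies below rather than quoted verbatim:)
    x = []            # no annotation, so the element type is left to inference
    x.append(1)
    x[0] = "new"      # widen the list to int | str, or reject assigning a str over an int?
    y = x[0] + "oops"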
It's optionally typed, but I would credit both "type checks correctly" and "can't assign 'new' over a number" as valid type checker results.
TypeScript widens the type of x to allow `number | string`, there are no type errors below:
    const x = []
    x.push(1)
    type t = typeof x
    // ^? type t = number[]
    x[0] = "new"
    type t2 = typeof x
    // ^? type t2 = (number | string)[]
    const y = x[0] + "oops"
    // ^? const y: string
https://www.typescriptlang.org/play/?#code/GYVwdgxgLglg9mABA...
https://github.com/microsoft/pyright/blob/main/docs/type-inf...
It depends on the semantics the language specifies. Whether or not the annotations are optional is irrelevant.
Either way, you didn't annotate the code so it's kind of pointless to discuss.
Also fwiw python is typed regardless of the annotations; types are not optional in any sense. Unless you're using BCPL or forth or something like that
> Either way, you didn't annotate the code so it's kind of pointless to discuss.
There are several literals in that code snippet; I could annotate them with their types, and this code would still be exactly as it is. You asked why there are competing type checkers, and the fact that the language is only optionally typed means ambiguity like that example exists, and should be a warning/bug/allowed; choose the type checker that most closely matches the semantics you want to impose.
> There are several literals in that code snippet; I could annotate them with their types, and this code would still be exactly as it is.
Well, no, there is one literal that has an ambiguous type, and if you annotated its type, it would resolve entirely the question of what a typechecker should say; literally the entire reason it is an open question is because that one literal is not annotated.
True, you could annotate 3 of the 4 literals in this without annotating the List, which is ambiguous. In the absence of an explicit annotation (because those are optional), type checkers are left to guess intent to determine whether you wanted a List[Any] or List[number | string], or whether you wanted a List[number] or List[string].
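To spell that out with Python annotations (a small illustrative sketch, not from the thread): annotating the empty list removes the ambiguity, and each choice gives a different verdict:
    x1: list[int] = []
    x1.append(1)
    x1[0] = "new"        # rejected: str is not assignable to int

    x2: list[int | str] = []
    x2.append(1)
    x2[0] = "new"        # accepted under list[int | str]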
Right. And the fact that python doesn't specify the semantics of its type annotations is a super interesting experiment.
Optimally, this will result in a democratic consensus of semantics.
Pessimistically, this will result in dialects of semantics that result in dialects of runtime languages as folks adopt type checkers.
> And the fact that python doesn't specify the semantics of its type annotations is a super interesting experiment.
That hasn't been a fact for quite a while. Now, it does specify the semantics of its type annotations. It didn't when it first created annotations for Python 3.0 (PEP 3107), but it has progressively since, starting with Python 3.5 (PEP 484) through several subsequent PEPs, including the creation of the Python Typing Council (PEP 729).
So why do the type checkers differ in behavior?
> I could annotate them with their types, and this code would still be exactly as it is.
Well, no, you didn't. Because it's not clear whether the list is a list of values of a single type or a list of values of distinct types. And there are many other ways you could quibble with this statement.
They don't. They're just documentation.
At least some of it is differing policies on what types can be inferred/traced through the callers vs what has to be given explicitly.
I think everyone basically agrees that at the package boundary, you want explicit types, but inside application code things are much more murky.
(plus of course, performance, particularly around incremental processing, which Astral is specifically calling out as a design goal here)
> Do library authors have to test against every type checker to ensure maximum compatibility?
Yes, but in practice, the ecosystem mostly tests against mypy. pyright has been making some inroads, mostly because it backs the diagnostics of the default VS Code Python extension.
> Do application developers need to limit their use of libraries to ones that support their particular choice of type checker?
You can provide your own type stubs instead of using the library's built-in types or existing stubs.
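For anyone who hasn't done this: a stub is just a `.pyi` file of signatures without bodies, and most checkers let you point at a local stub directory (e.g. mypy's `mypy_path`, pyright's `stubPath`). A minimal sketch, with the `mylib`/`fetch` names made up for illustration:
    # stubs/mylib/__init__.pyi
    def fetch(url: str, timeout: float = ...) -> bytes: ...

    class Client:
        def __init__(self, base_url: str) -> None: ...
        def get(self, path: str) -> bytes: ...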
I am not that surprised, to be honest. Basically every C/C++ static analyzer out there does (among other things) some amount of additional "custom" type checking to catch operations that are legal up to the standard, but may cause issues at runtime. Of course in Python you have gradual typing which adds to the complexity, but truly well-formalised type systems are not that common in the industry.
I am so pleased by ty’s stance that I should not have to add annotations to satisfy the type checker. I ripped our last type checker out because it was constantly nagging us about technicalities, but ty immediately found issues where we annotated that a dict was an acceptable input, but actually doing so would break things.
You guys are a godsend to the python tooling world. I’ve been far more excited about the impact rust is having on the software world than that of AI, and your work is a big part of that. While I have not seen any real net productivity gains from AI in mine or my juniors work, I’ve definitely seen real gains from using your tooling!
In fact, as JetBrains has been spending years chasing various rabbits, including AI, instead of substantially improving or fixing PyCharm, I would be miserable without you steadily replacing/repairing big chunks of PyCharm's functionality. If it came down to it, we would happily pay a reasonable license fee to use your tools as long as they stayed free for non-commercial usage.
Nice to see PyCharm going downhill being mentioned. Nice tool in the past, not so much now.
Slight tangent
I recently viewed tutorials on uv and ruff from Corey Schafer on youtube which were excellent
Hope to make these tools part of my defaults
Looking forward to a similar overview by Corey on ty :)
Curious: is there any backstory to these library names?
I believe they've been looking for two-letter names that aren't already taken, and are easy to type. I think I heard that from one of the podcasts that Charlie Marsh was on.
Source here for anyone interested[0]. From memory, Ruff was its own thing (I think named after the bird?); since then they've tried to give projects short letter combinations for consistency and ease of typing (uv, ty, pyx).
[0] https://talkpython.fm/episodes/download/520/pyx-the-other-si...
Ruff wasn't named after the bird, we just think it's funny that Charlie didn't know it was a bird. He made up the word :)
Ah, thanks for demystifying!
I've always assumed it was something like:
ruff - "RUst Formatter".
ty - "TYpe checker"
uv - "Unified python packaging Versioner"? or "UniVersal python packaging"
That's great news! TIL that ty is also a language server, which means it replaces not only mypy, but also Pyright in Neovim/VSCode.
well, this is where being pedantic bites me in the a* again. Our codebase has been mostly pyright-focused, with many very specific `pyright: ignore[...]` pragmas. Now it would be great if ty (pyrefly has an option!) could also ignore those lines. There's not _that_ many of them, but .... it's a pain.
Wow, even if it wasn't so fast, I'd be tempted to use this solely due to their support of intersection (A & B) types! This is a sore omission from the standard python typing system.
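For context, an intersection type says a value satisfies several types at once, which today you typically emulate by hand with a combined Protocol (a sketch with made-up protocol names):
    from typing import Protocol

    class Readable(Protocol):
        def read(self) -> bytes: ...

    class Closable(Protocol):
        def close(self) -> None: ...

    # Today: declare the combination as its own protocol...
    class ReadableAndClosable(Readable, Closable, Protocol): ...

    def consume(stream: ReadableAndClosable) -> None: ...
    # ...whereas an intersection type would let you express roughly "Readable & Closable" inline.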
Without digging too deep- what is the Django story?
Django does a bunch of magic which is challenging for the type checkers to handle well.
Ty doesn't support Django yet, and it doesn't have a plugin system, so third party developers can't improve it. If you need Django support, it is better to stick to mypy or pyright for the time being.
Is Django planned? Or always going to be a non first party kind of deal?
It is being planned, but there is no timeframe [1]:
> We are planning to add dedicated Django support at some point, but it's not on our short-term roadmap
[1] https://github.com/astral-sh/ruff/pull/21308#issuecomment-35...
I was underwhelmed by uv as a tool when it was announced, and when I started using it. For context, I'm a C++ developer who occasionally has to dip into python-land for scripts and tooling. I set up a new workstation about 6 months ago and decided I'd just use pip + venv again, and honestly I lasted 2 weeks before installing UV again. It's one of those tools that... doesn't really do much except _what you wanted the original tool to do_, and I'm hoping that Ty has the same effect.
Too bad they did not benchmark Zuban, which is also promising.
Also, it's too bad we have three competing fast LSP/typechecker projects now. We had zero a year ago.
The guy behind Zuban should've put his project out in the open way earlier. I'd love to see both projects succeed, but in reality it should become one.
For real. I consider myself to be “into Python typing,” and yet I had no knowledge of Zuban before the parent comment and a very faint memory of Jedi.
Displaying inferred types inline is a killer feature (inspired by the Rust language server?). It was a pleasant surprise!
It's fast too as promised.
However, it doesn't work well with TypedDicts and that's a show-stopper for us. Hoping to see that support soon.
We should generally support TypedDicts. Can you go into more details of what is not working for you?
```
from anthropic.types import MessageParam
data: list[MessageParam] = [{"role": "user", "content": [{"type": "text", "text": ""}]}]
```
This, for example, works in both mypy and pyright. (Also, autocompletion of TypedDict keys / literals from Pylance is missing.)
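A self-contained version of the same shape, so the behavior can be checked without the anthropic package (the real `MessageParam` is only approximated here):

```python
from typing import Literal, TypedDict

class TextBlock(TypedDict):
    type: Literal["text"]
    text: str

class Message(TypedDict):
    role: Literal["user", "assistant"]
    content: str | list[TextBlock]

# mypy and pyright both accept this: the dict literals are checked
# structurally against the declared TypedDicts.
data: list[Message] = [{"role": "user", "content": [{"type": "text", "text": ""}]}]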
Thank you!
I reported this as https://github.com/astral-sh/ty/issues/1994
Support for auto-completing TypedDict keys is tracked here: https://github.com/astral-sh/ty/issues/86
We've been relying on TypeForm (an experimental feature in Pyright) in xDSL. Since there are some Astral members commenting here: are there any plans to support TypeForm any time soon? It seems like you already have some features that go beyond the Python type spec, so I feel like there may be hope
Yes, we love TypeForm! We plan to support it as soon as the PEP for it lands. Under the covers, we already support much of what's needed, and use it for some of our special-cased functions like `ty_extensions.is_equivalent_to` [1,2]. TypeForm proper has been lower on the priority list mostly because we have a large enough backlog as it is, and that lets us wait to make sure there aren't any last-minute changes to the syntax.
[1] https://github.com/astral-sh/ruff/blob/0bd7a94c2732c232cc142...
[2] https://github.com/astral-sh/ruff/blob/0bd7a94c2732c232cc142...
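For context, a sketch of the kind of thing PEP 747's TypeForm enables, assuming a typing_extensions recent enough to export the draft `TypeForm` (the `trycast` function here is a toy, loosely modeled on the PEP's motivating example):

```python
from typing import Optional, TypeVar
from typing_extensions import TypeForm

T = TypeVar("T")

def trycast(typ: TypeForm[T], value: object) -> Optional[T]:
    """Return `value` if it matches `typ`, else None (toy runtime check)."""
    if isinstance(typ, type) and isinstance(value, typ):
        return value
    return None

# Without TypeForm, `int | None` is not a valid argument for a parameter
# annotated `type[T]`, so APIs like this can't be typed precisely.
x = trycast(int | None, 3)
```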
Very exciting! I guess I'll have to wait for Django and Pydantic support to migrate to it on Zulip, but type checking is the last major piece of Python tooling that's still slow for us.
Is there anything like `uv` available for Ruby? Going from Python and TypeScript, where I can use uv and bun, it feels like Ruby is stuck in the past :(
Rv, a new kind of Ruby management tool:
https://news.ycombinator.com/item?id=45023730
Super excited about this. I'm generally OK with and satisfied by Pyright, but so I was with conda before uv, or Black before Ruff.
Very excited to see this. I thought that speed does not matter much for Python tooling, but then I tried uv and realized I was wrong. The experience is just better. Looking forward to seeing more high-performance, quality tooling for Python.
Django support will be a game changer on top of the game changer ty is!
How conformant is this, compared to e.g. mypy?
FWIW MyPy is not very conformant: https://htmlpreview.github.io/?https://github.com/python/typ...
Beautiful acknowledgment list, and congratulations on the beta release!
Jesus, how long will we need this shite? Can't someone from MS fix this already? Or is it possible for Astral to implement a fully fledged Python extension so you don't have to use Microsoft's crap that includes the proprietary Pylance?
> It's recommended to disable the language server from the Python extension to avoid running two Python language servers by adding the following setting to your settings.json:
```
{ "python.languageServer": "None" }
```
Python programmers are crying out for types, it seems. It's a shame the Python foundation hasn't blessed a spec. Better to get everyone working on a single, slightly imperfect standard than a morass of differing ideas.
Could you elaborate on what you mean? There are various typing PEPs; they even have their own category[1].
[1]: https://peps.python.org/topic/typing/
And the PEPs are now collated into a larger single typing spec [1], even hosted on a python.org subdomain. (Previously it was hosted on readthedocs)
[1] https://typing.python.org/en/latest/
Speaking as a Python programmer, no. Using types in a prototyping language is madness.
The point is that you drop things such as types to enable rapid iteration, which lets you converge on the unknowable business requirements faster.
If you want slow development with types, why not Java?
Have you written any Go code? It's the closest I've come to actually enjoying a type system - it gets out of your way and loosely enforces things. It could do with some more convenience methods, but overall I'd say it's my most _efficient_ type system (not necessarily the best).
Because I want fast development with types.
> Using types in a prototyping language is madness.
It's not a prototyping language or a scripting language or whatever. It's just a language. And types are useful, especially when you can opt out of type checking when you need to. Most of the time you don't want to be reassigning variables to be different types anyway, even though occasionally an escape hatch is nice.
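A small sketch of the "escape hatch" being described: `Any` and `cast` let you opt out of checking locally while keeping types everywhere else (the config-loading example here is made up):

```python
import json
from typing import Any, cast

def load_config(path: str) -> dict[str, Any]:
    # `Any` is the opt-out: from here on, values from the config are unchecked.
    with open(path) as f:
        return json.load(f)

config = load_config("config.json")
# Locally reassert a type the checker can't infer from the JSON data.
port = cast(int, config["port"])
```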
I was reading the list of their offerings in the footer and for a second was very excited by the 5th item:
> RUFF 0.14.9
> UV 0.9.18
> TY 0.0.2
> PYX Beta
> GITHUB
Rust again? I'll skip.
Is there any study that shows that typing in Python improves code quality and reduce runtime issues?
> Ughm, is there any study that shows that guardrails and lights on bridges reduce fatalities?
> Akshually, are there any studies showing that cars driving 30 km/h kill fewer people than cars driving 80 km/h?
I think there are both of those.
Newsflash: not everything good has a study about it
Well, it's just a documentation hint for the user. For me it has about the same value as if it were written in a docstring. I'd really love to see such a study as well.
Agreed, we already had a solution for documenting types in docstrings.
In my case they just add noise when reading code and make it more difficult to review
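To make the contrast concrete, here are the two styles side by side (function names and docstring convention are arbitrary): the docstring form is pure documentation that nothing enforces by default, while the annotated form is what mypy/pyright/ty actually verify at call sites.

```python
def scale_documented(values, factor):
    """Scale every value.

    :param values: list[float], the numbers to scale
    :param factor: float, the multiplier
    :returns: list[float]
    """
    return [v * factor for v in values]

def scale_annotated(values: list[float], factor: float) -> list[float]:
    # Same information, but callers passing the wrong types get flagged.
    return [v * factor for v in values]
```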
It has been the consensus for decades at this point.
That's equivalent to asking if there are benefits of static typing.
Specifically, it's like asking if there are any studies that demonstrate benefits of static typing. Are there?
Not quite: static typing is used at runtime; Python type annotations are not.
> Not quite, static typing is used at runtime, python type annotations are not
No, static typing is usually used AOT (most frequently at compile time), not usually at runtime (types may or may not exist at runtime; they don't in Haskell, for instance.)
Python type checking is also AOT, but (unlike in languages where it is inextricably tied to compilation, because types are not only checked but also used for code generation) it is optional to actually do that step.
Python type annotations exist and are sometimes used at runtime, but not usually at that point for type checking in the usual sense.
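A tiny demonstration of that point: the annotations are ordinary, introspectable runtime objects, but nothing enforces them when the code runs; checking is the optional AOT step.

```python
def double(x: int) -> int:
    return x * 2

# Annotations exist at runtime and can be inspected...
print(double.__annotations__)   # {'x': <class 'int'>, 'return': <class 'int'>}

# ...but nothing enforces them during execution: this runs fine and
# returns "oopsoops"; only an AOT checker like mypy or ty flags it.
print(double("oops"))
```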
> > Not quite, static typing is used at runtime, python type annotations are not
> No, static typing is usually used AOT (most frequently at compile time), not usually at runtime (types may or may not exist at runtime; they don't in Haskell, for instance.)
In fact, Haskell then allows you to add back in runtime types using Typeable!
https://hackage.haskell.org/package/base-4.21.0.0/docs/Data-...
tools like dataclasses and pydantic would like to have a word...
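An example of that point: dataclasses reads a class's annotations at class-creation time to generate methods, even though the generated code still doesn't validate types at runtime (that extra step is what Pydantic layers on top).

```python
from dataclasses import dataclass, fields

@dataclass
class Point:
    x: int
    y: int = 0

# The decorator reads __annotations__ to generate __init__, __repr__, __eq__, ...
print([f.name for f in fields(Point)])   # ['x', 'y']

p = Point(x=1)
q = Point(x="1")   # still succeeds at runtime; no validation is performed
```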
> static typing is used at runtime
Educate yourself before making such claims.
Not impressed: when I tried Ruff, I discovered that it doesn't replace (basic) pylint checks https://github.com/astral-sh/ruff/issues/970 so we run Ruff and then pylint (and looking at the number of pending Ruff PRs doesn't feel great).
As noted in the linked issue
> At time of writing, many of the remaining rules require type inference and/or multi-file analysis, and aren't ready to be implemented in Ruff.
ty is actually a big step in this direction as it provides multi-file analysis and type inference.
(I work at Astral)
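To illustrate the kind of rule that needs inference: pylint's `no-member` (E1101) family has to know what type `result` is before it can flag an attribute typo, which a purely syntactic linter cannot do (whether pylint catches this exact snippet depends on its inference; the point is the category of check).

```python
import re

def find_year(text: str) -> int | None:
    result = re.search(r"\d{4}", text)
    if result is None:
        return None
    # Typo: should be .group(0). Flagging this requires knowing that
    # `result` is an re.Match, i.e. type inference across the stdlib.
    return int(result.grup(0))
```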
Ruff is incredible, replacing a mountain of tools and rules with a single extremely fast linter/formatter. Given that it is updated and improved frequently, I’m curious if you have tried it recently, and if so what pylint rules are you using that it doesn’t cover?
The ty repo contains none of the Rust code. In fact, even the Python code in the repo is mostly just scripts for updating version tags and the like.
Seems like the code isn't actually open source, which to me is a bit concerning. At the very least, if y'all want to release it like this, please be clear that you're not open source. The MIT license in the repo gives the wrong impression.
The ty repo contains the ruff repo[1] as a submodule, where the remainder of the code is. It is indeed open source, the layout is just indirect at the moment because of code-sharing between the tools.
[1]: https://github.com/astral-sh/ruff
This is also documented at https://github.com/astral-sh/ty?tab=readme-ov-file#contribut... and https://github.com/astral-sh/ty/blob/main/CONTRIBUTING.md#re...
I think that's because most of the code for `ty` is tucked away in the `ruff` codebase: https://github.com/astral-sh/ruff/tree/main/crates/ty - it's all MIT licensed.
At least as of a couple of months ago, `ty` was actually being developed in the `ruff` repo (per a podcast interview the devs did on Talk Python), so that might be why the `ty` repo looks empty (and pulls in ruff as a git submodule).