Sure, there is always intentionality behind the decisions that get made, and I'm not saying it's sloppy, it's not. However, every single language design decision feels one-off, and in 2024 the language is made up of hundreds of little one-off decisions that somehow still work out okay.
Yeah, I guess I misspoke a little bit. I am not even criticizing actors (Go has a relatively "heavy" runtime to implement its green threads too). I was originally thinking about the distributed actor proposal (and implementation) and forgot to spell that out.
> It's also not obvious that actors are re-entrant until you stumble into it.
Yeah, this part is annoying. While understandable from a runtime implementation perspective, it feels like a big mistake that will need to be corrected in the future.
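A minimal sketch of the surprise being described (type and method names are invented): the actor protects its state from data races, but any await inside a method is a point where other calls to the same actor can interleave, so a read-modify-write that spans a suspension can lose updates.

    actor Account {
        var balance = 0

        func deposit(_ amount: Int) async {
            let before = balance
            await Task.yield()          // any suspension lets other messages into the actor
            balance = before + amount   // a deposit that ran during the suspension is overwritten
        }
    }

The usual fix is to avoid carrying state across the await (or to re-read it afterwards), which is exactly the kind of thing that's easy to miss until you hit it.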
> the language is now made up of hundreds of little one-off decisions that somehow still work out okay.
There are fixes for some of these decisions, sometimes. For example, `some` / `any` fixed some associated-type-related weirdness that you previously needed to work around (a huge improvement for me, at least).
But other problems remain: the existence of existential types is not obvious at all and can cause issues. For example, a closure type is not a nominal type and cannot be extended. This is OK most of the time, but then UnboundedRange is defined as a closure: https://developer.apple.com/documentation/swift/unboundedran... so you cannot extend it easily (which would be ideal if you want to extend PartialRange / ClosedRange / UnboundedRange to do something like return a Python slice object). I think this, or a similar issue, is also what makes it hard for tuples to automatically conform to Equatable / Hashable, which is not a big problem, just a minor annoyance.
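For anyone who hasn't hit the associated-type weirdness being referenced, a small before/after sketch (the protocol and types are invented for illustration):

    protocol Store {
        associatedtype Item
        var count: Int { get }
        func load() -> [Item]
    }

    struct MemoryStore: Store {
        var items: [String] = []
        var count: Int { items.count }
        func load() -> [String] { items }
    }

    // Before Swift 5.7, spelling the bare protocol here ("-> Store", "store: Store")
    // was rejected with "protocol 'Store' can only be used as a generic constraint".
    // `some` (opaque) and `any` (existential) make both directions expressible.
    func makeStore() -> some Store { MemoryStore(items: ["a", "b"]) }

    func size(of store: any Store) -> Int {
        store.count
    }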
They actually allow you to opt out of ARC in Swift 6 by marking types as non-Copyable (~Copyable), along with “consume” and “borrowing” keywords. There are restrictions (i.e. their use in generics is limited), but it’s pretty great for optimizing hot loops.
Have you had much success adopting consume and borrow?
Once you need anything in the Swift standard library, you can't use a non-copyable struct with it. The keywords are all-or-nothing too, so either you write completely self-contained Rust-like code that doesn't touch your other code, or you revert back to using the older copyable types.
I've tried many times to ease into using them, but I can never stick with them, and eventually revert.
I do really like the high-level design though and look forward to it evolving into something usable.
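For context, here is roughly what the non-copyable design looks like in isolation (a hedged sketch along the lines of SE-0390; the friction described above appears once values like this have to flow through copy-assuming generic code or standard library APIs):

    struct FileDescriptor: ~Copyable {
        private let fd: Int32
        init(fd: Int32) { self.fd = fd }

        consuming func close() {
            // the real close(fd) call would go here
            discard self        // skip deinit; the value is gone after this call
        }

        deinit {
            // fallback cleanup if close() was never called
        }
    }

    func demo() {
        let file = FileDescriptor(fd: 3)
        file.close()
        // file.close()   // compile error: 'file' consumed more than once
    }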
This is an insulting piece. It's a poor hit job by someone who doesn't really know what is going on, which is extra unfortunate because Swift has a bunch of problems. Some of them are actually adjacent to the claims here! But this is just unsubstantive flamebait.
Swift is controlled by Apple, Inc. Nobody worth listening to is denying that. The steering committee is almost all Apple employees, most of the commits going in come from people who commute to Cupertino, and whatever the progress is on Windows or Arduino or whatever, it's clear that "first class" really means iOS. That's all true. Apple makes proposals, they seek some community input, and in the end they pick what they like and it ends up in the language.
But even with that said, the rest of this post is of low quality. Tim Cook and his MBA buddies didn't personally come in to ruin the language because the shareholders aren't making money. Chris Lattner is, whatever his contributions to the language previously, just a guy. He would probably try to put Swift on GPUs today if he had his way. It's not the first time he's sold it out, either. You've got to go beyond "omgz so many keywords so complicated!!1!!" and look deeply at the real sources of complexity. Swift has a powerful generics system that has worst-case behavior that really sucks. The discussion process has a bunch of busybodies who have nothing better to do than talk about the names of things. Yes. These are real problems. It's funny when there are random hacks in the source code to support things that Apple is shipping. But that is how a large project works. You are more than welcome to laugh about it on Twitter, but substantive criticism it is not. Which is a shame, because there is plenty of it to go around.
I like how the conclusion of the article paints a very positive picture for the language. It’s as if the entire negative tone of the article was a poor attempt to sound insightful for clicks.
These kinds of pieces are what end up producing the worship of stuff like Go, with its clunky design evolution. It turns out that either the language handles the complexity of the world, or the source code makes that complexity visible in needless boilerplate.
Agree on the general tone; only upvoted by accident.
I've used Python for nearly two decades. It's not a great programming language to write or maintain, and its package management makes npm (which is not great itself) look like a walk in the park. It's a bit unfortunate that it became AI's go-to programming language.
I know it's subjective, but if we're arguing about Swift's quality by comparing with Python, I'm not buying it.
To each his own. I thoroughly enjoy Python, having used it for ~15 years now, have absolutely no issues with package management and haven't for the better part of a decade, and believe there are good reasons why Python "won" the data science, machine learning, etc. "battle." Namely, it's easy to use. It allows you to focus more on what you are trying to build than managing the language itself.
It has strengths and weaknesses depending on domain just like any other language. Given it is one of the most, and sometimes _the_ most, popular programming language, I support the article using it as a comparison point.
I don't necessarily disagree with you, but regarding Python winning data science, I think that's in large part due to a) Anaconda creating a single cohesive alternative ecosystem, shunned by engineers but loved by data scientists who just want to get things done, and b) the fact that with a couple of major libraries/frameworks (Pandas, Numpy, Sklearn, maybe a few others) you can get most things done, meaning there's less dependency management to do overall.
As a thought experiment, imagine someone had built Anaconda for C and then, instead of making Python APIs out of those mostly C libraries, just left the interface as C. Same packaging solution, same libraries, but the user has to learn C. Or swap C for Go even. No memory management but still more esoteric syntax and typing, interfaces, etc. required. How far does that get?
The tools you mention were built on/around Python because it was realistic to ask technical but non-programmer users to learn to use Python. The lack of explicit typing and the more "conversational" nature of Python's syntax are, IMO, what make that possible. So Python "wins" and other tooling is built on/around it to make it even easier for non-programmers to operate in the ecosystem, which solidifies its position even more.
I would love to hear positive python packaging stories because my other language packaging experiences (lisps, R, rust) felt so much better. Did you try to maintain python packages on a system with old gcc that cannot be upgraded and walk through the dependency hell explicitly while having SAT solvers fail to find compatible solutions after minutes at a time? Have you managed to keep bit-level reproducibility on future reproduction runs for more than five years in any other way than keeping the old OS and hardware around? What are your packaging tools of choice (setup.py is obsolete soon, poetry did not stick, pyproject.toml for the win I guess, conda at glacial speeds, pip too slow and prone to security risks, uv faster but incomplete, …)? Did you not encounter packages with dependency specs that no longer worked on future dates?
Python got adopted by academics around ‘10, because it had lower barriers for new programmers than Java (now legacy) and C (now a specialization).
The language and tooling options of today mean that the initial reasons for academia embracing Python are long gone. Python now presents more barriers to new programmers than alternatives.
I agree. It’s a shame that the machine learning community has married itself to Python, since it gives Python a staying force that it doesn’t really merit.
I also think Swift is getting bloated (and this can be seen in the compile times which are getting out of hand).
Sometimes I feel like some features of Swift have been implemented without being well thought through, just because they are popular with the programming crowd.
I am thinking more specifically about metaprogramming, which was originally a non-goal, but I guess with time they saw it had a point, and they replicated Rust's approach with syntactic macros, which is to me a very poor approach. You need too much boilerplate to write a macro, since it's basically a compiler plugin, and you can't interpret the semantics of the underlying code without reimplementing part of the compiler's logic.
I can find other examples, and that's not always bad, but I am not sure that's the right way of designing a language.
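To make the boilerplate point concrete, here is the declaration side of the stock stringify example from the macros documentation (module and type names are placeholders); everything interesting lives in a separate compiler-plugin target:

    // Library target: the public declaration just points at a plugin.
    @freestanding(expression)
    public macro stringify<T>(_ value: T) -> (T, String) =
        #externalMacro(module: "MyMacrosPlugin", type: "StringifyMacro")

    // Usage site:
    // let (value, code) = #stringify(2 + 3)   // (5, "2 + 3")

    // StringifyMacro itself must live in that separate plugin target, conform to
    // ExpressionMacro, build its expansion as SwiftSyntax nodes, and be registered
    // through a CompilerPlugin entry point; none of it sits next to the code that uses it.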
Macros were an early Swift goal, but they were neither the highest priority nor a small enough amount of work to be tackled as part of meeting higher-priority goals.
As an example - one could argue property wrappers exist because they were needed before macros could get done, and their syntax was chosen to align with where the Swift core team thought macros would land in the future.
I still have no idea where macros fit into a project, or why I would want to use them. They seemingly built this functionality for SwiftUI previews maybe?! and didn't really consider how this could be a core part of the language.
At least with Rust, you can just lightly add them in the same module and, at a minimum, use them to avoid boilerplate code duplication. Macros in Rust work as a core language feature because of the way the borrow checker works: you can't really break apart some methods since you need to establish ownership, so you can easily end up with duplicated code that gets noisy. Macros are the escape hatch for that, and yes, they are also complex to write, but at least I know when I'd want to use them.
Swift doesn't really document how to create macros, and they didn't really spend energy explaining how they would be valuable to a project and worth the engineering cost of implementing them.
They fit into a project the same way C, C++, Lisp, Scheme, Rust, and Scala macros, Java and Kotlin compiler plugins, and .NET code generators do.
There are several WWDC sessions about how and why.
This article is full of statements that are not backed up by any material proof. The author claims that Swift features no longer compose without giving a single example. I write Swift SDKs for a living and I have never thought that any of the recent features felt incompatible, incongruous, or redundant. Sure, each has its strengths and weaknesses, but that results in an expressive language where every problem has several solutions with different tradeoffs.
When it comes to governance, it’s not without wrinkles, but it’s not as bad as the author makes it sound. If anything, things got a bit less “dictatorial” since Chris left. Different core team members pull in slightly different directions which made the language more balanced in my view.
If you have a recent machine, the actual developer experience isn't too bad. It's just not a language for language purists (was it ever?).
I'm enough of a purist to be annoyed whenever I see the "expression took too long to type check" error. (I think bidirectional type inference isn't worth it with Swift)
The gaggle of verbose pointer types makes me want to switch to C++ whenever I have to deal with memory directly.
As the article mentions, a bunch of features were added for the sake of SwiftUI. Function builders allow SwiftUI's syntax to be minimal. They allow you to write:
    VStack {
        SomeView()
        AnotherView()
    }

instead of something like

    VStack(
        SomeView(),
        AnotherView()
    )
Given the rather bad (still!) error messages you get with SwiftUI that seem to be a result of function builders, I'd say it wasn't worth it. At least I get fewer of the "couldn't produce a diagnostic, please file a bug" errors than I used to.
Then there are property wrappers, which wrap struct/class fields with get/set code (IIRC Lattner didn't like the property wrappers). They've been partially replaced in SwiftUI by macros. The @Observable macro (probably the most widely used one) decorates your class with code that notifies listeners (almost always SwiftUI) of changes. I'd be curious to see what SwiftUI would look like without property wrappers (or macros).
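For comparison, a hedged sketch of the two generations of observation mentioned above (the macro version requires the newer OS baselines):

    import SwiftUI
    import Observation

    // Older style: ObservableObject plus the @Published property wrapper.
    final class LegacyCounter: ObservableObject {
        @Published var count = 0
    }

    // Newer style: the @Observable macro synthesizes the change tracking.
    @Observable
    final class ModernCounter {
        var count = 0
    }

    struct CounterView: View {
        @StateObject private var legacy = LegacyCounter()   // wrapper-based observation
        @State private var modern = ModernCounter()         // macro-based observation

        var body: some View {
            Button("\(legacy.count) / \(modern.count)") {
                legacy.count += 1
                modern.count += 1
            }
        }
    }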
I think they had a missed opportunity to really add robust updating of views in response to state changes. Currently it's still relatively easy for your SwiftUI views not to update because your data model uses some object that isn't @Observable.
I wrote a UI library inspired by SwiftUI, but in Rust [1], and of course I couldn't add anything to the language, and more experienced Rust programmers discouraged me from using macros. So it can be done without all the extra stuff Swift added.
Every few years I revisit Swift to see if things have improved, but the language is now even more complex, with cryptic compiler errors and poor runtime error messages, long build times and slow iteration times. Xcode is still a horrible IDE, and unfortunately JetBrains AppCode has since been discontinued, basically forcing you to use VS Code for development and Xcode to run/debug tests. It can finally do async/await, but its async HTTP requests were unstable/unusable on Linux, and cross-platform work is a chore, littering conditionals like `#if canImport(FoundationNetworking)` everywhere.
I used to think Swift was the future given the resources Apple could throw behind it, but in its current state, after 10 years, I no longer do. It will remain a mainstay thanks to the Apple ecosystem, but I don't see its popularity extending far beyond that.
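For context, this is the kind of conditional being referred to: on Linux, URLSession ships in a separate FoundationNetworking module, so cross-platform files typically carry a guard like this (a standard idiom, sketched here):

    import Foundation
    #if canImport(FoundationNetworking)
    import FoundationNetworking   // URLSession and friends live here on Linux
    #endif

    func fetch(_ url: URL) async throws -> Data {
        let (data, _) = try await URLSession.shared.data(from: url)
        return data
    }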
I think Swift has been strangled, but I'm not convinced it's because of its governance structure (with Apple at the top). I get the impression that Swift is the world's best sandbox for programming language nerds to play in—and that Apple is woefully understaffing the team writing technical copy for the compiler (e.g. error messages), and the documentation.
The rate at which the Swift language has changed just in the past 3 years feels as significant as the rate at which it changed between 2016, with the release of Swift 3, and 2022.
I'm sure that people who are 100% all in on iOS development feel like the rate of change is delightful, but for the rest of us who have other responsibilities, it's exhausting and bewildering.
On the other hand, this burst of change brought it over a line into functional and declarative usability. Somewhere from 5.7 through 5.10 it indeed became meaningfully more “delightful” to use.
Delightful not because of the rate of change, but because of the programming approach it's now enabled.
Surely the "some" keyword the author is railing about actually _improves_ composability, and is in fact exactly in line with Lattner's vision "Simple things that compose." The accepted answer here [1] explains this better than I can.
I can't share the author's enthusiasm for Swift on server either. WASM support is not production ready so Cloudflare workers or Deno deploy is out. Vapor is the only properly maintained framework I can find (happy to be corrected) which I did not find ergonomic. Google Cloud Functions or AWS Lambda would be options but there's no reason to limit oneself when Rust is production ready on most web platforms.
Swift has a lot of features, but you can leave many of them alone to begin with and learn them when you need them. Low-syntax defaults work in a reasonable, streamlined way. SwiftUI will confront you with more syntactic variations than other Swift code, but that's not unusual in a DSL. Overall it's a smooth learning curve due to progressive disclosure in language features.
I don’t necessarily disagree with the article but I think the language is in a much but better place than it was a few years ago, with a few exceptions. As the author points out, Foundation and a bunch of frameworks and libraries are being open sourced and maintained by Apple and the Swift foundation which are all cross platform (including Windows and Linux).
The static concurrency checking stuff is cool on paper but has so far just introduced noise to our codebase so we’re not touching it until it cooks a little more.
I'm basically asking for the same thing I was 5 years ago, though, which the author touches on: fix the compiler issues.
Official Android support would be nice too but the community is filling the gap there.
I use Rust a little on the side and tooling, compiler messages and ecosystem are far ahead of Swift. Language wise I still prefer Swift for the work I do.
An honest question here from outside of the Swift world: why does it matter?
I'm a Clojure developer, have been exclusively one for the last 10 years and have no intention of changing anytime soon (in other words, I don't care/follow closely what the majority of development world is and what the trends are).
A few years ago we had "Open Source is Not About You" [1] by Rich Hickey, basically pushing back with the idea that the creators of open source software don't owe anything to the community. That piece has aged well for me: now I understand that something is what it is, not what I wish it was. And more importantly, the goal of open source isn't to be liked or used by everybody. It's a thing put out by its creator for others to use, and that includes Apple.
So really, why does the governance matter so much? If you don't agree with something, why not fork it and make it like you want it?
Call me a fanboy, but Swift is one of the best languages I've ever had the privilege of using. I'm truly hopeful with some more work, it's able to become the de-facto cross-platform language.
The complexity is very manageable: a decent Java, Objective-C or .NET developer can pick it up in less than a day. It's feature-packed, and stuff just makes sense.
Ladybird switching to Swift made me think. Then there was this article recently on HN on Swift vs. Rust. I would give Swift a chance, I do think currently it lacks open source projects for server side development (all the batteries).
I think Swift is a really nice language when you look at the design. I really liked protocol extensions for example.
The problem I have with it is that some decisions are very ideological.
You can easily link to C libraries, for example, but the pointer types in Swift are so fucking unusable (because pointers are evil) that it makes no sense to try it (unless you cannot choose another language).
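For a flavour of what's meant, a hedged sketch of handing a byte buffer to a C-style API (c_fill stands in for an imported C function):

    func fillWithC() {
        // Allocate, zero, pass to C, and clean up: each step has its own
        // Unsafe[Mutable][Raw][Buffer]Pointer spelling to get right.
        let count = 64
        let buffer = UnsafeMutableBufferPointer<UInt8>.allocate(capacity: count)
        defer { buffer.deallocate() }
        buffer.initialize(repeating: 0)
        // c_fill(buffer.baseAddress, count)   // hypothetical imported C function

        // Or, scoped to a closure:
        var bytes = [UInt8](repeating: 0, count: count)
        bytes.withUnsafeMutableBufferPointer { buf in
            // c_fill(buf.baseAddress, buf.count)
        }
    }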
It doesn't matter, because Swift is one of those languages that enjoys an overlord, anyone that wants to target its overlord platforms eventually has to deal with Swift.
Same with Java and Kotlin on Android, .NET and C++ on Windows, JavaScript on the Web, ISO C on UNIX/POSIX, Go on Docker/Kubernetes,....
Decisions are made openly with excellent (if exhaustive) discussion, clear reasoning, and mostly on the merits.
Features are building on prior features over the years, and when things have needed to be changed or deprecated, they get changed. So the language is alive but not encrusted.
Decisions are made much more independently and quickly than Java, which has a similar installed code base and motivated corporate patron. Both small-scope and sweeping architectural changes work through the evolution process pretty quickly.
All languages start with a clean sweet spot but major gaps, and then migrate toward common features as they age. Right now, Swift probably has the most mature concurrency plan of all languages, and is beginning to address the lifetime issues presented by value types.
No other language blends low-level efficiency and control with high-level type safety so well, to say nothing of seamless interop with C/C++, static macros, etc.
What's not maturing as fast: the third-party library ecosystem (because most people just need Apple's), Windows support (because - why?), and alternate IDEs, static analysis, etc.
As for 217 keywords, etc.: be glad there's a name for everything, without a plethora of ways to do the same thing. If it weren't easier to do complicated things with a correspondingly complicated language, everyone would be writing functional lisp code.
The real problem now is there's no real agreement on how to do the function coloring, effects, traits, etc., but that's what's needed to make use of modern hardware.
In particular, the type system being bidirectional has made it both pleasant (when it works) and painful (when it doesn't), and stuffing all this new behavior into a type system constraint solver is a bit of a black art. Venn diagrams and verbiage aside, it's just hard for people to think about, e.g., what ~Copyable means for generic parameters, or how a parameter can place a function in a different memory region.
But they'll do it after a boatload of discussion, and then make a dumbed-down happy-path 10-minute video for WWDC with code folks can play with - 10 million folks.
If anything, the language is strangled by history, but remains attractive because the team does such a good job.
Who cares about keyword count? Reserved words preserve flexibility while not realistically impairing programmer expression. What do you do with a keyword-impoverished language?
1) overload the keywords you do have to mean random unrelated things depending on context (which is why "static" in C++ means a bunch of random things), or
2) create context-dependent keywords (e.g. "final" in C++ and "match" in Python), which work, but complicate every parser everywhere.
As much as it pains me to type the following: JavaScript got it right. The initial version of the language reserved a number of keywords (e.g. "class") that would only become useful two decades later.
Yes, I didn't know about it. When I read it, the full article was there without a paywall. It seems that after some traffic they start showing a subscribe prompt to read the page.
Swift will be fine as long as they invest in Mini-Swift that improves compile times by 10x and makes it seamlessly interop with Papa-Swift, the current behemoth.
I'm sure they're aware of the compile times. If they can make C++ interop work (not easy), they can make Mini-Swift :)
As for this governance talk - I can only shake my head at these attempts to start drama. I don't understand it.
This just doesn't match with my experience of Swift. My experience is a language that is well designed, at times feeling too complex, but always justifying that complexity with correctness when I dig into the reasons. Most of the language is driven by an open process, to the point that language announcements at WWDC are only a recap of the last year of accepted enhancement proposals.
"A great language strangled by governance" – it's not clear to me that Swift is in a materially worse place than any of the 3 alternatives this post points to. Rust has had a lot of drama and botched their async/await, Python had 3, took a decade to recover, and has failed to address its terrible packaging over the last 2 decades, and the Kotlin process described sounds almost identical to the Swift process.
Was Swift entirely top-down controlled at the beginning, with a bunch of breaking changes? Yes, but it has very clearly graduated from its early development now with no forced breaking changes and an open proposal process that is clearly working. Has Swift hard-coded Apple framework exceptions? Maybe, but that has never gotten in the way of me using or understanding Swift code, so I'm not sure it affects me as a developer. Has Swift become a hugely complex compiler? Sure, but I'd rather have a complex compiler and a better developer experience, especially if Apple is paying for it.
I don't disagree with the facts here, this just doesn't resonate with me at all. I'm a very happy Swift user, would like to be using it a lot more, and enjoy the forward momentum in the ecosystem as Swift becomes in my mind one of the most advanced languages available.
Not to your main point, but I don’t think it’s fair to say Rust “botched” its async impl.
There are certain features it’s missing, but nothing about the design really prevents them from being added. And many have been in the past few years.
I’d point to the “leakpocalypse” as a more fundamental Rust flaw.
That may be more fair. I guess a better way of putting it is that the async/await implementation was fraught with drama and that it did not really succeed in getting the whole community onboard and pulling in the same direction.
Python, for example, continues to have similar issues, whereas Node/JS definitely did stick the landing, and Swift arguably did too.
Yup, that’s an entirely fair characterization. One I probably should have understood from your initial comment :).
Having perpetually colored functions is a huge, fundamental annoyance with the Rust concurrency design.
How so? It’s trivial to block on a Future or to call a function within a Future.
I could imagine another language where all “functions” are Futures, but I don’t think that would be the right move for Rust.
At the very least, that alternative language would probably have to provide a default executor. IMO, that’s a bit much for a language that aims to have a very minimal runtime.
As I mention in replies to sibling comments, Go works that way: every function is a Future, and there is no such thing as a non-async function. It works pretty well and is super easy to use and reason about, with no hard edges.
Yes, it would have to provide a default executor in the standard library. And it would have to have a noasync mode (analogous to nostd) for embedded applications that can't have asynchronous execution. These are hurdles, but not very large ones.
The ship has sailed though.
No, the reason this doesn't work for Rust (and even more so Swift) is that they need to call code that is not annotated. In Go the runtime knows what blocks and what doesn't, and this lets it effectively manage its thread pool based on the state of every goroutine. If you call outside of your runtime, you lose guarantees about what that code might do: it could block indefinitely or deadlock some other code in your program. Solving this is a pretty hard problem, which is why the solutions these languages have designed are limited. (Again: Swift ships with its own runtime; there is a default "executor". It doesn't actually solve this problem for the reasons I described.)
A great bit about Rust async is that it’s sufficiently flexible for use in embedded environments.
I would really not prefer a Rust that includes a more intrusive runtime. Especially given the API changes things like io-uring may require to exploit.
As it works now, I can nearly always predict what assembly I’ll see when I compile a Rust program. It’s predictable enough to work anywhere C does. I would hate to lose that.
> It works pretty well and is super easy to use and reason about, with no hard edges.
Oh, rly?
https://songlh.github.io/paper/go-study.pdf
The other comment implied it but I think it's worth pointing out that:
> embedded applications that can't have asynchronous execution
Is most definitely not the case.
They can't have the same type of async runtime that would be optimal for a web server or the like (and I'm not sure all desktop applications and web servers are always going to benefit from the same runtime in the same way), but that's a point in favour of Rust's model imho.
If you're interested, this is an embedded async runtime that's expected to run in no-std and no-alloc environments:
https://embassy.dev/
"embedded applications *that* can't have asynchronous execution"
not
"embedded applications can't have asynchronous execution"
there are embedded applications that are compiled with no concurrency runtime. I'm not saying all embedded applications work this way.
I'm still skeptical that such a thing exists. A concurrency runtime doesn't have to be much more than a queue of function calls. What kind of hardware can run normal code but can't do that?
Interrupt handlers, for example.
An interrupt handler could schedule an asynchronous task just fine. In a lot of situations that would be a very good way to handle things.
You shouldn't await in an interrupt handler, but there's a lot of things you shouldn't do in an interrupt handler. A rule like that doesn't mean you need a noasync mode. (And if you really wanted to use await there, you could make it work with priorities or with a separate task queue.)
Depending on your implementation it is very possible to schedule interrupt handlers into a concurrent runtime. You'd have to be exceptionally careful about how it was designed with regards to, like, reentrancy, but Rust is quite good at encoding these kinds of things.
Yeah, I think I mistakenly added a comma while reading, so I thought the "that" was preceding the explanation of why you singled out embedded. My bad.
The point about a runtime optimized to run bare metal on a single core probably being suboptimal on a desktop machine with 8 cores still stands, though. (And the laptop one is possibly not going to work optimally on a server with hundreds of cores, either.)
Doing that in Rust can be a footgun if the synchronous code blocks for too long. You often want a separate threadpool for calling those functions. It works great in Swift, though!
Swift mitigates a lot of the function coloring pain: synchronous methods can conform to asynchronous protocol requirements, thanks to the compiler generating thunks (at least, I think that’s how it works)
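A small sketch of that (invented names): the protocol requirement is async, the conforming method is not, and callers simply await through the requirement.

    protocol Loader {
        func load() async throws -> String
    }

    struct CachedLoader: Loader {
        // A plain synchronous, non-throwing method can satisfy the
        // `async throws` requirement; the compiler bridges the call.
        func load() -> String { "cached" }
    }

    func run(_ loader: some Loader) async throws {
        let value = try await loader.load()
        print(value)
    }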
Isn’t this what spawn_blocking and block_in_place do?
The opposite, actually. Swift (Concurrency) doesn't really have a good way to do the kind of thing spawn_blocking does. (Though, some would argue that the very design of spawn_blocking is not good to begin with.)
Yes! In Swift there is no need to do this sort of differentiation, though, which is more ergonomic. Different trade-offs.
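A sketch of the trade-off both replies are circling: in Swift the synchronous call needs no wrapper at all, but while it runs it occupies a thread in the cooperative pool, which is the situation spawn_blocking exists to avoid on the Rust side.

    func parseReport(_ text: String) -> Int {
        // stand-in for something CPU-heavy and synchronous
        text.split(separator: "\n").count
    }

    func handle(_ text: String) async -> Int {
        // No spawn_blocking-style wrapper: a sync call is just a call.
        // If it blocks for a long time, it ties up a cooperative-pool thread.
        parseReport(text)
    }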
Of course. But the common case of calling a synchronous function from a Future is something trivial like a + b.
A thunk wouldn't be zero-cost.
If you decide to go with the async/await model, why not just go all in and have every function be async/await always? Gets you concurrency support without Colored functions. Is there a language like this?
Go.
I haven’t used Go, but I am fairly certain it’s not really trying to do the same things as Rust. IIRC, calling C from Go isn’t zero cost. It also includes a much larger runtime than Rust.
Yes, calling C from Go has significant overhead for exactly this reason. It basically requires creating a stack and copying over all the current coroutine info into it. Depending on the application, this can be accelerated by maintaining a cache of stack objects to use, or by having a "noasync" mode of compilation with all the coroutine stuff turned off (making calling C zero cost again), or by having a producer-consumer channel to a noasync thread that gets stuff passed to it from coroutine land and calls out to C.
Generally speaking, applications tend to either (1) exist as low-level shims or embedded code that doesn't use coroutines, (2) be large Rust projects that don't call out to non-Rust stuff often (e.g. a web server), or (3) be large performance-sensitive applications that call low-level APIs frequently (e.g. games). The above solutions handle all three of these cases.
What Rust chose instead was to optimize for a compilation mode which supports all three cases at once, with zero cost for FFI. This sounds nice, until you realize that what they traded off to get this was UX. Writing concurrency in Rust is lightyears better than C++, but it still sucks compared with Go or Erlang.
That sounds like a few different languages. And I think it’s fine to want a language like Rust that allows you to be less explicit about things. Based on what I’ve heard, you might just be looking for Gleam.
But I don’t think it would be the right move to make Rust something that it’s not, just so people don’t have to think about what a Future is or isn’t.
You never tried to await from within a synchronous function? There's stuff that you just can't do with Rust's approach to concurrency, that would be trivial if we allowed some limited cost abstractions (e.g. thunking the future in this case). Or just make all functions asynchronous but eliminate the state machine if it contains no yields.
The whole point of higher level abstractions is making it so that developers don't have to think about stuff. It's what I love about Rust: with the borrow checker, I know that if it compiles then I am not going to be accessing mutable or already freed state. The language and the compiler get more complicated so that I don't have to think about stuff.
Rust didn't go that way with concurrency. They instead opted to push that complexity (thinking about what a Future is or isn't) onto the developer. YOU have to think about "should this API call be async?" instead of letting the downstream user decide that.
> You never tried to await from within a synchronous function?
Whenever necessary I have been able to do this easily. Calling async code from a synchronous context is not an example of something which can’t be done in Rust today.
> Or just make all functions asynchronous but eliminate the state machine if it contains no yields.
The optimizer generally already cleans this up.
> The whole point of higher level abstractions is making it so that developers don't have to think about stuff. It's what I love about Rust: with the borrow checker, I know that if it compiles then I am not going to be accessing mutable or already freed state. The language and the compiler get more complicated so that I don't have to think about stuff.
The borrow checker is not “limited cost”. It is zero cost.
> Rust didn't go that way with concurrency. They instead opted to push that complexity (thinking about what a Future is or isn't) onto the developer. YOU have to think about "should this API call be async?" instead of letting the downstream user decide that.
There is no zero cost way of doing this.
The borrow checker is not "zero cost". That is a myth. It forces the use of non-zero overhead escape hatches (RefCell/Rc) almost every time when the code tries to do something nontrivial, that is, something actually worth doing.
The guarantees given to you by the borrow checker are zero cost.
Once you’ve exceeded its capabilities, you have the choice to either pay a runtime cost (use RefCell and friends) or deal with maintaining and validating unsafe code.
Even though I am not a big fan of its design, there are hardly any scenarios where Go doesn't fit, other than a cargo-cult bias against automatic resource management languages.
To the point that even disliking Go, I might favour it over Rust, unless using any kind of automatic resource management is forbidden and I don't feel like reaching for C++ for whatever reason.
Go is the opposite, everything is synchronous. This works great with CSP to allow for concurrent workloads, but its functions/methods are not async. I mean language where every function/method call works like async/await. The idea is probably closer to Haskell and its lazy evaluation.
And Erlang, Java as well.
What would be a good way to not have sync / async colors?
Explicit yield / resume, like Lua? Full-on CPS?
Go handles it quite nicely. It's not perfect, but it is nice. Every function is a coroutine. There is no such thing as a non-async fn in Go. But if you want explicit await-like concurrency, then you can explicitly wait on a message from a channel.
Every time I bring this up, there are people who jump in with excuses for why Rust can't do something similar. But these are either (1) historical accidents, in the sense that implementing goroutines would've broken existing code [fair], or (2) excuses. Go's approach is much harder to get right, and much more pervasive in its impact on the language. But when it's done well, it's really smooth to use.
I use Rust as my daily driver because I want the safety guarantees that Rust gives, and cargo is pretty nice. But every time I need to do something highly concurrent, I wish I was working in Go...
There is a good reason why Rust can't do it the Go way - Go's always-async concurrency requires a fairly heavy runtime. Rust is designed to be able to support systems that have no runtime at all, or a specialised one. (https://embassy.dev/ is an example of a specialised async runtime for running async Rust on microcontrollers).
Why wouldn't you be able to swap out the concurrency runtime for something else? Or switch it off entirely when not needed (making the concurrency keywords into syntax errors)?
You can, you can switch it off by declaring a normal non-async function.
It's almost as if there's two fundamentally different types of functions that people might want to declare, necessitating two function colors.
Go is a language that is primarily used for async programming, so the design of having everything async makes sense. Rust is a language that is primarily used for sync programming, async programming is catered for but it's not the focus of the language. So having everything be async doesn't make any sense at all.
> It's almost as if there's two fundamentally different types of functions that people might want to declare, necessitating two function colors.
Since any function can get indefinitely stuck in a system call, the main difference between "sync" and "async" isn't whether you can pause execution, it's who can pause execution. And the main downside to the more flexible option is that it requires extra overhead in how the stack gets allocated.
But why should the flexible/inflexible choice be made on a per-function basis? I feel like I'm manually annotating which functions will inline, except worse because certain combinations won't compile. Instead, how about having the compiler figure it out on a per-call basis?
Rust used to have a GC in the past, as well as green threads and a bigger runtime. All of this has been explicitly removed before Rust 1.0. The reasons are well documented in many pull requests and RFCs for the language.
The way the async feature works in Rust is that the asynchronous function is just syntax sugar that gets desugared into a state-machine struct during compilation. The way this state machine works is similar to how one could achieve async in a language like C. It's unfair to dismiss everything as excuses given that the fundamental aim of the language is different.
In Rust, async functions are not really colored, because again, the async function is just syntax sugar for a struct you can create and use in a sync context. The colors analogy is only really applicable in a language like JavaScript, where there's no way to poll an async function in a sync context.
Here's the thing though: Rust could have every function be an async state machine, automatically. And then the compiler optimizes away that code when it isn't needed. It would be a big pain to implement, but it's doable, and it would deliver a developer experience much closer to Go's. There isn't a technical reason for why Rust couldn't do this.
FYI you can't poll an async result in a sync context in Rust, either.
> FYI you can't poll an async result in a sync context in Rust, either.
Sure you can. Async runtimes in Rust are written in Rust, that's exactly what they do.
> There isn't a technical reason for why Rust couldn't do this.
First, it would be a huge undertaking. That in itself is a huge time/resource burden.
Second, it would add overhead to any non-async function call. Because async introduces branching on every function invocation, it would make the resulting assembly even harder to understand. This strongly goes against Rust's zero-overhead / zero-cost abstraction philosophy.
By the same measure, Go could technically remove (almost) all GC, add some kind of borrowing mechanism and steal Rust's thunder.
Elixir/Erlang, Go, Java all do. Java is new to the game with “green threads”.
Swift has the same problem, FWIW.
Yeah. I think the keyword count is less of an issue personally. At the end of the day, if we want to approach things like async/await and ownership, I would prefer "consume / borrow" over some particular symbols.
That said, I do worry that there is too much runtime machinery built into the language, and there doesn't seem to be any desire to clean it up. Newer things like actors, which people argue cannot be done as a library, bring runtime into the language. Autograd is in a similar vein. Even automatic reference counting (a.k.a. classes) would be nice to move out of the core language.
The end result is that Swift is a huge project for people to maintain, in both the compiler land and the runtime land. I am happy that Apple is paying for it, but ...
I agree, the number of keywords really doesn't get in my way day to day. You know what does get in the way? Lack of functionality.
I use Go in my day job and it's really quite frustrating to write much of the time because the compiler does so little for you. I regularly hit issues that would be checked for me in Swift, I regularly write type-unsafe code because the type checker isn't sufficiently advanced, or I implement slow things myself because it's impossible to use a generic fast implementation and I don't want a week-long side quest implementing an obscure data structure in my otherwise two-day feature.
In that way Go is much like C, and while it's trendy to hate on C++, there was a reason why C++ was created – C wasn't doing enough for people.
I do find it quite unsatisfying the way the actor executor is somehow magically generated at compile time, without any way to build your own in Swift. It's also not obvious that actors are re-entrant until you stumble into it.
It's quite interesting the way the swift compiler gets around its own complexity by wrapping core functionality as llvm builtins whenever the engineers hit a wall. Is it an abstraction? Not really sure what to call it, but I'm finding it becomes a very leaky design after working in large Swift codebases.
Sure, there is always intentionality behind the decisions that get made, and I'm not saying it's sloppy; it's not. However, every single language design decision feels one-off, and in 2024, the language is now made up of hundreds of little one-off decisions that somehow still work out okay.
Yeah. I guess that I misspoke a little bit. I am not even criticizing actors (Go has a relatively "heavy" runtime to implement its green threads too). I was originally thinking about the distributed actor proposal (and implementation) and forgot to spell that out.
> It's also not obvious that actors are re-entrant until you stumble into it.
Yeah, this part is annoying. While understandable from a runtime implementation perspective, it feels like a big mistake that will need to be corrected in the future.
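For anyone who hasn't hit it yet, here's a minimal sketch of the surprise (illustrative code, not from any real project): every `await` inside an actor method is a suspension point where other calls to the same actor can interleave, so invariants checked before the `await` may no longer hold afterwards.

```swift
actor BalanceStore {
    var balance = 100

    func withdraw(_ amount: Int) async -> Bool {
        guard balance >= amount else { return false }
        // Suspension point: because actors are re-entrant, another withdraw()
        // may run on this actor while we are suspended here, so the guard
        // above can be stale by the time we resume.
        try? await Task.sleep(nanoseconds: 1_000_000)  // stand-in for real async work
        balance -= amount
        return true
    }
}
```

Two concurrent `withdraw` calls can both pass the guard and drive the balance negative, which is exactly the kind of thing people assume the actor protects them from.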
> the language is now made up of hundreds of little one-off decisions that somehow still work out okay.
There are fixes for some of these decisions, sometimes. For example, `some` / `any` fixed some associated-type-related weirdness that you used to have to work around (a huge improvement for me at least).
But some other problems remain: the existence of existential types is not obvious at all and can cause issues. For example, a closure type is not existential and cannot be extended. This is OK most of the time, but then UnboundedRange is defined as a closure: https://developer.apple.com/documentation/swift/unboundedran... so you cannot extend it easily (which would be ideal if you wanted to extend PartialRange / ClosedRange / UnboundedRange to do something like return a Python slice object). I think this, or a similar issue, also makes it hard for tuples to automatically conform to the Equatable / Hashable protocols, which is not a big problem, but a minor annoyance.
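To make the `some` / `any` improvement concrete, here's a small sketch (types made up): since Swift 5.7, a protocol with an associated type can be used both as an opaque return type and as a constrained existential, which previously required manual type erasure or generic contortions.

```swift
protocol Renderer<Output> {
    associatedtype Output
    func render() -> Output
}

struct TextRenderer: Renderer {
    func render() -> String { "hello" }
}

// Opaque type: a concrete but hidden type, resolved at compile time.
func makeRenderer() -> some Renderer<String> { TextRenderer() }

// Constrained existential: a type-erased box, usable since Swift 5.7.
func renderAll(_ renderers: [any Renderer<String>]) -> [String] {
    renderers.map { $0.render() }
}
```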
They actually allow you to opt out of ARC in Swift 6 by marking types as non-copyable (~Copyable), along with the "consume" and "borrowing" keywords. There are restrictions (e.g. their use in generics is limited), but it's pretty great for optimizing hot loops.
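Roughly, the usage looks like this (Swift 5.9+; the names are illustrative, not from any real API):

```swift
struct FileDescriptor: ~Copyable {
    let fd: Int32

    init(fd: Int32) { self.fd = fd }

    // `consuming`: takes ownership; the caller can no longer use the value afterwards.
    consuming func close() {
        // the real close(fd) call would go here
    }

    deinit {
        // cleanup runs exactly once, when the value is finally destroyed
    }
}

// `borrowing`: the caller keeps ownership; the function can read but not consume the value.
func sizeHint(of file: borrowing FileDescriptor) -> Int {
    Int(file.fd)  // placeholder computation
}

var file = FileDescriptor(fd: 3)
let hint = sizeHint(of: file)  // borrowed: `file` is still usable afterwards
file.close()                   // consumed: `file` cannot be used after this line
```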
Have you had much success adopting consume and borrow?
Once you need anything in the Swift standard library, you can't use a non-copyable struct with it. The keywords are all-in too, so either you write completely self-contained, Rust-like code that doesn't touch your other code, or you revert back to using the older copyable types.
I've tried many times to ease into using them, but I can never stick with them, and eventually revert.
I do really like the high-level design though and look forward to it evolving into something usable.
Nope. The ergonomics are really bad at the moment. I also had to revert any places I used them in API surface.
I haven’t had a use case yet where I could use them privately in the internal functioning of a type.
I thought borrowing could still be used with Copyable types, though
This is an insulting piece. It's a poor hit job by someone who doesn't really know what is going on, which is extra unfortunate because Swift has a bunch of problems. Some of them are actually adjacent to the claims here! But this is just unsubstantive flamebait.
Swift is controlled by Apple, Inc. Nobody worth listening to is denying that. The steering committee is almost all Apple employees, most of the commits going in come from people who commute to Cupertino, and whatever the progress is on Windows or Arduino or whatever, it's clear that "first class" really means iOS. That's all true. Apple makes proposals, they seek some community input, and in the end they pick what they like and it ends up in the language.
But even with that said, the rest of this post is of low quality. Tim Cook and his MBA buddies didn't personally come in to ruin the language because the shareholders aren't making money. Chris Lattner is, whatever his contributions to the language previously, just a guy. He would probably try to put Swift on GPUs today if he had his way; it's not the first time he's sold it out. You've got to go beyond "omgz so many keywords so complicated!!1!!" and look deeply at the real sources of complexity. Swift has a powerful generics system whose worst-case behavior really sucks. The discussion process has a bunch of busybodies with nothing better to do than argue about the names of things. Yes, these are real problems. It's funny when there are random hacks in the source code to support things that Apple is shipping, but that is how a large project works. You are more than welcome to laugh about it on Twitter, but substantive criticism it is not. Which is a shame, because there is plenty of it to go around.
I like how the conclusion of the article paints a very positive picture for the language. It’s as if the entire negative tone of the article was a poor attempt to sound insightful for clicks.
These kinds of pieces are what end up with people worshiping stuff like Go, with its clunky design evolution. It turns out that either the language handles the complexity of the world, or the source code makes it visible as needless boilerplate.
Agree on the general tone; only upvoted by accident.
I've used Python for nearly two decades. It's not a great programming language to write or maintain, and its package management makes npm (which is not great itself) look like a walk in the park. It's a bit unfortunate that it became AI's go-to programming language.
I know it's subjective, but if we're arguing about Swift's quality by comparing with Python, I'm not buying it.
To each his own. I thoroughly enjoy Python, having used it for ~15 years now, have absolutely no issues with package management and haven't for the better part of a decade, and believe there are good reasons why Python "won" the data science, machine learning, etc. "battle." Namely, it's easy to use. It allows you to focus more on what you are trying to build than managing the language itself.
It has strengths and weaknesses depending on domain just like any other language. Given it is one of the most, and sometimes _the_ most, popular programming language, I support the article using it as a comparison point.
I don't necessarily disagree with you, but regarding Python winning data science, I think that's in large part due to a) Anaconda creating a single cohesive alternative ecosystem, shunned by engineers but loved by data scientists who just want to get things done, and b) the fact that with a couple of major libraries/frameworks (Pandas, Numpy, Sklearn, maybe a few others) you can get most things done, meaning there's less dependency management to do overall.
As a thought experiment, imagine someone had built Anaconda for C and then, instead of making Python APIs out of those mostly C libraries, just left the interface as C. Same packaging solution, same libraries, but the user has to learn C. Or swap C for Go even. No memory management but still more esoteric syntax and typing, interfaces, etc. required. How far does that get?
The tools you mention were built on/around Python because it was realistic to ask technical but non-programmer users to learn to use Python. The lack of explicit typing and the more "conversational" nature of Python syntax are, IMO, what make that possible. So Python "wins" and other tooling is built on/around it to make it even easier for non-programmers to operate in the ecosystem, which solidifies its position even more.
I would love to hear positive python packaging stories because my other language packaging experiences (lisps, R, rust) felt so much better. Did you try to maintain python packages on a system with old gcc that cannot be upgraded and walk through the dependency hell explicitly while having SAT solvers fail to find compatible solutions after minutes at a time? Have you managed to keep bit-level reproducibility on future reproduction runs for more than five years in any other way than keeping the old OS and hardware around? What are your packaging tools of choice (setup.py is obsolete soon, poetry did not stick, pyproject.toml for the win I guess, conda at glacial speeds, pip too slow and prone to security risks, uv faster but incomplete, …)? Did you not encounter packages with dependency specs that no longer worked on future dates?
Python is excellent as long as you run it on your own computer.
Having to distribute a Python application with all of its dependencies and libraries is a pain I don't want to experience.
Python got adopted by academics around ‘10, because it had lower barriers for new programmers than Java (now legacy) and C (now a specialization).
The language and tooling options of today mean that the initial reasons for academia embracing Python are long gone. Python now presents more barriers to new programmers than alternatives.
I agree. It’s a shame that the machine learning community has married itself to Python, since it gives Python a staying force that it doesn’t really merit.
They're comparing governance of the language so your bellyaching about package management seems neither here nor there.
I also think Swift is getting bloated (and this can be seen in the compile times, which are getting out of hand). Sometimes I feel like some features of Swift have been implemented without being well thought out, just because they are popular in the programming crowd. I am thinking more specifically about metaprogramming, which was originally a non-goal, but I guess with time they saw it has a point and they replicated Rust's approach with syntactic macros, which to me is a very poor approach. You need too much boilerplate to write a macro, since it's basically a compiler plugin, and you can't interpret the semantics of the underlying code without reimplementing part of the compiler's logic. I can find other examples, and that's not always bad, but I am not sure that's the right way of designing a language.
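To make the boilerplate point concrete, here is roughly what just the declaration side of a trivial macro looks like (module and type names are illustrative); the actual expansion logic has to live in a separate compiler-plugin target built against swift-syntax:

```swift
// Declaration side, in the library that exposes the macro. The expansion
// itself is implemented elsewhere, in a dedicated macro plugin target.
@freestanding(expression)
public macro stringify<T>(_ value: T) -> (T, String) =
    #externalMacro(module: "MyMacrosPlugin", type: "StringifyMacro")

// Call site:
let (result, source) = #stringify(1 + 2)
// result == 3, source == "1 + 2" (assuming the usual stringify example implementation)
```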
Macros were an early Swift goal, but they were neither the highest priority nor a small amount of work that could be tackled as part of meeting higher-priority goals.
As an example - one could argue property wrappers exist because they were needed before macros could get done, and their syntax was chosen to align with where the Swift core team thought macros would land in the future.
I still have no idea where macros fit into a project, or why I would want to use them. They seemingly built this functionality for SwiftUI previews maybe?! and didn't really consider how this could be a core part of the language.
At least with rust, you can just lightly add them in the same module and at a minimum use them to avoid boilerplate code duplication. Macros in rust work as a core language feature because of the way the borrow checker works. You can't really break apart some methods since you need to establish ownership. So, you can easily end up with duplicated code that gets noisy. Macros are the escape hatch to do that, and yes, they are also complex to write, but at least I know when I'd want to use them.
Swift doesn't really document how to create macros, and they didn't really spend energy explaining how they would be valuable to a project and worth the engineering cost of implementing them.
They fit into a project the same way C, C++, Lisp, Scheme, Rust, and Scala macros, Java and Kotlin compiler plugins, and .NET code generators do.
There are several WWDC sessions about how and why.
https://docs.swift.org/swift-book/documentation/the-swift-pr... seems to do both to me?
This article is full of statements that are not backed up by any material proof. The author claims that Swift features no longer compose without giving a single example. I write Swift SDKs for a living and I have never thought that any of the recent features felt incompatible, incongruous, or redundant. Sure, each has its strengths and weaknesses, but that results in an expressive language where every problem has several solutions with different tradeoffs.
When it comes to governance, it’s not without wrinkles, but it’s not as bad as the author makes it sound. If anything, things got a bit less “dictatorial” since Chris left. Different core team members pull in slightly different directions which made the language more balanced in my view.
If you have a recent machine, the actual developer experience isn't too bad. It's just not a language for language purists (was it ever?).
I'm enough of a purist to be annoyed whenever I see the "expression took too long to type check" error. (I think bidirectional type inference isn't worth it with Swift)
The gaggle of verbose pointer types makes me want to switch to C++ whenever I have to deal with memory directly.
As the article mentions, a bunch of features were added for the sake of SwiftUI. Function builders allow SwiftUI's syntax to be minimal: they let you list child views directly instead of constructing and returning the container's content by hand (see the sketch below).
Given the rather bad (still!) error messages you get with SwiftUI that seem to be a result of function builders, I'd say it wasn't worth it. At least I get fewer of the "couldn't produce a diagnostic, please file a bug" errors than I used to.
Then there are property wrappers, which wrap struct/class fields with get/set code (IIRC Lattner didn't like the property wrappers). They've been partially replaced in SwiftUI by macros. The @Observable macro (probably the most widely used one) decorates your class with code that notifies listeners (almost always SwiftUI) of changes. I'd be curious to see what SwiftUI would look like without property wrappers (or macros).
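As a rough illustration of what the builder syntax buys (a sketch, not SwiftUI's exact desugaring):

```swift
import SwiftUI

// With the @ViewBuilder result builder:
struct Greeting: View {
    var body: some View {
        VStack {
            Text("Hello")
            Text("World")
        }
    }
}

// Roughly what the builder produces for you behind the scenes:
struct GreetingExpanded: View {
    var body: some View {
        VStack(content: {
            ViewBuilder.buildBlock(
                Text("Hello"),
                Text("World")
            )
        })
    }
}
```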
I think they had a missed opportunity to really add robust updating of views in response to state changes. Currently it's still relatively easy for your SwiftUI views not to update because your data model uses some object that isn't @Observable.
I wrote a UI library inspired by SwiftUI, but in Rust [1], and of course I couldn't add anything to the language, and more experienced Rust programmers discouraged me from using macros. So it can be done without all the extra stuff Swift added.
[1] https://github.com/audulus/rui
Isn't it by design that only observable or state data update views? Otherwise there would be significant overhead watching everything.
Every few years I revisit Swift to see if things have improved but the language is now even more complex with cryptic compiler errors and poor runtime error messages, long build times and slow iteration times. Xcode is still a horrible IDE and unfortunately JetBrains AppCode has since been discontinued, basically forcing using VS Code for development and Xcode to run/debug tests. It can finally do async/await but its async http requests were unstable/unusable on Linux and cross platform is a chore with littering macros like `#if canImport(FoundationNetworking)` everywhere.
Used to think Swift was the future given the resources Apple could throw behind it, but in its current state, after 10 years, I no longer do. It will remain a mainstay thanks to the Apple ecosystem, but I don't see its popularity extending far beyond that.
I think Swift has been strangled, but I'm not convinced it's because of its governance structure (with Apple at the top). I get the impression that Swift is the world's best sandbox for programming language nerds to play in—and that Apple is woefully understaffing the team writing technical copy for the compiler (e.g. error messages), and the documentation.
The rate at which the Swift language has changed just in the past 3 years feels as significant as the rate at which it changed between 2016, with the release of Swift 3, and 2022.
I'm sure that people who are 100% all in on iOS development feel like the rate of change is delightful, but for the rest of us who have other responsibilities, it's exhausting and bewildering.
On the other hand, this burst of change brought it over a line into functional and declarative usability. Somewhere from 5.7 through 5.10 it indeed became meaningfully more “delightful” to use.
Delightful not because of the rate of change, but because of the programming approach it's now enabled.
Surely the "some" keyword the author is railing about actually _improves_ composability, and is in fact exactly in line with Lattner's vision "Simple things that compose." The accepted answer here [1] explains this better than I can.
I can't share the author's enthusiasm for Swift on server either. WASM support is not production ready so Cloudflare workers or Deno deploy is out. Vapor is the only properly maintained framework I can find (happy to be corrected) which I did not find ergonomic. Google Cloud Functions or AWS Lambda would be options but there's no reason to limit oneself when Rust is production ready on most web platforms.
[1] https://stackoverflow.com/questions/56433665/what-is-the-som...
Swift has a lot of features, but you can leave many of them alone to begin with and learn them when you need them. Low-syntax defaults work in a reasonable, streamlined way. SwiftUI will confront you with more syntactic variations than other Swift code, but that's not unusual in a DSL. Overall it's a smooth learning curve due to progressive disclosure in language features.
If Swift compiled five times faster, I would want for very little. But I don't know how they'd walk back the combination of type-based overloading and implicit conversions. See: https://belkadan.com/blog/2021/08/Swift-Regret-Type-based-Ov...
I don't necessarily disagree with the article, but I think the language is in a much better place than it was a few years ago, with a few exceptions. As the author points out, Foundation and a bunch of frameworks and libraries are being open sourced and maintained by Apple and the Swift foundation, all of which are cross-platform (including Windows and Linux).
The static concurrency checking stuff is cool on paper but has so far just introduced noise to our codebase so we’re not touching it until it cooks a little more.
I'm basically asking for the same thing I was 5 years ago, though, which the author also touches on: fix the compiler issues.
Official Android support would be nice too but the community is filling the gap there.
I use Rust a little on the side and tooling, compiler messages and ecosystem are far ahead of Swift. Language wise I still prefer Swift for the work I do.
An honest question here from outside of the Swift world: why does it matter?
I'm a Clojure developer, have been exclusively one for the last 10 years, and have no intention of changing anytime soon (in other words, I don't closely follow what the majority of the development world is doing or what the trends are). A few years ago we had "Open Source is Not About You" [1] by Rich Hickey, basically pushing back that the creator(s) of open source software don't owe anything to the community. That piece has aged well for me: now I understand that something is what it is, not what I wish it was. And more importantly, the goal of open source isn't to be liked or used by everybody. It's a thing put out by its creator for others to use, and that includes Apple.
So really, why does the governance matter so much? If you don't agree with something, why not fork it and make it like you want it?
[1]: https://gist.github.com/richhickey/1563cddea1002958f96e7ba95...
Forking a programming language is no small feat.
Look, I wish the compiler was faster and the diagnostics more helpful but I find Swift pretty great! And I'm using it on Linux.
Call me a fanboy, but Swift is one of the best languages I've ever had the privilege of using. I'm truly hopeful with some more work, it's able to become the de-facto cross-platform language.
The complexity is very manageable, a decent Java, Objective-C or .NET developer can pick it up in less than a day, is feature-packed and stuff just make sense.
I also love that we can write high-performance graphics apps without C++ (OK shaders are kind of C++ but that's fine as long as it stays there).
Ladybird switching to Swift made me think. Then there was this article recently on HN on Swift vs. Rust. I would give Swift a chance; I do think it currently lacks open-source projects for server-side development (all the batteries).
I think Swift is a really nice language when you look at the design. I really liked protocol extensions for example.
The problem I have with it is that some decisions are very ideological.
You can easily link to C libraries, for example, but the pointer types in Swift are so fucking unusable (because pointers are evil) that it makes no sense to try it (unless you cannot choose another language).
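Just getting a plain byte buffer into a C function ends up looking something like this (the C call itself is hypothetical):

```swift
// Allocate, pass to C, and clean up a plain `uint8_t *` buffer.
let count = 64
let buffer = UnsafeMutablePointer<UInt8>.allocate(capacity: count)
buffer.initialize(repeating: 0, count: count)
// c_fill_buffer(buffer, count)   // the imported C call (hypothetical)
let bytes = Array(UnsafeBufferPointer(start: buffer, count: count))
buffer.deinitialize(count: count)
buffer.deallocate()
```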
It doesn't matter, because Swift is one of those languages that enjoys an overlord, anyone that wants to target its overlord platforms eventually has to deal with Swift.
Same with Java and Kotlin on Android, .NET and C++ on Windows, JavaScript on the Web, ISO C on UNIX/POSIX, Go on Docker/Kubernetes,....
Thanks so much for sharing my work! :)
The devs here are a lot more nuanced than the guys on Reddit lol
I disagree. [edit: sorry: not completely?]
Decisions are made openly with excellent (if exhaustive) discussion, clear reasoning, and mostly on the merits.
Features are building on prior features over the years, and when things have needed to be changed or deprecated, they get changed. So the language is alive but not encrusted.
Decisions are made much more independently and quickly than Java, which has a similar installed code base and motivated corporate patron. Both small-scope and sweeping architectural changes work through the evolution process pretty quickly.
All languages start with a clean sweet spot but major gaps, and then migrate toward common features as they age. Right now, Swift has probably the most mature concurrency plan of all languages, and is beginning to address the lifetime issues presented by value types.
No other language blends low-level efficiency and control with high-level type safety so well, not to mention seamless interop with C/C++, static macros, etc.
What's not maturing as fast: third-party libraries ecosystem because most people just need Apple's, Windows support (because - why?), and alternate IDE's, static analysis, etc.
As for 217 keywords, etc.: be glad there's a name for everything, without a plethora of ways to do the same thing. If it weren't easier to do complicated things with a correspondingly complicated language, everyone would be writing functional lisp code.
The real problem now is there's no real agreement on how to do the function coloring, effects, traits, etc., but that's what's needed to make use of modern hardware.
In particular, the type system being bidirectional has made it both pleasant (when it works) and painful (when it doesn't), and stuffing all this new behavior into a type system constraint solver is a bit of a black art. Venn diagrams and verbiage aside, it's just hard for people to think about, e.g., what ~Copyable means for generic parameters, or how a parameter can place a function in a different memory region.
But they'll do it after a boatload of discussion, and then make a dumbed-down happy-path 10-minute video for WWDC with code folks can play with - 10 million folks.
If anything, the language is strangled by history, but remains attractive because the team does such a good job.
Who cares about keyword count? Reserved words preserve flexibility while not realistically impairing programmer expression. What do you do with a keyword-impoverished language?
1) overload the keywords you do have to mean random unrelated things depending on context (which is why "static" in C++ means a bunch of random things), or 2) create context-dependent keywords (e.g. "final" in C++ and "match" in Python), which work, but complicate every parser everywhere.
As much as it pains me to type the following: JavaScript got it right. The initial version of the language reserved a number of keywords (e.g. "class") that would only become useful two decades later.
Paywalled?
Yes, I didn't know about it. When I read it, the full article was there without a paywall. It seems that after some traffic they start showing a prompt to subscribe to read the page.
Swift will be fine as long as they invest in a Mini-Swift that improves compile times by 10x and interoperates seamlessly with Papa-Swift, the current behemoth.
I'm sure they're aware of the compile times. If they can make C++ interop work (not easy), they can make Mini-Swift :)
As for this governance talk - I can only shake my head at these attempts to start drama. I don't understand it.
> Apple has the purest incentive of all: maximise profit for shareholders.
I'd call it the impurest incentive of all.
pure adjective
unmixed with any other matter
Sure, it's still completely impure in another sense, as in for the benefit of the language and its users.