> Andrew’s design decisions in the language have always been impeccable. I’ve never seen him put a foot wrong and would have made the same change myself.
Interesting, who designed the old Zig IO stack which alas Andrew needed to replace?
This is a few months after `git init`. You can see I was really just working on the parser, with a toy example to get things started.
Over time, I merged contributions that made minor changes and shuffled things around, and these APIs evolved to kind of work okay. But nobody really considered "the Zig IO stack" as a whole and put in design effort. That is happening for the first time right now.
This is how programming languages are constructed. Things evolve slowly over time, and periodically you have to reevaluate things and do major reworkings.
I think what you're not appreciating is how this design is a huge improvement over the status quo, not only in Zig, but also the streaming interfaces in most languages.
Wait till the SD25 talk on this comes out, to first understand the rationale a bit better!
> I think what you're not appreciating is how this design is a huge improvement
The point was that if he did the old design, which needed improving badly enough to justify breaking the language's backwards compatibility, then why say his decisions are impeccable? Pobody's nerfect.
Yes, and my point (in response) was that Zig's status quo was no different from other languages, but now is better. (There's some humor in the issue's title “Writergate” here!)
Again, we use Zig, and this change is welcome for us.
We also like that Zig is able to break backwards compatibility, and are fully signed up for that.
The crucial thing for TigerBeetle is that Zig as language will make the right calls looking to the next few decades, rather than ossify for fear of people who don't use it.
> Zig already has the highest quality toolchain and std lib of anything I would use.
My couple of days of experience with Zig left me underwhelmed by the std lib; not that it is bad, but it feels like it is lacking a lot of bare essentials. To be expected for a new pre-1.0 language, of course.
Depends on which language you're coming from. Compared to C or even C++, the Zig stdlib has already many more things to offer. Compared to Python or Node.js it's quite bare bones.
Fair, I was mentally comparing to Go. I was a bit disappointed there weren't more wrappers around basic OS stuff. The Go stdlib wraps everything and does its best to make stuff cross-platform.
In my specific case I was trying to send some DNS messages. I went the route of linking libc and using the POSIX data structures for DNS messages, and struggled quite a bit with how to map the C data structures to my program.
This kind of thing is a big barrier to adoption unfortunately.
Good to know, also thanks for the detailed reply! Glad you are fully aware of these nuances, but it also doesn't surprise me considering your amazing presentation of Tigerbeetle! Much success in the future.
Thanks zwnow, appreciate your kind words, and my pleasure!
I think you'll enjoy Andrew's talk on this too when it comes out in the next few weeks.
The velocity of Zig has been valuable for us. Being able to put things like io_uring or @prefetch in the std lib or language, and having them merged quickly. Zig has been so solid, even with all the fuzzing we do. It's really held up, and upgrades across versions have not been much work, only a pleasure.
I don't think even those are particularly short periods. TestCase.assertEquals() was deprecated in Python 3.2 (February 2011) and removed in Python 3.12 (October 2023). 12 ⅔ years to get rid of a silly alias because it's a breaking change (of a single character).
It's not about sticking around on an old version, it's about ever being able to catch up, and what the rest of the ecosystem is going to do. Python did this major version bump that broke a lot of the ecosystem, and it went so poorly that they've effectively promised never to do it again and completely excised any thought of ever having a major version bump again, and other languages and communities now point to it regularly as a debacle to be avoided.
When you break things regularly, you're forcing a choice on every individual package in the ecosystem: move forward, and leave the old users behind, or stay behind, and risk that the rest of the ecosystem moves forward without you. Now you've got a whole ecosystem in a prisoner's dilemma. For an individual, maybe you can make a choice and dig in and make your way along without too much trouble. But the ecosystem as a whole can't, the ecosystem fractures, and if it doesn't converge on the latest version, it slowly withers and dies.
Then those developers won't ever use anything ever. Why would breaking changes in an explicitly unstable development version exclude it from use for all time?
If you want stability, stick to stuff that has stability guarantees, but at the very least let them make breaking changes during development.
Every time I touched Zig, examples I found on the internet were no longer working. I worked on a project for a while and then the stuff I used was deprecated / broken on the newer version.
I like Zig, but I'm waiting for it to become somewhat stable, because the amount of breaking changes feels pretty significant. I suppose that's the price of progress.
That is because the language is pre-1.0.
The new language I follow a lot is Mojo, it also has this problem.
I think the only way to follow a new (unstable) language is to join whatever community where the conversation happens; otherwise, what you think you know about the language will become outdated pretty quickly.
Just to start some discussion about the actual API and not the breaking change aspect of it:
I find the `Reader.stream(writer, limit)` and `Reader.streamRemaining(writer)` functions to be especially elegant for building a push-based data transformation pipeline (like grep or compression/encryption). You just implement a Writer interface for your state machine and dump the output into another Writer, and you don't have to care about how the bytes arrive or how they leave (be it a socket or shared memory or a file) -- you just set the buffer sizes (which you can even set to zero, as I gather!)
`Writer.sendFile()` is also nice; I don't know of any other stream abstraction that provides this primitive in the "generic interface" -- you usually have to downcast the stream to a "FileStream" and work on the file descriptor directly.
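For illustration, a minimal sketch of the consumer side of such a pipeline, assuming the new non-generic `std.Io.Reader`/`std.Io.Writer` types; only `streamRemaining` plus a final `flush` are used here, and the exact names may still shift before 1.0:

```zig
const std = @import("std");

/// Push everything from `in` to `out`, whatever either end happens to be
/// (socket, file, fixed buffer, ...). Any transformation/state machine would
/// live behind whatever implements the `out` interface.
fn pipe(in: *std.Io.Reader, out: *std.Io.Writer) !void {
    _ = try in.streamRemaining(out); // returns the number of bytes moved
    try out.flush();
}
```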
re: sendfile in the interface - that's important because while downcasting the stream to "FileStream" would work if your pipeline looks like A -> B, it falls apart the moment you introduce an item in the middle (A -> B -> C). Meanwhile I have a demo of File -> tar -> HTTP (Transfer-Encoding: chunked) -> Socket and the direct fd-to-fd copies make it all the way through the chain!
As a hobby Zig developer, it's a bummer to see a breaking change in something so fundamental, but I get that this is what I accept when building on a pre-1.0 language.
I hope that the Zig team invests more into helping with migration than they have in the past. My experience for past breaking changes is that downstream developers got left in the cold without clear guidance about how to fix breaking changes.
In Zig 0.12.0 (released just a year ago), there were a lot of breaking changes to the build system that the release notes didn't explain at all. To see what I mean, look at the changes I had to make[0] in a Zig 0.11.0 project and then search the release notes[1] for guidance on those changes. Most of the breaking changes aren't even mentioned, much less explained how to migrate from 0.11.0 to 0.12.0.
>Some of you may die, but that is a sacrifice I am willing to make.
>-Lord Farquaad
[0] https://github.com/mtlynch/zenith/pull/90/files#diff-f87bb35...
[1] https://ziglang.org/download/0.12.0/release-notes.html
I'm definitely looking at the example set by hare with interest[0]. Also unironically love Shrek. I once hosted a viewing party of Shrek Retold[1] in my tiny NYC apartment :D
[0] https://harelang.org/blog/2025-06-11-hare-update/
[1] https://www.youtube.com/watch?v=pM70TROZQsI
Good to hear! (on all fronts)
An automated tool would be great, but even good documentation with examples of before vs. after code snippets would go a long way.
This is why I'm surprised when production projects, like Bun, choose to use Zig. I don't think the language itself is a bad choice (although I do disagree with some of the design decisions), but having to make substantial changes to a large code base every so often, whenever a breaking change like this lands because the language is pre-1.0, isn't something I would want to deal with.
Zig just caught up with the practice that runs rampant in JavaScript land ;)
maybe... it's got some really good things going for it that are worth the pain.
One issue I have with the old reader/writer pattern is that it is not easy to store them in a struct. Readers and writers are passed into a function as `anytype`, i.e. anything that implements the read() or write() functions. Often, in a struct's init() function, I want to take in a reader/writer and store it for later use. That's close to impossible, since I don't know what type to give the struct field that stores them.
Does the new change make it easier to store reader/writer in a struct?
Yes, that is precisely what "non-generic" means.
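For example, a sketch along these lines becomes possible (assuming the new concrete `std.Io.Writer` type, with `writeAll` as the plain write-everything helper):

```zig
const std = @import("std");

// The field is just a pointer to the non-generic interface, so no `anytype`
// parameter or generic struct is needed to stash a writer for later use.
const Logger = struct {
    out: *std.Io.Writer,

    pub fn init(out: *std.Io.Writer) Logger {
        return .{ .out = out };
    }

    pub fn log(self: *Logger, msg: []const u8) !void {
        try self.out.writeAll(msg);
        try self.out.writeAll("\n");
    }
};
```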
something that I don't understand about the actual API:
I maintain the zigler library, and one thing that was useful about the old async "colored-but-not-really" functions was that they implicitly tolerate having internal suspend points (detail: https://www.youtube.com/watch?v=lDfjdGva3NE&t=1819s) -- I'm not sure if having IO be a passed parameter will still let me do that? Can users build their own functions with yield points? And will you be able to jump out of the frame of a function and give control back to the executor, to let it resume later?
Hi Isaac, good to (virtually) see you.
As you're aware, that feature of the language ("stackless coroutines", "generators", "rewriting function logic into a state machine") was regressed. At first, this new IO interface won't have that capability.
However, as a followup issue, I'd like to reintroduce it, potentially in a more low-level manner, for use inside IO implementations. Combined with restricted function pointers, this will allow functions that can suspend to pass through runtime-known function pointer boundaries - something that was terribly clunky before, to the point that it compromised the entire design. This means that, again, the same IO interface usage code will be able to be reused, including when the implementation uses suspend points, and the automatic calling convention rewriting will be able to propagate through the interface into the usage code.
The issue to track is: https://github.com/ziglang/zig/issues/23446
I'll add that I'm still keen on the previous suspend/resume keywords and semantics as a solution to this issue.
thanks! I would have asked on stream but I'm in a bit of a different timezone than usual so my ability to track livestream times competently has regressed.
As an aside, do you think in the near future there will be a "guide to building a compiler backend" either in-project or by the community?
For context this was presented, alongside other things, in the Zig Roadmap 2026 stream.
VOD: https://youtu.be/x3hOiOcbgeA
A big change like this makes me hopeful Zig may revisit and improve other design choices in the future.
I don't mind breaking changes if I can fix them within a day.
What bothers me with C/C++ is how difficult it is to cross compile a simple Windows + SDL app from inside WSL without MSVC installed.
I've spent weeks on this.
If Zig saves me from that nightmare, and still lets me use C++ libraries, I will gladly switch over to it.
None of which has anything to do with C++ the language.
That doesn't matter much when it's specifically the C/C++ compiler vendors who don't care about fixing the problem. It would be trivial for C/C++ compiler vendors to make cross-compilation as simple as with the Zig toolchain, but they don't care about the problem. Fast forward to today and the best C/C++ cross-compilation toolchain is the Zig toolchain.
In theory yes, in practice that's irrelevant unless you can show someone has done it, and nobody has in 40+ years as far as I know
You'll have to write C API wrappers around your C++ libraries to access them from Zig, but other than that I can cross-compile my mixed C/C++/Zig projects using Windows APIs like DXGI/D3D/WASAPI with `zig build -Dtarget=x86_64-windows` from a Mac with the vanilla Zig toolchain.
...you don't even need to port anything in your C/C++ project to Zig, just integrate `zig cc` as C/C++ compiler into your existing build system, or port your build system files to build.zig.
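If you do go the build.zig route, a minimal sketch looks something like this (assuming the 0.14-era `std.Build` API, which has shifted between releases; the source file names are placeholders):

```zig
// build.zig -- cross-compile with e.g. `zig build -Dtarget=x86_64-windows-gnu`
const std = @import("std");

pub fn build(b: *std.Build) void {
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});

    const exe = b.addExecutable(.{
        .name = "app",
        .root_source_file = b.path("src/main.zig"),
        .target = target,
        .optimize = optimize,
    });

    // Existing C/C++ sources are compiled by the bundled clang, so no MSVC
    // or mingw install is needed on the host machine.
    exe.addCSourceFiles(.{
        .files = &.{ "src/glue.c", "src/renderer.cpp" },
        .flags = &.{"-fno-exceptions"},
    });
    exe.linkLibC();
    exe.linkLibCpp();

    b.installArtifact(exe);
}
```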
That works out great, since all the libraries I need are C or have C wrappers anyway. I might actually do this, thanks.
Wait what. Shouldn’t zig crosscompile C++ just fine?
Yes, the C++ code compiles just fine, but to call into C++ APIs from Zig you'll need a C API wrapper (and the same is true for ObjC APIs). Not an issue of course for pure C++ projects when the Zig toolchain is just used for cross-compiling.
I tried Zig some time ago to use with microcontrollers. It has a generator for the pins, which was nice. But subsequent versions broke as Zig changed syntax. So I started going down the rabbit-hole (it needed a newer version of llvm, for example) until I eventually decided that the game wasn't worth the candle.
The fact that another breaking change has been introduced confirms my suspicion that Zig is not ready for primetime.
My conclusion is to just use C. For low-level programming it's very hard to improve on C. There is unlikely to be a killer feature in some other contender that lets you write the same code in a fifth of the lines, nor one that makes the code any more understandable.
Yes, C may have its quirky behaviour that people gnash their teeth over. But ultimately, it's not that bad.
If you want to use a better C, use C++. C++ is perfectly fine for using with microcontrollers, for example. Now get back to work!
Well, that's why Zig is 0.x and not 1.x. I'm fine even with large scale breakage if the direction is right (and looking at the mess that C++ has become for the sake of backward compatibility, IMHO breaking changes are also the better option after 1.x, as long as there's features to help manage the required changes).
Also, "Zig the language" is currently better designed than "Zig the stdlib", so breaking changes will actually be needed in the future at least in the stdlib because getting it right the first time is very unlikely, and I don't like to be stuck with bad initial design decisions which then can't be fixed for decades (again, as a perfect example of how not to do it, see C++)
I can relate, because over the years I have had so many things break left and right, but at the same time (unless you are talking about pre-releases/alphas) I think it is unhealthy to be wary of breaking changes.
A language, especially, should be able to do it. Extreme compatibility is how you end up with the mistake that is C.
A breaking change that fixes something is an investment that keeps paying off indefinitely into the future.
Fear of making such changes, as with C, is how you accumulate errors, mistakes, and millions of dollars wasted, because it compounds like debt.
P.S.: I think languages should be quick to break things pre-1.0, and maybe leave room to do it every 5-7 years after that. Despite the Python debacle (which in fact reflects more on Python than on breaking changes in general), it should be possible to make sunsetting relatively painless with good care.
It absolutely is, and we (ZML) are using it with great success. That said, Andrew said he would absolutely break compat if it meant things go in the right direction. Yes, it can be painful sometimes, and yes, I do not always agree with his choices, but it has never been a blocker nor a significant time sink.
And in the end, things do improve significantly.
In this case, I think the new IO stuff is incredible.
It also helps job prospects of Zig programmers within organizations that have already adopted Zig -- more breakage, more job security.
advances the purposes of cynics as well, so big bonus.
> The fact that another breaking change has been introduced confirms my suspicion that Zig is not ready for primetime.
Huh, it was the 0.14 version number for me.
Pandas (different world: Python) arguably peaked in hype (if not popularity) before reaching 1.0
0.x doesn't say as much as it used to 20 years ago, many fine projects keep it for way too long.
Zig has pretty well documented 1.0 goals. It was the first thing I heard from Andrew about Zig. https://youtu.be/5eL_LcxwwHg
Yes, that was the point. To understand what that "0.14" means, we need to know those "well documented 1.0 goals" and some hour long YouTube video. That is, merely the "0.14 version number" without context is not enough, like your previous comment said.
You look at the version, the milestones https://github.com/ziglang/zig/milestones, and it makes sense. The YouTube video is just more proof. Picking up a 0.14 piece of software without looking into the most basic thing about it, like "oh, what kind of 0.14 is this", and then complaining that "it's not ready for prime time" is odd behavior.
Well, that's a sentiment I don't quite agree with. It willfully ignores the industry experience with C/C++ out of which Zig, Rust, D, and others grew.
If your microcontroller project is, say, <5000 lines, maybe... but an OS, or a Mellanox verbs or DPDK API, won't fall so easily to such surface-level thinking.
Maybe Zig could help itself by providing, via LLVM, what Google sometimes does for large API-breaking changes: a tool that searches out old API invocations and updates them to the new ones, so upgrading is faster and more operationally effective.
Google's tools do this and give the dev a source-code PR candidate. That's how they can change zillions of calls with confidence.
I haven’t done embedded stuff in Rust, but the nostd crates and automatically generated libraries from manufacturer SVDs seemed neat. The ability to trivially pull in already written functionality would also seem fantastic.
But at some point it'll be ready. Might it be worth it then?
Obligatory C is not a low level language: https://queue.acm.org/detail.cfm?id=3212479
I also have to disagree with C++ for micro controllers / bare metal programming. You don't get the standard library so you're missing out on most features that make C++ worthwhile over C. Sure you get namespaces, constexpr and templates but without any standard types you'll have to build a lot on your own just to start out with.
I recently switched to Rust for a bare metal project and while it's not perfect I get a lot more "high level" features than with C or C++.
In deployments where C and C++ are the only two options available, and management is not willing to adopt another one, C++ still offers lots of improvements over C, as a kind of "TypeScript for C".
Building our own types was a rite of passage for C++ programming back in the early 1990's, and university curriculums for C++ as well.
> You don't get the standard library
Why is that? Sure, allocating containers and other exception-throwing facilities are a no-go but the stdlib still contains a lot of useful and usable stuff like <type_traits>, <utility>, <source_location>, <bit>, <optional>, <coroutine> [1] and so on
[1] yes, they allocate, but operator new can easily be overridden for the promise class and can get the coro function arguments forwarded to it. For example, if the coro function takes a "Foo &foo", you can have operator new return foo.m_Buffer (and -fno-exceptions gets rid of the unwinding code gen)
In the C and C++ languages there's a thing called a "freestanding" implementation. This is roughly analogous to Rust's nostd.
In C the freestanding environment doesn't provide any concrete features, you don't get any functions at all, you can get a bunch of useful constants such as the value of Pi or the maximum value that will fit in an unsigned integer, some typedefs, that's about it. Concrete stuff from the "C standard library" is not available, for example it does not provide any sort of in-place sort algorithm, or a way to compare whether two things are the same (if they fit in a primitive you can use the equality operator)
In C++ there are concrete functions provided by the language standard in freestanding mode. These, together with definitions for types etc., form the freestanding version of the "standard library" in C++. There was a long period where this was basically untended; it wasn't removed, but it also wasn't tracking new features or feedback. In the last few C++ versions that improved, but even if you have a new enough compiler and it's fully compliant (most are not), there's still not always a rhyme or reason to what is or is not available.
In Rust it's really easy. You always have core, if you've got a heap allocator of some sort you can have alloc, and if there's a whole operating system it provides std.
In most cases a whole type lives entirely in one of those modules; Duration, for example, lives in core. Maybe your $5 device has no idea which year this is, let alone the day, but it definitely knows that 60 seconds is a minute.
But in some cases modules extend a type. For example, arrays exist in core, of course - an array of sixty Doodads, where Doodads claim to be totally ordered, can just be unstably sorted; that works. But what if we want a stable sort, so that two equal Doodads arranged A, B are not reversed to B, A? Well, Rust's core module doesn't provide a stable sort - the stable sort on offer uses an allocation - so the function you need simply doesn't exist unless you've got allocators.
I know how freestanding works, and I agree that Rust's "nostd" is much more thought out than C/C++'s freestanding, however
> This is roughly analogous to Rust's nostd.
"freestanding" is actually worse than this. It means that the compiler can't even assume things about memcpy and optimize it out (as on gcc it implies -fno-builtin), which pessimizes a lot of idiomatic code (e.g. serialization).
The "-nostdlib" option is usually what one wants in many cases (don't link against libc but still provide standard C and C++ headers), such as when compiling privileged code with -mgeneral-regs only and such. This way you can benefit from <chrono>, etc.
If you are writing userland code you should be using a toolchain for this, instead of relying on freestanding/nostdlib, which are geared towards kernel code and towards working around defective toolchains.
What the standard says about freestanding is all well and good. But what do actual embedded c++ compilers actually ship?
Also embedded covers a very wide range of computers.
If you are targeting armv4t/armv5/armv6k+vfp (or armv7 but not optimized for it) for Aarch32, or Armv8.0-A and are fine with newlib, then devkitARM and devkitA64, respectively, get the job done and ship GCC 15.1.
There is also devkitPPC, shipping with the same toolchain (and which additionally has some Obj-C support iirc).
Custom patches to newlib and friends (https://github.com/devkitPro/buildscripts/) introduce hooks and WEAK functions that make it possible to implement standard library functions on almost any platform, per platform library or even per program (with some restrictions on lock sizes).
Indeed. My point was that freestanding is a strawman, with likely little relevance for embedded developers.
That's the most frustrating part: a lot of the std library would work on a bare-metal system (and would be rather useful), but getting those parts into your project while avoiding the ones that produce compiler errors in the form of esoteric poems is a nightmare.
Vendors at this point seem to provide their own implementations of some of the std library components, but the ones I've seen were lacking in terms of features.
This is a problem with WASM as well, use a certain innocent function from the C++ std lib and suddenly your WASM binary grows by 10mb.
Doesn’t Rust nostd give up a comparable part that C++ would give up? It’s typically all the memory allocations that inhibit the use of data structures.
Yeah, you don't get its std library, but Rust makes a distinction between core and std, and core is available. Doesn't sound like a lot, but you get your standard types like Result and Option, you get slices since they're part of the language, and if you need allocation you can define the global allocator via core::alloc.
This distinction makes it really comfortable to use.
Though one caveat about no_std is that you'll need some support library like https://docs.rs/cortex-m-rt/latest/cortex_m_rt/
Doesn't arduino use c++?
Dude Zig is clearly pre 1.0. It can introduce breaking changes with every commit and rightfully so. I mean d'oh it's Not ready for Prime Time.
Zig uses ZeroVer so don't expect it to ever hit 1.0.
https://0ver.org/
Is that versioning site supposed to be some kind of joke? I can't really figure out if they are joking or serious - the tone comes off as joking, but it could be read as serious too.
I guess you're being Poe's lawwed but it's definitely a joke
The point about language stability is spot on, but it's actually very easy to improve on C - not in terms of performance or readability, but rather safety and the ability to encode more constraints in a compact form than C would ever allow. Sometimes it's not about fewer lines, but about the same number of lines encoding a lot more than those lines would in C.
This is why it's good to have automated tooling that can do semantic changes on your language and standard library use. Go has `go fix` even if it was only used in pre-1.0 days AFAIK. It is never lost because this type of tooling can be used as the foundation for linters, refactoring tools, etc. Is there such a solution in Zig?
zig fmt has some auto-fixes for upgrading source code to new Zig versions, AFAIK it's only for language changes, not stdlib changes though.
Nice, I wonder if adapting it for this change would make sense?
it's a really huge change
By the title I thought that they were going to implement this,
https://github.com/ziglang/zig/issues/5973
Is Writergate a technical term or a reference to Watergate?
in a sense it's a reference to Allocgate (a previous big breaking change to allocators in Zig), which was itself a reference to Watergate
by now it's a well worn/used trope to make -gate names for any scandal. But the distance in time (and culture) to the original Watergate scandal is growing, so it seems less impactful now.
Gamergate was not the first scandal (post-Watergate) to use -gate: https://en.m.wikipedia.org/wiki/List_of_-gate_scandals_and_c...
As a data point: I can honestly say I’ve never heard of Gamergate before this comment, and I am a 31-year-old white male. I did read a book on Watergate when I was in my teens, though.
GamerGate is well worth understanding. While some of the details are unique to the situation, it’s provided a template for right-wing radicalisation that’s been employed multiple times since. There’s also an entertaining “where are they now” aspect where some people have been almost forgotten and some are in the White House. KotakuInAction is still going and has (inevitably) morphed into a bunch of people complaining about the Lügenpresse.
This comment smells like LLM output, you have said a lot, but I didn't understand anything.
Let’s just say that I think it’s an important event and understanding how a guy harassing his ex-girlfriend became a formative moment in alt-right history is fascinating, but I really don’t want to get drawn into arguments with anyone who’s still drinking that particular kool-aid.
I like Zig but it seems to just keep redesigning itself, while other languages like Odin “shipped” long ago and don’t seem to need to look back. Is Zig suffering from perfectionism syndrome where things are never good enough??
This is a standard library change, not a syntax change
I think the main big thing that’s left for 1.0 is to resurrect async/await.. and that’s a huge thing because arguably very few if any language has gotten that truly right.
As the PR description mentions: “This is part of a series of changes leading up to "I/O as an Interface" and Async/Await Resurrection.”
So this work is partially related to getting async/await right. And getting IO right is a very important part of that.
I think it’s a good idea for Zig to try to avoid a Python 3 situation after they reach 1.0. The project seems fairly focused to me, but they’re trying to solve some difficult problems. And they spend more time working on the compiler and compiler infrastructure than other languages, which is also good. Working on their own backend is actually critical for the language itself, because part of what’s holding Zig back from doing async right is limitations and flaws in LLVM
> I think the main big thing that’s left for 1.0 is to resurrect async/await.. and that’s a huge thing because arguably very few if any language has gotten that truly right.
Interesting. I like Zig. I dabble periodically. I’m hoping that maturity and our next generation ag tech device in a few years might intersect.
Throwing another colored-function debacle into a language, replete with yet another round of familiar but slightly differently defined keywords, would be a big turn-off for me. I don’t even know if Grand Central Dispatch counts, but it - and of course Elixir/Erlang - are the only two “on beyond closures/callbacks” async systems I’ve found that worked well.
As far as I know, Zig still wants their implementation of async to avoid function colouring.
My understanding is that the current plans are to implement async in userspace, as part of a broader IO overhaul.
This would involve removing async/await as keywords from the language.
part of function coloring is "not being trivially resolvable". in this case the function coloring boundary is trivially resolvable.
easy peasy. You've resolved the coloring boundary. Now, if you want to be a library writer, yeah, you have to color your functions if you don't want to be an asshole, but for the 95% use case this is not function coloring.
>> because part of what’s holding Zig back from doing async right is limitations and flaws in LLVM
this was interesting! Do you have a link or something to be able to read about it?
Much of the discussion is buried in the various GitHub issues related to async. I found something of a summary in this Reddit comment
https://www.reddit.com/r/Zig/comments/1d66gtp/comment/l6umbt...
iirc the llvm async operation does heap allocations?
Sorry, I think this comparison is just unfair. Odin might have "shipped", but are there any projects with significant usage built on it? I can count at least 3 with Zig - Ghostty, TigerBeetle, and Bun.
Programming languages which do get used are always in flux, for good reason - python is still undergoing major changes (free-threading, immutability, and others), and I'm grateful for it.
All the JangaFX products (such as EmberGen) are written in Odin.
Thank you, my bad - I wasn't aware.
I still think what drives languages to continuously make changes is the focus on developer UX, or at least the intent to make it better. So, PLs with more developers will always keep evolving.
> Odin might have "shipped" but are there are any projects with significant usage built on it?
JangaFX stuff is written in Odin and has some pretty big users.
https://jangafx.com/ https://odin-lang.org/showcase/
I'd say it is making some serious design decisions for the 30 years to come, so I am happy it breaks things now.
I wish it moved to snake_case for functions, this is a cosmetic detail but it drives me crazy.
If I look at how I was programming in 1986, and how I am programming now, it is hoping for too much to have such a design goal, especially since most likely there is little Zig has to add to quantum- and AI-based systems.
This feels out of touch with the actual industry today.
Out of touch is assuming that a programming language with zero touch points with AI tooling is going to be relevant in a AI driven industry.
What is "AI tooling"?
Ask Chat GPT, Claude or Gemini.
I’m glad they are taking their time. They’ve made solid improvements, and I don’t get the sense that they’re paralyzed by perfectionism.
They’re not rushing, that’s for sure. But I’ve never felt worried about 1.0 never happening in an unending pursuit of unrealistic impossible ideals.
They've been pretty explicit about their goals in not settling for a local optimum in the language and taking their time.
It seems like folks expect stability pre 1.0.
That's kinda my experience with watching Zig. It went from 'look how simple this is' to 'look at this new feature syntax' long ago.
People used to compare it as simpler than Rust. I don't agree that it's simple anymore at all.
None of this is meant to be badmouthing or insulting. I'm a polyglot but love simple languages and syntaxes, so I tend to overly notice such things.
The computer is a machine, and modern ones are complicated. When I am programming, I want to precisely control that machine. For me, simplicity is measured in how complicated it is to get the machine to do what I want it to do. So, e.g., having several different operators for adding two integers sounds complicated. However, there is simplicity in not having to reach far to actually get the correct behavior, and there is some simplicity in the process of being forced to make that choice, as it irons out what behavior you actually want.
I think that's long been the argument of simplicity. 'Simple to remember' vs 'simple to perform.'
I tend to fall into the former camp. Something like BF would be the ultimate simple language, even if not particularly useful.
Structured concurrency is a notoriously hard problem. This is part of Zig’s 4th attempt to get it right.
the only two new feature syntaxes in about six releases have been iterating over multiple items in for loops and continue in switches? maybe reified tuple types too (not just implicit) and destructuring tuples.
a few things have been removed, too. and async/suspend/nosuspend/await and usingnamespace are headed for the woodchipper.
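for reference, here's roughly what those two syntax additions look like (a from-memory sketch, so check the release notes for your exact version):

```zig
const std = @import("std");

pub fn main() void {
    const xs = [_]u32{ 1, 2, 3 };
    const ys = [_]u32{ 10, 20, 30 };

    // Multi-object for loop: iterate two arrays in lockstep plus an index range.
    for (xs, ys, 0..) |x, y, i| {
        std.debug.print("{d}: {d}+{d}={d}\n", .{ i, x, y, x + y });
    }

    // Labeled switch with `continue`: re-dispatch on a new operand, handy for
    // small state machines (added around 0.14).
    state: switch (@as(u8, 0)) {
        0 => continue :state 1,
        1 => continue :state 2,
        else => std.debug.print("done\n", .{}),
    }
}
```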
Rust will be (already became?) as complex as C++, if not more. Zig will be as complex as early rust. It's like a force of nature.
How do you figure Rust is "as complex as C++" ?
Looks like it, while at the same time it still lacks any killer application that would make learning Zig a requirement regardless of one's opinion on the language, as has already happened with many languages now in the mainstream.
So where is Zig's OS, browser, docker, engine, security, whatever XYZ, that would make having Zig on the toolbox a requirement?
I don't see Bun nor Tiger Beetle being that app.
Not a killer app, but I think one thing you might consider is zig build.
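For anyone who hasn't seen it, a minimal build.zig is roughly this (a sketch against the ~0.13/0.14-era builder API; exact function names have shifted between releases):

```zig
// build.zig -- minimal sketch, API names as of roughly 0.13/0.14.
const std = @import("std");

pub fn build(b: *std.Build) void {
    // Standard -Dtarget= and -Doptimize= options from the command line.
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});

    const exe = b.addExecutable(.{
        .name = "hello",
        .root_source_file = b.path("src/main.zig"),
        .target = target,
        .optimize = optimize,
    });
    b.installArtifact(exe);

    // `zig build run` convenience step.
    const run_step = b.step("run", "Run the app");
    run_step.dependOn(&b.addRunArtifact(exe).step);
}
```

The bigger draw is arguably that the same script cross-compiles by flipping -Dtarget, with no external build tool needed.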
Not a seller to me.
The killer application case is slow adoption inside ancient C and C++ codebases. That's the angle.
It hardly brings anything new to the table in such cases, given its approach to safety.
Most of it you can already get in C and C++ by using the tools that have been on the market for the last 30 years.
It brings a lot of nice features, the potential for a healthier ecosystem, a unified build system, explicit allocators, explicit casts, and so on.
Ecosystems sell languages, not the other way around.
And yet C/C++ developers have mostly spent the last 30 years not using those tools, which is why safer successors to C and C++ appeared.
Zig is as safe as Modula-2 or Object Pascal, not the turning point of something like Swift or Rust.
I think pjmlp's point is that Zig is not adding enough to be one of those safer successors.
Well it's about time.
Zig Roadmap 2026 : https://www.youtube.com/watch?v=x3hOiOcbgeA
Is there a non-video version?
https://ziglang.org/download/0.14.0/release-notes.html
I have written very little Zig and a lot of Rust, but I love both languages. However, Zig having breaking changes has made me wary of starting anything serious with it – yet. I'm still happy that these changes happen, because I'm willing to wait for a stable version. Meanwhile, I enjoy myself some Rust, and will probably continue doing so.
And this is exactly why you do not use shiny new languages for your projects. Hope TigerBeetle won't have too much trouble with this.
This is exactly why we chose Zig.
Andrew’s design decisions in the language have always been impeccable. I’ve never seen him put a foot wrong and would have made the same change myself.
This is also not new to us, Andrew spoke about this at Systems Distributed ‘25.
Also, TigerBeetle has and owns its own IO stack in any event, and we’ve always been careful to use stable language features.
But regardless, it’s in our nature to “do the right thing”, even if that means a bit of change. We call this “Edge” and explicitly hire for people who have the same characteristic, the craftspeople who know how to spot great technical quality, regardless of how young (or old!) a project may be.
Finally, I’ve been in Zig since 2018. I wouldn’t exactly call it “shiny new”. Zig already has the highest quality toolchain and std lib of anything I would use.
> Andrew’s design decisions in the language have always been impeccable. I’ve never seen him put a foot wrong and would have made the same change myself.
Interesting, who designed the old Zig IO stack which alas Andrew needed to replace?
Actually, nobody.
Here is the commit where Reader/Writer was introduced: https://github.com/ziglang/zig/commit/5e212db29cf9e2c06aba36...
This is a few months after `git init`. You can see I was really just working on the parser, with a toy example to get things started.
Over time, I merged contributions that made minor changes and shuffled things around, and these APIs evolved to kind of work okay. But nobody really considered "the Zig IO stack" as a whole and put in design effort. That is happening for the first time right now.
This is how programming languages are constructed. Things evolve slowly over time, and periodically you have to reevaluate things and do major reworkings.
I've built a bridge 20 years ago. It was great, people could finally go from one side of the river to the other.
Everyday, more and more people started using that bridge.
In 2025, I've rebuilt the bridge twice as big to accommodate the demand of a growing community.
It's great and the people love it!
Indeed, but to be fair, the old stack was done with a hand, not a foot!
A less experienced Andrew
I think what you're not appreciating is how this design is a huge improvement over the status quo, not only in Zig, but also the streaming interfaces in most languages.
Wait till the SD25 talk on this comes out, to first understand the rationale a bit better!
> I think what you're not appreciating is how this design is a huge improvement
The point was that if he did the old design, which needed improving enough to justify breaking the language backwards compatibility, then why say his decisions are impeccable? Pobody's nerfect.
Yes, and my point (in response) was that Zig's status quo was no different from other languages, but now is better. (There's some humor in the issue's title “Writergate” here!)
Again, we use Zig, and this change is welcome for us.
We also like that Zig is able to break backwards compatibility, and are fully signed up for that.
The crucial thing for TigerBeetle is that Zig as language will make the right calls looking to the next few decades, rather than ossify for fear of people who don't use it.
> Zig already has the highest quality toolchain and std lib of anything I would use.
My couple of days of experience with Zig was pretty lackluster where the std lib is concerned; not that it is bad, but it feels like it is lacking a lot of bare essentials. To be expected for a new pre-1.0 language of course.
Depends on which language you're coming from. Compared to C or even C++, the Zig stdlib has already many more things to offer. Compared to Python or Node.js it's quite bare bones.
Fair, I was mentally comparing to Go. I was a bit disappointed there weren't more wrappers around basic OS stuff. Go's stdlib wraps everything and does its best to make stuff cross-platform.
In my specific case I was trying to send some DNS messages. I went the route of linking libc and using the posix data structures for DNS messages, and struggled quite a bit with how to map the C data structures to my program.
This kind of thing is a big barrier to adoption unfortunately.
Good to know, also thanks for the detailed reply! Glad you are fully aware of these nuances, but it also doesn't surprise me considering your amazing presentation of Tigerbeetle! Much success in the future.
Thanks zwnow, appreciate your kind words, and my pleasure!
I think you'll enjoy Andrew's talk on this too when it comes out in the next few weeks.
The velocity of Zig has been valuable for us. Being able to put things like io_uring or @prefetch in the std lib or language, and having them merged quickly. Zig has been so solid, even with all the fuzzing we do. It's really held up, and upgrades across versions have not been much work, only a pleasure.
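To illustrate the @prefetch point with my own sketch (not TigerBeetle code): it's just a builtin you can drop into a hot loop, with options mirroring std.builtin.PrefetchOptions.

```zig
// Sketch only: hinting upcoming reads while summing a slice.
fn sumWithPrefetch(data: []const u64) u64 {
    var total: u64 = 0;
    for (data, 0..) |x, i| {
        if (i + 8 < data.len) {
            // Hint that data[i + 8] will be read soon; purely advisory.
            @prefetch(&data[i + 8], .{ .rw = .read, .locality = 3, .cache = .data });
        }
        total +%= x;
    }
    return total;
}
```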
Zig is the only language I've used where every library specifies the one (and only) compiler version it works on in their GitHub readme.
How many pre-release languages are you typically using though?
Experienced quite the contrary, some time ago at least...
Which is a pity, because I really liked the language, but discovering what works with what... oh dear.
TigerBeetle uses io_uring afaik so they don’t use these io interfaces at all.
Also found that these interfaces only cause problems for performance and flexibility in Rust, so I didn't even look at them in Zig.
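For context on the io_uring point above: the std lib ships a thin wrapper around it. A rough, Linux-only sketch from memory (treat the exact names as approximate; the type was renamed between releases):

```zig
const std = @import("std");
const linux = std.os.linux;

// Rough sketch of std's io_uring wrapper (IoUring in recent releases,
// IO_Uring in older ones); queues a single no-op and waits for it.
pub fn main() !void {
    var ring = try linux.IoUring.init(8, 0);
    defer ring.deinit();

    _ = try ring.nop(0x1234); // submission tagged with user_data = 0x1234
    _ = try ring.submit();
    const cqe = try ring.copy_cqe(); // blocks until a completion is available
    std.debug.print("completed user_data=0x{x}\n", .{cqe.user_data});
}
```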
The risk isn't unique to shiny new languages
Are people deploying production code in a language that is still in its 0.x version?
A lot of people were using tokio in prod when it was 0.1 and didn’t get upset afaik.
Rust didn’t even have async await at that time
Some prod is more prod than others.
> A lot of people were using tokio in prod when it was 0.1 and didn’t get upset afaik.
Citation needed. A lot of people wanted Rust to stabilize. Hence why they hurried to Rust 1.0.
I mean, what's the difference to the python 2/3 debacle? People were writing/extending in python 2 long after it was declared obsolete
Not having breaking changes every N months?
Each new minor Python 3.x version has plenty of deprecations followed by removals in the stdlib though.
I don't think even those are particularly short periods. TestCase.assertEquals() was deprecated in Python 3.2 (February 2011) and removed in Python 3.12 (October 2023). 12 ⅔ years to get rid of a silly alias because it's a breaking change (of a single character).
It's not about sticking around on an old version, it's about ever being able to catch up, and what the rest of the ecosystem is going to do. Python did this major version bump that broke a lot of the ecosystem, and it went so poorly that they've effectively promised never to do it again and completely excised any thought of ever having a major version bump again, and other languages and communities now point to it regularly as a debacle to be avoided.
When you break things regularly, you're forcing a choice on every individual package in the ecosystem: move forward, and leave the old users behind, or stay behind, and risk that the rest of the ecosystem moves forward without you. Now you've got a whole ecosystem in a prisoner's dilemma. For an individual, maybe you can make a choice and dig in and make your way along without too much trouble. But the ecosystem as a whole can't, the ecosystem fractures, and if it doesn't converge on the latest version, it slowly withers and dies.
I don't, but there are companies who trust the language (which is a good thing, but also short-sighted).
This arguably is why julia still has no real users and python, c++, and fortran still rule in hpc, despite the hypesters doing the hyping.
At some point people just want their code to work so they go back to something that just works and won't break in a few years.
python is a really bad example. Code constantly stops working properly with language updates.
how was python 2/3 again?
Famous because of how rare it was.
At my first job, the senior guy on my team used to say:
"Software is just like lasagna. It has many layers, and it tastes best after you let it sit for a while".
I still follow this principle years down the line and avoid introducing shiny new things on my projects.
well, in that case, the lasagna is still being cooked, until served (1.0), why question the chef?
let him cook
Be as sarcastic as you want. This is a feeling many developers probably share.
Then those developers won't ever use anything ever. Why would breaking changes in an explicitly unstable development version exclude it from use for all time?
If you want stability, stick to stuff that has stability guarantees, but at the very least let them make breaking changes during development.
And “I don’t care if you use it or not” is a feeling many other developers share, so both are valid.
Every time I touched Zig, examples I found on the internet were no longer working. I worked on a project for a while and then the stuff I used was deprecated / broken on the newer version.
I like Zig, but I'm waiting for it to become somewhat stable, because the amount of breaking changes feels pretty significant. I suppose that's the price of progress.
That is because the language is pre-1.0. The new language I follow a lot is Mojo, it also has this problem.
I think the only way to follow a new (unstable) language is to join whatever community where the conversation happens; otherwise, what you think you know about the language will become outdated pretty quickly.