Richard Feldman:
However, we have decided to do a scratch-rewrite in a different language—namely, Zig. We’re very excited about it!
Here are some relevant comparison points between the two languages in the specific context of Roc’s compiler in 2025:
- For many projects, Rust’s memory safety is a big benefit. As we’ve learned, Roc’s compiler is not one of those projects. We tend to pass around allocators for memory management (like Zig does, and Rust does not) and their lifetimes are not complicated. We intern all of our strings early in the process, and all the other data structures are isolated to a particular stage of compilation.
- Besides the fact that Zig is built around passing around allocators for memory management (which we want to do anyway), it also has built-in ways to make struct-of-arrays programming nicer - such as MultiArrayList. We have abstractions to do this in Rust, but they have been painful to use. We don’t have experience with Zig’s yet, but looking through them, they definitely seem more appealing than what we’ve been using in Rust.
- Rust’s large ecosystem is also a big benefit to a lot of projects. Again, Roc’s compiler is not one of them. We use Inkwell, a wrapper around LLVM’s APIs, but we actively would prefer to switch to Zig’s direct generation of LLVM bitcode, and Zig has the only known implementation of that approach. Other than that, the few third-party Rust dependencies we use have equivalents in the Zig ecosystem. So overall, Zig’s ecosystem is a selling point for us because once you filter out all the packages we have no interest in using, Zig’s ecosystem has a larger absolute number of dependencies we actually want to use.
- Zig’s toolchain makes it much easier for us to compile statically-linked Linux binaries using musl libc, which is something we’ve wanted to do for a long time so that Roc can run on any distro (including Alpine in containers, which it currently can’t in our Rust compiler). We know this can be done in Rust, but we also know it’s easier to do in Zig. Zig’s compiler itself does this, and it does so while bundling LLVM (by building it from source with musl libc), which is exactly what we need to do. Once again, we can reuse Zig code that has done exactly what we want, and there is no equivalent in the Rust ecosystem.
- There are various miscellaneous performance optimization techniques that we’ve wanted to use in Rust, but haven’t as much as we’d like to because they have been too painful. For example, we often want to pack some metadata into some of the bits of an index into an array. Zig lets us describe this index as a struct, where we specify what each bit range means, and that includes arbitrary integer sizes. For example, in Rust we might store an index as a u32, but in Zig we could break that into a u27 for the actual index data (when we know the index will never need to exceed 2^27 anyway), and then we can specify that we want to use another 3 bits to store a small enumeration, and the remaining 2 bits for a pair of boolean flags. Again, these are all things we could do using Rust and bit shifts, but it’s much nicer to do in Zig. Tagless unions are another Zig feature we expect to be valuable.
Did I mention compile times? I’ll reiterate: compile times are a huge deal. Not only have they been painful in Rust, we also know that the fundamental unit of compilation is a crate, so we’re incentivized to organize our code around what will make for faster compile times rather than what makes the most sense to us as authors. Often the boundaries end up being drawn in the same place regardless, but when those two are in tension, it’s frustrating to have to sacrifice either feedback loop speed or code organization. In Zig we expect we can just have both.
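The struct-of-arrays point above is worth sketching out. Below is a minimal hand-rolled Rust version with hypothetical names, for illustration only; Zig’s std.MultiArrayList derives this same layout automatically from the element struct, which is the convenience Feldman is pointing at.

```rust
// Array-of-structs: one Vec<Token>, fields interleaved per element.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Token {
    kind: u8,
    offset: u32,
}

// Struct-of-arrays: each field in its own contiguous Vec, so a pass
// that only inspects `kinds` streams through tightly packed bytes
// instead of skipping over the unused `offset` field each time.
#[derive(Default)]
struct TokenList {
    kinds: Vec<u8>,
    offsets: Vec<u32>,
}

impl TokenList {
    fn push(&mut self, token: Token) {
        self.kinds.push(token.kind);
        self.offsets.push(token.offset);
    }

    // Reassemble a logical Token from the parallel arrays.
    fn get(&self, i: usize) -> Token {
        Token { kind: self.kinds[i], offset: self.offsets[i] }
    }
}
```

The pain Feldman describes is that the `push`/`get` bookkeeping has to be written (or macro-generated) per type in Rust, whereas Zig generates it from the struct definition.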
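The bit-packed index example can likewise be written out in Rust with shifts and masks. This is a sketch with hypothetical names, mirroring the u27 index + 3-bit enumeration + 2 flags split from the quote; in Zig the same thing would be a packed struct with u27, u3, and bool fields, and the compiler would do the shifting for you.

```rust
// A u32 carved into: bits 0..=26 index, bits 27..=29 a small enum,
// bit 30 and bit 31 two boolean flags. (Hypothetical layout from the
// example above, not Roc's actual representation.)
#[derive(Debug, Clone, Copy, PartialEq)]
struct PackedIndex(u32);

impl PackedIndex {
    const INDEX_BITS: u32 = 27;
    const INDEX_MASK: u32 = (1 << Self::INDEX_BITS) - 1; // low 27 bits

    fn new(index: u32, kind: u8, flag_a: bool, flag_b: bool) -> Self {
        debug_assert!(index < (1 << Self::INDEX_BITS)); // must fit in 27 bits
        debug_assert!(kind < 8); // must fit in 3 bits
        let bits = index
            | ((kind as u32) << 27)
            | ((flag_a as u32) << 30)
            | ((flag_b as u32) << 31);
        PackedIndex(bits)
    }

    fn index(self) -> u32 { self.0 & Self::INDEX_MASK }
    fn kind(self) -> u8 { ((self.0 >> 27) & 0b111) as u8 }
    fn flag_a(self) -> bool { (self.0 >> 30) & 1 == 1 }
    fn flag_b(self) -> bool { (self.0 >> 31) & 1 == 1 }
}
```

Every accessor here is a hand-maintained shift-and-mask pair that the type system does not check, which is exactly the "much nicer to do in Zig" contrast.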
This is a large amount of effort to invest knowing you’ll need to do it again to make the compiler self-hosting, so they must be pretty unhappy with the current implementation. The project was already using Zig for the Roc standard library, having replaced the Rust version some time ago, so it seems like they’re going in fairly well informed.
Consider me dubious of that memory safety claim. It’ll be interesting to see if memory-related bugs that would be prevented by Rust show up in the new implementation.
Roc contributor Brendan Hansknecht on Lobsters:
Our rust compiler implementation definitely could compile a lot faster. A huge contributor to the compile times is the fact that it grew organically. If we were to rewrite it again in rust, I’m sure it would compile a lot faster.
These times are from an M1 mac. They are approximately the same as an intel i7 linux gaming laptop (used to be the m1 was way faster, not sure when they became even). All of the below is just building the roc binary. Building tests and other facilities is much much worse (and we already combine many test binaries to reduce linking time, though we could do it more).
After changing something in the cli (literally zero dependencies and the best case possible):
Finished dev [unoptimized + debuginfo] target(s) in 4.15s
After changing something insignificant in the belly of the compiler:
Finished dev [unoptimized + debuginfo] target(s) in 16.95s
And for reference, clean build:
Finished dev [unoptimized + debuginfo] target(s) in 1m 58s
It’s not uncommon for a rewrite to yield benefits just from being a second implementation. It sounds like that would partly be the case here too: a second Rust implementation could be structured in a way that compiles faster. However, the same way you can make JavaScript fast by using specific patterns that play to the JIT compiler, it’s nicer when it’s just fast by default.
Personally, the heavy focus on compile times doesn’t really resonate with me—and that’s ok—they’re clearly a huge deal to Richard. The features people value in a language vary from person to person. Having come from the dynamic-language world, with its constant runtime errors, I was drawn to Rust because it was able to eliminate whole classes of errors without compromising on runtime performance. I’m happy to trade a few seconds of compile time to know that it’s quite unlikely I’ll ever have to debug null pointers, segmentation faults, or data races in production software.
Via Lobsters