Zig After Months of Using It

For a completely different take on Zig compared to the last post, here’s Dimitri Sabadie:

Today, I want to provide a more mature opinion of Zig. I need to make the obvious disclaimer that because I mainly work in Rust — both spare-time and work — I have a bias here (and I have a long past of Haskell projects too). Also, take notice that Zig is still in its pre-1.0 era (but heck, people still mention that Bun, Tigerbeetle, Ghostty are all written in Zig, even though it hasn’t reached 1.0).

I think my adventure with Zig stops here. I have had too much frustration regarding correctness and reliability concerns. Where people see simplicity as a building block for a more approachable and safe-enough language, I see simplicity as an excuse not to tackle hard and damaging problems, causing unacceptable tradeoffs. I’m also fed up with the skill issue culture. If Zig requires programmers to be flawless, well, I’m probably not a good fit for the role.

There’s no one true correct choice when it comes to programming languages. Each one has pros and cons. What prospective users value in a language also varies based on personal preferences and values. I’m aligned with Dimitri here. I value correctness, reliability, and performance over surface-level simplicity. If you’re into Zig though, that’s cool—the less software written in C, the better.

Rust Implementation of Roc Compiler to Be Rewritten in Zig

Richard Feldman:

However, we have decided to do a scratch-rewrite in a different language—namely, Zig. We’re very excited about it!

Here are some relevant comparison points between the two languages in the specific context of Roc’s compiler in 2025:

  • For many projects, Rust’s memory safety is a big benefit. As we’ve learned, Roc’s compiler is not one of those projects. We tend to pass around allocators for memory management (like Zig does, and Rust does not) and their lifetimes are not complicated. We intern all of our strings early in the process, and all the other data structures are isolated to a particular stage of compilation.
  • Besides the fact that Zig is built around passing around allocators for memory management (which we want to do anyway), it also has built-in ways to make struct-of-arrays programming nicer - such as MultiArrayList. We have abstractions to do this in Rust, but they have been painful to use. We don’t have experience with Zig’s yet, but looking through them, they definitely seem more appealing than what we’ve been using in Rust.
  • Rust’s large ecosystem is also a big benefit to a lot of projects. Again, Roc’s compiler is not one of them. We use Inkwell, a wrapper around LLVM’s APIs, but we actively would prefer to switch to Zig’s direct generation of LLVM bitcode, and Zig has the only known implementation of that approach. Other than that, the few third-party Rust dependencies we use have equivalents in the Zig ecosystem. So overall, Zig’s ecosystem is a selling point for us because once you filter out all the packages we have no interest in using, Zig’s ecosystem has a larger absolute number of dependencies we actually want to use.
  • Zig’s toolchain makes it much easier for us to compile statically-linked Linux binaries using musl libc, which is something we’ve wanted to do for a long time so that Roc can run on any distro (including Alpine in containers, which it currently can’t in our Rust compiler). We know this can be done in Rust, but we also know it’s easier to do in Zig. Zig’s compiler itself does this, and it does so while bundling LLVM (by building it from source with musl libc), which is exactly what we need to do. Once again, we can reuse Zig code that has done exactly what we want, and there is no equivalent in the Rust ecosystem.
  • There are various miscellaneous performance optimization techniques that we’ve wanted to use in Rust, but haven’t as much as we’d like to because they have been too painful. For example, we often want to pack some metadata into some of the bits of an index into an array. Zig lets us describe this index as a struct, where we specify what each of the bit ranges mean, and that includes arbitrary integer sizes. For example, in Rust we might store an index as a u32, but in Zig we could break that into a u27 for the actual index data (when we know the index will never need to exceed 2^27 anyway), and then we can specify that we want to use another 3 bits to store a small enumeration, and the remaining 2 bits for a pair of boolean flags. Again, these are all things we could do using Rust and bit shifts, but it’s much nicer to do in Zig. Tagless unions are another Zig feature we expect to be valuable.

Did I mention compile times? I’ll reiterate: compile times are a huge deal. Not only have they been painful in Rust, we also know that the fundamental unit of compilation is a crate, so we’re incentivized to organize our code around what will make for faster compile times rather than what makes the most sense to us as authors. Often the boundaries end up being drawn in the same place regardless, but when those two are in tension, it’s frustrating to have to sacrifice either feedback loop speed or code organization. In Zig we expect we can just have both.

This is a large amount of effort to invest knowing you’ll need to do it again to make the compiler self-hosting, so they must be pretty unhappy with the current implementation. The project was already using Zig for the Roc standard library, having replaced the Rust version some time ago, so it seems like they’re going in fairly well informed.

Consider me dubious of that memory safety claim. It’ll be interesting to see whether memory-related bugs that Rust would have prevented show up in the new implementation.

Roc contributor Brendan Hansknecht on Lobsters:

Our rust compiler implementation definitely could compile a lot faster. A huge pain to the compile times is the fact that it grew organically. If we were to rewrite it again in rust, I’m sure it would compile a lot faster.

These times are from an M1 Mac. They are approximately the same as an Intel i7 Linux gaming laptop (used to be the M1 was way faster, not sure when they became even). All of the below is just building the roc binary. Building tests and other facilities is much, much worse (and we already combine many test binaries to reduce linking time, though we could do it more).

After changing something in the CLI (literally zero dependencies and the best case possible):

Finished dev [unoptimized + debuginfo] target(s) in 4.15s

After changing something insignificant in the belly of the compiler:

Finished dev [unoptimized + debuginfo] target(s) in 16.95s

And for reference, clean build:

Finished dev [unoptimized + debuginfo] target(s) in 1m 58s

It’s not uncommon for a rewrite to yield benefits just from being a second implementation. It sounds like this would partly be the case here too: a second Rust implementation could likely be structured in a way that compiles faster. However, in the same way you can make JavaScript fast by using specific patterns that play to the JIT compiler, it’s nicer when it’s just fast by default.

Personally, the heavy focus on compile times doesn’t really resonate with me, and that’s OK; they’re clearly a huge deal to Richard. The features each person values in a language vary by person. Having come from the dynamic-language world with constant runtime errors, I was drawn to Rust because it was able to eliminate whole classes of errors without compromising on runtime performance. I’m happy to trade a few seconds of compile time to know that it’s quite unlikely I’ll ever be debugging null pointers, segmentation faults, or data races in production software.

Via Lobsters

The Invalid 68030 Instruction That Allowed the Macintosh Classic II to Boot

Doug Brown:

This is the story of how Apple made a mistake in the ROM of the Macintosh Classic II that probably should have prevented it from booting, but instead, miraculously, its Motorola MC68030 CPU accidentally prevented a crash and saved the day by executing an undefined instruction.

Incredible that we’re still discovering things about these machines some 30 years later. Doug dives deep to work out why MAME was unable to emulate the Macintosh Classic II.

Things People Get Wrong About Electron

Felix Rieseberg:

I dedicated years bringing web technologies and desktop apps closer together. The most recent and most successful project in that vein is Electron, which I’ve spent the last ten years working on.

As an open source project, our website never had to “convince people” to use Electron, so I never took the time to actually explain why I’m betting on web technologies to build user interfaces or why I prefer bundling a rendering engine.

Electron’s choices, especially the very idea of building interfaces with web tech and shipping large parts of Chromium to render them, are not uncontroversial. Reasonable people wonder why we made those choices.

I finally took the time to write down the arguments for the choices that we made. You can find that document on the Electron homepage. It tries to pre-empt a lot of common misconceptions. This post is a pairing suggestion—and discusses some of the things I believe people get most wrong about Electron on the Internet today.

I’ll admit I’m not a huge Electron fan. Slack’s early years soured me on it considerably. Back then it would regularly use 2 GB or more of RAM, which was outrageous on a contemporary machine with 8 GB of RAM.

Nowadays Electron apps are somewhat unavoidable, and seem to be a bit better behaved. Still, there’s no standard UI across Electron apps, and each one bringing 100 MB or more of Chromium along for the ride[1] does not fill me with joy. Ultimately, Electron feels like a tool that’s good for business, not a tool for building amazing applications.

Nonetheless it’s interesting to read this perspective on Electron from one of its developers. I certainly don’t agree with some of the arguments though:

Electron pits JavaScript versus native code

The argument: JavaScript isn’t the right choice for everything. Native is better. Electron is not native.

This misconception is likely the fault of Electron’s maintainer team and especially me. Most of the talks I’ve given in the earlier days of Electron highlighted its ability to interact with the operating system from JavaScript.

However, the entire point of Electron is that you can pair your web app with any native code you want to write—specifically with C++, Objective-C, or Rust.

I don’t think that point is well advertised or a reflection of Electron use in practice. In my experience, most Electron apps use JavaScript and web technologies as much as possible, only using native code if absolutely necessary. In contrast, Tauri seems to emphasise the native back-end aspect of its implementation quite a bit more. Right on their home page they say:

Write your frontend in JavaScript, application logic in Rust, and integrate deep into the system with Swift and Kotlin.

Whereas the only similar thing on the Electron home page is:

Native graphical user interfaces

Interact with your operating system’s interfaces with Electron’s main process APIs. Customize your application window appearance, control application menus, or alert users through dialogs or notifications.

The APIs that it’s talking about are JavaScript APIs… so that you can stay in JS land and not have to write native code.

Most Electron apps clock in around 100 to 300 MB. From first principles, smaller is obviously better. Nobody argues that a bigger app is better than a smaller app.

But: Users, both in the consumer and business space, do not care. One hour of Netflix at 4K is roughly 7 GB, a typical Call of Duty update regularly clocks in more than 300 GB. In practice, we have not seen end users care about binary size more than they do about virtually anything else your engineering team could spend time on.

This feels like it’s coming from an incredibly privileged and sheltered viewpoint. I’m sure there are plenty of places in the world where people do care a lot about downloads clocking in at hundreds of megabytes. No doubt there’s a bunch of them here in Australia—my parents still have a 12 Mbit connection. Additionally, there’s a big difference between ephemeral downloads for streaming and downloads that permanently take up space on your hard disk (multiplied by every Electron app you have installed).

  1. Kudos to the heroic efforts of the Arch Linux packagers who package Electron apps with a dependency on a shared Electron runtime. You still end up with multiple versions of those runtime packages (such as electron28 and electron30), but at least there’s only one copy of each version, and updates to the dependent apps are much smaller.

Tracking Down a Rogue EINVAL Error in Firmware

Dion Dokter writing on the Tweede golf blog about what must have been an incredibly frustrating bug:

We had not gotten further in a long time. At this point, we had already spent around 20 days on this issue collectively and we had no clear direction to go in. Maybe we could take a more brute-force approach, but that was infeasible due to the modem jail.

Why was the RPC call made with length 0 and a null pointer?

We tried using watchpoints at the start of the adventure, but not all hardware supports them. So when we didn’t get it to work, we assumed this microcontroller also had no support for them. But simply updating the debugging stack to the newest versions did the trick. So if you’re ever debugging something and the experience is kind of bad, then make sure to download the newest versions! Even when that means going outside of your OS package manager.

Once Wouter got things going, the whole debugging experience became much nicer.

Things went fast from here. He was able to find where exactly the rpc length was written. He noticed that the length was lowered from the 44 input to the function to 0. Why?

It’s a long read but they did track down the source of the bug in the end.

Posts like this can be good to keep in the back of your mind whenever you’re tackling a gnarly bug. Perhaps there’s an insight here that would help shortcut the process if you ever find yourself in a similar position.