Socket Passing for Auto-Reloading Servers

Armin Ronacher:

But what about the socket? The solution to this problem I picked comes from systemd. Systemd has a “protocol” that standardizes passing file descriptors from one process to another through environment variables. In systemd parlance this is called “socket activation,” as it allows systemd to only launch a program if someone started making a request to the socket. This concept was originally introduced by Apple as part of launchd.

To make this work with Rust, I created two crates:

  • systemfd is the command line tool that opens sockets and passes them on to other programs.
  • listenfd is a Rust crate that accepts file descriptors from systemd or systemfd.

It’s worth noting that systemfd is not exclusively useful to Rust. The systemd protocol can be implemented in other languages as well, meaning that if you have a socket server written in Go or Python, you can also use systemfd.
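
The protocol itself is tiny: before starting the child process, the parent leaves the listening socket(s) open starting at file descriptor 3 and sets LISTEN_FDS to the number of descriptors and LISTEN_PID to the intended recipient. Here's a minimal Unix-only sketch of the receiving side (real implementations such as listenfd also handle multiple sockets and unset the variables afterwards):

```rust
use std::net::TcpListener;
use std::os::unix::io::FromRawFd;

/// Per the systemd protocol, inherited sockets start at fd 3 and are
/// described by the LISTEN_PID and LISTEN_FDS environment variables.
fn listener_from_env() -> Option<TcpListener> {
    let pid: u32 = std::env::var("LISTEN_PID").ok()?.parse().ok()?;
    if pid != std::process::id() {
        return None; // the descriptors were meant for another process
    }
    let count: usize = std::env::var("LISTEN_FDS").ok()?.parse().ok()?;
    if count < 1 {
        return None;
    }
    // SAFETY: we take ownership of fd 3, the first passed descriptor.
    Some(unsafe { TcpListener::from_raw_fd(3) })
}
```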

For projects like Linked List I currently use something like

watchexec -w src -r 'cargo run --bin linkedlistd'

to restart the server when the source changes. As Armin points out in this post, there is a window during which the server is down, so if you hit F5 to reload the page in that window you get a connection error. With the presented solution the systemfd tool always has the socket open, and socket passing hands it to the server when it’s back up—neat. I’ll have to incorporate this into the Linked List code.
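
Wiring this into a server is pleasantly small. Here’s a rough sketch using listenfd (the fallback address and port are my stand-ins, not Linked List’s actual config):

```rust
use std::net::TcpListener;

use listenfd::ListenFd;

fn main() -> std::io::Result<()> {
    // Take the socket passed in by systemfd/systemd if there is one;
    // otherwise bind a fresh one so the binary still works standalone.
    let mut listenfd = ListenFd::from_env();
    let listener = match listenfd.take_tcp_listener(0)? {
        Some(listener) => listener,
        None => TcpListener::bind("127.0.0.1:3000")?,
    };

    for stream in listener.incoming() {
        let _stream = stream?;
        // ... handle the connection ...
    }
    Ok(())
}
```

The watchexec invocation above then gets wrapped in systemfd, something like

systemfd --no-pid -s http::3000 -- watchexec -w src -r 'cargo run --bin linkedlistd'

so the socket stays open across restarts.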

Cerebras' Wafer-Scale AI Engine

Some folks from Cerebras were on the most recent episode of the Oxide and Friends podcast. I’d not heard of Cerebras before, but they’ve developed custom silicon for AI inference called the WSE-3. It takes up an entire silicon wafer, and has 900,000 cores and 44 GB of on-die SRAM:

This gives every core single-clock-cycle access to fast memory at extremely high bandwidth – 21 PB/s.

Those are some pretty fun figures. All of this aims to make AI inference fast. You can try it out with a small selection of models at inference.cerebras.ai. For the couple of prompts I tried, the responses were nearly instantaneous, which is mighty impressive. Of course, their table of comparative figures against the Nvidia H100 does not include power consumption, but I imagine it could well be better than a cluster of individual machines.

Zig After Months of Using It

For a completely different take on Zig compared to the last post, here’s Dimitri Sabadie:

Today, I want to provide a more mature opinion of Zig. I need to make the obvious disclaimer that because I mainly work in Rust — both in my spare time and at work — I have a bias here (and I have a long past of Haskell projects too). Also, take notice that Zig is still in its pre-1.0 era (but heck, people still mention that Bun, TigerBeetle, and Ghostty are all written in Zig, even though it hasn’t reached 1.0).

I think my adventure with Zig stops here. I have had too much frustration regarding correctness and reliability concerns. Where people see simplicity as a building block for a more approachable and safe-enough language, I see simplicity as an excuse not to tackle hard and damaging problems, causing unacceptable tradeoffs. I’m also fed up with the skill-issue culture. If Zig requires programmers to be flawless, well, I’m probably not a good fit for the role.

There’s no one true correct choice when it comes to programming languages. Each one has pros and cons. What prospective users value in a language also varies with personal preference. I’m aligned with Dimitri here: I value correctness, reliability, and performance over surface-level simplicity. If you’re into Zig though, that’s cool—the less software written in C the better.

Rust Implementation of Roc Compiler to Be Rewritten in Zig

Richard Feldman:

However, we have decided to do a scratch-rewrite in a different language—namely, Zig. We’re very excited about it!

Here are some relevant comparison points between the two languages in the specific context of Roc’s compiler in 2025:

  • For many projects, Rust’s memory safety is a big benefit. As we’ve learned, Roc’s compiler is not one of those projects. We tend to pass around allocators for memory management (like Zig does, and Rust does not) and their lifetimes are not complicated. We intern all of our strings early in the process, and all the other data structures are isolated to a particular stage of compilation.
  • Besides the fact that Zig is built around passing around allocators for memory management (which we want to do anyway), it also has built-in ways to make struct-of-arrays programming nicer - such as MultiArrayList. We have abstractions to do this in Rust, but they have been painful to use. We don’t have experience with Zig’s yet, but looking through them, they definitely seem more appealing than what we’ve been using in Rust.
  • Rust’s large ecosystem is also a big benefit to a lot of projects. Again, Roc’s compiler is not one of them. We use Inkwell, a wrapper around LLVM’s APIs, but we actively would prefer to switch to Zig’s direct generation of LLVM bitcode, and Zig has the only known implementation of that approach. Other than that, the few third-party Rust dependencies we use have equivalents in the Zig ecosystem. So overall, Zig’s ecosystem is a selling point for us because once you filter out all the packages we have no interest in using, Zig’s ecosystem has a larger absolute number of dependencies we actually want to use.
  • Zig’s toolchain makes it much easier for us to compile statically-linked Linux binaries using musl libc, which is something we’ve wanted to do for a long time so that Roc can run on any distro (including Alpine in containers, which it currently can’t in our Rust compiler). We know this can be done in Rust, but we also know it’s easier to do in Zig. Zig’s compiler itself does this, and it does so while bundling LLVM (by building it from source with musl libc), which is exactly what we need to do. Once again, we can reuse Zig code that has done exactly what we want, and there is no equivalent in the Rust ecosystem.
  • There are various miscellaneous performance optimization techniques that we’ve wanted to use in Rust, but haven’t as much as we’d like to because they have been too painful. For example, we often want to pack some metadata into some of the bits of an index into an array. Zig lets us describe this index as a struct, where we specify what each of the bit ranges mean, and that includes arbitrary integer sizes. For example, in Rust we might store an index as a u32, but in Zig we could break that into a u27 for the actual index data (when we know the index will never need to exceed 2^27 anyway), and then we can specify that we want to use another 3 bits to store a small enumeration, and the remaining 2 bits for a pair of boolean flags. Again, these are all things we could do using Rust and bit shifts, but it’s much nicer to do in Zig. Tagless unions are another Zig feature we expect to be valuable.

Did I mention compile times? I’ll reiterate: compile times are a huge deal. Not only have they been painful in Rust, we also know that the fundamental unit of compilation is a crate, so we’re incentivized to organize our code around what will make for faster compile times rather than what makes the most sense to us as authors. Often the boundaries end up being drawn in the same place regardless, but when those two are in tension, it’s frustrating to have to sacrifice either feedback loop speed or code organization. In Zig we expect we can just have both.
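
That bit-packing point is concrete enough to sketch. Here's roughly what it looks like in Rust with manual shifts and masks (the names and exact layout are my own illustration, not Roc’s actual code); in Zig the same thing is a single packed struct with u27, u3, and two bool fields:

```rust
/// A u32 index with metadata packed into the upper bits:
/// bits 0..27 hold the index, bits 27..30 a small tag,
/// and bits 30 and 31 two boolean flags.
#[derive(Clone, Copy)]
struct PackedIdx(u32);

impl PackedIdx {
    const IDX_BITS: u32 = 27;
    const IDX_MASK: u32 = (1 << Self::IDX_BITS) - 1;

    fn new(index: u32, tag: u8, flag_a: bool, flag_b: bool) -> Self {
        debug_assert!(index <= Self::IDX_MASK && tag < 8);
        PackedIdx(
            index
                | (u32::from(tag) << Self::IDX_BITS)
                | (u32::from(flag_a) << 30)
                | (u32::from(flag_b) << 31),
        )
    }

    fn index(self) -> usize {
        (self.0 & Self::IDX_MASK) as usize
    }

    fn tag(self) -> u8 {
        ((self.0 >> Self::IDX_BITS) & 0b111) as u8
    }

    fn flag_a(self) -> bool {
        self.0 & (1 << 30) != 0
    }

    fn flag_b(self) -> bool {
        self.0 & (1 << 31) != 0
    }
}
```

None of this is hard, but every field change means revisiting the masks by hand, which is exactly the friction being described.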

This is a large amount of effort to invest knowing you’ll need to do it again to make the compiler self-hosting, so they must be pretty unhappy with the current implementation. The project was already using Zig for the Roc standard library, having replaced the Rust version some time ago, so it seems like they’re going in fairly well informed.

Consider me dubious of that memory safety claim. It’ll be interesting to see if memory related bugs that would be prevented by Rust show up in the new implementation.

Roc contributor Brendan Hansknecht on Lobsters:

Our Rust compiler implementation definitely could compile a lot faster. A huge pain for compile times is the fact that it grew organically. If we were to rewrite it again in Rust, I’m sure it would compile a lot faster.

These times are from an M1 Mac. They are approximately the same as on an Intel i7 Linux gaming laptop (the M1 used to be way faster; not sure when they became even). All of the below is just building the roc binary. Building tests and other facilities is much, much worse (and we already combine many test binaries to reduce linking time, though we could do it more).

After changing something in the CLI (literally zero dependencies and the best case possible):

Finished dev [unoptimized + debuginfo] target(s) in 4.15s

After changing something insignificant in the belly of the compiler:

Finished dev [unoptimized + debuginfo] target(s) in 16.95s

And for reference, clean build:

Finished dev [unoptimized + debuginfo] target(s) in 1m 58s

It’s not uncommon for a rewrite to yield benefits just from being a second implementation. It sounds like that would partly be the case here too: a second Rust implementation could likely be structured in a way that compiles faster. However, just as you can make JavaScript fast by writing to specific patterns that play to the JIT compiler, it’s nicer when things are fast by default.

Personally, the heavy focus on compile times doesn’t really resonate with me—and that’s ok—they’re clearly a huge deal to Richard. The features each person values in a language vary. Having come from the dynamic-language world with its constant runtime errors, I was drawn to Rust because it eliminates whole classes of errors without compromising runtime performance. I’m happy to trade a few seconds of compile time to know it’s quite unlikely I’ll ever have to debug NULL pointers, segmentation faults, or data races in production software.

Via Lobsters

The Invalid 68030 Instruction That Allowed the Macintosh Classic II to Boot

Doug Brown:

This is the story of how Apple made a mistake in the ROM of the Macintosh Classic II that probably should have prevented it from booting, but instead, miraculously, its Motorola MC68030 CPU accidentally prevented a crash and saved the day by executing an undefined instruction.

Incredible that we’re still discovering things about these machines some 30 years later. Doug dives deep to work out why MAME was unable to emulate the Macintosh Classic II.