Cloudflare Details Trie Structure to Optimise Header Sanitisation

Kevin Guthrie writing on the Cloudflare blog:

This small and pleasantly-readable function consumes more than 1.7% of pingora-origin’s total CPU time. To put that in perspective, the total CPU time consumed by pingora-origin is 40,000 compute-seconds per second. You can think of this as 40,000 saturated CPU cores fully dedicated to running pingora-origin. Of those 40,000, 1.7% (680) are only dedicated to evaluating clear_internal_headers. The function’s heavy usage and simplicity make it seem like a great place to start optimizing.

The scale of Cloudflare’s operation is staggering when put in terms like this.
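The core idea in the post is to replace repeated per-header list scans with a single walk over a trie keyed on the bytes of the header name. As a rough sketch of that general technique — a minimal toy trie, not Cloudflare’s actual trie-hard implementation, and with hypothetical header names:

```rust
use std::collections::HashMap;

// A minimal byte-trie over header names. Lookup cost is bounded by the
// length of the queried name, regardless of how many names are stored.
#[derive(Default)]
struct TrieNode {
    children: HashMap<u8, TrieNode>,
    is_end: bool,
}

#[derive(Default)]
struct HeaderTrie {
    root: TrieNode,
}

impl HeaderTrie {
    // Store a header name, normalised to lowercase (header names are
    // case-insensitive).
    fn insert(&mut self, name: &str) {
        let mut node = &mut self.root;
        for b in name.bytes() {
            node = node.children.entry(b.to_ascii_lowercase()).or_default();
        }
        node.is_end = true;
    }

    // True only if `name` exactly matches a stored header name.
    fn contains(&self, name: &str) -> bool {
        let mut node = &self.root;
        for b in name.bytes() {
            match node.children.get(&b.to_ascii_lowercase()) {
                Some(next) => node = next,
                None => return false,
            }
        }
        node.is_end
    }
}
```

A sanitiser would then walk each incoming request’s headers once, dropping any name the trie matches; the real implementation is considerably more compact and cache-friendly than a `HashMap`-per-node toy like this.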

River Reverse Proxy

Josh Aas writing on the Prossimo blog:

The River reverse proxy has come a long way since we announced the project in February.

River is a project of Prossimo, itself a project of the Internet Security Research Group (ISRG). ISRG are the folks behind Let’s Encrypt. The River reverse proxy is implemented in Rust.

Just about every significantly-sized deployment on the Internet makes use of reverse proxy software, and the most commonly deployed reverse proxy software is not memory safe. This means that most deployments have millions of lines of C and C++ handling incoming traffic at the edges of their networks, a risk that needs to be addressed if we are to have greater confidence in the security of the Internet.

Our own goal is to have River ready to replace Nginx and other reverse proxy software used by Let’s Encrypt within the next year, and we encourage other organizations to start considering where they might start to improve the security of their networks with memory safe proxy software.

That would constitute quite a significant production deployment for the project.

The Static Site Paradox

Loris Cro:

In front of you are two personal websites, each used as a blog and to display basic contact info of the owner:

  1. One is a complex CMS written in PHP that requires a web server, multiple workers, a Redis cache, and a SQL database. The site also has a big frontend component that loads as a Single Page Application and then performs navigation by requesting the content in JSON form, which then gets “rehydrated” client-side.
  2. The other is a collection of static HTML files and one or two CSS files. No JavaScript anywhere.

If you didn’t know any better, you would expect almost all normal users to have [2] and professional engineers to have something like [1], but it’s actually the inverse: only a few professional software engineers can “afford” to have the second option as their personal website, and almost all normal users are stuck with overcomplicated solutions.

For [2] to be successful for “normal” users I think there needs to be an approachable UI for making changes on the fly… and the implementation gets complicated quickly if you’re doing it on the website itself.

The funny thing is that we had exactly this in the early days of the web: native apps for authoring a website and uploading it, such as FrontPage, Dreamweaver, or Netscape Composer. It’s strange that these seem to have fallen out of favour now.

The Costs of the Move From 32-Bit to 64-Bit CPUs

Julio Merino:

All of this growth has been in service of ever-growing programs. But… even if programs are now more sophisticated than they were before, do they all really require access to a 64-bit address space? Has the growth from 8 to 64 bits been a net positive in performance terms?

Let’s try to answer those questions to find some very surprising answers.

We observe massive differences in the machine code generated for the trivial main function. The 64-bit code is definitely smaller than the 32-bit code, contrary to my expectations. But the code is also very different; so different, in fact, that ILP32 vs. LP64 doesn’t explain it.

Was the move to 64-bit CPUs all upside? Julio runs the numbers to show some pros and cons.
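For context on the ILP32 vs. LP64 terminology in the quote: these are the C data models, where ILP32 makes `int`, `long`, and pointers all 32 bits wide, while LP64 widens `long` and pointers to 64 bits and leaves `int` at 32. The same split is visible from Rust, where `usize` and references track the target’s pointer width — a small sketch (the printed sizes depend on which target you compile for):

```rust
use std::mem::size_of;

fn main() {
    // Pointer-sized types follow the target's data model:
    // 4 bytes on an ILP32 target, 8 bytes on an LP64 one.
    println!("usize:    {} bytes", size_of::<usize>());
    println!("&u8:      {} bytes", size_of::<&u8>());
    println!("Box<u8>:  {} bytes", size_of::<Box<u8>>());

    // Fixed-width types are unaffected by the data model.
    println!("u32:      {} bytes", size_of::<u32>());
    println!("u64:      {} bytes", size_of::<u64>());

    // Pointer-heavy data structures grow when pointers widen: this
    // node is 8 bytes under ILP32 but 16 under LP64 (4-byte payload,
    // pointer-sized link, padded to pointer alignment).
    struct ListNode {
        value: u32,
        next: Option<Box<ListNode>>,
    }
    println!("ListNode: {} bytes", size_of::<ListNode>());
}
```

That last point is part of the cost side of the ledger: doubling pointer width inflates pointer-heavy structures and thus cache pressure, independent of any instruction-encoding differences.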

Mastodon 4.3 Released

Eugen Rochko writing on the Mastodon blog:

Mastodon 4.3 just landed! If you’re a mastodon.social user, you might have already seen some of this in action as we’ve been gradually rolling out these updates over the course of the last 11 months in nightly releases, but we’re finally making a new stable release available to the community. If you use a different server, you will get access to these improvements once your server operator upgrades.

This one’s been in the works for a long time. The notification improvements are most welcome. I have been using the Phanpy Mastodon client for a while, partly due to its grouped notifications. Glad this functionality has made it to the official client now.