Weeknotes 2025 W32: Crunchy

August 4–10, 2025

Quick bits:


I have been reading up on the Jujutsu version control system (VCS), but haven’t installed it yet. I figured that I’ve already made life hard for myself lately, and learning another VCS would not be productive right now.

Steve Klabnik’s Jujutsu tutorial is inspiring, though, and I found Madeleine Mortensen’s Jujutsu For Busy Devs article to be insightful, too.

Jujutsu is making me nostalgic for Mercurial. Mercurial is the VCS I used before Git (and after Subversion, if people still remember that). Back in the day, I had hoped that Mercurial would win the Mercurial-Git war, but we all know who the winner ended up being. Boo.

Anyway — I’ll postpone learning Jujutsu and pick it up another time.


I’ve been working on a parser for TomatenMark in Zig. The original parser implementation in Ruby wasn’t up to my standards; this is one of the reasons why I never finished open-sourcing it.

Sadly, I don’t think the Zig version is all that much better. This is in part because I am still getting used to Zig, and in part because I find it difficult to write parsers for markup languages (more so than programming languages). Perhaps I need to dive into the source code of fast Markdown parsers to see how they do it.

Zig has a built-in fuzzer, which would be just excellent for this parser. But it unfortunately isn’t so great and — more importantly — consistently crashes on macOS.


The TomatenMark parser is part of my attempt at writing a static-site generator in Zig. Progress there is slow, in part because it is an inherently complex project. A static-site generator is not a simple tool! But also: I’m still very much learning Zig.

I’m quite happy with my choice of language. I wouldn’t have gotten far with C or Rust. Zig is still not easy, however.

The first transformer[2] I implemented was a Markdown-to-HTML one using koino. The second one was also a Markdown-to-HTML one, but using pulldown-cmark. The latter turned out to be an order of magnitude faster, and showed that plugins could fairly easily be written in Rust, too. Roughly 70ms to compile all the Markdown pages on my web site[3] — not bad, though it doesn’t even come close yet to doing all the transformations needed for my (admittedly pretty complex) web site.

The goal I have for this project is to build something that is 50× faster than Nanoc, both for cold compilation and incremental compilation. It’ll be a challenge, but with parallelism and async IO, I think it is a realistic goal.

Filters/transformers are currently implemented as dynamic libraries (.dylib/.so/.dll) that export a function with the signature usize (char *in_str, usize in_len, char **out_str_ptr). This still lacks support for parameters, but it’s a start.

The idea, for now at least, is to have processing be driven by a Lua DSL. Something like this, perhaps:

ssg.match("/articles/**/*", function (b, item)
  -- Remember the untouched source for later.
  b.snapshot("raw")

  if item:path():endwith(".md") then
    b.filter("koino_markdown")
  end

  b.snapshot("pre-layout")

  b.layout("/default.*")
  b.write{extension = ".html"}

  -- Go back to the raw source and emit a plain-text version, too.
  b.loadsnapshot("raw")
  b.filter("cleanup_text")
  b.write{extension = ".txt"}
end)

It is less elegant than the Ruby version, I’ll admit. But it’s also simpler: the concept of “item representation” is no longer needed, for example. A single rule can output multiple files (an HTML and a TXT one in the example above).

Anyway. It’s all a big experiment.


Joan Westenberg deleted all her notes, and I’m rather inspired.

Not that I’d do the same, because there is a lot of useful stuff in my notes collection. Stuff I actively use and refer to.

Still, I spent some time pruning my notes. Anything with questionable value is out. A good notes collection is actively maintained and always up to date. A notes collection, I think, should never become an archive.

This is where Deniki is interesting, too: I can create a separate wiki (or “notes collection,” if you will) that I can archive once it has lost its relevance.

Like when a project has come to an end. I’ve got a handful of notes about my ongoing job search, but I know that they will become obsolete once that job search is over. I’ve got a handful of stale notes from the 2024 job search, too. These are dead notes, and belong in an archive rather than my notes collection.

Note-taking is not the skill of collecting stuff.


My note-taking app Bear now supports math formulas. Nice! And the best part: the syntax matches what I was already using, and so getting Bear formulas onto my web site requires, well, literally zero effort.

Anyway, have a simple formula:

$$a' = \begin{cases} a/2 & \text{if } a \text{ is even} \\ 3a + 1 & \text{if } a \text{ is odd} \end{cases}$$

Did you know that for every positive integer, if you keep applying this formula over and over again, you’ll eventually get 1? It’s true! For a decent challenge, come up with a proof and then email it to me.


I keep trying to figure out how to use AI for writing code. It is, after all, not just a sought-after skill but often “required knowledge.”

But I am struggling so much to have it generate anything that is remotely useful. It consistently generates code that, at best, is incomplete, requiring a ton of re-work.

The last few attempts have resulted in code that is syntactically invalid, with random bits of Markdown, JSON, and HTML injected into the Zig and Rust code, seemingly lifted from bizarre unrelated prompts. Wild. Like Gemini is having a stroke.

A common response I hear is that I need to use agentic AI. So, I tried Zed with Claude Sonnet 4, and it spent several minutes producing a custom thread-pool implementation (despite a proper one existing in the standard library) that performed worse than single-threaded code due to misuse of mutexes. Afterwards, it bragged about how greatly performance had improved (again: it had not). I threw out that code.

Agentic AI also has the tendency to rewrite unrelated code for no reason. It’ll turn nicely factored code into shit, move stuff around for no reason, and also rewrite comments. I had to revert a bunch of changes. It’s so frustrating.

Picture the most incompetent coworker you’ve ever had. Make them twenty times worse. Make them very fast, so that they’ll apply your feedback very quickly (but again, incorrectly) so that all your time is spent handholding them. Make them sound aggravatingly self-confident. Make them sycophantic af. Cue screaming.

It is worse than unhelpful. It’s destroying my desire to work on side projects. It is making me want to chuck my laptop through the window.

Which makes me wonder even more: How the fuck do people actually use this tech? The stories of people vibe-coding entire applications with it — how?!


Entertainment:


Links:


  1. The J isn’t much of an issue because in Tarmak-1, that key is in a temporary location anyway. ↩︎

  2. A transformer is comparable to a filter in Nanoc. The name “filter” has always bothered me, and I don’t know why I picked that as a name. I’m not convinced about “transformer” either, but it’s an improvement. ↩︎

  3. For the same content, kramdown (which I use on my web site) takes about 2 seconds. ↩︎

  4. Cyberpunk 2077 (CD Projekt Red, 2020), published by CD Projekt. ↩︎

You can reply to this weeknotes entry by email. I’d love to hear your thoughts!
If you like what I write, stick your email address below and subscribe. I send out my weeknotes every Sunday morning. Alternatively, subscribe to the web feed.