Nanoc’s in-memory approach limits scalability

Nanoc processes data in memory: it loads all (non-binary) items and layouts into memory, along with compiled content, outdatedness results, the dependency graph, and more.
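To make the shape of this concrete, here is a minimal Ruby sketch of an all-in-memory data model. The `Item` and `InMemorySite` names are hypothetical stand-ins, not Nanoc’s actual internals:

```ruby
# Hypothetical sketch of an all-in-memory data model (not Nanoc’s real API).
Item = Struct.new(:identifier, :attributes, :content, :compiled_content)

class InMemorySite
  attr_reader :items, :layouts, :dependency_graph

  def initialize
    @items = {}            # identifier => Item, content and all
    @layouts = {}          # identifier => layout source
    @dependency_graph = {} # identifier => identifiers it depends on
  end

  # Loading reads every source file fully into memory up front.
  def load(dir)
    Dir.glob(File.join(dir, "**", "*.md")).each do |path|
      identifier = path.delete_prefix(dir)
      @items[identifier] = Item.new(identifier, {}, File.read(path), nil)
    end
  end
end
```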

While RAM is fast, it is limited. Very large sites (with hundreds of thousands of pages) might need more memory than is available and push the system into disk swapping. Additionally, keeping large numbers of objects in memory puts strain on the garbage collector. Either way, performance could drop drastically.
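A back-of-envelope estimate shows how this adds up; every figure below is an illustrative assumption, not a measurement:

```ruby
# Rough memory estimate; all per-page figures are assumptions.
pages           = 200_000
raw_kb_per_page = 10  # source content
compiled_copies = 2   # compiled content kept alongside the raw content
overhead_kb     = 5   # attributes, dependency edges, outdatedness results

total_kb = pages * (raw_kb_per_page * (1 + compiled_copies) + overhead_kb)
puts "~#{total_kb / 1024 / 1024} GB resident"  # => ~6 GB
```

At that scale, the live object count also runs into the millions, which is what lengthens garbage-collection pauses.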

Lazy-loading is done for item content, but it is not very effective. First, I/O happens at unpredictable times and can cause slowdowns. Second, lazy-loading does not help when metadata is combined with content in the same file (as with frontmatter), which is often the case: reading an item’s metadata then forces reading its entire content anyway.
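A sketch of why frontmatter defeats lazy-loading; `LazyItem` is hypothetical, not Nanoc’s implementation:

```ruby
require "yaml"

# Hypothetical lazily-loaded item; not how Nanoc actually does it.
class LazyItem
  def initialize(path)
    @path = path
  end

  # Content is only read from disk on first access…
  def content
    @content ||= File.read(@path) # I/O happens here, at an unpredictable time
  end

  # …but because metadata lives in the same file as the content
  # (frontmatter), merely asking for an attribute reads the whole file.
  def attributes
    @attributes ||= begin
      raw = content # defeats the laziness
      if raw.start_with?("---")
        YAML.safe_load(raw.split(/^---\s*$/, 3)[1].to_s) || {}
      else
        {}
      end
    end
  end
end
```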

Nanoc could instead keep data on disk rather than in memory, but then it would likely need to make good use of concurrency to hide the latency of disk I/O. On SSDs, this could be acceptable, but probably not on spinning drives.
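As a sketch of what that concurrency could look like, plain Ruby threads can overlap many reads; the worker count and the helper name are assumptions:

```ruby
require "etc"

# Read many files concurrently so the process is not blocked on one
# disk read at a time. Plain Ruby threads; they release the GVL during I/O.
def read_all_concurrently(paths, workers: Etc.nprocessors)
  queue = Queue.new
  paths.each { |p| queue << p }
  results = Queue.new

  Array.new(workers) do
    Thread.new do
      while (path = (queue.pop(true) rescue nil))
        results << [path, File.read(path)]
      end
    end
  end.each(&:join)

  Array.new(results.size) { results.pop }.to_h
end
```

This pattern helps on an SSD because the device handles many outstanding reads well; on a spinning drive, the same concurrent reads turn into seeks and can easily make things slower.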

It comes down to what Nanoc wants to optimise for. What size of web site would be considered “normal” for Nanoc?
