Packaging Automation, Next Steps

Hot damn, we've been busy lately. No, really. The latest development cycle saw us focus exclusively on boulder, our build tooling. As of today it features a proof-of-concept boulder new subcommand for the automatic generation of packaging templates from an upstream source (e.g. a tarball).

Before we really start this blog post off, I'd like to thank everyone who is supporting the project! All of the OpenCollective contributions will make it easier for me to work full time on Serpent OS =) Much love <3

Look at all the buildiness

But, but build flavours ...

Alright, you got me there: certain projects prefer to abstract the configuration, build and installation of packages and just be given some kind of hint about the build system, e.g. manually setting autotools.

Serpent OS packaging is declarative and well structured, and relies on the use of RPM-style "macros" for distribution integration and common tasks to ensure consistent packaging.

We prefer a self documenting approach that can be machine validated rather than depending on introspection at the time of build. Our stone.yml format is very flexible and powerful, with automatic runtime dependencies and package splitting as standard.

..Doesn't mean we can't make packaging even easier though.

Build discovery

Point boulder at an upstream source and it will perform a deep analysis of the sources to determine the build system type, build dependencies, metadata, licensing, etc. Right now it's just getting ready to leave the proof-of-concept stage, so it has a few warts; however, it does have support for generating package skeletons for the following build systems:

  • cmake
  • meson
  • autotools

We're adding automation for Perl and Python packaging (and Golang, Rust, etc) so we can enforce consistency, integration and ease without being a burden on developers. This will greatly reduce the friction of contribution - allowing anyone to package for Serpent OS.

We're also able to automatically discover build time dependencies during analysis and add those to the skeleton stone.yml file. We'll enhance support for other build systems as we go, ensuring that each new package is as close to done on creation as possible, with review and iteration left to the developer.
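
The exact interface is still settling while this is a proof of concept, so treat the following as an illustrative sketch of the workflow rather than the final command line (the tarball URL is just an example):

    # Point boulder at an upstream tarball; it analyses the sources and emits a
    # skeleton stone.yml with the detected build system, build dependencies,
    # metadata and licensing filled in.
    boulder new https://www.nano-editor.org/dist/v6/nano-6.3.tar.xz

    # Review and iterate on the generated recipe before submitting it.
    $EDITOR stone.yml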

License compliance

A common pain in the arse when packaging for any Linux distribution is ensuring the package information is compliant in terms of licensing. As such, we must know all of the licensing information, as well as each license's FSF and OSI approval, for our continuous integration testing.

...Finding all of that information is truly a painful process when conducted manually. Thankfully boulder can perform analysis of all licensing files within the project to greatly improve compliance and packaging.

Every license listed in a stone.yml file must use a valid SPDX identifier, and be accurate. boulder now scans all license files, looking for matches with both SPDX IDs as well as fuzzy-matching the text of input licenses to make a best-guess at the license ID.

This has so far been highly accurate and correctly identifies many hundreds of licenses, ensuring a compliant packaging process with less pain for the developers. Over time we'll optimise and improve this process to ensure compliance for our developers rather than blocking them.

As of today we support the REUSE specification for expressing software licenses too!
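
None of this is boulder's actual implementation, but if you want a feel for the inputs such a scan works with, a REUSE-style source tree can already be interrogated from the shell (GNU grep assumed):

    # List the unique SPDX identifiers declared in file headers...
    grep -rhoP 'SPDX-License-Identifier:\s*\K.+' . | sort -u

    # ...and the license texts shipped under the REUSE LICENSES/ directory,
    # which are named by their SPDX identifiers.
    ls LICENSES/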

Next on the list

The next steps are honest-to-goodness exciting for us. Or should I say.. exiting?

bill

Work formally begins now on Bootstrap Bill (Turner). Whilst we did successfully bootstrap Serpent OS and construct the Protosnek repository, the process for that is not reproducible, as boulder has gone through massive changes in that time.

The new project will leverage boulder and a newly designed bootstrap process to eliminate all host contamination and bootstrap Serpent OS from stone.yml files, emitting an immutable bootstrap repository.

Layering support will land in moss and boulder to begin the infrastructure projects.

Build Submission ("Let's Roll")

The aim is to complete bill in a very short time so we can bring some initial infrastructure online to facilitate automatic builds of submitted jobs. We'll use this process to create our live repository, replacing the initial bootstrap repository from bill.

At this point all of the tooling we have will come together to allow us all to very quickly iterate on packaging, polish up moss and race towards installed systems with online updates.

A Word From The Founder

Well well, it's been a long time since I personally wrote a post.. :) So let's keep this short and sweet, shall we? I'm returning to full time work on Serpent OS.

The 6th of July will be my last day at my current employment, having tendered my 30-day notice today. Despite enjoying my current position, the reality is that my passion and focus is Serpent OS.

I'm now in a transition process and will ramp up my efforts with Serpent OS. Realistically I need to reduce the outgoing costs of the project, and with your help I can gain some level of financial support as we move through the next stages of development. Worst case, I will only take on part-time or contractual gigs, allowing my primary focus to be Serpent OS.

I'll begin accelerating works and enabling community contribution so we can get the derailed-alpha train back on the tracks.

I have absolute faith in this project, the community and our shared ability to deliver the OS and tooling. To achieve it will require far more of my time and I'm perfectly willing to give it.

Thank you to everyone who has been supporting the project; it is now time to deliver. Not just another run-of-the-mill distribution, but a technically competent and usable distribution that is not only different but better.

Let's do this in the most grassroots and enjoyable way possible =)

RELR Brings Smaller Files, More Performance?

RELR is an efficient method of storing relative relocations (but is not yet available in upstream glibc). It brings a significant reduction in file size, often in the vicinity of 5% for libraries and even higher for PIE binaries. We also take a look at the performance impact of enabling RELR, and that looks really good too! Smaller files with more performance - you can have your cake and eat it too!

Delicious Size Savings

Everyone enjoys smaller files, especially when they come for free! RELR provides a very efficient method of storing relative relocations, requiring only a few percent of the space needed to store them in the .rela.dyn section, which is what currently happens. However, it can't store everything, so the .rela.dyn section remains (though much smaller).

Here's an example of the sections of libLLVM-13.so with and without RELR, where the .rela.dyn section takes up a whole 2MB! When enabling RELR, .rela.dyn shrinks down to just under 100KB while adding a new .relr.dyn section which is just over 20KB! That's nearly a 1.9MB file size reduction, so you'll get smaller packages, smaller updates, and it will be even faster to create delta packages from our servers. For reference, some of the biggest files have a .rela.dyn section over 10MB!

Section       Without RELR (bytes)   With RELR (bytes)
.dynstr                  2,285,738           2,285,723
.rela.dyn                2,006,472              97,464
.relr.dyn                        -              21,688
.text                   28,708,290          28,708,386
.dynamic                       592                 608
Total                   44,853,987          42,966,764
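
If you want to poke at a binary on your own system, these section sizes come straight from standard binutils tooling; roughly like this (the library path is just an example):

    # Compare the relocation-related section sizes (shown in hex).
    readelf -SW /usr/lib/libLLVM-13.so | grep -E '\.(rela|relr)\.dyn|\.dynstr'

    # A DT_RELR entry in the dynamic section confirms RELR is actually in use.
    readelf -dW /usr/lib/libLLVM-13.so | grep -i relr

    # At link time RELR is requested with, for example, lld's
    #   -Wl,--pack-dyn-relocs=relr
    # or on newer toolchains -Wl,-z,pack-relative-relocs (which also adds the
    # GLIBC_ABI_DT_RELR version dependency discussed later in this post).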

Smaller, But at What Cost?

While most of the discussion about RELR is around the size savings, there's been very little about the performance impact of enabling it. For most things, it's not going to make a noticeable difference, as it should only really impact the loading of binaries. But there's one rule we have, and that's to measure everything! We care about every little detail, where many 1-2% improvements can add up to significant gains.

First, we needed a test to determine whether we could detect a difference between an LLVM built with RELR and one without. The speed of the compiler is vital to a distro, where lackluster performance of the build system hurts all contributors and anyone performing source-based builds. clang in this example was built using a shared libLLVM so that it would load the RELR section, and it's large enough to be able to measure a difference in load times (if one exists). Building gettext was chosen as the test (total time includes the tarball extraction, configure, make and make install stages), rather than a synthetic binary loading test, to reflect real world usage. The configure stage is very long when building gettext, so clang is called many times for short compiles. Let's take a look at the results:

no RELR:
[Build] Finished: 1 minute, 57 secs, 80 ms, 741 μs, and 2 hnsecs
[Build] Finished: 1 minute, 57 secs, 691 ms, 586 μs, and 4 hnsecs
[Build] Finished: 1 minute, 56 secs, 861 ms, 31 μs, and 8 hnsecs

RELR:
[Build] Finished: 1 minute, 55 secs, 244 ms, 213 μs, and 8 hnsecs
[Build] Finished: 1 minute, 55 secs, 400 ms, 158 μs, and 8 hnsecs
[Build] Finished: 1 minute, 55 secs, 775 ms, 40 μs, and 8 hnsecs

RELR+startup patch:
[Build] Finished: 1 minute, 54 secs, 979 ms, 166 μs, and 8 hnsecs
[Build] Finished: 1 minute, 54 secs, 820 ms, and 675 μs
[Build] Finished: 1 minute, 54 secs, 713 ms, 440 μs, and 3 hnsecs

Here we see the base configuration was able to build gettext in 117.21s on average. When we enabled RELR in our LLVM build (all other packages were still without RELR), the average build time decreased by 1.74s! That does not sound like a lot, but the time spent loading clang is only a portion of the total, yet it still gives a 1-2% performance lift over the whole build. While we were looking at start-up time, I ran another test, this time adding a patch to reduce the paths searched on startup as well as enabling RELR. This patch reduced the average build time by a further 0.63s!

That's a 2.37s reduction in the build just from improving the clang binary's load time.

What This Means - RELR by Default

So what actually is RELR? I can't really do the topic justice, so I'll point you to a great blog post about it, Relative Relocations and RELR. It's quite technical for the average reader, but definitely worth a read if you like getting into the details. Unsurprisingly, the author (Fangrui Song) started the initial push for getting RELR support upstream in glibc (at the time of this post the patch series has not yet been committed to glibc git).

What I can tell you, is that we've applied the requisite patches for RELR support and enabled RELR by default in boulder for builds. Our container has been rebuilt and all is working well with RELR enabled. More measurements will be done in future in the same controlled manner, particularly around PIE load times.

Caveats - The Hidden Details

The performance benchmark was close to an optimal case for RELR, as clang is called thousands of times in the build, with each invocation improving load time by about 0.6-0.7ms on average. We can presume that using RELR on smaller files is unlikely to regress load times. It definitely gives us confidence that things would be about the same or better in most situations, even if not noticeable or measurable in most use cases. Minimizing build times is a pretty significant target for us, so even these small gains are appreciated.

The size savings can vary between packages and not everything can be converted into the .relr.dyn section. The current default use of RELR is not without cost as it adds a version dependency on glibc. We will ensure we ship a sane implementation that minimizes or removes such overhead.

It was also not straightforward to utilize RELR in Serpent OS. The pending upstream glibc patch series included a patch which caused issues when enabling RELR here (patch 3/5). As we utilize two toolchains, gcc/bfd and clang/lld, both need to function independently to produce a working OS. However, the part "Issue an error if there is a DT_RELR entry without GLIBC_ABI_DT_RELR dependency nor GLIBC_PRIVATE definition." meant that glibc would refuse to load files linked by lld despite having the ability to load them. lld has supported RELR for some time already, but does not create the GLIBC_ABI_DT_RELR dependency that is required by glibc. I have added my feedback to the patch set upstream. lld now has support for this version dependency upstream if we ever decide to use it in future.

After dropping that patch and patching bfd to no longer generate the GLIBC_ABI_DT_RELR dependency either, I was finally able to build both glibc and LLVM with the same patches. With that hurdle overcome, rebuilding the rest of the repository went without a hitch, so we are now enjoying RELR in all of our builds and it is enabled by default.

There is even further scope for more size savings by switching the .rela.dyn section for the .rel.dyn section (this is what is used for 32-bit builds, and one of the reasons those files are smaller!). lld supports switching the section type, but I don't believe glibc will read the output, as it expects the psABI-specified section (something musl can handle though).

The Cost of Adding GLIBC_ABI_DT_RELR

A quick check of two equivalent builds (one adding the GLIBC_ABI_DT_RELR version dependency and one not) showed an increase of 34 bytes in the file's sections (18 bytes in .dynstr and 16 bytes in .gnu.version_r). It also means having to validate that the GLIBC_ABI_DT_RELR version is present in the libc and that the file using RELR includes this version dependency. This may not sound like much, but it is completely unnecessary! Note that the testing provided in this blog post is without GLIBC_ABI_DT_RELR.
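
For the curious, whether a given binary carries this version dependency is easy to check with readelf (the path is illustrative):

    # Dump the version requirements (.gnu.version_r); a binary linked by a
    # GLIBC_ABI_DT_RELR-aware bfd will list the extra version, lld output won't.
    readelf -V /usr/bin/clang | grep GLIBC_ABI_DT_RELR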

Regardless of what eventuates, these negatives won't ship in Serpent OS. We will still be able to support files that include the version dependency (when appimage and other distros catch up), as it will still exist in libc, but we won't have the version check in files, nor will glibc check that the version exists before loading packages built by boulder.

Making Deltas Great Again! (Part 1)

In Optimising Package Distribution we discussed some early findings for implementing binary deltas in Serpent OS. While working through the implementation, we found its requirements to be suboptimal for what we were after. Here we take a fresh look at the issue and what we can do to make deltas useful in almost all situations without the drawbacks.

Deltas Have a Bad Reputation

I remember back in the early 2000s on Gentoo when someone set up a server to produce delta patches from source tarballs to distribute to users with small data caps, such as myself. When requesting a delta, the job would be added to a queue (which could occasionally take minutes) before producing a small patch to download. This was so important at the time to reduce the download size that the extra time was well worth it!

Today things are quite different. The case for deltas has weakened as internet speeds have increased. Shaving 20MB off a package may be a one second saving for some, but 10 seconds for others. The largest issue is that deltas have typically pushed extra computational effort onto the user's computer in exchange for the smaller size. With a fast internet connection that cost is a real burden, where deltas take longer to install than simply downloading the full package.

What's so Bad About the Old Method?

The previous idea of using the full payload for deltas was very efficient in terms of distribution, but required changes in how moss handles packages to implement it. Having the previous payload available (and being fast) means storing the old payloads on disk. This increases the storage requirements for the base system, although that can be reduced by compressing the payloads to reduce disk usage (but increasing CPU usage at install time).

To make it work well, we needed the following:

  • Install all the files from a package and recreate the payload from the individual files. However, Smart System Management allows users to avoid installing unneeded locale files, and we would not be able to recreate the full payload without them.
  • Alternatively, store the full payloads on disk. Then there's a tradeoff between doubling storage space and the additional overhead of compressing the payloads to reduce it.
  • Accept significant memory requirements to use a delta when a package is large.

In short, we weren't happy with having to increase the disk storage requirements (potentially more than 2x) or the increase in CPU time to create compressed payloads to minimize it. This was akin to the old delta model: smaller downloads but significantly slower updates.

Exploring an Alternative Approach

Optimal compression generally benefits from combining multiple files into one archive rather than compressing each file individually. With zstd deltas, however, since you read in a dictionary (the old file), you already have good candidates for compression matching. The real question was simply whether creating a delta for each file was a viable method for delta distribution (packaged into a single payload, of course).
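
Under the hood this is just zstd's --patch-from mode applied per file. A minimal sketch of creating and applying a single per-file delta, using hypothetical file names and the maximum-compression flags from this testing (very large files may additionally need --long/--memory tweaks):

    # Create the delta: the old file acts as the dictionary for the new file.
    zstd --ultra -22 --zstd=chainLog=30 \
         --patch-from=old/usr/lib/libfoo.so.1.2 \
         new/usr/lib/libfoo.so.1.3 -o libfoo.so.1.3.delta.zst

    # Apply the delta on the user's machine to recreate the new file.
    zstd -d --long=30 --patch-from=old/usr/lib/libfoo.so.1.2 \
         libfoo.so.1.3.delta.zst -o usr/lib/libfoo.so.1.3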

As the real benefit of deltas is reducing download size, the first thing to consider is the size impact. Using the same package as the previous blog post (but with newer versions of QtWebEngine and zstd), we look at what happens when you delta each file individually. Note that the package is unusual in that the largest file is 76% of the total package size, and that delta creation time increased a lot for files larger than 15-20MB at maximum compression.

                    Full Tarball   Individual Files   Full Tarball (--zstd=chainLog=30)   Individual Files (--zstd=chainLog=30)
Time to create            134.6s             137.6s                              157.9s                                 150.9s
Size of delta             30.8MB             29.8MB                              28.3MB                                 28.6MB
Peak Memory               1.77GB             1.64GB                              4.77GB                                 2.64GB

Quite surprisingly, the delta sizes were very close! Most surprising was that without increasing the size of the chainLog in zstd, the individual file approach actually resulted in smaller deltas! We can also see how much lower the memory requirements were (and they would be much smaller when there isn't one large file in the package!). Our locale and doc trimming of files will still work nicely, as we don't need to recreate the locale files that aren't installed (as we still don't want them!).

The architecture of moss allows us to cache packages, where all cached packages are available for generating multiple system roots (including with our build tool boulder) without any need for the original package file. Retaining old payloads or packages is therefore no longer required or useful, eliminating the drawbacks of the previous delta approach. The memory requirements are also reduced, as the maximum memory requirement scales with the size of the largest file rather than the entire package (which is generally a lot bigger). There are many packages containing hundreds of MBs of uncompressed data and a few into the GBs, but the largest file I could find installed locally was only 140MB, with only a handful over 100MB. This reduction in memory requirements is a huge improvement, and the small amount of extra memory used to extract the deltas is likely to go unnoticed by users.

Well everything sounds pretty positive so far, there must be some drawback?

The Impact on Package Cache Time

As the testing method for this article is quite simplistic (bash loops and calls to zstd directly), I estimated the additional overhead from creating deltas for individual files to be about 20ms compared to a proper solution. The main difference from the old delta method is how we extract the payloads and recreate the files of the new package. Using the full package, you simply extract the content payload and split it into its corresponding files. The new approach requires two steps: extracting the payload (which we could, in theory, not compress) and then applying patches to the original files to recreate the new ones. Note that times differ slightly from the previous table due to minor variations between test runs.

                                   Normal Package   Individual Delta File Package
Time to Delta Files                             -              148.0s (137 files)
Time to Compress Payload                    78.6s                            4.0s
Size of Uncompressed Payload              165.8MB                          28.9MB
Size of Compressed Payload                 51.3MB                          28.6MB
Instructions to Extract Payload  2,876.9m (349ms)                    33.1m (34ms)
Instructions to Recreate Files                  -                1,785.6m (452ms)
Total Instructions to Extract    2,876.9m (349ms)                1,818.7m (486ms)

What's important to note is that this reflects a worst-case scenario for the delta approach, where all 137 files were different between the old and new version of the package. Package updates where files remain unchanged allow us to omit them from the delta package altogether! So the delta approach not only saves time downloading files, but also requires fewer CPU instructions to apply the update. It's not quite a slam dunk though, as reading the original file as a dictionary results in an increase in the elapsed time of extraction (though the extra time is likely much less than the time saved by downloading 20MB less!).

In Part 2 we will look at some ways we can tweak the approach to balance the needed resources for creating delta packages and to reduce the time taken to apply them.

Note: this was intended to be a two-part series as it contains a lot of information to digest.
However, Part 2 follows directly below.

There's more than one way to create a delta! This post follows on from the earlier analysis of creating some basic delta packages. Where this approach, and the use of zstd, really shines is in the options it gives us for managing the overhead, from creating the deltas to getting them installed locally. Next we explore some ideas for minimizing the caching time of delta files.

Improving Delta Cache Time

To get the best bang for your buck with deltas, it is essential to reduce the size of the larger files. My experience in testing was that there wasn't a significant benefit from creating deltas for small files. In this example, we only create delta files when they are larger than a specific size while including the full version of files that are under the cutoff. This reduces the number of delta files created without impacting the overall package size by much.

Only Delta Files Greater Than     Greater than 10KB   Greater than 100KB   Greater than 500KB
Time to Delta Files               146.1s (72 files)    146.3s (64 files)    139.4s (17 files)
Time to Compress Payload                       3.9s                 4.0s                 8.3s
Size of Uncompressed Payload                 28.9MB               29.3MB               42.4MB
Size of Compressed Payload                   28.6MB               28.7MB               30.5MB
Instructions to Extract Payload        37.8m (36ms)         34.7m (29ms)        299.1m (66ms)
Instructions to Recreate Files     1,787.7m (411ms)     1,815.0m (406ms)     1,721.4m (368ms)
Total Instructions to Extract      1,825.5m (447ms)     1,849.7m (435ms)     2,020.5m (434ms)

Here we see that not creating deltas for files under 100KB barely impacts the size of the delta at all, while reducing caching time by 50ms compared to creating a delta for every file (486ms from the previous blog post). It even leads to up to a 36% reduction in CPU instructions to perform caching via the delta compared to using the full package. To show how effective this delta technique really is, I chose one of the worst examples; I would expect many deltas to be faster to cache when there are files that are exact matches between the old and new package. The largest file alone took 300ms to apply the delta, and overheads tend to scale a lot when you start getting to larger files.
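
The cutoff itself is simple to express with the same kind of bash-and-zstd loop used for this testing; a rough sketch, with the paths, layout and threshold all being illustrative:

    # Delta files above the cutoff against their old counterparts; ship
    # everything else in full (still compressed).
    cutoff=102400   # 100KB
    cd new && find . -type f | while read -r f; do
        mkdir -p "../deltas/$(dirname "$f")"
        if [ "$(stat -c%s "$f")" -gt "$cutoff" ] && [ -f "../old/$f" ]; then
            zstd -19 --long --patch-from="../old/$f" "$f" -o "../deltas/$f.zst"
        else
            zstd -19 "$f" -o "../deltas/$f.zst"
        fi
    done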

There are also some steps we can take to make sure that caching a delta is almost always faster than the full package (solving the only real drawback for users), at the cost only of Serpent OS resources to create these delta packages.

Creating More Efficient Deltas

For this article, all the tests have been run with zstd --ultra -22 --zstd=chainLog=30...until now! The individual file delta approach is more robust at lower compression levels, keeping package size small while reducing how long deltas take to create. Let's take a look at the difference, while also ensuring --long is enabled. This testing is combined with the results above, only creating deltas for files larger than 10KB.

Individual Delta File Package            zstd -12           zstd -16           zstd -19           zstd -22
Time to Delta Files                          6.7s             113.9s             142.3s             148.3s
Time to Compress Payload                     0.5s               3.2s               5.3s               4.0s
Size of Uncompressed Payload               41.1MB             30.6MB             28.9MB             28.9MB
Size of Compressed Payload                 40.9MB             30.3MB             28.6MB             28.6MB
Instructions to Extract Payload      46.5m (35ms)       51.2m (28ms)       42.6m (33ms)       37.8m (36ms)
Instructions to Recreate Files   1,773.7m (382ms)   1,641.5m (385ms)   1,804.2m (416ms)   1,810.9m (430ms)
Total Instructions to Extract    1,820.2m (417ms)   1,692.7m (413ms)   1,846.8m (449ms)   1,848.7m (466ms)

Compression levels 16-19 look quite interesting: you start to reduce the time taken to apply the delta as well, while only seeing a small increase in size. For comparison, at -19 it only took 9s to delta the remaining 39.8MB of files when excluding the largest file (it was 15s at -22). While the time taken between -19 and -22 was almost the same, -19 took 27% fewer instructions to create the deltas than -22 (and -16 uses 64% fewer instructions than -22). It will need testing across a broader range of packages to see the overall impact and to settle on a sensible default.

As a side effect of reducing the compression level, you also get another small decrease in the time to cache a package. The best news of all is that these numbers are already out of date. Testing was performed last year with zstd 1.5.0, and newer releases have included notable speed improvements to both compression and decompression. Great news given how fast it is already! Here's a quick summary of how it all ties together.

Deltas Are Back!

This blog series has put forward a lot of data that might be difficult to digest...but what does it mean for users of Serpent OS? Here's a quick summary of the advantages of using this delta method on individual files when compared to fetching the full packages:

  • Package update sizes are greatly reduced to speed up fetching of package updates.
  • moss architecture means that we have no need to leave packages on disk for later use, reducing the storage footprint. In fact, we could probably avoid writes (except for the extracted package of course!) by downloading packages to tmpfs where you have sufficient free memory.
  • Deltas can be used for updating packages both for packaging and for your installed system. There's no need for a full copy of the original package when packaging - a great benefit when combined with the source repository.
  • Deltas are often overlooked due to being CPU intensive while most people have pretty decent internet speeds. That has a lot to do with how they are implemented.
  • With the current vision for Serpent OS deltas, they will require fewer CPU instructions to use than full packages, though they may slightly increase the time to cache some packages (we are talking ms). And we haven't even considered the reduction in time taken to download the delta vs the full package, which more than makes up the difference!
  • The memory requirements are reduced compared to the prior delta approach, especially if you factor in extracting the payload in memory (possibly using tmpfs) as part of installation.

Getting More Bang for Your Buck

There's still plenty more work to be done to implement deltas in Serpent OS, and they likely aren't that helpful early on. To make delta packages sustainable and efficient over the long run, we can make them even better and reduce some wastage. Here are some more ideas on how to make deltas less resource intensive and better for users:

  • As we delta each file individually, we could easily use two or more threads to speed up caching time. Using this package as an example, two threads would reduce the caching time to 334ms, the time the largest file took to recreate plus the time to extract the payload. Now the delta takes less time and CPU to cache than the full package!
  • zstd gives us options to tradeoff some increase in delta size to reduce the resources needed to create delta packages. This testing was performed with --ultra -22 --zstd=chainLog=30 which is quite slow, but produces the smallest files. Even reducing the compression level to -16 --long didn't result in a large increase in delta size.
  • We always have the option to not create deltas for small packages to ease the burden, but in reality the biggest overhead is created from large files.
  • When creating deltas, you typically generate them against multiple prior releases. We can use smart criteria for when to stop generating deltas from earlier releases, for instance if they save less than 10% of the total size or less than 100KB. A delta against an earlier release will almost always be larger than one against a more recent release.

Even Better than These Numbers Suggest

While the numbers included have been nothing short of remarkable, they don't quite reflect how good this approach will be. The results shown lack some of the key advantages of our delta approach such as excluding files that are unchanged between the two packages. Other things that will show better results are:

  • When package updates include minimal changes between versions (and smaller files), we would expect the average package to be much closer in elapsed time than indicated in these results.
  • A quick test using a delta of two package builds of the same source version resulted in a 13MB delta (rather than the 28.6MB delta we had here). On top of that it took 62% fewer CPU instructions and less time (295ms) than the full package to extract (349ms) without resorting to multi-threading.
  • Delta creation of the average package will be quite fast where the largest files are <30MB. In our example, one file is 76% of the package size (126MB) but can take nearly 95% of the total creation time!
  • We will be applying features to our packages that will reduce file sizes (such as the much smaller RELR relocations and identical code folding), making the approach even better, but that will be for a future blog post.

Can Hardly Contain Myself, Plus a Bonus

One of the core steps for building a package is setting up a minimal environment with only the required (and stated) dependencies. Currently we have been building our stones in a systemd-nspawn container, where the root contains every package that's been built so far. This makes the environment extremely difficult to reproduce!

Today we announce moss-container, a simple but flexible container creator that we can integrate for proper containerized builds.

Versatile For Many Use Cases

Containers have a multitude of uses for a Linux distro, but our immediate use case is for reproducible container builds for boulder. However, we have plans to use moss-container for testing, validation and benchmarking purposes as well. Therefore it's important to consider all workloads, so features like fakeroot and networking can be toggled on or off depending on what features are needed.

moss-container takes care of everything: the device nodes in the /dev tree, mounting directories as tmpfs so the environment is left in a clean state, and mounting the /sys and /proc special file-systems. These are essential for a fully functioning container - programs like python and even clang won't work without them. And best of all, it's very fast, so it fits in well with the rest of our tooling!
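
To give a feel for what that involves, here's roughly the by-hand equivalent of the environment moss-container prepares - a simplified sketch, not its actual code, with $ROOT standing in for the target rootfs:

    # Special filesystems needed for a functional build environment.
    mount -t proc  proc  "$ROOT/proc"
    mount -t sysfs sysfs "$ROOT/sys"
    mount -t tmpfs tmpfs "$ROOT/tmp"

    # A minimal set of device nodes (standard kernel major/minor numbers).
    mknod -m 666 "$ROOT/dev/null"    c 1 3
    mknod -m 666 "$ROOT/dev/zero"    c 1 5
    mknod -m 666 "$ROOT/dev/random"  c 1 8
    mknod -m 666 "$ROOT/dev/urandom" c 1 9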

The next step is integrating moss-container into boulder, so that builds become reproducible across machines, and makes it much easier for users to run builds on their host machines.

moss Now Understands Repositories

Previously (though not covered on the blog), work was also done on moss so that it can understand and fetch stone packages from an online repo. This ties in nicely with the moss-container work and is a requirement for finishing up a proper build process for Serpent OS. We are now one step closer to having a full distribution cycle, from building packages to pushing those packages out as system updates!

Container with functioning device nodes

Check Out The Development

In case you've missed it, ikey has been streaming some of the development of the tooling on his Twitch channel. DLang is not as commonly used as other languages, so check it out to see the advantages it brings. Streams are typically announced on Twitter, so give him a follow to see when he next goes live!

Bonus Content Refresh

This year we've had a considerable number of new visitors and interest in Serpent OS. Unfortunately the content on the website had been a bit stale and untouched for some time. There was some confusion and misunderstanding over some of the terms and content. Some of the common issues were:

  • Subscriptions is a loaded term relating to software
  • Subscriptions only referred to a fraction of the smart features
  • Seemed targeted at advanced users with too many technical terms
  • Lack of understanding around what moss and boulder do
  • That features would add complexity when the tools were actually removing the complexity

The good news is that a good chunk of it has been redone, including two new pages for our core tools boulder and moss. Subscriptions has been renamed to Smart System Management to reflect its broader nature (which you can read about here).

Much of the content has also had a refresh or a rewrite, so if you've seen it before, it will likely be a lot easier to digest now. But this isn't the final state of the content, as more features will need to be added and there are still a few rough edges (and I like to rewrite things every once in a while). Many ideas have been raised by our community in the matrix channel, so a shout-out to the good folks we have hanging out there.

Performance Corner: Small Changes Pack a Punch

Here we have another round of changes to make packages smaller and show just how much we care about performance and efficiency! Today we are focusing mainly on moss-format changes to reduce the size of its payloads. The purpose of these changes is to reduce the size of packages, DB storage for transactions and memory usage for moss. These changes were made just before we moved out of bootstrap, so we wouldn't have to rebuild the world after changing the format. Let's get started!

Squeezing Out the Last Blit of Performance

Blitting in Serpent OS is the process of setting up a root, using hardlinks of files installed in the moss store to construct the system /usr directory. Some initial testing showed buildPath using about 3% of the total events when installing a package with many files. As a system is made up of over 100,000 files, that's a lot of paths that need to be calculated! Performance is therefore important to minimize time taken and power consumed.

After running a few benchmarks of buildPath, there were a few tweaks which could improve the performance. Rather than calculating the full path of each file, we can reuse the constant root path for each file instead, reducing buildPath events by 15%. Here we see some callgrind data from this small change (as it was difficult to pick up in the noise).

Before:
   48,766,347 ( 1.50%)  /usr/include/dlang/ldc/std/path.d:pure nothrow @safe immutable(char)[] std.path.buildPath
   34,241,805 ( 1.05%)  /usr/include/dlang/ldc/std/utf.d:pure nothrow @safe immutable(char)[] std.path.buildPath
   14,363,684 ( 0.44%)  /usr/include/dlang/ldc/std/range/package.d:pure nothrow @safe immutable(char)[] std.path.buildPath

After:
   40,357,800 ( 1.25%)  /usr/include/dlang/ldc/std/path.d:pure nothrow @safe immutable(char)[] std.path.buildPath
   29,523,966 ( 0.91%)  /usr/include/dlang/ldc/std/utf.d:pure nothrow @safe immutable(char)[] std.path.buildPath
   12,814,283 ( 0.40%)  /usr/include/dlang/ldc/std/range/package.d:pure nothrow @safe immutable(char)[] std.path.buildPath
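
For anyone wanting to reproduce this kind of measurement, the numbers come from the standard valgrind tooling, along these lines (the moss invocation and package name are illustrative):

    # Profile an install under callgrind, then summarise the buildPath hits.
    valgrind --tool=callgrind moss install nano-5.5.stone
    callgrind_annotate callgrind.out.* | grep buildPath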

Only Record What's Needed!

Our Index Payload is used for extracting the Content Payload into its hashed file names. As it has only one job (and is then discarded), we could home in on including only the information we need. Previously an entry was a 32 byte key plus a 33 byte hash string. We have now integrated the hash as a ubyte[16] field and cut other unneeded fields so that the whole entry fits into 32 bytes. That's a 51% reduction in the size of the Index Payload, and about 25% smaller when compressed.

Before: Index [Records: 3688 Compression: Zstd, Savings: 60.61%, Size: 239.72 KB]
After:  Index [Records: 3688 Compression: Zstd, Savings: 40.03%, Size: 118.02 KB]

Too Many Entries Makes for a Large Payload

One of the bugbears about the Layout Payload was the inclusion of directory paths for every single directory. This is handy in that directories can be created easily before any files are included (to ensure they exist), but it comes at a price. The issue was twofold: the extra entries made inspecting the file list take longer and also made the Layout Payload larger than it needed to be. Therefore directories are no longer included as Layout entries, with the exception of empty directories and directories with different permissions. Let's compare nano and glibc builds before and after the change:

nano
Before: Layout [Records: 174 Compression: Zstd, Savings: 80.58%, Size: 13.78 KB]
After:  Layout [Records:  93 Compression: Zstd, Savings: 73.49%, Size:  9.15 KB]

glibc
Before: Layout [Records: 7879 Compression: Zstd, Savings: 86.88%, Size: 755.00 KB]
After:  Layout [Records: 6813 Compression: Zstd, Savings: 86.24%, Size: 689.20 KB]

A surprisingly large impact from a small change, with 1,000 fewer entries for glibc and cutting the Layout Payload size of nano by a third. What it shows is that it's hugely beneficial where locales are involved and % size reduction increases where you have fewer files. To give an example of how bad it could be, the KDE Frameworks package ktexteditor would have produced 300 entries in the Layout Payload, where 200 of those would have been for directories! I'd estimate a 50% reduction in the Layout Payload size for this package! Here's an example of how a locale file used to be stored (where only the last line is added now).

  - /usr/share/locale [Directory]
  - /usr/share/locale/bg [Directory]
  - /usr/share/locale/bg/LC_MESSAGES [Directory]
  - /usr/share/locale/bg/LC_MESSAGES/nano.mo -> ec5b82819ec2355d4e7bbccc7d78ce60 [Regular]

This will be exceptionally useful for keeping the LayoutDB slimmer and faster as the number of packages grows.

Cutting Back 1 Byte at a Time

Next we stopped recording the timestamps of files, which for reproducible builds are often of little relevance anyway, as you have to force them all to a fixed value. As moss de-duplicates files, there's a second issue where two packages could have different timestamps for the same hash. Therefore it was considered an improvement to simply exclude timestamps altogether. This improved install time, as we no longer overwrite the timestamps, and made the payload more compressible by replacing the field with 8 bytes of padding. Unfortunately we weren't quite able to free up 16 bytes to reduce the size of each entry, but that will be something to pursue in future.

Another quick improvement was reducing the length of the path stored for each entry. moss creates system roots and switches between them by changing the /usr symlink. Therefore, all system files need to be installed into /usr or they will not make up part of your base system. That means we have no need to store /usr in the Payload, so we strip /usr/ from the paths (the extra / gives us another byte off!) and recreate it on install. This improved the uncompressed size of the Payload, with only a minor reduction when compressed.

Combined these result in about a 5% decrease in the compressed and uncompressed size of the Layout Payload.

Before: Layout [Records: 6812 Compression: Zstd, Savings: 86.24%, Size: 689.10 KB]
After: Layout [Records: 6812 Compression: Zstd, Savings: 86.20%, Size: 655.00 KB]

Storing the Hash Efficiently

Using the same method as the Index Payload, we are now storing the hash as ubyte[16], but not directly in the Payload Entry. This gives us a sizeable reduction of 17 bytes per entry, which is the most significant of all the Layout Payload changes. As the removed data compressed well anyway, this only resulted in a small reduction in the compressed Payload size.

Before: Layout [Records: 6812 Compression: Zstd, Savings: 86.20%, Size: 655.00 KB]
After: Layout [Records: 6812 Compression: Zstd, Savings: 83.92%, Size: 539.26 KB]

Planning payload changes

Hang On, Why am I Getting Faster Installation?

As a side-effect of the small code rewrites needed to implement these changes, we've seen a nice decrease in the time to install packages. There are fewer entries to iterate over with the removal of directories, and buildPath is now only called once for each path. It goes to show that focusing on the small details leads to less, more efficient and faster code. We have now essentially halved the number of events related to buildPath, with all the changes resulting in about a 5% reduction in install time. Note that for this test, over 80% of the time is spent in zstd decompressing the package, which we haven't optimized (yet!). Here's another look at the buildPath numbers factoring in all the changes:

Before:
   48,766,347 ( 1.50%)  /usr/include/dlang/ldc/std/path.d:pure nothrow @safe immutable(char)[] std.path.buildPath
   34,241,805 ( 1.05%)  /usr/include/dlang/ldc/std/utf.d:pure nothrow @safe immutable(char)[] std.path.buildPath
   14,363,684 ( 0.44%)  /usr/include/dlang/ldc/std/range/package.d:pure nothrow @safe immutable(char)[] std.path.buildPath

After:
   25,145,625 ( 0.79%)  /usr/include/dlang/ldc/std/path.d:pure nothrow @safe immutable(char)[] std.path.buildPath
   17,967,216 ( 0.57%)  /usr/include/dlang/ldc/std/utf.d:pure nothrow @safe immutable(char)[] std.path.buildPath
    7,761,417 ( 0.24%)  /usr/include/dlang/ldc/std/range/package.d:pure nothrow @safe immutable(char)[] std.path.buildPath

Summing it All Up

It was a pretty awesome weekend of work (a few weeks ago now), making some quick changes that will improve Serpent OS a great deal for a long time to come. This also means we have integrated all the quick format changes so we won't have to rebuild packages while bringing up the repos.

Here's a quick summary of the results of all these small changes:

  • 51% reduction in the uncompressed size of the Index Payload
  • 25% reduction in the compressed size of the Index Payload
  • 29-50% reduction in the uncompressed size of the Layout Payload (much more with fewer files and more locales)
  • 12-15% reduction in the compressed size of the Layout Payload
  • 5% faster package installation for our benchmark (800ms to 760ms)

These are some pretty huge numbers and even excluded the massive improvements we made in the previous blog!

What if We Include the Previous Changes?

I'm glad you asked, cause I was curious too! Here we see a before and after with all the changes included. For the Layout Payload we see a ~45% reduction in the compressed and uncompressed size. For the Index Payload we have reduced the uncompressed size by 67% and the compressed size by 56%. Together resulting in halving the compressed and uncompressed size of the metadata of our stone packages!

Before:
Payload: Layout [Records: 5441 Compression: Zstd, Savings: 83.13%, Size: 673.46 KB] (113.61 KB compressed)
Payload: Index [Records: 2550 Compression: Zstd, Savings: 55.08%, Size: 247.35 KB] (111.11 KB compressed)

After:
Payload: Layout [Records: 4718 Compression: Zstd, Savings: 83.62%, Size: 374.42 KB] (61.33 KB compressed)
Payload: Index [Records: 2550 Compression: Zstd, Savings: 39.85%, Size: 81.60 KB] (49.01 KB compressed)

Now we proceed towards bringing up repos to enjoy our very efficient packages!

Out of the Bootstrap - Towards Serpent OS

The initial stone packages that will seed the first Serpent OS repo have now been finalized! This means that work towards setting up the infrastructure for live package updates begins now. We plan on taking time to streamline the processes, with a focus on fixing issues at the source. In this way we can make packaging fast and efficient, so we can spend time working on features rather than package updates.

The End of Bootstrap

Bootstrapping a distribution involves building a new toolchain and the many packages needed to support it. For us, bootstrap meant getting to the point where we had built stone packages that we could use to start an initial repository with fully working dependencies. This has been enabled by integrating dependencies into moss and creating the first repo index. Of note is that 32-bit support is already enabled, so we have you covered there. While this is the end of bootstrap, the fun has only just begun!

The first install from the bootstrap

Where to Next?

The next goal is to make Serpent OS self-hosting, where we can build packages in a container and update the repo index with the newly built packages - essentially a live repository accessible from the internet. There's still plenty of improvement to be made to the tooling, but this will soon enable more users to participate in bringing Serpent OS to fruition.

Becoming More Inclusive

While there's a strong focus in Serpent OS on performance, the decision has been made to lower the initial requirements for users. Despite AVX2 being an older technology, there are still computers sold today that don't support it. Because of this (and already having interested users who don't have newer machines), the baseline requirement for Serpent OS will be x86_64-v2, which only requires SSE4.2.
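
If you're wondering where your own machine lands, a reasonably recent glibc dynamic loader will report which microarchitecture levels it considers supported (the loader path may differ between distros):

    # Lines ending in "(supported, searched)" indicate levels your CPU meets.
    /lib64/ld-linux-x86-64.so.2 --help | grep -E 'x86-64-v[234]'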

It was always the plan to add support for these machines, just later down the track. In reality, this makes a lot more sense, as there will be many cases where building 2 versions of a package provides little value. This is where a package takes a long time to build and doesn't result in a notable performance improvement. We will always need the x86_64-v2 version of a package to be compatible with the older machines. With this approach we can reduce the build server requirements without a noticeable impact to users as only a few packages you use will be without extra optimizations (and probably don't benefit from them anyway).

I want to make it clear that this will be temporary, with impactful x86_64-v3+ packages rolling out as soon as possible. This change paves the way for our technology stack to take care of your system for you, and increases its priority. Users meeting the requirements of the x86_64-v3+ instruction set (this includes additional instructions beyond x86_64-v3) will automatically start installing these faster packages as they become available. Our subscriptions model will seamlessly take care of everything behind the scenes, so you don't need to read a wiki or forum to learn how to get these faster packages. We can utilize the same approach in future for our ARM stack, offering more optimized packages where they provide the most benefit.

Note that from the bootstrap, most packages built in under 15s and only three took longer than 2 minutes.

Trying Out Some Tweaks From the Get Go

The early days of a project are a great time to test out new technologies. The use of clang and lld opens up new possibilities to reduce file sizes, shorten load times and decrease memory usage. Some of these choices may have impacts on compatibility, so testing them out will be the best way to understand that. Making sure that you can run apps like Steam is vital to the experience, so whatever happens we will make sure it works. The good news is that, due to the unique architecture of Serpent OS, we can push updates that break compatibility with just a reboot - so if we ever feel the need to change the libc, we can make the change without you having to reinstall! More importantly, we can test major stack updates by rebooting into a staged update and go straight back to the old system, regardless of your file system.

Speed Packaging!

In the early days of the repository, tooling that makes creating new packages as simple as possible is vital for efficiency. Spending some time automating as much of the process as possible will take weeks off bringing up Serpent OS. Making packaging as easy as possible will also help users when creating their own packages. While it would be faster to work around issues, the build tooling upgrades will benefit everyone.

The other way we'll be speeding up the process is by holding back some of the tuning options by default. LTO for instance can result in much longer build times so will not initially be the default. The same is true for debug packages, where it slows down progress without any tangible benefit.

Things Are Happening!

We hope you are as excited as we are!

It All Depends

It all depends.. it really does. On shared libraries.. interpreters.. pkg-config providers and packages. It's the same story for all "package managers": how do we ensure that the installed software has everything it needs at runtime?

Our merry crew has been involved in designing and building Linux distributions for a very, very long time, so we didn't want to simply repeat history.

Using updated moss

Thanks to many improvements across our codebase, including moss-deps, we automatically analyse the assets in a package (rapidly too) to determine any dependencies we can add without requiring the maintainer to list them. This is similar in nature to RPM's behaviour.

As such we encode dependencies into our (endian-aware, binary) format which is then stored in the local installation database. Global providers are keyed for quick access, and the vast majority of packages will not explicitly depend on another package's name, rather, they'll depend on a capability or provider. For subpackage dependencies that usually depend on "NEVRA" equality (i.e. matching against a name with a minimum version, release, matching epoch and architecture), we'll introduce a lockstep dependency that can only be resolved from its origin source (repo).

Lastly, we'll always ensure there is no possibility for "partial update" woes. With these considerations, we have no need to support >= style dependencies, and instead rely on strict design goals and maintainer responsibility.

First and foremost!

The rapid move we're enjoying from concept, to prototype, and soon to a fully fledged Linux distribution, is only possible with the amazing community support. The last few months have seen us pull off some amazing feats, and we're now executing on our first public milestones. With your help, more and more hours can be spent getting us ready for release, and it would probably help to insulate my shed office! (Spoiler: it's plastic and electric heaters are expensive =))

The Milestones

We have created our initial milestones that are quite literally our escape trajectory from bootstrap to distro. We're considerably closer now, hence this open announcement.

v0.0: Container (systemd-nspawn)

Our first release medium will be a systemd-nspawn compatible container image. Our primary driver for this is to allow us to add encapsulation for our build tool, boulder, permitting us to create distributable builder images to seed our infrastructure and first public binary repository.

v0.1: Bootable "image"

Once our build infra is up and running (honestly a lot of work has been completed for this in advance) we'll work towards our first 0.1 image. This will initially target VM usage, with a basic console environment and tooling (moss, boulder, etc).

And then..

We have a clear linear path ahead of us, with each stage unlocking the next. During the development of v0.0 and v0.1 we'll establish our build and test infrastructure, and begin hosting our package sources and binaries. At this point we can enter a rapid development cycle with incremental, and considerable improvements. Such as a usable desktop experience and installer.. :)

Recent changes

I haven't blogged in quite a while, as I've been deep in the trenches working on our core features. As we've expressed before, we tend to work on the more complex systems first and then glue them together afterwards to form a cohesive whole. The last few days have involved plenty of glue, and we now have distinct package management features.

Refactor

  • Replaced defunct InstallDB with reusable MetaDB for local installation of archives as well as forming the backbone of repository support.
  • Added ActivePackagesPlugin to identify installed packages
  • Swapped non cryptographic hash usage with xxhash

Dependencies

  • Introduced new Transaction type to utilise a directed acyclic graph for dependency solving.
  • Reworked moss-deps into plugins + registry core for all resolution operations.
  • Locally provided .stone files handled by CobblePlugin to ensure we depsolve from this set too.
  • New Transaction set management for identifying state changes and ensuring full resolution of target system state.
  • Shared library and interpreter (DT_INTERP) dependencies and producers automatically encoded into packages and resolved by depsolver.

Package Installation

We handle locally provided .stone packages passed to the install command identically to those found in a repository. This eliminates a lot of special casing for local archives and allows us to find dependencies within the provided set, before looking to the system and the repositories.
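
In practice that means a local archive and its locally provided dependencies can simply be handed to the same command (file names here are purely illustrative):

    # Local .stone archives are resolved together with the system and any
    # configured repositories, so dependencies within the set are honoured.
    moss install ./nano-5.5.stone ./file-5.4.stone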

Install

Dependency resolution is now performed for package installation and is validated at multiple points, allowing a package like nano to depend on compact automatic dependencies:

    Dependency(DependencyType.SharedLibraryName, "libc.so.6(x86_64)");

Note our format and database are binary and endian aware. The dependency type only requires 1 byte of storage and no string comparisons.

List packages

Thanks to the huge refactor, we can now trivially access the installed packages as a list. This code will be reused for a list available command in future.

Example list installed output:

                   file (5.4) - File type identification utility
             file-32bit (5.4) - Provides 32-bit runtime libraries for file
       file-32bit-devel (5.4) - Provides development files for file-32bit
             file-devel (5.4) - Development files for file
                   nano (5.5) - GNU Text Editor

ListInstalled

Inspect archives

For debugging and development purposes, we've moved our old "info" command to a new "inspect" command to work directly on local .stone files. This displays extended information on the various payloads and their compression stats.

For general users - the new info command displays basic metadata and package dependencies.

Info

Package Removal

Upon generating a new system state, "removed" packages are simply no longer installed, so no live mutation is performed. As of today we can now request the removal of packages from the current state, which generates a new, filtered state. Additionally, we remove all reverse dependencies, both direct and transitive. This is accomplished by utilising a transposed copy of the directed acyclic graph, identifying the relevant subgraph and occluding that set from the newly generated state.

Remove

Lastly

The past few weeks have been especially enjoyable. I've truly had a fantastic time working on the project and cannot wait for the team and I to start offering our first downloads, and iterate as a truly new Linux distribution that borrows some ideas from a lot of great places, and fuses them into something awesome.

Keep toasty - this train isn't slowing down.

Performance Corner: Faster Builds, Smaller Packages

Performance Corner is a new series where we highlight some changes in Serpent OS that may not be obvious, but show a real improvement. Performance is a broad term that also covers efficiency, so this includes things like making files smaller, making things faster or reducing power consumption - in general, things that are unquestionably improvements with little or no downside. While the technical details may be of interest to some, the main purpose is to highlight the real benefit to users and/or developers that will make using Serpent OS a more enjoyable experience. Show me the numbers!

Here we focus on a few performance changes Ikey has been working on to the build process that are showing some pretty awesome results! If you end up doing any source builds, you'll be thankful for these improvements. Special thanks to ermo for the research into hash algorithms and enabling our ELF processing.

Start From the Beginning

When measuring changes, it's always important to know where you're starting from. Here are some results from a recent glibc build, but before these latest changes were incorporated.

Payload: Layout [Records: 5441 Compression: Zstd, Savings: 83.13%, Size: 673.46 KB]
Payload: Index [Records: 2550 Compression: Zstd, Savings: 55.08%, Size: 247.35 KB]
Payload: Content [Records: 2550 Compression: Zstd, Savings: 81.46%, Size: 236.72 MB]
 ==> 'BuildState.Build' finished [4 minutes, 6 secs, 136 ms, 464 μs, and 7 hnsecs]
 ==> 'BuildState.Analyse' finished [21 secs, 235 ms, 300 μs, and 2 hnsecs]
 ==> 'BuildState.ProducePackages' finished [25 secs, 624 ms, 996 μs, and 8 hnsecs]

The build time is a little high, but a lot of that is due to a slow compiler on the host machine. Analysing and producing packages, however, were also taking a lot longer than they needed to.

The Death of moss-jobs in boulder

In testing an equivalent build outside of boulder, the build stages were about 5% faster. Profiling under perf showed that the jobs system was excessive for boulder's needs: it polled for work even though we already know exactly when parallel jobs are useful. Removing moss-jobs allowed for simpler codepaths using the multiprocessing facilities of the core language. This work is integrated into moss-deps, and the excess overhead of the build has now been eliminated.

Before:
 ==> 'BuildState.Build' finished [4 minutes, 6 secs, 136 ms, 464 μs, and 7 hnsecs]
 ==> 'BuildState.Analyse' finished [21 secs, 235 ms, 300 μs, and 2 hnsecs]
After:
[Build] Finished: 3 minutes, 53 secs, 386 ms, 306 μs, and 4 hnsecs
[Analyse] Finished: 8 secs, 136 ms, 22 μs, and 8 hnsecs

The new results reflect a 26s reduction in overall build time, but only 13s of that comes from the moss-jobs removal. The other major change is making the analyse stage parallel in moss-deps (a key reason we wanted parallelism to begin with). Decreasing that stage from 21.2s to 8.1s is a great result, especially as it is now doing more work: ELF scanning for dependency information was added in between these two measurements.
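The analyser itself lives in moss-deps and is written in D; purely as a sketch of the shape of the change, mapping per-file analysis over a process pool, instead of running a generic job system that polls for work, looks roughly like this:

    import concurrent.futures
    from pathlib import Path

    def analyse(path):
        """Hypothetical per-file analysis: read the file and note whether it looks like ELF."""
        data = path.read_bytes()
        return {"path": str(path), "elf": data[:4] == b"\x7fELF", "size": len(data)}

    def analyse_tree(root):
        files = [p for p in Path(root).rglob("*") if p.is_file()]
        # We already know the point where parallelism helps, so we simply map over the
        # file list rather than running a job system that polls for work.
        with concurrent.futures.ProcessPoolExecutor() as pool:
            return list(pool.map(analyse, files, chunksize=32))

    if __name__ == "__main__":
        results = analyse_tree("install")   # e.g. the package's install root
        print(len(results), "files analysed")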

New Hashing Algorithm

One of the unique features in moss is using hashes for file names which allows full deduplication within packages, the running system, previous system roots and for source builds with boulder. Initially this was hooked up using sha256, but it was proving to be a bit of a slowdown.

Enter xxhash, the hash algorithm by Yann Collet used in fast compression software such as lz4 and zstd (and now in many other places!). It is seriously fast, with the potential to produce hashes faster than RAM can feed the CPU. The hash is merely used as a unique identifier in the context of deduplication, not a cryptographic verification of origin. XXH3 (128-bit) was chosen as it has a near-zero probability of a collision across tens of millions of files.
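For a feel of the difference, here's a small comparison assuming the third-party xxhash Python module (the real tooling uses D bindings); note that the XXH3 digest is half the length of sha256:

    import hashlib
    import xxhash   # third-party package providing XXH3 bindings

    data = open("/usr/bin/nano", "rb").read()   # any reasonably large file will do

    sha = hashlib.sha256(data).hexdigest()      # 64 hex chars (256 bits)
    xxh = xxhash.xxh3_128(data).hexdigest()     # 32 hex chars (128 bits)

    print(len(sha), sha)
    print(len(xxh), xxh)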

The benefit is actually two-fold. First, the hash length is halved compared to sha256, so there are savings in the package metadata. This shouldn't be understated, as hash data is generally less compressible than typical text, and some packages contain a lot of files! Here the metadata for the Layout and Index payloads has shrunk by 232KB, about a 25% reduction with no other changes.

Before:
Payload: Layout [Records: 5441 Compression: Zstd, Savings: 83.13%, Size: 673.46 KB]
Payload: Index [Records: 2550 Compression: Zstd, Savings: 55.08%, Size: 247.35 KB]
After:
Payload: Layout [Records: 5441 Compression: Zstd, Savings: 86.66%, Size: 522.97 KB]
Payload: Index [Records: 2550 Compression: Zstd, Savings: 60.02%, Size: 165.75 KB]

Compressed, this turns out to be about an 89KB reduction in package size. For larger packages this probably doesn't mean much, but it could help a lot more with delta packages. For deltas we will be including the full metadata of the Layout and Index payloads, so the difference will be more significant there.

The other benefit, of course, is speed, and the numbers speak for themselves! A further 6.4s reduction in build time removes most of the delay at the end of the build while the final package is produced. This will also improve speeds when caching or validating a package.

Before:
 ==> 'BuildState.Analyse' finished [21 secs, 235 ms, 300 μs, and 2 hnsecs]
After:
[Analyse] Finished: 1 sec, 688 ms, 681 μs, and 8 hnsecs

With these changes combined, the analyse stage can be around 12x faster, while the metadata and the overall package are smaller. We do expect the analyse time to increase in future as we add more dependency types, debug handling and stripping, but with the integrated parallel model we can minimise any increase.

Building

We're Not Done Yet

The first installment of Performance Corner shows some great wins for the Serpent OS tools and architecture. This is just the beginning: there will likely be a follow-up soon (you may have also noticed that it takes too long to make the packages), and there are a couple more tweaks coming to further decrease the size of the metadata. Kudos to Ikey for getting these implemented!

Optimal File Locality

File locality in this post refers to the order of files in our content payload. Yes, that's right: we're focused on the small details and incremental improvements that, combined, add up to significant benefits! All of this came about from testing the efficiency of the content payload in moss-format and how well it compared against a plain tarball. One day boulder was looking extremely inefficient; retesting the following day, it proved extremely efficient, without any changes made to boulder or moss-format. What on Earth was going on?

Making Sure you Aren't Going Crazy!

To test the efficiency of our content payload, the natural choice was to compare it to a tarball containing the same files. When first running the test, the results were quite frankly awful: our payload was 10% larger than the equivalent tarball! It was almost unbelievable, so the following day I repeated the test, and this time the content payload was smaller than the tarball. This didn't actually make sense; I made the tarball with the same files and only changed the directory it was created from. Does that really matter?

File Locality Really Matters!

Of course it does (otherwise this would be a pretty crappy blog post!). Extracting a .stone package creates two directories: mossExtract, where files are stored under their sha256sum names, and mossInstall, where those files are hardlinked to their full path names. On the first day I created the tarball from mossInstall; on the second day I realised that creating it from mossExtract would be the closest match to the content payload, since that makes for a direct comparison. When compressing the tarballs at the same level as the .stone, the tarball created from mossInstall was 10% smaller, despite being slightly larger uncompressed.
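Roughly speaking (directory names as described above, everything else invented for illustration), the layout can be reproduced by storing each file once under its hash and hardlinking its real path back to that store entry:

    import hashlib, os
    from pathlib import Path

    def place(content, install_path, extract_dir="mossExtract", install_dir="mossInstall"):
        """Store content once under its hash, then hardlink the real path to it."""
        digest = hashlib.sha256(content).hexdigest()
        store = Path(extract_dir) / digest
        store.parent.mkdir(parents=True, exist_ok=True)
        if not store.exists():                       # identical files share one store entry
            store.write_bytes(content)
        target = Path(install_dir) / install_path.lstrip("/")
        target.parent.mkdir(parents=True, exist_ok=True)
        os.link(store, target)                       # hardlink: no extra disk space used

    place(b"#!/bin/sh\necho hi\n", "/usr/bin/hello")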

Compression Wants to Separate Apples and Oranges

In simplistic terms, compression works by comparing the data it's currently reading against data it has read earlier in the file. zstd has some great options like --long, which increases the distance over which these matches can be made at the cost of increased memory use. To limit memory use while keeping compression and decompression fast, it takes shortcuts that reduce the compression ratio. For optimal compression, you want files that are most similar to each other to be as close together as possible: you won't get as many matches between a text file and an ELF file as you will between two similar-looking text files.
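As a toy demonstration using the Python zstandard bindings (nothing to do with moss itself), identical data compresses very differently depending on whether the duplicate sits inside or outside the match window:

    import os, zstandard

    chunk  = os.urandom(512 * 1024)          # 512 KiB blob that appears twice (think duplicate locale data)
    filler = os.urandom(4 * 1024 * 1024)     # 4 MiB of unrelated, incompressible data

    together = chunk + chunk + filler        # similar data adjacent
    apart    = chunk + filler + chunk        # similar data separated by several MiB

    cctx = zstandard.ZstdCompressor(level=3)
    print("together:", len(cctx.compress(together)))
    print("apart:   ", len(cctx.compress(apart)))
    # At moderate levels zstd only looks a couple of MiB back for matches, so the second
    # copy of `chunk` only compresses away when it sits close to the first; --long (or a
    # higher level) widens that window at the cost of memory.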

Spot the Difference

Files in mossExtract are listed in sha256sum order, which is essentially random, whereas files in mossInstall are ordered by path. Sorting files by path gives some semblance of grouping, with binaries together in /usr/bin and libraries in /usr/lib. It's by no means a perfect order, but it's a large improvement over a random one (up to 10% in our case!).

Our glibc package has been an interesting test case for boulder: an uncompressed tarball of its install directory comes in at just under 1GB. Because boulder stores files by their sha256sum, it can deduplicate identical files even when the build hasn't used symlinks or hardlinks to avoid the wasted space. In this case, deduplication alone reduced the uncompressed size of the payload by 750MB (that's a lot of duplicate locale data!). In the python package, it removes 1,870 duplicate cache files to reduce the installation size.

As part of the deduplication process, boulder sorted files by sha256sum to remove duplicate hashes; if two files have the same sha256sum, only one copy needs to be stored. It also felt clean, with moss info listing hashes in alphabetical order. But it was having a significant negative impact on package sizes, so it needed to be addressed by re-sorting the files into path order (a simple one-liner), making the content payload more efficient than a tarball once again.
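The fix really is about that small; a hedged sketch of the idea (invented names, not boulder's actual D code) is to deduplicate on the hash but emit the content payload in path order:

    def build_payload_order(files):
        """files: list of (path, hash) pairs for everything in the install root."""
        unique = {}                                    # hash -> first path seen (deduplication)
        for path, digest in sorted(files):             # deterministic walk in path order
            unique.setdefault(digest, path)
        # Emit one entry per unique hash, ordered by the path it was first seen at,
        # rather than by the hash itself (which is effectively random).
        return sorted(unique.items(), key=lambda item: item[1])

    files = [
        ("/usr/bin/foo",           "aa11"),
        ("/usr/lib/libfoo.so.1",   "bb22"),
        ("/usr/share/locale/a.mo", "cc33"),
        ("/usr/share/locale/b.mo", "cc33"),            # duplicate content, stored once
    ]
    for digest, path in build_payload_order(files):
        print(digest, path)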

Compressed payload size in bytes, by file order:

Compression Level    sha256sum Order    File path Order
                1         72,724,389         70,924,858
                6         65,544,322         63,372,056
               12         49,066,505         44,039,782
               16         45,365,415         40,785,385
               19         26,643,334         24,134,820
               22         16,013,048         15,504,806

Testing has shown that higher compression levels (and enabling --long) are more forgiving of a suboptimal file order: sorting by path gave packages that were 3-11% smaller without --long, but only 2-5% smaller once --long was enabled. The table above is without --long, so the difference is larger.

Hang On, Why Don't You...

There's certainly something to this, and sorting by file path is a good first step. In future we can consider creating a more deliberate order for files to improve locality: putting all the ELF, image or text files together in the payload will help shave a bit more off our package sizes, at only the cost of sorting the files. However, we don't want to go crazy here; the biggest impact on package sizes will come from using deltas as the optimal package delivery method (and there will be a follow-up on that approach shortly). The moss-format content payload is quite simple and contains no filenames or paths, so it's effectively costless to switch around the order of files, and we can try out a few things and see what happens.

An Academic Experiment

To prove the value of moss-format and the content payload, I tried out some crude sorting methods and measured their impact on compression of the package. Since you want similar files chunked together, I divided the files into four groups, each still sorted by path order within its chunk (a rough sketch of this grouping follows the list):

  • gz: gzipped files
  • data: non-text files that weren't ELF
  • elf: ELF files
  • text: text files (bash scripts, perl etc)
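A crude way to reproduce that grouping (purely an experiment script, not part of boulder) might look like this:

    from pathlib import Path

    def classify(path):
        """Bucket a file into one of the four experimental groups."""
        data = path.read_bytes()
        if data[:2] == b"\x1f\x8b":
            return "gz"                       # gzip magic bytes
        if data[:4] == b"\x7fELF":
            return "elf"
        try:
            data.decode("utf-8")
            return "text"                     # bash scripts, perl, etc.
        except UnicodeDecodeError:
            return "data"                     # everything else that isn't text or ELF

    GROUP_ORDER = ["gz", "data", "elf", "text"]

    def grouped(root):
        files = sorted(p for p in Path(root).rglob("*") if p.is_file())   # path order first
        return sorted(files, key=lambda p: GROUP_ORDER.index(classify(p)))  # stable: keeps path order within each group

    for p in grouped("mossInstall")[:20]:
        print(p)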

Path order vs optimal order

As the chart shows, you can get some decent improvements from reordering files within the payload when grouping them in logical chunks. At the highest compression level, the package is reduced by 0.83% without any impact on compression or decompression time. In the compression world, such a change would be greatly celebrated!

Also important to note is that just moving the gzipped files to the front of the payload captured 40% of the size improvement at high compression levels, though it gave slightly worse compression at levels 1-4. So simple changes to the order (in this case moving non-compressible files to the edge of the payload) can provide a reduction in size at the higher levels that we care about. We don't want to spend a long time analysing files for a small reduction in package size, so we can start off with basic concepts like this: moving files that don't compress well, such as already compressed files, images and video, to the start of the payload means the remaining files end up closer together. We also need to test a broader range of packages and the impact any changes would have on them.

Food For Thought?

So, to the original question (is moss-format efficient?), the answer is ultimately yes! While there are still things we want to change to make it even better, in its current state package creation was faster and overheads were lower than compressing an equivalent tarball. The compressed tarball at zstd -16 was 700KB larger than the full .stone file (which contains a bit more data than the tarball).

The unique format also proves its worth in that we can make further adjustments to increase performance, reduce memory requirements and reduce package sizes. What this experiment shows is that file order really does matter, but using the basic sorting method of filepath gets you most of the way there and is likely good enough for most cases.

Here are some questions we can explore in future to see whether there's greater value in tweaking the file order:

  • Do we sort ELF files by path order, file name or by size?
  • Does the order of the chunks within the file matter? (i.e. ELF-Images-Text vs Images-Text-ELF)
  • How many categories do we need to segregate and order?
  • Can we sort by extension? (i.e. for images, all the png files will be together and the jpegs together)
  • Do we simply make a couple of obvious changes to order and leave zstd to do the heavy lifting?