

Performance Corner: Small Changes Pack a Punch

Here we have another round of changes to make packages smaller and show just how much we care about performance and efficiency! Today we are focusing mainly on moss-format changes to reduce the size of its payloads. The purpose of these changes is to reduce the size of packages, the DB storage needed for transactions and the memory usage of moss. These changes were made just before we moved out of bootstrap, so we wouldn't have to rebuild the world after changing the format. Let's get started!

Squeezing Out the Last Blit of Performance

Blitting in Serpent OS is the process of setting up a root, using hardlinks of files installed in the moss store to construct the system /usr directory. Some initial testing showed buildPath using about 3% of the total events when installing a package with many files. As a system is made up of over 100,000 files, that's a lot of paths that need to be calculated! Performance is therefore important to minimize time taken and power consumed.

After running a few benchmarks of buildPath, a few tweaks stood out that could improve performance. Rather than recalculating the full path for every file, we can reuse the constant root path instead, reducing buildPath events by 15%. Here's some callgrind data from this small change (as it was difficult to pick up in the noise); a small sketch of the idea follows the numbers.

Before:
   48,766,347 ( 1.50%)  /usr/include/dlang/ldc/std/path.d:pure nothrow @safe immutable(char)[] std.path.buildPath
   34,241,805 ( 1.05%)  /usr/include/dlang/ldc/std/utf.d:pure nothrow @safe immutable(char)[] std.path.buildPath
   14,363,684 ( 0.44%)  /usr/include/dlang/ldc/std/range/package.d:pure nothrow @safe immutable(char)[] std.path.buildPath

After:
   40,357,800 ( 1.25%)  /usr/include/dlang/ldc/std/path.d:pure nothrow @safe immutable(char)[] std.path.buildPath
   29,523,966 ( 0.91%)  /usr/include/dlang/ldc/std/utf.d:pure nothrow @safe immutable(char)[] std.path.buildPath
   12,814,283 ( 0.40%)  /usr/include/dlang/ldc/std/range/package.d:pure nothrow @safe immutable(char)[] std.path.buildPath
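As a rough illustration of the idea (hypothetical names, not the actual moss code), the constant root prefix can be computed once rather than being rebuilt for every file:

    import std.path : buildPath;

    /* Hypothetical sketch only: compute the invariant root prefix once,
     * then join just the per-file part for each entry. */
    string[] resolveAll(string storeRoot, string[] relativePaths)
    {
        immutable prefix = buildPath("/", storeRoot);
        string[] full;
        full.reserve(relativePaths.length);
        foreach (rel; relativePaths)
        {
            full ~= buildPath(prefix, rel);
        }
        return full;
    }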

Only Record What's Needed!

Our Index Payload is used for extracting the Content Payload into its hashed file names. As it has only one job (and is then discarded), we could home in on including only the information we actually need. Previously an entry used a 32 byte key and stored a 33 byte hash. We have now integrated the hash as a ubyte[16] field and cut other unneeded fields so that the whole entry fits into 32 bytes. That's a 51% reduction in the uncompressed size of the Index Payload and about 25% smaller when compressed; a sketch of such a packed entry follows the numbers below.

Before: Index [Records: 3688 Compression: Zstd, Savings: 60.61%, Size: 239.72 KB]
After:  Index [Records: 3688 Compression: Zstd, Savings: 40.03%, Size: 118.02 KB]
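For a sense of what fitting an entry into 32 bytes looks like, here's a hedged sketch of a packed entry with the 16 byte hash inline; the field names are illustrative rather than the real moss-format definition.

    /* Illustrative packed entry, not the actual moss-format struct. */
    struct IndexEntry
    {
    align(1):
        ulong start;      /* offset of the file's data in the content blob */
        ulong end;        /* end offset */
        ubyte[16] digest; /* hash stored as raw bytes, not a hex string */
    }
    static assert(IndexEntry.sizeof == 32);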

Too Many Entries Makes for a Large Payload

One of the bugbears about the Layout Payload was the inclusion of directory paths for every single directory. This is handy in that directories can be created easily before any files are written (to ensure they exist), but it comes at a price. The issue was twofold: the extra entries made inspecting the file list take longer, and they made the Layout Payload larger than it needed to be. Therefore directories are no longer included as Layout entries, with the exceptions of empty directories and directories with different permissions. Let's compare nano and glibc builds before and after the change:

nano
Before: Layout [Records: 174 Compression: Zstd, Savings: 80.58%, Size: 13.78 KB]
After:  Layout [Records:  93 Compression: Zstd, Savings: 73.49%, Size:  9.15 KB]

glibc
Before: Layout [Records: 7879 Compression: Zstd, Savings: 86.88%, Size: 755.00 KB]
After:  Layout [Records: 6813 Compression: Zstd, Savings: 86.24%, Size: 689.20 KB]

A surprisingly large impact from a small change: over 1,000 fewer entries for glibc, and the Layout Payload of nano cut by a third. It shows the change is hugely beneficial where locales are involved, and the percentage reduction grows for packages with fewer files. To give an example of how bad it could be, the KDE Frameworks package ktexteditor would have produced 300 entries in the Layout Payload, where 200 of those would have been for directories! I'd estimate a 50% reduction in the Layout Payload size for that package. Here's an example of how a locale file used to be stored (only the last line is added now):

  - /usr/share/locale [Directory]
  - /usr/share/locale/bg [Directory]
  - /usr/share/locale/bg/LC_MESSAGES [Directory]
  - /usr/share/locale/bg/LC_MESSAGES/nano.mo -> ec5b82819ec2355d4e7bbccc7d78ce60 [Regular]

This will be exceptionally useful for keeping the LayoutDB slimmer and faster as the number of packages grows.

Cutting Back 1 Byte at a Time

Next we stopped recording file timestamps, which for reproducible builds are of little relevance anyway, as you have to force them all to a fixed value. As moss de-duplicates files, there's a second issue where two packages could have different timestamps for the same hash. Therefore it was considered an improvement to simply exclude timestamps altogether. This improved install time, as we no longer overwrite timestamps, and made the payload more compressible since the field is replaced with 8 bytes of padding. Unfortunately we weren't quite able to free up 16 bytes to shrink each entry, but that will be something to pursue in future.

Another quick improvement was reducing the length of the path stored for each entry. moss creates system roots and switches between them by changing the /usr symlink, so all system files need to be installed into /usr or they will not be part of your base system. That means there's no need to store /usr in the Payload, so we strip /usr/ from the paths (the extra / gives us another byte off!) and recreate it on install. This improved the uncompressed size of the Payload, with only a minor reduction when compressed.
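A tiny illustrative sketch of that prefix handling (assuming every stored path lives under /usr/; the names are made up):

    import std.path : buildPath;
    import std.string : chompPrefix;

    /* Store paths without the constant "/usr/" prefix... */
    string toStoredPath(string fullPath)
    {
        return fullPath.chompPrefix("/usr/"); /* "/usr/bin/nano" -> "bin/nano" */
    }

    /* ...and recreate the full location at install time. */
    string toInstallPath(string systemRoot, string storedPath)
    {
        return buildPath(systemRoot, "usr", storedPath);
    }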

Combined, these changes result in about a 5% decrease in both the compressed and uncompressed size of the Layout Payload.

Before: Layout [Records: 6812 Compression: Zstd, Savings: 86.24%, Size: 689.10 KB]
After: Layout [Records: 6812 Compression: Zstd, Savings: 86.20%, Size: 655.00 KB]

Storing the Hash Efficiently

Using the same method as for the Index Payload, we now store the hash as a ubyte[16], though not directly in the Payload Entry. This gives us a sizeable reduction of 17 bytes per entry, which is the most significant of all the Layout Payload changes. As the old representation compressed well, this only results in a small reduction in the compressed Payload size.

Before: Layout [Records: 6812 Compression: Zstd, Savings: 86.20%, Size: 655.00 KB]
After: Layout [Records: 6812 Compression: Zstd, Savings: 83.92%, Size: 539.26 KB]

Planning payload changes

Hang On, Why am I Getting Faster Installation?

As a side-effect of the small code rewrites needed to implement these changes, we've seen a nice decrease in the time taken to install packages. There are fewer entries to iterate over with the removal of directories, and buildPath is now only called once for each path. It goes to show that focusing on the small details leads to less code that is more efficient and faster. We have now essentially halved the number of events related to buildPath, with all the changes resulting in about a 5% reduction in install time. Note that for this test, over 80% of the time is spent in zstd decompressing the package, which we haven't optimized (yet!). Here's another look at the buildPath numbers factoring in all the changes:

Before:
   48,766,347 ( 1.50%)  /usr/include/dlang/ldc/std/path.d:pure nothrow @safe immutable(char)[] std.path.buildPath
   34,241,805 ( 1.05%)  /usr/include/dlang/ldc/std/utf.d:pure nothrow @safe immutable(char)[] std.path.buildPath
   14,363,684 ( 0.44%)  /usr/include/dlang/ldc/std/range/package.d:pure nothrow @safe immutable(char)[] std.path.buildPath

After:
   25,145,625 ( 0.79%)  /usr/include/dlang/ldc/std/path.d:pure nothrow @safe immutable(char)[] std.path.buildPath
   17,967,216 ( 0.57%)  /usr/include/dlang/ldc/std/utf.d:pure nothrow @safe immutable(char)[] std.path.buildPath
    7,761,417 ( 0.24%)  /usr/include/dlang/ldc/std/range/package.d:pure nothrow @safe immutable(char)[] std.path.buildPath

Summing it All Up

It was a pretty awesome weekend of work (a few weeks ago now), making some quick changes that will improve Serpent OS a great deal for a long time to come. This also means we have integrated all the quick format changes so we won't have to rebuild packages while bringing up the repos.

Here's a quick summary of the results of all these small changes:

  • 51% reduction in the uncompressed size of the Index Payload
  • 25% reduction in the compressed size of the Index Payload
  • 29-50% reduction in the uncompressed size of the Layout Payload (much more with fewer files and more locales)
  • 12-15% reduction in the compressed size of the Layout Payload
  • 5% faster package installation for our benchmark (800ms to 760ms)

These are some pretty huge numbers, and they don't even include the massive improvements we made in the previous blog!

What if We Include the Previous Changes?

I'm glad you asked, because I was curious too! Here's a before and after with all the changes included. For the Layout Payload we see a ~45% reduction in both the compressed and uncompressed size. For the Index Payload we have reduced the uncompressed size by 67% and the compressed size by 56%. Together that roughly halves the compressed and uncompressed size of the metadata in our stone packages!

Before:
Payload: Layout [Records: 5441 Compression: Zstd, Savings: 83.13%, Size: 673.46 KB] (113.61 KB compressed)
Payload: Index [Records: 2550 Compression: Zstd, Savings: 55.08%, Size: 247.35 KB] (111.11 KB compressed)

After:
Payload: Layout [Records: 4718 Compression: Zstd, Savings: 83.62%, Size: 374.42 KB] (61.33 KB compressed)
Payload: Index [Records: 2550 Compression: Zstd, Savings: 39.85%, Size: 81.60 KB] (49.01 KB compressed)

Now we proceed towards bringing up repos to enjoy our very efficient packages!

Out of the Bootstrap - Towards Serpent OS

The initial stone packages that will seed the first Serpent OS repo have now been finalized! This means that work towards setting up the infrastructure for live package updates begins now. We plan on taking time to streamline the processes with a focus on fixing issues at the source. That way we can make packaging fast and efficient, so we can spend time working on features rather than package updates.

The End of Bootstrap

Bootstrapping a distribution involves building a new toolchain and the many packages needed to support it. For us, bootstrap meant getting to the point where we have built stone packages that can seed an initial repository with fully working dependencies. This has been enabled by integrating dependency support into moss and creating the first repo index. Of note, 32-bit support is already enabled, so we have you covered there. While this is the end of bootstrap, the fun has only just begun!

The first install from the bootstrap

Where to Next?

The next goal is to make Serpent OS self-hosting, where we can build packages in a container and update the repo index with the newly built packages. It is essentially a live repository accessible from the internet. There are still plenty of improvements to be made to the tooling, but this will soon enable more users to participate in bringing Serpent OS to fruition.

Becoming More Inclusive

While there's a strong focus in Serpent OS on performance, the decision has been made to lower the initial requirements for users. Despite AVX2 having been around for a while, there are still computers sold today that don't support it. Because of this (and because we already have interested users who don't have newer machines), the baseline requirement for Serpent OS will be x86_64-v2, which only requires SSE4.2.

It was always the plan to add support for these machines, just later down the track. In reality, this makes a lot more sense, as there will be many cases where building 2 versions of a package provides little value. This is where a package takes a long time to build and doesn't result in a notable performance improvement. We will always need the x86_64-v2 version of a package to be compatible with the older machines. With this approach we can reduce the build server requirements without a noticeable impact to users as only a few packages you use will be without extra optimizations (and probably don't benefit from them anyway).

I want to make it clear that this will be temporary, with impactful x86_64-v3+ packages rolling out as soon as possible. This change paves the way for our technology stack to take care of your system for you, and raises the priority of that work. Users whose machines meet the x86_64-v3+ instruction set (which includes additional instructions beyond x86_64-v3) will automatically start installing these faster packages as they become available. Our subscriptions model will seamlessly take care of everything behind the scenes, so you don't need to read a wiki or forum to learn how to get them. We can use the same approach in future for our ARM stack, offering more optimized packages where they provide the most benefit.

Note that from the bootstrap, most packages built in under 15s and only three took longer than 2 minutes.

Trying Out Some Tweaks From the Get Go

The early days of a project are a great time to test out new technologies. The use of clang and lld opens up new possibilities to reduce file sizes, shorten load times and decrease memory usage. Some of these choices may have an impact on compatibility, so testing them out is the best way to understand that. Making sure that you can run apps like Steam is vital to the experience, so whatever happens we will make sure it works. The good news is that due to the unique architecture of Serpent OS, we can push updates that break compatibility with just a reboot, so if we ever feel the need to change the libc, we can do so without you having to reinstall! More importantly, we can test major stack updates by rebooting into a staged update and go straight back to the old system, regardless of your file system.

Speed Packaging!

In the early days of the repository, tooling that makes creating new packages as simple as possible is vital for efficiency. Spending some time automating as much of the process as possible will take weeks off bringing up Serpent OS. Making packaging as easy as possible will also help users when creating their own packages. While it would be faster to work around issues, the build tooling upgrades will benefit everyone.

The other way we'll be speeding up the process is by holding back some of the tuning options by default. LTO, for instance, can result in much longer build times, so it will not initially be the default. The same is true for debug packages, where generating them slows down progress without any tangible benefit at this stage.

Things Are Happening!

We hope you are as excited as we are!

It All Depends

It all depends.. it really does. On shared libraries.. interpreters.. pkg-config providers and packages. It's the same story for all "package managers", how do we ensure that the installed software has everything it needs at runtime?

Our merry crew has been involved in designing and building Linux distributions for a very, very long time, so we didn't want to simply repeat history.

Using updated moss

Thanks to many improvements across our codebase, including moss-deps, we automatically analyse the assets in a package (rapidly too) to determine any dependencies we can add without requiring the maintainer to list them. This is similar in nature to RPM's behaviour.

As such we encode dependencies into our (endian-aware, binary) format which is then stored in the local installation database. Global providers are keyed for quick access, and the vast majority of packages will not explicitly depend on another package's name, rather, they'll depend on a capability or provider. For subpackage dependencies that usually depend on "NEVRA" equality (i.e. matching against a name with a minimum version, release, matching epoch and architecture), we'll introduce a lockstep dependency that can only be resolved from its origin source (repo).

Lastly, we'll always ensure there is no possibility for "partial update" woes. With these considerations, we have no need to support >= style dependencies, and instead rely on strict design goals and maintainer responsibility.

First and foremost!

The rapid move we're enjoying from concept, to prototype, and soon to fully fledged Linux distribution, is only possible with the amazing community support. The last few months have seen us pull off some amazing feats, and we're now executing on our first public milestones. With your help, more and more hours can be spent getting us ready for release, and it would probably help to insulate my shed office! (Spoiler: it's plastic, and electric heaters are expensive =))

The Milestones

We have created our initial milestones that are quite literally our escape trajectory from bootstrap to distro. We're considerably closer now, hence this open announcement.

v0.0: Container (systemd-nspawn)

Our first release medium will be a systemd-nspawn compatible container image. Our primary driver for this is to allow us to add encapsulation for our build tool, boulder, permitting us to create distributable builder images to seed our infrastructure and first public binary repository.

v0.1: Bootable "image"

Once our build infra is up and running (honestly a lot of work has been completed for this in advance) we'll work towards our first 0.1 image. This will initially target VM usage, with a basic console environment and tooling (moss, boulder, etc).

And then..

We have a clear linear path ahead of us, with each stage unlocking the next. During the development of v0.0 and v0.1 we'll establish our build and test infrastructure, and begin hosting our package sources and binaries. At this point we can enter a rapid development cycle with incremental, and considerable improvements. Such as a usable desktop experience and installer.. :)

Recent changes

I haven't blogged in quite a while, as I've been deep in the trenches working on our core features. As we've expressed before, we tend to work on the more complex systems first and then glue them together after to form a cohesive whole. The last few days have involved plenty of glue, and we now have distinct package management features.

Refactor

  • Replaced defunct InstallDB with reusable MetaDB for local installation of archives as well as forming the backbone of repository support.
  • Added ActivePackagesPlugin to identify installed packages
  • Swapped non-cryptographic hash usage over to xxhash

Dependencies

  • Introduced new Transaction type to utilise a directed acyclic graph for dependency solving.
  • Reworked moss-deps into plugins + registry core for all resolution operations.
  • Locally provided .stone files handled by CobblePlugin to ensure we depsolve from this set too.
  • New Transaction set management for identifying state changes and ensuring full resolution of target system state.
  • Shared library and interpreter (DT_INTERP) dependencies and producers automatically encoded into packages and resolved by depsolver.

Package Installation

We handle locally provided .stone packages passed to the install command identically to those found in a repository. This eliminates a lot of special casing for local archives and allows us to find dependencies within the provided set, before looking to the system and the repositories.

Install

Dependency resolution is now performed for package installation and is validated at multiple points, allowing a package like nano to depend on compact, automatically generated dependencies:

    Dependency(DependencyType.SharedLibraryName, "libc.so.6(x86_64)");

Note our format and database are binary and endian aware. The dependency type only requires 1 byte of storage and no string comparisons.
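As a sketch of what a single-byte dependency kind can look like (SharedLibraryName appears above; the remaining members here are merely examples, not the exact moss definitions):

    /* Illustrative enum only; member names/values beyond SharedLibraryName
     * are examples, not the actual moss definitions. */
    enum DependencyType : ubyte
    {
        PackageName = 0,
        SharedLibraryName = 1,
        Interpreter = 2,
        PkgconfigName = 3,
    }
    static assert(DependencyType.sizeof == 1);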

List packages

Thanks to the huge refactor, we can now trivially access the installed packages as a list. This code will be reused for a list available command in future.

Example list installed output:

                   file (5.4) - File type identification utility
             file-32bit (5.4) - Provides 32-bit runtime libraries for file
       file-32bit-devel (5.4) - Provides development files for file-32bit
             file-devel (5.4) - Development files for file
                   nano (5.5) - GNU Text Editor

ListInstalled

Inspect archives

For debugging and development purposes, we've moved our old "info" command to a new "inspect" command to work directly on local .stone files. This displays extended information on the various payloads and their compression stats.

For general users - the new info command displays basic metadata and package dependencies.

Info

Package Removal

Upon generating a new system state, "removed" packages are simply no longer installed. As such no live mutation is performed. As of today we can now request the removal of packages from the current state, which generates a new filtered state. Additionally we remove all reverse dependencies, direct and transitive. This is accomplished by utilising a transposed copy of the directed acyclic graph, identifying the relevant subgraph and excluding that set from the newly generated state.
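A minimal sketch of that reverse-dependency walk, assuming a transposed adjacency list keyed by package name; this is illustrative, not moss's actual Transaction code:

    /* Given a transposed graph (edges point from a package to the packages
     * that depend on it), collect the requested set plus all direct and
     * transitive reverse dependencies. Sketch only. */
    bool[string] removalClosure(string[][string] transposed, string[] requested)
    {
        bool[string] toRemove;
        string[] stack = requested.dup;
        while (stack.length)
        {
            auto pkg = stack[$ - 1];
            stack = stack[0 .. $ - 1];
            if (pkg in toRemove)
                continue;
            toRemove[pkg] = true;
            if (auto revDeps = pkg in transposed)
                stack ~= *revDeps;
        }
        return toRemove;
    }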

Remove

Lastly

The past few weeks have been especially enjoyable. I've truly had a fantastic time working on the project and cannot wait for the team and I to start offering our first downloads, and iterate as a truly new Linux distribution that borrows some ideas from a lot of great places, and fuses them into something awesome.

Keep toasty - this train isn't slowing down.

Performance Corner: Faster Builds, Smaller Packages

Performance Corner is a new series where we highlight some changes in Serpent OS that may not be obvious, but show a real improvement. Performance is a broad term that also covers efficiency: things like making files smaller, making things faster or reducing power consumption. In general, things that are unquestionably improvements with little or no downside. While the technical details may be of interest to some, the main purpose is to highlight the real benefits to users and/or developers that will make using Serpent OS a more enjoyable experience. Show me the numbers!

Here we focus on a few performance changes to the build process that Ikey has been working on, which are showing some pretty awesome results! If you end up doing any source builds, you'll be thankful for these improvements. Special thanks to ermo for the research into hash algorithms and for enabling our ELF processing.

Start From the Beginning

When measuring changes, it's always important to know where you're starting from. Here are some results from a recent glibc build, but before these latest changes were incorporated.

Payload: Layout [Records: 5441 Compression: Zstd, Savings: 83.13%, Size: 673.46 KB]
Payload: Index [Records: 2550 Compression: Zstd, Savings: 55.08%, Size: 247.35 KB]
Payload: Content [Records: 2550 Compression: Zstd, Savings: 81.46%, Size: 236.72 MB]
 ==> 'BuildState.Build' finished [4 minutes, 6 secs, 136 ms, 464 μs, and 7 hnsecs]
 ==> 'BuildState.Analyse' finished [21 secs, 235 ms, 300 μs, and 2 hnsecs]
 ==> 'BuildState.ProducePackages' finished [25 secs, 624 ms, 996 μs, and 8 hnsecs]

The build time is a little high, but a lot of that is due to a slow compiler on the host machine. Analysing and producing packages, however, was also taking a lot longer than it needed to.

The Death of moss-jobs in boulder

In testing an equivalent build outside of boulder, the build stages were about 5% faster. Testing under perf showed the jobs system was a bit excessive for the needs of boulder, polling for work when we already know the points at which parallel jobs are useful. Removing moss-jobs allowed for simpler codepaths using multiprocessing techniques from the core language. This work is integrated in moss-deps, and the excess overhead of the build has now been eliminated.

Before:
 ==> 'BuildState.Build' finished [4 minutes, 6 secs, 136 ms, 464 μs, and 7 hnsecs]
 ==> 'BuildState.Analyse' finished [21 secs, 235 ms, 300 μs, and 2 hnsecs]
After:
[Build] Finished: 3 minutes, 53 secs, 386 ms, 306 μs, and 4 hnsecs
[Analyse] Finished: 8 secs, 136 ms, 22 μs, and 8 hnsecs

The new results reflect a 26s reduction in the overall build time, but only 13s of this relates to the moss-jobs removal. The other major change is making the analyse stage parallel in moss-deps (a key part of why we wanted parallelism to begin with). Decreasing the time from 21.2s to 8.1s is a great achievement, especially as the stage is doing more work: ELF scanning for dependency information was added in between these results.
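The "core language" techniques here are presumably along the lines of D's std.parallelism; a hedged sketch of the shape of a parallel analyse pass (function names are stand-ins):

    import std.parallelism : parallel;

    /* Stand-in for the real per-file work (hashing, ELF scanning, etc). */
    void analyseAsset(string path)
    {
        // ... per-asset analysis would happen here ...
    }

    /* Sketch only: run the analyse stage across the default task pool
     * instead of serially. */
    void analyseAll(string[] assetPaths)
    {
        foreach (path; parallel(assetPaths))
            analyseAsset(path);
    }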

New Hashing Algorithm

One of the unique features in moss is using hashes for file names which allows full deduplication within packages, the running system, previous system roots and for source builds with boulder. Initially this was hooked up using sha256, but it was proving to be a bit of a slowdown.

Enter xxhash, the hash algorithm by Yann Collet used in fast compression software such as lz4 and zstd (and now in many other places!). This is seriously fast, with the potential to produce hashes faster than RAM can feed the CPU. The hash is merely used as a unique identifier in the context of deduplication, not a cryptographic verification of origin. XXH3 with a 128-bit digest has been chosen, as it has an almost zero probability of a collision across tens of millions of files.

The benefit is actually two-fold. First of all, the hash length is halved compared to sha256, so there are savings in the package metadata. This shouldn't be understated, as hash data is generally not as compressible as typical text and there are packages with a lot of files! Here the metadata for the Layout and Index payloads has shrunk by 232KB; that's about a 25% reduction with no other changes.

Before:
Payload: Layout [Records: 5441 Compression: Zstd, Savings: 83.13%, Size: 673.46 KB]
Payload: Index [Records: 2550 Compression: Zstd, Savings: 55.08%, Size: 247.35 KB]
After:
Payload: Layout [Records: 5441 Compression: Zstd, Savings: 86.66%, Size: 522.97 KB]
Payload: Index [Records: 2550 Compression: Zstd, Savings: 60.02%, Size: 165.75 KB]

Compressed, this turns out to be about an 89KB reduction in the package size. For larger packages this probably doesn't mean much, but it could help a lot more with delta packages. For deltas we will be including the full metadata of the Layout and Index payloads, so the difference will be more significant there.

The other benefit of course is the speed, and the numbers speak for themselves! A further 6.4s reduction in build time removes most of the delay at the end of the build for the final package. This will also improve the speed of caching or validating a package.

Before:
 ==> 'BuildState.Analyse' finished [21 secs, 235 ms, 300 μs, and 2 hnsecs]
After:
[Analyse] Finished: 1 sec, 688 ms, 681 μs, and 8 hnsecs

With these changes combined, building packages can take 12x less time in the analyse stage, while reducing the size of the metadata and the overall package. We do expect the analyse time to increase in future as we add more dependency types, debug handling and stripping, but with the integrated parallel model, we can minimize the increase in time.

Building

We're Not Done Yet

The first installment of Performance Corner shows some great wins for the Serpent OS tools and architecture. This is just the beginning: there will likely be a follow-up soon (you may have also noticed that it takes too long to produce the packages), and there are a couple more tweaks to further decrease the size of the metadata. Kudos to Ikey for getting these implemented!

Optimal File Locality

File locality in this post refers to the order of files in our content payload. Yes, that's right, we're focused on the small details and incremental improvements that, combined, add up to significant benefits! All of this came about from testing the efficiency of the content payload in moss-format and how well it compared against a plain tarball. One day boulder was looking extremely inefficient, then retesting the following day proved extremely efficient, without any changes made to boulder or moss-format. What on Earth was going on?

Making Sure you Aren't Going Crazy!

To test the efficiency of our content payload, the natural choice was to compare it to a tarball containing the same files. When first running the test, the results were quite frankly awful: our payload was 10% larger than the equivalent tarball! It was almost unbelievable, so the following day I repeated the test, only this time the content payload was smaller than the tarball. This didn't seem to make sense; I made the tarball with the same files and only changed the directory it was created from. Does that really matter?

File Locality Really Matters!

Of course it does (otherwise this would be a pretty crappy blog post!). Extracting a .stone package creates two directories: mossExtract, where the sha256sum-named files are stored, and mossInstall, where those files are hardlinked to their full path names. The first day I created the tarball from mossInstall; the second day I realised that creating the tarball from mossExtract would provide the closest match to the content payload, since it was a direct comparison. When compressing the tarballs at the same compression level as the .stone, the tarball created from mossInstall was 10% smaller, despite the uncompressed tarball being slightly larger.

Compression Wants to Separate Apples and Oranges

In simple terms, compression works by comparing the data it's currently reading against data it has read earlier in the file. zstd has some great options like --long that increase the distance over which these matches can be made, at the cost of increased memory use. To limit memory use while keeping compression and decompression fast, it takes shortcuts that reduce the compression ratio. For optimal compression, you want files that are most similar to each other to be as close together as possible. You won't get as many matches between a text file and an ELF file as you would between two similar-looking text files.

Spot the Difference

Files in mossExtract are listed in sha256sum order, which is basically random, whereas files in mossInstall are ordered by their path. Sorting files by path provides some semblance of grouping, where binaries sit together in /usr/bin and libraries in /usr/lib, bringing similar files closer together. This is in no way a perfect order, but it is a large improvement on a random one (up to 10% in our case!).

Our glibc package has been an interesting test case for boulder, where an uncompressed tarball of the install directory was just under 1GB. As boulder stores files by their sha256sum, it is able to deduplicate files that are the same even when the build hasn't used symlinks or hardlinks to prevent the wasted space. In this case, the deduplication reduced the uncompressed size of the payload by 750MB alone (that's a lot of duplicate locale data!). In the python package, it removes 1,870 duplicate cache files to reduce the installation size.

As part of the deduplication process, boulder would sort files by sha256sum to remove duplicate hashes; if two files have the same sha256sum, only one copy needs to be stored. It also felt clean, with the output of moss info looking nice with hashes listed in alphabetical order. But it was having a significant negative impact on package sizes, so that was addressed by re-sorting the files into path order (a simple one-liner), making the content payload more efficient than a tarball once again.
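The fix itself really is about a one-liner; a hedged sketch with an illustrative Entry type (not boulder's real model):

    import std.algorithm : sort;

    struct Entry { string path; string hash; } /* illustrative fields only */

    /* Order content entries by install path rather than by hash. */
    void orderByPath(Entry[] entries)
    {
        entries.sort!((a, b) => a.path < b.path);
    }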

Compression Level    sha256sum Order    File path Order
1                    72,724,389         70,924,858
6                    65,544,322         63,372,056
12                   49,066,505         44,039,782
16                   45,365,415         40,785,385
19                   26,643,334         24,134,820
22                   16,013,048         15,504,806

Testing has shown that higher compression levels (and enabling --long) are more forgiving of a suboptimal file order (3-11% smaller without --long vs only 2-5% smaller with it). The table above is without --long, so the difference is larger.

Hang On, Why Don't You...

There's certainly something to this, and sorting by path is only the first step. In future we can consider creating an even more efficient file order to improve locality. Putting all the ELF, image or text files together in the payload will help shave a bit more off our package sizes, at only the cost of sorting the files. However, we don't want to go crazy here; the biggest impact on reducing package sizes will come from deltas as the optimal package delivery system (and there will be a follow-up on this approach shortly). The moss-format content payload is quite simple and contains no filenames or paths, so it's effectively costless to switch around the order of files, meaning we can try out a few things and see what happens.

An Academic Experiment

To prove the value of moss-format and the content payload, I tried out some crude sorting methods and measured their impact on compression for the package. As you want similar files chunked together, the files were divided into 4 groups, each still sorted by path order within its chunk (a rough sketch follows the list below):

  • gz: gzipped files
  • data: non-text files that weren't ELF
  • elf: ELF files
  • text: text files (bash scripts, perl etc)
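Here's a rough sketch of that grouping, with a deliberately crude extension-based classifier standing in for real detection (the experiment inspected the files themselves):

    import std.algorithm : multiSort;
    import std.path : extension;

    /* Illustrative types and grouping only. */
    enum Category : ubyte { gz, data, elf, text }
    struct FileInfo { Category category; string path; }

    /* Crude stand-in for classification; proper detection would check
     * ELF magic bytes and text heuristics rather than extensions. */
    Category classify(string path)
    {
        switch (path.extension)
        {
            case ".gz":                return Category.gz;
            case ".so", ".o":          return Category.elf;
            case ".sh", ".txt", ".pl": return Category.text;
            default:                   return Category.data;
        }
    }

    /* Chunk by category, keeping path order within each chunk. */
    void orderForCompression(FileInfo[] files)
    {
        files.multiSort!((a, b) => a.category < b.category,
                         (a, b) => a.path < b.path);
    }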

Path order vs optimal order

As the chart shows, you can get some decent improvements from reordering files within the tarball when grouping files in logical chunks. At the highest compression level, the package is reduced by 0.83% without any impact on compression or decompression time. In the compression world, such a change would be greatly celebrated!

Also important to note is that just moving the gzipped files to the front of the payload captured 40% of the size improvement at high compression levels, but gave slightly worse compression at levels 1-4. So simple changes to the order (in this case moving non-compressible files to the edge of the payload) can provide a reduction in size at the higher levels that we care about. We don't want to spend a long time analyzing files for a small reduction in package size, so we can start off with basic concepts like this: moving files that don't compress well, such as already compressed files, images and video, to the start of the payload, so that the remaining files are closer together. We also need to test a broader range of packages and the impact any changes would have on them.

Food For Thought?

So ultimately, the answer to the original question (is moss-format efficient?) is yes! While there are some things that we still want to change to make it even better, in its current state package creation was faster and overheads were lower than compressing an equivalent tarball. The compressed tarball at zstd -16 was 700KB larger than the full .stone file (which contains a bit more data than the tarball).

The unique format also proves its worth in that we can make further adjustments to increase performance, reduce memory requirements and reduce package sizes. What this experiment shows is that file order really does matter, but using the basic sorting method of filepath gets you most of the way there and is likely good enough for most cases.

Here are some questions we can explore in future to see whether there's greater value in tweaking the file order:

  • Do we sort ELF files by path order, file name or by size?
  • Does it matter the order of chunks in the file? (i.e. ELF-Images-Text vs Images-Text-ELF)
  • How many categories do we need to segregate and order?
  • Can we sort by extension? (i.e. for images, all the png files will be together and the jpegs together)
  • Do we simply make a couple of obvious changes to order and leave zstd to do the heavy lifting?

Unpacking the Build Process: Part 2

Part 2 looks at the core of the build process, turning the source into compiled code. In Serpent OS this is handled by our build tool boulder. It is usually the part of the build that takes the longest, so it's where speed-ups have the most impact. How long it takes is largely down to the performance of your compiler and which compile flags you are building with.

This post follows on from Part 1.

Turning Source into Compiled Code

The steps for compiling code are generally quite straight-forward:

  • Setting up the build (cmake, configure, meson)
  • Compiling the source (in parallel threads)
  • Installing the build into a package directory

This builds software compiled against packages installed on your system. It's a bit more complicated when packaging, as we first set up an environment to compile in (see Part 1). But even then you have many choices to make, and each can have an impact on how long it takes to compile the code. Do you build with Link Time Optimization (LTO) or Profile Guided Optimization (PGO)? Do you build the package for performance or for the smallest size? Then there are packages that benefit considerably from individual tuning flags (like -fno-semantic-interposition with python). With so many possibilities, boulder helps us utilize them through convenient configuration options.

What Makes boulder so Special?

As I do a lot of packaging and performance tuning, boulder is where I spend most of my time. Here are some key features that boulder brings to make my life easier.

  • Ultimate control over build C/CXX/LDFLAGS via the tuning key
  • Integrated 2 stage context sensitive PGO builds with a single line workload
  • Able to switch between gnu and llvm toolchains easily
  • Rules based package creation
  • Control the extraction locations for multiple upstream tarballs

boulder will also be used to generate and amend our stone.yml files, taking care of as much as possible automatically. This is only the beginning for boulder: it will continue to be expanded with new tricks to make packaging more automated, surface more information to help packagers know when they can improve their stone.yml, and alert them when something might be missing.

Serpent OS is focused on the performance of produced packages, even if that means that builds will take longer to complete. This is why we have put in significant efforts to speed up the compiler and setup tools in order to offset and minimize the time needed to enable greater performance.

Why do You Care so Much About Setup Time?

My initial testing focused on the performance of clang as well as the time taken to run cmake and configure. This lays the foundation for all future work in expanding the Serpent OS package archives at a much faster pace. On the surface, running cmake can be a small part of the overall build. However, it is important in that it utilizes a single thread, so it is not sped up by adding more CPU cores like the compile stage is. With a more compile-heavy build, our highly tuned compiler can build the source in around 75s. So tuning the setup step to run in 5s rather than 10s actually reduces the overall build time by an additional 6% (5s saved out of roughly 85s)!

There are many smaller packages where the setup time is an even higher proportion of the overall build, and this becomes more relevant as you increase the number of threads on the builder. For example, when building nano on the host, the configure step takes 13.5s while the build itself takes only 2.3s, so there are significant gains to be had from speeding up the setup stage of a build (which we will absolutely be taking advantage of!).

A Closer Look at the clang Compiler's Performance

A first cut of the compiler results was shared earlier in Initial Performance Testing, and given its importance to overall build time, I've been taking a closer look. In that post I said that "at stages where I would have expected to be ahead already, the compile performance was only equal", and now I have identified the discrepancy.

I've tested multiple configurations for the clang compiler and noticed that changing the default standard C++ library makes a difference to the time of this particular build. The difference between the two runs is compiling llvm-ar with the LLVM libraries of compiler-rt/libc++/libunwind or the GNU libraries of libgcc/libstdc++. And just to be clear, this is about the increased time to compile llvm-ar when building it against libc++ vs libstdc++, not about the runtime performance of either library. The clang compiler itself is built with libc++ in both cases, as that produces a faster compiler.


Test using clang       Serpent LLVM libs    Serpent GNU libs    Host
cmake LLVM             5.89s                5.67s               10.58s
Compile -j4 llvm-ar    126.16s              112.51s             155.32s
configure gettext      36.64s               36.98s              63.55s

The host now takes 38% longer than the Serpent OS clang when building with the same GNU libraries, which is much more in line with my expectations. Next steps will be getting BOLT and perf integrated into Serpent OS to see if we can shave even more time off the build.

What remains unclear is whether this difference is due to something specifically in the LLVM build or whether it would translate to other C++ packages. I haven't noticed a 10% increase in build time when performing the full compiler build with libc++ vs libstdc++.

Unpacking the Build Process: Part 1

While the build process (or packaging, as it's commonly referred to) is largely hidden from most users, it is fundamental to the efficiency of development. In Serpent OS this efficiency also extends to users via source-based builds for packages you may want to try or use that aren't available as binaries upstream.

The build process can be thought of as three distinct parts: setting up the build environment, compiling the source, and post-build analysis plus package creation. Please note that this process hasn't been finalized in Serpent OS, so we will be making further changes where possible.

Setting up the Build Environment

Some key parts to setting up the build environment:

  • Downloading packages needed as dependencies for the build
  • Downloading upstream source files used in the build
  • Fetching and analyzing the latest repository index
  • Creating a reproducible environment for the build (chroot, container or VM for example)
  • Extracting and installing packages into the environment
  • Extracting tarballs for the build (this is frequently incorporated as part of the build process instead)

While the focus of early optimization work has been on build time performance, there's more overhead to creating packages than simply compiling code. Now that the compiler is in a good place, we can explore the rest of the build process.

Packaging is More than just Compile Time

There's been plenty of progress in speeding up the creation of the build environment such as parallel downloads to reduce connection overhead and using zstd for the fast decompression of packages. But there's more that we can do to provide an optimal experience to our packagers.

Some parts of the process are challenging to optimize: while you can download multiple files at once to ensure maximum throughput, you are still ultimately limited by your internet speed. When packaging regularly (or building a single package multiple times), downloaded files are cached, so they become a one-off cost. One part we have taken a particular interest in speeding up is extracting and installing packages into the environment.


The Dynamic Duo: boulder and moss

Installing packages into a clean environment can be the most time-consuming part of setting up the build (excluding fetching files, which is highly variable). Serpent OS has a massive advantage with the design of moss, where packages are cached (extracted on disk) and ready to be used by multiple roots, including the clean build environments created for boulder. In other build systems we've seen in action, setting up the root could take quite some time with a large number of dependencies (sometimes over a minute). moss avoids the cost of extracting packages on every build entirely by utilizing its cache!

There are also secondary benefits to how moss handles packages via its cache, where disk writes are reduced by only needing to extract packages a single time. But hang on, won't you be using tmpfs for builds? Of course we will have RAM builds as an option, and there are benefits there too! Extracting packages to a RAM disk consumes memory, which can add up to more than a GB before the build even begins. moss allows us to start with an empty tmpfs, so we can perform larger builds before exhausting the memory available on our system.

Another great benefit is due to the atomic nature of moss. This means that we can add packages to be cached as soon as they're fetched while waiting for the remaining files to finish downloading (both for boulder and system updates). Scheduling jobs becomes much more efficient and we can have the build environment available in moments after the last file is downloaded!

moss allows us to eliminate one of the bigger time sinks in setting up builds, enabling developers and contributors alike to be more efficient in getting work done for Serpent OS. With greater efficiency it may become possible to provide a second architecture for older machines (if the demand arises).

Part 1?

Yes, there's plenty more to discuss, so there will be more follow-up posts showing the cool things Serpent OS is doing to both reduce the time taken to build packages and make packages easier to create, so stay tuned!

A Rolling Boulder Gathers No Moss

We actually did it. Super pleased to announce that moss is now capable of installing and removing packages. Granted, super rough, but gotta start somewhere right?

Transactional system roots + installation in moss

OK let's recap. A moss archive is super weird, and consists of multiple containers, or payloads. We use a strongly typed binary format, per-payload compression (currently zstd), and don't store files in a typical archive fashion.

Instead a .stone file (moss archive) has a Content Payload, which is a compressed "megablob" of all the unique files in a given package. The various files contained within that "megablob" are described in an IndexPayload, which simply contains some IDs and offsets, acting much like a lookup table.

That data alone doesn't actually tell us where files go on the filesystem when installed. For that, we have a specialist Layout Payload, encoding the final layout of the package on disk.

As can be imagined, the weirdness made it quite difficult to install in a trivial fashion.

Databases

Well, persistence really. Thanks to RocksDB and our new moss-db project, we can trivially store information we need from each package we "precache". Primarily, we store full system states within our new StateDB, which at present is simply a series of package ID selections grouped by a unique 64-bit integer.

Additionally we remember the layouts within the LayoutDB so that we can eventually recreate said layout on disk.

Precaching

Before we actually commit to an install, we try to precache all of the stone files in our pool. So we unpack the content payload ("megablob") and split it into its unique files in the pool, ready for use. At this point we also record the Layouts, but do not "install" the package to a system root.

Blitting

This is our favourite step. When our cache is populated, we gather all relevant layouts for the current selections, and then begin applying them in a new system root. All directories and symlinks are created as normal, whereas any regular file is hardlinked from the pool. This process takes a fraction of a second and gives us completely clean, deduplicated system roots.
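A stripped-down sketch of the regular-file case (illustrative names; the real moss code also handles directories, symlinks and error paths):

    import std.exception : enforce;
    import std.file : mkdirRecurse;
    import std.path : buildPath, dirName;
    import std.string : toStringz;
    import core.sys.posix.unistd : link;

    /* Hardlink a cached file from the dedup pool into a new system root.
     * No file data is copied, which is why blitting is so fast. */
    void blitRegularFile(string poolPath, string targetRoot, string relPath)
    {
        auto dest = buildPath(targetRoot, relPath);
        mkdirRecurse(dest.dirName);
        enforce(link(poolPath.toStringz, dest.toStringz) == 0,
                "failed to hardlink " ~ dest);
    }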

Currently these live in /.moss/store/root/$ID/usr. To complete the transaction, we atomically update /usr to point to the new usr tree, assuming a reboot isn't needed. In future, boot switch logic will update the tree for us.

Removal

Removal is much the same as installation. We simply remove the package IDs from the new state's selections (copied from the last state) and blit a new system root, finally updating the atomic /usr pointer.

Removal

Tying it all together

We retain classic package management traits such as having granular selections, multiple repositories, etc, whilst sporting advanced features like full system deduplication and transactions/rollbacks.

When we're far enough along, it'll be possible to boot back to the last working transaction without requiring an internet connection. Due to the use of pooling and hardlinks, each transaction tree is only a few KiB, with files shared between each transaction/install.

On the list..

We need some major cleanups, better error handling, logging, timed functions, and an event-loop driven process to allow parallel fetching/precaching prior to final system rootfs construction.

It's taken us a very long time to get to this point, and there is still more work to be done. However this is a major milestone and we can now start adding features and polish.

Once the required features are in place, we'll work on the much needed pre alpha ISO :) If you fancy helping us get to that stage quicker, do check out our OpenCollective! (We won't limit prealpha availability, don't worry :))

Moss DB Progress

I'll try to make this update as brief as I can but it's certainly an important one, so let's dive right into it. The last few weeks have been rough but work on our package manager has still been happening. Today we're happy to reveal another element of the equation: moss-db.


moss-db is an abstract API providing access to simple "key value" stores. We had initially used some payload-based files as databases, but that introduced various hurdles, so we decided to take a more abstract approach so as not to tie ourselves to any specific database implementation.

Our main goal with moss-db is to encapsulate the RocksDB library, providing sane, idiomatic access to a key value store.

High level requirements

At the highest level, we needed something that could store arbitrary keys and values, grouped by some kind of common key (commonly known as "buckets"). We've succeeded in that abstraction, which also required us to fork a rocksdb-binding to add the Transform APIs required.

Additionally we required idiomatic range behaviours for iteration, as well as generic access patterns. To that effect we can now foreach a bucket, pipe it through the awesomely powerful std.algorithm APIs, and automatically encode/decode keys and values through our generic APIs when implementing the mossdbEncode() and mossdbDecode() functions for a specific type.

In a nutshell, this was the old, ugly, hard way:

    /* old, hard way */
    import core.stdc.string : strlen;
    import std.bitmanip : nativeToBigEndian;
    import std.string : toStringz;
    string name = "john";
    auto nameZ = name.toStringz();
    int age = 100;
    ubyte[int.sizeof] ageEncoded = nativeToBigEndian(age);
    db.setDatum(cast(ubyte[]) (nameZ[0 .. strlen(nameZ)]), ageEncoded);

And this is the new, shmexy way:

    db.set("john", 100);
    db.set("user 100", "bobby is my name");

    auto result = db.get!int("john");
    if (result.found)
    {
        writeln(result.value);
    }

    auto result2 = db.get!string("user 100");
    if (result2.found)
    {
        writeln(result2.value);
    }

It's quite easy to see the new API lends itself robustly to our needs, so that we may implement stateful, strongly typed databases for moss.
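As a purely hypothetical example of what implementing those hooks for a custom type might look like (the signatures here are assumptions, not the actual moss-db API):

    import std.bitmanip : bigEndianToNative, nativeToBigEndian;

    /* Hypothetical only: a type opting in to the generic encode/decode
     * machinery. Signatures are assumptions, not the real moss-db API. */
    struct Age
    {
        uint value;

        ubyte[] mossdbEncode() const
        {
            return nativeToBigEndian(value).dup;
        }

        void mossdbDecode(scope const(ubyte)[] raw)
        {
            ubyte[uint.sizeof] buf = raw[0 .. uint.sizeof];
            value = bigEndianToNative!uint(buf);
        }
    }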

Next Steps

Even though some APIs in moss-db may still be lacking (remove, for example) we're happy that it can provide the foundation for our next steps. We now need to roll out the new StateDB, MetaDB and LayoutDB, to record system states, package metadata, and filesystem layout information, respectively.

With those 3 basic requirements in place we can combine the respective works into installation routines. Which, clearly, warrants another blog post ... :)

For now, you can see the relevant repositories on our GitLab.

Initial Performance Testing

With further progress on boulder, we can now build native stone packages with some easy tweaks such as profile guided optimizations (PGO) and link time optimizations (LTO). That means we can take a first look at what the performance of the first cut of Serpent OS shows for the future. The tests have been conducted using benchmarking-tools with Serpent OS measured in a chroot on the same host with the same kernel and config.

One of the key focuses early in the project is reducing build time. Every feature can either add to or subtract from the time it takes to produce a package. With a source/binary hybrid model, users will greatly benefit from faster builds as well. What I've targeted in these tests is the performance of clang, along with some compiler flag options tested via cmake.

Clang Shows its Promise

clang has always been a compiler with a big future. Its performance credentials have been improving each release, and we are now starting to see it perform strongly against its GNU counterpart. It is common to hear that clang is slow and produces less optimized code. I will admit that most distros provide a slow build of clang, but that will not be the case in Serpent OS.

It is important to note that in this comparison the Host distro has pulled in some patches from LLVM-13 that greatly improve the performance of clang. Prior to this, their tests actually took 50% longer for cmake and configure but only 10% longer for compiling. boulder does not yet support patching in builds so the packages are completely vanilla.

Test using clang       Serpent    Host       Difference
cmake LLVM             5.89s      10.58s     79.7%
Compile -j4 llvm-ar    126.16s    155.32s    23.1%
configure gettext      36.64s     63.55s     73.4%

Based on the results during testing, the performance of clang in Serpent OS still has room to improve, and this was just a quick tuning pass. At stages where I would have expected to be ahead already, the compile performance was only equal (though cmake and configure were still well ahead).

GCC Matters Too!

While clang is the default compiler in Serpent OS, there may be instances where performance is not quite where it could be. It is common to see software hit less optimized code paths with clang when it isn't tested with clang upstream. As an example, here are a couple of patches in flac (1, 2) that demonstrate this being improved. Using benchmarking-tools, it is easy to see from perf results where gcc and clang builds are running different functions.

In circumstances where the slowdown is due to hitting poorly optimized paths with clang, we always have the option to build those packages using gcc; the GNU toolchain is essential for building glibc in any case. So having a solid GNU toolchain is important, but small compile-time improvements there won't be noticed by users or developers as much.

Test using gcc       Serpent    Host       Difference
cmake LLVM           7.00s      7.95s      13.6%
Compile llvm-ar      168.11s    199.07s    18.4%
configure gettext    45.45s     51.93s     14.3%

An OS is More Than Just a Compiler

While the current bootstrap exists only as a starting point for building the rest of Serpent OS, there are some other packages we can easily test and compare. Here's a summary of those results.

Test                              Serpent      Host         Difference
Pybench                           1199.67ms    1024.33ms    -14.6%
xz Compress Kernel (-3 -T1)       42.67s       46.57s       9.1%
xz Compress Kernel (-9 -T4)       71.25s       76.12s       6.8%
xz Decompress Kernel              8.03s        8.18s        1.9%
zlib Compress Kernel              12.60s       13.17s       4.5%
zlib Decompress Kernel            5.14s        5.21s        1.4%
zstd Compress Kernel (-8 -T1)     5.77s        7.06s        22.3%
zstd Compress Kernel (-19 -T4)    51.87s       66.52s       28.3%
zstd Decompress Kernel            2.90s        3.08s        6.3%

State of the Bootstrap

From my experience testing the bootstrap, it is clear there are some cobwebs in there that require a few more iterations of the toolchain. There also seem to be some slowdowns from not yet including all the dependencies of some packages. Once more packages are included, all the testing will naturally be redone and will help influence the default compiler flags of the project.

It's not yet clear what the experience of using libc++ vs libstdc++ with the clang compiler will be. Once the cobwebs are out and Serpent OS is further developed, the impact (if any) should become more obvious. There are also some parts not yet enabled in boulder by default, such as stripping files, LTO and other flags that will speed up loading libraries. At this stage this is deliberate, until we integrate outputs from builds (such as symbol information).

But this provides an excellent platform to build out the rest of the OS. The raw speed of the clang compiler will make iterating and expanding the package set a real joy!

Hang On, What's Going on With Python?

Very astute of you to notice! python in its current state is an absolutely minimal build, just enough to run meson. However, I did an analyze run in benchmarking-tools where it became obvious that the two builds were doing completely different things.

Apples and oranges comparison

For now I'll simply be assuming this will sort itself out when python is built complete with all its functionality. And before anyone wants to point the finger at clang, you get the same result with gcc.