Blog

The March Of Progress

Despite a brief excursion out of the country for a once-in-a-lifetime holiday, I'm happily back at the desk to bring you up to date with the latest goings-on in Serpent OS. TL;DR: loads of awesome, bare metal is enabled, and an ISO cycle lands in the next couple of weeks.

GNOME on Serpent OS

End of February Update

This update comes a little later in the month, as we've got a lot of exciting news to share: everything from boulder in Rust to the GNOME 45 desktop, complete with moss triggers, built atop a rebootstrapped toolchain.

Rusty Boulder

We're pleased to announce that over the course of this weekend, once testing has completed, we'll deploy the latest version of boulder, our package build tool. It has been given the Rust treatment, and now directly shares its codebase with moss.

Boulder in action

January Updates

It's been precisely one month since our end of year summary, so what have we been up to? With a new year, a new start was long overdue. We're pleased to announce that we've finally shifted our butts over to our own server (again).

While this isn't quite as chunky a news update as the last post, we rather hope you appreciate the regularity of the news.

moss in action

End Of Year Summary

Burning question - how long before we can use Serpent OS on our development systems? It's a fair question - and one that requires context to answer. Let's start our new monthly update cycle with a recap of progress so far.

screenshot of moss

Introducing tarkah

We'd like to extend a huge official welcome, and thank you, to newest team member Cory Forsstrom (aka tarkah)!

Oxidised Moss

Allow me to start this post by stating something very important to me: I absolutely love the D programming language, along with the expressivity and creative freedom it brings. Therefore please do not interpret this post as an assassination piece.

For a number of months now, the Serpent OS project has stood rather still. While this could naively be attributed to our shared plans with Solus, the real cause is a deeper, technical issue.

D isn't quite there yet. But it will be, some day.

Again, allow me to say I have thoroughly enjoyed my experience with D over the last three or so years; it has been truly illuminating for me as an engineer. With that said, we have also become responsible for an awful lot of code. As an engineering-led distribution + tooling project, our focus is on secure and auditable code paths. To that extent we pursued DIP1000 as far as practical, and must admit it has a way to go before it addresses our immediate needs for memory safety.

While we're quite happy to be an upstream for Linux distributions by way of release channels and tooling releases, we don't quite have the resources to also be an upstream for the numerous D packages we'd need to create and maintain to get our works over the finish line.

With that said, I will still continue to use D in my own personal projects, and firmly believe that one day D will realise its true potential.

Rewarding contributors

Our priorities have shifted somewhat since the announcement of our shared venture with Solus, and we must make architectural decisions based on the needs of all stakeholders involved, including the existing contributor pool. Additionally, care should be taken to be somewhat populist in our choice of stacks in order to give contributors industry-relevant experience to add to their résumé (CV).

Playing to strengths

Solus has typically been a Golang-oriented project, and it has a number of experienced developers. With the addition of the Serpent developers, the combined development team has a skill pool featuring Rust and Go, as well as various web stack technologies.

Reconsidering the total project architecture, including our automated builds, we have made the following decisions, weighing the requirements of wide adoption and support, robust ecosystems, and established tooling:

  • Rust for low-level tooling and components, chiefly moss, boulder, and libstone.
  • ReactJS/TypeScript for our frontend work (Summit Web).
  • Go for our web and build infrastructure (Summit, Avalanche, Vessel, etc.).

Let's start... 2 days ago!

The new infrastructure will be brought up using widely available modules, and designed to be scalable from the outset as part of a Kubernetes deployment, with as little user interaction as possible. Our eventual plans include rebuilding the entire distribution from source, with heavy caching, whenever part of the dependency graph changes.

This infrastructure will then be extended to support the Solus 4 series for quality of life improvements to Solus developers, enabling a more streamlined dev workflow: TL;DR less time babysitting builds = more Serpent development focus.

Our priority these past few days has been the new moss-rs repository, where we have begun to reimplement moss in Rust. So far we have a skeleton CLI powered by clap, along with an in-progress library for reading .stone archives, our custom package format.
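
For the curious, a clap-based skeleton looks roughly like the sketch below. To be clear, this is an illustration only: the subcommands and the -D flag are hypothetical stand-ins, not the actual moss-rs interface.

// Hypothetical sketch of a clap-powered CLI skeleton (requires the clap
// crate with its "derive" feature); not the actual moss-rs code.
use clap::{Parser, Subcommand};

#[derive(Parser)]
#[command(name = "moss", about = "Next generation package manager")]
struct Cli {
    /// Operate on an alternative root directory (hypothetical flag)
    #[arg(short = 'D', long, default_value = "/")]
    directory: String,

    #[command(subcommand)]
    command: Command,
}

#[derive(Subcommand)]
enum Command {
    /// Install packages by name
    Install { names: Vec<String> },
    /// Inspect a .stone archive
    Inspect { path: String },
}

fn main() {
    let cli = Cli::parse();
    match cli.command {
        Command::Install { names } => println!("install {names:?} into {}", cli.directory),
        Command::Inspect { path } => println!("inspect {path}"),
    }
}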

The project is organised as a Rust workspace, with the view that stone, moss and boulder will all live in the same tree. Our hope is that this vastly improves the onboarding experience and (finally) opens the doors to contributors.

Licensing

It should also be noted that the new tooling is made available under the terms of the Mozilla Public License (MPL-2.0). After internal discussion, we felt the MPL offered the greatest level of defence against patent trolls while still ensuring our code was widely free for all to respectfully use and adapt.

Please also note that we have always credited, and continue to deliberately credit, copyright as:

Copyright © 2020-2023 Serpent OS Developers

This is a virtual collective consisting of all who have contributed to Serpent OS (per the git logs), and is designed to prevent us from being able to change the license down the line, i.e. a community-protective measure.

Glad to be back

Despite some sadness in the change of circumstances, we must make decisions that benefit us collectively as a community. Please join us in raising a virtual glass to new pastures, and a brave new blazing fast 🚀 (TM) future for Serpent OS and Solus 5.

Cheers!

Snakes On A Boat

We had intended to get a blog post out a little bit quicker, but the last month has been extremely action-packed. However, it has paid off immensely. As our friends at Solus recently announced, it is time to embark on a new voyage.

General gist of it

There is very little point in rehashing the post, so I'll cover the essentials:

  • We assisted the Solus team in deploying a completely new infrastructure. Some of this work is ongoing.
  • The package delivery pipeline is in place, meaning updates will soon reach users.
  • We're working towards a mutually beneficial future whereby Solus adopts our tooling.

Is Serpent OS over?

Oh no, no. The most practical angle to view this from is that Serpent OS is free to build and innovate to create the basis of Solus 5. Outside of the distribution we're still keen to continue development on our tooling and strategies.

It is therefore critical that we continue development and find the best approach for both projects, to make the transition to Solus 5 as seamless as possible. To that end we will still need to produce ISOs and maintain an active community of users and testers. During this transition period, despite being two separate projects, we're heading towards a common goal with shared interests.

Finances

We have an exciting journey ahead for all of us, and there are many moving parts involved. Until we're at the point of mutual merge, we will keep the entities and billing separate. Thus, funds to the Solus OpenCollective are intended for use within Solus, whereas we currently use GitHub Sponsors for our own project needs.

Currently Solus and Serpent OS share one server, which was essential for a quick turnaround on enabling the infrastructure. At the end of this month Serpent OS will migrate from that server to a new, separate system, ensuring the projects are billed separately.

Long story short: if you wish to sponsor Serpent OS specifically, please do so via our GitHub Sponsors account, as our monthly running costs will rise immediately at the end of this month. If you're supporting Solus development, please do visit them and sponsor them =)

Next steps?

Glad you asked! Now that the dust is settling, we're focusing on Serpent OS requirements and helping Solus where we can. Our immediate goal is to build a dogfooding system for a small collection of developers to run as an unsupported pre-alpha configuration, allowing us to flesh out the tooling and processes.

This will include a live-booting GNOME ISO, and sufficient base packages to freely iterate on moss, boulder, etc., as well as our own infrastructure. Once we've attained a basic level of quality and have an installer option, those ISOs will be made available to you all!

Infrastructure Launched!

After many months and much work, our infrastructure is finally online. We've had a few restarts, but it's now fully online with 2 builders, ready to serve builds around the clock.

Firstly, I'd like to apologise for the delay since our last blog post. We made the decision to move this website to static content, which took longer than expected. We're still using our own D codebase, but now prebuild the site (and a fake "API") so we can lower the load on our web server.

Introducing the Infra

Our infrastructure is composed of 3 main components.

Summit

This is the page you can see over at dash.serpentos.com, and it contains the build scheduler. It monitors our git repositories and, as soon as it discovers any missing builds, creates build tasks for them. It uses a dependency graph to run as many builds in parallel as possible, while correctly ordering (and blocking) builds based on their build dependencies.
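
The core scheduling idea is simple enough to sketch. Purely as an illustration (Summit itself lives in our own codebase; the task names and helper below are invented), a scheduler derives the "ready" set from the dependency graph and dispatches everything in it concurrently:

// Invented sketch of ready-set scheduling over a build dependency graph.
// A task becomes ready once all of its build dependencies have landed.
use std::collections::{HashMap, HashSet};

fn ready_tasks(
    deps: &HashMap<&str, Vec<&str>>, // task -> build dependencies
    done: &HashSet<&str>,            // tasks already built
) -> Vec<String> {
    deps.iter()
        .filter(|(task, _)| !done.contains(*task))
        .filter(|(_, d)| d.iter().all(|dep| done.contains(dep)))
        .map(|(task, _)| task.to_string())
        .collect()
}

fn main() {
    let mut deps = HashMap::new();
    deps.insert("glibc", vec![]);
    deps.insert("zlib", vec!["glibc"]);
    deps.insert("openssl", vec!["glibc", "zlib"]);

    let mut done = HashSet::new();
    done.insert("glibc");

    // With glibc built, zlib unblocks; openssl stays blocked on zlib.
    println!("{:?}", ready_tasks(&deps, &done)); // ["zlib"]
}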

Avalanche

We have 2 instances of Avalanche, our builder tool, running. These accept configuration and build requests from Summit, and report back status and available builds.

Vessel

This is an internal repository manager, which accepts completed packages from an Avalanche instance. Published packages land at dev.serpentos.com.

Next steps

It's super early days for the infra, but we now have builds flowing as part of our continuous delivery solution. Keeping this blog post short and sweet... we're about to package GNOME and start racing towards our first real ISOs!

Don't forget

If you like what the project is doing, don't forget to sponsor!

Lift Off

Enough of this "2 years" nonsense. We're finally ready for lift off. It is with immense pleasure we can finally announce that Serpent OS has transitioned from a promise to a deliverable. Bye bye, phantomware!

We exist

As mentioned, we spent 2 years working on tooling and process. That's... well. Kinda dull, honestly. You're not here for the tooling, you're here for the OS. To that end I made the decision to accelerate development of the actual Linux distro, and shift development of the tooling into a parallel effort.

Infrastructure... intelligently deferred

I deferred the final enabling of the infrastructure until January to resolve the chicken-and-egg scenario, whilst allowing us to grow a base of contributors and an actual distro to work with. We're in a good position with minimal blockers, so no concern there.

A real software collection

This is our term for the classical "package repository". We're using a temporary collection right now to store all of the builds we produce. In keeping with the Avalanche requirements, this is the volatile software collection: it changes a lot and has no release policy.

A community.

It goes without saying, really, that our project isn't remotely possible without a community. I want to take the time to personally thank everyone who stepped up to the plate lately and contributed to Serpent OS. Without the work of the team, in which I include the contributors to our venom recipe repository, an ISO was never possible. Additionally, contributions to the tooling have helped us make significant strides.

It should be noted we've practically folded our old "team" concept and ensured we operate across the board as a singular community, with some members having additional responsibilities. Our belief is that all in the community have an equal share and say. With that said, I thank the original "team", members both past and present, for their (long) support and contributions to the project.

An ISO.

We actually went ahead and created our first ISO. OK that's a lie, this is probably the 20th revision by now. And let's be brutally honest here:

It sucks.

We expected no less. However, the time is definitely here for us to begin our public iteration, transitioning from suckness to a project worth using. To do that, we need to get to a point where we can dogfood our own work and build a daily driver. Our focus right now is building out the core technology and packaging to achieve those aims.

So if you want to try our uninstallable, buggy ISO, chiefly created as a brief introduction to our package manager and toolchain, head to our newly minted Download page. Set your expectations low, ignore your dreams, and you will not be disappointed!

All jokes aside, it took a long time to get to the point where we could even construct our first, KVM-focused, UEFI-only snekvalidator.iso. We now have a baseline to improve on, a working contribution process, and a booting, self-hosting system.

The ISO is built using 2 layered collections: the protosnek collection containing our toolchain, and the new volatile collection. Much of the packaging work has been submitted by venom contributors and the core team. Note that you can install neofetch, which our very own Rune Morling (ermo) patched to support the Serpent OS logo.

Boot it in Qemu (or certain Intel laptops) and play with moss now! Note, this ISO is not installable, and no upgrade path exists. It is simply the beginnings of a public iteration process.

Next steps

In January we'll launch our infrastructure to scale out contributions, as well as to permit the mass rebuilds that need to happen. We have to re-enable our -dbginfo packages and stripping, which were disabled due to a parallelism issue. We need to introduce our boot management based around systemd-boot, provide more kernels, enable more hardware, introduce moss triggers, and much more. However, this is a pivotal moment for our project, as we've finally become a real, if sucky, distro. The future is incredibly bright, and we intend to deliver on every one of our promises.

As always, if you want to support our development, please consider sponsoring the work, or engaging with the community on Matrix or indeed our forums.

You can discuss this blog post, or leave feedback on the ISO, over at our forums.

The Big Update

Well - we've got some big news! The past few weeks have been an incredibly busy time for us, and we've hit some major milestones.

Funding

After much deliberation, we've decided to pull out of Open Collective. Among other reasons, the fees are simply too high and severely impact the funds available to us. In our early stages, the team consensus is that the funds generated are used to compensate my time working on Serpent OS.

As such I'm now moving funding to my own GitHub Sponsors page - please do migrate! It ensures your entire donation makes it and keeps the lights on for longer =) Please remember I'm working full time on Serpent OS exclusively - I need your help to keep working.

Moved to GitHub

We've pretty much completed our transition to GitHub, where our new organisations are now up and running.

Forums

Don't forget - our forums are live over at forums.serpentos.com - please feel free to drop in and join in with the community =)

Rehash on the tooling

OK so what exactly are moss and boulder? In short - they're the absolute core pieces of our distribution model.

"What is moss?"

On the surface, moss looks and feels roughly the same as just about any other traditional package manager out there. Internally, however, it's far more modern and has a few tricks up its sleeve. For instance, every time you initiate an operation in moss, be it installation, removal, upgrade, etc., a new filesystem transaction is generated. In short, if something is wrong with the new transaction, you can just boot back into an older transaction from when things worked fine.

Now, this isn't implemented using bundles or filesystem-specific functionality; internally it's just intelligent use of hardlinks and deduplication policies, and we have our own container format with zstd-based payload compression. Our strongly typed, deduplicating binary format is what powers moss.
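
As a toy illustration of the hardlink-plus-deduplication idea (this is not moss's actual code or on-disk layout, and the weak hash below is purely for brevity), imagine a content-addressed pool that every transaction hardlinks into:

// Toy sketch: files are stored once in a content-addressed pool, and each
// filesystem transaction is just a tree of hardlinks into that pool.
use std::fs;
use std::io;
use std::path::Path;

// Hash file contents to pick a pool name. A real implementation would use
// a strong content hash; DefaultHasher merely keeps this sketch std-only.
fn content_key(data: &[u8]) -> String {
    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};
    let mut h = DefaultHasher::new();
    data.hash(&mut h);
    format!("{:016x}", h.finish())
}

// Deduplicate `src` into `pool`, then hardlink it at `dest`.
fn install_file(pool: &Path, src: &Path, dest: &Path) -> io::Result<()> {
    let data = fs::read(src)?;
    let pooled = pool.join(content_key(&data));
    if !pooled.exists() {
        fs::write(&pooled, &data)?; // first copy lands in the pool
    }
    fs::hard_link(&pooled, dest) // every transaction links, never copies
}

fn main() -> io::Result<()> {
    fs::create_dir_all("pool")?;
    fs::create_dir_all("tx1")?;
    fs::write("hello.txt", "hello")?;
    install_file(Path::new("pool"), Path::new("hello.txt"), Path::new("tx1/hello.txt"))
}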

Behind the scenes we also use some other cool technology, such as LMDB for super reliable and speedy database access. The net result is a next generation package management solution that offers all the benefits of traditional package managers (i.e. granularity and composition) with new world features, like atomic updates, deduplication, and repository snapshots.

boulder

It's one thing to manage and install packages; it's another thing entirely to build them. boulder builds conceptually on prior art such as pisi and the package.yml format used by ypkg. It is designed with automation and ease of integration in mind, i.e. less time spent focusing on packaging and more time on actually getting the thing building and installing correctly.

Boulder supports "macros", as seen in the RPM and ypkg world, to ensure consistent builds and integration. Additionally, it automatically splits packages into the appropriate subpackages, and automatically scans for binary, pkgconfig, perl and other dependencies during source analysis and build time. The end result is a set of .stone binary packages and a build manifest, which we use to flesh out our source package index.
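
To give a flavour of the automatic splitting (boulder's real rules are more involved; the pattern table below is invented), path-based subpackage assignment boils down to something like this:

// Invented sketch of automatic subpackage splitting: install paths are
// matched against an ordered pattern table, and the first match wins.
fn subpackage_for(path: &str) -> &'static str {
    let rules: &[(&str, &str)] = &[
        ("/usr/include/", "devel"),
        ("/usr/lib/pkgconfig/", "devel"),
        ("/usr/share/man/", "docs"),
        ("/usr/share/doc/", "docs"),
    ];
    rules
        .iter()
        .find(|(prefix, _)| path.starts_with(prefix))
        .map(|(_, sub)| *sub)
        .unwrap_or("main") // everything else lands in the main package
}

fn main() {
    for p in [
        "/usr/bin/nano",
        "/usr/include/zlib.h",
        "/usr/lib/pkgconfig/zlib.pc",
        "/usr/share/man/man1/nano.1",
    ] {
        println!("{p} -> {}", subpackage_for(p));
    }
}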

Major moss improvements

We've spent considerable time reworking moss, our package manager. It now features a fresher (terminal) user interface with progress bars, and has been rewritten to use the moss-db module encapsulating LMDB.

nspawn

It's now also possible to manipulate the binary collections (software repositories) used by moss. Note that we're going to rename "remote" to "collection" for consistency.

At the time of writing:

$ mkdir destdir
$ sudo moss remote add -D destdir protosnek https://dev.serpentos.com/protosnek/x86_64/stone.index
$ sudo moss install -D destdir bash dash dbus dbus-broker systemd coreutils util-linux which moss nano
$ sudo systemd-nspawn -b -D destdir

This will be simplified once we introduce virtual packages (Coming Soon ™)

Local Collections

Boulder can now be instructed to utilise a local collection of stone packages, simplifying the development of large stack items.

sudo boulder bi stone.yml -p local-x86_64

Packages should be moved to /var/cache/boulder/collections/local-x86_64 and the index can be updated by running:

sudo moss idx /var/cache/boulder/collections/local-x86_64

Self Hosting

Serpent OS is now officially self-hosting. Using our own packages, we're able to construct a root filesystem; then, within that rootfs container, we can use our own build tooling (boulder) to produce new builds of our packages in a nested container.

The protosnek collection has been updated to include the newest versions of moss and boulder.

self hosting

Booting

As a fun experiment, we wanted to see how far along things are. Using a throwaway kernel + initrd build, we were able to get Serpent OS booting using virtualisation (qemu).

booting

ISO When?

Right now everyone is working in the snekpit organisation to get our base packaging in order. I'm looking to freeze protosnek, our bootstrap collection, by tomorrow evening at the latest.

We now support layered, priority-based collections (repositories) and dependency solving across collection boundaries, allowing us to build our new main collection with protosnek acting as a bootstrap seed.
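
Conceptually, the layered lookup is straightforward: the highest-priority collection carrying a package wins, with the bootstrap seed sitting underneath as a fallback. The sketch below is invented for illustration (names and priorities are made up; it is not moss's actual resolver):

// Invented sketch of priority-based lookup across layered collections.
struct Collection {
    name: &'static str,
    priority: u32,
    packages: Vec<&'static str>,
}

// The highest-priority collection providing `pkg` wins.
fn resolve<'a>(layers: &'a [Collection], pkg: &str) -> Option<&'a Collection> {
    layers
        .iter()
        .filter(|c| c.packages.iter().any(|p| *p == pkg))
        .max_by_key(|c| c.priority)
}

fn main() {
    let layers = vec![
        Collection { name: "protosnek", priority: 0, packages: vec!["gcc", "glibc"] },
        Collection { name: "main", priority: 10, packages: vec!["glibc", "nano"] },
    ];

    // glibc exists in both layers; the higher-priority collection wins.
    for pkg in ["gcc", "glibc", "nano"] {
        match resolve(&layers, pkg) {
            Some(c) => println!("{pkg} -> {}", c.name),
            None => println!("{pkg} -> not found"),
        }
    }
}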

Throughout this week, I'll focus on getting Avalanche, Summit and Vessel into shape for a proof of concept, so that we can enjoy automated builds of packages landing in the yet-to-be-launched volatile collection.

From there, we're going to iterate and improve packaging, fixing bugs and landing features as we discover issues. Initially we'll look to integrate triggers in a stateless-friendly fashion (our packages can only ship /usr by design); after that will come boot management.

An early target will be Qemu support via a stripped linux-kvm package to accelerate the bring-up, and we encourage everyone to join in the testing. We're self-hosting, we know how to boot, and now we're able to bring the awesome.

I cannot stress enough how important your support is to the project. Without it, I'm unable to work full time on the project. Please consider supporting my development work via GitHub Sponsors.

I'm so broke they've started naming black holes after my IBAN.

Thank you!

You can discuss this blog post over on our forums.

Infrastructure Update

Since the last post, I've pivoted to full-time work on Serpent OS, which is made all the more possible thanks to everyone supporting us via OpenCollective <3.

We've been working towards establishing an online infrastructure to support the automation of package builds, while revisiting some core components.

moss-db

During the development of the Serpent OS tooling we've been exploring the possibilities of D, picking up new practices and refining our approach as we go. Naturally, some of our older modules are somewhat... smelly. Most noticeable is our moss-db module, which was initially intended as a lightweight wrapper around RocksDB.

In practice that required an encapsulation API written in D around the C API, plus our own wrapping on top of that. Naturally, it resulted in a very allocation-heavy implementation that just didn't sit right with us and, due to the absurd complexity of RocksDB, it was still missing quite a few features.

Enter LMDB

We're now using the Lightning Memory-Mapped Database (LMDB) as the driver implementation for moss-db. In short, we get rapid reads, ACID transactions, bulk inserts, you name it. Our implementation takes advantage of multiple database indexes (MDB_dbi) in LMDB to partition the database into internal components, so that we can provide "buckets", or collections. These internal DBs handle the bucket mapping, permitting a key-compaction strategy: iteration over top-level buckets, and over the key-value pairs within a bucket.
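
moss-db itself is written in D, but the bucket idea translates directly to other LMDB bindings. As a purely illustrative sketch (assuming the Rust lmdb crate, version 0.8; this is not our implementation), each named database acts as one bucket:

// Illustrative only: one named LMDB database (MDB_dbi) per "bucket",
// partitioning keys without any key-prefixing tricks.
use lmdb::{DatabaseFlags, Environment, Transaction, WriteFlags};
use std::path::Path;

fn main() -> Result<(), lmdb::Error> {
    // LMDB expects the environment directory to exist already.
    std::fs::create_dir_all("./demo-db").expect("create db dir");

    let env = Environment::new()
        .set_max_dbs(8) // room for several buckets
        .open(Path::new("./demo-db"))?;

    // One named database per bucket.
    let letters = env.create_db(Some("letters"), DatabaseFlags::empty())?;

    // ACID write transaction.
    let mut txn = env.begin_rw_txn()?;
    txn.put(letters, &"a", &"1", WriteFlags::empty())?;
    txn.commit()?;

    // Read-only view.
    let view = env.begin_ro_txn()?;
    let value = view.get(letters, &"a")?;
    println!("a = {}", String::from_utf8_lossy(value));
    Ok(())
}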

Hat tip, boltdb

The majority of the API was designed with the boltdb API in mind. Additionally, it was built with -preview=dip1000 and -preview=in enabled, ensuring safely scoped memory use and no room for memory lifetime issues. While we'd prefer the use of generics, the API is built with immutable(ubyte[]) as the internal key and value type.

Custom types can simply implement mossEncode or mossDecode to be instantly serialisable into the database as keys, values or bucket identifiers.

Example API usage:

Database db;
/* setup the DB with lmdb:// URI */

/* Write transaction */
auto err = db.update((scope tx) @safe
{
    auto bucket = tx.bucket("letters");
    return tx.set(bucket, "a", 1);
});

/* do something with the error */

err = db.view((in tx) @safe
{
    foreach (name, bucket ; tx.buckets!int)
    {
        foreach (key, value ; tx.iterator!(string,string)(bucket))
        {
            /* do something with the key value pairs, decoded as strings */
        }
    }

    /* WILL NOT COMPILE. tx is const scope ref :) */
    tx.removeBucket("numbers");

    return NoDatabaseError;
});

Next for moss

Moss will be ported to the new DB API and we'll gather some performance metrics, while implementing features like expired state garbage collection (disk cleanup), searching for names/descriptions, etc.

Avalanche

Early version of avalanche, in development

Avalanche is a core component of our upcoming infrastructure, providing the service for running builds on a local node, and a controller to coordinate a group of builders.

Summit will be the publicly accessible project dashboard, and will be responsible for coordinating incoming builds to Avalanche controllers and repositories. Developers will submit builds to Summit and have them dispatched correctly.

So far we have the core service processes in place for the controller and node, and now we're working on persistence and the handshake. TL;DR: fancy use of moss-db, and JSON Web Tokens over mandated SSL. This means our build infra will be scalable from day 1, allowing multiple builders to come online very early on.
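
For a flavour of that token flow (the services themselves are D; this sketch assumes the Rust jsonwebtoken and serde crates, and every name and value in it is invented), the controller signs a token that a node then presents, and the service verifies, on each request:

// Invented sketch of a JWT handshake (requires jsonwebtoken and serde
// with its "derive" feature). Real deployments pin keys and run over TLS.
use jsonwebtoken::{decode, encode, Algorithm, DecodingKey, EncodingKey, Header, Validation};
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct Claims {
    sub: String, // builder identity
    exp: usize,  // expiry timestamp (seconds since epoch), validated on decode
}

fn main() -> Result<(), jsonwebtoken::errors::Error> {
    let secret = b"shared-secret"; // stand-in for real key material
    let claims = Claims { sub: "builder-1".into(), exp: 4_000_000_000 };

    // Controller side: issue a signed token to an enrolled builder.
    let token = encode(&Header::default(), &claims, &EncodingKey::from_secret(secret))?;

    // Service side: verify the token presented with each request.
    let data = decode::<Claims>(&token, &DecodingKey::from_secret(secret),
                                &Validation::new(Algorithm::HS256))?;
    println!("authenticated: {}", data.claims.sub);
    Ok(())
}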

Timescale

We're planning to get an early version of our infrastructure up and running within the next 2 weeks, and get builds flowing =)