Building container images using nix and github actions

Seán Murphy
7 min read · Jan 24, 2024

I spent a little time investigating how to use nix to build container images containing python applications or python environments; I wanted to put this build process into a github action.

I found a few posts which partially cover this:

  • Mitchell Hashimoto has a post on creating a container image for a simple flask application using poetry2nix and the nix dockerTools module;
  • fasterthanlime has a post which goes into extensive detail on building a container image to support a service written in rust;
  • thewagner has a post on building container images to serve static content via python, and the accompanying github repo contains a github action which performs a container image build whenever there is a push to the repo.

This post is strongly influenced by the above and mostly captures a few observations I made when trying to automate container creation of python applications and environments using nix.

Check out this repo for more info.

What did I want to do?

I wanted to understand how nix tooling can be integrated with github actions automation to support container build processes, particularly for python applications and environments. I was especially interested in using self-hosted github runners, as I have more control over these and was curious to understand how they can be managed. I wanted to use my local (NixOS) machine as a runner.

Adding the self-hosted runner on NixOS

Although NixOS is not listed in the set of supported architectures and operating systems for github runners, getting a github runner working on NixOS is very straightforward.

I chose to add the runner to a single private repo — this meant there was no possibility that anyone else could schedule work to this runner. (Aside: see this excellent post for a discussion of the security risks associated with self-hosted github runners.) I added a runner via the repo's Settings -> Actions -> Runners pane — I noted the token for the runner and put it into a file on my local filesystem.

With that file created, I added the following to my configuration.nix.

    services.github-runners = {
      testrunner = {
        enable = true;
        name = "test-runner";
        # token file is somewhere on local machine - in my case, it's not currently managed by nix
        tokenFile = <TOKENFILE>;
        url = <GITREPO>;
      };
    };

After rebuilding NixOS, the runner was visible on github, and its status could be checked on the local machine using systemctl status github-runner-<RUNNER-NAME>.

Using the self-hosted runner

Once the runner was visible, it could pick up work: a workflow job simply had to be marked runs-on: self-hosted for github to schedule it to the new runner.
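
For example, a minimal workflow targeting this runner might look like the following (the workflow, job and step names here are illustrative):

    name: test-build

    on: push

    jobs:
      build:
        # run this job on the self-hosted runner rather than a github-hosted VM
        runs-on: self-hosted
        steps:
          - uses: actions/checkout@v4
          # no nix installation step is needed: the NixOS host already provides nix
          - name: Build with nix
            run: nix build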

To test out the runner, I simply copied the content from thewagner's repo — I immediately encountered issues as I was using a NixOS runner. That workflow uses a standard ubuntu image and adds nix via the Determinate Systems Nix Installer Action. However, this was not necessary in my context as nix was already available. (Aside: the nix installer action tries to spin up an environment using KVM to provide greater isolation between the host and the action — this did not work out of the box in my case and it fell back to running the job within the github runner environment, which was fine for my purposes — I never looked into this in more detail).

Once I removed the nix installation step, the action completed successfully. This worked in my context, but it is clearly an exception — assuming that nix needs to be installed is the more sensible default.

Building a python app and environment

There are many ways to build python environments in nix; I just considered three: poetry2nix, dream2nix with PDM, and pyproject.nix.

With poetry being a widely used tool for managing such environments, it was natural to consider it; however, it is not entirely compliant with PEP 621, the standard governing pyproject.toml files, and it is reputedly quite slow. For these reasons, I didn't consider it further.

PDM is a newer tool which is PEP 621 compliant and boasts a fast resolver. Further, PDM is somewhat supported by the rather interesting dream2nix project. For these reasons, I looked into this a little.

Finally, pyproject.nix is a relatively new project which provides tools for parsing a pyproject.toml file and creating the necessary environment in nix; interestingly, it provides logic to support both poetry-compatible and PEP 621-compatible pyproject.toml files, and it also plays nicely with PDM-compatible pyproject.toml files.

The dream2nix/PDM solution was not so straightforward — it required specifying both the pyproject.toml file and the pdm.lock file. I used pdm add to add new dependencies — I ended up doing this via a virtualenv, which felt rather clunky. Further, the build mechanism seemed somewhat magical, with dream2nix being incorporated via specialArgs within a module evaluation; also, the output of this module evaluation was not entirely clear (it's in eval.config.public). With this, I was able to build a container image which contained a simple test application, but it was not obvious how to run the python environment within the container. For sure much of this is down to my lack of understanding of dream2nix and probably the fact that PDM support is still a work in progress.

Unable to solve those issues, I had a look at pyproject.nix. It's worth noting that dream2nix is also working on support for pyproject.nix, so perhaps the same result will be possible via dream2nix in the future.

Working with pyproject.nix was very straightforward: as the name implies, it supports building a python environment driven by the pyproject.toml file. The logic is very simple: read in the pyproject.toml file, 'render' it — which in a nix context means generating the set of nix packages needed to create the derivation — and then use the standard nix buildPythonPackage, which generates a derivation for the python project. Note that the python package can be either a (runnable) python application or a python library — in the former case a bin directory is created containing the application; in the latter case no bin directory is created and the content is stored in a lib folder. The pyproject.toml file indicates that the project is an application by specifying [project.scripts].
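
A minimal sketch of how this fits together, following the pyproject.nix documentation (the pythonPackage and pythonEnv names here are my own, and the input URL may have moved since this was written):

    {
      inputs = {
        nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
        pyproject-nix.url = "github:nix-community/pyproject.nix";
      };

      outputs = { nixpkgs, pyproject-nix, ... }:
        let
          pkgs = nixpkgs.legacyPackages.x86_64-linux;
          python = pkgs.python3;

          # parse the pyproject.toml file in the repo root
          project = pyproject-nix.lib.project.loadPyproject { projectRoot = ./.; };

          # 'render' the project into buildPythonPackage arguments and build
          # the package itself (an application gets a bin directory)
          pythonPackage = python.pkgs.buildPythonPackage (
            project.renderers.buildPythonPackage { inherit python; }
          );

          # alternatively, render just the dependencies as a python environment
          pythonEnv = python.withPackages (project.renderers.withPackages { inherit python; });
        in
        {
          packages.x86_64-linux.default = pythonPackage;
        };
    }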

The above was all tested with nix build and nix run to run the application. On to building the container image…

Building the container image

Building the container image was quite straightforward: all that was necessary was to use buildLayeredImage from the dockerTools toolset:

    buildApplicationImage =
      let
        # we must somehow know something about the application here...
        port = "5000";
      in
      pkgs.dockerTools.buildLayeredImage {
        name = "nix-build-application-image";
        tag = revision;
        contents = [ pythonEnv pkgs.bash pkgs.findutils pkgs.uutils-coreutils-noprefix ];
        config = {
          Cmd = [
            # must know that the name of the application is app, as defined in the pyproject.toml file
            "${pythonPackage}/bin/app"
          ];
          ExposedPorts = {
            "${port}/tcp" = { };
          };
        };
      };

The resulting container image is created as a gzip'd tarball. A cursory look inside shows that there are quite a large number of layers — 104 in my case: all dependent python packages are determined, with each one resulting in a layer in the docker image. Each of these layers contains the nix store content for the corresponding python package, and there is an additional layer which glues everything together so the python package is installed. There are also layers for the other supporting packages which get installed (eg bash, coreutils etc).

Having such a large number of layers is certainly an issue — in some cases it can render the image unusable (container runtimes and registries typically cap the layer count), and it can also result in significant delays when launching containers. For sure it makes sense to aggregate the packages/layers so that fewer layers are generated and container launch times are reduced; the dependency grouping mechanisms within pyproject.toml files can probably support this.
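
One simple lever in this direction is buildLayeredImage's maxLayers argument, which caps the number of generated layers: roughly speaking, the most frequently referenced store paths get their own layers and the remainder are merged into a final layer. A sketch:

    pkgs.dockerTools.buildLayeredImage {
      name = "nix-build-application-image";
      tag = revision;
      # cap the number of generated layers (the default is 100); remaining
      # store paths are merged into the final layer
      maxLayers = 20;
      contents = [ pythonEnv pkgs.bash ];
      config.Cmd = [ "${pythonPackage}/bin/app" ];
    }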

Local testing of the container image was done using docker load to create the image within docker, and docker run, docker stop, docker rm etc to run, stop and remove the container. See the fasterthanlime post for more detail on how this works.
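
The round trip looks roughly like this (image name and tag are illustrative):

    # load the gzip'd tarball produced by nix build into docker
    docker load < result

    # run the container, mapping the exposed port
    docker run -d --name test-app -p 5000:5000 nix-build-application-image:<TAG>

    # stop and remove it when done
    docker stop test-app
    docker rm test-app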

Putting it into a github action

Having built the image locally, the next step was to put the build into a github workflow. All that was necessary here was to build the image using nix build and push it to an OCI image registry using skopeo. The workflow which does this is here.
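
The gist of it is something like the following sketch; the registry, image name and flake attribute are placeholders here, and authentication uses the repository's GITHUB_TOKEN:

    jobs:
      build-and-push:
        runs-on: self-hosted
        steps:
          - uses: actions/checkout@v4
          - name: Build container image
            run: nix build .#buildApplicationImage
          - name: Push image with skopeo
            run: |
              # copy the docker-archive tarball produced by nix to the registry
              nix run nixpkgs#skopeo -- --insecure-policy copy \
                --dest-creds "${{ github.actor }}:${{ secrets.GITHUB_TOKEN }}" \
                docker-archive:./result \
                docker://ghcr.io/<OWNER>/nix-build-application-image:latest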

Final comments

Although there has been lots of work on python solutions for nix, there is some tension between the nix package management paradigm, which focuses strongly on reproducibility, and the multiple different python package management paradigms, which often result in different python environments. For sure, there is no single solution which fits all needs, and it's not even clear that nix and python are natural bedfellows given their different base assumptions. However, it's worth exploring what's possible here to support reliable, reproducible container image builds for different contexts.
