Nix in Docker – Best of Both Worlds

I'm using Nix mostly on a mac as a development tool, and every now and then I get blocked by some packages not working on OS X.

For those situations I've been working on my own Nix image for Docker: a Docker image so minimal that it contains only the files from the Nix installer, yet it can re-use a persistent shared Nix installation between Docker containers to keep itself fast, convenient and lean.

My build recipe is now available at:

Features:

  • A single Docker image, which can be used to run anything from nixpkgs.
  • You can nix-shell -p to get a Docker-isolated development shell with all your requirements installed from nixpkgs.
  • You can -v /my/path:/var/nixpkgs to use your own nixpkgs clone.
  • You can -v /my/path:/etc/nix to use your own nix configuration.
  • With the shared data container:
    • Sequential and simultaneous containers can share the same Nix installation.
    • You can nix-env -i to add new commands (or manage custom profiles).
    • You can nix-collect-garbage -d to clean up the data container.
  • You can use it as a base image and add new stuff with nix-env -i.

Bootstrapping

Build a Docker image named nix using the provided Docker based build chain:

$ git clone https://gist.github.com/datakurre/a5d95794ce73c28f6d2f
$ cd a5d95794ce73c28f6d2f
$ make

Create a Docker data container named nix to use a shared persistent /nix for all your Nix containers:

$ docker create --name nix -v /nix nix sh

To learn more about where the Nix data gets stored with this setup, please read the Docker documentation about managing data in containers.
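
For a quick look, you can also inspect the created data container directly; the output shows, among other details, where its /nix volume is stored on the Docker host:

$ docker inspect nix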

Examples of use

Running a Python interpreter with some packages:

$ docker run --rm --volumes-from=nix -ti nix \
         nix-shell -p python35Packages.pyramid --run python3

Running a Python Jupyter notebook with mounted context:

$ mkdir .jupyter
$ echo "c.NotebookApp.ip = '*'" > .jupyter/jupyter_notebook_config.py
$ docker run --rm --volumes-from=nix -ti \
         -v $PWD:/mnt -w /mnt -e HOME=/mnt -p 8888 nix \
         nix-shell -p python35Packages.notebook --run "jupyter notebook"

Running a Haskell Jupyter notebook with mounted context:

$ mkdir .jupyter
$ echo "c.NotebookApp.ip = '*'" > .jupyter/jupyter_notebook_config.py
$ docker run --rm --volumes-from=nix -ti \
         -v $PWD:/mnt -w /mnt -e HOME=/mnt -p 8888 nix \
         nix-shell -p ihaskell --run "ihaskell-notebook"

Running a development shell for a default.nix in a mounted context:
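
For example, following the pattern of the examples above (assuming a default.nix in the current working directory):

$ docker run --rm --volumes-from=nix -ti \
         -v $PWD:/mnt -w /mnt nix \
         nix-shell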

Adding --help for nix-commands:

$ docker run --rm --volumes-from=nix nix nix-env -i man
$ docker run --rm --volumes-from=nix nix nix-env --help

Purging nix-store cache:

$ docker run --rm --volumes-from=nix nix nix-collect-garbage -d
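
Using your own nixpkgs clone (a sketch, assuming a checkout at $HOME/nixpkgs and that the image resolves <nixpkgs> from the mounted /var/nixpkgs):

$ docker run --rm --volumes-from=nix -ti \
         -v $HOME/nixpkgs:/var/nixpkgs nix \
         nix-shell -p hello --run hello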

Using the image as a base for a new Docker image, with ./Dockerfile:

FROM nix
RUN nix-env -i python
ENTRYPOINT ["/usr/local/bin/python"]

$ docker build -t python --rm=true --force-rm=true --no-cache=true .
$ docker run --rm -ti python

Creating Jupyter Docker-containers with Nix

Jupyter is the new name and brand for an awesome interactive data science programming scratchpad previously known as IPython Notebook. While there are plenty of pre-built Docker images available for Jupyter, for customized images I'm tempted to use Nix.

Here I describe my approach for the following gists:

Note: Because these Jupyter notebook configurations are built with Nix, their configuration is immutable and it is not possible for the user to install additional packages directly from a notebook.

[Image: jupyter.png – http://4.bp.blogspot.com/-qCDD1d_bCRw/VkXVCBC-P0I/AAAAAAAAAqs/9HiAnQ5TFZo/s1600/jupyter.png]

Usage

With nix-shell (unless you are on a mac):

$ git clone https://gist.github.com/datakurre/49b6fbc4bafdef029183
$ cd 49b6fbc4bafdef029183
$ nix-shell --run "jupyter notebook"

With Docker (works also on a mac):

$ git clone https://gist.github.com/datakurre/49b6fbc4bafdef029183
$ cd 49b6fbc4bafdef029183
$ make run

Now, if you are on a Mac, you need to figure out the IP and port where the notebook is running with:

$ docker-machine ip default
$ docker ps

Explanation

First of all, both of my Jupyter gists are based on my recipe for building Docker containers with Nix.

It builds a Docker image with a Nix installation for building your Nix expressions, creates a Nix store data container to store and share built Nix expressions between builds, and creates a Docker-ready tarball from a built Nix closure.

Here are a few picks from the expressions:

with import <nixpkgs> {};
let dependencies = rec {
  # ...
  jupyter = python35Packages.notebook.override {
    postInstall = with python35Packages; ''
      mkdir -p $out/bin
      ln -s ${jupyter_core}/bin/jupyter $out/bin
      wrapProgram $out/bin/jupyter \
        --prefix PYTHONPATH : "${notebook}/lib/python3.5/site-packages:$PYTHONPATH" \
        --prefix PATH : "${notebook}/bin:$PATH"
    '';
  };
# ...
}

To be able to run the Jupyter notebook, I want to use the normal jupyter command, but the problem is that the base command is defined in the Python package jupyter_core, which does not include the notebook package containing the actual Jupyter notebook program and its jupyter notebook subcommand. The Nix solution is to install the notebook package, but enhance its installation with a custom postInstall, which links the command from jupyter_core and wraps the command to be aware of notebook and its dependencies.

with import <nixpkgs> {};
let dependencies = rec {
  builder = builtins.toFile "builder.sh" ''
    source $stdenv/setup
    mkdir -p $out
    cat > $out/kernel.json << EOF
    $json
    EOF
  '';
  # ...
  python34 = pkgs.python34.buildEnv.override {
    extraLibs = with python34Packages; [
      # Kernel
      ipykernel
      ipywidgets
      # Custom packages
      lightning
      # ...
    ];
  };
  python34_kernel = stdenv.mkDerivation rec {
    name = "python34";
    buildInputs = [ python34 ];
    json = builtins.toJSON {
      argv = [ "${python34}/bin/python3.4"
               "-m" "ipykernel" "-f" "{connection_file}" ];
      display_name = "Python 3.4";
      language = "python";
      env = { PYTHONPATH = ""; };
    };
    inherit builder;
  };
# ...
}

Next, I want to be able to define and configure as many Jupyter kernels as I need in my notebook. This pattern first defines the kernel environment. In the expression above, that is Python 3.4 with the mandatory IPython packages and then any number of Python packages I want to provide for the notebook.

The second part of the pattern just defines an IPython kernel configuration (usually created using jupyter kernelspec in mutable Jupyter installations) so that the kernel uses our previously defined Python 3.4 environment. The builder for actually creating the configuration file is defined at an upper level of the expression so that it can easily be re-used with inherit builder;.

With this approach, it is possible to have as many different and differently configured kernels as you want. It's possible to have both Python 3.4 and 3.5, or many different configurations for the same version. For example, when there's a major upgrade in some package, it's possible to have one kernel with the old version and another with the new version.

The example gists also include similarly configurable kernel configuration for R.

with import <nixpkgs> {};
let dependencies = rec {
  # ...
  jupyter_config_dir = stdenv.mkDerivation {
    name = "jupyter";
    buildInputs = [
      python34_kernel
      R_kernel
    ];
    builder = writeText "builder.sh" ''
      source $stdenv/setup
      mkdir -p $out/etc/jupyter/kernels $out/etc/jupyter/migrated
      ln -s ${python34_kernel} $out/etc/jupyter/kernels/${python34_kernel.name}
      ln -s ${R_kernel} $out/etc/jupyter/kernels/${R_kernel.name}
      cat > $out/etc/jupyter/jupyter_notebook_config.py << EOF
      import os
      c.KernelSpecManager.whitelist = {
        '${python34_kernel.name}',
        '${R_kernel.name}'
      }
      c.NotebookApp.ip = os.environ.get('JUPYTER_NOTEBOOK_IP', 'localhost')
      EOF
    '';
  };
  # ...
};

The next most important part is the expression to compose all the defined kernels into a complete and immutable Jupyter configuration directory. Whitelisting kernels in the configuration is required to hide the Python environment running the Jupyter notebook itself (because it only has the notebook dependencies and is missing all the interesting libraries). The line with c.NotebookApp.ip allows Docker to configure the notebook to accept connections from outside the container.

with import <nixpkgs> {};
let dependencies = rec {
  # ...
};
in with dependencies;
stdenv.mkDerivation rec {
  name = "jupyter";
  env = buildEnv { name = name; paths = buildInputs; };
  builder = builtins.toFile "builder.sh" ''
    source $stdenv/setup; ln -s $env $out
  '';
  buildInputs = [
    jupyter
    jupyter_config_dir
  ] ++ stdenv.lib.optionals stdenv.isLinux [ bash fontconfig tini ];
  shellHook = ''
    mkdir -p $PWD/.jupyter
    export JUPYTER_CONFIG_DIR=${jupyter_config_dir}/etc/jupyter
    export JUPYTER_PATH=${jupyter_config_dir}/etc/jupyter
    export JUPYTER_DATA_DIR=$PWD/.jupyter
    export JUPYTER_RUNTIME_DIR=$PWD/.jupyter
  '';
}

Finally, we define a buildable environment installation, which mainly includes the jupyter command and its configuration. On Linux, a few extra dependencies are added to make Jupyter run in a Docker container. For the nix-shell command, the expression configures Jupyter to look up its configuration from the Nix-built configuration directory and to store volatile runtime files under the current working directory.

Environment variables are also the way to configure Jupyter to run properly in a Docker container. My Dockerfile configures Jupyter to read its configuration from the directory created by my recipe for building Docker containers with Nix, and to store volatile runtime files under the host directory mounted as /mnt. In my example Makefile, that's configured to be the current working directory, which is also shown as the notebook home directory.

FROM scratch
ADD default.nix.tar.gz /
ENV FONTCONFIG_FILE="/etc/fonts/fonts.conf" \
    JUPYTER_NOTEBOOK_IP="*" \
    JUPYTER_CONFIG_DIR="/etc/jupyter" \
    JUPYTER_PATH="/etc/jupyter" \
    JUPYTER_DATA_DIR="/mnt/.jupyter" \
    JUPYTER_RUNTIME_DIR="/mnt/.jupyter"
EXPOSE 8888
ENTRYPOINT ["/bin/tini", "--", "/bin/jupyter"]

Note: Running a Jupyter notebook in a Docker container may require tini (or a supervisor) to allow Jupyter to spawn all the kernel processes it needs within the container.

Nix for Python developers

About a week ago, I had the pleasure of giving a presentation about my Nix experiences at PyCon Finland 2015. This is an executive afterthought summary of that presentation, focusing only on how to use Nix to build development environments. With a few cool additional examples.

Installing Nix

The easiest way to install Nix for development usage is the default single user installation:

$ sudo mkdir /nix
$ bash <(curl https://nixos.org/nix/install)

The default installation of Nix would install and build everything under that /nix, which makes it easy to uninstall Nix at any point by simply deleting that directory. It also comes configured for the latest nixpkgs release. (Nix is just the generic build system and package manager, nixpkgs is the recommended community managed package collection for it.)

After a successful installation, available packages can be searched with:

$ nix-env -qaP | grep -i needle

Alternative installation methods would be to follow the installer script manually, build Nix from source, or ask your Linux distribution to package it for you. Read more about all the options and basic Nix usage in the Nix Package Manager Guide. Building Nix from source would allow you to choose where Nix stores its builds (some place other than /nix), but that would also prevent it from using the community binary caches (by default Nix first tries to download builds from the community binary cache and only then builds them locally).

Next, you want to create a Nix configuration file /etc/nix/nix.conf with a couple of special configuration flags:

gc-keep-outputs = true
build-use-chroot = true

Option gc-keep-outputs = true configures the Nix garbage collector to be more developer friendly by not collecting build-time-only dependencies so easily. Option build-use-chroot = true triggers isolated builds to ensure that nothing from your existing system can affect Nix builds.

At any point of Nix use, you could clean up /nix and possibly free some disk space by simply running its garbage collector:

$ nix-collect-garbage -d

Never ever manually remove files from /nix unless you are completely uninstalling it.

Nix officially supports Linux and OS X. Yet, if you are using OS X, you should read the special instructions for OS X from the wiki. The OS X support has been in heavy development lately and not all available packages build on OS X yet. In addition to reading the wiki page, you want to add the following lines into /etc/nix/nix.conf to ensure that Nix uses all available binary builds also on OS X:

binary-caches = https://cache.nixos.org https://hydra.nixos.org
use-binary-caches = true

For all OS X related Nix issues, you can get help from the ##nix-darwin channel on the Freenode IRC network.

Community members have told me they have also used Nix on Cygwin, FreeBSD, OpenBSD, NetBSD, OpenSolaris and SmartOS. Yet, on other systems, you would need to learn more about how nixpkgs works to get one of its standard build environments working on your system.

Using Nix

Finally, let the fun begin:

Run anything with a one-liner

nix-shell can be used to run anything available in nixpkgs simply with:

$ nix-shell -p package --run "command"

For example:

$ nix-shell -p python35 --run "python3"

Or:

$ nix-shell -p redis --run "redis-server"
$ nix-shell -p nodejs --run "node"
$ nix-shell -p graphviz --run "dot -V"
$ nix-shell -p texLive --run "pdflatex --help"

Or with any number of packages:

$ nix-shell -p redis -p python35Packages.redis --run "python3"

Nix would simply either download or build all the defined packages, build a combined environment with all of them, and then execute the given command in that environment. Everything would be installed under /nix and cleaned up by the garbage collector with nix-collect-garbage -d.

Get into a shell with anything with a one-liner

Calling nix-shell without --run would drop you into an interactive shell with the required dependencies:

$ nix-shell -p texLive -p gnumake -p redis -p python35

Entering exit would exit the shell as usual.

Additionally, adding --pure to the nix-shell arguments would limit PATH and other environment variables to only include the listed packages while inside the shell.
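
For example, the following drops you into a shell where essentially only the listed packages are available on PATH:

$ nix-shell --pure -p redis -p python35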

Define script dependencies in a hashbang

nix-shell can also be used in a shell script hashbang line to execute the script in an environment with any required dependencies:

#! /usr/bin/env nix-shell
#! nix-shell -i python3 -p python35Packages.tornado -p nodejs

The first line #! /usr/bin/env nix-shell is a standard hashbang line, but with nix-shell it can be followed by any number of #! nix-shell lines defining the required dependencies using nix-shell command line arguments.

The most common arguments for nix-shell in hashbang use are:

  • -p to define packages available in the execution environment
  • -i to define the interpreter command (from listed packages) used to actually run the script.
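
For example, a complete script could look like the following (a minimal sketch; figlet is just an arbitrary package picked for demonstration):

#! /usr/bin/env nix-shell
#! nix-shell -i bash -p figlet
figlet "Hello from nix-shell"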

More examples are available in the Nix manual.

Build complex development environments with Nix expressions

When one-liners are not enough, it's possible to define a more complete development environment using the functional Nix expression language. Both nix-shell and nix-build can take a file with such an expression as their first optional positional argument. Also, both look for a file named ./default.nix by default.

You could use the following example as the base for your ./default.nix:

with import <nixpkgs> {};
stdenv.mkDerivation rec {
  name = "env";

  # Mandatory boilerplate for buildable env
  env = buildEnv { name = name; paths = buildInputs; };
  builder = builtins.toFile "builder.sh" ''
    source $stdenv/setup; ln -s $env $out
  '';

  # Customizable development requirements
  buildInputs = [
    # Add packages from nix-env -qaP | grep -i needle queries
    redis

    # With Python configuration requiring a special wrapper
    (python35.buildEnv.override {
      ignoreCollisions = true;
      extraLibs = with python35Packages; [
        # Add pythonPackages without the prefix
        redis
        tornado
      ];
    })
  ];

  # Customizable development shell setup with at least SSL certs set
  shellHook = ''
    export SSL_CERT_FILE=${cacert}/etc/ssl/certs/ca-bundle.crt
  '';
}

Running

$ nix-build

would now create a symlinked directory ./result with ./result/bin containing both ./result/bin/redis and ./result/bin/python3, with redis and tornado available as importable packages. That build is comparable to a familiar Python virtualenv, but for any dependencies, not just Python packages.

The resulting Python interpreter ./result/bin/python3 could also be used with an IDE, e.g. configured as a project interpreter for PyCharm.

The resulting directory name can be changed from result into something else with the argument -o myname. The directory also works as a so-called garbage collection root, which prevents the Nix garbage collection from clearing it until the directory (symlink) has been renamed, moved or deleted.
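
For example (with a hypothetical name myenv):

$ nix-build -o myenv
$ ls myenv/bin
$ rm myenv  # removes the garbage collection root; the next nix-collect-garbage -d may clear the build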

Running

$ nix-shell

would enter into an interactive shell with all dependencies in path as expected.

Running

$ nix-shell --run "python3"

would start the Python interpreter defined in ./default.nix with the tornado and redis packages (and also the redis server available in the process environment).

Finally, to turn the environment into a distributable Docker container, check my Nix to Docker build pack example on GitHub.

Add custom dependencies into a Nix expression

Sometimes, yet unfortunately often with Python packages, not all your dependencies are defined in nixpkgs already. The best solution, of course, would be to make pull requests to add them there, but it's also possible to just define them per project in the very same project-specific ./default.nix.

For example, let's upgrade tornado to its latest beta, and add a completely new Python package, redis_structures, with the following dependencies pattern:

with import <nixpkgs> {};
let dependencies = rec {

  # Customized existing packages using expression override
  _tornado = with python35Packages; tornado.override rec {
    name = "tornado-4.3b1";
    src = fetchurl {
      url = "https://pypi.python.org/packages/source/t/tornado/${name}.tar.gz";
      sha256 = "c7ddda61d9469c5745f3ac00e480ede0703dd1a4ef540a3d9bd5e03e9796e430";
    };
  };

  # Custom new packages using buildPythonPackage expression
  _redis_structures = with python35Packages; buildPythonPackage rec {
    name = "redis_structures-0.1.3";
    src = fetchurl {
      url = "https://pypi.python.org/packages/source/r/redis_structures/${name}.tar.gz";
      sha256 = "4076cff3ea91b7852052d963bfd2533c74e8a0054826584e058e685a911f56c5";
    };
    # Fix broken packaging (package is missing README.rst)
    prePatch = "touch README.rst";
    # Define package requirements (without pythonPackages prefix)
    propagatedBuildInputs = [ redis ];
  };
};
in with dependencies;
stdenv.mkDerivation rec {
  name = "env";

  # Mandatory boilerplate for buildable env
  env = buildEnv { name = name; paths = buildInputs; };
  builder = builtins.toFile "builder.sh" ''
    source $stdenv/setup; ln -s $env $out
  '';

  # Customizable development requirements
  buildInputs = [
    # Add packages from nix-env -qaP | grep -i needle queries
    redis

    # With Python configuration requiring a special wrapper
    (python35.buildEnv.override {
      ignoreCollisions = true;
      extraLibs = with python35Packages; [
        # Add pythonPackages without the prefix
        _tornado
        _redis_structures
      ];
    })
  ];

  # Customizable development shell setup with at least SSL certs set
  shellHook = ''
    export SSL_CERT_FILE=${cacert}/etc/ssl/certs/ca-bundle.crt
  '';
}

See the full explanation of the buildPythonPackage expression in the nixpkgs manual.

Generating Nix expressions

The only real issue in using Nix with Python is that only a portion of the packages released on PyPI are available in nixpkgs. And those which are available usually have only their latest version there.

If it were trivial to generate Nix expressions for all public Python packages, that would have already been done. Unfortunately, it's not trivial, and it hasn't been done. And that's not because of Nix, but because of the various imperfect ways in which Python packages can define their dependencies.

I was told that things would get better once PEP426 is implemented and used in practice.

Nevertheless, there are many tools to try for generating and maintaining Nix expressions for Python packages and projects. Each of them may emphasize different things and may or may not always produce a directly usable expression:

Personally, I'm using and developing only collective.recipe.nix, which is currently only usable out of the box for Python 2.7 projects; I'm working on support for Python 3.x projects and easier usage.

Full example project

Finally, let's try developing a demo Python 3.5 async / await HTTP-AMQP bridge: an HTTP service which distributes all the requests to workers through an AMQP broker. Just for fun:

$ git clone https://gist.github.com/datakurre/2076247049dabe16627f
$ cd 2076247049dabe16627f
$ ls -1
connection.py
default.nix
server.py
setup.py
supervisord.nix
worker.py

This project only has a few files:

./setup.py
to define the Python package
./connection.py
to manage the AMQP connection and give a new channel when requested (AMQP channels are kind of virtual AMQP connections running on top of the one real connection)
./server.py
to run a Tornado server that handles the incoming requests by passing them to the AMQP broker and returning the result
./worker.py
to handle requests from the AMQP broker and return the results back to the server
./default.nix
the Nix expression to set up a development environment with RabbitMQ and Python with the required packages
./supervisord.nix
an alternative Nix expression for setting up an environment with a pre-configured supervisord

Let's see the ./default.nix in detail:

with import <nixpkgs> {};
let dependencies = rec {
  _erlang = erlang.override { wxSupport = false; };
  _rabbitmq_server = rabbitmq_server.override { erlang = _erlang; };
  _enabled_plugins = builtins.toFile "enabled_plugins" "[rabbitmq_management].";
  _tornado = with python35Packages; tornado.override {
    name = "tornado-4.3b1";
    src = fetchurl {
      url = "https://pypi.python.org/packages/source/t/tornado/tornado-4.3b1.tar.gz";
      sha256 = "c7ddda61d9469c5745f3ac00e480ede0703dd1a4ef540a3d9bd5e03e9796e430";
    };
  };
  _aioamqp = with python35Packages; buildPythonPackage {
    name = "aioamqp-0.4.0";
    src = fetchurl {
      url = "https://pypi.python.org/packages/source/a/aioamqp/aioamqp-0.4.0.tar.gz";
      sha256 = "4882ca561f1aa88beba3398c8021e7918605c371f4c0019b66c12321edda10bf";
    };
  };
};
in with dependencies;
stdenv.mkDerivation rec {
  name = "env";
  env = buildEnv { name = name; paths = buildInputs; };
  builder = builtins.toFile "builder.sh" ''
    source $stdenv/setup; ln -s $env $out
  '';
  buildInputs = [
    _rabbitmq_server
    (python35.buildEnv.override {
      ignoreCollisions = true;
      extraLibs = [
        _tornado
        _aioamqp
      ];
    })
  ];
  shellHook = ''
    mkdir -p $PWD/var
    export RABBITMQ_LOG_BASE=$PWD/var
    export RABBITMQ_MNESIA_BASE=$PWD/var
    export RABBITMQ_ENABLED_PLUGINS_FILE=${_enabled_plugins}
    export SSL_CERT_FILE=${cacert}/etc/ssl/certs/ca-bundle.crt
    export PYTHONPATH=`pwd`
  '';
}

The most interesting part is the shellHook (for the nix-shell command) at the end, which configures the RabbitMQ server so that its state is stored under the current project directory (./var). Also note how the builtins.toFile Nix command is used to create a project-specific configuration file for RabbitMQ, stored in the Nix store (so that it doesn't bloat the project directory and gets purged with the Nix garbage collector). Any app supporting configuration through environment variables could get a development-environment-specific configuration in the same way.

To test this out, simply open a few terminals to start RabbitMQ, the server and the workers (as many as you'd like):

$ nix-shell --run "rabbitmq-server"
$ nix-shell --run "python3 server.py"
$ nix-shell --run "python3 worker.py"
$ nix-shell --run "python3 worker.py"
$ nix-shell --run "python3 worker.py"

Then watch requests getting nicely balanced between all the workers:

$ ab -n 1000 -c 100 http://localhost:8080/

You can also follow requests through RabbitMQ's management view at http://localhost:15672 (user: guest, password: guest).

If you'd like to develop the project with an IDE, just persist the environment with:

$ nix-build

And point your IDE (e.g. PyCharm) to use the Python interpreter created at ./result/bin/python3.

As an extra, there's an alternative environment with pre-configured supervisord:

$ nix-shell supervisord.nix
[nix-shell]$ supervisord
[nix-shell]$ supervisorctl status
rabbitmq                         RUNNING   pid 17683, uptime 0:00:01
server                           RUNNING   pid 17684, uptime 0:00:01
worker:worker-0                  RUNNING   pid 17682, uptime 0:00:01
worker:worker-1                  RUNNING   pid 17681, uptime 0:00:01
[nix-shell]$ supervisorctl shutdown
Shut down
[nix-shell]$ exit

More information

Nix manual, https://nixos.org/nix/
The official generic Nix manual for installing Nix, learning its built-in commands and the Nix language
Nixpkgs manual, https://nixos.org/nixpkgs/
The Nixpkgs manual for learning conventions and utilities provided in the Nix package collection (Nixpkgs)
Nix planet, http://planet.nixos.org/
Planet for Nix community bloggers
Nixpills, http://lethalman.blogspot.fi/search/label/nixpills
Famous blog series for learning how Nix really works in depth
Nix Conf, http://conf.nixos.org/
The first Nix conference site, hopefully hosting slides and links to recordings after the conference...
#nixos
The Nix, Nixpkgs and NixOS community IRC channel at Freenode
##nix-darwin
The Nix Darwin (OS X) user community IRC channel at Freenode

Generating Plone theming mockups with Chameleon

Some days ago there was a question on the Plone IRC channel about whether the Plone theming tool supports template inheritance [sic]. The answer is no, but let's play a bit with the problem.

The preferred theming solution for Plone, plone.app.theming, is based on the Diazo theming engine, which allows you to make a Plone theme from any static HTML mockup. To simplify a bit: just get a static HTML design, write a set of Diazo transformation rules, and you'll have a new Plone theme.

The idea behind this theming solution is to make the theming story for Plone the easiest in the CMS industry: just buy a static HTML design and you can use it as a theme as such. (Of course, the complexity of the required Diazo transformation rules depends on the complexity of the theme and the themed content.)

But back to the original problem: Diazo encourages the themer to use plenty of different HTML mockups to keep the transformation rules simple. One should not try to generate theme elements for different page types in Diazo transformation rules, but use dedicated HTML mockups for the different page types. But what if the original HTML design came with only a few selected mockups, and creating the rest from those is up to you? You could either copy and paste, or...

Here comes a proof-of-concept script for generating HTML mockups from TAL using the Chameleon template compiler (and Nix to remove the need for a virtualenv for the Python dependencies).

But first, why TAL? Because TAL's METAL macros can be used to turn the existing static HTML mockups into re-usable macros/mixins with customizable slots with minimal effort.

For example, an existing HTML mockup:

<html>
<head>...</head>
<body>
...
<div>
Here be dragons.
</div>
...
</body>
</html>

Could be made into a re-usable TAL template (main_template.html) with:

<metal:master define-macro="master">
<html>
<head>...</head>
<body>
...
<div metal:define-slot="content">
Here be dragons.
</div>
...
</body>
</html>
</metal:master>

And re-used in a new mockup with:

<html metal:use-macro="main_template.macros.master">
<body>
<div metal:fill-slot="content">
Thunderbirds are go!
</div>
</body>
</html>

Resulting in a new compiled mockup:

<html>
<head>...</head>
<body>
...
<div>
Thunderbirds are go!
</div>
...
</body>
</html>

The script maps all direct sub-directories and files with an .html suffix in the same directory as the compiled template into its TAL namespace, so that macros from those can be reached with the METAL syntax metal:use-macro="filebasename.macros.macroname" or metal:use-macro="templatedirname['filebasename'].macros.macroname".
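
For example, with a hypothetical source layout like the following, ./src/front-page.html can re-use macros from main_template.html with metal:use-macro="main_template.macros.master":

$ ls src
front-page.html  main_template.html
$ ./compose.py src build
src/front-page.html => build/front-page.html
src/main_template.html => build/main_template.html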

Finally, here comes the example code:

#! /usr/bin/env nix-shell
#! nix-shell -i python -p pythonPackages.chameleon pythonPackages.docopt pythonPackages.watchdog
"""Chameleon Composer

Copyright (c) 2015 Asko Soukka <asko.soukka@iki.fi>

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

Usage:
  ./compose.py <filename>
  ./compose.py src/front-page.html
  ./compose.py <source> <destination> [--watch]
  ./compose.py src build
  ./compose.py src build --watch

"""
from __future__ import print_function
from chameleon import PageTemplateFile
from chameleon import PageTemplateLoader
from docopt import docopt
from watchdog.observers import Observer
from watchdog.observers.polling import PollingObserver
from watchdog.utils import platform
import os
import sys
import time


def render(template):
    assert os.path.isfile(template)

    # Add siblings as templates into compilation context for macro-use
    context = {}
    dirname = os.path.dirname(template)
    for name in os.listdir(dirname):
        path = os.path.join(dirname, name)
        basename, suffix = os.path.splitext(name)
        if os.path.isdir(path):
            context[basename] = PageTemplateLoader(path, '.html')
        elif suffix == '.html':
            context[basename] = PageTemplateFile(path)

    return PageTemplateFile(template)(**context).strip()


class Composer(object):
    def __init__(self, source, destination):
        self.source = source
        self.destination = destination
        self.mapping = {}
        self.update()

    def update(self):
        source = self.source
        destination = self.destination
        mapping = {}

        # File to file
        if os.path.isfile(source) and os.path.splitext(destination)[-1]:
            mapping[source] = destination

        # File to directory
        elif os.path.isfile(source) and not os.path.splitext(destination)[-1]:
            mapping[source] = os.path.join(
                destination,
                os.path.splitext(os.path.basename(source))[0] + '.html'
            )

        # Directory to directory
        elif os.path.isdir(source):
            for filename in os.listdir(source):
                path = os.path.join(source, filename)
                if os.path.splitext(path)[-1] != '.html':
                    continue
                mapping[path] = os.path.join(
                    destination,
                    os.path.splitext(os.path.basename(path))[0] + '.html'
                )

        self.mapping = mapping

    def __call__(self):
        for source, destination in self.mapping.items():
            if os.path.dirname(destination):
                if not os.path.isdir(os.path.dirname(destination)):
                    os.makedirs(os.path.dirname(destination))
            with open(destination, 'w') as output:
                print('{0:s} => {1:s}'.format(source, destination))
                output.write(render(source).strip().encode('utf-8'))

    # noinspection PyUnusedLocal
    def dispatch(self, event):
        # TODO: Build only changed files
        self.update()
        self.__call__()

    def watch(self):
        if platform.is_darwin():
            observer = PollingObserver()  # Seen FSEventsObserver to segfault
        else:
            observer = Observer()
        observer.schedule(self, self.source, recursive=True)
        observer.start()
        try:
            while True:
                time.sleep(1)
        except KeyboardInterrupt:
            observer.stop()
        observer.join()
        sys.exit(0)


if __name__ == '__main__':
    arguments = docopt(__doc__, version='Chameleon Composer 1.0')

    if arguments.get('<filename>'):
        print(render(arguments.get('<filename>')))
        sys.exit(0)

    composer = Composer(arguments.get('<source>'),
                        arguments.get('<destination>'))
    composer()

    if arguments.get('--watch'):
        print('Watching {0:s}'.format(arguments.get('<source>')))
        composer.watch()

Building Docker containers from scratch using Nix

Nix makes it reasonable to build Docker containers from scratch. The resulting containers are still big (yet I heard there's ongoing work to make Nix builds more lean), but at least you don't need to think about choosing and keeping the base images up to date.

Next follows an example of how to make a Docker image for Plone with Nix.

Creating Nix expression with collective.recipe.nix

First, we need a Nix expression for Plone. Here I use one built with my buildout-based generator, collective.recipe.nix. It generates a few expressions, including plone.nix and plone-env.nix. The first one is only really usable with nix-shell, but the other one can be used for building a standalone Plone for a Docker image.

To create ./plone-env.nix, I need a buildout environment in ./default.nix:

with import <nixpkgs> {}; {
  myEnv = stdenv.mkDerivation {
    name = "myEnv";
    buildInputs = [
      pythonPackages.buildout
    ];
    shellHook = ''
      export SSL_CERT_FILE=~/.nix-profile/etc/ca-bundle.crt
    '';
  };
}

And a minimal Plone buildout using my recipe in ./buildout.cfg:

[buildout]
extends = https://dist.plone.org/release/4-latest/versions.cfg
parts = plone
versions = versions

[instance]
recipe = plone.recipe.zope2instance
eggs = Plone
user = admin:admin

[plone]
recipe = collective.recipe.nix
eggs =
    ${instance:eggs}
    plone.recipe.zope2instance

[versions]
zc.buildout =
setuptools =

And finally produce both plone.nix and the required plone-env.nix with:

$ nix-shell --run buildout

Creating Docker container with Nix Docker buildpack

Next up is building the container with our Nix expression with the help of a builder container, which I call Nix Docker buildpack.

At first, we need to clone that:

$ git clone https://github.com/datakurre/nix-build-pack-docker
$ cd nix-build-pack-docker

And build the builder:

$ cd builder
$ docker build -t nix-build-pack --rm=true --force-rm=true --no-cache=true .
$ cd ..

Now the builder can be used to build a tarball, which only contains the built Nix derivation for Plone. Let's copy the created plone-env.nix into the current working directory and run:

$ docker run --rm -v `pwd`:/opt nix-build-pack /opt/plone-env.nix

After a while, that directory should contain a file called plone-env.nix.tar.gz, which only contains two directories in its root: /nix for the built derivation and /app for easy-access symlinks, like /app/bin/python.
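
You can verify the contents of the tarball, for example, with:

$ tar -tzf plone-env.nix.tar.gz | head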

Now we need ./Dockerfile for building the final Plone image:

FROM scratch
ADD plone-env.nix.tar.gz /
EXPOSE 8080
USER 1000
ENTRYPOINT ["/app/bin/python"]

And finally, a Plone image can be built with

$ docker build -t plone --rm=true --force-rm=true --no-cache=true .

Running Nix-built Plone container

To run Plone in a container with the image built above, we still need the configuration for Plone. We can use the normal buildout-generated configuration, but we need to

  1. remove site.py from parts/instance.
  2. fix paths in parts/instance/etc/zope.conf to match the mounted paths in the Docker container (/opt/...)
  3. create some temporary directory to be mounted into the container
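
For example (a rough sketch using GNU sed; the exact paths depend on your buildout):

$ rm parts/instance/site.py
$ sed -i "s|$(pwd)|/opt|g" parts/instance/etc/zope.conf
$ mkdir -p tmp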

Also, we need a small wrapper, ./instance.py, to call the Plone instance script, because we cannot use the buildout-generated one:

import sys
import plone.recipe.zope2instance.ctl

sys.exit(plone.recipe.zope2instance.ctl.main(
    ['-C', '/opt/parts/instance/etc/zope.conf']
    + sys.argv[1:]
))

When these are in place, within the buildout directory, we should now be able to run Plone in a Docker container with:

$ docker run --rm -v `pwd`:/opt -v `pwd`/tmp:/tmp -P plone /opt/instance.py fg

The current working directory is mapped to /opt and some temporary directory is mapped to /tmp (because our image didn't even contain a /tmp).

Note: When I tried this out, for some reason (possibly because of the VirtualBox mount with boot2docker), I had to remove ./var/filestorage/Data.fs.tmp between runs or I got errors on ZODB writes.