On building fat themes for Plone

Could fat themes become the common ground between filesystem Plone developers and through-the-web integrators?

Plone ships with a lot of bundled batteries for building sophisticated content management solutions. Content types, workflows, portlets and event based content rules can all be customized just by using a browser, without writing a single line of new code. Yet bad early experiences from maintaining such through-the-web implementations have made it common to disregard that approach and prefer the (more technical) filesystem based approach instead.

During the last few years, thanks to the Diazo based theming framework for Plone, there has been a slow renaissance of through-the-web customization of Plone. Besides Diazo itself, the new theming framework introduced a clever new storage layer, plone.resource, which supports themes both shipped in Python packages and developed through-the-web. In addition, the new theming editor made it easy to export through-the-web developed themes as re-usable zip packages.

Initially, I was hoping for some kind of new TTW add-on approach to emerge on top of plone.resource. Nowadays it's getting clear that we are just going to add more features into themes instead. Maybe it's better that way.

By fat themes, I mean themes which do not only provide the look, but also some behavior for the site. Most of my themes like this have provided all the customizable configuration for their sites. The main benefit has been faster iterations, because I've been able to deliver updates without running buildout or restarting the site.

Obviously, configuring everything in a theme is not yet possible with vanilla Plone, but requires selected theming related add-ons and tools:


collective.themefragments makes it possible to include Zope/Chameleon page template fragments in your theme and inject them into the rendered content using Diazo rules. It was originally proposed as a core feature for Plone theming (by Martin Aspeli), but because it was rejected, I had to release it as its own add-on. Later I added support for restricted Python scripts (callable from those fragments) and a tile to make fragments addable in Plone Mosaic layouts.

Use of themefragments requires the add-on to be available for Plone (e.g. by adding it to eggs in buildout and running buildout) and writing fragment templates into the fragments subdirectory of the theme, e.g. ./fragments/title.pt:


  <h1 tal:content="context/Title">Title</h1>

And injecting them in ./rules.xml:

<replace css:theme="h1" css:content="h1" href="/@@theme-fragment/title" />


<replace css:theme="h1">
  <xsl:copy-of select="document('@@theme-fragment/title',
                       $diazo-base-document)/html/body/*" />
</replace>

depending on the flavor of your Diazo rules.

It's good to know that rendering fragments and executing their scripts relies on Zope 2 restricted Python and may cause unexpected Unauthorized exceptions (because you write them as an admin, but viewers may be unauthenticated). More than once I've needed to set the verbose-security flag to figure out the real cause of such an exception...
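For reference, with plone.recipe.zope2instance the flag can be enabled in the buildout part for the Zope instance (a sketch; the part name and the eggs list are illustrative):

```ini
[instance]
recipe = plone.recipe.zope2instance
user = admin:admin
eggs =
    Plone
    collective.themefragments
verbose-security = on
```

After re-running buildout and restarting the instance, the traceback of an Unauthorized exception will include the actual checked permission and roles.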


rapido.plone must be mentioned, even though I don't have it in production myself yet. Rapido goes beyond just customizing existing features of Plone by making it possible to implement completely new interactive features purely in a theme. Rapido is the spiritual successor of Plomino and probably the most powerful add-on out there when it comes to customizing Plone.

Compared to themefragments, Rapido is more permissive in its scripts (e.g. it allows use of plone.api). It also provides its own fast storage layer (Souper) for storing, indexing and accessing custom data.


collective.themesitesetup has become my "Swiss Army knife" for configuring Plone sites from theme. It's a theming plugin, which imports Generic Setup steps directly from theme, when the theme is being activated. It also includes helper views for exporting the current exportable site configuration into editable theme directories.

This is the theming add-on, which makes it possible to bundle custom content types, workflows, portlets, content rule configurations, registry configuration and other Generic Setup-configurable stuff in a theme.
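For example, a registry import step bundled in the theme's Generic Setup profile could look like this (a sketch; the record name plone.displayed_types exists in Plone 5, but the value shown here is illustrative):

```xml
<!-- install/registry.xml -->
<registry>
  <record name="plone.displayed_types">
    <value purge="false">
      <element>Document</element>
      <element>News Item</element>
    </value>
  </record>
</registry>
```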

Recently, I added support for importing translation domains, XML schemas and custom permissions as well.

A theme manifest enabling the plugin (once the plugin is available for Plone) could look like this:

[theme:genericsetup]
permissions =
    MyProject.AddBlogpost    MyProject: Add Blogpost

and the theme package might include files like:





collective.taxonomy is not really a theming plugin, but makes it possible to include large named vocabularies with translations in a Generic Setup profile. That makes it a nice companion to collective.themesitesetup by keeping XML schemas clean from large vocabularies.


collective.dexteritytextindexer is also "just" a normal add-on, but because it adds searchable text indexing support for the custom fields of custom content types, it is a mandatory add-on when a theme contains new content types.


Of course, the core of any theme is still the CSS and JavaScript that make the site frontend look and feel good. Since Mockup and Plone 5, we've had RequireJS based JavaScript modules and bundles for Plone, and a LESS based default theme, Barceloneta (with a SASS version also available). Unfortunately, thanks to the ever-changing state of the JavaScript ecosystem, there's currently no single correct tool for re-building and customizing these Plone frontend resources.

My current tool of choice for building frontend resources for a Plone theme is Webpack, which (with the help of my plugin) makes it possible to bundle (almost) all frontend resources from the Plone resource registry into the theme, and inject my customizations while doing that. And with a single "publicPath" setting, the resulting theme can load those bundles from a CDN.
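For illustration, the relevant part of a Webpack configuration might look like this (a sketch; the paths and the CDN URL are placeholders, only output.publicPath is the setting discussed above):

```javascript
// webpack.config.js (sketch): output.publicPath prefixes the URLs of
// all emitted chunks and assets, so bundles can be served from a CDN
module.exports = {
  entry: './src/index.js',
  output: {
    path: __dirname + '/theme/bundles',
    filename: '[name].js',
    publicPath: 'https://cdn.example.com/++theme++mytheme/bundles/'
  }
};
```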

Configuring Webpack is not the easiest thing to learn, and debugging possible bundle build issues can be even harder. Yet, I've tried to make it easy to try out with my plonetheme.webpacktemplate mr.bob template.


It should be clear by now that even though my themes are compatible with the through-the-web approach and customizable through it, I still work on the filesystem with version control and the traditional Plone add-on development toolchain (I may even have automated acceptance tests for non-trivial theme features). For a long time, I just configured a global plone.resource directory in buildout and rsync'd theme updates to servers. It was about time to automate that.

plonetheme-upload is an npm installable NodeJS package, which provides a simple command line tool for uploading a theme directory into Plone using the Upload Zip file feature of Plone Theme settings. Its usage is as simple as:

$ plonetheme-upload my-theme-dir http://my.plone.domain

Possibly the next version should include another CLI tool, plonetheme-download, to help through-the-web themers to keep their themes under version control too.

Plone Barcelona Sprint 2016 Report

For the last week, I was lucky enough to be allowed to participate in the Plone community sprint in Barcelona. The sprint was about polishing the new RESTful API for Plone, and experimenting with new frontend and backend ideas, to prepare Plone for the next decade (as envisioned in its roadmap). And once again, the community proved the power of its deeply rooted sprinting culture (adopted from the Zope community in the early 2000s).

Just think about this: you need to get some new features for your sophisticated software framework, but you don't have the resources to do it on your own. So, you set up a community sprint: reserve the dates and the venue, choose the topics for the sprint, advertise it or invite the people you want, and get a dozen experienced developers to enthusiastically work on your topics for a full week, mostly at their own cost. It's a crazy bargain. More than too good to be true. Yet, that's just what seems to happen in the Plone community, over and over again.

To summarize, the sprint had three tracks: First there was the completion of plone.restapi – a high quality and fully documented RESTful hypermedia API for all of the currently supported Plone versions. After this productive sprint, the first official release of it should be out any time now.

Then there was the research and prototyping of a completely new REST API based user interface for Plone 5 and 6: an extensible Angular 2 based app, which does all its interaction with the Plone backend through the new RESTful API, and would universally support both server side and browser side rendering for fast response times, SEO and accessibility. These goals were also reached, all the major blockers were resolved, and the chosen technologies were proven to work together. To pick my favorite side product from that track: Albert Casado, the designer of the Plone 5 default theme in LESS, got to migrate the theme to SASS.

Finally, there was our small backend moonshot team: Ramon and Aleix from Iskra / Intranetum (Catalonia), Eric from AMP Sport (U.S.), Nathan from Wildcard (U.S.) and yours truly from University of Jyväskylä (Finland). Our goal was to start with an alternative lightweight REST backend for the new experimental frontend, re-using the best parts of the current Plone stack where possible. Eventually, to meet our goals within the given time constraints, we agreed on the following stack: an aiohttp based HTTP server, the Plone Dexterity content-type framework (without any HTML views or forms) built around the Zope Toolkit, and ZODB as our database, all on Python 3.5 or greater. Pyramid remains a possible alternative to ZTK later.


I was responsible for preparing the backend track in advance, and got us started with a simple aiohttp based HTTP backend with an experimental ZODB connection supporting multiple concurrent transactions (when handled with care). Most of my actual sprint time went into upgrading the Plone Dexterity content-type framework (and its tests) to support Python 3.5. That also resulted in backwards compatible fixes and pull requests for Python 3.5 support for all its dependencies in the plone.* namespace.

Ramon took the lead in integrating ZTK into the new backend, implemented content-negotiation and content-language aware traversal, and kept us motivated by raising the sprint goal once features started clicking together. Aleix implemented an example docker-compose setup for everything being developed at the sprint, and open-sourced their in-house OAuth server as plone.oauth. Nathan worked originally in the frontend team, but joined us for the last third of the sprint for a pytest based test setup and an asyncio-integrated Elasticsearch integration. Eric replaced the Zope2 remains in our Dexterity fork with ZTK equivalents, and researched all the available options for integrating the content serialization of plone.restapi into our independent backend, eventually leading to a new package called plone.jsonserializer.

The status of our backend experiment after the sprint? Surprisingly good. We got far enough that it's almost easier to point out the missing and incomplete pieces that still remain on our to-do list:

  • We ported all Plone Dexterity content-type framework dependencies to Python 3.5. We only had to fork the main plone.dexterity package, which still has some details in its ZTK integration to finish and tests to be fixed. Also, special fields (namely files, richtext and maybe relations) are still to be done.
  • Deserialization from JSON to Dexterity was left incomplete, because we were not able to fully re-use the existing plone.restapi-code (it depends on z3c.form-deserializers, which we cannot depend on).
  • We got a basic aiohttp based Python 3.5 asyncio server running with ZODB and asynchronous traversal, permissions, REST-service mapping and JSON-serialization of Dexterity content. Integration with the new plone.oauth and zope.security was also almost done, and Ramon promised to continue working on that to get the server ready for their in-house projects.
  • Workflows and their integration are still to be done. We planned to try repoze.workflow at first, and if that's not a fit, then look again into porting DCWorkflow or other third party libraries.
  • Optimization for asyncio still needs more work, once the basic CRUD-features are in place.

So, that was a lot of checkboxes ticked in a single sprint, really something to be proud of. And as if that weren't enough, an overlapping Plone sprint in Berlin got the Python 3.5 upgrades of our stack even further, my favorite result being a helper tool for migrating Python 2 ZODB databases to Python 3. These two sprints really transformed the nearing end-of-life of Python 2 from a threat into a possibility for our community, and confirmed that Plone has a viable roadmap well beyond 2020.

Personally, I just cannot wait for a suitable project with Dexterity based content types on a modern asyncio based HTTP server, or the next chance to meet our wonderful Catalan friends! :)

Evolution of a Makefile for building projects with Docker

It's hard to move to GitLab and resist the temptation of its integrated GitLab CI. And with GitLab CI, it's just natural to run all CI jobs in Docker containers. Yet, to avoid vendor lock-in with its integrated Docker support, we chose to keep our .gitlab-ci.yml configurations minimal and do all Docker calls with GNU make instead. This also ensured that all of our CI tasks remain locally reproducible. In addition, we wanted to use official upstream Docker images from the official hub as far as possible.
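A minimal .gitlab-ci.yml in this spirit just delegates to make (a sketch; job names and the runner setup vary by project):

```yaml
test:
  script:
    - make -f Makefile.docker test
```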

As always with make, there's a danger that Makefiles themselves become projects of their own. So, let's begin with a completely hypothetical Makefile:

all: test

test:
	karma test

.PHONY: all test

Separation of concerns

At first, we want to keep all Docker related commands separate from the actual project specific commands. This led us to have two separate Makefiles: a traditional default one, which expects all the build tools and other dependencies to exist in the running system, and a Docker specific one. We named them Makefile (as already seen above) and Makefile.docker (below):

all: test

test:
	docker run --rm -v $(PWD):/build -w /build node:5 make test

.PHONY: all test

So, we simply run a Docker container of the required upstream language image (here Node 5), mount our project into the container and run make for the default Makefile inside the container.

$ make -f Makefile.docker

Of course, the logical next step is to abstract that Docker call into a function, making it trivial to wrap other make targets to be run in Docker as well:

make = docker run --rm -v $(PWD):/build -w /build node:5 make $1

all: test

test:
	$(call make,test)

.PHONY: all test

Docker specific steps in the main Makefile

In the beginning, I mentioned that we try to use the official upstream Docker images whenever possible, to keep our Docker dependencies fresh and supported. Yet, what if we need just minor modifications to them, like the installation of a couple of extra packages...

Because our Makefile.docker mostly just wraps the make call for the default Makefile into an auto-removed Docker container run (docker run --rm), we cannot easily install extra packages into the container in Makefile.docker. This is the exception where we add Docker related commands into the default Makefile.

There are probably many ways to detect that we are running inside a Docker container, but my favourite is testing for the existence of the /.dockerenv file. So, any Docker container specific command in the Makefile is wrapped with a test for that file, as in:

all: test

test:
	[ -f /.dockerenv ] && npm -g i karma || true
	karma test

.PHONY: all test

Getting rid of the filesystem side-effects

Unfortunately, one does not simply mount a source directory from the host into a container and run arbitrary commands with arbitrary users with that mount in place. (Unless one wants to play the game of having matching user ids inside and outside the container.)

To avoid all the issues arising from Docker possibly trying (and sometimes succeeding) to create files in the mounted host file system, we may run Docker without a host mount at all, by piping the project sources into the container:

make = git archive HEAD | \
       docker run -i --rm -v /build -w /build node:5 \
       bash -c "tar x --warning=all && make $1"

all: test

test:
	$(call make,test)

.PHONY: all test
  • git archive HEAD writes a tarball of the project git repository HEAD (latest commit) into stdout.
  • -i in docker run enables stdin in Docker.
  • -v /build in docker run ensures that /build exists in the container (as a temporary volume).
  • bash -c "tar x --warning=all && make $1" is the single command to be run in the container (bash with arguments). It extracts the piped tarball from stdin into the current working directory of the container (/build) and then runs the given make target from the extracted Makefile.
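The piping trick is easy to try locally without Docker; here plain tar stands in for both git archive and the container (the paths are illustrative):

```shell
# Tar a source tree to stdout and extract it into a clean build
# directory, like `git archive HEAD | docker run ... "tar x && make"`
rm -rf /tmp/pipe-demo
mkdir -p /tmp/pipe-demo/src /tmp/pipe-demo/build
echo "hello" > /tmp/pipe-demo/src/README
tar c -C /tmp/pipe-demo/src . | tar x -C /tmp/pipe-demo/build
cat /tmp/pipe-demo/build/README
```

The build directory gets a full copy of the sources, so nothing the build writes can leak back into the host checkout.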

Caching dependencies

One well known issue with Docker based builds is the amount of language specific dependencies required by your project on top of the official language image. We've solved this by creating a persistent data volume for those dependencies and sharing that volume from build to build.

For example, defining a persistent NPM cache in our Makefile.docker would look like this:

CACHE_VOLUME = npm-cache

INIT_CACHE = \
    docker volume ls | grep $(CACHE_VOLUME) || \
    docker create --name $(CACHE_VOLUME) -v $(CACHE_VOLUME):/cache node:5

make = $(INIT_CACHE); \
       git archive HEAD | \
       docker run -i --rm -v $(CACHE_VOLUME):/cache \
       -v /build -w /build node:5 \
       bash -c "tar x --warning=all && make \
       NPM_INSTALL_ARGS='--cache /cache --cache-min 604800' $1"

all: test

test:
	$(call make,test)

.PHONY: all test
  • The CACHE_VOLUME variable holds the fixed name for the shared volume and for the dummy container that keeps the volume from being garbage collected by docker run --rm.
  • INIT_CACHE ensures that the cache volume is always present (so that it can simply be removed if its state goes bad).
  • -v $(CACHE_VOLUME):/cache in docker run mounts the cache volume into the test container.
  • NPM_INSTALL_ARGS='--cache /cache --cache-min 604800' in the make call sets a make variable NPM_INSTALL_ARGS with arguments that configure the cache location for NPM. That variable, of course, should be explicitly defined and used in the default Makefile:

all: test

test:
	@[ -f /.dockerenv ] && npm -g $(NPM_INSTALL_ARGS) i karma || true
	karma test

.PHONY: all test

The cache volume, of course, adds state between the builds and may cause issues that require resetting the cache volume when that happens. Still, most of the time, this has been working very well for us, significantly reducing the required build time.

Retrieving the build artifacts

The downside of running Docker without mounting anything from the host is that it's a bit harder to get build artifacts (e.g. test reports) out of the container. We've tried both stdout and docker cp for this. In the end, we settled on a dedicated build data volume and docker cp in Makefile.docker:

CACHE_VOLUME = npm-cache

INIT_CACHE = \
    docker volume ls | grep $(CACHE_VOLUME) || \
    docker create --name $(CACHE_VOLUME) -v $(CACHE_VOLUME):/cache node:5

make = $(INIT_CACHE); \
       git archive HEAD | \
       docker run -i --rm -v $(CACHE_VOLUME):/cache \
       -v /build -w /build $(DOCKER_RUN_ARGS) node:5 \
       bash -c "tar x --warning=all && make \
       NPM_INSTALL_ARGS='--cache /cache --cache-min 604800' $1"

all: test

test: DOCKER_RUN_ARGS = --volumes-from=$(BUILD)
test:
	$(call make,test); \
	  status=$$?; \
	  docker cp $(BUILD):/build .; \
	  docker rm -f -v $(BUILD); \
	  exit $$status

.PHONY: all test

# http://cakoose.com/wiki/gnu_make_thunks
BUILD_GEN = $(shell docker create -v /build node:5)
BUILD = $(eval BUILD := $(BUILD_GEN))$(BUILD)

A few powerful make patterns here:

  • DOCKER_RUN_ARGS = sets a placeholder variable for injecting make target specific options into docker run.
  • test: DOCKER_RUN_ARGS = --volumes-from=$(BUILD) sets a make target local value for DOCKER_RUN_ARGS. Here it adds volumes from a container id defined in the variable BUILD.
  • BUILD is a lazily evaluated make variable (created with the GNU make thunk pattern). It gets its value when it's used for the first time. Here it is set to the id of a new container with a shareable volume at /build, so that docker run ends up writing all its build artifacts into that volume.
  • Because make would stop its execution after the first failing command, we must wrap the make test call of docker run so that we
    1. capture the original return value with status=$$?
    2. copy the artifacts to host using docker cp
    3. delete the build container
    4. finally return the captured status with exit $$status.
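The capture-and-cleanup sequence can be sketched with plain shell commands standing in for the docker calls (run_build here is a hypothetical stand-in for $(call make,test)):

```shell
# A failing "build" whose exit status must survive the cleanup steps
run_build() { return 1; }

if run_build; then status=0; else status=$?; fi  # capture original status
echo "copying artifacts out"       # stands in for: docker cp $(BUILD):/build .
echo "removing build container"    # stands in for: docker rm -f -v $(BUILD)
echo "build exited with $status"
```

Only after the artifacts are copied and the container removed is the captured status propagated, so CI still reports the build as failed.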

This pattern may look a bit complex at first, but it has been powerful enough to start any number of temporary containers and link or mount them with the actual test container (similarly to docker-compose, but directly in the Makefile). For example, we use this to start and link Selenium web driver containers to be able to run Selenium based acceptance tests in the test container on top of the upstream language base image, and then retrieve the test reports from the build container volume.

Building a Plone form widget with React + Redux

As much as I love the new through-the-web resource registries in Plone 5 (I really do), for the current Plone 5 sites in development or already in production, I've ended up bundling all frontend resources into the theme with Webpack. That gives me the same "state of the art" frontend toolchain as my other current projects, but it also adds some overhead, because I need to do extra work for each new add-on with frontend resources. So, I still cannot really recommend Webpack for Plone, unless you are already familiar with Webpack. Yet, learning to bundle everything with Webpack really helps to appreciate how well the Plone 5 resource registries already work.

My current workflow, in brief, is to add all common configuration into plonetheme.webpack and re-use that as a git submodule in individual projects, similarly to plonetheme.webpackexample. The latter also includes the example code for this post. I was asked how everything goes together when using React and Redux for building widgets for Plone. Here's how...

(You can see the complete example in plonetheme.webpackexample, particularly in 1 and 2.)

Injecting a pattern with Diazo

In a usual use case, I have a custom content type (maybe TTW designed) with simple textline or lines (textarea) fields, which require rich JavaScript widgets to ease entering of valid input.

The current Plone convention for such widgets is to implement the widget as a Patternslib compatible pattern. The required class name (and options) for the pattern initialization could, of course, be injected by registering a custom z3c.form widget for the field, but it can also be done with a relatively simple Diazo rule and some XSLT:

<!-- Inject license selector pattern -->
<replace css:content="textarea#form-widgets-IDublinCore-rights">
  <xsl:copy>
    <xsl:attribute name="class">
      <xsl:value-of select="concat(@class, ' pat-license-selector')" />
    </xsl:attribute>
    <xsl:apply-templates select="@*[name()!='class']|node()" />
  </xsl:copy>
</replace>

Registering a pattern in ES6

Of course, you cannot yet use ES6 in Plone without figuring out a way to transpile it into JavaScript currently supported by your target browsers and RequireJS (something that comes quite easily with Webpack). Once you can, registering a Patternslib compatible pattern in ES6 turns out to be really simple:

import Registry from 'patternslib/core/registry';

// ... (imports for other requirements)

Registry.register({

  name: 'license-selector',
  trigger: '.pat-license-selector',

  init($el, options) {
    // ... (pattern code)
  }
});

Choosing React + Redux for widgets

You must have already heard about the greatest benefits of using React as a view rendering library: a simple unidirectional data flow with stateless views and pretty fast rendering with virtual DOM based optimization. While there are many alternatives to React now, it probably has the best ecosystem, and React Lite-like optimized implementations make it small enough to be embedded anywhere.

Redux, while technically independent from React, helps to enforce the React ideals of predictable stateless views in your React app. In my use case of building widgets for individual input fields, it feels optimal because of its "single data store model": It's simple to both serialize the widget value (Redux store state) into a single input field and de-serialize it later from the field for editing.

Single file React + Redux skeleton

Even though Redux is a very small library with simple conventions, it seems to be hard to find an easy example of using it. That's because most of the examples seem to assume that you are building a large scale app with it. Yet, with a single widget, it would be nice to have all the required parts close to each other in a single file.

As an example, I implemented a simple Creative Commons license selector widget, which includes all the required parts of React + Redux based widget in a single file (including Patternslib initialization):

import React from 'react';
import ReactDOM from 'react-dom';
import {createStore, compose} from 'redux';
import Registry from 'patternslib/core/registry';

// ... (all the required imports)

// ... (all repeating marker values as constants)

function deserialize(value) {
  // ... (deserialize value from field into initial Redux store state)
}

function serialize(state) {
  // ... (serialize Redux store state into input field value)
}

function reducer(state={}, action) {
  // ... ("reducer" to apply action to state and return new state)
}

export default class LicenseSelector extends React.Component {
  render() {
    // ...
  }
}

LicenseSelector.propTypes = {
  // ...
};

// ... (all the required React components with property annotations)

Registry.register({

  name: 'license-selector',
  trigger: '.pat-license-selector',

  init($el) {
    // Get form input element and hide it
    const el = $el.hide().get(0);

    // Define Redux store and initialize it from the field value
    const store = createStore(reducer, deserialize($el.val()));

    // Create container for the widget
    const container = document.createElement('div');
    el.parentNode.insertBefore(container, el);
    container.className = 'license-selector';

    // Define main render
    function render() {
      // Serialize current widget value back into input field
      $el.val(serialize(store.getState()));

      // Render widget with current state
      ReactDOM.render((
        <LicenseSelector
          // Pass state
          {...store.getState()}
          // Pass Redux action factories
          setSharing={(value) => store.dispatch({
            type: SET_SHARING,
            value: value
          })}
          setCommercial={(value) => store.dispatch({
            type: SET_COMMERCIAL,
            value: value
          })}
        />
      ), container);
    }

    // Subscribe to render when state changes
    store.subscribe(render);

    // Call initial render
    render();
  }
});
Not too complex, after all...

Implementing and injecting a display widget as a themefragment

Usually, displaying the value of a custom field requires more HTML than is convenient to inline into Diazo rules, and may also require data that is not rendered by the default Dexterity views. My convention for implementing these "display widgets" in a theme is the following combination of theme fragments and Diazo rules.

At first, I define a theme fragment. Theme fragments are simple TAL templates saved in the ./fragments folder inside a theme, and are supported by installing the collective.themefragments add-on. My example theme has the following fragment at ./fragments/license.pt:

<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:tal="http://xml.zope.org/namespaces/tal">
<body>
  <p tal:condition="context/rights|undefined">
    <img src="https://i.creativecommons.org/l/${context/rights}/4.0/88x31.png"
         alt="${context/rights}" />
  </p>
</body>
</html>

Finally, the fragment is injected into the desired place using Diazo. In my example, I use Diazo inline XSLT to append the fragment into the below-content viewlets container:

<!-- Inject license badge below content body -->
<replace css:content="#viewlet-below-content-body">
  <xsl:copy>
    <xsl:apply-templates select="@*|node()" />
    <xsl:copy-of select="document('@@theme-fragment/license',
                         $diazo-base-document)/html/body/*" />
  </xsl:copy>
</replace>

Building Plone theme with Webpack

I just fixed my old post on customizing Plone 5 default theme on the fly to work with the final Plone 5.0 release.

But if you couldn't care less about TTW (through-the-web) theme development, here's something for you too: it is possible to build a theme for Plone 5 with all of Plone 5's stylesheets and javascripts using Webpack – the current tool of choice for bundling web app frontend resources.

With Webpack, you can completely ignore Plone 5's TTW resource registry, and build your own optimal CSS and JS bundles with all the mockup patterns and other JS frameworks you need - with live preview during development.

To try it out, take a look at my WIP example theme at: https://github.com/datakurre/plonetheme.webpack


Pros:

  • Ship your theme with Webpack-optimized resource chunks automatically split into synchronous and asynchronously required resources.
  • Get faster-than-reload live previews of your changes during development thanks to Webpack's development server's hot module replacement support.
  • Get complete control of Plone 5 frontend resources and completely bypass the Plone 5 TTW resource registry (it's awesome for the TTW workflow, but not optimal for the filesystem one).
  • Use the latest JS development tools (Webpack integrates nicely with Babel, ESLint and others) without need for legacy Bower, Grunt, Gulp or RequireJS.


Cons:

  • Installing a new Plone add-on requires configuring and building the add-on's resources into the theme.
  • You are on your own now: you no longer get JS / CSS updates with new Python package releases, but always need to rebuild your theme as well.

Nix in Docker – Best of Both Worlds

I'm using Nix mostly on a mac as a development tool, and every now and then I get blocked by some packages not working on OS X.

For those situations I've been working on my own Nix image for Docker: a Docker image so minimal that it only contains the files from the Nix installer, yet it can re-use a persistent shared Nix installation between Docker containers to make itself fast, convenient and lean.

My build recipe is now available at:


  • A single Docker image, which can be used to run anything from nixpkgs.
  • You can nix-shell -p to get a Docker isolated development shell with all your requirements installed from nixpkgs.
  • You can -v /my/path:/var/nixpkgs to use your own nixpkgs clone.
  • You can -v /my/path:/etc/nix to use your own nix configuration.
  • With the shared data container:
    • Sequential and simultaneous containers can share the same Nix installation.
    • You can nix-env -i to add new commands (or manage custom profiles).
    • You can nix-collect-garbage -d to clean up the data container.
  • You can use it as a base image and add new stuff with nix-env -i.


Build a Docker image named nix using the provided Docker based build chain:

$ git clone https://gist.github.com/datakurre/a5d95794ce73c28f6d2f
$ cd a5d95794ce73c28f6d2f
$ make

Create a Docker data container named nix to use a shared persistent /nix for all your Nix containers:

$ docker create --name nix -v /nix nix sh

To know more about where the nix data gets stored with this setup, please, read Docker documentation about managing data in containers.

Examples of use

Running a Python interpreter with some packages:

$ docker run --rm --volumes-from=nix -ti nix \
         nix-shell -p python35Packages.pyramid --run python3

Running a Python Jupyter notebook with mounted context:

$ mkdir .jupyter
$ echo "c.NotebookApp.ip = '*'" > .jupyter/jupyter_notebook_config.py
$ docker run --rm --volumes-from=nix -ti \
         -v $PWD:/mnt -w /mnt -e HOME=/mnt -p 8888 nix \
         nix-shell -p python35Packages.notebook --run "jupyter notebook"

Running a Haskell Jupyter notebook with mounted context:

$ mkdir .jupyter
$ echo "c.NotebookApp.ip = '*'" > .jupyter/jupyter_notebook_config.py
$ docker run --rm --volumes-from=nix -ti \
         -v $PWD:/mnt -w /mnt -e HOME=/mnt -p 8888 nix \
         nix-shell -p ihaskell --run "ihaskell-notebook"

Running a development shell for a default.nix in a mounted context:
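
Presumably this follows the same pattern as the notebook examples above (a sketch; assumes a default.nix in the current directory and the locally built nix image):

```shell
# Skip when Docker or the locally built "nix" image is unavailable.
command -v docker >/dev/null 2>&1 || exit 0
docker inspect --type=image nix >/dev/null 2>&1 || exit 0

# Mount the current directory as /mnt and enter the nix-shell
# defined by its default.nix.
docker run --rm --volumes-from=nix -ti \
       -v "$PWD":/mnt -w /mnt nix nix-shell
```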

Making --help work for Nix commands (by installing man):

$ docker run --rm --volumes-from=nix nix nix-env -i man
$ docker run --rm --volumes-from=nix nix nix-env --help

Purging nix-store cache:

$ docker run --rm --volumes-from=nix nix nix-collect-garbage -d

Using the image as a base for a new Docker image, with ./Dockerfile:

FROM nix
RUN nix-env -i python
ENTRYPOINT ["/usr/local/bin/python"]

$ docker build -t python --rm=true --force-rm=true --no-cache=true .
$ docker run --rm -ti python

Creating Jupyter Docker-containers with Nix

Jupyter is the new name and brand for an awesome interactive data science programming scratchpad previously known as IPython Notebook. While there are plenty of pre-built Docker images available for Jupyter, for customized images I'm tempted to use Nix instead.

Here I describe the approach behind my example gists for creating such customized Jupyter notebook containers.

Note: Because these Jupyter notebook configurations are built with Nix, their configuration is immutable and it's not possible for the user to install any additional packages directly from a notebook.



With nix-shell (unless you are on a mac):

$ git clone https://gist.github.com/datakurre/49b6fbc4bafdef029183
$ cd 49b6fbc4bafdef029183
$ nix-shell --run "jupyter notebook"

With Docker (works also on a mac):

$ git clone https://gist.github.com/datakurre/49b6fbc4bafdef029183
$ cd 49b6fbc4bafdef029183
$ make run

Now, if you are on a mac, you need to figure out the IP address and port where the notebook is running:

$ docker-machine ip default
$ docker ps


First of all, both of my Jupyter gists are based on my recipe for building Docker containers with Nix.

It builds a Docker image with a Nix installation for building your Nix expressions, creates a Nix store data container to store and share built Nix expressions between builds, and creates a Docker-ready tarball from a built Nix closure.

Then a few picks from the expressions:

with import <nixpkgs> {};
let dependencies = rec {
  # ...
  jupyter = python35Packages.notebook.override {
    postInstall = with python35Packages; ''
      mkdir -p $out/bin
      ln -s ${jupyter_core}/bin/jupyter $out/bin
      wrapProgram $out/bin/jupyter \
        --prefix PYTHONPATH : "${notebook}/lib/python3.5/site-packages:$PYTHONPATH" \
        --prefix PATH : "${notebook}/bin:$PATH"
# ...

To be able to run the Jupyter notebook, I want to use the normal jupyter command. The problem is that the base command is defined in the Python package jupyter_core, while the actual Jupyter Notebook program and its jupyter notebook subcommand live in the separate notebook package. The Nix solution is to install the notebook package, but enhance its installation with a custom postInstall, which links the command from jupyter_core and wraps it to be aware of notebook and its dependencies.

with import <nixpkgs> {};
let dependencies = rec {
  builder = builtins.toFile "builder.sh" ''
    source $stdenv/setup
    mkdir -p $out
    cat > $out/kernel.json << EOF
  # ...
  python34 = pkgs.python34.buildEnv.override {
    extraLibs = with python34Packages; [
      # Kernel
      # Custom packages
      # ...
  python34_kernel = stdenv.mkDerivation rec {
    name = "python34";
    buildInputs = [ python34 ];
    json = builtins.toJSON {
      argv = [ "${python34}/bin/python3.4"
               "-m" "ipykernel" "-f" "{connection_file}" ];
      display_name = "Python 3.4";
      language = "python";
      env = { PYTHONPATH = ""; };
    inherit builder;
# ...

Next, I want to be able to define and configure as many Jupyter kernels as I need in my notebook. This pattern first defines the kernel environment: here Python 3.4 with the mandatory IPython packages, plus any number of Python packages I want to provide for the notebook.

The second part of the pattern simply defines an IPython kernel configuration (usually created with jupyter kernelspec in mutable Jupyter installations) so that the kernel uses the previously defined Python 3.4 environment. The builder that actually creates the configuration file is defined at an upper level of the expression so that it can easily be re-used with inherit builder;.
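
For reference, the kernel.json that this builder writes out should come out roughly as follows (illustrative: builtins.toJSON serializes the attribute set above, and the store path is abbreviated):

```json
{
  "argv": ["/nix/store/…-python34-env/bin/python3.4",
           "-m", "ipykernel", "-f", "{connection_file}"],
  "display_name": "Python 3.4",
  "language": "python",
  "env": {"PYTHONPATH": ""}
}
```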

With this approach, it is possible to have as many different and differently configured kernels as you want. It's possible to have both Python 3.4 and 3.5, or many different configurations for the same version. For example, when there's a major upgrade in some package, it's possible to have one kernel with the old version and another with the new one.
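
A parallel Python 3.5 kernel could be sketched by mirroring the python34_kernel derivation above (hypothetical; assumes a python35 environment defined the same way as python34):

```nix
  python35_kernel = stdenv.mkDerivation rec {
    name = "python35";
    buildInputs = [ python35 ];
    json = builtins.toJSON {
      argv = [ "${python35}/bin/python3.5"
               "-m" "ipykernel" "-f" "{connection_file}" ];
      display_name = "Python 3.5";
      language = "python";
      env = { PYTHONPATH = ""; };
    };
    inherit builder;
  };
```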

The example gists also include similarly configurable kernel configuration for R.

with import <nixpkgs> {};
let dependencies = rec {
  # ...
  jupyter_config_dir = stdenv.mkDerivation {
    name = "jupyter";
    buildInputs = [
    builder = writeText "builder.sh" ''
      source $stdenv/setup
      mkdir -p $out/etc/jupyter/kernels $out/etc/jupyter/migrated
      ln -s ${python34_kernel} $out/etc/jupyter/kernels/${python34_kernel.name}
      ln -s ${R_kernel} $out/etc/jupyter/kernels/${R_kernel.name}
      cat > $out/etc/jupyter/jupyter_notebook_config.py << EOF
      import os
      c.KernelSpecManager.whitelist = {
      c.NotebookApp.ip = os.environ.get('JUPYTER_NOTEBOOK_IP', 'localhost')
  # ...

The next most important part is the expression that composes all the defined kernels into a complete and immutable Jupyter configuration directory. Whitelisting the kernels in the configuration is required to hide the Python environment that runs the Jupyter notebook itself (it only has the notebook dependencies and is missing all the interesting libraries). The line with c.NotebookApp.ip lets Docker configure the notebook to accept connections from outside the container.

with import <nixpkgs> {};
let dependencies = rec {
  # ...
in with dependencies;
stdenv.mkDerivation rec {
  name = "jupyter";
  env = buildEnv { name = name; paths = buildInputs; };
  builder = builtins.toFile "builder.sh" ''
    source $stdenv/setup; ln -s $env $out
  buildInputs = [
  ] ++ stdenv.lib.optionals stdenv.isLinux [ bash fontconfig tini ];
  shellHook = ''
    mkdir -p $PWD/.jupyter
    export JUPYTER_CONFIG_DIR=${jupyter_config_dir}/etc/jupyter
    export JUPYTER_PATH=${jupyter_config_dir}/etc/jupyter
    export JUPYTER_DATA_DIR=$PWD/.jupyter
    export JUPYTER_RUNTIME_DIR=$PWD/.jupyter

Finally, we define a buildable environment installation, which mainly includes the jupyter command and its configuration. On Linux, a few extra dependencies are added to make Jupyter run in a Docker container. For the nix-shell command, the expression configures Jupyter to look up its configuration from the Nix-built configuration directory and to store volatile runtime files under the current working directory.

Environment variables are also the way to configure Jupyter to run properly in a Docker container. My Dockerfile configures Jupyter to look up its configuration from the directory created by my recipe for building Docker containers with Nix, and to store volatile runtime files under the host directory mounted at /mnt. In my example Makefile, that's configured to be the current working directory, which is also shown as the notebook home directory.

FROM scratch
ADD default.nix.tar.gz /
ENV FONTCONFIG_FILE="/etc/fonts/fonts.conf" \
    JUPYTER_CONFIG_DIR="/etc/jupyter" \
    JUPYTER_PATH="/etc/jupyter" \
    JUPYTER_DATA_DIR="/mnt/.jupyter" \
    JUPYTER_RUNTIME_DIR="/mnt/.jupyter"
ENTRYPOINT ["/bin/tini", "--", "/bin/jupyter"]
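
With that Dockerfile, the built image could then be run along these lines (hypothetical: the image name jupyter and the published port are assumptions mirroring the Makefile described above):

```shell
# Skip when Docker or a locally built "jupyter" image is unavailable.
command -v docker >/dev/null 2>&1 || exit 0
docker inspect --type=image jupyter >/dev/null 2>&1 || exit 0

# Mount the current directory as the notebook home (/mnt) and allow
# connections from outside the container.
docker run --rm -ti -p 8888:8888 -v "$PWD":/mnt \
       -e JUPYTER_NOTEBOOK_IP=0.0.0.0 jupyter notebook
```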

Note: Running a Jupyter notebook in a Docker container may require tini (or a supervisor) to allow Jupyter to spawn all the kernel processes it needs within the container.