Nix expressions as executable commands


Updated 2014-09-24: I learned that in a mixed (OSX and nixpkgs) environment, one should not set LD_LIBRARY_PATH, but should instead fix dynamic linking to use absolute paths. I also refactored my wrapper to use myEnvFun when required (see the buildout example).

Updated 2014-09-22: I was wrong about how nix-built Python environments could be used together with buildout and have updated this post to reflect my experiences.

My main tools for Python-based software development have long been virtualenv and buildout. I've used virtualenv to provide isolated Python installations (separate from the so often polluted system Python) and buildout for managing the required Python packages, the packages under development, and supporting software (like Redis or memcached).

Basically everything still works, but:

  • Managing clean Python virtualenvs only to avoid possible conflicts with system-installed packages feels like a lot of work for a small return.
  • Remembering to activate and deactivate the correct Python virtualenv is not fun either.
  • Also, while buildout provides an excellent tool (mr.developer) for managing the sources of all the project packages, it's far from optimal for building and managing supporting (non-Python) software.

I've also been using quite a bit of Vagrant and Docker, but because I'm mostly working on a Mac, those require a VM, which makes them much less convenient.

About Nix

I believe I first heard about the Nix package manager from Rok at the Barcelona Plone Testing Sprint in early 2013. It sounded a bit esoteric and complex back then, but after about twenty more months of virtualenvs, buildouts, Vagrantfiles, Docker containers and Puppet manifests... not so much anymore.

Currently, outside NixOS, I understand Nix as

  1. a functional language for describing configuration of software and
  2. a package manager for managing those configurations.

From my own experience, the easiest way to get familiar with Nix is to follow Domen's blog post about getting started with the Nix package manager. But to really make it a new tool in your toolbox, you should learn to write your own Nix expressions.

Even though the most common way to use the Nix package manager is to install Nix expressions into your current environment with nix-env, the expressions can also be used without really installing them, in a quite stateless way.

I'm not sure how proper a use of Nix this is, but it seems to work for me.

(Yes, I'm aware of myEnvFun, for creating named stateful development environments with Nix expressions, but here I'm trying to use Nix in a more stateless, Docker-inspired way.)

Nix expressions as virtualenv replacements

It's almost never safe to install Python software directly into your system Python. Different software may require different versions of the same libraries, and sooner or later the conflicting requirements break your Python installation.

Let's take a small utility called i18ndude as an example of software with dependencies way too frightening for any system Python. Traditionally, you would install it into a separate Python virtualenv and use it with the following steps:

$ virtualenv ~/.../i18ndude-env
$ source ~/.../i18ndude-env/bin/activate
$ pip install i18ndude
$ i18ndude
$ deactivate

With an executable Nix expression, I can use it in a stateless way by simply executing the expression:

$ ./i18ndude.nix
➜ /nix/store/gjhzw843qs1736r0qcd9mz69247g4svb-python2.7-i18ndude-3.3.5/bin/i18ndude
usage: i18ndude [-h]
i18ndude: error: too few arguments

Maybe even better, I can install the expression into my default Nix environment with

$ nix-env -i -f i18ndude.nix

and use it like it would have been installed into my system Python in the first place (but this time without polluting it):

$ i18ndude
usage: i18ndude [-h]
i18ndude: error: too few arguments

No more activating or deactivating virtualenvs, not to mention needing to remember their names or locations.

For the most common Python software, it's not even required to write your own expression; you can simply install the contributed expressions directly from the Nix packages repository.

The easiest way to check for existing expressions in nixpkgs Python packages seems to be grepping the package list with nix-env -qaP \* | grep something.

If you'd like to see more packages available by default, you can contribute them to upstream with a simple pull request.

Anyway, since i18ndude was not yet available at the time of writing (although most of its dependencies were), this is what my expression for it looked like:

#!/usr/bin/env nix-exec
with import <nixpkgs> { };

let dependencies = rec {
  ordereddict = buildPythonPackage {
    name = "ordereddict-1.1";
    src = fetchurl {
      url = "";
      md5 = "a0ed854ee442051b249bfad0f638bbec";
    };
  };
};

in with dependencies; rec {
  i18ndude = buildPythonPackage {
    name = "i18ndude-3.3.5";
    src = fetchurl {
      url = "";
      md5 = "ef599b1c64eaabba4049fcd2b027ba21";
    };
    propagatedBuildInputs = [
      # ...
    ];
  };
}

Nix expression for nix-exec shell wrapper

Of course, Nix expressions are not executable by default. To get them to work as I wanted, I had to create a tiny wrapper script to be used in the hash-bang line #!/usr/bin/env nix-exec of executable expressions.

The script simply calls nix-build and then runs the named executable from the build output directory (with some standard environment variables set). To put it another way, the wrapper script translates the following command:

$ ./i18ndude.nix --help

into:

$ `nix-build --no-out-link i18ndude.nix`/bin/i18ndude --help

It's not required to suffix the expression files with .nix; they could also be named without any suffix to look more like real commands.
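The wrapper derives the name of the executable to run from the expression's file path: the basename, minus any @-suffix, minus the optional .nix suffix (the shell parameter expansions cmd=${1##*/}; cmd=${cmd%%@*}; cmd=${cmd%.nix} shown later in the script). Here's a small Python sketch of that derivation logic; the helper name command_name is mine, not part of the wrapper:

```python
import os

def command_name(path):
    """Derive the command to execute from a Nix expression's file path,
    mirroring the wrapper's shell parameter expansions."""
    cmd = os.path.basename(path)   # ${1##*/}  -- strip the directory part
    cmd = cmd.split('@', 1)[0]     # ${cmd%%@*} -- strip any @-suffix
    if cmd.endswith('.nix'):       # ${cmd%.nix} -- strip the optional suffix
        cmd = cmd[:-len('.nix')]
    return cmd
```

So both ./i18ndude.nix and a suffix-less ./i18ndude resolve to the same executable name inside the build output's bin directory.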

The wrapper script itself, of course, can be installed from a Nix expression into your default Nix environment with nix-env -i -f filename.nix:

with import <nixpkgs> { };

stdenv.mkDerivation {
  name = "datakurre-nix-exec-1.2.1";

  builder = builtins.toFile "" "
    source $stdenv/setup
    mkdir -p $out/bin
    echo \"#!/bin/bash
build=\\`nix-build --no-out-link \\$1\\`
if [ \\$build ]; then


  if [ -d \\$build/dev-envs ]; then
    source \\\"\\$build/dev-envs/\\\"*

    export TZ=\\\"\\$MY_TZ\\\"
    export PATH=\\\"\\$PATH:\\$MY_PATH\\\"
    export http_proxy=\\\"\\$MY_http_proxy\\\"
    export ftp_proxy=\\\"\\$MY_ftp_proxy\\\"

    export CFLAGS=\\`echo \\$NIX_CFLAGS_COMPILE|sed 's/-isystem /-I/g'\\`
    export PATH=\\\"\\$MY_PATH\\\"

  cmd=\\$\{1##*/\}; cmd=\\$\{cmd%%@*\}; cmd=\\$\{cmd%.nix\}
  paths=(\\\"\\$build/bin\\\" \\\"\\$build/sbin\\\" \\\"\\$build/libexec\\\")
  for path in \\\"\\$\{paths[@]\}\\\"; do
    if [ -f \\\"\\$path/\\$\{cmd\}\\\" ]; then

  if [ -t 1 ]; then echo \\\"➜\\\" \\$cmd \\\"\\$\{@:2\}\\\"; fi
  \\\"\\$cmd\\\" \\\"\\$\{@:2\}\\\"
\" > $out/bin/nix-exec
    chmod a+x $out/bin/nix-exec

The wrapper does not execute the expression-defined command in a fully clean environment (the only isolation is the one myEnvFun provides), but mostly prepends everything defined by the expression into its surrounding execution environment (so that its paths are preferred over the versions in the current environment).

A mostly positive side effect of using Nix expressions like this (only building them, but not installing them into any environment) is that they can be cleaned from the disk at any time simply with:

$ nix-collect-garbage

Nix expressions with buildout

Update 2014-09-24: The example was updated to use myEnvFun to simplify the wrapper script.

Update 2014-09-22: I originally covered Nix expressions with buildout as an example of replacing Python virtualenvs with Nix. Unfortunately, because of some buildout limitations that didn't work out as I expected...

A very special case of Python development environment is the one with buildout, which is required e.g. for all development with Plone.

When using Nix expressions with buildout, there is one very special limitation: buildout does not support any additional Python packages installed into your Nix expression based environment.

That's because buildout sees the Nix-defined Python as a system Python, and buildout does its best to prevent any extra packages installed into the system Python from being available to the buildout by default.

An additional issue for buildout is that the extra Python packages defined in a Nix expression are not installed directly under the Python installation, but are made available only when that Python is executed through a special Nix-generated wrapper.

But to cut this short, here's an example executable Nix expression, which could be used as a Plone-compatible Python environment. It includes a clean Python installation with some additional (non-Python) libraries required by Plone buildout to be able to compile a few special Python packages (like Pillow, lxml and python-ldap):

#!/usr/bin/env nix-exec
with import <nixpkgs> { };

let dependencies = rec {
  buildInputs = [
    # ...
  ];
};

in with dependencies; buildEnv {
  name = "nix";
  paths = [(myEnvFun { name = "nix"; inherit buildInputs; })] ++ buildInputs;
}

With this Nix expression saved as an executable ./python.nix, it can be used to execute buildout's bootstrap and buildout, and eventually to launch the Plone site:

$ ./python.nix
$ ./python.nix bin/buildout  # or ./python.nix -S bin/buildout
$ ./python.nix bin/instance fg

I must admit that this is not as convenient as it should be, because each command (bootstrap, buildout and the final buildout-generated script) must be executed explicitly through our executable Nix expression defining the required Python environment.

Also, probably because my wrapper does not completely isolate the Nix expression call from its surrounding environment, sometimes it's necessary to call buildout with -S given for the Python expression, like ./python.nix -S bin/buildout (otherwise buildout does not find its own bootstrapped installation).

On the other hand, this approach defines the execution environment explicitly and statelessly for each call.

P.S. Because I'm working with RHEL systems, it's nice to use a Python configured similarly to theirs. With Nix, it's easy to define local overrides for existing packages (nixpkgs derivations) with a special function containing only the required configuration changes. The following ~/.nixpkgs/config.nix example configures Python with a unicode flag similar to RHEL's native Python:

{
  packageOverrides = pkgs : with pkgs; rec {
    python27 = pkgs.python27.overrideDerivation (args: {
      configureFlags = "--enable-shared --with-threads --enable-unicode=ucs4";
    }) // { modules = pkgs.python27.modules; };
  };
}
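To check whether the override actually took effect, one quick probe is sys.maxunicode, which reflects the Unicode build width; a small sketch (the helper name is mine):

```python
import sys

def is_wide_unicode_build():
    """True for a ucs4 ("wide") Python 2 build, which reports the full
    Unicode range; a ucs2 ("narrow") build reports 0xFFFF instead.
    (Python >= 3.3 always reports 0x10FFFF thanks to PEP 393.)"""
    return sys.maxunicode == 0x10FFFF
```

Running this inside ./python.nix should return True once the ucs4 flag is in effect.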

Nix expressions as stateless development environments

Updated 2014-09-24: Because of OSX, I had to fix the openldap expression to link one library with an absolute path, so that it would not resolve an OSX library instead of the Nix-built one.

In test driven development, the whole development environment can be built just around the selected test runner.

Here's an example Nix expression which, saved as an executable file called ./py.test, can be used to execute the pytest test runner with a couple of selected plugins and all the dependencies required by the tested software in question:

#!/usr/bin/env nix-exec
with import <nixpkgs> { };

let dependencies = rec {
  execnet = buildPythonPackage {
    name = "execnet-1.2.0";
    src = fetchurl {
      url = "";
      md5 = "1886d12726b912fc2fd05dfccd7e6432";
    doCheck = false;
  pycparser = buildPythonPackage {
    name = "pycparser-2.10";
    src = fetchurl {
      url = "";
      md5 = "d87aed98c8a9f386aa56d365fe4d515f";
  cffi = buildPythonPackage {
    name = "cffi-0.8.6";
    src = fetchurl {
      url = "";
      md5 = "474b5a68299a6f05009171de1dc91be6";
    propagatedBuildInputs = [ pycparser ];
  pytest_cache = buildPythonPackage {
    name = "pytest-cache-1.0";
    src = fetchurl {
      url = "";
      md5 = "e51ff62fec70a1fd456d975ce47977cd";
    propagatedBuildInputs = [
  pytest_flakes = buildPythonPackage {
    name = "pytest-flakes-0.2";
    src = fetchurl {
      url = "";
      md5 = "44b8f9746fcd827de5c02f14b01728c1";
    propagatedBuildInputs = [
  pytest_pep8 = buildPythonPackage {
    name = "pytest-pep8-1.0.6";
    src = fetchurl {
      url = "";
      md5 = "3debd0bac8f63532ae70c7351e73e993";
    propagatedBuildInputs = [
  buildInputs = [
    (python27Packages.pytest.override {
      propagatedBuildInputs = [
    (lib.overrideDerivation openldap (args: {
      postBuild = if stdenv.isDarwin then ''
        install_name_tool -change /libsasl2.dylib ${cyrus_sasl}/lib/libsasl2.dylib servers/slapd/slapadd
     '' else null;

in with dependencies; buildEnv {
  name = "nix";
  paths = [(myEnvFun { name = "nix"; inherit buildInputs; })] ++ buildInputs;

In other words, this expression could work as a stateless environment for developing the product in question:

$ ./py.test
➜ /nix/store/a2w3hwc66gqm6bncic8km6b69lw2byc6-py.test/bin/py.test
================================== test session starts ==================================
platform darwin -- Python 2.7.8 -- pytest-2.5.1
plugins: flakes, cache, pep8
collected 2 items

src/.../tests/ ..
=============================== 2 passed in 0.22 seconds ================================

And, once the development is completed, another expression could be defined for using the developed product.

Nix expression for Robot Framework test runner

Finally, as a bonus, here's an expression which configures a Python environment with Robot Framework and its Selenium2Library, together with PhantomJS:

#!/usr/bin/env nix-exec
with import <nixpkgs> { };

let dependencies = rec {
  docutils = buildPythonPackage {
    name = "docutils-0.12";
    src = fetchurl {
      url = "";
      md5 = "4622263b62c5c771c03502afa3157768";
  selenium = buildPythonPackage {
    name = "selenium-2.43.0";
    src = fetchurl {
      url = "";
      md5 = "bf2b46caa5c1ea4b68434809c695d69b";
  decorator = buildPythonPackage {
    name = "decorator-3.4.0";
    src = fetchurl {
      url = "";
      md5 = "1e8756f719d746e2fc0dd28b41251356";
  robotframework = buildPythonPackage {
    name = "robotframework-2.8.5";
    src = fetchurl {
      url = "";
      md5 = "2d2c6938830f71a6aa6f4be32227997f";
    propagatedBuildInputs = [
  robotframework-selenium2library = buildPythonPackage {
    name = "robotframework-selenium2library-1.5.0";
    src = fetchurl {
      url = "";
      md5 = "07c64a9e183642edd682c2b79ba2f32c";
    propagatedBuildInputs = [

in with dependencies; buildEnv {
  name = "pybot";
  paths = [
    (robotframework.override {
      propagatedBuildInputs = [ robotframework-selenium2library ];

Since you may need differently configured Robot Framework installations (with different add-on keyword libraries installed) for different projects, this should be a good fit as an executable Nix expression:

$ ./pybot.nix
➜ /nix/store/q15bimgng25qcxkq2q10finyk0n6qkm2-pybot/bin/pybot
[ ERROR ] Expected at least 1 argument, got 0.

Try --help for usage information.

Asynchronous stream iterators and experimental promises for Plone


This post may contain traces of legacy Zope2 and Python 2.x.

Some may think that Plone is bad at concurrency, because it's not common to deploy it with WSGI; instead, it runs on top of a barely known, last-millennium asynchronous HTTP server called Medusa.

See, the out-of-the-box installation of Plone launches only a single asynchronous HTTP server with just two fixed long-running worker threads. And it's way too easy to write custom code that keeps those worker threads busy (for example, by writing blocking calls to external services), effectively resulting in denial of service for the rest of the incoming requests.

Well, as far as I know, the real bottleneck is not Medusa, but the way ZODB database connections work. It seems that to optimize the database connection related caches, ZODB is best used with a fixed number of concurrent worker threads and one dedicated database connection per thread. Finally, MVCC in ZODB means that each thread can serve only one request at a time.

In practice, of course, Plone-sites use ZEO-clustering (and replication) to overcome the limitations described above.

Back to the topic (with a disclaimer): the methods described in this blog post have not been battle-tested yet and may turn out to be bad ideas. Still, it's been fun to figure out how our old asynchronous friend, Medusa, could be used to serve more concurrent requests in certain special cases.

ZPublisher stream iterators

If you have been working with Plone long enough, you must have heard the rumor that blobs, which basically means files and images, are served from the filesystem in some special non-blocking way.

So, when someone downloads a file from Plone, the current worker thread only initiates the download and can then continue to serve the next request. The actual file is left to be served asynchronously by the main thread.

This is possible because of a ZPublisher feature called stream iterators (search for the IStreamIterator interface and its implementations in Zope2). Stream iterators are basically a way to postpone I/O-bound operations into the main thread's asyncore loop through a special Medusa-level producer object.

And because stream iterators are consumed only within the main thread, they come with some very strict limitations:

  • they are executed only after a completed transaction so they cannot interact with the transaction anymore
  • they must not read from the ZODB (because their origin connection is either closed or in use of their origin worker thread)
  • they must not fail unexpectedly, because you don't want to crash the main thread
  • they must not block the main thread, for obvious reasons.

Because of these limitations, the stream iterators, as such, are usable only for the purpose they have been made for: streaming files or similar immediately available buffers.
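To make that contract concrete, here's a simplified, framework-free Python sketch of a stream-iterator-like object (the class name is mine): it yields pre-computed chunks via a Python 2 style next() and reports its total size via __len__() (roughly what ZPublisher needs for the Content-Length header), so it never reads the database, blocks, or fails in the main thread:

```python
class SimpleStreamIterator(object):
    """A simplified sketch of a ZPublisher-style stream iterator:
    it serves an immediately available in-memory buffer in chunks,
    so consuming it in the main thread is always safe."""

    def __init__(self, data, streamsize=1 << 16):
        self._data = data
        self._streamsize = streamsize
        self._position = 0

    def __iter__(self):
        return self

    def next(self):  # Python 2 iterator protocol, as in ZPublisher
        if self._position < len(self._data):
            chunk = self._data[self._position:self._position + self._streamsize]
            self._position += len(chunk)
            return chunk
        raise StopIteration

    __next__ = next  # also works as a Python 3 iterator

    def __len__(self):
        # Used to report the total size up front (e.g. for Content-Length)
        return len(self._data)
```

The real implementations add filesystem access and interface declarations on top of this shape, but the iteration contract is the same.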

Asynchronous stream iterators

What if you could use ZPublisher's stream iterator support also for CPU-bound post-processing tasks? Or for post-processing tasks requiring calls to external web services or command-line utilities?

If you have a local Plone instance running somewhere, you can add the following proof-of-concept code and its slow_ok method into a new External Method (also available as a gist):

import StringIO
import threading

from zope.interface import implements
from ZPublisher.Iterators import IStreamIterator
from ZServer.PubCore.ZEvent import Wakeup

from zope.globalrequest import getRequest

class zhttp_channel_async_wrapper(object):
    """Medusa channel wrapper to defer producers until released"""

    def __init__(self, channel):
        # (executed within the current Zope worker thread)
        self._channel = channel

        self._mutex = threading.Lock()
        self._deferred = []
        self._released = False
        self._content_length = 0

    def _push(self, producer, send=1):
        if (isinstance(producer, str)
                and producer.startswith('HTTP/1.1 200 OK')):
            # Fix Content-Length to match the real content length
            # (an alternative would be to use chunked encoding)
            producer = producer.replace(
                'Content-Length: 0\r\n',
                'Content-Length: {0:s}\r\n'.format(str(self._content_length)))
        self._channel.push(producer, send)

    def push(self, producer, send=1):
        # (executed within the current Zope worker thread)
        with self._mutex:
            if not self._released:
                self._deferred.append((producer, send))
            else:
                self._push(producer, send)

    def release(self, content_length):
        # (executed within the exclusive async thread)
        self._content_length = content_length
        with self._mutex:
            for producer, send in self._deferred:
                self._push(producer, send)
            self._released = True
        Wakeup()  # wake up the asyncore loop to read our results

    def __getattr__(self, key):
        return getattr(self._channel, key)

class AsyncWorkerStreamIterator(StringIO.StringIO):
    """Stream iterator to publish the results of the given func"""

    implements(IStreamIterator)

    def __init__(self, func, response, streamsize=1 << 16):
        # (executed within the current Zope worker thread)

        # Init buffer
        StringIO.StringIO.__init__(self)
        self._streamsize = streamsize

        # Wrap the Medusa channel to wait for the func results
        self._channel = response.stdout._channel
        self._wrapped_channel = zhttp_channel_async_wrapper(self._channel)
        response.stdout._channel = self._wrapped_channel

        # Set content-length as required by ZPublisher
        response.setHeader('content-length', '0')

        # Fire the given func in a separate thread
        self.thread = threading.Thread(target=func, args=(self.callback,))
        self.thread.start()

    def callback(self, data):
        # (executed within the exclusive async thread)
        self.write(data)
        self.seek(0)
        self._wrapped_channel.release(len(data))

    def next(self):
        # (executed within the main thread)
        if not self.closed:
            data = self.read(self._streamsize)
            if not data:
                self.close()
            else:
                return data
        raise StopIteration

    def __len__(self):
        return len(self.getvalue())

def slow_ok_worker(callback):
    # (executed within the exclusive async thread)
    import time
    time.sleep(1)
    callback('OK')

def slow_ok():
    """The publishable example method"""
    # (executed within the current Zope worker thread)
    request = getRequest()
    return AsyncWorkerStreamIterator(slow_ok_worker, request.response)

The above code example simulates a trivial post-processing task with time.sleep, but the approach should apply to anything from building a PDF from extracted data to calling an external web service before returning the final response.

An out-of-the-box Plone instance can handle only two (2) concurrent calls to a method that takes one (1) second to complete.

In the above code, however, the post-processing is delegated to a completely new thread, freeing the Zope worker thread to continue handling the next request. Because of that, we get much, much better concurrency:
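The rough arithmetic behind that claim: with two fixed worker threads and a blocking one-second handler, 100 requests serialize into roughly 50 seconds of wall time, while per-request threads bring it down toward the one second a single handler takes. This can be simulated with plain threads and scaled-down delays (all names here are mine, not part of the proof-of-concept):

```python
import threading
import time

def run_with_worker_pool(n_requests, n_workers, handler_time):
    """Fixed worker threads, blocking handler: requests queue up, so
    wall time is roughly (n_requests / n_workers) * handler_time."""
    lock = threading.Lock()
    pending = list(range(n_requests))

    def worker():
        while True:
            with lock:
                if not pending:
                    return
                pending.pop()
            time.sleep(handler_time)  # blocking "post-processing"

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.time() - start

def run_with_async_iterators(n_requests, handler_time):
    """Post-processing delegated to one thread per request: the workers
    are freed immediately, so wall time approaches handler_time."""
    threads = [threading.Thread(target=time.sleep, args=(handler_time,))
               for _ in range(n_requests)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.time() - start
```

With n_requests=100, n_workers=2 and handler_time=0.01, the pooled model takes around half a second while the fan-out model finishes in a few hundredths, mirroring the 50 s versus ~1.4 s difference at full scale.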

$ ab -c 100 -n 100 http://localhost:8080/Plone/slow_ok
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd,
Licensed to The Apache Software Foundation,

Benchmarking localhost (be patient).....done

Server Software:        Zope/(2.13.22,
Server Hostname:        localhost
Server Port:            8080

Document Path:          /Plone/slow_ok
Document Length:        2 bytes

Concurrency Level:      100
Time taken for tests:   1.364 seconds
Complete requests:      100
Failed requests:        0
Write errors:           0
Total transferred:      15400 bytes
HTML transferred:       200 bytes
Requests per second:    73.32 [#/sec] (mean)
Time per request:       1363.864 [ms] (mean)
Time per request:       13.639 [ms] (mean, across all concurrent requests)
Transfer rate:          11.03 [Kbytes/sec] received

Connection Times (ms)
               min  mean[+/-sd] median   max
Connect:        1    2   0.6      2       3
Processing:  1012 1196  99.2   1202    1359
Waiting:     1011 1196  99.3   1202    1359
Total:       1015 1199  98.6   1204    1361

Percentage of the requests served within a certain time (ms)
  50%   1204
  66%   1256
  75%   1283
  80%   1301
  90%   1331
  95%   1350
  98%   1357
  99%   1361
  100%   1361 (longest request)

Of course, most of the stream iterator limits still apply: an asynchronous stream iterator must not access the database, which limits the possible use cases a lot. For the same reason, plone.transformchain is also effectively skipped (no Diazo or Blocks), which makes this usable only for non-HTML responses.


To go experimenting even further, what if you could do similar non-blocking asynchronous processing in the middle of a request? For example, freeing the current Zope worker thread while fetching a missing or outdated RSS feed in a separate thread, and only then continuing to render the final response.

An interesting side effect of using stream iterators is that they allow you to inject code into the main thread's asynchronous loop. And when you are there, it's even possible to queue a completely new request for ZPublisher to handle.

So, how does the following approach sound?

  • let add-on code annotate requests with promises for fetching the required data (each promise would be a standalone function, which could be executed under the asynchronous stream iterator rules and, when called, would resolve into a value, effectively the future of the promise), for example:

    def content(self):
        if 'my_unique_key' in IFutures(self.request):
            return IFutures(self.request)['my_unique_key']
        else:
            IPromises(self.request)['my_unique_key'] = my_promise_func
            return u''
  • when promises are found, the response is turned into an asynchronous stream iterator, which then executes all the promises in parallel threads and collects the resolved values, the futures:

    def transformIterable(self, result, encoding):
        if IPromises(self.request):
            return PromiseWorkerStreamIterator(
                IPromises(self.request), self.request, self.request.response)
        else:
            return None
  • finally, we'd wrap the current Medusa channel in a way that, instead of publishing any data yet, a cloned request is queued for ZPublisher (similarly to how retries are done after conflict errors), with the cloned request annotated to carry the resolved futures:

    def next(self):
       if self._futures:
           self._futures = {}  # mark consumed to raise StopIteration
           from ZServer.PubCore import handle
           handle('Zope2', self._zrequest, self._zrequest.response)
           raise StopIteration
  • now the add-on code in question would find the futures in the request, would not issue any promises anymore, and the request would result in a normal response pushed all the way to the browser that initiated the original request.
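The promise-resolution step in the flow above could be sketched in plain Python like this (a toy helper of my own, not the actual experimental.promises code): each promise function runs in its own thread, and the resolved values are collected into a futures mapping keyed like the promises.

```python
import threading

def resolve_promises(promises):
    """Run each promise function in its own thread and collect the
    resolved values ("futures") into a dict with the same keys."""
    futures = {}
    lock = threading.Lock()

    def resolve(key, func):
        value = func()  # a promise resolves into a value, its future
        with lock:
            futures[key] = value

    threads = [threading.Thread(target=resolve, args=(key, func))
               for key, func in promises.items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return futures
```

In the real implementation this collection would happen under the asynchronous stream iterator rules, before the cloned request is queued with the futures attached.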

I'm not sure yet how good or bad an idea this would be, but I've been tinkering with a proof-of-concept implementation called experimental.promises to figure it out.

Of course, there are limits and issues to be aware of. Handling the same request twice is not free, which makes this approach effective only when some significant processing can be moved outside the worker threads. Also, because there may be other requests between the first and the second pass (freeing the worker to handle other requests is the whole point), the database may change between the passes (kind of breaking the MVCC promise). Finally, it's currently possible to write code that always sets new promises and ends up in a never-ending loop.

Anyway, if you are interested in trying out these approaches (at your own risk, of course), feel free to ask more via Twitter or IRC.

Cross-Browser Selenium testing with Robot Framework and Sauce Labs


How do you keep your Selenium tests up-to-date with your ever-changing user interface? Do you try to fix your existing tests, or do you just re-record them over and over again?

In the Plone community, we have chosen the former approach (Plone is a popular open source CMS written in Python). We use a tool called Robot Framework to write our Selenium acceptance tests as maintainable BDD-style stories. Robot Framework's extensible test language allows us to describe Plone's features in natural language sentences, which can then be expanded into either our domain-specific or Selenium WebDriver API based testing language.

As an example, have a look at the following real-life acceptance test case on the next generation multilingual support in Plone:

*** Test Cases ***

Scenario: As an editor I can add new translation
    Given a site owner
      and a document in English
      and a document in Catalan
     When I view the Catalan document
      and I add the document in English as a translation
      and I switch to English
     Then I can view the document in English

*** Keywords ***

a site owner
    Enable autologin as  Manager

a document in English
    Create content  type=Document
    ...  container=/${PLONE_SITE_ID}/en/
    ...  id=an-english-document
    ...  title=An English Document

a document in Catalan
    Create content  type=Document
    ...  container=/${PLONE_SITE_ID}/ca/
    ...  id=a-catalan-document
    ...  title=A Catalan Document

I view the Catalan document
    Go to  ${PLONE_URL}/ca/a-catalan-document
    Wait until page contains  A Catalan Document

I add the document in English as a translation
    Click Element  css=#plone-contentmenu-multilingual .actionMenuHeader a
    Wait until element is visible  css=#_add_translations

    Click Element  css=#_add_translations
    Wait until page contains element
    ...  css=#formfield-form-widgets-content-widgets-query .searchButton

    Click Element  css=#formfield-form-widgets-content-widgets-query .searchButton
    Wait until element is visible  css=#form-widgets-content-contenttree a[href$='/plone/en']

    Click Element  css=#form-widgets-content-contenttree a[href$='/plone/en']
    Wait until page contains  An English Document

    Click link  xpath=//*[contains(text(), 'An English Document')]/parent::a
    Click Element  css=.contentTreeAdd

    Select From List  name=form.widgets.language:list  en
    Click Element  css=#form-buttons-add_translations
    Click Element  css=#contentview-view a
    Wait until page contains  A Catalan Document

I switch to English
    Click Link  English
    Wait until page contains  An English Document

I can view the document in English
    Page Should Contain Element
    ...  xpath=//*[contains(@class, 'documentFirstHeading')][./text()='An English Document']
    Page Should Contain Element
    ...  xpath=//ul[@id='portal-languageselector']/li[contains(@class, 'currentLanguage')]/a[@title='English']

About Robot Framework

Robot Framework is a generic keyword-driven test automation framework for acceptance testing and acceptance test-driven development. It's a neat tool by itself, yet its testing capabilities can be extended by implementing custom test libraries in either Python or Java – without any other limits.
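For example, a minimal custom test library can be an ordinary Python class whose method names become keywords (make_greeting is usable as "Make Greeting" in a test suite). The library below is a hypothetical illustration, not an existing package:

```python
class GreetingLibrary(object):
    """A minimal (hypothetical) Robot Framework test library sketch:
    any plain Python class works, and its public method names are
    exposed as keywords in the test language."""

    def make_greeting(self, name):
        # Becomes the "Make Greeting" keyword, returning a value
        return 'Hello, {0}!'.format(name)

    def greeting_should_contain(self, greeting, part):
        # Becomes "Greeting Should Contain"; keywords signal failure
        # by raising an exception, as Robot Framework expects
        if part not in greeting:
            raise AssertionError(
                '{0!r} does not contain {1!r}'.format(greeting, part))
```

A suite would take it into use with "Library  GreetingLibrary" in its settings table, just like Selenium2Library is imported later in this post.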

The super powers of Robot Framework come from its user keyword feature: in addition to the keywords provided by the extension libraries, users can create new higher-level keywords from the existing ones using the same syntax that is used for creating test cases. And, of course, everything can be parametrized with variables.

How could you cross-browser test your web applications with Robot Framework and its popular Selenium WebDriver keyword library?

Installing Robot Framework

To get started, we need Firefox, Python 2.7 (or 2.6) with the pip package manager, and the virtualenv isolation package installed. On Linux distributions, all of these should be available directly from the system repositories (e.g. apt-get install python-virtualenv on Ubuntu), but on OS X and Windows some extra steps are needed.

Once all these prerequisites are available, you can install Robot Framework and all the requirements for Selenium testing with:

$ virtualenv robot --no-site-packages
$ robot/bin/pip install robotframework-selenium2library

The installation process should look something like:

$ virtualenv robot --no-site-packages
New python executable in robot/bin/python
Installing Setuptools...done.
Installing Pip...done.

$ robot/bin/pip install robotframework-selenium2library
Downloading/unpacking robotframework-selenium2library
  Downloading robotframework-selenium2library-1.5.0.tar.gz (216kB): 216kB downloaded
  Running egg_info for package robotframework-selenium2library
Downloading/unpacking decorator>=3.3.2 (from robotframework-selenium2library)
  Downloading decorator-3.4.0.tar.gz
  Running egg_info for package decorator
Downloading/unpacking selenium>=2.32.0 (from robotframework-selenium2library)
  Downloading selenium-2.40.0.tar.gz (2.5MB): 2.5MB downloaded
  Running egg_info for package selenium
Downloading/unpacking robotframework>=2.6.0 (from robotframework-selenium2library)
  Downloading robotframework-2.8.4.tar.gz (579kB): 579kB downloaded
  Running egg_info for package robotframework
Downloading/unpacking docutils>=0.8.1 (from robotframework-selenium2library)
  Downloading docutils-0.11.tar.gz (1.6MB): 1.6MB downloaded
  Running egg_info for package docutils
Installing collected packages: robotframework-selenium2library, decorator, selenium, robotframework, docutils
  Running install for robotframework-selenium2library
  Running install for decorator
  Running install for selenium
  Running install for robotframework
  Running install for docutils
 Successfully installed robotframework-selenium2library decorator selenium robotframework
 Cleaning up...

And we should end up with the Robot Framework executable installed at:

$ robot/bin/pybot

Writing a Selenium test suite in robot

In the following examples, we use Robot Framework's space separated plain text test format. In this format, a simple test suite can be written into a single plain text file named with a .robot suffix. To maximize readability, two or more spaces are required to separate the different parts of the test syntax on the same line.
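For instance, in the following hypothetical fragment, the runs of two spaces separate the keyword name from its argument, while the single spaces inside the expected message remain part of a single argument:

```robotframework
*** Test Cases ***

Separator example
    # Two spaces separate the keyword from its argument;
    # the single spaces inside the message belong to the argument
    Page should contain  Incorrect username or password.
```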

In the first example, we:

  • import Selenium2Library to enable Selenium keywords (because only the built-in keywords are available by default)
  • define simple test setup and teardown keywords
  • implement a simple test case using the imported Selenium keywords
  • use a tag to categorize the test case
  • abstract the test with a variable to make it easier to update the test later.

Now, write the following complete Selenium test suite into a file named test_saucelabs_login.robot:

*** Settings ***

Library  Selenium2Library

Test Setup  Open test browser
Test Teardown  Close test browser

*** Variables ***

${LOGIN_FAIL_MSG}  Incorrect username or password.

*** Test Cases ***

Incorrect username or password
    [Tags]  Login
    Go to

    Page should contain element  id=username
    Page should contain element  id=password

    Input text  id=username  anonymous
    Input text  id=password  secret

    Click button  id=submit

    Page should contain  ${LOGIN_FAIL_MSG}

*** Keywords ***

Open test browser
    Open browser  about:

Close test browser
    Close all browsers

A standalone test suite may contain up to four sections – *** Settings ***, *** Variables ***, *** Test Cases *** and *** Keywords *** – but the *** Test Cases *** section is always required. To summarize the sections:

Settings
    Imports all the used keyword libraries and user keyword resource files. Contains all test suite level configuration such as suite/test setup and teardown instructions.
Variables
    Defines all suite level variables with their default values.
Test Cases
    Contains all the test cases for the test suite.
Keywords
    Contains all the suite level user keyword implementations.

For the complete list of all available features for each of these sections, refer to the Robot Framework User Guide.

Running a robot test suite

The default Robot Framework test runner is called pybot. Next, we can execute our test suite and create a test report from the execution by typing:

$ robot/bin/pybot test_saucelabs_login.robot

Besides opening a web browser, our example test suite run should look like:

$ robot/bin/pybot test_saucelabs_login.robot
Test Saucelabs Login
Incorrect username or password                                        | PASS |
Test Saucelabs Login                                                  | PASS |
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
Output:  /.../output.xml
Log:     /.../log.html
Report:  /.../report.html

And the test run should result in an HTML test report file named report.html and a complete step by step test log file named log.html. The latter should look like:

To see all the available options for the test runner, just type:

$ robot/bin/pybot --help

Writing a Sauce Labs Selenium test suite in robot

Now that we have a working Robot Framework installation and a functional test suite, we can continue to refactor the test suite to support cross-browser testing with Sauce Labs.

Let's update our test_saucelabs_login.robot to look like:

*** Settings ***

Library  Selenium2Library
Library  SauceLabs

Test Setup  Open test browser
Test Teardown  Close test browser

*** Variables ***

${BROWSER}  firefox
${REMOTE_URL}
${DESIRED_CAPABILITIES}  browserName:${BROWSER}

${LOGIN_FAIL_MSG}  Incorrect username or password.

*** Test Cases ***

Incorrect username or password
    [Tags]  Login
    Go to

    Page should contain element  id=username
    Page should contain element  id=password

    Input text  id=username  anonymous
    Input text  id=password  secret

    Click button  id=submit

    Page should contain  ${LOGIN_FAIL_MSG}

*** Keywords ***

Open test browser
    Open browser  about:  ${BROWSER}
    ...  remote_url=${REMOTE_URL}
    ...  desired_capabilities=${DESIRED_CAPABILITIES}

Close test browser
    Run keyword if  '${REMOTE_URL}' != ''
    ...  Report Sauce status
    ...  ${SUITE_NAME} | ${TEST_NAME}
    Close all browsers

All the things we changed:

  • a new keyword library called SauceLabs is imported
  • keyword Open test browser is abstracted to be configurable with variables to support running the tests at Sauce Labs
  • keyword Close test browser is enhanced to send test details and test result to Sauce Labs by calling the new Report Sauce status keyword.

Next, we must implement our custom Sauce Labs keyword library in Python. Because the library is imported with the name SauceLabs, create a file named next to the test suite, providing the new Report Sauce status keyword:

import re
import requests
import simplejson as json

from robot.api import logger
from robot.libraries.BuiltIn import BuiltIn

USERNAME_ACCESS_KEY = re.compile('^(http|https):\/\/([^:]+):([^@]+)@')

def report_sauce_status(name, status, tags=[], remote_url=''):
    # Parse username and access_key from the remote_url
    assert USERNAME_ACCESS_KEY.match(remote_url), 'Incomplete remote_url.'
    username, access_key = USERNAME_ACCESS_KEY.findall(remote_url)[0][1:]

    # Get selenium session id from the keyword library
    selenium = BuiltIn().get_library_instance('Selenium2Library')
    job_id = selenium._current_browser().session_id

    # Prepare payload and headers
    token = (':'.join([username, access_key])).encode('base64').strip()
    payload = {'name': name,
               'passed': status == 'PASS',
               'tags': tags}
    headers = {'Authorization': 'Basic {0}'.format(token)}

    # Put test status to Sauce Labs
    url = 'https://saucelabs.com/rest/v1/{0}/jobs/{1}'.format(username, job_id)
    response = requests.put(url, data=json.dumps(payload), headers=headers)
    assert response.status_code == 200, response.text

    # Log video url from the response
    video_url = json.loads(response.text).get('video_url')
    if video_url:
        logger.info('<a href="{0}">video.flv</a>'.format(video_url), html=True)
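To see how the remote URL parsing and the Authorization header fit together, here is a standalone sketch of the same idea (Python 3 syntax, using the stdlib base64 module instead of the Python 2 str.encode('base64') trick used above; the URL and credentials are placeholders for illustration only):

```python
import base64
import re

# The same pattern idea as in the keyword library above:
# scheme, username and access key embedded in the remote URL
USERNAME_ACCESS_KEY = re.compile(r'^(http|https)://([^:]+):([^@]+)@')

def parse_remote_url(remote_url):
    """Return (username, access_key) parsed from a remote WebDriver URL."""
    match = USERNAME_ACCESS_KEY.match(remote_url)
    assert match, 'Incomplete remote_url.'
    return match.group(2), match.group(3)

def basic_auth_header(username, access_key):
    """Build the value for an HTTP Basic Authorization header."""
    token = base64.b64encode(
        '{0}:{1}'.format(username, access_key).encode('utf-8')).decode('ascii')
    return 'Basic {0}'.format(token)

# Placeholder credentials, not a real account
username, access_key = parse_remote_url(
    'http://demouser:secret-key@ondemand.example.com:80/wd/hub')
```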

Finally, we must install a couple of required Python libraries into our Python virtual environment with:

$ robot/bin/pip install simplejson requests

We are almost there!

Running a robot test suite with Sauce Labs

Once we have abstracted our test suite to support Sauce Labs with configurable suite variables, we can run the tests either locally, at Sauce Labs, or with different browsers at Sauce Labs, just by executing the Robot Framework test runner with different arguments.

  1. To run the test suite locally, we simply type:

    $ robot/bin/pybot test_saucelabs_login.robot
  2. To run the test at Sauce Labs, we pass the Sauce Labs OnDemand address as the ${REMOTE_URL} variable by using the -v argument supported by the test runner:

    $ robot/bin/pybot -v REMOTE_URL:http://USERNAME:ACCESS_KEY@ondemand.saucelabs.com:80/wd/hub test_saucelabs_login.robot

    Make sure to replace USERNAME and ACCESS_KEY with your Sauce Labs account username and its current access key!

  3. To change the Sauce Labs test browser or platform, we just need to add another variable with -v to define the required browser in the ${DESIRED_CAPABILITIES} variable passed to Selenium.

    The only trick is to know the format used by the Selenium keyword library: a comma separated string of KEY:VALUE pairs describing the desired WebDriver capabilities.

    The full command to run the test suite with the iPhone (iOS 7) browser at Sauce Labs would look like:

    $ robot/bin/pybot -v DESIRED_CAPABILITIES:"platform:OS X 10.9,browserName:iphone,version:7" -v REMOTE_URL:http://USERNAME:ACCESS_KEY@ondemand.saucelabs.com:80/wd/hub test_saucelabs_login.robot
  4. And to top the cake, we can also include our CI build number, just by adding a build parameter into our desired capabilities string:

    $ robot/bin/pybot -v DESIRED_CAPABILITIES:"build:demo,platform:OS X 10.9,browserName:iphone,version:7" -v REMOTE_URL:http://USERNAME:ACCESS_KEY@ondemand.saucelabs.com:80/wd/hub test_saucelabs_login.robot
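The desired capabilities format described above can be illustrated with a small standalone Python sketch. This is only an illustration of the comma and colon separated string, not Selenium2Library's actual parsing code:

```python
def parse_desired_capabilities(value):
    """Split a 'key1:value1,key2:value2' string into a dictionary."""
    capabilities = {}
    for pair in value.split(','):
        key, _, val = pair.partition(':')  # split at the first colon only
        capabilities[key.strip()] = val.strip()
    return capabilities

# The same capability string as in the command line above
caps = parse_desired_capabilities(
    'build:demo,platform:OS X 10.9,browserName:iphone,version:7')
```

Note that values may contain spaces (like "OS X 10.9"), which is why only the first colon of each pair is significant.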

This is how our final tests would look in the Sauce Labs test table, with test names, tags, build numbers, results and all the stuff!

Quite nice, isn't it?

P.S. The final example can be downloaded as a gist at:

Written by Asko Soukka – an occasional Plone core contributor and a full time web developer at University of Jyväskylä, Finland.

My week of Plone sprinting with friends and robots


I'm a bit late, but yes, I had a great time at last week's Plone sprints. At first I flew to the Amsterdam Stroopwafel Sprint for the weekend, and then continued by train to the Cologne Cathedral Sprint 2014 for the week. I met old friends, made new ones, ate well and coded quite a lot. Special thanks go to Sven for both organizing the Amsterdam sprint and hosting me there, to Clean Clothes Campaign for providing the sprint facilities, to Timo for organizing the Cologne sprint, to GFU Cyrus AG for hosting the sprint and, of course, to my employer, University of Jyväskylä, for allowing me to attend the sprints.

There are already a lot of sprint reports out there: Paul has summarized the Amsterdam sprint, while Ramon, Bo, Johannes and Victor have all reported a good part of the huge amount of work done at the Cologne sprint.

My week can be summarized shortly with two small subprojects: papyrus and plone.themepreview:


On the Sunday at the Stroopwafel Sprint I paired with Giacomo to combine his work on making our end user and developer documentation translatable using Transifex with my work on also translating the screenshots. We ended up with a special buildout named papyrus.

So, Papyrus is a buildout for translating Plone-specific documentation into multiple languages. It's simply a buildout and a makefile to:

  • Build Sphinx-documentation with embedded Robot Framework-scripted Selenium-powered screenshots in multiple languages.
  • Extract translatable strings to gettext-POT-files for both translating Sphinx-documentation and embedded screenshots.
  • Push translatable files to Transifex and pull the translations back to build the documentation locally.

Papyrus is a buildout for now, but it could be refactored into a recipe later. Anyway, it's designed to be a tool bundle separate from the actual documentation, to make it reusable by any Plone-related documentation later.

Currently, there's an example Travis-CI configuration to build the current collective.usermanual in English and Italian. Unfortunately, there's not much translated yet, so there's surely a lot of work left for the next documentation sprints.


During the Cathedral Sprint 2014 I spent a lot of time helping the other sprinters with various Robot Framework and acceptance testing issues, but I also made a small contribution to the Theme and QA teams by recycling Timo's old theme screenshot suite into a reusable Sphinx documentation called plone.themepreview.

plone.themepreview is a pre-written, Robot Framework and Selenium powered Sphinx documentation with a lot of scripted screenshots for a Plone site with some client specific configuration – usually just a custom theme.

In other words, plone.themepreview comes with Sphinx scripts, which should be able to launch a Plone sandbox with your theme and build a preview out of it, making it easier to evaluate whether all the normal Plone use cases have been covered by your brand new theme.

The README links to a couple of example configurations for using themepreview with Travis-CI. I guess, you all have seen plonetheme.sunburst, but how about diazotheme.bootstrap or plonetheme.onegov?

Of course, the current set of screenshots is not perfect, but if you think so, please contribute! Just fork the project, make a pull request, and Travis-CI will tell me if the changes are safe to merge. And if you have any questions, just file new issues!

Probably plone.themepreview could also be turned into a recipe in the end, but for now it seems easiest to just clone it and run its build with theme specific configuration.

Happy theming!

Review: Robot Framework Test Automation (Bisht 2013)


During the past year, I've been lucky to meet many people new to Robot Framework and help them get started in writing acceptance tests with it. While Robot Framework already comes with a nice Quick Start Guide and a very comprehensive User Guide, there's still a lot of room for good narratives on getting started with it. So, when I heard about a new book about Robot Framework, I was quite excited. Unfortunately, Robot Framework Test Automation (Bisht 2013) turned out not to be the book that I was waiting for.

Robot Framework Test Automation is a new Robot Framework book written by Sumit Bisht and published by Packt Publishing (Oct 2013). The book gives a nice (short) introduction to acceptance testing in general and a quite decent overview of the features available in Robot Framework. After reading the book, you should have a good idea of all the things that are possible with Robot Framework, but you might be confused about how all the presented topics are related and how to actually use them in practice.

While the book is a quick read for getting familiar with most of the features and concepts in Robot Framework, it's not a sufficient standalone guide for actually writing any tests with it. For example, the book uses a few different Robot Framework syntaxes inconsistently, and only a few of the examples are complete (without downloading the bundled source code). In general, I would recommend skipping the examples in this book and learning the best practices from the examples in the official User Guide instead.

However, if you find both the official User Guide and Quick Start Guide too intimidating to get started with Robot Framework, you could try this book to learn the concepts and possibilities, and then return to the official guides for learning the details.

Disclaimer: I got a review copy of the book from Packt.

Meet the Robot family (for Plone developers)


I only need to go two years back in time, and I had never really tried Selenium nor heard about Robot Framework before. Then I attended my first Plone Conference in San Francisco in 2011.

Originally, I was planning to sprint on Deco at the San Francisco conference sprints, because I had spent the previous winter developing a tile based in-house e-Portfolio management app (in Finnish) for my employer. Little did I know. I didn't get my employer's disclaimer for the Plone contributor agreement in time, could not really get involved in the Deco sprint, and ended up being completely side-tracked into the world of acceptance testing.

I can thank Ed Manlove for teaching me how to set up and run Selenium during the San Francisco conference. I also attended Godefroid Chapelle's session about how we could make Selenium testing easy with Robot Framework and its Selenium RC library. Back then, writing and running robot tests for Plone was not as convenient as it is today, but it was still inspiring enough to get us where we are now.

During the last two years we have written and contributed to many robot related packages. Now, it's more than time to summarize what all these robot packages are and what they can do for us:

Robot Framework (core)

Robot Framework is a well documented standalone generic test automation framework for acceptance testing and acceptance test-driven development. It's written in Python and has no other dependencies. It has an active core development team, and development of the core framework is supported by Nokia Siemens Networks.

To make it clear: Robot Framework has no ties to Plone. Also, it can be used completely without Selenium (as most of its users have always done).

The first Plone-related contribution to Robot Framework that I'm aware of is Plone Foundation's GSOC 2013 project to update Robot Framework's ReST parser to support the so called space separated Robot Framework test syntax (that's our favourite robot syntax). This should be included in Robot Framework 2.8.2 and later.


RobotSuite

RobotSuite, authored by me, provides helpers for wrapping any Robot Framework test suite into robotsuite.RobotTestSuite, which makes any robot test suite compatible with the Python standard unittest library's unittest.TestSuite.

RobotSuite makes it possible to run Robot Framework tests using unittest-compatible test runners, like zope.testrunner.


Selenium2Library

Selenium2Library is a web testing library for Robot Framework that leverages the Selenium 2 (WebDriver) libraries. It's a rewrite of the original Selenium RC library. Ed Manlove, a member of our Plone community, is one of its current maintainers.

While Robot Framework and Selenium2Library are enough to run robot browser tests for Plone, RobotSuite makes it possible to run the tests with zope.testrunner. And that makes it possible to run tests properly against a volatile Plone, with our custom test fixtures, so that each test is run in isolation., authored by me and a lot of contributors building on Godefroid Chapelle's original work for plone.act, is our dedicated robot testing integration library for Plone. It is not required for testing Plone with robot, but it provides conventions and a lot of convenient helpers, including:

  • variables for writing test suites, which support Selenium grids
  • variables and keywords for writing SauceLabs-compatible test suites
  • remote keyword framework for writing fast test setup keywords in Python
  • autologin remote library to skip login forms in Selenium tests
  • Zope2Server-library for writing and running robot tests based on with just pure Robot Framework, completely without robotsuite and zope.testrunner
  • robot-server script for running a volatile Plone server with a given test fixture
  • code-reloading robot-server to support test fixture driven development in sauna.reload-style
  • optional SpeakJS-integration, which can make your Plone talk in screencasts
  • shared Plone-keyword library for robot (still in development).


Selenium2Screenshots

Selenium2Screenshots, authored by me, is a Robot Framework keyword library for capturing, annotating and cropping screenshots with Selenium2Library and jQuery.

It's a fun tool for creating screencasts with robot or screenshots for documentation purposes. The package has no dependencies on Plone so it can be used also with other projects.


Selenium2Accessibility

Selenium2Accessibility, authored by me (from Paul Roeland's idea), is a highly experimental Robot Framework library for automating accessibility regression tests using Selenium. It bundles a special Firefox profile with WebAim's WAVE Toolbar Firefox extension, the WCAG Contrast checker Firefox extension and a custom Firefox extension to provide a JavaScript bridge between robot and those Firefox extensions.

It is not proven yet, but the library may help prevent new accessibility issues from being introduced.


Finally, the Robot Framework integration for Sphinx, authored as part of a GSOC 2013 project, executes embedded Robot Framework tests when Sphinx-based documentation is built. This makes it possible to embed Robot Framework tests into documentation so that just building the documentation generates all the screenshots that the documentation needs. For an example, see and its source.

Of course, it's time consuming to write tests just for generating screenshots. But once those tests have been written, they can keep the screenshots always up-to-date. And if the tests are written well, those tests can be used to generate screenshots for different languages (or maybe even for different themes) for free.

The most difficult part in acceptance testing, in general, is to decide what to test. And that's even more important for Selenium tests, which are slow to write and slow to run.

The original idea for robot tests for Plone was to test all the JavaScript-based features. Now those are already being tested with unit tests (faster and in more detail) in the mockup project, so this Sphinx integration for robot tests could give us a new answer:

If a feature is documented in the user documentation, it should be tested with an acceptance test. And now those tests can be embedded in the same documentation, where they pay for themselves by creating screenshots for the documentation.

I'm quite excited about this robot family we have created together.

P.S. Yes, I'm sorry for not being able to attend the Plone Conference this year.