Nix expressions as executable commands

My main tools for Python-based software development have been virtualenv and buildout for a long time. I've used virtualenv to provide an isolated Python installation (separate from the polluted system Python) and buildout to manage the required Python packages, the packages under development, and supporting software (like Redis or memcached).

Basically everything still works, but:

  • Managing clean Python virtualenvs only to avoid possible conflicts with system-installed packages doesn't feel so good.
  • Remembering to activate and deactivate the correct Python virtualenv is not fun either.
  • Also, while buildout provides an excellent tool (mr.developer) for managing the sources of all the project packages, it's far from optimal for building and managing supporting software.
  • Finally, buildout requires that extra bootstrapping to get you started.

I've also been using Vagrant and Docker quite a bit, but, because I'm mostly working on a Mac, those require a VM, which makes them much less convenient.

About Nix

I believe I heard about the Nix package manager for the first time from Rok at the Barcelona Plone Testing Sprint in early 2013. It sounded a bit esoteric and complex back then, but after about twenty more months of virtualenvs, buildouts, Vagrantfiles, Docker containers and Puppet manifests... not so much anymore.

Currently, outside NixOS, I understand Nix as

  1. a functional language for describing configuration of something and
  2. a package manager for managing those configurations.

From my own experience, the easiest way to get familiar with Nix is to follow Domen's blog post about getting started with the Nix package manager. But to really make it a new tool in your toolbox, you should learn to write your own Nix expressions. Even though the most common way to use the package manager is to install those expressions into your current environment with nix-env, the expressions can also be used without installing them, in a quite stateless way.

I'm not sure how proper a use of Nix this is, but it seems to work for me.

Nix expression as virtualenv replacement

Setting up a new Python development environment for Plone development could traditionally look like the following (with virtualenvwrapper):

$ mkvirtualenv Plone43
$ workon Plone43
$ pip install Pillow
$ python bootstrap.py
$ bin/buildout
$ deactivate

But with the following Nix expression as an executable script ./buildout.nix:

#!/usr/bin/env nix-exec
with import <nixpkgs> { };

buildEnv {
  name = "buildout";
  paths = [(
    python27Full.override {
      extraLibs = [
        python27Packages.recursivePthLoader
        python27Packages.setuptools
        python27Packages.buildout
        python27Packages.ldap
        python27Packages.pillow
        python27Packages.lxml
      ] ++ lib.attrValues python27.modules;
      ignoreCollisions = true;
    }
  )];
}

I can now run the buildout simply with:

$ ./buildout.nix

Or with any buildout arguments, like:

$ ./buildout.nix annotate

No activating or deactivating of virtualenvs and no more bootstrapping the buildout, because it's installed by the expression.

Nix expression for nix-exec shell wrapper

Of course, Nix expressions are not executable by default. To get them to work as I wanted, I had to create a tiny wrapper script to be used in the hash-bang line #!/usr/bin/env nix-exec of the executable expressions.

The script simply calls nix-build and then the named executable from the build output directory. To put it another way, the wrapper script translates the following command:

$ ./buildout.nix annotate

into

$ `nix-build --no-out-link buildout.nix`/bin/buildout annotate
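
The same logic could be sketched in Python roughly as follows (a hypothetical equivalent for readability only; the actual wrapper installed below is written in bash):

#!/usr/bin/env python
# A hypothetical Python sketch of the nix-exec wrapper logic: build the
# given expression with nix-build and exec the matching executable from
# the build output (bin/, sbin/ or libexec/).
import os
import subprocess
import sys

expression = sys.argv[1]
build = subprocess.check_output(
    ['nix-build', '--no-out-link', expression]).strip()
name = os.path.basename(expression)
if name.endswith('.nix'):
    name = name[:-len('.nix')]
for prefix in ('bin', 'sbin', 'libexec'):
    executable = os.path.join(build, prefix, name)
    if os.path.isfile(executable):
        os.execv(executable, [executable] + sys.argv[2:])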

The script itself, of course, can be installed from a Nix expression into your default Nix environment with nix-env -i -f filename.nix:

with import <nixpkgs> { };

stdenv.mkDerivation {
  name = "datakurre-nix-exec-1.0.1";

  builder = builtins.toFile "builder.sh" "
    source $stdenv/setup
    mkdir -p $out/bin
    echo \"#!/bin/bash
build=\\`nix-build --no-out-link \\$1\\`
if [ \\$build ]; then
  filename=\\$\{1##*/\}

  if [ -f \\$build/bin/\\$\{filename%.nix\} ]; then
    echo \\\"➜\\\" \\$build/bin/\\$\{filename%.nix\} \\\"\\$\{@:2\}\\\"
    \\$build/bin/\\$\{filename%.nix\} \\\"\\$\{@:2\}\\\"

  elif [ -f \\$build/sbin/\\$\{filename%.nix\} ]; then
    echo \\\"➜\\\" \\$build/sbin/\\$\{filename%.nix\} \\\"\\$\{@:2\}\\\"
    \\$build/sbin/\\$\{filename%.nix\} \\\"\\$\{@:2\}\\\"

  elif [ -f \\$build/libexec/\\$\{filename%.nix\} ]; then
    echo \\\"➜\\\" \\$build/libexec/\\$\{filename%.nix\} \\\"\\$\{@:2\}\\\"
    \\$build/libexec/\\$\{filename%.nix\} \\\"\\$\{@:2\}\\\"

  fi
fi
\" > $out/bin/nix-exec
    chmod a+x $out/bin/nix-exec
  ";
}

A mostly positive side effect of using Nix expressions like this (only building them, never installing them into any environment) is that they can be cleaned from the disk at any time simply with:

$ nix-collect-garbage

Nix expression for a Python-package outside nixpkgs

Sometimes I need Python packages which are not yet included in nixpkgs, or which are not publicly available at all. Here's an example of a Nix expression for one such package, i18ndude (public, but not yet in nixpkgs):

#!/usr/bin/env nix-exec
with import <nixpkgs> { };

let dependencies = rec {
  ordereddict = buildPythonPackage {
    name = "ordereddict-1.1";
    src = fetchurl {
      url = "https://pypi.python.org/packages/source/o/ordereddict/ordereddict-1.1.tar.gz";
      md5 = "a0ed854ee442051b249bfad0f638bbec";
    };
  };
};

in with dependencies; rec {
  i18ndude = buildPythonPackage {
    name = "i18ndude-3.3.5";
    src = fetchurl {
      url = "https://pypi.python.org/packages/source/i/i18ndude/i18ndude-3.3.5.zip";
      md5 = "ef599b1c64eaabba4049fcd2b027ba21";
    };
    propagatedBuildInputs = [
      ordereddict
      python27Packages."zope.tal-3.5.2"
      python27Packages."plone.i18n-2.0.9"
    ];
  };
}

Note: The easiest way to check for existing nixpkgs Python packages seems to be grepping the package list with nix-env -qaP \*|grep something. If you'd like to see more packages available by default, you can contribute them upstream with a simple pull request.

In this case, though, it might make more sense to just install this expression into the current Nix environment with nix-env -i -f filename.nix. Still, it also works just fine as an executable script ./i18ndude.nix:

$ ./i18ndude.nix
➜ /nix/store/gjhzw843qs1736r0qcd9mz69247g4svb-python2.7-i18ndude-3.3.5/bin/i18ndude
usage: i18ndude [-h]
                {find-untranslated,rebuild-pot,merge,sync,filter,admix,list,trmerge}
                ...
i18ndude: error: too few arguments

Nix expression for Robot Framework test runner

Here's a more complex expression, which configures a Python environment with Robot Framework and its Selenium2Library:

#!/usr/bin/env nix-exec
with import <nixpkgs> { };

let dependencies = rec {
  docutils = buildPythonPackage {
    name = "docutils-0.12";
    src = fetchurl {
      url = "https://pypi.python.org/packages/source/d/docutils/docutils-0.12.tar.gz";
      md5 = "4622263b62c5c771c03502afa3157768";
    };
  };
  selenium = buildPythonPackage {
    name = "selenium-2.43.0";
    src = fetchurl {
      url = "https://pypi.python.org/packages/source/s/selenium/selenium-2.43.0.tar.gz";
      md5 = "bf2b46caa5c1ea4b68434809c695d69b";
    };
  };
  decorator = buildPythonPackage {
    name = "decorator-3.4.0";
    src = fetchurl {
      url = "https://pypi.python.org/packages/source/d/decorator/decorator-3.4.0.tar.gz";
      md5 = "1e8756f719d746e2fc0dd28b41251356";
    };
  };
  robotframework = buildPythonPackage {
    name = "robotframework-2.8.5";
    src = fetchurl {
      url = "https://pypi.python.org/packages/source/r/robotframework/robotframework-2.8.5.tar.gz";
      md5 = "2d2c6938830f71a6aa6f4be32227997f";
    };
    propagatedBuildInputs = [
      docutils
    ];
  };
  robotframework-selenium2library = buildPythonPackage {
    name = "robotframework-selenium2library-1.5.0";
    src = fetchurl {
      url = "https://pypi.python.org/packages/source/r/robotframework-selenium2library/robotframework-selenium2library-1.5.0.tar.gz";
      md5 = "07c64a9e183642edd682c2b79ba2f32c";
    };
    propagatedBuildInputs = [
      robotframework
      decorator
      selenium
    ];
  };
  pybot = buildPythonPackage {
    name = robotframework.name;
    src = robotframework.src;
    propagatedBuildInputs = [
      robotframework-selenium2library
    ];
  };
};

in with dependencies; buildEnv {
  name = "pybot";
  paths = [(
    python27Full.override {
      extraLibs = [
        python27Packages.recursivePthLoader
        robotframework
        robotframework-selenium2library
        pybot
      ] ++ lib.attrValues python27.modules;
      ignoreCollisions = true;
    }
  )];
}

Note: The redundant-looking pybot package is required to properly include Selenium2Library in the sys.path of the resulting Python environment.

Since you may need differently configured Robot Framework installations for different projects, this should be a good fit as an executable Nix expression:

$ ./pybot.nix
➜ /nix/store/q15bimgng25qcxkq2q10finyk0n6qkm2-pybot/bin/pybot
[ ERROR ] Expected at least 1 argument, got 0.

Try --help for usage information.

Asynchronous stream iterators and experimental promises for Plone

This post may contain traces of legacy Zope2 and Python 2.x.

Some may think that Plone is bad at concurrency, because it's not commonly deployed with WSGI, but runs on top of a barely known last-millennium asynchronous HTTP server called Medusa.

See, the out-of-the-box installation of Plone launches with only a single asynchronous HTTP server with just two fixed long-running worker threads. And it's way too easy to write custom code that keeps those worker threads busy (for example, by writing blocking calls to external services), effectively resulting in denial of service for the rest of the incoming requests.

Well, as far as I know, the real bottleneck is not Medusa, but the way ZODB database connections work. It seems that, to optimize the database-connection-related caches, ZODB is best used with a fixed amount of concurrent worker threads and one dedicated database connection per thread. Finally, MVCC in ZODB limits each thread to serving only one request at a time.

In practice, of course, Plone-sites use ZEO-clustering (and replication) to overcome the limitations described above.

Back to the topic (with a disclaimer): the methods described in this blog post have not been battle-tested yet and may turn out to be bad ideas. Still, it's been fun to figure out how our old asynchronous friend, Medusa, could be used to serve more concurrent requests in certain special cases.

ZPublisher stream iterators

If you have been working with Plone long enough, you must have heard the rumor that blobs, which basically means files and images, are served from the filesystem in some special non-blocking way.

So, when someone downloads a file from Plone, the current worker thread only initiates the download and can then continue to serve the next request. The actual file is left to be served asynchronously by the main thread.

This is possible because of a ZPublisher feature called stream iterators (search for the IStreamIterator interface and its implementations in Zope2 and plone.app.blob). Stream iterators are basically a way to postpone I/O-bound operations into the main thread's asyncore loop through a special Medusa-level producer object.
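
For example, a minimal sketch of a browser view method streaming a file with ZPublisher's bundled filestream_iterator could look like this (the method name and file path below are only hypothetical placeholders):

import os

from ZPublisher.Iterators import filestream_iterator


def download(self):
    # (a hypothetical browser view method; the path is just a placeholder)
    path = '/path/to/some/blob'
    self.request.response.setHeader('Content-Type', 'application/octet-stream')
    self.request.response.setHeader('Content-Length', str(os.path.getsize(path)))
    # Returning a stream iterator hands the actual reading and sending of
    # the file over to the main thread's asyncore loop:
    return filestream_iterator(path, 'rb')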

And because stream iterators are consumed only within the main thread, they come with some very strict limitations:

  • they are executed only after a completed transaction so they cannot interact with the transaction anymore
  • they must not read from the ZODB (because their origin connection is either closed or in use by their origin worker thread)
  • they must not fail unexpectedly, because you don't want to crash the main thread
  • they must not block the main thread, for obvious reasons.

Because of these limitations, the stream iterators, as such, are usable only for the purpose they have been made for: streaming files or similar immediately available buffers.

Asynchronous stream iterators

What if you could use ZPublisher's stream iterator support also for CPU-bound post-processing tasks? Or for post-processing tasks requiring calls to external web services or command-line utilities?

If you have a local Plone instance running somewhere, you can add the following proof-of-concept code and its slow_ok-method into a new External Method (also available as a gist):

import StringIO
import threading

from zope.interface import implements
from ZPublisher.Iterators import IStreamIterator
from ZServer.PubCore.ZEvent import Wakeup

from zope.globalrequest import getRequest


class zhttp_channel_async_wrapper(object):
    """Medusa channel wrapper to defer producers until released"""

    def __init__(self, channel):
        # (executed within the current Zope worker thread)
        self._channel = channel

        self._mutex = threading.Lock()
        self._deferred = []
        self._released = False
        self._content_length = 0

    def _push(self, producer, send=1):
        if (isinstance(producer, str)
                and producer.startswith('HTTP/1.1 200 OK')):
            # Fix Content-Length to match the real content length
            # (an alternative would be to use chunked encoding)
            producer = producer.replace(
                'Content-Length: 0\r\n',
                'Content-Length: {0:s}\r\n'.format(str(self._content_length))
            )
        self._channel.push(producer, send)

    def push(self, producer, send=1):
        # (executed within the current Zope worker thread)
        with self._mutex:
            if not self._released:
                self._deferred.append((producer, send))
            else:
                self._push(producer, send)

    def release(self, content_length):
        # (executed within the exclusive async thread)
        self._content_length = content_length
        with self._mutex:
            for producer, send in self._deferred:
                self._push(producer, send)
            self._released = True
        Wakeup()  # wake up the asyncore loop to read our results

    def __getattr__(self, key):
        return getattr(self._channel, key)


class AsyncWorkerStreamIterator(StringIO.StringIO):
    """Stream iterator to publish the results of the given func"""

    implements(IStreamIterator)

    def __init__(self, func, response, streamsize=1 << 16):
        # (executed within the current Zope worker thread)

        # Init buffer
        StringIO.StringIO.__init__(self)
        self._streamsize = streamsize

        # Wrap the Medusa channel to wait for the func results
        self._channel = response.stdout._channel
        self._wrapped_channel = zhttp_channel_async_wrapper(self._channel)
        response.stdout._channel = self._wrapped_channel

        # Set content-length as required by ZPublisher
        response.setHeader('content-length', '0')

        # Fire the given func in a separate thread
        self.thread = threading.Thread(target=func, args=(self.callback,))
        self.thread.start()

    def callback(self, data):
        # (executed within the exclusive async thread)
        self.write(data)
        self.seek(0)
        self._wrapped_channel.release(len(data))

    def next(self):
        # (executed within the main thread)
        if not self.closed:
            data = self.read(self._streamsize)
            if not data:
                self.close()
            else:
                return data
        raise StopIteration

    def __len__(self):
        return len(self.getvalue())


def slow_ok_worker(callback):
    # (executed within the exclusive async thread)
    import time
    time.sleep(1)
    callback('OK')


def slow_ok():
    """The publishable example method"""
    # (executed within the current Zope worker thread)
    request = getRequest()
    return AsyncWorkerStreamIterator(slow_ok_worker, request.response)

The above code example simulates a trivial post-processing with time.sleep, but it should apply for anything from building a PDF from the extracted data to calling an external web service before returning the final response.
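
For example, a variation of the worker that calls an external web service instead of sleeping, following the same callback convention as slow_ok_worker above, could look roughly like this (the URL is only a placeholder):

import urllib2


def feed_worker(callback):
    # (executed within the exclusive async thread; a hypothetical example
    # of I/O-bound post-processing)
    try:
        data = urllib2.urlopen('http://example.com/feed.xml', timeout=10).read()
    except Exception:
        # stream iterators must not fail unexpectedly, so fall back to
        # something safe instead of crashing
        data = 'N/A'
    callback(data)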

An out-of-the-box Plone instance can handle only two (2) concurrent calls to a method that takes one (1) second to complete.

In the above code, however, the post-processing is delegated to a completely new thread, freeing the Zope worker thread to continue handling the next request. Because of that, we get much, much better concurrency (with just two worker threads, 100 such one-second requests would otherwise take about 50 seconds):

$ ab -c 100 -n 100 http://localhost:8080/Plone/slow_ok
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient).....done

Server Software:        Zope/(2.13.22,
Server Hostname:        localhost
Server Port:            8080

Document Path:          /Plone/slow_ok
Document Length:        2 bytes

Concurrency Level:      100
Time taken for tests:   1.364 seconds
Complete requests:      100
Failed requests:        0
Write errors:           0
Total transferred:      15400 bytes
HTML transferred:       200 bytes
Requests per second:    73.32 [#/sec] (mean)
Time per request:       1363.864 [ms] (mean)
Time per request:       13.639 [ms] (mean, across all concurrent requests)
Transfer rate:          11.03 [Kbytes/sec] received

Connection Times (ms)
               min  mean[+/-sd] median   max
Connect:        1    2   0.6      2       3
Processing:  1012 1196  99.2   1202    1359
Waiting:     1011 1196  99.3   1202    1359
Total:       1015 1199  98.6   1204    1361

Percentage of the requests served within a certain time (ms)
  50%   1204
  66%   1256
  75%   1283
  80%   1301
  90%   1331
  95%   1350
  98%   1357
  99%   1361
  100%   1361 (longest request)

Of course, most of the stream iterator limits still apply: an asynchronous stream iterator must not access the database, which limits the possible use cases a lot. For the same reasons, plone.transformchain is effectively skipped (no Diazo or Blocks), which makes this usable only for non-HTML responses.

experimental.promises

To go even further with the experimenting, what if you could do similar non-blocking asynchronous processing in the middle of a request? For example, to free the current Zope worker thread while fetching a missing or outdated RSS feed in a separate thread, and only then continue to render the final response.

An interesting side effect of using stream iterators is that they allow you to inject code into the main thread's asynchronous loop. And once you are there, it's even possible to queue a completely new request for ZPublisher to handle.

So, how would the following approach sound:

  • let add-on code annotate requests with promises for fetching the required data (each promise would be a standalone function, which could be executed under the asynchronous stream iterator rules and, when called, would resolve into a value, effectively the future of the promise), for example:

    @property
    def content(self):
        if 'my_unique_key' in IFutures(self.request):
            return IFutures(self.request)['my_unique_key']
        else:
            IPromises(self.request)['my_unique_key'] = my_promise_func
            return u''
    
  • when promises are found, the response is turned into an asynchronous stream iterator, which would then execute all the promises in parallel threads and collect the resolved values, the futures:

    def transformIterable(self, result, encoding):
        if IPromises(self.request):
            return PromiseWorkerStreamIterator(
                IPromises(self.request), self.request, self.request.response)
        else:
            return None
    
  • finally, we'd wrap the current Medusa channel so that, instead of publishing any data yet, a cloned request is queued for ZPublisher (similarly to how retries are done after conflict errors), and the cloned request is annotated to carry the resolved futures:

    def next(self):
       if self._futures:
           IFutures(self._zrequest).update(self._futures)
           self._futures = {}  # mark consumed to raise StopIteration
    
           from ZServer.PubCore import handle
           handle('Zope2', self._zrequest, self._zrequest.response)
       else:
           raise StopIteration
    
  • now the add-on code in question would find the futures in the request, would not issue any promises anymore, and the request would result in a normal response pushed all the way to the browser that initiated the original request.

I'm not sure yet how good or bad an idea this is, but I've been tinkering with a proof-of-concept implementation called experimental.promises to figure it out.

Of course, there are limits and issues to be aware of. Handling the same request twice is not free, which makes this approach effective only when some significant processing can be moved outside the worker threads. Also, because there may be other requests between the first and the second pass (freeing the worker to handle other requests is the whole point), the database may change between the passes (kind of breaking the MVCC promise). Finally, it's currently possible to write code that always sets new promises and ends up in a never-ending loop.

Anyway, if you are interested in trying out these approaches (at your own risk, of course), feel free to ask me more via Twitter or IRC.

Cross-Browser Selenium testing with Robot Framework and Sauce Labs

How do you keep your Selenium tests up-to-date with your ever-changing user interface? Do you try to fix your existing tests, or do you just re-record them over and over again?

In the Plone Community, we have chosen the former approach (Plone is a popular open source CMS written in Python). We use a tool called Robot Framework to write our Selenium acceptance tests as maintainable BDD-style stories. Robot Framework's extensible test language allows us to describe Plone's features in natural language sentences, which can then be expanded into either our domain-specific or Selenium WebDriver API based testing language.

As an example, have a look at the following real-life acceptance test case on the next generation multilingual support in Plone:

*** Test Cases ***

Scenario: As an editor I can add new translation
    Given a site owner
      and a document in English
      and a document in Catalan
     When I view the Catalan document
      and I add the document in English as a translation
      and I switch to English
     Then I can view the document in English

*** Keywords ***

a site owner
    Enable autologin as  Manager

a document in English
    Create content  type=Document
    ...  container=/${PLONE_SITE_ID}/en/
    ...  id=an-english-document
    ...  title=An English Document

a document in Catalan
    Create content  type=Document
    ...  container=/${PLONE_SITE_ID}/ca/
    ...  id=a-catalan-document
    ...  title=A Catalan Document

I view the Catalan document
    Go to  ${PLONE_URL}/ca/a-catalan-document
    Wait until page contains  A Catalan Document

I add the document in English as a translation
    Click Element  css=#plone-contentmenu-multilingual .actionMenuHeader a
    Wait until element is visible  css=#_add_translations

    Click Element  css=#_add_translations
    Wait until page contains element
    ...  css=#formfield-form-widgets-content-widgets-query .searchButton

    Click Element  css=#formfield-form-widgets-content-widgets-query .searchButton
    Wait until element is visible  css=#form-widgets-content-contenttree a[href$='/plone/en']

    Click Element  css=#form-widgets-content-contenttree a[href$='/plone/en']
    Wait until page contains  An English Document

    Click link  xpath=//*[contains(text(), 'An English Document')]/parent::a
    Click Element  css=.contentTreeAdd

    Select From List  name=form.widgets.language:list  en
    Click Element  css=#form-buttons-add_translations
    Click Element  css=#contentview-view a
    Wait until page contains  A Catalan Document

I switch to English
    Click Link  English
    Wait until page contains  An English Document

I can view the document in English
    Page Should Contain Element
    ...  xpath=//*[contains(@class, 'documentFirstHeading')][./text()='An English Document']
    Page Should Contain Element
    ...  xpath=//ul[@id='portal-languageselector']/li[contains(@class, 'currentLanguage')]/a[@title='English']

About Robot Framework

Robot Framework is a generic keyword-driven test automation framework for acceptance testing and acceptance test-driven development. It's a neat tool by itself, yet its testing capabilities can be extended by implementing custom test libraries in either Python or Java – without any other limits.
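
For example, a minimal custom test library is just a Python module: every module-level function becomes a keyword (the module below is a hypothetical example, not an existing library):

# MathChecks.py
def numbers_should_add_up(a, b, expected):
    """Fail unless the sum of ``a`` and ``b`` equals ``expected``.

    Robot Framework passes arguments as strings, so they are converted
    first. After importing the module with "Library  MathChecks.py", this
    function can be called as the keyword "Numbers should add up".
    """
    if float(a) + float(b) != float(expected):
        raise AssertionError('%s + %s != %s' % (a, b, expected))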

The super powers of Robot Framework come from its user keyword feature: in addition to the keywords provided by the extension libraries, users can create new higher-level keywords from the existing ones using the same syntax that is used for creating test cases. And, of course, everything can be parametrized with variables.

How could you cross-browser test your web applications with Robot Framework and its popular Selenium WebDriver keyword library?

Installing Robot Framework

To get started, we need Firefox, and Python 2.7 (or Python 2.6) with the pip package manager and the virtualenv isolation package installed. On Linux distributions, all of these should be available directly from the system repositories (e.g. using apt-get install python-virtualenv on Ubuntu), but on OS X and Windows, some extra steps are needed.

Once all these prerequisites are available, you can install Robot Framework and all the requirements for Selenium testing with:

$ virtualenv robot --no-site-packages
$ robot/bin/pip install robotframework-selenium2library

The installation process should look something like:

$ virtualenv robot --no-site-packages
New python executable in robot/bin/python
Installing Setuptools...done.
Installing Pip...done.

$ robot/bin/pip install robotframework-selenium2library
Downloading/unpacking robotframework-selenium2library
  Downloading robotframework-selenium2library-1.5.0.tar.gz (216kB): 216kB downloaded
  Running setup.py egg_info for package robotframework-selenium2library
  ...
Downloading/unpacking decorator>=3.3.2 (from robotframework-selenium2library)
  Downloading decorator-3.4.0.tar.gz
  Running setup.py egg_info for package decorator
  ...
Downloading/unpacking selenium>=2.32.0 (from robotframework-selenium2library)
  Downloading selenium-2.40.0.tar.gz (2.5MB): 2.5MB downloaded
  Running setup.py egg_info for package selenium
  ...
Downloading/unpacking robotframework>=2.6.0 (from robotframework-selenium2library)
  Downloading robotframework-2.8.4.tar.gz (579kB): 579kB downloaded
  Running setup.py egg_info for package robotframework
  ...
Downloading/unpacking docutils>=0.8.1 (from robotframework-selenium2library)
  Downloading docutils-0.11.tar.gz (1.6MB): 1.6MB downloaded
  Running setup.py egg_info for package docutils
  ...
Installing collected packages: robotframework-selenium2library, decorator, selenium, robotframework, docutils
  Running setup.py install for robotframework-selenium2library
  ...
  Running setup.py install for decorator
  ...
  Running setup.py install for selenium
  ...
  Running setup.py install for robotframework
  ...
  Running setup.py install for docutils
  ...
 Successfully installed robotframework-selenium2library decorator selenium robotframework
 docutils
 Cleaning up...

And we should end up having the Robot Framework executable installed at:

$ robot/bin/pybot

Writing a Selenium test suite in robot

In the following examples, we use Robot Framework's space-separated plain text test format. In this format, a simple test suite can be written in a single plain text file named with a .robot suffix. To maximize readability, only two or more spaces are required to separate the different parts of the test syntax on the same line.

In the first example, we:

  • import Selenium2Library to enable Selenium keywords (because only the built-in keywords are available by default)
  • define simple test setup and teardown keywords
  • implement a simple test case using the imported Selenium keywords
  • use a tag to categorize the test case
  • abstract the test with a variable to make it easier to update the test later.

Now, write the following complete Selenium test suite into a file named test_saucelabs_login.robot:

*** Settings ***

Library  Selenium2Library

Test Setup  Open test browser
Test Teardown  Close test browser

*** Variables ***

${LOGIN_FAIL_MSG}  Incorrect username or password.

*** Test Cases ***

Incorrect username or password
    [Tags]  Login
    Go to  https://saucelabs.com/login

    Page should contain element  id=username
    Page should contain element  id=password

    Input text  id=username  anonymous
    Input text  id=password  secret

    Click button  id=submit

    Page should contain  ${LOGIN_FAIL_MSG}

*** Keywords ***

Open test browser
    Open browser  about:

Close test browser
    Close all browsers

A standalone test suite may contain up to four sections: *** Settings ***, *** Variables ***, *** Test Cases *** and *** Keywords ***, but it must always contain the *** Test Cases *** section. To summarize the sections:

Settings
Imports all the used keyword libraries and user keyword resource files. Contains all test suite level configuration such as suite/test setup and teardown instructions.
Variables
Defines all suite level variables with their default values.
Test Cases
Contains all the test cases for the test suite.
Keywords
Contains all the suite level user keyword implementations.

For the complete list of all available features for each of these sections, you can refer to the Robot Framework User Guide.

Running a robot test suite

The default Robot Framework test runner is called pybot. Next, we can execute our test suite and create a test report from the execution by typing:

$ robot/bin/pybot test_saucelabs_login.robot

Besides opening a web browser, our example test suite run should look like:

$ robot/bin/pybot test_saucelabs_login.robot
==============================================================================
Test Saucelabs Login
==============================================================================
Incorrect username or password                                        | PASS |
------------------------------------------------------------------------------
Test Saucelabs Login                                                  | PASS |
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
==============================================================================
Output:  /.../output.xml
Log:     /.../log.html
Report:  /.../report.html

And the test run should result in an HTML test report file named report.html and a complete step-by-step test log file named log.html. The latter should look like:

http://1.bp.blogspot.com/-EgtubAVwqRs/UypsztN22oI/AAAAAAAAAjs/IvuzBPFIrMY/s1600/saucelabs-robot-log.png

To see all the available options for the test runner, just type:

$ robot/bin/pybot --help

Writing a Sauce-Labs Selenium test suite in robot

Now that we have a working Robot Framework installation and a functional test suite, we can continue to refactor the test suite to support cross-browser testing with Sauce Labs.

Let's update our test_saucelabs_login.robot to look like:

*** Settings ***

Library  Selenium2Library
Library  SauceLabs

Test Setup  Open test browser
Test Teardown  Close test browser

*** Variables ***

${BROWSER}  firefox
${REMOTE_URL}
${DESIRED_CAPABILITIES}

${LOGIN_FAIL_MSG}  Incorrect username or password.

*** Test Cases ***

Incorrect username or password
    [Tags]  Login
    Go to  https://saucelabs.com/login

    Page should contain element  id=username
    Page should contain element  id=password

    Input text  id=username  anonymous
    Input text  id=password  secret

    Click button  id=submit

    Page should contain  ${LOGIN_FAIL_MSG}

*** Keywords ***

Open test browser
    Open browser  about:  ${BROWSER}
    ...  remote_url=${REMOTE_URL}
    ...  desired_capabilities=${DESIRED_CAPABILITIES}

Close test browser
    Run keyword if  '${REMOTE_URL}' != ''
    ...  Report Sauce status
    ...  ${SUITE_NAME} | ${TEST_NAME}
    ...  ${TEST_STATUS}  ${TEST_TAGS}  ${REMOTE_URL}
    Close all browsers

All the things we changed:

  • a new keyword library called SauceLabs is imported
  • keyword Open test browser is abstracted to be configurable with variables to support running the tests at Sauce Labs
  • keyword Close test browser is enhanced to send test details and test result to Sauce Labs by calling the new Report Sauce status keyword.

Next, we must implement our custom Sauce Labs keyword library with Python by creating the following SauceLabs.py file to provide the new Report Sauce status keyword:

import re
import requests
import simplejson as json

from robot.api import logger
from robot.libraries.BuiltIn import BuiltIn

USERNAME_ACCESS_KEY = re.compile('^(http|https):\/\/([^:]+):([^@]+)@')


def report_sauce_status(name, status, tags=[], remote_url=''):
    # Parse username and access_key from the remote_url
    assert USERNAME_ACCESS_KEY.match(remote_url), 'Incomplete remote_url.'
    username, access_key = USERNAME_ACCESS_KEY.findall(remote_url)[0][1:]

    # Get selenium session id from the keyword library
    selenium = BuiltIn().get_library_instance('Selenium2Library')
    job_id = selenium._current_browser().session_id

    # Prepare payload and headers
    token = (':'.join([username, access_key])).encode('base64').strip()
    payload = {'name': name,
               'passed': status == 'PASS',
               'tags': tags}
    headers = {'Authorization': 'Basic {0}'.format(token)}

    # Put test status to Sauce Labs
    url = 'https://saucelabs.com/rest/v1/{0}/jobs/{1}'.format(username, job_id)
    response = requests.put(url, data=json.dumps(payload), headers=headers)
    assert response.status_code == 200, response.text

    # Log video url from the response
    video_url = json.loads(response.text).get('video_url')
    if video_url:
        logger.info('<a href="{0}">video.flv</a>'.format(video_url), html=True)

Finally, we must install a couple of required Python libraries into our Python virtual environment with:

$ robot/bin/pip install simplejson requests

We are almost there!

Running a robot test suite with Sauce Labs

Once we have abstracted our test suite to support Sauce Labs with configurable suite variables, we can run the tests either locally, or using Sauce Labs, or using different browsers at Sauce Labs, just by executing the Robot Framework test runner with different arguments.

  1. To run the test suite locally, we simply type:

    $ robot/bin/pybot test_saucelabs_login.robot
    
  2. To run the test at Sauce Labs, we pass the Sauce Labs OnDemand address as the ${REMOTE_URL} variable by using the -v argument supported by the test runner:

    $ robot/bin/pybot -v REMOTE_URL:http://USERNAME:ACCESS_KEY@ondemand.saucelabs.com:80/wd/hub test_saucelabs_login.robot
    

    Make sure to replace USERNAME and ACCESS_KEY with your Sauce Labs account username and its current access key!

  3. To change the Sauce Labs test browser or platform, we just need to add another variable with -v to define the required browser in the ${DESIRED_CAPABILITIES} variable passed to Selenium.

    The only trick is to know the format used by the Selenium keyword library: a comma separated string of KEY:VALUE pairs of the desired WebDriver capabilities (a small illustrative sketch follows after this list).

    The full command to run the test suite with the iOS 7 iPhone browser at Sauce Labs would look like:

    $ robot/bin/pybot -v DESIRED_CAPABILITIES:"platform:OS X 10.9,browserName:iphone,version:7" -v REMOTE_URL:http://USERNAME:ACCESS_KEY@ondemand.saucelabs.com:80/wd/hub test_saucelabs_login.robot
    
  4. And as icing on the cake, we can also include our CI build number, just by adding the build parameter to our desired capabilities string:

    $ robot/bin/pybot -v DESIRED_CAPABILITIES:"build:demo,platform:OS X 10.9,browserName:iphone,version:7" -v REMOTE_URL:http://USERNAME:ACCESS_KEY@ondemand.saucelabs.com:80/wd/hub test_saucelabs_login.robot
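
As a side note, the following rough Python sketch (not Selenium2Library's actual parsing code) shows how such a comma separated string maps to a WebDriver capabilities dictionary:

def parse_capabilities(string):
    # A rough illustration only: split the comma separated KEY:VALUE
    # pairs into a dictionary of desired WebDriver capabilities.
    capabilities = {}
    for pair in string.split(','):
        key, value = pair.split(':', 1)
        capabilities[key.strip()] = value.strip()
    return capabilities

print parse_capabilities('build:demo,platform:OS X 10.9,browserName:iphone,version:7')
# {'build': 'demo', 'platform': 'OS X 10.9', 'browserName': 'iphone', 'version': '7'}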
    

This is how our final tests would look in the Sauce Labs test table, with test names, tags, build numbers, results and all the stuff!

http://3.bp.blogspot.com/-UsrXUoyBFs0/UyptB8AcAqI/AAAAAAAAAj0/mrPBn_17xtU/s1600/saucelabs-table.png

Quite nice, isn't it?

P.S. The final example can be downloaded as a gist at: https://gist.github.com/datakurre/9589707


Written by Asko Soukka – an occasional Plone core contributor and a full time web developer at University of Jyväskylä, Finland.

My week of Plone sprinting with friends and robots

I'm a bit late, but yes, I had a great time at last week's Plone sprints. At first I flew to the Amsterdam Stroopwafel Sprint for the weekend, and then continued by train to the Cologne Cathedral Sprint 2014 for the week. I met old friends, made new ones, ate well and coded quite a lot. Special thanks go to Sven for both organizing the Amsterdam sprint and hosting me there, Clean Clothes Campaign for providing the sprint facilities, Timo for organizing the Cologne sprint, GFU Cyrus AG for hosting the sprint and, of course, my employer, University of Jyväskylä, for allowing me to attend the sprints.

There are already a lot of sprint reports out there: Paul has summarized the Amsterdam sprint, while Ramon, Bo, Johannes and Victor have all reported a good part of the huge amount of work done at the Cologne sprint.

My week can be summarized shortly with two small subprojects: papyrus and plone.themepreview:

Papyrus

http://3.bp.blogspot.com/-kocLIybCGqI/UwTvaicOIZI/AAAAAAAAAi8/7ybDMRwBWjk/s1600/working-copy_checkin.png

On the Sunday at the Stroopwafel Sprint I paired with Giacomo to combine his work on making our end user and developer documentation translatable using Transifex with my work on translating also the screenshots. We ended up with a special buildout named papyrus.

So, Papyrus is a buildout for translating Plone-specific documentation into multiple languages. It's simply a buildout and a makefile to:

  • Build Sphinx-documentation with embedded Robot Framework-scripted Selenium-powered screenshots in multiple languages.
  • Extract translatable strings to gettext-POT-files for both translating Sphinx-documentation and embedded screenshots.
  • Push translatable files to Transifex and pull the translations back to build the documentation locally.

Papyrus is a buildout for now, but it could be refactored into a recipe later. Anyway, it's designed to be a tool bundle separate from the actual documentation, to make it reusable by any Plone-related documentation later.

Currently, there's an example Travis-CI configuration to build the current collective.usermanual in English and Italian. Unfortunately, not much has been translated yet, so there's surely a lot of work left for the next documentation sprints.

plone.themepreview

http://1.bp.blogspot.com/-LNzNAwryB9Y/UwTwws0XWkI/AAAAAAAAAjI/g40X3iGyJEE/s1600/document-edit3.png

During the Cathedral Sprint 2014 I spent a lot of time helping the other sprinters with various Robot Framework and acceptance testing issues, but I also made a small contribution for the Theme and QA teams by recycling Timo's old theme screenshot suite into a reusable Sphinx documentation called plone.themepreview.

plone.themepreview is a pre-written, Robot Framework and Selenium -powered, Sphinx-documentation with a lot of scripted screenshots for a Plone site with some client specific configuration – usually just a custom theme.

In other words, plone.themepreview comes with Sphinx scripts, which should be able to launch a Plone sandbox with your theme and make a preview out of it, to make it easier to evaluate whether all the normal Plone use cases have been covered by your brand new theme.

The README links to a couple of example configurations for using themepreview with Travis-CI. I guess you have all seen plonetheme.sunburst, but how about diazotheme.bootstrap or plonetheme.onegov?

Of course, the current set of screenshots is not perfect, but if you think so, please contribute! Just fork the project, make a pull request, and Travis-CI will tell me if the changes are safe to merge. And if you have any questions, just file new issues!

Probably plone.themepreview could also be turned into a recipe in the end, but for now it seems easiest to just clone it and run its build with a theme-specific configuration.

Happy theming!

Review: Robot Framework Test Automation (Bisht 2013)

During the past year, I've been lucky to meet many people new to Robot Framework and help them get started in writing acceptance tests with it. While Robot Framework already comes with a nice Quick Start Guide and a very comprehensive User Guide, there's still a lot of room for good narratives on getting started with it. So, when I heard about a new book about Robot Framework, I was quite excited. Unfortunately, Robot Framework Test Automation (Bisht 2013) turned out not to be the book I was waiting for.

Robot Framework Test Automation is a new Robot Framework book written by Sumit Bisht and published by Packt Publishing (Oct 2013). The book gives a nice (short) introduction to acceptance testing in general and a quite ok overview of the features available in Robot Framework. After reading the book, you should have a good idea of all the things that are possible with Robot Framework, but you might be confused about how all the presented topics are related and how to actually use them in practice.

While the book is a quick read for getting familiar with most of the features and concepts in Robot Framework, it's not a sufficient standalone guide for actually writing any tests with it. For example, the book uses a few different Robot Framework syntaxes inconsistently, and only very few examples are complete (without downloading the bundled source code). In general, I would recommend skipping the examples in this book and learning the best practices from the examples in the official User Guide instead.

However, if you find both the official User Guide and Quick Start Guide too intimidating to get started with Robot Framework, you could try this book to learn the concepts and possibilities, and then return to the official guides for learning the details.

Disclaimer: I got a review copy of the book from Packt.

Meet the Robot family (for Plone developers)

I only need to go two years back in time to a point when I had never really tried Selenium nor heard about Robot Framework. Then I attended my first Plone Conference in San Francisco in 2011.

Originally, I was planning to sprint on Deco at the San Francisco conference sprints, because I had spent the previous winter developing a tile-based in-house e-Portfolio management app (in Finnish) for my employer. Little did I know. I didn't get my employer's disclaimer for the Plone contributor agreement in time, could not really get involved in the Deco sprint, and ended up being completely side-tracked into the world of acceptance testing.

I can thank Ed Manlove for teaching me how to set up and run Selenium during the San Francisco conference. I also attended Godefroid Chapelle's session about how we could make Selenium testing easy with Robot Framework and its Selenium RC -library. Back then, writing and running robot tests for Plone was not as convenient as it is today, but it was still inspiring enough to get us where we are now.

During the last two years we have written and contributed to many robot-related packages. Now it's more than time to summarize what all these robot packages are and what they can do for us:

http://3.bp.blogspot.com/-DWOCJ1iWuws/Ukc0hU8ySpI/AAAAAAAAAhs/sRtPR56Hkwc/s1600/meet_the_robot_family.png

Robot Framework (core)

Robot Framework is a well documented standalone generic test automation framework for acceptance testing and acceptance test-driven development. It's written in Python and has no other dependencies. It has an active core development team, and development of the core framework is supported by Nokia Siemens Networks.

To make it clear: Robot Framework has no ties to Plone. Also, it can be used completely without Selenium (as most of its users have always done).

The first Plone-related contribution to Robot Framework that I'm aware of is Plone Foundation's GSOC 2013 project to update Robot Framework's ReST parser to support the so-called space separated Robot Framework test syntax (that's our favourite robot syntax). This should be included in Robot Framework 2.8.2 and later.

RobotSuite

RobotSuite, authored by me, provides helpers for wrapping any Robot Framework test suite into a robotsuite.RobotTestSuite, which makes the robot test suite compatible with the Python standard unittest-library's unittest.TestSuite.

RobotSuite makes it possible to run Robot Framework tests using unittest-compatible test runners, like zope.testrunner.
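
For example, a minimal sketch (the file and suite names here are hypothetical) of exposing a robot suite to such a runner through the standard test_suite() convention:

# test_robot.py -- a minimal, hypothetical example of exposing a Robot
# Framework suite to a unittest-compatible runner such as zope.testrunner.
import unittest

import robotsuite


def test_suite():
    suite = unittest.TestSuite()
    suite.addTest(robotsuite.RobotTestSuite('test_example.robot'))
    return suite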

robotframework-selenium2library

Selenium2Library is a web testing library for Robot Framework that leverages the Selenium 2 (WebDriver) libraries. It's a rewrite of the original Selenium RC -library. Ed Manlove, a member of our Plone community, is one of its current maintainers.

While Robot Framework and Selenium2Library are enough to run robot browser tests for Plone, RobotSuite makes it possible to run the tests with zope.testrunner. And that makes it possible to run tests properly against volatile plone.app.testing-sandboxed Plone, with our custom test fixtures, so that each test is run in isolation.

plone.app.robotframework

plone.app.robotframework, authored by me and a lot of contributors on top of Godefroid Chapelle's original work for plone.act, is our dedicated robot testing integration library for Plone. It is not required for testing Plone with robot, but it provides conventions and a lot of convenient helpers, including:

  • variables for writing test suites, which support Selenium grids
  • variables and keywords for writing SauceLabs-compatible test suites
  • remote keyword framework for writing fast test setup keywords in Python
  • autologin remote library to skip login forms in Selenium tests
  • Zope2Server-library for writing and running robot tests based on plone.app.testing-fixtures with just pure Robot Framework, completely without robotsuite and zope.testrunner
  • robot-server-script for running volatile Plone-server with given plone.app.testing-fixture
  • code-reloading robot-server to support test fixture driven development in sauna.reload-style
  • optional SpeakJS-integration, which can make your Plone talk in screencasts
  • shared Plone-keyword library for robot (still in development).

robotframework-selenium2screenshots

Selenium2Screenshots, authored by me, is a Robot Framework keyword library for capturing, annotating and cropping screenshots with Selenium2Library and jQuery.

It's a fun tool for creating screencasts with robot or screenshots for documentation purposes. The package has no dependencies on Plone so it can be used also with other projects.

robotframework-selenium2accessibility

Selenium2Accessibility, authored by me (from Paul Roeland's idea), is a highly experimental Robot Framework library for automating accessibility regression tests using Selenium. It bundles a special Firefox-profile with WebAim's WAVE Toolbar Firefox extension, WCAG Contrast checker Firefox extension and a custom Firefox extension to provide JavaScript-bridge between robot and those Firefox extensions.

It is not proven yet, but the library may help prevent new accessibility issues from being introduced.

sphinxcontrib-robotframework

Finally, the Robot Framework integration for Sphinx, authored in a GSOC 2013 project, executes embedded Robot Framework tests when Sphinx-based documentation is being built. This makes it possible to embed Robot Framework tests into documentation so that just building the documentation generates all the screenshots the documentation needs. For an example, see http://elvenmagic.pandala.org/ and its source.

Of course, it's time consuming to write tests just for generating screenshots. But once those tests have been written, they can keep the screenshots always up-to-date. And if the tests are written well, those tests can be used to generate screenshots for different languages (or maybe even for different themes) for free.

The most difficult part in acceptance testing, in general, is to decide what to test. And that's even more important for Selenium tests, which are slow to write and slow to run.

The original idea for robot tests for Plone was to test all the JavaScript-based features. Now those are already being tested with unit tests (with speed and in more detail) in the mockup project, so this Sphinx integration for robot tests could give us a new answer:

If a feature is documented in the user documentation, it should be tested with an acceptance test. And now those tests could be embedded in the same documentation, where they would pay for themselves by creating screenshots for the documentation.

I'm quite excited about this robot family we have created together.

P.S. Yes, I'm sorry for not being able to attend the Plone Conference this year.

Your commit broke my blog!

But how much does it make sense to write acceptance tests while creating screenshots for new blog posts presenting Plone add-ons?

I have a new experimental project:

What you should see is a Sphinx-built blog, written in ReStructuredText, being built, including annotated screenshots generated from embedded Robot Framework acceptance tests describing a Plone add-on. (Technically, each blog post could present different add-ons.)

The resulting blog could be hosted on GitHub, served via GitHub Pages, and it should be possible to edit it collaboratively using branching/forking and pull requests.

I'm not sure if this would really work out, and won't include this in Planet yet, but you can see the preview at

and example configuration with raw posts at

and ping me at Twitter or IRC, if you are interested in participating.

Ingredients