Plone Conference Barcelona 2017

It was awesome to be back at Plone Conference this year. Finally! We have participated in Plone conferences in 2009, 2011–2012 and 2014–2017, but for me the previous one was years ago: Plone Conference Bristol in 2014. Needless to say, I have missed the warm and welcoming atmosphere of a Plone conference, and it's my pleasure to report that Barcelona did not let me down. Even the weather was still warm there in October.

This year there was no single big Plone news at the conference. The latest major release of Plone CMS was released already two years ago, and the next feature release is still waiting for its gold master. Yet, there was still a lot of good news, and putting all the puzzle pieces together resulted in a clear picture of the future of Plone.

Disclaimer: These are obviously just my personal opinions on all these things Plone...

Published originally at http://tech.blog.jyu.fi/2017/10/plone-conference-barcelona-2017.html

https://4.bp.blogspot.com/-RPnuOC4AJH8/WfWkebmt4lI/AAAAAAAABLI/CRNo1l_VN2kclL7AFq7MD9poEa9-sKMFQCLcBGAs/s1600/DMsDO13WsAAh3zG.jpg%253Alarge.jpeg

Plone Conference Barcelona was so much of fun that I took a piece of it with me back home.

Plone 2020 and beyond

First, let's make it clear that Plone CMS remains a safe bet for a long-term enterprise CMS solution. If there ever was any doubt whether Plone could make it to Python 3 in time before the end of Python 2.7 maintenance in 2020, there should be no more. Plone will make it.

All the major blockers seem to have been solved, and the rest is just hard work left for our community (check the related talks by Alexander and Hanno about the recent events on that). The Python 3 version of the Zope application server powering Plone is already in beta, and it is scheduled to be released within a year. Plone, for sure, still has plenty of packages to be ported from Python 2.7 to Python 3, but there are already many sprints scheduled to continue that work in the near future (including the already completed Barcelona Conference sprints). We might even have an alpha version of Plone on Python 3 before the end of 2018.

In addition to that, it's always good to mention that the Plone Foundation has continued to do its usual great job with all the paperwork around managing Plone's copyrights and trademarks.

All these should be good and relaxing news for any long-term Plone user.

Let's go frontend!

The greatest challenge for Plone CMS seems to be keeping up with the ever increasing UX expectations of the day, while complying with high accessibility standards. After Plone 5 rewrote the default theme and the whole front-end resource management in Plone, there are no longer blockers for using any current front-end tech with Plone. But just being able to use some tech is not enough – the real work for better UX also needs to be done. And even though a lot has been done for Plone 5 and 5.1, that work seems to never end.

Plone Conference Barcelona included a great amount of front-end, user experience and accessibility related talks to educate our community. So many that I can only mention a few.

At first, there were talks regarding the current Plone user interface: Johannes gave a bit technical, but very comprehensive talk on how the new front-end resource registries in Plone 5 really work. My talk showed how to combine the ancient powers of the Zope application server with the modern Plone 5 theming support to achieve shorter iterations and faster deployments when developing new UX features. Our Rikupekka talked about our migration experiences from Plone 4 to Plone 5, and gave a demo of the UI features we have developed using the approach I discussed in my talk. Finally, I want to mention Wildcard's Kim's talk about Castle CMS, which really showcased how much difference well-led and focused UX development for a Plone-based distribution can make in just about a year. Although, the fact that Castle's development had to be forked a bit from the main Plone distribution also tells how difficult it is to make the same UX please everyone.

Then there were many talks about the future: there's a new branch of Plone user interfaces built completely in JavaScript on top of the great Plone REST API (which Timo gave a nice presentation about). With Plone REST API it's possible to combine the great and robust CMS features of our secure Plone backend with a leading-edge JavaScript-based frontend. It also makes Plone-based solutions feasible for the current generation of front-end developers, because only very basic Plone knowledge is needed to get started. And while there is no complete replacement of the Plone user interface in JavaScript yet, there are SDK-like projects with many familiar UI components already for ReactJS, Angular (check Eric's talk) and even VueJS.

If these don't feel ambitious enough, there was one more thing: Albert's talk about Pastanaga UI – a proposal for next generation UI for generic CMSs.

Guillotina – server for a more civilized age

I'm not sure how common a mistake it is, but at least we have sometimes ended up using Plone as a framework for projects for which Plone was not really the most optimal solution. That has happened because Plone has some very unique features we love and trust: an object database with URL traversal, the extremely flexible Zope Component Architecture, and a very proven security model especially designed for hierarchical data.

At the Barcelona conference, Nathan from Onna presented their new ”AsyncIO REST Resource Application Server” called Guillotina (open sourced through the Plone Foundation). What makes Guillotina very special and interesting is that it has all those unique features we have learned to love in the Plone ”framework”, but with a minimal server footprint and first class support for asynchronous programming using the Python 3 AsyncIO event loop. That should allow Guillotina to go places where no Plone has gone before.

I really hope the next year brings us a suitable project to try Guillotina in practice...

There and back again

To summarize all this, here's my picture of the future of Plone, based on Plone Conference Barcelona 2017, in three sentences:

  • Plone CMS as we know it remains here to stay – the current users remain safe with Plone
  • Plone REST API and all the UI SDKs based on it ”save Plone” by making it a feasible solution for content management related progressive web apps
  • Guillotina ”saves Plone developers” by allowing them to transfer their current Plone ”framework” knowledge into the era of high-performance Python 3 AsyncIO microservices.

Obviously there was a lot more to the conference than this. There were a lot of great talks by talented speakers. It was great to see all the old friends and make some new ones. I had a chance to meet my GSOC 2017 student Oshane Bailey. And there are no parties like the parties at Plone Conferences.

Thanks once again to all the organizers. It was a pleasure to be there.

We'll see if I get to see Tokyo next year...

https://4.bp.blogspot.com/-MtyXqb6O3Yw/WfWkeV2I-tI/AAAAAAAABLM/MVy3r6Utv-MM-3z7Dxaqi28CkS0Zn_IDgCLcBGAs/s1600/DMlL1SzWsAM9mfO.jpg

Photo of me, Oshane Bailey and David Bain by Maik Derstappen. They said this pose is to honor Usain Bolt.

Building instant features with advanced Plone themes

Plone, ”The Ultimate Enterprise CMS”, ships with built-in batteries for building sophisticated content management solutions without writing a single line of new Python code. For example, a fresh installation of Plone allows you to build custom structured content types with custom HTML views, define custom state-based workflows, customize various user interface elements, and finish the user experience by configuring custom event-triggered content rules to react to users' actions. Not to mention the Diazo-based theming tool, which allows unlimited tweaking of the resulting HTML.

All this by just clicking and typing things through-the-web (TTW) with your browser.

Yet, still some say that Plone is difficult to customize and extend.

The flip side of customizing Plone TTW is that it's way too easy to lose track of your customizations. That adds to technical debt and therefore to the cost of maintaining those customizations over years and upgrades to future Plone releases. The suggested solution to avoid those problems has long been to avoid TTW customizations altogether, in favor of customizing everything using ”buildout-installed file-system Python packages”. But that makes customizing Plone feel unnecessarily difficult and technical.

At Plone Conference 2017 I gave a talk where I showed an alternative: if it were possible to bundle all those customizations together, for example in a TTW managed theme, maintaining those customizations would no longer be the blocker.

Customizing Plone could be made easy again.

Requirements

Technically, Plone has supported exporting and importing most of the possible TTW customizations for more than ten years, but the user interface for that has been cumbersomely technical. Finally, Plone 4.1 introduced a new Diazo-based theming feature with an easy to use theming control panel and theme editor. And now, with only a couple of extra packages in your Plone setup, Plone's theming features get super powers to apply site customizations with any theme.

To complete the following example, you need a Plone site with these two extra Python packages installed: collective.themesitesetup and collective.themefragments.

As usual, those can be installed by customizing and running buildout:

[instance]
eggs =
    ...
    collective.themesitesetup
    collective.themefragments

or you can try out with the official Plone docker image:

$ docker run -p 8080:8080 -e PLONE_ADDONS="collective.themesitesetup collective.themefragments" plone fg

Case of the day: Wall of images

As an example feature, we'll build a simple folder view that displays a list of images of varying size in an optimal grid layout using the popular Masonry.js layout library, with help from another library called imagesLoaded.

To summarize, building that view requires:

  • Providing JS bundles for both Masonry and imagesLoaded
  • Registering those bundles into Plone resource registry
  • A folder view template that renders images in that folder
  • A way to configure that view on a folder
  • JS code to initialize Masonry layout on that view
https://3.bp.blogspot.com/-LNyBEyLbLxE/We4-2UJN28I/AAAAAAAABKc/1W8CRGj0ykc7k1ov9zGOagl6CmxNIEqbQCLcBGAs/s1600/three-columns.png

Getting started with theming

To get a fast start, we create a dummy theme base named demotheme that simply re-uses styles and rules from Barceloneta, the default theme of Plone 5. Your theme base should contain the following files:

  • ./index.html
  • ./rules.xml
  • ./scripts.js
  • ./styles.css
  • ./manifest.cfg

At first, ./index.html is just a copy of the same theme file from Barceloneta:

<!doctype html>
<html>
  <head>
    <title>Plone Theme</title>
    <link rel="shortcut icon" type="image/x-icon"
          href="++theme++barceloneta/barceloneta-favicon.ico" />
    <link rel="apple-touch-icon"
          href="++theme++barceloneta/barceloneta-apple-touch-icon.png" />
    <link rel="apple-touch-icon-precomposed" sizes="144x144"
          href="++theme++barceloneta/barceloneta-apple-touch-icon-144x144-precomposed.png" />
    <link rel="apple-touch-icon-precomposed" sizes="114x114"
          href="++theme++barceloneta/barceloneta-apple-touch-icon-114x114-precomposed.png" />
    <link rel="apple-touch-icon-precomposed" sizes="72x72"
          href="++theme++barceloneta/barceloneta-apple-touch-icon-72x72-precomposed.png" />
    <link rel="apple-touch-icon-precomposed" sizes="57x57"
          href="++theme++barceloneta/barceloneta-apple-touch-icon-57x57-precomposed.png" />
    <link rel="apple-touch-icon-precomposed"
          href="++theme++barceloneta/barceloneta-apple-touch-icon-precomposed.png" />
  </head>
  <body>
    <section id="portal-toolbar">
    </section>
    <div class="outer-wrapper">
      <header id="content-header">
        <div class="container">
          <header id="portal-top">
          </header>
          <div id="anonymous-actions">
          </div>
        </div>
      </header>
      <div id="mainnavigation-wrapper">
        <div id="mainnavigation">
        </div>
      </div>
      <div id="hero" class="principal">
        <div class="container">
          <div class="gigantic">
          </div>
        </div>
      </div>
      <div id="above-content-wrapper">
          <div id="above-content">
          </div>
      </div>
      <div class="container">
        <div class="row">
          <aside id="global_statusmessage"></aside>
        </div>
        <main id="main-container" class="row row-offcanvas row-offcanvas-right">
          <div id="column1-container">
          </div>
          <div id="content-container">
          </div>
          <div id="column2-container">
          </div>
        </main><!--/row-->
      </div><!--/container-->
    </div> <!--/outer-wrapper -->
    <footer id="portal-footer-wrapper">
      <div class="container" id="portal-footer"></div>
    </footer>
  </body>
</html>

Then, ./rules.xml does nothing more than includes the existing rules directly from the always available Barceloneta theme:

<?xml version="1.0" encoding="UTF-8"?>
<rules
    xmlns="http://namespaces.plone.org/diazo"
    xmlns:css="http://namespaces.plone.org/diazo/css"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:xi="http://www.w3.org/2001/XInclude">

  <!-- Import Barceloneta rules -->
  <xi:include href="++theme++barceloneta/rules.xml" />

</rules>

File ./scripts.js starts empty, and file ./styles.css starts with the following content to reuse styles from the Barceloneta theme:

@import "../++theme++barceloneta/less/barceloneta-compiled.css";

.plone-breadcrumb ol {
    padding: 18px 0;
    font-size: 14px;
}

They both should be registered as the implicit ”theme bundle” (or ”Diazo bundle”) in ./manifest.cfg by setting the production-css and production-js attributes as follows:

[theme]
title = Demo Theme
description =
production-css = /++theme++demotheme/styles.css
production-js = /++theme++demotheme/scripts.js

Saving these files and enabling the theme should already give the basic Barceloneta experience. But let's continue to extend it with our special feature...

Registering Masonry.js bundles

The Plone 5 resource registry supports many ways to configure new front-end resources. We'll go with the easy way, simply downloading the 3rd party JS distributions and registering them mostly as such for Plone with the following steps:

  1. Create folder ./bundles into theme to keep the required front-end bundles separate from the other theme files

  2. Download the official minified Masonry.js distribution and save it as ./bundles/masonry.pkgd.min.js

  3. Download the official minified imagesLoaded distribution and save it as ./bundles/imagesloaded.pkgd.min.js

  4. Edit both of the previous files by adding line

    (function() { var require, define;
    

    into the beginning of the file, and line

    })();
    

    into the end of the file. These are required for any ”AMD packaged” JS distribution to work in Plone's Require.js based JS environment.

  5. Add two empty files ./bundles/masonry.pkgd.min.css and ./bundles/imagesloaded.pkgd.min.css for pleasing the Plone resource registry in the next step.

  6. Create folder ./install with file ./install/registry.xml with the following contents to register the above bundles into Plone resource registry:

    <?xml version="1.0"?>
    <registry>
      <records prefix="plone.bundles/imagesloaded-js"
               interface="Products.CMFPlone.interfaces.IBundleRegistry">
        <value key="depends">plone</value>
        <value key="jscompilation">++theme++demotheme/bundles/imagesloaded.pkgd.min.js</value>
        <value key="csscompilation">++theme++demotheme/bundles/imagesloaded.pkgd.min.css</value>
        <value key="last_compilation">2017-10-06 00:00:00</value>
        <value key="compile">False</value>
        <value key="enabled">True</value>
      </records>
      <records prefix="plone.bundles/masonry-js"
               interface="Products.CMFPlone.interfaces.IBundleRegistry">
        <value key="depends">imagesloaded-js</value>
        <value key="jscompilation">++theme++demotheme/bundles/masonry.pkgd.min.js</value>
        <value key="csscompilation">++theme++demotheme/bundles/masonry.pkgd.min.css</value>
        <value key="last_compilation">2017-10-06 00:00:00</value>
        <value key="compile">False</value>
        <value key="enabled">True</value>
      </records>
    </registry>
    
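The wrapping described in step 4 is also easy to script instead of editing the files by hand. A minimal sketch, assuming a POSIX shell and the file names used above (the function name is my own):

```shell
# Hide Require.js's global `require` and `define` from an AMD-packaged
# distribution by wrapping the whole file in a closure, as step 4 requires.
wrap_amd() {
    printf '(function() { var require, define;\n' > "$1.tmp"
    cat "$1" >> "$1.tmp"
    printf '\n})();\n' >> "$1.tmp"
    mv "$1.tmp" "$1"
}

# Usage, after downloading the bundles:
# wrap_amd bundles/masonry.pkgd.min.js
# wrap_amd bundles/imagesloaded.pkgd.min.js
```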

Now, once the edited theme files are saved and the theme re-activated or updated, thanks to collective.themesitesetup, every response from our site should include these new resources.

Creating a folder view with list of images

Creating a view with collective.themefragments is similar to writing any view template for Plone. Simply add a folder ./fragments into your theme with our example view ./fragments/wall_of_images.pt with the following contents:

<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en"
      xmlns:tal="http://xml.zope.org/namespaces/tal"
      xmlns:metal="http://xml.zope.org/namespaces/metal"
      xmlns:i18n="http://xml.zope.org/namespaces/i18n"
      lang="en"
      metal:use-macro="context/main_template/macros/master"
      i18n:domain="plone">
<body>
<metal:main fill-slot="main">
  <metal:content-core define-macro="content-core">
    <div class="wall-of-images container-fluid"
         tal:define="items context/@@contentlisting">
      <tal:image tal:repeat="item items">
        <img tal:define="obj item/getObject;
                         scale_func obj/@@images;
                         scaled_image python:scale_func.scale('image', scale='preview')"
             tal:replace="structure python:scaled_image.tag()"
             tal:on-error="string:error" />
      </tal:image>
    </div>
  </metal:content-core>
</metal:main>
</body>
</html>

Please note how the view template uses the plone.app.contentlisting API for iterating through every item in the folder and then the plone.app.imaging API for rendering image tags for scaled images. Also, note the use of tal:on-error to suppress all possible error messages (you may not always want that, though).

Enabling the view on a site

Unfortunately, collective.themefragments' views do not magically appear in the Plone toolbar's display menu yet. Fortunately, those views can either be set as the default view of a content type or manually assigned to a content item by setting its layout property:

  1. At first, let's assume that we have a folder

    http://localhost:8080/Plone/wall-of-images

  2. Then, let's open the good old properties edit form for it

    http://localhost:8080/Plone/wall-of-images/manage_propertiesForm

  3. Finally, let's add a new property of type string with name layout and value ++themefragment++wall_of_images

Now the content should be rendered using our brand new template, displaying all the images one after another. It still does not look as intended, though, because nothing enables Masonry.js for it.

Invoking Masonry.js on the view

To enable Masonry.js on our brand new view, we could add the following code into a theme file ./scripts.js:

jQuery(function($) {
  $('.wall-of-images').imagesLoaded(function() {
    $('.wall-of-images').masonry({
      itemSelector: 'img',
      percentPosition: true
    });
  });
});

That code simply uses jQuery to find our view template's main element and configures Masonry.js for it after every image below it has been loaded.

An alternative for that jQuery script would be to rely on Plone's Require.js setup and define the code as a pattern:

require([
  'pat-base'
], function(Base) {
  'use strict';

  var Masonry = Base.extend({
    name: 'masonry',
    trigger: '.wall-of-images',

    init: function() {
      var self = this;
      self.$el.imagesLoaded(function() {
        self.$el.masonry({
          itemSelector: 'img',
          percentPosition: true
        });
      });
    }
  });

  return Masonry;
});

But something is still missing. Masonry.js is distributed without any default styles. To make our wall of images look as it should, we need to define responsive styles with our desired breakpoints in ./styles.css:

@media only screen {
   .wall-of-images {
        padding-left: 0;
        padding-right: 0;
        margin-top: -20px;
    }
    .wall-of-images img {
        float: left;
        width: 100%;
        height: auto;
        border: 5px solid transparent;
    }
}

@media only screen and (min-width: 768px) {
    .wall-of-images img {
        float: left;
        width: 50%;
        height: auto;
    }
}

@media screen and (min-width: 900px) {
    .wall-of-images img {
        float: left;
        width: 33.3333333%;
        height: auto;
    }
}

@media screen and (min-width: 1200px) {
  .wall-of-images img {
        float: left;
        width: 25%;
        height: auto;
    }
}

Finally, we'd like our wall of images to be displayed at full browser window width. That's a bit tricky, because we need to escape the Barceloneta theme's default content container, but it is still fully possible by adding the following Diazo rules into ./rules.xml:

<!-- Wall of Images -->
<rules css:if-content=".wall-of-images">
  <!-- Make fullwidth -->
  <replace css:theme=".outer-wrapper > .container"
           css:content=".wall-of-images" />
  <!-- Include status message -->
  <before css:theme=".outer-wrapper > .container"
          css:content="#global_statusmessage"
          css:if-content=".wall-of-images" />
  <replace css:content="#global_statusmessage">
    <div id="global_statusmessage" class="container-fluid">
      <xsl:apply-templates />
    </div>
  </replace>
</rules>

Now our wall of images shines in every resolution:

https://2.bp.blogspot.com/-BvYcyG5TSaw/We4-2X7YH3I/AAAAAAAABKg/-plstMVUlqASoViy7xW9bQVPn9dC__c3wCLcBGAs/s320/four-columns.png
https://3.bp.blogspot.com/-LNyBEyLbLxE/We4-2UJN28I/AAAAAAAABKc/1W8CRGj0ykc7k1ov9zGOagl6CmxNIEqbQCLcBGAs/s320/three-columns.png
https://3.bp.blogspot.com/-IBT7ypReBsY/We4-3N3cn9I/AAAAAAAABKo/izx9FYyuItUToO0dP5AU_K18Jpm67OHlgCLcBGAs/s320/two-columns.png
https://3.bp.blogspot.com/-W7aRImzYMYE/We4-2nVw6UI/AAAAAAAABKk/f4hSBVIB5LIm8iSK6gMxijz1Ot-Ax6Y4ACLcBGAs/s320/one-column.png

PS. If you want to learn more, my talk materials include a more complex example with custom content types, workflows, permissions, portlet assignments and content rules.

Tile based layouts and ESI on Plone

Plone's Blocks: Grid based layouts is an old manifesto (originally dating back to 2008 or 2009) about simplifying Plone's (ME)TAL macro and content provider (portlet and viewlet) based layout machinery with a composition of independently rendered static HTML layouts and dynamic content tiles. The initial implementation of the manifesto was completed already by 2011 with the first versions of plone.tiles and plone.app.blocks. It was supposed to be a core part of Plone 5, but continues to be delayed because of the failure of the Plone Deco project. Sad.

Because of the separation of content and composition, the new approach introduced new options for the actual page rendering process: it was no longer necessary to render a complete Plone page at once, but each page could be composed of multiple independently rendered and cached content fragments. Of course, the complete rendering could still be done in Plone at once like before, but it also became possible to compose the page from its fragments with ESI (Edge Side Includes) in a compatible caching proxy, or with JavaScript (e.g. with pat-inject) in the end user's browser. Both of these approaches provide parallel rendering of page fragments, while each of those fragments can be cached independently simply by using the standard HTTP caching headers. Which is great.

So, what could all this mean in practice? Thanks to tile based layouts, Varnish and ESI, we are now able to cache every cacheable part of our pages also for logged-in users, resulting in noticeably better performance and user experience.

(And yes, this approach may already look outdated when compared with the various front-end composition options of the headless CMS era, but it still solves real issues with the current Plone with server-rendered HTML.)

Blocks rendering process revisited

To really understand the goals of tile based layouts, let's revisit the once revolutionary page composition process implemented in plone.app.blocks.

In the simplest form of rendering, a Plone page could still render a complete HTML document as before:

<!DOCTYPE html>
<html>
<head>
  <title>Title</title>
</head>
<body>
  <!-- ... -->
</body>
</html>

But instead of always rendering everything, with tile based layouts it became possible to speed up the main rendering of the page by delegating the rendering of dynamic parts of the page to separate independent renderers, called tiles:

<!DOCTYPE html>
<html>
<head>
  <title>Title</title>
</head>
<body>
  <!-- ... -->
  <div data-tile="TILE_URL" />
  <!-- ... -->
</body>
</html>

The page rendering output can include as many placeholder elements with a data-tile attribute as required, and expect something later to replace those elements with the contents defined by their URL values. That something is still Plone by default, but it could also be a middleware (like ESI in a caching proxy) or JavaScript in a browser.

The main benefits of decoupling rendering and composition like this (with either ESI or JavaScript based composition) include:

  1. Experiential speed-up, because the main page may already be partially or completely visible in the browser while the delegated parts are still being rendered
  2. Real speed-up, because the delegated parts may be rendered in parallel
  3. Real speed-up, because the delegated parts may be cached separately with optimized HTTP caching headers.

It's crucial that the value of the data-tile attribute is a full absolute or relative URL, and that the target address can be rendered independently from the main page. These assumptions make the composition logically independent from the underlying server technology. It's even possible to compose a page from snippets rendered by multiple different services.
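To make the composition step concrete, here is a minimal, hypothetical Python sketch of what an ESI middleware or browser-side script does with those placeholders (the fetch callable is a stand-in for a real, possibly cached, HTTP request; names are my own):

```python
import re

def compose(page_html, fetch):
    """Replace each <div data-tile="URL" /> placeholder with the
    independently rendered (and independently cacheable) tile body."""
    def render(match):
        tile_url = match.group(1)
        return fetch(tile_url)  # in real life: an HTTP GET, possibly cached
    return re.sub(r'<div data-tile="([^"]+)"\s*/>', render, page_html)

# Hypothetical usage with a fake tile renderer backed by a dict:
tiles = {"/@@news-tile": "<ul><li>News item</li></ul>"}
page = '<body><div data-tile="/@@news-tile" /></body>'
print(compose(page, tiles.get))  # <body><ul><li>News item</li></ul></body>
```

The point of the sketch is that the composer needs to know nothing about Plone: it only resolves URLs, which is exactly what makes ESI and browser-side variants interchangeable.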

In addition to the data-tile composition, plone.app.blocks provides a further composition to separate the content area page design (content layout) from its surroundings (site layout).

To use this additional site layout composition, a page rendering must define the URL of the used site layout and the panels (slots) it fills into that layout by using the additional data attributes data-layout and data-panel, as in the following example:

<!DOCTYPE html>
<html>
<body data-layout="LAYOUT_URL">
  <div data-panel="PANEL_ID">
    <!-- ... -->
    <div data-tile="TILE_URL" />
    <!-- ... -->
  </div>
</body>
</html>

Together, these attributes instruct the composition as follows: please load the site layout at LAYOUT_URL and render it with its panel named PANEL_ID filled with the children of this tag.

So, if the site layout in question would look like:

<!DOCTYPE html>
<html>
<head>
  <title>Title</title>
</head>
<body>
  <!-- ... -->
  <div data-panel="PANEL_ID">
    <!-- ... -->
  </div>
  <!-- ... -->
</body>
</html>

The main rendering of the page would look like:

<!DOCTYPE html>
<html>
<head>
  <title>Title</title>
</head>
<body>
  <!-- ... -->
  <div>
    <!-- ... -->
    <div data-tile="TILE_URL" />
    <!-- ... -->
  </div>
  <!-- ... -->
</body>
</html>

Obviously, the site layout could define multiple panels, and the content layout could fill anything from none to all of them.

Currently, this so-called panel merge is always done by Plone with transform code in plone.app.blocks, but technically it could also be done e.g. in a WSGI middleware, releasing Plone worker threads even earlier than is currently possible with just ESI or browser-side composition of tiles.
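As a rough illustration of what such a merge does (a toy sketch of my own, not the actual plone.app.blocks implementation), assuming non-nested panels and well-formed markup:

```python
import re

def merge_panels(site_layout, content_page):
    """Fill each <div data-panel="ID">...</div> in the site layout with the
    children of the matching data-panel element from the content page."""
    def panel_body(html, panel_id):
        m = re.search(
            r'<div data-panel="%s">(.*?)</div>' % re.escape(panel_id),
            html, re.S)
        return m.group(1) if m else None

    def fill(match):
        panel_id = match.group(1)
        body = panel_body(content_page, panel_id)
        # Keep the layout's own default content when the page
        # does not fill this panel.
        return ('<div data-panel="%s">%s</div>'
                % (panel_id, body if body is not None else match.group(2)))

    return re.sub(r'<div data-panel="([^"]+)">(.*?)</div>', fill,
                  site_layout, flags=re.S)

layout = '<body><h1>Site</h1><div data-panel="content">default</div></body>'
page = ('<body data-layout="LAYOUT_URL">'
        '<div data-panel="content"><p>Hi</p></div></body>')
print(merge_panels(layout, page))
# <body><h1>Site</h1><div data-panel="content"><p>Hi</p></div></body>
```

A real implementation parses the HTML properly and resolves LAYOUT_URL itself; the sketch only shows why the merge is mechanical enough to live outside Plone.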

Caching ESI tiles for logged-in users

ESI (Edge Side Includes) is an old proposal (mainly by Akamai) for an XML namespace to describe HTML page composition from multiple separate resources. A quite minimal subset of the language is also implemented in Varnish, a popular and recommended caching proxy in Plone setups too.
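For reference, when tile placeholders are rendered in ESI mode, each one ends up as an include tag along these lines (illustrative only):

```xml
<esi:include src="TILE_URL" />
```

The caching proxy resolves it with an internal subrequest, so the browser only ever sees the fully composed page.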

Using and enabling ESI with plone.tiles, plone.app.blocks and Varnish is well documented in those packages' READMEs. Yet, something we discovered only very recently was how to use ESI to safely cache tiles for logged-in users.

Of course, by default, Plone never caches anything for logged-in users. At first, plone.app.caching declares all responses private, unless they should be visible to anonymous users. And then, the recommended Varnish configuration skips caches whenever a Plone session cookie is present in the request. So, by default, we are protected from both sides. (And that's great design to protect us from our own mistakes!)

The first step to support caching for logged-in users is to allow Varnish (in default.vcl) to do cache lookup for ESI tiles:

sub vcl_recv {
  # ...
  if (req.esi_level > 0) {
      set req.http.X-Esi-Level = req.esi_level;
      return (hash);
  } else {
      unset req.http.X-Esi-Level;
  }
  # ...
}

Of course, this would allow lookups only for completely public tiles, because only those could be cached by Varnish by default. That's why, in the example above, we also manage a completely new header, X-Esi-Level, and we make sure it's only present when Varnish is doing its internal subrequests for ESI tiles.

With that extra header in place, we can instruct Varnish to hash responses to ESI subrequests separately from responses to main requests. In other words, we split the Varnish cache into public and private areas. While the public cache remains accessible to anyone knowing just the cached URL, the private one is only accessible to Varnish itself, when it's doing ESI subrequests:

sub vcl_hash {
    hash_data(req.url);
    if (req.http.host) {
        hash_data(req.http.host);
    } else {
        hash_data(server.ip);
    }
    if (req.http.X-Esi-Level) {
       hash_data(req.http.X-Esi-Level);
    }
    return (lookup);
}

Now we are almost ready to let Plone allow caching of restricted tiles. But only tiles. Because X-Esi-Level is only set for Varnish's internal subrequests for tiles, all other requests are handled as before. This is done by monkey patching a utility method in plone.app.caching to allow a public Cache-Control header for otherwise restricted tiles when the trusted X-Esi-Level header is in place:

from AccessControl.PermissionRole import rolesForPermissionOn
from zope.globalrequest import getRequest


def visibleToRole(published, role, permission='View'):
    request = getRequest()
    if request is not None and request.getHeader('X-Esi-Level'):
        return True
    else:
        return role in rolesForPermissionOn(permission, published)

Please, don't do this, unless you really know and test what you are doing!

Because something crucial is still missing: even with the private cache for ESI tiles, Varnish will still cache tiles just by URL. And that would mean that, by default, all users would get whatever version of the tile was cached first.

To really make it safe to cache restricted tiles for logged-in users, we must ensure that cacheable tiles have a unique URL for users with different roles. We fixed this by implementing a custom transform, which hashes the roles of the current user (with a time based salt) into an extra query string parameter for each tile URL in the rendered page. The result: users with the same set of roles share the same cache of tiles. Fast.
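The URL-salting idea can be sketched as follows (hypothetical names of my own; the actual transform rewrites tile URLs while serializing the rendered page):

```python
import hashlib

def salt_tile_url(url, roles, salt):
    """Append a hash of the user's roles (plus a time based salt) to a tile
    URL, so that users with the same roles share one Varnish cache entry,
    while users with different roles get a different URL and cache slot."""
    digest = hashlib.sha256(
        ('|'.join(sorted(roles)) + salt).encode('utf-8')).hexdigest()[:12]
    sep = '&' if '?' in url else '?'
    return '%s%s_rolehash=%s' % (url, sep, digest)

# Two users with the same roles (in any order) get the same cacheable URL:
a = salt_tile_url('/@@news-tile', ['Member', 'Editor'], salt='2017-10')
b = salt_tile_url('/@@news-tile', ['Editor', 'Member'], salt='2017-10')
print(a == b)  # True
```

Rotating the salt periodically also expires all the salted cache entries at once, which is a cheap way to bound how long a stale role assignment can keep serving old tiles.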

On building fat themes for Plone

Could fat themes become the common ground between filesystem Plone developers and through-the-web integrators?

Plone ships with a lot of bundled batteries for building sophisticated content management solutions. Content types, workflows, portlets and event-based content rules can all be customized just by using a browser, without writing a single line of new code. Yet, bad early experiences from maintaining such through-the-web implementations have made it common to disregard that approach, and prefer a (more technical) file system based approach instead.

During the last few years, thanks to the Diazo based theming framework for Plone, there has been a slow renaissance of through-the-web customization of Plone. Besides Diazo itself, the new theming framework introduced a clever new storage layer, plone.resource, which supported both Python packaged and through-the-web developed themes. In addition, the new theming editor made it easy to export through-the-web developed themes as re-usable zip packages.

Initially, I was hoping for some kind of new TTW add-on approach to emerge on top of plone.resource. Nowadays it's getting clear that we are just going to add more features into themes instead. Maybe it's better that way.

By fat themes, I mean themes which provide not only the look, but also some behavior for the site. Most of my themes like this have provided all of the customizable configuration for their sites. The main benefit has been faster iterations, because I've been able to deliver updates without running buildout or restarting the site.

Obviously, configuring everything in a theme is not yet possible with vanilla Plone, but requires a selection of theming related add-ons and tools:

collective.themefragments

collective.themefragments makes it possible to include Zope/Chameleon page template fragments in your theme, and inject them into rendered content using Diazo rules. It was originally proposed as a core feature for Plone theming (by Martin Aspeli), but because it was rejected, I had to release it as its own add-on. Later I added support for restricted Python scripts (callable from those fragments) and a tile to make fragments addable into Plone Mosaic layouts.

Use of themefragments requires the add-on to be available for Plone (e.g. by adding it to eggs in buildout and running the buildout) and writing fragment templates into the fragments subdirectory of the theme:

./fragments/title.pt:

<html>
<title></title>
<body>
  <h1 tal:content="context/Title">Title</h1>
</body>
</html>

And injecting them in ./rules.xml:

<replace css:theme="h1" css:content="h1" href="/@@theme-fragment/title" />

or:

<replace css:theme="h1">
  <xsl:copy-of select="document('@@theme-fragment/title',
                       $diazo-base-document)/html/body/*" />
</replace>

depending on the flavor of your Diazo rules.
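
Fragments can also call restricted Python scripts. A purely hypothetical companion script might look like this (the filename and helper function are illustrative only, not from the add-on's documentation; such scripts run under Zope 2 RestrictedPython, so only safe builtins and plain logic are available):

```python
# ./fragments/title.py -- hypothetical script to accompany the fragment above.
# Runs under RestrictedPython, so keep to safe builtins and plain logic.


def format_title(title):
    """Return a display-safe title, falling back to a placeholder."""
    title = (title or u'Untitled').strip()
    # Capitalize only the first character, leave the rest untouched.
    return title[:1].upper() + title[1:]
```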

It's good to know that rendering fragments and executing their scripts rely on Zope 2 restricted Python, and may cause unexpected Unauthorized exceptions (because you write them as an admin, but viewers may be unauthenticated). More than once I've needed to set the verbose-security flag to figure out the real cause of such an exception...

rapido.plone

rapido.plone must be mentioned, even though I don't have it in production myself yet. Rapido goes beyond just customizing existing features of Plone by making it possible to implement completely new interactive features purely in a theme. Rapido is the spiritual successor of Plomino and probably the most powerful add-on out there when it comes to customizing Plone.

When compared to themefragments, Rapido is more permissive in its scripts (e.g. it allows use of plone.api). It also provides its own fast storage layer (Souper) for storing, indexing and accessing custom data.

collective.themesitesetup

collective.themesitesetup has become my "Swiss Army knife" for configuring Plone sites from a theme. It's a theming plugin which imports Generic Setup steps directly from the theme when the theme is being activated. It also includes helper views for exporting the current exportable site configuration into editable theme directories.

This is the theming add-on, which makes it possible to bundle custom content types, workflows, portlets, content rule configurations, registry configuration and other Generic Setup-configurable stuff in a theme.

Recently, I added support for also importing translation domains, XML schemas and custom permissions.

A theme manifest enabling the plugin (once the plugin is available for Plone) could look like this:

./manifest.cfg:

...

[theme:genericsetup]
permissions =
    MyProject.AddBlogpost    MyProject: Add Blogpost

and the theme package might include files like:

./install/registry.xml
./install/rolemap.xml
./install/types/Blog.xml
./install/types.xml
./install/workflows/blog_workflow/definition.xml
./install/workflows/blog_workflow/scripts
./install/workflows/blog_workflow/scripts/addExpirationDate.py
./install/workflows.xml

./models/Blog.xml

./locales/manual.pot
./locales/myproject.pot
./locales/plone.pot
./locales/fi/LC_MESSAGES/myproject.po
./locales/fi/LC_MESSAGES/plone.po

collective.taxonomy

collective.taxonomy is not really a theming plugin, but makes it possible to include large named vocabularies with translations in a Generic Setup profile. That makes it a nice companion to collective.themesitesetup by keeping XML schemas clean from large vocabularies.

collective.dexteritytextindexer

collective.dexteritytextindexer is also "just" a normal add-on, but because it adds searchable text indexing support for custom fields of custom content types, it is a mandatory add-on when a theme contains new content types.

plonetheme.webpacktemplate

Of course, the core of any theme is still the CSS and JavaScript that make the site frontend look and feel good. Since Mockup and Plone 5, we've had RequireJS based JavaScript modules and bundles for Plone, and the LESS based default theme, Barceloneta (with a SASS version also available). Unfortunately, thanks to the ever-changing state of the JavaScript ecosystem, there's currently no single correct tool for re-building and customizing these Plone frontend resources.

My current tool of choice for building frontend resources for a Plone theme is Webpack, which (with the help of my plugin) makes it possible to bundle (almost) all frontend resources from the Plone resource registry into the theme, and inject my customizations while doing that. And with a single publicPath setting, the resulting theme could load those bundles from a CDN.

Configuring Webpack is not the easiest thing to learn, and debugging possible bundle build issues could be even harder. Yet, I've tried to make it easy to try it out with plonetheme.webpacktemplate mr.bob-template.

plonetheme-upload

It should be clear by now that, even though my themes are compatible with and customizable through the through-the-web approach, I still work on the filesystem with version control and the traditional Plone add-on development toolchain (I may even have automated acceptance tests for non-trivial theme features). For a long time, I just configured a global plone.resource directory in buildout and rsync'd theme updates to servers. It was about time to automate that.

plonetheme-upload is an npm installable NodeJS package, which provides a simple command line tool for uploading a theme directory into Plone using the Upload Zip file feature of the Plone Theme settings. Its usage is as simple as:

$ plonetheme-upload my-theme-dir http://my.plone.domain

Possibly the next version should include another CLI tool, plonetheme-download, to help through-the-web themers keep their themes under version control too.

Plone Barcelona Sprint 2016 Report

For the last week, I was lucky enough to be allowed to participate in the Plone community sprint at Barcelona. The sprint was about polishing the new RESTful API for Plone, and experimenting with new frontend and backend ideas, to prepare Plone for the next decade (as envisioned in its roadmap). And once again, the community proved the power of its deeply rooted sprinting culture (adopted from the Zope community in the early 2000s).

Just think about this: You need to get some new features for your sophisticated software framework, but you don't have the resources to do it on your own. So, you set up a community sprint: reserve the dates and the venue, choose the topics for the sprint, advertise it or invite the people you want, and get a dozen experienced developers to enthusiastically work on your topics for a full week, mostly at their own cost. It's a crazy bargain, almost too good to be true. Yet, that's just what seems to happen in the Plone community, over and over again.

To summarize, the sprint had three tracks: At first there was the completion of plone.restapi – a high quality and fully documented RESTful hypermedia API for all of the currently supported Plone versions. And after this productive sprint, the first official release for that should be out at any time now.

Then there was the research and prototyping of a completely new REST API based user interface for Plone 5 and 6: an extensible Angular 2 based app, which does all its interaction with the Plone backend through the new RESTful API, and would universally support both server side and browser side rendering for fast response times, SEO and accessibility. These goals were also reached, all the major blockers were resolved, and the chosen technologies were proven to work together. To pick my favorite side product from that track: Albert Casado, the designer of the Plone 5 default theme in LESS, showed up to migrate the theme to SASS.

Finally, there was our small backend moonshot team: Ramon and Aleix from Iskra / Intranetum (Catalonia), Eric from AMP Sport (U.S.), Nathan from Wildcard (U.S.) and yours truly from University of Jyväskylä (Finland). Our goal was to start an alternative lightweight REST backend for the new experimental frontend, re-using the best parts of the current Plone stack where possible. Eventually, to meet our goals within the given time constraints, we agreed on the following stack: an aiohttp based HTTP server, the Plone Dexterity content-type framework (without any HTML views or forms) built around the Zope Toolkit, and ZODB as our database, all on Python 3.5 or greater. Yet, Pyramid remains a possible alternative to ZTK later.

https://2.bp.blogspot.com/-zRaopbaHcPY/V0H3WpxtQHI/AAAAAAAAAzg/1xQ5hbLqP1ITLbddj7jSTST_v0rC8Y41ACKgB/s1600/IMG_0599.JPG

I was responsible for preparing the backend track in advance, and got us started with a simple aiohttp based HTTP backend with an experimental ZODB connection supporting multiple concurrent transactions (when handled with care). Most of my actual sprint time went into upgrading the Plone Dexterity content-type framework (and its tests) to support Python 3.5. That also resulted in backwards compatible fixes and pull requests for Python 3.5 support for all its dependencies in the plone.* namespace.

Ramon took the lead in integrating ZTK into the new backend, implemented content-negotiation and content-language aware traversal, and kept us motivated by raising the sprint goals once features started clicking together. Aleix implemented an example docker-compose setup for everything being developed at the sprint, and open-sourced their in-house OAuth server as plone.oauth. Nathan worked originally in the frontend team, but joined us for the last third of the sprint for a pytest-based test setup and asyncio-integrated Elasticsearch integration. Eric replaced the Zope 2 remains in our Dexterity fork with ZTK equivalents, and researched all the available options for integrating the content serialization of plone.restapi into our independent backend, eventually leading to a new package called plone.jsonserializer.
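
As a rough illustration of what the content-language negotiation involves, here is a simplified, stdlib-only sketch of Accept-Language matching (not Ramon's actual implementation; real negotiation must also handle wildcards and language ranges):

```python
def negotiate_language(accept_language, available, default='en'):
    """Pick the best available language for an Accept-Language header."""
    candidates = []
    for part in (accept_language or '').split(','):
        part = part.strip()
        if not part:
            continue
        if ';q=' in part:
            # Split "en;q=0.8" into the language tag and its quality value.
            lang, _, q = part.partition(';q=')
            try:
                quality = float(q)
            except ValueError:
                quality = 0.0
        else:
            lang, quality = part, 1.0
        candidates.append((quality, lang.strip().lower()))
    # Try languages in descending order of client preference.
    for quality, lang in sorted(candidates, reverse=True):
        if lang in available:
            return lang
    return default
```

For example, `negotiate_language('fi, en;q=0.8', {'en', 'fi'})` prefers `fi`, while a header listing only unavailable languages falls back to the default.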

The status of our backend experiment after the sprint? Surprisingly good. We got far enough that it's almost easier to point out the missing and incomplete pieces that still remain on our to-do list:

  • We ported all Plone Dexterity content-type framework dependencies to Python 3.5. We only had to fork the main plone.dexterity-package, which still has some details in its ZTK integration to do and tests to be fixed. Also special fields (namely files, richtext and maybe relations) are still to be done.
  • Deserialization from JSON to Dexterity was left incomplete, because we were not able to fully re-use the existing plone.restapi-code (it depends on z3c.form-deserializers, which we cannot depend on).
  • We got a basic aiohttp-based Python 3.5 asyncio server running with ZODB and asynchronous traverse, permissions, REST-service mapping and JSON-serialization of Dexterity content. Integration with the new plone.oauth and zope.security was also almost done, and Ramon promised to continue to work on that to get the server ready for their in-house projects.
  • Workflows and their integration are to be done. We planned to try repoze.workflow at first, and if that's not a fit, then look again into porting DCWorkflow or other 3rd party libraries.
  • Optimization for asyncio still needs more work, once the basic CRUD-features are in place.

So, that was a lot of checkboxes ticked in a single sprint, really something to be proud of. And as if that wasn't enough, an overlapping Plone sprint at Berlin got the Python 3.5 upgrades of our stack even further, my favorite result being a helper tool for migrating Python 2 version ZODB databases to Python 3. These two sprints really transformed the nearing end-of-life of Python 2 from a threat into a possibility for our community, and confirmed that Plone has a viable roadmap well beyond 2020.

Personally, I just cannot wait for a suitable project with Dexterity based content types on a modern asyncio based HTTP server, or the next chance to meet our wonderful Catalan friends! :)

Evolution of a Makefile for building projects with Docker

It's hard to move to GitLab and resist the temptation of its integrated GitLab CI. And with GitLab CI, it's just natural to run all CI jobs in Docker containers. Yet, to avoid vendor lock-in with its integrated Docker support, we chose to keep our .gitlab-ci.yml configurations minimal and do all Docker calls with GNU make instead. This also ensured that all of our CI tasks remain locally reproducible. In addition, we wanted to use official upstream Docker images from the official hub as far as possible.

As always with make, there's a danger that Makefiles themselves become projects of their own. So, let's begin with a completely hypothetical Makefile:

all: test

test:
     karma test

.PHONY: all test

https://4.bp.blogspot.com/-upjVr6G_zGY/VyR85K0lLRI/AAAAAAAAAww/Il9BZ2gELggcRmIcdHFQncwaO5OT7WEUACKgB/s1600/IMG_0370.JPG

Separation of concerns

At first, we want to keep all Docker related commands separate from the actual project specific commands. This led us to have two separate Makefiles: a traditional default one, which expects all the build tools and other dependencies to exist in the running system, and a Docker specific one. We named them Makefile (as already seen above) and Makefile.docker (below):

all: test

test:
     docker run --rm -v $(PWD):/build -w /build node:5 make test

.PHONY: all test

So, we simply run a Docker container of required upstream language image (here Node 5), mount our project into the container and run make for the default Makefile inside the container.

$ make -f Makefile.docker

Of course, the logical next step is to abstract that Docker call into a function to make it trivial to wrap also other make targets to be run in Docker:

make = docker run --rm -v $(PWD):/build -w /build node:5 make $1

all: test

test:
     $(call make,test)

.PHONY: all test

Docker specific steps in the main Makefile

In the beginning, I mentioned, that we try to use the official upstream Docker images whenever possible, to keep our Docker dependencies fresh and supported. Yet, what if we need just minor modifications to them, like installation of a couple of extra packages...

Because our Makefile.docker mostly just wraps the make call for the default Makefile into an auto-removed Docker container run (docker run --rm), we cannot easily install extra packages into the container in Makefile.docker. This is the exception where we add Docker-related commands into the default Makefile.

There are probably many ways to detect that we are running inside a Docker container, but my favourite is testing for the existence of the /.dockerenv file. So, any Docker container specific command in the Makefile is wrapped with a test for that file, as in:

all: test

test:
     [ -f /.dockerenv ] && npm -g i karma || true
     karma test

.PHONY: all test

Getting rid of the filesystem side-effects

Unfortunately, one does not simply mount a source directory from the host into a container and run arbitrary commands with arbitrary users with that mount in place. (Unless one wants to play the game of having matching user ids inside and outside the container.)

To avoid all issues related to Docker possibly trying to (and sometimes succeeding in) creating files into mounted host file system, we may run Docker without host mount at all, by piping project sources into the container:

make = git archive HEAD | \
       docker run -i --rm -v /build -w /build node:5 \
       bash -c "tar x --warning=all && make $1"

all: test

test: bin/test
     $(call make,test)

.PHONY: all test

  • git archive HEAD writes a tarball of the project git repository HEAD (latest commit) into stdout.
  • -i in docker run enables stdin in Docker.
  • -v /build in docker run ensures /build to exist in container (as a temporary volume).
  • bash -c "tar x --warning=all && make $1" is the single command to be run in the container (bash with arguments). It extracts the piped tarball from stdin into the current working directory in container (/build) and then executes given make target from the extracted tarball contents' Makefile.

Caching dependencies

One well known issue with Docker based builds is the amount of language specific dependencies required by your project on top of the official language image. We've solved this by creating a persistent data volume for those dependencies, and sharing that volume from build to build.

For example, defining a persistent NPM cache in our Makefile.docker would look like this:

CACHE_VOLUME = npm-cache

make = git archive HEAD | \
       docker run -i --rm -v $(CACHE_VOLUME):/cache \
       -v /build -w /build node:5 \
       bash -c "tar x --warning=all && make \
       NPM_INSTALL_ARGS='--cache /cache --cache-min 604800' $1"

all: test

test: bin/test
     $(INIT_CACHE)
     $(call make,test)

.PHONY: all test

INIT_CACHE = \
    docker volume ls | grep $(CACHE_VOLUME) || \
    docker create --name $(CACHE_VOLUME) -v $(CACHE_VOLUME):/cache node:5

  • CACHE_VOLUME variable holds the fixed name for the shared volume and the dummy container keeping the volume from being garbage collected by docker run --rm.
  • INIT_CACHE ensures that the cache volume is always present (so that it can simply be removed if its state goes bad).
  • -v $(CACHE_VOLUME):/cache in docker run mounts the cache volume into the test container.
  • NPM_INSTALL_ARGS='--cache /cache --cache-min 604800' in docker run sets a make variable NPM_INSTALL_ARGS with arguments to configure the cache location for NPM. That variable, of course, should be explicitly defined and used in the default Makefile:

NPM_INSTALL_ARGS =

all: test

test:
     @[ -f /.dockerenv ] && npm -g $(NPM_INSTALL_ARGS) i karma || true
     karma test

.PHONY: all test

The cache volume, of course, adds state between the builds and may cause issues that require resetting the cache containers when that happens. Still, most of the time, these have been working very well for us, significantly reducing the required build time.

Retrieving the build artifacts

The downside of running Docker without mounting anything from the host is that it's a bit harder to get build artifacts (e.g. test reports) out of the container. We've tried both stdout and docker cp for this. In the end we ended up using a dedicated build data volume and docker cp in Makefile.docker:

CACHE_VOLUME = npm-cache
DOCKER_RUN_ARGS =

make = git archive HEAD | \
       docker run -i --rm -v $(CACHE_VOLUME):/cache \
       -v /build -w /build $(DOCKER_RUN_ARGS) node:5 \
       bash -c "tar x --warning=all && make \
       NPM_INSTALL_ARGS='--cache /cache --cache-min 604800' $1"

all: test

test: DOCKER_RUN_ARGS = --volumes-from=$(BUILD)
test: bin/test
     $(INIT_CACHE)
     $(call make,test); \
       status=$$?; \
       docker cp $(BUILD):/build .; \
       docker rm -f -v $(BUILD); \
       exit $$status

.PHONY: all test

INIT_CACHE = \
    docker volume ls | grep $(CACHE_VOLUME) || \
    docker create --name $(CACHE_VOLUME) -v $(CACHE_VOLUME):/cache node:5

# http://cakoose.com/wiki/gnu_make_thunks
BUILD_GEN = $(shell docker create -v /build node:5)
BUILD = $(eval BUILD := $(BUILD_GEN))$(BUILD)

A few powerful make patterns here:

  • DOCKER_RUN_ARGS = sets a placeholder variable for injecting make target specific options into docker run.
  • test: DOCKER_RUN_ARGS = --volumes-from=$(BUILD) sets a make target local value for DOCKER_RUN_ARGS. Here it adds volumes from a container uuid defined in variable BUILD.
  • BUILD is a lazily evaluated make variable (created with the GNU make thunk pattern). It gets its value when it's used for the first time. Here it is set to the id of a new container with a shareable volume at /build, so that docker run ends up writing all its build artifacts into that volume.
  • Because make would stop its execution after the first failing command, we must wrap the docker run based make test call so that we
    1. capture the original return value with status=$$?
    2. copy the artifacts to host using docker cp
    3. delete the build container
    4. finally return the captured status with exit $$status.

This pattern may look a bit complex at first, but it has been powerful enough to start any number of temporary containers and link or mount them with the actual test container (similarly to docker-compose, but directly in the Makefile). For example, we use this to start and link Selenium web driver containers to be able to run Selenium based acceptance tests in the test container on top of the upstream language base image, and then retrieve the test reports from the build container volume.