Dockerising a Contao website

This article is based on Contao 3. There is a newer version: see Dockerising Contao 4.

I’m a fan of containerisation! It feels much cleaner and systems don’t age that quickly.

The latest project I am supposed to maintain is a new Contao website. The company that built the website of course just delivered files and a database. The files contain the Contao installation next to Contao extensions, configuration, and customised themes, all merged into one big blob… Thus, it is hard to distinguish between Contao-based files and user-generated content. So I needed to study Contao’s documentation and reinstall the website to learn which files should go into the Docker image and which files should be stored outside of it.

However, I finally came up with a solution that is based on two Contao images :)

A general Contao image

PLEASE NOTE: sSMTP is not maintained anymore! Please switch to msmtp, for example, as I explained in Migrating from sSMTP to msmtp.

The general Contao image is supposed to contain a plain Contao installation. That is, the recipe just installs dependencies (such as curl, zip, and ssmtp) and downloads and extracts Contao’s sources. The Dockerfile looks like this:

FROM php:apache
MAINTAINER martin scharm <>

# for mail configuration see the "Mail support" section below

RUN apt-get update \
 && apt-get install -y -q --no-install-recommends \
    wget \
    curl \
    unzip \
    zlib1g-dev \
    libpng-dev \
    libjpeg62-turbo \
    libjpeg62-turbo-dev \
    libcurl4-openssl-dev \
    libfreetype6-dev \
    libmcrypt-dev \
    libxml2-dev \
    ssmtp \
 && apt-get clean \
 && rm -r /var/lib/apt/lists/*

# download and extract Contao 3.5 (zip URL as provided by contao.org)
RUN wget -O /tmp/contao.zip https://download.contao.org/3.5/zip \
 && unzip /tmp/contao.zip -d /var/www/ \
 && rm -rf /var/www/html /tmp/contao.zip \
 && ln -s /var/www/contao* /var/www/html \
 && echo 0 > /var/www/html/system/cron/cron.txt \
 && chown -R www-data: /var/www/contao* \
 && a2enmod rewrite

RUN docker-php-source extract \
 && docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
 && docker-php-ext-install -j$(nproc) zip gd curl mysqli soap \
 && docker-php-source delete

RUN php -r "copy('', 'composer-setup.php');" \
 && php -r "if (hash_file('SHA384', 'composer-setup.php') === '544e09ee996cdf60ece3804abc52599c22b1f40f4323403c44d44fdfdd586475ca9813a858088ffbc1f233e9b180f061') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;" \
 && mkdir -p composer/packages \
 && php composer-setup.php --install-dir=composer \
 && php -r "unlink('composer-setup.php');" \
 && chown -R www-data: composer

The first block apt-get installs necessary stuff from the Debian repositories. The second block downloads Contao 3.5, extracts it to /var/www/, and links /var/www/html to it. It also creates the cron.txt used by Contao’s cron mechanism. The third block installs a few required and/or useful PHP extensions. And finally, the fourth block retrieves and installs Composer to /var/www/html/composer, where the Contao composer plugin expects it.

That’s already it! We have a recipe to create a general Docker image for Contao. Quickly set up an automated build and .. ta-da .. it is available as binfalse/contao.

A personalised Contao image

Besides the plain Contao installation, a Contao website typically also contains a number of extensions. Those are installed through Composer, and they can always be reinstalled. As we do not want to install a load of plugins every time a new container is started, we create a personalised Contao image. All you need is the composer.json that lists which extensions in which versions to install. This JSON file should be copied to /var/www/html/composer/composer.json before Composer is run to install the stuff. Here is an example of such a Dockerfile:

FROM binfalse/contao
MAINTAINER martin scharm <>

COPY composer.json composer/composer.json

USER www-data

# We need to run Composer twice; otherwise you'll likely see the error:
# 'Warning: Contao core 3.5.31 was about to get installed but 3.5.31 has been found in project root, to recover from this problem please restart the operation'
# Not sure why it cannot recover by itself, but... we simply run it a second time if the first run fails.

RUN php composer/composer.phar --working-dir=composer update || php composer/composer.phar --working-dir=composer update

USER root

This image can then be built using:

docker build -t contao-personalised .

The resulting image, tagged contao-personalised, will contain all extensions required for your website. Thus, it is highly project specific and shouldn’t be shared.
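By the way, the composer.json that is copied into the image above could look roughly like this. This is just a sketch: the package names below are made-up placeholders, your extensions and version constraints will differ.

{
  "name": "your/contao-website",
  "type": "project",
  "require": {
    "vendor/some-contao-extension": "^1.0",
    "vendor/another-extension": "2.3.*"
  }
}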

How to use the personalised Contao image

The usage is basically very simple. You just need to mount a few things inside the container:

  • /var/www/html/files/ should contain files that you uploaded etc.
  • /var/www/html/templates/ may contain your customised layout.
  • /var/www/html/system/config/FILE.php should contain some configuration files. This may include the localconfig.php or a pathconfig.php.

Optionally, you can link a MariaDB container to provide the database.
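If you just want to give it a quick try without Docker-Compose (introduced below), a plain docker run could look like this. This is only a sketch: the $PATH prefix stands for wherever you keep your site’s data.

docker run -d --name contao \
    -p 8080:80 \
    -v "$PATH/files:/var/www/html/files" \
    -v "$PATH/templates:/var/www/html/templates:ro" \
    -v "$PATH/system/config/localconfig.php:/var/www/html/system/config/localconfig.php" \
    contao-personalised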

Tying it all together using Docker-Compose

Probably the best way to orchestrate the containers is using Docker-Compose. Here is an example docker-compose.yml:

version: '2'

services:
    contao:
        build: /path/to/personalised
        restart: unless-stopped
        container_name: contao
        depends_on:
            - contao_db
        ports:
            - "8080:80"
        volumes:
            - $PATH/files:/var/www/html/files
            - $PATH/templates:/var/www/html/templates:ro
            - $PATH/system/config/localconfig.php:/var/www/html/system/config/localconfig.php

    contao_db:
        image: mariadb
        restart: always
        container_name: contao_db
        environment:
            MYSQL_DATABASE: contao_database
            MYSQL_USER: contao_user
            MYSQL_PASSWORD: contao_password
            MYSQL_ROOT_PASSWORD: very_secret
        volumes:
            - $PATH/database:/var/lib/mysql

This assumes that your personalised Dockerfile is located in /path/to/personalised/ and your website files are stored in $PATH/files, $PATH/templates, and $PATH/system/config/localconfig.php. Docker-Compose will then build the personalised image (if necessary) and create two containers:

  • contao, based on this image, with all user-based files mounted into the proper locations
  • contao_db, a MariaDB container providing the MySQL server
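Build the image (if necessary) and start both containers with:

docker-compose up -d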

To make Contao speak to the MariaDB server you need to configure the database connection in $PATH/system/config/localconfig.php like this:

$GLOBALS['TL_CONFIG']['dbDriver'] = 'MySQLi';
$GLOBALS['TL_CONFIG']['dbHost'] = 'contao_db';
$GLOBALS['TL_CONFIG']['dbUser'] = 'contao_user';
$GLOBALS['TL_CONFIG']['dbPass'] = 'contao_password';
$GLOBALS['TL_CONFIG']['dbDatabase'] = 'contao_database';
$GLOBALS['TL_CONFIG']['dbPconnect'] = false;
$GLOBALS['TL_CONFIG']['dbCharset'] = 'UTF8';
$GLOBALS['TL_CONFIG']['dbPort'] = 3306;
$GLOBALS['TL_CONFIG']['dbSocket'] = '';

Here, the database is accessible at contao_db:3306, as set up in the compose file above.

If you’re running Contao with “Rewrite URLs” using an .htaccess, you also need to update Apache’s configuration to allow rewrites. Thus, you may for example mount the following file to /etc/apache2/sites-available/000-default.conf:

<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html
    <Directory /var/www/>
        AllowOverride All
        Options FollowSymLinks
    </Directory>
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

This tells Apache to allow everything in any .htaccess file in /var/www.
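With the compose setup above you can mount it by adding one more line to the contao service’s volumes (the host path is a placeholder):

        - $PATH/000-default.conf:/etc/apache2/sites-available/000-default.conf:ro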

When everything is up and running, the Contao installation will be available at port 8080 (see the ports definition in the compose file) of the machine hosting the Docker containers.

Mail support

PLEASE NOTE: sSMTP is not maintained anymore! Please switch to msmtp, for example, as I explained in Migrating from sSMTP to msmtp.

The image above comes with sSMTP installed. If you need support for email with your Contao installation, you just need to mount two more files into the container:

Tell PHP to mail through sSMTP

The following file tells PHP to use the ssmtp binary for mailing. Just mount the file to /usr/local/etc/php/conf.d/mail.ini:

[mail function]
sendmail_path = "/usr/sbin/ssmtp -t"

Configure sSMTP

PLEASE NOTE: sSMTP is not maintained anymore! Please switch to msmtp, for example, as I explained in Migrating from sSMTP to msmtp.

The sSMTP configuration is very easy. A few lines, mounted to /etc/ssmtp/ssmtp.conf, may already be sufficient.
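For example (a sketch: mail.example.com and the hostname are placeholders for your own mail setup):

root=postmaster
mailhub=mail.example.com:587
hostname=myhost.example.com
FromLineOverride=YES
UseSTARTTLS=YES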
For more information read Mail support for Docker’s php:fpm and the Arch Linux wiki on sSMTP or the Debian wiki on sSMTP.

Archiving a (Wordpress) Website

I needed to migrate a lot of tools and projects that we’ve been working on in the SEMS group at the University of Rostock. Among others, the WordPress website needed to be serialised to get rid of PHP and all the potentially insecure and expensive WordPress maintenance. I decided to mirror the page using HTTrack and apply some subsequent fine-tuning. This is just a small report, maybe interesting if you also need to archive a dynamic web page.

Prepare the page

Some stuff in your (WordPress) installation is probably useless after serialisation (or never worked in the first place); get rid of it. For example:

  • Remove the search box - it’s useless without PHP. You may add a link to a search engine instead…?
  • Remove unnecessary trackers like Google analytics and Piwik. You probably don’t need it anymore and users may be unnecessarily annoyed by tracking and/or 404s.
  • Disable unnecessary plugins.
  • Check that manual links (e.g. in widgets) are still up-to-date and will remain valid after archiving.
  • Check for unpublished drafts in posts/pages. Those will be lost as soon as you close the CMS.
  • Recreate the sitemap and RSS feeds (if they are not created automatically).

I also recommend setting up some monitoring, e.g. using check_link, to make sure all resources are still accessible as expected afterwards!

Mirror the website

I decided to mirror the web content using HTTrack. That’s basically quite simple. At the target location you only need to call (with your.website being a placeholder for the site to archive):

httrack --mirror https://your.website/

This will create a directory containing the mirrored content. In addition, you’ll find logs in hts-log.txt and the cached content in hts-cache/.

However, I tweaked the call a bit and actually executed HTTrack like this:

httrack --mirror https://your.website/ '-*trac/*' '-*comments/feed*' '-*page_id=*' -%k --disable-security-limits -%c160 -c20

This ignores all links that match *trac/* (there was a Trac running, but it moved to GitHub and an Nginx now permanently redirects the traffic); in addition, it keeps connections alive (-%k). As I’m the admin of the original site (which I know won’t die too soon, and in the worst case I can just restart it), I increased the speed to a max of 160 connections per second (-%c160) and max 20 simultaneous connections (-c20). For that I also needed to disable HTTrack’s security limits (--disable-security-limits).

That went quite well and I quickly had a copy of the website. However, there were a few issues…

Problems with redirects

Turns out that HTTrack has problems with redirects. At some point we had installed proper SSL certificates, and since then we were redirecting traffic at port 80 (HTTP) to port 443 (HTTPS). However, some people had manually created links that point to the HTTP resources. If HTTrack stumbles upon such a redirect it will try to remodel it. However, in the case of a redirect from a page’s HTTP version to the very same page via HTTPS, the target is the same as the source (from HTTrack’s point of view) and it will redirect to … itself.. -.-

The created HTML page looks like this:

<!-- Created by HTTrack Website Copier/3.49-2 [XR&CO'2014] -->

<!-- Mirrored from by HTTrack Website Copier/3.x [XR&CO'2014], Wed, 24 Jan 2018 07:16:38 GMT -->
<!-- Added by HTTrack --><meta http-equiv="content-type" content="text/html;charset=iso-8859-1" /><!-- /Added by HTTrack -->
<META HTTP-EQUIV="Content-Type" CONTENT="text/html;charset=UTF-8"><META HTTP-EQUIV="Refresh" CONTENT="0; URL=index.html"><TITLE>Page has moved</TITLE>
<A HREF="index.html"><h3>Click here...</h3></A>
<!-- Created by HTTrack Website Copier/3.49-2 [XR&CO'2014] -->

<!-- Mirrored from by HTTrack Website Copier/3.x [XR&CO'2014], Wed, 24 Jan 2018 07:16:38 GMT -->

As you can see, both the link and the meta refresh point to the very same index.html, effectively producing a reload loop… And as the index.html already exists, HTTrack won’t store the actual content behind the redirecting URL, so that content is lost…

I have no idea for an easy fix. I’ve been playing around with the url-hacks flag, but I did not find a working solution.

What I ended up with was grepping for this page to find the pages that link to it:

grep "Click here" -rn | grep 'HREF="index.html"'

(Remember: some of the Click here pages are legit: They implement proper redirects! Only self-links to HREF="index.html" are the enemies.)

At SEMS, for example, we also had a wrong setting in the calendar plugin, which was still configured for the HTTP version of the website and thus generated many of these problematic URLs.

The back-end search helped a lot in finding the HTTP links. When searching for http://sems in posts and pages I found plenty of pages that hard-coded the wrong link target. Also remember that links may appear in post excerpts!
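If you have database access, you can also search straight in WordPress’ wp_posts table. A sketch, assuming the default table prefix, placeholder credentials, and the http://sems search string from above:

mysql -u wordpress_user -p wordpress_database -e \
    "SELECT ID, post_title FROM wp_posts WHERE post_status = 'publish' AND (post_content LIKE '%http://sems%' OR post_excerpt LIKE '%http://sems%');"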

If nothing helps, you can still temporarily disable the HTTPS redirect for the time of mirroring. ;-)

Finalising the archive

To complete the mirror I also rsync’ed the files in wp-content/uploads/, as not all files are linked on the website itself. Sometimes we just uploaded files and shared them through e-mails or on other websites.

I also manually grabbed the sitemap(s), as HTTrack apparently didn’t see them (again, your.website is a placeholder):

wget --quiet -O sitemap.xml https://your.website/sitemap.xml
wget --quiet -O - https://your.website/sitemap.xml | egrep -o "https?://[^<]+" | wget -i -

iptables: log and drop

Linux has a sophisticated firewall built right into the kernel: it’s called iptables! I’m pretty sure you’ve heard about it. You can do really crazy things with iptables. But here I just want to log how to log+drop a packet in a single rule.

Usually, you would probably do something like this:

iptables -A INPUT -j LOG --log-level warning --log-prefix "INPUT-DROP:"
iptables -A INPUT -j DROP

That works perfectly, but it dramatically messes up your rules table, especially if you want to log+drop packets that match a complicated filter. You’ll end up with twice as many table entries as necessary.

The trick is to instead create a new rule chain that will log+drop in sequence:

iptables -N LOG_DROP

So here I created a new chain called LOG_DROP. We can now append (-A) two new rules to that chain, which do the actual logging and dropping:

iptables -A LOG_DROP -j LOG --log-level warning --log-prefix "INPUT-DROP:"
iptables -A LOG_DROP -j DROP

(similar to the first snippet above, just not for the INPUT chain but for the LOG_DROP chain)

That’s basically it! If you now need to log+drop a packet you can append a new rule to e.g. the INPUT chain that routes the packet to the LOG_DROP chain:

iptables -A INPUT [...filter specification...] -j LOG_DROP

You should consider limiting the number of redundant log entries per time unit to prevent your logs from being flooded. For more documentation, consult the iptables(8) manual.
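For example, when building the LOG_DROP chain you could throttle the LOG rule with the limit module; the rates below are just a starting point, tune them to your needs:

iptables -A LOG_DROP -m limit --limit 5/min --limit-burst 10 -j LOG --log-level warning --log-prefix "INPUT-DROP:"
iptables -A LOG_DROP -j DROP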

Common Name vs Subject Alternative Names

You have probably heard about the conflict between the fields Common Name (CN) and Subject Alternative Names (subjectAltName) in SSL certificates. It used to be best practice for clients to compare the CN value with the server’s name. However, RFC 2818 already advised against using the Common Name, and Google has now taken the gloves off. Since Chrome version 58 they do not support the CN anymore, but throw an error:

Subject Alternative Name Missing

Good potential for some administrative work ;-)

Check for Subject Alternative Names

You can use OpenSSL to obtain a server’s certificate, for example for your own domain (example.org below is a placeholder):

openssl s_client -showcerts -connect example.org:443 </dev/null 2>/dev/null

Here, openssl will connect to the server at port 443 (the default port for HTTPS) to request the SSL certificate and dump it to your terminal. openssl can also print the details of a certificate. You just need to pipe the certificate into:

openssl x509 -text -noout

Thus, the whole command including the output may look like this:

openssl s_client -showcerts -connect example.org:443 </dev/null 2>/dev/null | openssl x509 -text -noout
    Version: 3 (0x2)
    Serial Number:
  Signature Algorithm: sha256WithRSAEncryption
    Issuer: C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
      Not Before: May 12 07:11:00 2017 GMT
      Not After : Aug 10 07:11:00 2017 GMT
    Subject: CN =
    Subject Public Key Info:
      Public Key Algorithm: rsaEncryption
        Public-Key: (4096 bit)
        Exponent: 65537 (0x10001)
    X509v3 extensions:
      X509v3 Key Usage: critical
        Digital Signature, Key Encipherment
      X509v3 Extended Key Usage: 
        TLS Web Server Authentication, TLS Web Client Authentication
      X509v3 Basic Constraints: critical
      X509v3 Subject Key Identifier: 
      X509v3 Authority Key Identifier: 
      Authority Information Access: 
        OCSP - URI:
        CA Issuers - URI:
      X509v3 Subject Alternative Name:
      X509v3 Certificate Policies: 
          User Notice:
            Explicit Text: This Certificate may only be relied upon by Relying Parties and only in accordance with the Certificate Policy found at
  Signature Algorithm: sha256WithRSAEncryption

As you can see in the X.509 extensions, this server’s SSL certificate does have a Subject Alternative Name:

X509v3 Subject Alternative Name:

To quick-check one of your websites you may want to use the following grep filter:

openssl s_client -showcerts -connect example.org:443 </dev/null 2>/dev/null | openssl x509 -text -noout | grep -A 1 "Subject Alternative Name"

If that doesn’t print a proper Subject Alternative Name you should go and create a new SSL certificate for that server!
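If you need to check a bunch of servers, a small shell loop will do. The domains are placeholders, and -servername makes sure servers using SNI present the right certificate:

for host in example.org www.example.org; do
    echo "== $host"
    openssl s_client -servername "$host" -showcerts -connect "$host:443" </dev/null 2>/dev/null \
        | openssl x509 -text -noout | grep -A 1 "Subject Alternative Name"
done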

#android: No Internet Access Detected, won't automatically reconnect -aka- Connected, no Internet.

Android: No Internet Access Detected, won't automatically reconnect.

Hands up: who knows what an Android device does when it sees a WiFi network coming up? Exactly: since Lollipop (Android 5), your phone or tablet leaks a quick HTTP request to check if it has internet access. This check is done against a Google “webpage” that always returns the HTTP status code 204 No Content. Thus, if the phone receives a 204 it is connected to the internet; otherwise it assumes that this network does not provide proper internet access or is just a captive portal. However, that way Google of course always knows when and from where you connect. And how often. And which device you’re using. etc… :(
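You can reproduce such a check with curl. I’m assuming connectivitycheck.gstatic.com here, which is one of the endpoints Android is known to use (yours may differ); the response carries no body, just the 204 header:

curl -si http://connectivitycheck.gstatic.com/generate_204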

How to prevent the leak

Even if people may like that feature, it is of course a privacy issue. So how can we counter it?

I briefly mentioned that a few years ago. You could use AdAway (available from F-Droid, source on GitHub) to redirect all traffic for Google’s connectivity-check domains to nirvana.

I already maintain a convenient configuration for AdAway that blocks Google’s captive portal detection.

However, blocking that “feature” also comes with some drawbacks…

The downside of blocking captive portal detection

Android: Connected, no Internet.

The consequences of blocking all requests of the captive portal detection are obvious: your phone assumes that no network has internet access. And therefore it won’t connect automatically, saying

“No Internet Access Detected, won’t automatically reconnect.” (see the image at the top)

That will probably increase your mobile data usage, as you always need (to remember) to connect manually. And even if you manually connect to a network “without internet”, the WiFi icon will get an exclamation mark and the phone says

“Connected, no Internet.” (see the second image)


What can we do about it?

Disable captive portal detection

With a rooted phone you can simply disable captive portal detection. Just get a root shell through adb (or SSH, etc.) and run the following command:

settings put global captive_portal_detection_enabled 0

Changed as of Android 7, see update below!

One small drawback of this approach: you need to execute it again after flashing a new image… However, I guess you have a small workflow for re-flashing your phone anyway; just add that tiny bit to it ;-)

Another drawback is that you lose the captive portal detection… Of course, that’s what you intended, but sometimes it may be useful to have that feature, in hotels etc.

Change the server for captive portal detection with the Android API

You can also change the URL of the captive portal server to point at a server under your control. Let’s say you have a site running at captive.example.org (a placeholder for your own domain) that mimics a captive portal detection backend and always returns 204, no matter the request. Then you can use that URL for captive portal detection! Override the captive portal server in a root shell (adb or SSH, etc.) by calling:

settings put global captive_portal_server captive.example.org

Changed as of Android 7, see update below!

This way you retain the captive portal detection without leaking data to Google. However, you will again lose the setting when flashing the phone.

Change the server for captive portal detection using AdAway

Another option for changing the captive portal detection server is to change its IP address to one that’s under your control. You can do that with AdAway, for example. Let’s say your captive portal detection server has the IP address 10.0.0.1 (a placeholder; use your server’s address). Then you may add something like the following to your AdAway configuration (the domains are the detection endpoints I know of, your Android version may use others):
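# 10.0.0.1 stands in for your own server's IP address
10.0.0.1    connectivitycheck.gstatic.com
10.0.0.1    clients3.google.com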

The webserver at that IP should then, of course, accept requests for the foreign domains.

This way, you also don’t leak the data to Google, and you will keep the settings after flashing the phone (as long as you leave AdAway installed). However, there are also some things to keep in mind: First, I could imagine that Google may be a bit upset if you redirect their domains to a different server. And second, you don’t know if those are the only servers used for captive portal detection. If Google at some point comes up with another domain for captive portal detection, you’re screwed.

Supplementary material

See also the CaptivePortal description in the Android reference.

Create captive portal detection server with Nginx

Just add the following to your Nginx configuration:

location /generate_204 { return 204; }

Create captive portal detection server with Apache

If you’re running an Apache web server you need to enable mod_rewrite, then create a .htaccess in the DocumentRoot containing:

<IfModule mod_rewrite.c>
	RewriteEngine On
	RewriteCond %{REQUEST_URI} /generate_204$
	RewriteRule $ / [R=204]
</IfModule>

Create captive portal detection server with PHP

A simple PHP script will also do the trick:

<?php http_response_code (204); ?>


UPDATE

As of Android 7 the settings have changed. To enable or disable captive portal detection you now need to set captive_portal_mode to either 0 (do not attempt detection), 1 (attempt detection and prompt to sign in when a portal is found; the default), or 2 (attempt detection and avoid networks that have captive portals).

To define the captive portal server you actually have three settings:

  • captive_portal_use_https should the phone use HTTPS for captive portal detection? (0 = HTTP, 1 = HTTPS)
  • captive_portal_http_url URL to the captive portal w/o HTTPS.
  • captive_portal_https_url URL to the captive portal when using HTTPS.
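Putting it together, pointing a Nougat phone to your own detection server could look like this (captive.example.org is again a placeholder):

settings put global captive_portal_use_https 1
settings put global captive_portal_https_url https://captive.example.org/generate_204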
