Dockerising a Contao website
January 24th, 2018
This article is based on Contao 3. There is a new version, see Dockerising Contao 4.
I’m a fan of containerisation! It feels much cleaner and systems don’t age that quickly.
The latest project that I am supposed to maintain is a new Contao website. The company that built the website of course just delivered files and a database. The files contain the Contao installation next to Contao extensions next to configuration and customised themes, all merged into one blob… Thus, in the files it is hard to distinguish between Contao-based files and user-generated content. So I needed to study Contao's documentation and reinstall the website to learn which files should go into the Docker image and which files to store outside.
However, I finally came up with a solution that is based on two Contao images :)
A general Contao image
PLEASE NOTE: sSMTP is not maintained anymore! Please switch to msmtp, for example, as I explained in Migrating from sSMTP to msmtp.
The general Contao image is supposed to contain a plain Contao installation. That is, the recipe just installs dependencies (such as curl, zip, and ssmtp) and downloads and extracts Contao's sources. The Dockerfile looks like this:
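A sketch of such a Dockerfile, following the four blocks described below (the base image, the exact package list, the PHP extensions, and the extraction paths are assumptions):

```dockerfile
FROM php:7-apache

# first block: install necessary tools from the Debian repositories
RUN apt-get update \
 && apt-get install -y curl zip unzip ssmtp \
 && rm -rf /var/lib/apt/lists/*

# second block: download Contao 3.5, extract it to /var/www/,
# link /var/www/html to it, and create the cron.txt
# (see github.com/contao/core/pull/8838)
RUN curl -o /tmp/contao.zip https://download.contao.org/3.5/zip \
 && unzip /tmp/contao.zip -d /var/www/ \
 && rm -rf /tmp/contao.zip /var/www/html \
 && ln -s /var/www/contao-3.5* /var/www/html \
 && touch /var/www/html/system/cron/cron.txt \
 && chown -R www-data: /var/www/

# third block: install a few required/useful PHP extensions
RUN docker-php-ext-install mysqli pdo_mysql

# fourth block: install Composer where the Contao composer plugin expects it
RUN mkdir -p /var/www/html/composer \
 && curl -sS https://getcomposer.org/installer \
    | php -- --install-dir=/var/www/html/composer --filename=composer.phar
```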
The first block apt-get installs the necessary packages from the Debian repositories. The second block downloads Contao 3.5 from https://download.contao.org/3.5/zip, extracts it to /var/www/, and links /var/www/html to it. It also creates the cron.txt (see github.com/contao/core/pull/8838). The third block installs a few required and/or useful PHP extensions. And finally, the fourth block retrieves and installs Composer to /var/www/html/composer, where the Contao composer plugin expects it.
That's already it! We have a recipe to create a general Docker image for Contao. Quickly set up an automated build and… ta-da… it is available as binfalse/contao.
A personalised Contao image
Besides the plain Contao installation, a Contao website typically also contains a number of extensions. Those are installed through Composer, and they can always be reinstalled. As we do not want to install a load of plugins every time a new container is started, we create a personalised Contao image. All you need is the composer.json that contains the information on which extensions and which versions to install. This JSON file should be copied to /var/www/html/composer/composer.json before Composer can be run to install the stuff. Here is an example of such a Dockerfile:
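A minimal sketch, assuming the composer.json lies next to the Dockerfile:

```dockerfile
FROM binfalse/contao

# copy the project-specific composer.json to where Contao's composer plugin expects it
COPY composer.json /var/www/html/composer/composer.json

# install all extensions defined in that composer.json
RUN cd /var/www/html/composer \
 && php composer.phar update \
 && chown -R www-data: /var/www/
```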
This image can then be built using:
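For example (using the path mentioned below):

```sh
docker build -t contao-personalised path/to/personalised/
```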
The resulting image tagged contao-personalised will contain all extensions required for your website. Thus, it is highly project-specific and shouldn't be shared.
How to use the personalised Contao image
The usage is basically very simple. You just need to mount a few things inside the container:
- /var/www/html/files/ should contain the files that you uploaded etc.
- /var/www/html/templates/ may contain your customised layout.
- /var/www/html/system/config/FILE.php should contain some configuration files. This may include the localconfig.php or a pathconfig.php.
Optionally, you can link a MariaDB container for the database.
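Without Docker-Compose (see below), a plain docker run could look like this sketch, assuming a MariaDB container named contao_db is already running ($PATH is the placeholder for your data directory, as used below):

```sh
docker run -d --name contao \
    -p 8080:80 \
    --link contao_db \
    -v $PATH/files:/var/www/html/files \
    -v $PATH/templates:/var/www/html/templates \
    -v $PATH/system/config/localconfig.php:/var/www/html/system/config/localconfig.php \
    contao-personalised
```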
Tying it all together using Docker-Compose
Probably the best way to orchestrate the containers is using Docker-Compose.
Here is an example docker-compose.yml:
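A sketch of such a compose file, matching the description below (compose version, database name, and passwords are placeholders):

```yaml
version: "2"
services:
    contao:
        build: path/to/personalised/
        restart: unless-stopped
        ports:
            - "8080:80"
        volumes:
            - $PATH/files:/var/www/html/files
            - $PATH/templates:/var/www/html/templates
            - $PATH/system/config/localconfig.php:/var/www/html/system/config/localconfig.php
    contao_db:
        image: mariadb
        restart: unless-stopped
        environment:
            MYSQL_DATABASE: contao
            MYSQL_USER: contao
            MYSQL_PASSWORD: contao-db-pass
            MYSQL_ROOT_PASSWORD: contao-db-root-pass
```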
This assumes that your personalised Dockerfile is located in path/to/personalised/Dockerfile and your website files are stored in $PATH/files, $PATH/templates, and $PATH/system/config/localconfig.php.
Docker-Compose will then build the personalised image (if necessary) and create two containers:
- contao: based on this image, with all user-based files mounted into the proper locations
- contao_db: a MariaDB to provide a MySQL server
To make Contao speak to the MariaDB server you need to configure the database connection in $PATH/system/config/localconfig.php, for example like this:
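In Contao 3 the relevant localconfig.php settings look like the following (the credentials are placeholders and must match the MariaDB setup in the compose file):

```php
$GLOBALS['TL_CONFIG']['dbDriver']   = 'MySQLi';
$GLOBALS['TL_CONFIG']['dbHost']     = 'contao_db';
$GLOBALS['TL_CONFIG']['dbUser']     = 'contao';
$GLOBALS['TL_CONFIG']['dbPass']     = 'contao-db-pass';
$GLOBALS['TL_CONFIG']['dbDatabase'] = 'contao';
$GLOBALS['TL_CONFIG']['dbPort']     = 3306;
```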
Here, the database should be accessible at contao_db:3306, as it is set up in the compose file above.
If you're running Contao with "Rewrite URLs" using an .htaccess, you also need to update Apache's configuration to allow for rewrites. Thus, you may for example mount the following file to /etc/apache2/sites-available/000-default.conf:
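A sketch of such a site configuration:

```apache
<VirtualHost *:80>
    DocumentRoot /var/www/html

    # allow .htaccess files below /var/www to override the configuration,
    # so Contao's URL rewriting works
    <Directory /var/www/>
        AllowOverride All
    </Directory>
</VirtualHost>
```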
This tells Apache to allow everything in any .htaccess file in /var/www.
When everything is up and running, the Contao installation will be available at port 8080 (see the ports definition in the compose file) of the machine hosting the Docker containers.
Mail support
PLEASE NOTE: sSMTP is not maintained anymore! Please switch to msmtp, for example, as I explained in Migrating from sSMTP to msmtp.
The image above comes with sSMTP installed. If you need support for email with your Contao installation, you just need to mount two more files into the container:
Tell PHP to mail through sSMTP
The following file tells PHP to use the ssmtp binary for mailing. Just mount the file to /usr/local/etc/php/conf.d/mail.ini:
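The file essentially just points PHP's sendmail_path to ssmtp; a sketch:

```ini
[mail function]
sendmail_path = "/usr/sbin/ssmtp -t"
```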
Configure sSMTP
PLEASE NOTE: sSMTP is not maintained anymore! Please switch to msmtp, for example, as I explained in Migrating from sSMTP to msmtp.
The sSMTP configuration is very easy. The following few lines may already be sufficient, when mounted to /etc/ssmtp/ssmtp.conf:
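A minimal sketch, assuming your mail relay is mail.example.com (all names are placeholders):

```ini
# the mail hub through which all mail is delivered
mailhub=mail.example.com
# rewrite the domain of outgoing mail
rewriteDomain=example.com
# the visible hostname of the container
hostname=contao.example.com
# allow setting the From: header in PHP
FromLineOverride=YES
```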
For more information read Mail support for Docker’s php:fpm and the Arch Linux wiki on sSMTP or the Debian wiki on sSMTP.
Archiving a (Wordpress) Website
January 24th, 2018
I needed to migrate a lot of tools and projects that we've been working on in the SEMS group at the University of Rostock. Among others, the Wordpress website needed to be serialised to get rid of PHP and all the potentially insecure and expensive Wordpress maintenance. I decided to mirror the page using HTTrack and some subsequent fine-tuning. This is just a small report, maybe interesting if you also need to archive a dynamic web page.
Prepare the page
Some stuff in your (Wordpress) installation is probably useless after serialisation (or never worked in the first place) - get rid of it. For example:
- Remove the search box - it's useless without PHP. You may add a link to a search engine instead…?
- Remove unnecessary trackers like Google Analytics and Piwik. You probably don't need them anymore, and users may be unnecessarily annoyed by tracking and/or 404s.
- Disable unnecessary plugins.
- Check that manual links (e.g. in widgets) are still up to date, also after archiving.
- Check for unpublished drafts in posts/pages. Those will be lost as soon as you close the CMS.
- Recreate the sitemap and RSS feeds (if they are not created automatically).
I also recommend setting up some monitoring, e.g. using check_link, to make sure all resources are accessible as expected afterwards!
Mirror the website
I decided to mirror the web content using HTTrack. That’s basically quite simple. At the target location you only need to call:
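A sketch of the basic call (the URL is assumed from the mirror directory mentioned below):

```sh
httrack 'https://sems.uni-rostock.de/'
```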
This will create a directory sems.uni-rostock.de containing the mirrored content. In addition, you'll find logs in hts-log.txt and the cached content in hts-cache/.
However, I tweaked the call a bit and actually executed HTTrack like this:
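Reconstructed from the flags explained below, the call looked similar to this sketch:

```sh
httrack 'https://sems.uni-rostock.de/' '-*trac/*' -%k -%c160 -c20 --disable-security-limits
```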
This ignores all links that match *trac/* (there was a Trac running, but that moved to GitHub and an Nginx will permanently redirect the traffic); in addition, it will keep connections alive (-%k). As I'm the admin of the original site (which I know won't die too soon, and in the worst case I can just restart it), I increased the speed to a max of 160 connections per second (-%c160) and max 20 simultaneous connections (-c20). For that I also needed to disable HTTrack's security limits (--disable-security-limits).
That went quite well and I quickly had a copy of the website. However, there were a few issues…
Problems with redirects.
It turns out that HTTrack has problems with redirects. At some point we installed proper SSL certificates, and since then we were redirecting traffic at port 80 (HTTP) to port 443 (HTTPS). However, some people manually created links that point to the HTTP resources, such as http://sems.uni-rostock.de/home/. If HTTrack stumbles upon such a redirect, it will try to remodel that redirect. However, in the case of a redirect from http://sems.uni-rostock.de/home/ to https://sems.uni-rostock.de/home/, the target is the same as the source (from HTTrack's point of view) and it will redirect to… itself. -.-
The created HTML page sems.uni-rostock.de/home/index.html looks like this:
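In essence, it is a self-referencing redirect stub similar to this sketch (the exact markup HTTrack generates may differ):

```html
<HTML>
<HEAD>
<META HTTP-EQUIV="Refresh" CONTENT="0; URL=index.html">
</HEAD>
<BODY>
Redirected to <A HREF="index.html">Click here...</A>
</BODY>
</HTML>
```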
As you can see, both the link and the meta refresh redirect to the very same index.html, effectively producing a reload loop… And as sems.uni-rostock.de/home/index.html already exists, HTTrack won't store the content behind https://sems.uni-rostock.de/home/, which is lost…
I have no idea for an easy fix. I've been playing around with the url-hacks flag, but I did not find a working solution (see also forum.httrack.com/readmsg/10334/10251/index.html).
What I ended up with was to grep for this page and to find pages that link to it:
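A sketch of such a grep, run inside the mirror directory (the patterns are assumptions based on the stub pages described above):

```sh
# find the self-redirecting stub pages ...
grep -rl 'HREF="index.html"' sems.uni-rostock.de/
# ... and pages that still link to the HTTP version of the site
grep -rl 'http://sems.uni-rostock.de' sems.uni-rostock.de/
```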
(Remember: some of the Click here pages are legit: they implement proper redirects! Only self-links to HREF="index.html" are the enemies.)
At SEMS, for example, we also had a wrong setting in the calendar plugin, which was still configured for the HTTP version of the website and was thus generating many of these problematic URLs.
The back-end search helped a lot to find the HTTP links. When searching for http://sems in posts and pages I found plenty of pages that hard-coded the wrong link target.
Also remember that links may appear in post excerpts! If nothing helps, you can still temporarily disable the HTTPS redirect for the time of mirroring ;-)
Finalising the archive
To complete the mirror I also rsync'ed the files in wp-content/uploads/, as not all files are linked through the website. Sometimes we just uploaded files and shared them through e-mails or on other websites; see the sketch below.
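A sketch of such an rsync call (the remote host and the Wordpress path are placeholders):

```sh
rsync -av sems-server:/var/www/wordpress/wp-content/uploads/ sems.uni-rostock.de/wp-content/uploads/
```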
I also manually grabbed the sitemap(s), as HTTrack apparently didn’t see them:
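For example, something like this (the sitemap URL is an assumption):

```sh
wget -O sems.uni-rostock.de/sitemap.xml https://sems.uni-rostock.de/sitemap.xml
```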
iptables: log and drop
July 17th, 2017
Linux has a sophisticated firewall built right into the kernel: it's called iptables!
I'm pretty sure you heard about it. You can do really crazy things with iptables.
But here I just want to log how to log+drop a packet in a single rule.
Usually, you would probably do something like that:
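That is, one LOG rule and one identical DROP rule; a sketch, with 1.2.3.4 as a placeholder source address:

```sh
# log the packet ...
iptables -A INPUT -s 1.2.3.4 -j LOG --log-prefix "DROPPED: "
# ... and drop it with a second, identical filter
iptables -A INPUT -s 1.2.3.4 -j DROP
```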
Works perfectly, but it dramatically messes up your rules table. Especially if you want to log+drop packets that match a complicated filter, you'll end up with twice as many table entries as desired.
The trick is to instead create a new rule chain that will log+drop in sequence:
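Creating the chain:

```sh
iptables -N LOG_DROP
```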
So here I created a new chain called LOG_DROP.
We can now append (-A) two new rules to that chain, which do the actual log+drop:
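A sketch, mirroring the two rules from above:

```sh
# first log the packet ...
iptables -A LOG_DROP -j LOG --log-prefix "DROPPED: "
# ... then drop it
iptables -A LOG_DROP -j DROP
```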
(similar to the first code above, just not for the INPUT chain but for the LOG_DROP chain)
That’s basically it!
If you now need to log+drop a packet, you can append a new rule to e.g. the INPUT chain that routes the packet to the LOG_DROP chain:
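For example, again with a placeholder source address:

```sh
iptables -A INPUT -s 1.2.3.4 -j LOG_DROP
```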
You should consider limiting the number of redundant log entries per time to prevent flooding of your logs, as sketched below.
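A sketch using iptables' limit module (the rate is an arbitrary example):

```sh
# log at most 5 matching packets per minute, then keep dropping silently
iptables -A LOG_DROP -m limit --limit 5/min -j LOG --log-prefix "DROPPED: "
iptables -A LOG_DROP -j DROP
```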
For more documentation you should consult the manual of iptables(8).
Common Name vs Subject Alternative Names
May 19th, 2017
You probably heard about the conflict between the fields Common Name (CN) and Subject Alt Names (subjectAltName) in SSL certificates.
It seems best practice for clients to compare the CN value with the server's name. However, RFC 2818 already advised against using the Common Name, and Google now takes the gloves off. Since Chrome version 58 they do not support the CN anymore, but throw an error:
Subject Alternative Name Missing
Good potential for some administrative work ;-)
Check for Subject Alternative Names
You can use OpenSSL to obtain a certificate, for example for binfalse.de:
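For example:

```sh
openssl s_client -connect binfalse.de:443 < /dev/null
```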
Here, openssl will connect to the server behind binfalse.de at port 443 (the default port for HTTPS) to request the SSL certificate and dump it to your terminal.
openssl can also print the details of a certificate. You just need to pipe the certificate into:
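That is:

```sh
openssl x509 -text -noout
```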
Thus, the whole command including the output may look like this:
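A sketch, with the output trimmed to the interesting part (the actual certificate details will of course differ):

```sh
$ openssl s_client -connect binfalse.de:443 < /dev/null 2>/dev/null | openssl x509 -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        ...
        X509v3 extensions:
            ...
            X509v3 Subject Alternative Name:
                DNS:binfalse.de, DNS:www.binfalse.de
            ...
```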
As you can see in the X.509 extensions, this server's SSL certificate does have a Subject Alternative Name:
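That is this part of the output above (the names shown are illustrative):

```
X509v3 Subject Alternative Name:
    DNS:binfalse.de, DNS:www.binfalse.de
```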
To quick-check one of your websites you may want to use the following grep filter:
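A sketch (replace binfalse.de with your server):

```sh
openssl s_client -connect binfalse.de:443 < /dev/null 2>/dev/null \
    | openssl x509 -text -noout \
    | grep -A 1 'Subject Alternative Name'
```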
If that doesn’t print a proper Subject Alternative Name you should go and create a new SSL certificate for that server!
#android: No Internet Access Detected, won't automatically reconnect -aka- Connected, no Internet.
February 7th, 2017
Hands up: who knows what an Android device does when it sees a WiFi network coming up? Exactly, since Lollipop (Android 5) your phone or tablet leaks a quick HTTP request to check if it has internet access.
This check is, for example, done with clients3.google.com/generate_204, a "webpage" that always returns the HTTP status code 204 No Content. Thus, if the phone receives a 204 it is connected to the internet; otherwise it assumes that this network does not provide proper internet access or is just a captive portal.
However, that way Google of course always knows when you connect from where. And how often. And which device you’re using. etc… :(
How to prevent the leak
Even if people may like that feature, that is of course a privacy issue – so how can we counter that?
I briefly mentioned that a few years ago.
You could use AdAway (available from F-Droid, source on GitHub) to redirect all traffic for clients3.google.com and clients.l.google.com to nirvana.
I already maintain a convenient configuration for AdAway at stuff.lesscomplex.org/adaway.txt, which blocks Google’s captive portal detection.
However, blocking that “feature” also comes with some drawbacks…
The downside of blocking captive portal detection
The consequences of blocking all requests of the captive portal detection are obvious: your phone assumes that no network has internet access. And therefore, it wouldn't connect automatically, saying
No Internet Access Detected, won't automatically reconnect. (see image on top)
That will probably increase your mobile data usage, as you always need (to remember) to connect manually. And even if you manually connect to a network "without internet", the WiFi icon will get an exclamation mark and the phone says
Connected, no Internet. (see second image)
Annoying…
What can we do about it?
Disable captive portal detection
With a rooted phone you can simply disable captive portal detection. Just get a root shell through adb (or SSH etc.) to run the following command:
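A sketch for Android 5/6, using the captive_portal_detection_enabled setting:

```sh
settings put global captive_portal_detection_enabled 0
```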
Changed as of Android 7, see update below!
One small drawback of that approach: you need to execute that again after flashing a new image… However, I guess you'll anyway have a small workflow for re-flashing your phone – just add that tiny bit to it ;-)
Another drawback is that you lose the captive portal detection… Of course, that's what you intended, but sometimes it may be useful to have that feature, in hotels etc.
Change the server for captive portal detection with the Android API
You can also change the URL of the captive portal server to a server under your control. Let's say you have a site running at scratch.binfalse.de/generate_204 that simulates a captive portal detection server backend (!?) and always returns 204, no matter what request. Then you can use that URL for captive portal detection!
Override the captive portal server on a root-shell (adb or SSH etc) by calling:
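A sketch for Android 5/6, using the captive_portal_server setting and the hypothetical server from above:

```sh
settings put global captive_portal_server scratch.binfalse.de
```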
Changed as of Android 7, see update below!
This way you retain the captive portal detection without leaking data to Google. However, you will again lose the setting when flashing the phone.
Change the server for captive portal detection using AdAway
Another option for changing the captive portal detection server is to change its IP address to one that's under your control. You can do that with AdAway, for example. Let's say your captive portal detection server has the IP address 5.189.140.231; then you may add the following to your AdAway configuration:
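These are hosts-file style entries, matching the domains mentioned above:

```
5.189.140.231 clients3.google.com
5.189.140.231 clients.l.google.com
```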
The webserver at 5.189.140.231 should then, of course, accept requests for the foreign domains.
This way, you also don’t leak the data to Google and you will also keep the settings after flashing the phone (as long as you leave AdAway installed).
However, there are also some things to keep in mind:
First, I could imagine that Google may be a bit upset if you redirect their domains to a different server.
And second, you don’t know if those are the only servers used for captive portal detection.
If Google at some point comes up with another domain for captive portal detection, such as captive.google.com, you're screwed.
Supplementary material
See also the CaptivePortal description at the android reference.
Create captive portal detection server with Nginx
Just add the following to your Nginx configuration:
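A sketch of such a location block:

```nginx
location /generate_204 {
    return 204;
}
```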
Create captive portal detection server with Apache
If you're running an Apache web server, you need to enable mod_rewrite, then create a .htaccess in the DocumentRoot containing:
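A sketch; mod_rewrite stops processing and answers with the given status code when it is outside the redirect range:

```apache
RewriteEngine On
# answer requests for generate_204 with an empty 204 response
RewriteRule ^generate_204$ - [R=204]
```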
Create captive portal detection server with PHP
A simple PHP script will also do the trick:
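A minimal sketch:

```php
<?php
// always respond with 204 No Content, no matter the request
http_response_code(204);
```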
UPDATE
As of Android 7 the settings have changed. To enable/disable captive portal detection you need to set captive_portal_mode to either
- 0: Don't attempt to detect captive portals, see CAPTIVE_PORTAL_MODE_IGNORE.
- 1: When detecting a captive portal, display a notification that prompts the user to sign in, see CAPTIVE_PORTAL_MODE_PROMPT.
- 2: When detecting a captive portal, immediately disconnect from the network and do not reconnect to that network in the future, see CAPTIVE_PORTAL_MODE_AVOID.
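For example, to disable the detection on Android 7+, the command would thus be:

```sh
settings put global captive_portal_mode 0
```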
To define the captive portal server you actually have three settings:
- captive_portal_use_https: should the phone use HTTPS for captive portal detection? (0 = HTTP, 1 = HTTPS)
- captive_portal_http_url: URL to the captive portal w/o HTTPS.
- captive_portal_https_url: URL to the captive portal when using HTTPS.
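A sketch, again assuming the hypothetical server from above:

```sh
settings put global captive_portal_use_https 1
settings put global captive_portal_https_url https://scratch.binfalse.de/generate_204
```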