iptables: log and drop

Linux has a sophisticated firewall built right into the kernel: it’s called iptables! I’m pretty sure you’ve heard of it. You can do really crazy things with iptables, but here I just want to log how to log+drop a packet in a single rule.

Usually, you would probably do something like this:

iptables -A INPUT -j LOG --log-level warning --log-prefix "INPUT-DROP:"
iptables -A INPUT -j DROP

Works perfectly, but dramatically messes up your rules table, especially if you want to log+drop packets that match a complicated filter. You’ll end up with twice as many table entries as desired.

The trick is to instead create a new rule chain that will log+drop in sequence:

iptables -N LOG_DROP

So here I created a new chain called LOG_DROP. We can now append (-A) two new rules to that chain, which do the actual log+drop:

iptables -A LOG_DROP -j LOG --log-level warning --log-prefix "INPUT-DROP:"
iptables -A LOG_DROP -j DROP

(similar to the first code block above, just for the LOG_DROP chain instead of the INPUT chain)

That’s basically it! If you now need to log+drop a packet you can append a new rule to e.g. the INPUT chain that routes the packet to the LOG_DROP chain:

iptables -A INPUT [...filter specification...] -j LOG_DROP
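
For example, a hypothetical rule to log+drop all incoming telnet traffic (port 23 is just an arbitrary example filter) could look like this:

iptables -A INPUT -p tcp --dport 23 -j LOG_DROP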

You should consider limiting the number of redundant log entries per time span to prevent flooding of your logs. For more documentation consult the iptables(8) manual.
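
One way to do that is the limit match. A minimal sketch (the rate is an arbitrary example, adjust to your needs) would create the LOG rule in the LOG_DROP chain like this instead:

iptables -A LOG_DROP -m limit --limit 5/min --limit-burst 10 -j LOG --log-level warning --log-prefix "INPUT-DROP:"

Packets exceeding that rate are then still dropped, just not logged.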

Common Name vs Subject Alternative Names

You’ve probably heard about the conflict between the fields Common Name (CN) and Subject Alternative Names (subjectAltName) in SSL certificates. It seems to be best practice for clients to compare the CN value with the server’s name. However, RFC 2818 already advised against using the Common Name, and Google now takes the gloves off: since Chrome version 58 they do not support the CN anymore, but throw an error:

Subject Alternative Name Missing

Good potential for some administrative work ;-)

Check for a Subject Alternative Name

You can use OpenSSL to obtain a certificate, for example for binfalse.de:

openssl s_client -showcerts -connect binfalse.de:443 </dev/null 2>/dev/null

Here, openssl will connect to the server behind binfalse.de at port 443 (the default port for HTTPS) to request the SSL certificate and dump it to your terminal. openssl can also print the details of a certificate; you just need to pipe the certificate into:

openssl x509 -text -noout

Thus, the whole command including the output may look like this:

openssl s_client -showcerts -connect binfalse.de:443 </dev/null | openssl x509 -text -noout
    Version: 3 (0x2)
    Serial Number:
  Signature Algorithm: sha256WithRSAEncryption
    Issuer: C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
      Not Before: May 12 07:11:00 2017 GMT
      Not After : Aug 10 07:11:00 2017 GMT
    Subject: CN = binfalse.de
    Subject Public Key Info:
      Public Key Algorithm: rsaEncryption
        Public-Key: (4096 bit)
        Exponent: 65537 (0x10001)
    X509v3 extensions:
      X509v3 Key Usage: critical
        Digital Signature, Key Encipherment
      X509v3 Extended Key Usage: 
        TLS Web Server Authentication, TLS Web Client Authentication
      X509v3 Basic Constraints: critical
      X509v3 Subject Key Identifier: 
      X509v3 Authority Key Identifier: 
      Authority Information Access: 
        OCSP - URI:http://ocsp.int-x3.letsencrypt.org/
        CA Issuers - URI:http://cert.int-x3.letsencrypt.org/
      X509v3 Subject Alternative Name: 
      X509v3 Certificate Policies: 
          CPS: http://cps.letsencrypt.org
          User Notice:
            Explicit Text: This Certificate may only be relied upon by Relying Parties and only in accordance with the Certificate Policy found at https://letsencrypt.org/repository/
  Signature Algorithm: sha256WithRSAEncryption

As you can see in the X509v3 extensions, this server’s SSL certificate does have a Subject Alternative Name:

X509v3 Subject Alternative Name:

To quick-check one of your websites you may want to use the following grep filter:

openssl s_client -showcerts -connect binfalse.de:443 </dev/null | openssl x509 -text -noout | grep -A 1 "Subject Alternative Name"

If that doesn’t print a proper Subject Alternative Name you should go and create a new SSL certificate for that server!
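
If you maintain several sites, a quick shell loop over the commands from above may save some time – the domain list here is of course just an example:

for domain in binfalse.de example.org; do
	echo "== $domain"
	openssl s_client -showcerts -connect "$domain:443" </dev/null 2>/dev/null \
		| openssl x509 -text -noout \
		| grep -A 1 "Subject Alternative Name"
done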

Android: No Internet Access Detected, won't automatically reconnect -aka- Connected, no Internet.

Android: No Internet Access Detected, won't automatically reconnect.

Hands up: who knows what an Android device does when it sees a WiFi network coming up? Exactly, since Lollipop (Android 5) your phone or tablet leaks a quick HTTP request to check if it has internet access. This check is, for example, done with clients3.google.com/generate_204, a “webpage” that always returns an HTTP status code 204 No Content. Thus, if the phone receives a 204 it is connected to the internet; otherwise it assumes that this network does not provide proper internet access or is just a captive portal. However, that way Google of course always knows when you connect from where. And how often. And which device you’re using. etc… :(
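
You can reproduce that check yourself. The following one-liner (just a sketch using curl, not what Android runs internally) prints the HTTP status code returned by Google’s endpoint:

curl -s -o /dev/null -w "%{http_code}\n" http://clients3.google.com/generate_204

If you’re online it should print 204.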

How to prevent the leak

Even if people may like that feature, that is of course a privacy issue – so how can we counter that?

I briefly mentioned that a few years ago. You could use AdAway (available from F-Droid, source from GitHub) to redirect all traffic for clients3.google.com and clients.l.google.com to nirvana.

I already maintain a convenient configuration for AdAway at stuff.lesscomplex.org/adaway.txt, which blocks Google’s captive portal detection.
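
In terms of hosts entries, redirecting those domains to nirvana boils down to something like this (using 127.0.0.1 as the target is an assumption – AdAway’s usual redirect address; 0.0.0.0 works as well):

127.0.0.1 clients3.google.com
127.0.0.1 clients.l.google.com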

However, blocking that “feature” also comes with some drawbacks…

The downside of blocking captive portal detection

Android: Connected, no Internet.

The consequences of blocking all requests of the captive portal detection are obvious: your phone assumes that no network has internet access. And therefore, it won’t connect automatically, saying

No Internet Access Detected, won’t automatically reconnect. (see image at the top)

That will probably increase your mobile data usage, as you always need to remember to connect manually. And even if you manually connect to a network “without internet”, the WiFi icon will get an exclamation mark and the phone says

Connected, no Internet. (see second image)


What can we do about it?

Disable captive portal detection

With a rooted phone you can simply disable captive portal detection. Just get a root-shell through adb (or SSH etc) to run the following command:

settings put global captive_portal_detection_enabled 0

Changed as of Android 7, see update below!
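
To double-check that the flag was set, you can read the value back on the same root-shell:

settings get global captive_portal_detection_enabled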

One small drawback of that approach: you need to execute that again after flashing a new image… However, I guess you have a small workflow for re-flashing your phone anyway – just add that tiny bit to it ;-)

Another drawback is that you lose the captive portal detection… Of course, that’s what you intended, but sometimes it may be useful to have that feature, in hotels etc.

Change the server for captive portal detection with the Android API

You can also change the URL of the captive portal server to a server under your control. Let’s say you have a site running at scratch.binfalse.de/generate_204 that simulates a captive portal detection backend and always returns 204, no matter what the request looks like. Then you can use that URL for captive portal detection! Override the captive portal server on a root-shell (adb or SSH etc) by calling:

settings put global captive_portal_server scratch.binfalse.de

Changed as of Android 7, see update below!

This way you retain the captive portal detection without leaking data to Google. However, you will again lose the setting when flashing the phone.

Change the server for captive portal detection using AdAway

Another option for changing the captive portal detection server is to change its IP address to one that’s under your control. You can do that with AdAway, for example. Let’s say your captive portal detection server is reachable at a certain IP address; then you may add the following two entries to your AdAway configuration (replace <your-server-ip> with that address):

<your-server-ip> clients3.google.com
<your-server-ip> clients.l.google.com

The webserver at that IP should then of course accept requests for the foreign domains.

This way, you also don’t leak the data to Google, and you will keep the settings after flashing the phone (as long as you leave AdAway installed). However, there are also some things to keep in mind: First, I could imagine that Google may be a bit upset if you redirect their domains to a different server. And second, you don’t know if those are the only servers used for captive portal detection. If Google at some point comes up with another domain for captive portal detection, such as captive.google.com, you’re screwed.

Supplementary material

See also the CaptivePortal description in the Android reference.

Create captive portal detection server with Nginx

Just add the following to your Nginx configuration:

location /generate_204 { return 204; }
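
To quickly verify the endpoint you can ask for the response headers (scratch.binfalse.de is just the example domain from above); the first line should report a 204 No Content:

curl -sI http://scratch.binfalse.de/generate_204 | head -n 1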

Create captive portal detection server with Apache

If you’re running an Apache web server you need to enable mod_rewrite, then create a .htaccess in the DocumentRoot containing:

<IfModule mod_rewrite.c>
	RewriteEngine On
	RewriteCond %{REQUEST_URI} /generate_204$
	RewriteRule $ / [R=204]
</IfModule>

Create captive portal detection server with PHP

A simple PHP script will also do the trick:

<?php http_response_code (204); ?>


Update for Android 7

As of Android 7 the settings have changed. To enable/disable captive portal detection you now need to set captive_portal_mode; setting it to 0 turns the detection off.

To define the captive portal server you actually have three settings (see the example below):

  • captive_portal_use_https should the phone use HTTPS for captive portal detection? (0 = HTTP, 1 = HTTPS)
  • captive_portal_http_url URL to the captive portal w/o HTTPS.
  • captive_portal_https_url URL to the captive portal when using HTTPS.
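
Putting it together, pointing your phone to your own endpoint on Android 7+ may look like this on a root-shell – scratch.binfalse.de is again just the example from above, and the exact URLs are your choice:

settings put global captive_portal_use_https 1
settings put global captive_portal_https_url "https://scratch.binfalse.de/generate_204"
settings put global captive_portal_http_url "http://scratch.binfalse.de/generate_204"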

Docker MySQL Backup

Even with Docker you need to care about backups… ;-)

As you usually mount all the persistent data into the container, the files will actually be on your host. Thus, you can simply do the backup of these files. However, for MySQL/MariaDB I prefer having an actual SQL-dump. Therefore I just developed the Docker MySQL-Backup tool. You will find the sources at the corresponding GitHub repository.

How does Docker MySQL-Backup work?

The tool basically consists of two scripts:

The script /etc/cron.daily/docker-mysql-backup parses the output of the docker ps command to find running containers of the MySQL image. More precisely, it looks for containers whose image name starts with mysql or mariadb (the \s in the pattern below matches the tab in front of the image name). The actual filter command is

docker ps --format '{{.Names}}\t{{.Image}}'  | /bin/grep '\s\(mysql\|mariadb\)' | awk '{print $1}'

That of course only matches the original MySQL/MariaDB image names (if you have a good reason to derive your own version of that image, please tell me!). For every matching $container the script will exec the following command:

docker exec "$container" \
	sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' \
	| ${GZIP} -9 > "${BACKUP_DIR}/${NOW}_complete.sql.gz"

With the following variables:

  • $BACKUP_DIR is a concatenation of $BACKUP_BASE (configured in /etc/default/docker-mysql-backup) and the container name,
  • $NOW is the current time stamp as date +"%Y-%m-%d_%H-%M".

Thus, the backups are compressed, organised in subdirectories of $BACKUP_BASE, and the SQL-dumps have a time stamp in their names. $BACKUP_BASE defaults to /srv/backup/mysql/, but can be configured in /etc/default/docker-mysql-backup.
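
For a hypothetical container called mysql-wordpress and the default $BACKUP_BASE, the resulting structure would for example look like this (names and dates made up):

/srv/backup/mysql/docker_mysql-wordpress/2017-05-11_03-25_complete.sql.gz
/srv/backup/mysql/docker_mysql-wordpress/2017-05-12_03-25_complete.sql.gz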

Last but not least, the script also cleans the backups itself. It will keep the backups of the last 30 days and all backups of days that end with a 2. So you will keep the backups from the 2nd, the 12th, and the 22nd of every month.
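
As a rough sketch (not the actual implementation), that retention rule boils down to a find call like this – delete dumps older than 30 days unless the day of month ends with a 2:

find "${BACKUP_DIR}" -name '*_complete.sql.gz' -mtime +30 \
	! -name '*-[0-3]2_??-??_complete.sql.gz' -delete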

As the script is stored in /etc/cron.daily/ the cron tool will execute the backup script on a daily basis.

Restore a dump

Restoring the dump is quite easy. Let’s assume your container’s name is $container and the dump to restore carries the time stamp $date. Then you just need to run:

docker exec "$container" -v "${BACKUP_BASE}/docker_${container}":/srv sh -c \
      'exec gunzip < /srv/${date}_complete.sql.gz | mysql -uroot -p"$MYSQL_ROOT_PASSWORD"'

This will decompress the SQL-dump on the host and pipe it into the mysql client inside the running container, which imports it on the fly.


Manual installation through GitHub

Clone the Docker MySQL-Backup repository:

git clone https://github.com/binfalse/docker-mysql-backup.git

Copy the backup script to the cron.daily (most likely /etc/cron.daily/) directory on your system:

cp docker-mysql-backup/etc/cron.daily/docker-mysql-backup /etc/cron.daily/

Copy the configuration to /etc/default/:

cp docker-mysql-backup/etc/default/docker-mysql-backup /etc/default/

Installation from my Apt repository

If you’re running a Debian-based system you may want to use my apt-repository to install the Docker MySQL-Backup tool. In that case you just need to run

aptitude install bf-docker-mysql-backup

Afterwards, look into /etc/default/docker-mysql-backup for configuration options. This way, you’ll always stay up-to-date with bug fixes and new features :)

Automatically update Docker images


Docker is cool. It jails tools into containers. That of course sounds clean and safe and beautiful etc. However, the tools are still buggy and subject to the usual attacks, just as if they were running directly on your main host! Thus, you still need to make sure your containers are up to date.

But how would you do that?

Approaches so far

docker-compose pull

On the one hand, let’s assume you’re using Docker Compose. Then you can go to the directory containing the docker-compose.yml and call

docker-compose pull
docker-compose up -d --remove-orphans

However, this will just update the images used in that Docker Compose setup – all the other images on your system won’t be updated. And you need to do that for all Docker Compose environments. And if you’re running 30 containers of the same image it would check 30 times for an update of that image – quite a waste of power and time.


On the other hand, you may use the dupdate tool, introduced earlier:

dupdate -s

It is able to go through all your images and update them, one after the other. That way, all the images on your system will be updated. However, dupdate doesn’t know about running containers. Thus, currently running tools and services won’t be restarted.

Better: Docker Auto-Update

Therefore, I just developed a tool called Docker Auto-Update that combines the benefits of both approaches. It first calls dupdate -s to update all your images and then iterates over a pre-defined list of Docker Compose environments to call docker-compose up -d --remove-orphans for each of them.

The tool consists of three files:

  • /etc/cron.daily/docker-updater reads the configuration in /etc/default/docker-updater and does the regular update.
  • /etc/default/docker-updater stores the configuration. You need to set the ENABLED variable to 1, otherwise the update tool won’t run.
  • /etc/docker-compose-auto-update.conf carries a list of Docker Compose environments. Add the paths to the docker-compose.yml files on your system, one per line (see the example below).
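
For example, a hypothetical /etc/docker-compose-auto-update.conf for two environments could look like this:

/srv/docker/nextcloud/docker-compose.yml
/srv/docker/wordpress/docker-compose.yml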

As it’s installed in /etc/cron.daily/, cron will take care of the job and update your images and containers on a daily basis. If your system is configured properly, cron will send an email to the systems administrator when it updates an image or restarts a container.

You see, no magic, but a very convenient workflow! :)



To install the Docker Auto-Update tool, you may clone the git repository at GitHub. Then,

  1. move the ./etc/cron.daily/docker-updater script to /etc/cron.daily/docker-updater
  2. move the ./etc/default/docker-updater config file to /etc/default/docker-updater
  3. update the setup in /etc/default/docker-updater – at least set ENABLED=1
  4. create a list of Docker Compose config files in /etc/docker-compose-auto-update.conf – one path to a docker-compose.yml per line (see the sketch below).
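
As a shell sketch (the clone URL is a placeholder – use whatever GitHub shows for the repository), those steps boil down to:

git clone <repository-url> docker-auto-update
cp docker-auto-update/etc/cron.daily/docker-updater /etc/cron.daily/
cp docker-auto-update/etc/default/docker-updater /etc/default/
# then edit /etc/default/docker-updater and set ENABLED=1,
# and list your docker-compose.yml files in /etc/docker-compose-auto-update.conf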

Debian Package

If you’re using a Debian based system you may install the Docker-Tools through my apt-repository:

aptitude install bf-docker-tools

Afterwards, configure /etc/default/docker-updater and at least set ENABLED=1. This way, you’ll stay up-to-date with bug fixes etc.


The tool will update your images and containers automatically – very convenient but also dangerous! The new version of an image may break your tool or may require an updated configuration.

Therefore, I recommend monitoring your tools through Nagios/Icinga/check_mk or whatever. And study the mails generated by cron!
