Automatically update Docker images


Docker is cool. It jails tools into containers. That of course sounds clean and safe and beautiful etc. However, the tools are still buggy and subject to the usual attacks, just as if they were running on your main host! Thus, you still need to make sure your containers are up to date.

But how would you do that?

Approaches so far

docker-compose pull

On the one hand, if you're using Docker Compose, you can go to the directory containing the docker-compose.yml and call

docker-compose pull
docker-compose up -d --remove-orphans

However, this will just update the images used in that Docker Compose setup – all the other images on your system wouldn't be updated. And you need to do that for every Docker Compose environment. And if you're running 30 containers of the same image, it would check 30 times for an update of that image – quite a waste of power and time..

dupdate

On the other hand, you may use the dupdate tool, introduced earlier:

dupdate -s

It is able to go through all your images and update them, one after the other. That way, all the images on your system will be updated. However, dupdate doesn’t know about running containers. Thus, currently running tools and services won’t be restarted..

Better: Docker Auto-Update

Therefore, I just developed a tool called Docker Auto-Update that combines the benefits of both approaches. It first calls dupdate -s to update all your images and then iterates over a pre-defined list of Docker Compose environments to call a docker-compose up -d --remove-orphans.

The tool consists of two files:

  • a docker-updater.sh in /etc/cron.daily/ that does the regular update
  • a /etc/docker-compose-auto-update.conf that carries a list of Docker Compose config files, one per line
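
A minimal sketch of what such a docker-updater.sh could look like – an illustrative assumption on my part, not necessarily the exact script from the repository:

#!/bin/sh
# update all images on this system
dupdate -s
# restart every Docker Compose environment listed in the config
while read -r compose_file; do
	docker-compose -f "$compose_file" up -d --remove-orphans
done < /etc/docker-compose-auto-update.conf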

As it's installed in /etc/cron.daily/, cron will take care of the job and update your images and containers on a daily basis. If your system is configured properly, cron will send an email to the system's administrator when it updates an image or restarts a container.

You see, no magic, but a very convenient workflow! :)

Installation

To install the Docker Auto-Update tool, you may clone the Git repository at GitHub. Then move the docker-updater.sh script to /etc/cron.daily/docker-updater.sh and create a list of Docker Compose config files in /etc/docker-compose-auto-update.conf – one path to a docker-compose.yml per line.
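
In shell terms, the manual installation might look like this (the clone URL is a placeholder for the repository linked above):

# clone the repository (URL is a placeholder -- use the GitHub link above)
git clone https://github.com/.../docker-tools
cp docker-tools/docker-updater.sh /etc/cron.daily/docker-updater.sh
chmod +x /etc/cron.daily/docker-updater.sh
# one docker-compose.yml path per line:
echo "/srv/myapp/docker-compose.yml" >> /etc/docker-compose-auto-update.conf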

If you're using a Debian-based system you may install the Docker-Tools through my apt repository:

aptitude install bf-docker-tools

This way, you’ll stay up-to-date with bug fixes etc.

Disclaimer

The tool will update your images and containers automatically – very convenient but also dangerous! The new version of an image may break your tool or may require an updated configuration.

Therefore, I recommend monitoring your tools through Nagios/Icinga/check_mk or whatever. And study the mails generated by cron!

Rsync of ZFS data with a FreeBSD live system

Booting into FreeBSD

Let's assume you rendered your FreeBSD system unbootable.. Yeah, happens to the best of us, but how can you still copy the data stored on a ZFS to another machine? You probably just shouted RSYNC – but it's not that easy.

You would need a FreeBSD live OS (either on a USB pen drive or on a CD/DVD) and boot into that system. However, by default you do not have network access, the ZPool is not mounted, there is no rsync, SSH is not running, and the live OS is not writable, which brings another few issues…

This is a step-by-step how-to through all the obstacles. Just boot into your live OS (get it from freebsd.org) and go on with the following…

Get Networking

By default your live system does not have networking set up correctly. Call ifconfig to see if the network interface is up. If it's not, you can bring it up using:

ifconfig em0 up

(assuming your interface is called em0)

If it is up, you need to configure it. When you’re using a DHCP server you can just ask for an IP address using:

dhclient em0

Otherwise you need to configure the addresses manually:

ifconfig em0 inet 1.2.3.4 netmask 255.255.255.0

Afterwards you should be able to ping other machines, such as

ping 8.8.8.8

Mount the ZPool

Your ZPool won’t be mounted by default; you need to do it manually. To list all pools available on that machine just call:

zpool import

This searches through the devices in /dev to discover ZPools. You may specify a different directory with -d (see man page for zpool). To actually import and mount your ZPool you need to provide its name, for example:

zpool import -f -o altroot=/mnt zroot

This will import the ZPool zroot. Moreover, the argument -o altroot=/mnt will mount it to /mnt instead of / and the -f will mount it even if it may be in use by another system (here we’re sure it isn’t, aren’t we?).

Create some Writeable Directories

The next problem is that you do not have permission to write to /etc, which you need in order to e.g. create SSH host keys. However, that's also not a big issue, as we have the unionfs filesystem! :)

UnionFS will mount a directory as an overlay over another directory. Let’s assume you have some space in $SPACE (maybe in the ZPool that you just mounted or on another USB drive), then you can just create a few directories:

mkdir $SPACE/{etc,var,usr,tmp}

and mount it as unionfs to the root’s equivalents:

mount_unionfs $SPACE/etc /etc
mount_unionfs $SPACE/var /var
mount_unionfs $SPACE/usr /usr
mount_unionfs $SPACE/tmp /tmp

Now we can write to /etc, while the actual changes will be written to $SPACE/etc! Isn’t that a great invention?

Start the SSH service

Now that /etc is writable we can start caring about the SSH daemon. First, we need to configure it to allow root to login. Add the following line to /etc/ssh/sshd_config:

PermitRootLogin yes

Then, we can start the ssh daemon using:

service sshd onestart

It will automatically create host keys and all the necessary things for a first start of SSH. If that was successful, port 22 should now be open:

# sockstat -4 -l
USER     COMMAND    PID   FD PROTO  LOCAL ADDRESS         FOREIGN ADDRESS
root     sshd       938   4  tcp4   *:22                  *:*
root     syslogd    542   7  udp4   *:514                 *:*

Set root Password

To be able to login you of course need to set a root password:

passwd root

Afterwards, you should be able to login through SSH from any other machine. Go ahead and give it a try!

Install and Run rsync

Almost there, but the FreeBSD live image doesn't come with rsync installed. So we need to do it manually:

pkg install rsync

This will first tell us that not even pkg is installed, but after answering the question with y it will automatically install itself. And as everything is mounted as UnionFS, the stuff will actually be installed to $SPACE/... instead of /. However, you should now be able to do the rsync job from wherever you want :)
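
From here on it's plain rsync business. Pushing everything below the mounted pool to another box might, for example, look like this (host and target path are of course placeholders):

rsync -aH /mnt/ user@other.machine:/backup/zroot/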

Sector 32 is already in use by the program `FlexNet'

Just tried to install Grub on a debootstrap‘ed hard drive, but Grub complained:

Installing for i386-pc platform.
grub-install: warning: Sector 32 is already in use by the program 'FlexNet'; avoiding it.  This software may cause boot or other problems in future.  Please ask its authors not to store data in the boot track.
DRM is bugging us! Image by Brendan Mruk and Matt Lee, shared under CC BY-SA 3.0

Never heard of that FlexNet thing, but according to Wikipedia it's a software license manager. And we all know how this whole DRM thing just bugs us.. So it bugged me, because the new system wouldn't boot properly.. Other people have had similar problems.

However, it seems impossible to force Grub to override this sector, but you may wipe it manually. In my case sector 32 was infected by DRM, so I did the following:

dd if=/dev/zero of=/dev/sda bs=512 count=1 seek=32
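
By the way, if you want to see what was actually stored in that sector before wiping it, something like this should print the readable strings from sector 32 (a quick sketch):

dd if=/dev/sda bs=512 skip=32 count=1 2>/dev/null | strings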

Once the sector was wiped, Grub installed like a charm, the system booted again, and the admin was happy that another DRM thing died :)

The figure I used in this article was made by Brendan Mruk and Matt Lee. They share it as CC BY-SA 3.0.

Handy Docker Tools


As I'm working with Docker quite intensively, it was about time to develop some tools that help me manage different tasks. Some of them already existed as functions in my environment, but now they are assembled in a Git repository at GitHub.

The toolbox currently consists of the following tools:

dclean cleans your setup

The Docker-Clean tool dclean helps getting rid of old, exited Docker containers. Sometimes I forget the --rm flag during tests, and when I realise it there are already hundreds of orphaned containers hanging around.. Running dclean without arguments removes all of them quickly.

Additionally, the dclean tool accepts a -i flag to clean up images: it will prune all dangling images, which are orphaned and usually not needed anymore.
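
Under the hood, pruning dangling images presumably boils down to something like the following one-liner – an assumption on my part, not necessarily what dclean literally runs:

docker rmi $(docker images -q -f dangling=true)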

denter gets you into a container

The Docker-Enter tool denter beams you into a running Docker container. Just provide the container's name or CID as an argument to get a /bin/bash inside the container. Internally, denter will just call

docker exec -it "$NAME" "$EXEC"

with $EXEC being /bin/bash by default. So there is no magic, it’s just a shortcut.. You may overwrite the program to be executed by providing it as a second argument. That means,

denter SOMEID ps -ef

will execute ps -ef in the container with the id SOMEID.

dip shows IP addresses

The Docker-IP tool dip shows the IP addresses of running containers. Without arguments it will print the IP addresses, names, and container ids of all running containers. If you're interested in the IP address of a specific container, you may pass that container's CID as an argument with -c, just like:

dip -c SOMEID

This will show the IP of the container with id SOMEID.

dkill stops all running containers

The Docker-Kill tool dkill is able to kill all running containers. It doesn’t care what’s in the container, it will just iterate over the docker ps list to stop all running containers.

As this is quite dangerous, it requires a -f flag to actually kill the containers. You may afterwards run the dclean tool from above to get rid of the cadavers..
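
A typical panic-button sequence might thus look like this:

dkill -f   # stop all running containers
dclean     # remove the leftover exited containers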

dupdate updates images

The Docker-Update tool dupdate helps you stay up-to-date. It iterates over all your images and tries to pull new versions from the Docker registry (or your own registry, if you have one). By default, it will echo the images that have been updated and tell you which images cannot be found (anymore) on the registry. You may pass -v to dupdate to enable verbose mode and also get a report for images that do not have a newer version at the registry. This way, you can make sure that all images are checked. Similarly, you can pass -s to enable silent mode and suppress messages about images that cannot be found at the registry.

Installation

Installing the tools is very easy: Just clone the Docker-Tools Git repository at GitHub. If you're using a Debian-based system you may also install the tools through my apt repository:

aptitude install bf-docker-tools

This way, you’ll stay up-to-date with bug fixes etc.

Firefox: Mute Media

Silence Firefox

You middle-click a few YouTube videos and they all start shouting against each other. You enter a website and it immediately slaps sound in your face. How annoying…

But there may be help.

Enter about:config and set

  • media.block-play-until-visible to true to only play media in the current tab and not start playing stuff in background tabs
  • media.autoplay.enabled to false to stop autoplay for some of the media (doesn't work everywhere, not sure why..)
  • dom.audiochannel.mutedByDefault to true to mute audio by default – essential for offices
  • plugins.click_to_play to true to require a click before plugins run, such as Flash (which you are not using anyway!)
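
If you prefer a file over clicking through about:config, the same preferences should also work from a user.js in your Firefox profile directory – a sketch using the values suggested above:

user_pref("media.block-play-until-visible", true);
user_pref("media.autoplay.enabled", false);
user_pref("dom.audiochannel.mutedByDefault", true);
user_pref("plugins.click_to_play", true);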

Fix highlight colors for QT apps on a GTK desktop

Okular: highlighted text is hardly readable

I'm using the i3 window manager. As smart as possible, it increases productivity and feels clean. Exactly how I like my desktop. I'm still very happy that Uschy hinted me towards i3!

However, I'm experiencing a problem with highlighted text in Okular, my preferred PDF viewer. When I highlight something in Okular, the highlight color (blue) is far too dark and the highlighted text isn't readable anymore. I used to live with that, but it was quite annoying – especially when you're in a meeting/presentation and want to highlight something at the projector. I only saw that problem occurring in Okular. Not sure why, but I honestly do not understand this whole desktop config thing – probably one of the reasons why I love i3 ;-)

Today, I eventually dug into the issue and found out what the problem is and how to solve it. Apparently, Okular uses a Qt configuration that can be modified using the qtconfig tool. Just install it (here for Qt4 applications):

Configure the highlight color using the qtconfig tool
aptitude install qt4-qtconfig

When you run qt4-qtconfig a window will pop up, as you can see in the figure on the right:

  1. Select a GUI Style that is not Desktop Settings (Default), e.g. Cleanlooks.
  2. Then you can click the Tune Palette… button in the Build Palette section.
  3. A second window will pop up. Select Highlight in the Central color roles section.
  4. Finally you're good to select the highlight color using the color chooser button! :)
Okular highlighting text with fixed colors

It was a bit difficult to find, but the result is worth it! The figure at the bottom shows the new highlight color – much better.

I will probably never understand all these KDE, Qt, Gnome, GTK, blah settings. Every environment does it differently and changes the configuration format and location every few months. At least for me that's quite frustrating…

Mail support for Docker's php:fpm

Sending Mails from within a Docker Container

Dockerizing everything is fun and gives rise to sooo many ideas and opportunities. However, sometimes it’s also annoying as …. For example, I just tried to use a Docker container for a PHP application that sends emails. Usually, if your server is configured ok-ish, it works out of the box and I never had problems with something like that.

The Issue

In times of Docker there is just one application per container. That means the PHP container doesn't know anything about emailing. Even worse, the configuration tool that comes with PHP tries to configure the sendmail_path to something like $SENDMAILBINARY -t -i. That obviously fails, because there is no sendmail binary and $SENDMAILBINARY remains empty, so the actual setting becomes:

sendmail_path = " -t -i"

That, in turn, leads to absurd messages in your log files, because there is no such binary as -t:

WARNING: [pool www] child 7 said into stderr: "sh: 1: -t: not found"

That gives you a hard time debugging the issue..

The Solution

To solve this problem I forked the php:fpm image to install sSMTP, a very simple MTA that is able to deliver mail to a mail hub. Afterwards I needed to configure the sSMTP as well as the PHP mail setup.

Install sSMTP into php:fpm

Nothing easier than that, just create a Dockerfile based on php:fpm and install sSMTP through apt:

FROM php:fpm
MAINTAINER martin scharm <https://binfalse.de>

# Install sSMTP for mail support
RUN apt-get update \
	&& apt-get install -y -q --no-install-recommends \
		ssmtp \
	&& apt-get clean \
	&& rm -r /var/lib/apt/lists/*

Docker-build that image, either through the command line or using Docker Compose or whatever your workflow is. For this example, let's call this image binfalse/php-fpm-extended.

Setup for the sSMTP

Configuring the sSMTP is easy. Basically, all you need to do is to specify the address of the mail hub using the mailhub option. However, as my mail server is running on a different physical server I also want to enable encryption, so I set UseTLS and UseSTARTTLS to YES. Docker containers usually get cryptic names, so I reset the hostname using the hostname variable. And last but not least I allowed the applications to overwrite the From field in emails using FromLineOverride. Finally, your full configuration may look like:

FromLineOverride=YES
mailhub=mail.server.tld
hostname=php-fpm.yourdomain.tld
UseTLS=YES
UseSTARTTLS=YES

Just store that in a file, e.g. /path/to/ssmtp.conf. We’ll mount that into the container later on.

Configure mail for php:fpm

Even though we installed sSMTP, the PHP configuration is still invalid – we need to set the sendmail_path correctly. That's actually super easy, just create a file containing the following lines:

[mail function]
sendmail_path = "/usr/sbin/ssmtp -t"

Save it as /path/to/php-mail.conf to mount it into the container later on.

Putting it all together

To run it, you would need to mount the following things:

  • /path/to/php-mail.conf to /usr/local/etc/php/conf.d/mail.ini
  • /path/to/ssmtp.conf to /etc/ssmtp/ssmtp.conf
  • your PHP scripts to wherever your sources are expected..

Thus a Docker Compose configuration may look like:

fpm:
	restart: always
	image: binfalse/php-fpm-extended
	volumes:
		# CONFIG
		- /path/to/ssmtp.conf:/etc/ssmtp/ssmtp.conf:ro
		- /path/to/php-mail.conf:/usr/local/etc/php/conf.d/mail.ini:ro
		# PHP scripts
		- /path/to/scripts:/scripts/:ro
	logging:
		driver: syslog
		options:
			tag: docker/fpm

Give it a try and let me know if that doesn’t work!
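
To quickly verify that mailing works from inside the container, something like this one-liner should do – the container name and recipient address are of course placeholders:

docker exec -it fpm php -r 'var_dump(mail("you@example.com", "test", "hello from the container"));'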


Thunderbird opens multiple windows on startup

Thunderbird demands attention... (Screenshot of my i3 desktop)

For some time now my Icedove/Thunderbird (currently version 38.8.0) has been opening two main windows when I launch it. That's a bit annoying, as there are always two windows visually trying to catch my attention. Even if I read the new mail in one of them, the other one would still demand attention.

The preferences dialog doesn't seem to offer a button for that. I've been looking for a solution on the internet, but wasn't able to find anything. And unfortunately, there is an easy workaround: Just close the second window after startup… The other window will appear again with the next start of Icedove/Thunderbird, but that's the problem of future-me. These nasty easy workarounds! You won't fix the problem and it tends to bug you a little harder with every appearance.

Today I had some time to look into the issue.

I sensed the problem in my Thunderbird settings – if it were an actual Thunderbird issue I would have found something on the internet.. Thus, it must be in my ~/.icedove/XXXX.default directory. Studying the prefs.js file was quite interesting, but didn't reveal anything useful for the current issue. Then I spotted a session.json file, and it turns out that this caused the problem! It contained these lines (cat session.json | json_pp):

{
  "windows" : [
  {
    "type" : "3pane",
    "tabs" : {
      "selectedIndex" : 0,
      "rev" : 0,
      "tabs" : [
      {
        "selectedIndex" : 0,
        "rev" : 0,
        "tabs" : [
        {
          ....
        }
        ]
      }
      ]
    }
  },
  {
    "tabs" : {
      "tabs" : [
      {
        ....
      },
      {
        "mode" : "tasks",
        "ext" : {},
        "state" : {
          "background" : true
        }
      },
      {
        "ext" : {},
        "mode" : "calendar",
        "state" : {
          "background" : true
        }
      },
      {
        "state" : {
          "messageURI" : "imap-message://someidentifyer/INBOX#anotheridentifier"
        },
        "mode" : "message",
        "ext" : {}
      }
      ],
      "rev" : 0,
      "selectedIndex" : 0
    },
    "type" : "3pane"
  }
  ],
  "rev" : 0
}

(For brevity I replaced my actual main tab’s content with ....)

As you see, there is a JSON object containing a single key windows with an array of two objects. These two objects apparently represent my two windows. Both have tabs. For example, in the second window object my calendar and my tasks are opened in tabs (that's the Lightning/Iceowl extension), and a single message occupies another tab.

Brave as I am, I just deleted the first window object (I decided on the first one as it had no extra tabs). And voilà! The next launch of Thunderbird just opens a single window! :)
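
If you prefer not to hand-edit the JSON, a jq one-liner should do the same job (assuming, as in my case, it's the first window object you want to drop):

jq 'del(.windows[0])' session.json > session.json.fixed && mv session.json.fixed session.json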

I don’t know who wrote the stuff into the session.json, it was probably caused by one of my extensions. However, it seems to be stable now. And if it ever happens again I’ll know how to fix it.

Easy solution, should have fixed that immediately!

Update

Fix the settings in FireTray (version 0.6.1)

I found out what caused the problem: It is FireTray – an extension that I'm using to have a systray notification icon. This extension sits in the systray and changes its icon when a new mail arrives. That's super useful, but… by default it doesn't close windows, it just hides them to the systray! That means you can restore them through the context menu of the systray icon… And that means that the windows aren't really closed and will appear again with the next start of the application.

To change that behaviour just right-click the icon and click Preferences. A dialog window will pop up and you just need to unselect the Closing window hides to systray option – compare the screenshot. You may also go for Only last window can be hidden.

Generate PDF documents from smartphones, smartwatches, Raspberry Pis, and everywhere..

TEXPILE – compiling LaTeX projects online

I recently wanted to create a PDF file with a table of data on a device that had neither much computational power nor disk space nor any other resources to spare. So what are the options? Installing Word plus various add-ons? Or some Adobe bloat? Pah.. that's not even running on big machines…
The best option is of course LaTeX. Generating a tex document is neither storage- nor time-intensive. But to get proper LaTeX support you need some gigabytes of disk space, and compiling a tex document requires quite some computational time… So basically also not an option for all devices..

If there weren't …

The Network Way

So we could just install the LaTeX dependencies on another, more powerful machine on the network and send our documents there to get them compiled. On that server we would have a web server running with some scripts to

  • accept the tex file,
  • store it somewhere temporary,
  • execute the pdflatex call,
  • and send back the resulting PDF file.

And that’s exactly what TEXPILE does! It comPILEs laTEX documents on a webserver.

To compile a LaTeX project you just need to throw it as a form-encoded HTTP POST request against the TEXPILE server, and the server will reply with the resulting PDF document. If your project consists of multiple files, for example if you want to embed an image etc., you can create a ZIP file of all files necessary to compile the project and send this ZIP via HTTP POST to the TEXPILE server. In that case, however, you also need to tell TEXPILE which of the files in the ZIP container is supposed to be the root document…
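
Creating such a ZIP container is a one-liner, for example from the directory that contains your project (paths are just examples):

cd myproject && zip -r /tmp/zipfile.zip .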

Sounds a bit scary, doesn't it? PHP? Doing an exec to call a binary? With user-uploaded data?

If there weren't …

The Docker Way

This approach wouldn't be cool if it didn't follow the Docker hype!

We can put all of that in a Docker image and run a TEXPILE container on whatever machine we have at hand. I already provide a TEXPILE Docker container over there at the Docker Hub. That is, you may run a container wherever you want, you will always get the same result (#reproducibility), and you do not need to worry about attacks (#safe). Even if an attacker is able to cheat the PHP and pdflatex tools, he will still be jailed in the Docker container, which you can easily throw away every once in a while and start a new clean one…

And running a fresh container is really super easy! With docker installed you just need to call the following command:

docker run -it --rm -p 1234:80 binfalse/texpile

It will download the latest version of TEXPILE from the Docker hub (if you do not have it, yet) and run a container of it on your machine. It will also bind port 1234 of your machine to the web server of TEXPILE, so you will be able to talk to TEXPILE at http://your.machine:1234.

Give it a try. Just accessing it with a web browser will show you some help message.

For Example

Single-Document Project

Let's try an example using curl. Let's assume your TEXPILE container is running on a machine with the DNS name localhost, and let's say you forward port 1234 to the HTTP server inside the container. Then you can just send your LaTeX document example.tex as the project field of a form-encoded HTTP POST request to TEXPILE:

curl -F project=@example.tex http://localhost:1234 > /tmp/example.pdf

Have a look into /tmp/example.pdf to find the PDF version of example.tex.

Multi-Document Project

If you have a project that consists of multiple documents, for example tex files, images, header files, bibliography etc, then you need to ZIP the whole project before you can throw it against TEXPILE. Let’s assume your ZIP container can be found in /tmp/zipfile.zip and the root tex-document in the container is called example.tex. Then you can send the ZIP container as the project field and the root document name as the filename field, as demonstrated in the following call:

curl -F project=@/tmp/zipfile.zip -F filename=example.tex http://localhost:1234 > /tmp/pdffile.pdf

If the root document is not on the top level of the archive, but for example in the somedir directory, you need to add the directory name to the filename field. Just update the filename parameter of the previous call:

curl -F project=@/tmp/zipfile.zip -F filename=somedir/example.tex http://localhost:1234 > /tmp/pdffile.pdf

The resulting PDF document can then be found in /tmp/pdffile.pdf.

More Examples

The Git project over at GitHub contains some more examples for different programming languages. It also comes with some sample projects, so you can give it a try without much hassle…

Error Control

Obviously, the compilation may fail. Everyone who's ever been working with LaTeX knows that. You may have an error in your tex code, or the file upload failed, or TEXPILE's disk ran out of space, or…

Good news is that TEXPILE was developed with these problems in mind. TEXPILE will give you a hint on what happened in its HTTP status code. Only if everything was fine and the compilation was successful will you get a 2xx status code. In this case you can expect to find a proper PDF document in TEXPILE's response.
If there was an error at any point you'll get either a 4xx or a 5xx status code in return. In case of an error TEXPILE of course cannot return a PDF document, but it will return an HTML document with the detailed error. Depending on how far it has come with its job, you'll find error messages about the upload or about missing parameters, and if the LaTeX compilation failed you'll even get the whole output of the pdflatex command! A lot of information that will help you debug the actual problem.
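
That also makes it easy to react to failures in scripts. For example, curl can capture the status code for you (a sketch using standard curl options):

# store the response body and capture the HTTP status code
status=$(curl -s -o /tmp/result -w '%{http_code}' -F project=@example.tex http://localhost:1234)
if [ "$status" -ge 400 ]; then
	# on failure /tmp/result contains an HTML error report instead of a PDF
	cat /tmp/result
fi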

Summary

Using TEXPILE it is super easy to generate PDF documents from any device with network access. You can for example export some sensor data as a nice table from your smartwatch, or some medical information as graphs and formulas from your fitbit, or create tikz images on a Raspberry Pi; you can even instantly generate new versions of an EULA on your Google Glass…

TEXPILE is free software and I am always super-happy when people contribute to open tools! Thus, go ahead and

  • Send comments, issues, and feature requests by creating a new ticket
  • Spread the tool: Tell your friends, share your thoughts in your blog, etc.
  • Propose new modifications: fork the repo – add your changes – send a pull request
  • Contribute more example code for other languages

No matter if it’s actual code extensions, more examples, bug fixes, typo corrections, artwork/icons: Everything is welcome!!

Eventually learning how to wield PAM

PAM. The Pluggable Authentication Modules. I’m pretty sure you heard of it. It sits in its /etc/pam.d/ and does a very good job. Reliable and performant, as my guts tend to tell me.

Unless… you want something specific! In my case that always implied a lot of trial and error: copying snippets from the internet and moving lines up and down and between different PAM config files. So far, I managed to conquer that combinatorial problem in less time than I would need to learn PAM – always with a bad feeling because I didn't know what I'd been doing to the f*cking sensitive auth system…

But this time PAM drives me nuts. I want to authenticate users via the default *nix passwd as well as via an LDAP server -AND- pam_mount should mount something special for every LDAP user. The trial-and-error method gave me two versions of the config, each of which works for one of the tasks, but I'm unable to find a working config for both. So… Let's learn PAM.

The PAM

On Linux systems, PAM lives in /etc/pam.d/. There are several config files for different purposes. You may change them and they will take effect immediately – no need to restart anything.

PAM allows for what they call “stacking” of modules: Every configuration file defines a stack of modules that are relevant for the authentication for an application. Every line in a config file contains three tokens:

  • the realm is the first word in a line; it is either auth, account, password or session
  • the control determines what PAM should do if the module succeeds or fails
  • the module is the actual module that gets invoked, optionally followed by some module parameters

Realms

There are four different realms:

  • auth: checks that the user is who he claims to be; usually password-based
  • account: account verification functionality; for example checking group membership, account expiration, time of day if a user only has part-time access, and whether a user account is local or remote
  • password: needed for updating passwords for a given service; may involve e.g. dictionary checking
  • session: stuff to setup or cleanup a service for a given user; e.g. launching system-wide init script, performing special logging, or configuring SSO

Controls

In most cases the controls consist of a single keyword that tells PAM what to do if the corresponding module either succeeds or fails. You need to understand that this just controls the PAM library; the actual module neither knows nor cares about it. The four typical keywords are:

  • required: if a ‘required’ module is not successful, the operation will ultimately fail. BUT only after the modules below have run! That is because an attacker shouldn’t learn which module failed, or when. So all modules will be invoked even if the first one fails, giving less information to the bad guys. Just note that a single failure of a ‘required’ module will cause the whole thing to fail, even if everything else succeeds.
  • requisite: similar to required, but the whole thing will fail immediately and the following modules won’t be invoked.
  • sufficient: a successful ‘sufficient’ module is enough to satisfy the requirements in that realm. That means the following modules won’t even be invoked! However, sufficient modules are only sufficient, which means (i) they may fail but the realm may still be satisfied by something else, and (ii) they may all succeed but the realm may still fail because a required module failed.
  • optional: optional modules are non-critical, they may succeed or fail, PAM actually doesn’t care. PAM only cares if there are exclusively optional modules in a particular stack. In that case at least one of them needs to succeed.

Modules

The last token of every line lists the path to the module that will be invoked. You may point to the module using an absolute path starting with / or a path relative to PAM's search directories. The search path depends on the system you're working on; e.g. for Debian it is /lib/security/ or /lib64/security/. You may also pass arguments to the module; common arguments include for example use_first_pass, which reuses the password from an earlier module in the stack, so the user doesn't need to enter their password again (e.g. useful when mounting an encrypted device that uses the same password as the user account).
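
Putting realms, controls, and modules together, a stack that accepts either a local Unix password or an LDAP account might look roughly like the following – a sketch for illustration, not a battle-tested config:

auth    sufficient    pam_unix.so
auth    sufficient    pam_ldap.so use_first_pass
auth    required      pam_deny.so

Here a successful pam_unix.so or pam_ldap.so immediately satisfies the auth realm, and the final pam_deny.so makes the stack fail if neither of them succeeded.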

There are dozens of modules available, every module supporting different options. Let me just forward you to the PAM modules overview at linux-pam.org and an O’Reilly article on useful PAM modules.

That’s it

Yeah, reading and writing about it helped me fix my problem. This article is probably just one among a hundred, so if it didn’t help you I’d like to send you to one of the following resources. Try reading them, and if that doesn’t help, write a blog post about it ;-)

Further Resources