Generate PDF documents from smartphones, smartwatches, Raspberry Pis, and anywhere else.

I recently wanted to create a PDF file with a table of data on a device that had neither much computational power nor disk space, nor any other resources to spare. So what are the options? Installing Word plus various add-ons? Or some Adobe bloat? Pah… that stuff doesn't even run well on big machines…
The best option is of course LaTeX. Generating a tex document is neither storage nor time intensive. But to get proper LaTeX support you need a few gigabytes of disk space, and compiling a tex document takes quite some computational time… So that is basically not an option for such devices either…

If only there wasn't …

The Network Way

So we could just install the LaTeX dependencies on another, more powerful machine on the network and send our documents there to have them compiled. On that server we would have a web server running that has some scripts to

• accept the tex file,
• store it somewhere temporary,
• execute the pdflatex call,
• and send back the resulting PDF file (a minimal sketch of such a script follows below).
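A minimal sketch of such a script could look roughly like the following. This is not TEXPILE's actual code, just an illustration of the idea; the field name project matches what TEXPILE expects (see below), everything else is made up:

<?php
// sketch only: accept an uploaded tex file, compile it, return the PDF
$dir = sys_get_temp_dir () . '/' . uniqid ('texpile-');
mkdir ($dir);
move_uploaded_file ($_FILES['project']['tmp_name'], "$dir/document.tex");
exec ("cd $dir && pdflatex -interaction=nonstopmode document.tex", $output, $ret);
if ($ret === 0 && file_exists ("$dir/document.pdf")) {
    // compilation worked: hand the PDF back to the client
    header ('Content-Type: application/pdf');
    readfile ("$dir/document.pdf");
} else {
    // something went wrong: return the compiler output instead
    http_response_code (500);
    echo implode ("\n", $output);
}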

And that’s exactly what TEXPILE does! It comPILEs laTEX documents on a webserver.

To compile a LaTeX project you just need to throw it as a form-encoded HTTP POST request against the TEXPILE server and the server will reply with the resulting PDF document. If your document project consists of multiple files, for example if you want to embed an image etc, you can create a ZIP file of all files that are necessary to compile the project and send this ZIP via HTTP POST to the TEXPILE server. In that case, however, you also need to tell TEXPILE which of the files in the ZIP container is supposed to be the root document…

Sounds a bit scary, doesn't it? PHP? Doing an exec to call a binary? With user-uploaded data?

If only there wasn't …

The Docker Way

This approach wouldn't be cool if it didn't follow the Docker hype!

We can put all of that into a Docker image and run a TEXPILE container on whatever machine we have at hand. I already provide a TEXPILE Docker image over at the Docker Hub. That is, you may run a container wherever you want, you will always get the same result (#reproducibility), and you do not need to worry too much about attacks (#safe). Even if an attacker is able to cheat the PHP and pdflatex tools, he will still be jailed inside the Docker container, which you can easily throw away every once in a while and start a new, clean one…

And running a fresh container is really super easy! With docker installed you just need to call the following command:
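Assuming the image is published as binfalse/texpile and the web server inside the container listens on port 80, the call could look like this:

docker run -d -p 1234:80 binfalse/texpile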

It will download the latest version of TEXPILE from the Docker Hub (if you do not have it yet) and run a container on your machine. It will also bind port 1234 of your machine to TEXPILE's web server, so you will be able to talk to TEXPILE at http://your.machine:1234.

Give it a try. Just accessing it with a web browser will show you some help message.

For Example

Single-Document Project

Let’s try an example using curl. Let’s assume your TEXPILE container is running on a machine with the DNS name localhost, and let’s say you forward port 1234 to the HTTP server inside the container. Then you can just send your LaTeX document example.tex as project field of a form-encoded HTTP POST request to TEXPILE:
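Assuming TEXPILE answers at the root URL, a call along these lines will do:

curl -F project=@example.tex http://localhost:1234/ > /tmp/example.pdf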

Have a look into /tmp/example.pdf to find the PDF version of example.tex.

Multi-Document Project

If you have a project that consists of multiple documents, for example tex files, images, header files, bibliography etc, then you need to ZIP the whole project before you can throw it against TEXPILE. Let’s assume your ZIP container can be found in /tmp/zipfile.zip and the root tex-document in the container is called example.tex. Then you can send the ZIP container as the project field and the root document name as the filename field, as demonstrated in the following call:
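Again assuming the root URL, the call could look like this:

curl -F project=@/tmp/zipfile.zip -F filename=example.tex http://localhost:1234/ > /tmp/pdffile.pdf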

If the root document is not at the top level of the archive, but for example in the somedir directory, you need to add the directory name to the filename field. Just update the filename parameter of the previous call to:
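-F filename=somedir/example.tex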

The resulting PDF document can then be found in /tmp/pdffile.pdf.

More Examples

The Git project over at GitHub contains some more examples for different programming languages. It also comes with some sample projects, so you can give it a try without much hassle…

Error Control

Obviously, the compilation may fail. Everyone who has ever worked with LaTeX knows that. You may have an error in your tex code, or the file upload failed, or TEXPILE's disk ran out of space, or…

The good news is that TEXPILE was developed with these problems in mind. TEXPILE will give you a hint on what happened via its HTTP status code. Only if everything was fine and the compilation was successful will you get a 2XX status code. In this case you can expect to find a proper PDF document in TEXPILE's response.
If there was an error at any point you'll get either a 4XX or a 5XX status code in return. In case of an error TEXPILE of course cannot return a PDF document, but it will return an HTML document with the detailed error. Depending on how far it got with its job, you'll find error messages about the upload or about missing parameters, and if the LaTeX compilation failed you'll even get the whole output of the pdflatex command! A lot of information that will help you debug the actual problem.

Summary

Using TEXPILE it is super easy to generate PDF documents from any device with network access. You can for example export some sensor data as a nice table from your smartwatch, or some medical information as graphs and formulas from your Fitbit, or create TikZ images on a Raspberry Pi; you can even instantly generate new versions of an EULA on your Google Glass…

TEXPILE is free software and I am always super-happy when people contribute to open tools! Thus, go ahead and

• Send comments, issues, and feature requests by creating a new ticket
• Propose new modifications: fork the repo – add your changes – send a pull request
• Contribute more example code for other languages

No matter if it’s actual code extensions, more examples, bug fixes, typo corrections, artwork/icons: Everything is welcome!!

Eventually learning how to wield PAM

PAM. The Pluggable Authentication Modules. I'm pretty sure you've heard of it. It sits there in /etc/pam.d/ and does a very good job. Reliable and performant, as my gut tends to tell me.

Unless… you want something specific! In my case that has always implied a lot of trial and error. Copying snippets from the internet and moving lines up and down and between different PAM config files. So far, I've managed to conquer that combinatorial problem in less time than I would need to learn PAM - always with a bad feeling because I didn't know what I was doing to the f*cking sensitive auth system…

But this time PAM drives me nuts. I want to authenticate users via the default *nix passwd as well as via an LDAP server -AND- pam_mount should mount something special for every LDAP user. The trial and error method gave me two versions of the config that each work for one of the tasks, but I'm unable to find a working config for both. So… let's learn PAM.

The PAM

On Linux systems, PAM lives in /etc/pam.d/. There are several config files for different purposes. You may change them and they take effect immediately – no need to restart anything.

PAM allows for what they call "stacking" of modules: every configuration file defines a stack of modules that are relevant for the authentication for an application. Every line in a config file contains three tokens (see the example right after the list):

• the realm is the first word in a line; it is either auth, account, password or session
• the control determines what PAM should do if the module succeeds or fails
• the module is the actual module that gets invoked, optionally followed by some module parameters
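To make that concrete, an auth stack could look like the following. This is just an illustrative sketch with well-known modules, not a recommended drop-in configuration:

# realm   control      module        arguments
auth      sufficient   pam_unix.so
auth      sufficient   pam_ldap.so   use_first_pass
auth      required     pam_deny.so

The first token is the realm, the second the control, and the rest is the module plus its arguments; realms and controls are explained in more detail below.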

Realms

There are four different realms:

• auth: checks that the user is who he claims to be; usually password based
• account: account verification functionality; for example checking group membership, account expiration, time of day if a user only has part-time access, and whether a user account is local or remote
• password: needed for updating passwords for a given service; may involve e.g. dictionary checking
• session: stuff to setup or cleanup a service for a given user; e.g. launching system-wide init script, performing special logging, or configuring SSO

Controls

In most cases the controls consist of a single keyword that tells PAM what to do if the corresponding module either succeeds or fails. You need to understand that this just controls the PAM library; the actual module neither knows nor cares about it. The four typical keywords are:

• required: if a ‘required’ module is not successful, the operation will ultimately fail. BUT only after the modules below are done! That is because an attacker shouldn’t learn which module failed, or when. So all modules will be invoked even if the first one fails, giving less information to the bad guys. Just note that a single failure of a ‘required’ module will cause the whole thing to fail, even if everything else succeeds.
• requisite: similar to required, but the whole thing will fail immediately and the following modules won’t be invoked.
• sufficient: a successful ‘sufficient’ module is enough to satisfy the requirements in that realm. That means the following modules won’t be invoked! However, sufficient modules are only sufficient, which means (i) they may fail but the realm may still be satisfied by something else, and (ii) they may all succeed but the realm may still fail because a required module failed.
• optional: optional modules are non-critical, they may succeed or fail, PAM actually doesn’t care. PAM only cares if there are exclusively optional modules in a particular stack. In that case at least one of them needs to succeed.

Modules

The last token of every line is the module that will be invoked. You may point to the module using an absolute path starting with / or a path relative to PAM’s search directory. The search path depends on the system you’re working on; for Debian, for example, it is /lib/security/ or /lib64/security/. You may also pass arguments to the module; a common argument is use_first_pass, which hands over the password from an earlier module in the stack, so the user doesn’t need to enter their password again (e.g. useful when mounting an encrypted device that uses the same password as the user account).

There are dozens of modules available, every module supporting different options. Let me just forward you to the PAM modules overview at linux-pam.org and an O’Reilly article on useful PAM modules.

PHP file transfer: Forget the @ - use curl_file_create

I just struggled uploading a file with PHP cURL. Basically, sending HTTP POST data is pretty easy. And to send a file you just needed to prefix its name with an at sign (@). Adapted from the most cited example:
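The widely copied pattern looked roughly like this (host, field, and file names are of course placeholders):

<?php
// classic pre-5.6 style: the leading @ marks the value as a file to upload
$ch = curl_init ('http://example.com/upload.php');
curl_setopt ($ch, CURLOPT_POST, true);
curl_setopt ($ch, CURLOPT_POSTFIELDS, array ('file' => '@/some/filename.ext'));
$result = curl_exec ($ch);
curl_close ($ch);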

You see, if you add an ‘@’ sign as the first character of a post field, the value will be interpreted as a file name and will be replaced by that file’s content.

At least, that is how it used to be… And how most of the examples out there show you.

However, they changed that behaviour. They recognised that it is obviously inconvenient, insecure, and error prone: you cannot send POST data that starts with an @, and you always need to sanitise user data before sending it, as it may otherwise leak the contents of files on your server. And thus they changed the behaviour in version 5.6, see the RFC.

That means that by default @/some/filename.ext won’t be recognised as a file – PHP cURL will literally send the @ and the filename (@/some/filename.ext) instead of the content of that file. Took me a while and some tcpdumping to figure that out…

Instead, they introduced a new function called curl_file_create that creates a proper CURLFile object for you. Thus, you should update the above snippet as follows:
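Again a rough sketch with placeholder names:

<?php
// PHP >= 5.5/5.6 style: pass a CURLFile object instead of an @-prefixed path
$ch = curl_init ('http://example.com/upload.php');
curl_setopt ($ch, CURLOPT_POST, true);
curl_setopt ($ch, CURLOPT_POSTFIELDS, array ('file' => curl_file_create ('/some/filename.ext')));
$result = curl_exec ($ch);
curl_close ($ch);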

Copy the Dockerfile to the right place

Maven’s resources-plugin is ideally suited to deal with resources. To copy all Docker-related resources to the target directory you can use its copy-resources goal.
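A configuration along the following lines should do the job – a sketch that assumes the Docker resources live in src/main/docker, not necessarily the exact snippet from the original post:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-resources-plugin</artifactId>
  <executions>
    <execution>
      <id>copy-docker-resources</id>
      <!-- run early so target/Dockerfile exists for later phases -->
      <phase>validate</phase>
      <goals>
        <goal>copy-resources</goal>
      </goals>
      <configuration>
        <outputDirectory>${project.build.directory}</outputDirectory>
        <resources>
          <resource>
            <directory>src/main/docker</directory>
            <!-- replace Maven placeholders such as ${project.version} -->
            <filtering>true</filtering>
          </resource>
        </resources>
      </configuration>
    </execution>
  </executions>
</plugin>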

In addition, the <filtering>true</filtering> part also makes sure to replace all Maven-related placeholders, just like the ${project.version} that we’ve been using. Thus, this solves (i) and (ii), and after the validate phase we’ll have a proper target/Dockerfile.

Build a Docker image

Using Maven’s antrun-plugin we can call the docker tool and have it run a docker build after the deploy phase. Thus, it builds a Docker image tagged with the current version of your tool. The build’s context is target, so it will use the target/Dockerfile, which COPYs the new version of your tool into the image.

Automatically build images using a Maven profile

I created a docker profile in Maven’s configuration file that is active per default if there is a src/main/docker/Dockerfile in your repository.

Bonus: Also push the new image to the Docker Hub

To also push the image you need to execute the push command. And due to the latest-confusion of Docker you should also create the latest alias and push that as well. However, both are easy. Just append a few more exec calls to the antrun-plugin! The final pom.xml snippet can be found on GitHub.

Supplement

The image for this article was derived from Wikipedia’s Apache logo and Wikipedia’s Docker logo, licensed under the Apache License, Version 2.0.

Create an Unscannable Letter

Some time ago I heard about the EURion constellation. Never heard about it? It has nothing to do with stars or astrology. It’s the thing on your money! :) Take a closer look at your bills and you’ll discover plenty of EURions, as shown in the picture on the right. Just a few inconspicuous dots. So what’s it all about? The EURion constellation is a pattern to be recognised by imaging software, so that it can recognise banknotes. It was invented to prevent people from copying money :) But I don’t know of any law that prohibits using that EURion, so I’ve been playing around with it. It took me some trials to find the optimal size, but I was able to create a LaTeX document that includes the EURion – essentially just the EURion image embedded at the right size. The whole LaTeX environment can be found on GitHub, together with the EURion image and stuff. I also provide the resulting letter. Of course I immediately asked some friends to try to scan the letter, but it turns out that not all scanners/printers are aware of the EURion… So it’s a bit disappointing, but I learned another thing. Good good. And to be honest, I do not have a good use case. Why should I prevent someone from printing my letters? Maybe photographers can use the EURion in their images. Copyright bullshit or something…

Monitoring of XOS devices

This week I developed some plugins for Nagios/Icinga to monitor network devices of the vendor Extreme Networks. All these plugins receive status information of, e.g., switches via SNMP.

The Basic: Check Mem, CPU, and Fans

Checking for available memory, for the device’s temperature, for the power supplies, and for fan states is quite straightforward. You just ask the switch for the values of a few OIDs, evaluate the answer, and tell Nagios/Icinga what to do. The Simple Network Management Protocol (SNMP) is actually a very easy to use protocol. There is an SNMP server, such as a router or a switch, which exposes management data through the SNMP protocol. To access these data you just send an object identifier (OID) to the SNMP server and receive the corresponding value. So-called management information bases (MIBs) can tell you what a certain OID stands for.

On the command line, for example, you could use snmpwalk to iterate over an OID subtree to, e.g., obtain information about the memory of a device:

usr@srv$ snmpwalk -v 2c -c publicCommunityString switch.address.com 1.3.6.1.4.1.1916.1.32.2.2.1
1.3.6.1.4.1.1916.1.32.2.2.1.1.1 = Gauge32: 1
1.3.6.1.4.1.1916.1.32.2.2.1.2.1 = STRING: "262144"
1.3.6.1.4.1.1916.1.32.2.2.1.3.1 = STRING: "116268"
1.3.6.1.4.1.1916.1.32.2.2.1.4.1 = STRING: "7504"
1.3.6.1.4.1.1916.1.32.2.2.1.5.1 = STRING: "138372"


The OID 1.3.6.1.4.1.1916.1.32.2.2.1 addresses the memory information table of the SNMP provider at switch.address.com. The value at *.2.1 shows how much memory is installed, *.3.1 shows how much memory is free, *.4.1 shows how much is consumed by the system, and *.5.1 shows how much is consumed by user processes. Basic calculations tell us there are 262144/1024 = 256 MB in total and 100*116268/262144 = 44.35% are free. A bit more logic for a warning/critical switch and the plugin is done.
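For illustration, a stripped-down free-memory check could look like the following shell sketch (hard-coded thresholds and community string, no error handling – the real plugins are of course more elaborate):

#!/bin/sh
# sketch of a Nagios-style memory check for an XOS switch
BASE=1.3.6.1.4.1.1916.1.32.2.2.1
# total and free memory in KB; strip quotes from the STRING values
TOTAL=$(snmpget -Oqv -v 2c -c publicCommunityString switch.address.com $BASE.2.1 | tr -d '"')
FREE=$(snmpget -Oqv -v 2c -c publicCommunityString switch.address.com $BASE.3.1 | tr -d '"')
PCT=$((100 * FREE / TOTAL))
if [ "$PCT" -lt 10 ]; then
    echo "CRITICAL - only $PCT% memory free"; exit 2
elif [ "$PCT" -lt 20 ]; then
    echo "WARNING - only $PCT% memory free"; exit 1
else
    echo "OK - $PCT% memory free"; exit 0
fi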

The Feature: Monitoring of the FDB

But I would probably not write about that basic stuff if there wasn’t an extra feature! I implemented a script to also monitor the FDB. FDB is an abbreviation for forwarding database: the switch maintains a forwarding database (FDB) of all MAC addresses received on all of its ports. It uses, for example, the information in this database to decide whether a frame should be forwarded or filtered. Each entry consists of

• the MAC address of the device behind the port
• the associated VLAN
• the age of the entry – depending on the configuration the entries age out of the table
• some flags – e.g. is the entry dynamic or static
• the port

The table may look like the following:

> show fdb
Mac                     Vlan       Age  Flags         Port / Virtual Port List
------------------------------------------------------------------------------
01:23:45:67:89:ab    worknet(0060) 0056 n m           9
01:23:42:67:89:ab     mobnet(0040) 0001 n m           21

Flags : d - Dynamic, s - Static, p - Permanent, n - NetLogin, m - MAC, i - IP,
x - IPX, l - lockdown MAC, L - lockdown-timeout MAC, M- Mirror, B - Egress Blackhole,
b - Ingress Blackhole, v - MAC-Based VLAN, P - Private VLAN, T - VLAN translation,
D - drop packet, h - Hardware Aging, o - IEEE 802.1ah Backbone MAC,
S - Software Controlled Deletion, r - MSRP


As soon as the switch receives a frame on one of its ports it learns the corresponding MAC address, port number, etc. into this table. So if a frame for that MAC address arrives, it knows where to send it.

However, that’s content for a networking class. All we need to know is that a switch can tell you which device with which MAC address is connected to which port. And that’s the idea of check_extreme_fdb.pl! It compares the entries of the FDB with some expected entries in a CSV file. The CSV is supposed to contain three columns:

mac,port,vlan


If a MAC address in the FDB matches a MAC address in the CSV file, the script checks the ports and VLANs. If those do not match, it will raise an error.

For the CSV: feel free to leave port or vlan empty if you do not care about that detail. That means, if you just want to make sure that the device with the MAC 01:23:45:67:89:ab is in VLAN worknet, you add an entry such as:

01:23:45:67:89:ab,,worknet


Use -e <FILE> to pass the CSV file containing the expected entries to the program and call it like Beckham:

perl -w check_extreme_fdb.pl -s <SWITCH> -C <COMMUNITY-STRING> -e <EXPECTED>


Here, SWITCH is the switch’s address and COMMUNITY-STRING is the SNMP “passphrase”. You may also want to add -w to raise a warning if one of the entries in the CSV file wasn’t found in the FDB. To create a sample CSV file that matches the current FDB you can call the script with --print.

To get the script have a look at the check_extreme_fdb.pl software page.

More Extreme Stuff

In addition, there are some other scripts to monitor Extreme Networks devices.

Do I have a CD-RW?

You don’t know whether the CD drive in your machine is able to burn CDs? And you are too lazy to stick your head under the table to have a look? Or you’re working remotely on the machine? Then this is your command line:

$ cat /proc/sys/dev/cdrom/info
CD-ROM information, Id: cdrom.c 3.20 2003/12/17

drive name:             sr0
drive speed:            32
drive # of slots:       1
Can close tray:         1
Can open tray:          1
Can lock tray:          1
Can change speed:       1
Can select disk:        0
Can read multisession:  1
Can read MCN:           1
Reports media changed:  1
Can play audio:         1
Can write CD-R:         1
Can write CD-RW:        1
Can read DVD:           1
Can write DVD-R:        1
Can write DVD-RAM:      1
Can read MRW:           1
Can write MRW:          1
Can write RAM:          1

Works on Debian based systems :)

Docker Jail for Skype

As I’m now permanently installed at our University (yeah!) I probably need to use skype more often than desired. However, I still try to avoid proprietary software, and skype is the worst of all. Skype is an obfuscated malicious binary blob with network capabilities, as jvoisin beautifully put into words. I came in contact with skype multiple times and it was always a mess.

Ok, but what are the options if I need skype? So far I’ve been using a VirtualBox VM whenever I needed to call somebody who insisted on using skype, but now that I’ll be using skype more often I need an alternative to running a second OS on my machine. My friend Tom meant to make a joke about using Docker and … TA-DAH! … Turns out it’s actually possible to jail a usable skype inside a Docker container! Guided by jvoisin’s article Running Skype in docker I created my own setup.

The Dockerfile

The Dockerfile is available from the skype-on-docker project on GitHub. Just clone the project and change into the directory:

$ git clone https://github.com/binfalse/skype-on-docker.git
$ cd skype-on-docker
$ ls -l
total 12
-rw-r--r-- 1 martin martin   32 Jan  4 17:26 authorized_keys
-rw-r--r-- 1 martin martin 1144 Jan  4 17:26 Dockerfile
-rw-r--r-- 1 martin martin  729 Jan  4 17:26 README.md


The Docker image is based on debian:stable. It will install an OpenSSH server (exposing port 22) and download the skype binaries. It will also install the authorized_keys file into the home directories of root and of the unprivileged user. Thus, to be able to connect to the container you need to copy your public SSH key into that file:

$ cat ~/.ssh/id_rsa.pub >> authorized_keys

Good so far? Ok, then go for it! Build a Docker image:

$ docker build -t binfalse/skype .


This might take a while. Docker will execute the commands given in the Dockerfile and create a new Docker image with the name binfalse/skype. Feel free to choose a different name.. As soon as that’s finished you can instantiate and run a new container using:

$ docker run -d -p 127.0.0.1:55757:22 --name skype_container binfalse/skype

This will start the container as a daemon (-d) with the name skype_container (--name skype_container) and the host’s port 55757 mapped to the container’s port 22 (-p 127.0.0.1:55757:22). Give it a millisecond to come up and then you should be able to connect to that container via ssh. From that shell you will be able to start and configure skype:

$ ssh -X -p 55757 docker@127.0.0.1

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Mon Jan  4 23:07:37 2016 from 172.17.42.1
$ skype

You can immediately go and do your chats and stuff, but you can also just configure skype. Set up everything just like you want to find it when starting skype, for example tick the auto-login button to get rid of the login screen etc. As soon as that’s done, commit the changes to build a new image reflecting your preferences:

$ docker commit skype_container binfalse/deb-skype


Now you’ll have an image called binfalse/deb-skype that contains a fully configured skype installation. Just kill the other container:

$ docker stop skype_container
$ docker rm skype_container


And now your typical workflow might look like:

docker run -d -p 127.0.0.1:55757:22 --name skype__ binfalse/deb-skype
sleep 1
ssh -X -p 55757 docker@127.0.0.1 skype && docker rm -f skype__


Feel free to cast it in a mould just as I did. The script is also available from my apt repository; its name is bf-skype-on-docker:

echo "deb http://apt.binfalse.de binfalse main" > /etc/apt/sources.list.d/binfalse.list
apt-get update && apt-get install bf-skype-on-docker


Getting into a new group

You know, … you just got this new floppy disk with very important material, but you cannot access it because you’re not in the system’s floppy group and, thus, you’re not allowed to access the floppy device. The solution is easy: add your current user to the floppy group! Sounds easy, doesn’t it? The annoying thing is that those changes won’t take effect in the current session. You need to log out and log in again – quite annoying, especially if you’re in the middle of something with lots of windows and stuff open. Just happened to me with docker again…

However, there are two methods to get into the new groups without the need to kill the current session:

• su yourself: let’s say your username is myname, then you just need to run su myname to get a prompt with the new group memberships.
• ssh localhost: that also gives you a new session with updated group affiliations.

That way you do not need to start a new session. However, you still need to start all applications/tools from that terminal – might be odd to those who are used to the GNOME/KDE menus… :)

Supplemental material

Display group membership:

 groups USERNAME


Create a new group:

 groupadd GROUPNAME

Create a new user and directly add it to a group:

 useradd -G GROUPNAME USERNAME