Generate PDF documents from smartphones, smartwatches, Raspberry Pis, and everywhere else.

TEXPILE – compiling LaTeX projects online

I recently wanted to create a PDF file with a table of data on a device that had neither much computational power nor disk space nor any other spare resources. So what are the options? Installing Word plus various add-ons? Or some Adobe bloat? Pah… that doesn’t even run smoothly on big machines…
The best option is of course LaTeX. Generating a tex document is neither storage nor time intensive. But to get proper LaTeX support you need a few gigabytes of disk space, and compiling a tex document requires quite some computational time… So basically that’s also no option for such devices…

If there weren’t …

The Network Way

So we could just install the LaTeX dependencies on another, more powerful machine on the network and send our documents there to get them compiled. On that server we would run a web server with some scripts to

  • accept the tex file,
  • store it somewhere temporary,
  • execute the pdflatex call,
  • and send back the resulting PDF file.

And that’s exactly what TEXPILE does! It comPILEs laTEX documents on a webserver.
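
Conceptually, the server-side job boils down to something like the following shell sketch (TEXPILE itself implements this in PHP; the sketch only illustrates the idea, with $uploaded_file standing in for the tex file received via HTTP):

# sketch only: compile an uploaded tex file and hand back the PDF
workdir=$(mktemp -d)                              # store it somewhere temporary
cp "$uploaded_file" "$workdir/document.tex"       # accept the tex file
cd "$workdir"
pdflatex -interaction=nonstopmode document.tex    # execute the pdflatex call
cat document.pdf                                  # send back the resulting PDF file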

To compile a LaTeX project you just need to throw it as a form-encoded HTTP POST request at the TEXPILE server, and the server will reply with the resulting PDF document. If your project consists of multiple files, for example if you want to embed an image etc., you can create a ZIP file of everything that is necessary to compile the project and send this ZIP via HTTP POST to the TEXPILE server. In that case, however, you also need to tell TEXPILE which of the files in the ZIP container is supposed to be the root document…

Sounds a bit scary, doesn’t it? PHP? Doing an exec to call a binary? With user-uploaded data?

If there weren’t …

The Docker Way

This approach wouldn’t be cool if it didn’t follow the Docker hype!

We can put all of that into a Docker image and run a TEXPILE container on whatever machine we have at hand. I already provide a TEXPILE Docker image over at the Docker Hub. That is, you may run a container wherever you want, you will always get the same result (#reproducibility), and you do not need to worry about attacks (#safe). Even if an attacker manages to trick the PHP and pdflatex tools, they will still be jailed inside the Docker container, which you can easily throw away every once in a while and replace with a new, clean one…

And running a fresh container is really super easy! With Docker installed you just need to call the following command:

docker run -it --rm -p 1234:80 binfalse/texpile

It will download the latest version of TEXPILE from the Docker Hub (if you do not have it yet) and run a container of it on your machine. It will also bind port 1234 of your machine to TEXPILE’s web server, so you will be able to talk to TEXPILE at http://your.machine:1234.

Give it a try. Just accessing it with a web browser will show you some help message.

For Example

Single-Document Project

Let’s try an example using curl. Let’s assume your TEXPILE container is running locally (reachable as localhost), and that port 1234 is forwarded to the HTTP server inside the container. Then you can just send your LaTeX document example.tex as the project field of a form-encoded HTTP POST request to TEXPILE:

curl -F project=@example.tex http://localhost:1234 > /tmp/example.pdf

Have a look into /tmp/example.pdf to find the PDF version of example.tex.

Multi-Document Project

If you have a project that consists of multiple documents, for example tex files, images, header files, a bibliography, etc., then you need to ZIP the whole project before you can throw it against TEXPILE. Let’s assume your ZIP container can be found in /tmp/zipfile.zip and the root tex document in the container is called example.tex. Then you can send the ZIP container as the project field and the root document’s name as the filename field, as demonstrated in the following call:

curl -F project=@/tmp/zipfile.zip -F filename=example.tex http://localhost:1234 > /tmp/pdffile.pdf

If the root document is not on the top level of the archive, but for example in the somedir directory, you need to add the directory name to the filename field. Just update the filename parameter of the previous call:

curl -F project=@/tmp/zipfile.zip -F filename=somedir/example.tex http://localhost:1234 > /tmp/pdffile.pdf

The resulting PDF document can then be found in /tmp/pdffile.pdf.
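
In case you are wondering how to create such a ZIP container in the first place, a plain call to the zip tool does the job, for example:

# zip the whole project directory; the root document is then somedir/example.tex
zip -r /tmp/zipfile.zip somedir
# or zip just the directory’s contents, so example.tex ends up on the top level
cd somedir && zip -r /tmp/zipfile.zip .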

More Examples

The Git repository over at GitHub contains some more examples for different programming languages. It also comes with some sample projects, so you can give it a try without much hassle…

Error Control

Obviously, the compilation may fail. Everyone who’s ever worked with LaTeX knows that. You may have an error in your tex code, or the file upload failed, or TEXPILE’s disk ran out of space, or…

The good news is that TEXPILE was developed with these problems in mind. TEXPILE gives you a hint on what happened via its HTTP status code. Only if everything went fine and the compilation was successful will you get a 2xx status code. In this case you can expect to find a proper PDF document in TEXPILE’s response.
If there was an error at any point you’ll get either a 4xx or a 5xx status code in return. In case of an error TEXPILE of course cannot return a PDF document, but it will return an HTML document with the detailed error. Depending on how far it got with its job, you’ll find error messages about the upload or about missing parameters, and if the LaTeX compilation failed you’ll even get the whole output of the pdflatex command! A lot of information that will help you debug the actual problem.
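
In a script you can, for example, let curl report the status code and check it before trusting the output; a small sketch:

# keep the response, but check the status code before treating it as a PDF
status=$(curl -s -o /tmp/example.pdf -w '%{http_code}' -F project=@example.tex http://localhost:1234)
case "$status" in
  2*) echo "PDF written to /tmp/example.pdf" ;;
  *)  echo "compilation failed (HTTP $status), /tmp/example.pdf contains the error report" >&2 ;;
esac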

Summary

Using TEXPILE it is super easy to generate PDF documents from every device with network access. You can, for example, export some sensor data as a nice table from your smartwatch, or some medical information as graphs and formulas from your Fitbit, or create tikz images on a Raspberry Pi; you can even instantly generate new versions of an EULA on your Google Glass…

TEXPILE is free software and I am always super-happy when people contribute to open tools! Thus, go ahead and

  • Send comments, issues, and feature requests by creating a new ticket
  • Spread the tool: Tell your friends, share your thoughts in your blog, etc.
  • Propose new modifications: fork the repo – add your changes – send a pull request
  • Contribute more example code for other languages

No matter if it’s actual code extensions, more examples, bug fixes, typo corrections, artwork/icons: Everything is welcome!!

Eventually learning how to wield PAM

PAM. The Pluggable Authentication Modules. I’m pretty sure you’ve heard of it. It sits in /etc/pam.d/ and does a very good job. Reliable and performant, as my gut tells me.

Unless… you want something specific! In my case that has always implied a lot of trial and error: copying snippets from the internet and moving lines up and down and between different PAM config files. So far, I have managed to conquer that combinatorial problem in less time than I would need to learn PAM, always with a bad feeling because I didn’t really know what I was doing to the f*cking sensitive auth system…

But this time PAM drives me nuts. I want to authenticate users via the default *nix passwd as well as via an LDAP server -AND- pam_mount should mount something special for every LDAP user. The trial and error method gave me two versions of the config, each working for one of the tasks, but I was unable to find a working config for both. So… let’s learn PAM.

The PAM

On Linux systems, PAM lives in /etc/pam.d/. There are several config files for different purposes. You may change them and they will take effect immediately – no need to restart anything.

PAM allows for what they call “stacking” of modules: every configuration file defines a stack of modules that are relevant for the authentication for an application. Every line in a config file contains three tokens:

  • the realm is the first word in a line; it is either auth, account, password, or session
  • the control determines what PAM should do if the module succeeds or fails
  • the module is the actual module that gets invoked, optionally followed by module parameters

Realms

There are four different realms:

  • auth: checks that the user is who they claim to be; usually password based
  • account: account verification functionality; for example checking group membership, account expiration, time of day if a user only has part-time access, and whether a user account is local or remote
  • password: needed for updating passwords for a given service; may involve e.g. dictionary checking
  • session: stuff to set up or clean up a service for a given user; e.g. launching a system-wide init script, performing special logging, or configuring SSO

Controls

In most cases the control consists of a single keyword that tells PAM what to do if the corresponding module either succeeds or fails. You need to understand that this only controls the PAM library; the actual module neither knows nor cares about it. The four typical keywords are:

  • required: if a ‘required’ module is not successful, the operation will ultimately fail. BUT only after the modules below it are done! That is because an attacker shouldn’t learn which module failed, or when. So all modules will be invoked even if the first one fails, giving less information to the bad guys. Just note that a single failure of a ‘required’ module will cause the whole thing to fail, even if everything else succeeds.
  • requisite: similar to required, but the whole thing will fail immediately and the following modules won’t be invoked.
  • sufficient: a successful ‘sufficient’ module is enough to satisfy the requirements in that realm. That means the modules following it won’t be invoked (as long as no preceding ‘required’ module has failed). However, sufficient modules are only sufficient, which means (i) they may fail but the realm may still be satisfied by something else, and (ii) they may all succeed but the realm may still fail because a required module failed.
  • optional: optional modules are non-critical; they may succeed or fail, PAM doesn’t really care. PAM only cares about them if a particular stack consists exclusively of optional modules. In that case at least one of them needs to succeed.

Modules

The last token of every line is the path to the module that will be invoked. You may point to the module using an absolute path starting with / or a path relative to PAM’s search directory. The search path depends on the system you’re working on; for Debian, for example, it is /lib/security/ or /lib64/security/. You may also pass arguments to the module; a common argument is use_first_pass, which reuses the password from an earlier module in the stack, so the user doesn’t need to enter their password again (useful, e.g., when mounting an encrypted device that uses the same password as the user account).
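
To see how realms, controls, and modules fit together, here is a rough sketch of what an auth stack combining local and LDAP authentication could look like (purely illustrative, not the exact configuration for the pam_mount setup mentioned above):

# sketch of an auth stack in /etc/pam.d/ -- adjust for your system
# try local unix authentication first; success satisfies the auth realm
auth    sufficient    pam_unix.so
# otherwise try LDAP, reusing the password that was already entered
auth    sufficient    pam_ldap.so use_first_pass
# if neither module succeeded, deny access
auth    required      pam_deny.so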

There are dozens of modules available, each supporting different options. Let me just forward you to the PAM modules overview at linux-pam.org and an O’Reilly article on useful PAM modules.

That’s it

Yeah, reading and writing about it helped me fix my problem. This article is probably just one among hundreds, so if it didn’t help you I’d like to send you to one of the following. Try reading them; if that doesn’t help, write a blog post about it ;-)

Further Resources

PHP file transfer: Forget the @ - use `curl_file_create`

I just struggled uploading a file with PHP cURL. Basically, sending HTTP POST data is pretty easy. And to send a file you just needed to prefix its path with an at sign (@). Adapted from the most-cited example:

<?php
$target_url = 'http://server.tld/';

$post = array (
    'extra_info' => '123456',
    'file_contents' => '@' . realpath ('./sample.jpeg')
    );

$ch = curl_init ($target_url);
curl_setopt ($ch, CURLOPT_POST, 1);
curl_setopt ($ch, CURLOPT_POSTFIELDS, $post);
curl_setopt ($ch, CURLOPT_RETURNTRANSFER, 1);
$result=curl_exec ($ch);
curl_close ($ch);
?>

You see, if you add an ‘@’ sign as the first character of a post field the content will be interpreted as a file name and will be replaced by the file’s content.

At least, that is how it used to be… And how most of the examples out there show you.

However, they changed the behaviour. They recognised that this is obviously inconvenient, insecure, and error prone: you cannot send POST data that starts with an @, and you always need to sanitise user data before sending it, as it may otherwise leak the contents of files on your server. Thus, they changed that behaviour in version 5.6, see the RFC.

That means by default the @/some/filename.ext won’t be recognized as a file – PHP cURL will literally send the @ and the filename (@/some/filename.ext) instead of the content of that file. Took me a while and some tcpdumping to figure that out…

Instead, they introduced a new function called curl_file_create that creates a proper CURLFile object for you. Thus, you should update the above snippet as follows:

<?php
$target_url = 'http://server.tld/';

$post = array (
    'extra_info' => '123456',
    'file_contents' => curl_file_create ('./sample.jpeg')
    );
    
$ch = curl_init ($target_url);
curl_setopt ($ch, CURLOPT_POST, 1);
curl_setopt ($ch, CURLOPT_POSTFIELDS, $post);
curl_setopt ($ch, CURLOPT_RETURNTRANSFER, 1);
$result=curl_exec ($ch);
curl_close ($ch); 
?>

Note that the contents of the file_contents field of the $post data differs.

The php.net manual for the curl_file_create function is unfortunately not very verbose, but the RFC on wiki.php.net tells you a bit more about the new function:

<?php
/**
 * Create CURLFile object
 * @param string $name File name
 * @param string $mimetype Mime type, optional
 * @param string $postfilename Post filename, defaults to actual filename
 */
function curl_file_create($name, $mimetype = '', $postfilename = '')
{}
?>

So you can also define a mime type and some file name that will be sent.

That of course makes it a bit tricky to develop for different platforms, as you want your code to be valid on both PHP 5.4 and PHP 5.6. Therefore you could introduce the following fallback:

<?php
if (!function_exists('curl_file_create'))
{
	function curl_file_create($filename, $mimetype = '', $postname = '')
	{
		return "@$filename;filename="
			. ($postname ?: basename($filename))
			. ($mimetype ? ";type=$mimetype" : '');
	}
}
?>

as suggested by mipa on php.net. This creates a dummy function that is compliant with the old behaviour if curl_file_create does not exist yet.

Auto-Update Debian based systems

Updating System: automatically fixing security issues and installing latest features
SysUpdate in Progress

Updating your OS is obviously super-important. But it’s also quite annoying and tedious, especially if you’re in charge of a number of systems. In about 99% of the cases it’s a monkey’s job, as it just involves variations of

aptitude update
aptitude upgrade

Usually, nothing interesting happens, you just need to wait for the command to finish.

The Problem

The potential consequences in the 1% of cases usually let us swallow the bitter pill and play the monkey. The problem is that in some cases an update modifies a configuration file that you have adjusted. Let’s say you configured a daemon to listen on a specific port of your server, but in the new version the syntax of the config file changed. That can hardly be automatised: leaving the old version of the config will break the software, deploying the new version will discard your settings. Thus, human interaction is required…

At least I do not dare to think about a solution for automatising that. But we could …

Detect the 1% and Automatise the 99%

What do we need to do to prevent the configuration conflict? We need to find out which software will be updated and see whether we modified any of its configuration files:

Update the package list

Updating your system’s package list can be considered safe:

aptitude update

The command downloads a list of available packages from the repositories and compares it with the list of packages installed on your system. Based on that, your update system knows which packages can be upgraded.

Find out which software will be updated

The list of upgradeable packages can be obtained by doing a dry run. The --simulate flag shows us what would be done without touching the system, -y answers every question with yes without human interaction, and -v gives us a parsable list. For example, from a random system:

root@srv » aptitude --simulate -y -v safe-upgrade
The following packages will be upgraded:
  ndiff nmap
2 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 4,230 kB of archives. After unpacking 256 kB will be used.
Inst nmap [6.47-3+b1] (6.47-3+deb8u2 Debian:8.5/stable [amd64])
Inst ndiff [6.47-3] (6.47-3+deb8u2 Debian:8.5/stable [all])
Conf nmap (6.47-3+deb8u2 Debian:8.5/stable [amd64])
Conf ndiff (6.47-3+deb8u2 Debian:8.5/stable [all])

That tells us that new versions of the tools nmap and ndiff will be installed. Capturing that is simple: we basically just need to grep for '^Inst'.
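
Putting it together, the plain list of packages that would be upgraded can be extracted with something like:

aptitude --simulate -y -v safe-upgrade | grep '^Inst' | awk '{print $2}'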

Check if we modified corresponding configuration files

To get the configuration files of a specific package we can ask the dpkg subsystem, for example for a dhcp client:

dpkg-query --showformat='${Conffiles}\n' --show isc-dhcp-client
 /etc/dhcp/debug 521717b5f9e08db15893d3d062c59aeb
 /etc/dhcp/dhclient-exit-hooks.d/rfc3442-classless-routes 95e21c32fa7f603db75f1dc33db53cf5
 /etc/dhcp/dhclient.conf 649563ef7a61912664a400a5263958a6

Every non-empty line contains a configuration file’s name and the MD5 sum of its contents as delivered by the repository. That means we just need to md5sum all those files on our system and compare the hashes to see whether we modified them:

# for every non-empty line (the package name is in $pkg)
dpkg-query --showformat='${Conffiles}\n' --show "$pkg" | grep -v "^$" |
# do the following
while read -r conffile
do
  file=$(echo "$conffile" | awk '{print $1}')
  # the hash of the original content
  exphash=$(echo "$conffile" | awk '{print $2}')
  # the hash of the content on our system
  seenhash=$(md5sum "$file" | awk '{print $1}')
  # do they match?
  if [ "$exphash" != "$seenhash" ]
  then
    # stop the upgrade and notify the admin
    # to manually upgrade the OS
    echo "config file $file of package $pkg was modified -- manual upgrade required" >&2
    exit 1
  fi
done

Now we should have everything we need to compile it into a script that we can give to cron :)

The safe-upgrade script

I developed a tiny tool that can be downloaded from GitHub. It consists of two files:

  • /etc/cron.daily/safeupdatescript.sh is the actual script that does the update and safe-upgrade of the system.
  • /etc/default/deb-safeupgrade can be used to overwrite the settings (hostname, mail address of the admin, etc) for a system. If it exists, the other script will source it.

In addition, there is a Debian package available from my apt repository. Just install it with:

aptitude install bf-safeupgrade

and let me know if there are any issues.

Disclaimer

The mentioned figure of 99% is just a guess and may vary. It strongly depends on your operating system, its version, and the software installed ;-)

mvn: Automagically create a Docker image

Maven conjures Docker images

Having a Docker image of your software project may make things easier for you, but it will for sure lower the barrier for users to use your tools – at least in many cases ;-)

I am developing many tools in Java, using Maven to manage dependencies. Thus, I’ve been looking for a way to generate corresponding Docker images with the very same build management. There are already a few approaches to building Docker images through Maven, e.g. alexec’s docker-maven-plugin, fabric8io’s docker-maven-plugin, and so on – just to name a few. However, all these solutions seem super-heavy, and they require learning new syntax and stuff, while the whole thing is so easy that it doesn’t require any third-party plugins.

Build Docker images using maven-antrun

Maven’s antrun plugin allows for the execution of external commands. That means you just need to maintain a proper Dockerfile along with your sources, and after building the tool with Maven you can call docker build to create a Docker image of the current version of your tool.

I did that for a Java web application. The Dockerfile, stored as a resource in src/main/docker/Dockerfile, is actually very simple:

FROM tomcat:8-jre8
MAINTAINER martin scharm

# remove the default tomcat application
RUN rm -rf /usr/local/tomcat/webapps/ROOT /usr/local/tomcat/webapps/ROOT.war

# add the BiVeS-WebApp as the new default web app
COPY BiVeS-WS-${project.version}.war /usr/local/tomcat/webapps/ROOT.war

Using Maven we can make sure (i) that the Dockerfile is copied next to the compiled and packaged tool in the target directory, (ii) that the placeholder ${project.version} in the Dockerfile is replaced with the current version of your tool, and (iii) that the docker build command is invoked.

Copy the Dockerfile to the right place

Maven’s resources-plugin is ideally suited to deal with resources. To copy all Docker related resources to the target directory you can use the following snippet:

<plugin>
    <artifactId>maven-resources-plugin</artifactId>
    <executions>
        <execution>
            <id>copy-resources</id>
            <phase>validate</phase>
            <goals>
                <goal>copy-resources</goal>
            </goals>
            <configuration>
                <outputDirectory>${basedir}/target</outputDirectory>
                <resources>
                    <resource>
                        <directory>src/main/docker</directory>
                        <filtering>true</filtering>
                    </resource>
                </resources>
            </configuration>
        </execution>
    </executions>
</plugin>

In addition, the <filtering>true</filtering> part makes sure that all Maven-related placeholders are replaced, just like the ${project.version} that we’ve been using. Thus, this solves (i) and (ii), and after the validate phase we’ll have a proper target/Dockerfile.

Build a Docker image

Using Maven’s antrun-plugin we can call the docker tool:

<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
<version>1.6</version>
<executions>
  <execution>
      <phase>deploy</phase>
      <configuration>
          <target>
              <exec executable="docker">
                  <arg value="build"/>
                  <arg value="-t"/>
                  <arg value="binfalse/bives-webapp:${project.version}"/>
                  <arg value="target"/>
              </exec>
          </target>
      </configuration>
      <goals>
          <goal>run</goal>
      </goals>
  </execution>
</executions>
</plugin>

This executes a command like

docker build -t binfalse/bives-webapp:1.6.2 target

after the deploy phase. Thus, it builds a docker image tagged with the current version of your tool. The build’s context is target, so it will use the target/Dockerfile which COPYs the new version of your tool into the image.

Automatically build images using a Maven profile

I created a docker profile in Maven’s configuration file that is active by default if there is a src/main/docker/Dockerfile in your repository:

<profile>
    <id>docker</id>
    <activation>
        <file>
            <exists>src/main/docker/Dockerfile</exists>
        </file>
    </activation>
    <build>
        <plugins>
            <plugin>
                <artifactId>maven-resources-plugin</artifactId>
                <!-- ... see above ... -->
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-antrun-plugin</artifactId>
                <!-- ... see above ... -->
            </plugin>
        </plugins>
    </build>
</profile>

Bonus: Also push the new image to the Docker Hub

To also push the image you need to execute the push command:

docker push binfalse/bives-webapp:1.6.2

And due to Docker’s latest-tag confusion you should also create the latest alias and push it, too:

docker tag -f binfalse/bives-webapp:1.6.2 binfalse/bives-webapp:latest
docker push binfalse/bives-webapp:latest

However, both steps are easy. Just append a few more exec calls to the antrun-plugin! The final pom.xml snippet can be found on GitHub.

Supplement

The image for this article was derived from Wikipedia’s Apache Logo and Wikipedia’s Docker logo, licensed under the Apache License, Version 2.0.

Create an Unscanable Letter

The EURion constellation on a letter

Some time ago I heard about the EURion constellation. Never heard of it? It has nothing to do with stars or astrology. It’s the thing on your money! :)

Take a closer look at your bills and you’ll discover plenty of EURions, as shown in the picture on the right. Just a few inconspicuous dots. So what’s it all about? The EURion constellation is a pattern that imaging software can detect in order to recognize banknotes. It was invented to prevent people from copying money :)

But I don’t know of any law that prohibits using the EURion, so I’ve been playing around with it. It took me a few tries to find the optimal size, but I was able to create a document that includes the EURion. That’s the essential tex code:

wanna scan my letter?
\includegraphics[width=7mm]{EURion.pdf}

The whole environment can be found on GitHub, together with the EURion image and stuff. I also provide the resulting letter.

Of course I immediately asked some friends to try to scan the letter, but it turns out that not all scanners/printers are aware of the EURion… So it’s a bit disappointing, but I learned another thing. Good good. And to be honest, I do not have a good use case. Why should I prevent someone from printing my letters? Maybe photographers can use the EURion in their images. Copyright bullshit or something…

Monitoring of XOS devices

Monitor Relocation of Hardware

This week I developed some plugins for Nagios/Icinga to monitor network devices of the vendor Extreme Networks. All these plugins retrieve status information from, e.g., switches via SNMP.

The Basics: Check Mem, CPU, and Fans

Checking for available memory, the device’s temperature, the power supplies, and the fan states is quite straightforward. You just ask the switch for the values of a few OIDs, evaluate the answer, and tell Nagios/Icinga what to do.

The Simple Network Management Protocol (SNMP) is actually a very easy-to-use protocol. There is an SNMP server, such as a router or a switch, which exposes management data through the SNMP protocol. To access these data you just send an object identifier (OID) to the SNMP server and receive the corresponding value. So-called management information bases (MIBs) tell you what a certain OID stands for.

On the command line, for example, you could use snmpwalk to iterate over an OID subtree to, e.g., obtain information about the memory on a device:

usr@srv $ snmpwalk -v 2c -c publicCommunityString switch.address.com 1.3.6.1.4.1.1916.1.32.2.2.1
1.3.6.1.4.1.1916.1.32.2.2.1.1.1 = Gauge32: 1
1.3.6.1.4.1.1916.1.32.2.2.1.2.1 = STRING: "262144"
1.3.6.1.4.1.1916.1.32.2.2.1.3.1 = STRING: "116268"
1.3.6.1.4.1.1916.1.32.2.2.1.4.1 = STRING: "7504"
1.3.6.1.4.1.1916.1.32.2.2.1.5.1 = STRING: "138372"

The OID 1.3.6.1.4.1.1916.1.32.2.2.1 addresses the memory information table of the SNMP provider at switch.address.com. The value at *.2.1 shows how much memory is installed, *.3.1 shows how much memory is free, *.4.1 shows how much is consumed by the system, and *.5.1 shows how much is consumed by user processes. Basic calculations tell us that there is 262144/1024 = 256 MB of memory in total (the values are reported in KB) and that 100*116268/262144 = 44.35% of it is free. A bit more logic for the warning/critical thresholds and the plugin is done.
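
For illustration, a minimal shell sketch of such a check could look like the following (assuming the OIDs and community string from above; the actual plugins are a bit more elaborate):

# fetch total and free memory and compute the free percentage
total=$(snmpget -v 2c -c publicCommunityString -Oqv switch.address.com 1.3.6.1.4.1.1916.1.32.2.2.1.2.1 | tr -d '"')
free=$(snmpget -v 2c -c publicCommunityString -Oqv switch.address.com 1.3.6.1.4.1.1916.1.32.2.2.1.3.1 | tr -d '"')
echo "memory free: $((100 * free / total)) %"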

The Feature: Monitoring of the FDB

But I would probably not write about that basic stuff if there wasn’t an extra feature! I implemented a script to also monitor the FDB. FDB is an abbreviation for forwarding database: the switch maintains a forwarding database (FDB) of all MAC addresses received on all of its ports. It uses the information in this database, for example, to decide whether a frame should be forwarded or filtered. Each entry consists of

  • the MAC address of the device behind the port
  • the associated VLAN
  • the age of the entry – depending on the configuration the entries age out of the table
  • some flags – e.g. is the entry dynamic or static
  • the port

The table may look like the following:

> show fdb
Mac                     Vlan       Age  Flags         Port / Virtual Port List
------------------------------------------------------------------------------
01:23:45:67:89:ab    worknet(0060) 0056 n m           9
01:23:42:67:89:ab     mobnet(0040) 0001 n m           21

Flags : d - Dynamic, s - Static, p - Permanent, n - NetLogin, m - MAC, i - IP,
        x - IPX, l - lockdown MAC, L - lockdown-timeout MAC, M- Mirror, B - Egress Blackhole,
        b - Ingress Blackhole, v - MAC-Based VLAN, P - Private VLAN, T - VLAN translation,
        D - drop packet, h - Hardware Aging, o - IEEE 802.1ah Backbone MAC,
        S - Software Controlled Deletion, r - MSRP

As soon as the switch receives a frame on a port, it learns the corresponding MAC address, port number, etc. into this table. So if a frame for this MAC address arrives, it knows where to send it.

However, that’s the content of a networking class. All we need to know is that a switch can tell you which device (MAC address) is connected to which port. And that’s the idea of check_extreme_fdb.pl! It compares the entries of the FDB with some expected entries in a CSV file. The CSV is supposed to contain three columns:

mac,port,vlan

If a MAC address in the FDB matches a MAC address in the CSV file, the script checks the corresponding ports and VLANs. If those do not match, it will raise an error.

For the CSV: feel free to leave port or vlan empty if you do not care about that detail. That means, if you just want to make sure that the device with the MAC 01:23:45:67:89:ab is in VLAN worknet, you add an entry such as:

01:23:45:67:89:ab,,worknet

Use -e <FILE> to pass the CSV file containing the expected entries to the program and call it like this:

perl -w check_extreme_fdb.pl -s <SWITCH> -C <COMMUNITY-STRING> -e <EXPECTED>

Here, SWITCH is the switch’s address and COMMUNITY-STRING is the SNMP “passphrase”. You may also want to add -w to raise a warning if one of the entries in the CSV file wasn’t found in the FDB. To create a sample CSV file that matches the current FDB you can call the script with --print.

To get the script have a look at the check_extreme_fdb.pl software page.

More Extreme Stuff

In addition, there are some other scripts to monitor Extreme Networks devices.

Do I have a CD-RW?

You don’t know whether the CD drive in your machine is able to burn CDs? And you’re too lazy to crawl under your table to have a look? Or you’re remote on the machine? Then this is your command line:

$ cat /proc/sys/dev/cdrom/info
CD-ROM information, Id: cdrom.c 3.20 2003/12/17

drive name:             sr0
drive speed:            32
drive # of slots:       1
Can close tray:         1
Can open tray:          1
Can lock tray:          1
Can change speed:       1
Can select disk:        0
Can read multisession:  1
Can read MCN:           1
Reports media changed:  1
Can play audio:         1
Can write CD-R:         1
Can write CD-RW:        1
Can read DVD:           1
Can write DVD-R:        1
Can write DVD-RAM:      1
Can read MRW:           1
Can write MRW:          1
Can write RAM:          1

Works on Debian based systems :)
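
And if you only care about a single capability, grep gives you the relevant line directly:

$ grep -i "can write cd-rw" /proc/sys/dev/cdrom/info
Can write CD-RW:        1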

Docker Jail for Skype

A jail for skype powered by Docker!

As I’m now permanently installed at our University (yeah!) I will probably need to use skype more often than desired. However, I still try to avoid proprietary software, and skype is the worst of all. Skype is an

obfuscated malicious binary blob with network capabilities

as jvoisin beautifully put it. I have come in contact with skype multiple times and it was always a mess. Ok, but what are the options if I need skype? So far I’ve been using a VirtualBox VM whenever I needed to call somebody who insisted on using skype, but now that I’ll be using skype more often I need an alternative to running a second OS on my machine. My friend Tom meant to make a joke about using Docker and … TA-DAH! … it turns out it’s actually possible to jail a usable skype inside a Docker container! Guided by jvoisin’s article Running Skype in docker I created my own setup:

The Dockerfile

The Dockerfile is available from the skype-on-docker project on GitHub. Just clone the project and change into the directory:

$ git clone https://github.com/binfalse/skype-on-docker.git
$ cd skype-on-docker
$ ls -l
total 12
-rw-r--r-- 1 martin martin   32 Jan  4 17:26 authorized_keys
-rw-r--r-- 1 martin martin 1144 Jan  4 17:26 Dockerfile
-rw-r--r-- 1 martin martin  729 Jan  4 17:26 README.md

The Docker image is based on debian:stable. It will install an OpenSSH server (exposing port 22) and download the skype binaries. It will also install the authorized_keys file in the home directories of root and the unprivileged user. Thus, to be able to connect to the container you need to copy your public SSH key into that file:

$ cat ~/.ssh/id_rsa.pub >> authorized_keys

Good so far? Ok, then go for it! Build a docker image:

$ docker build -t binfalse/skype .

This might take a while. Docker will execute the commands given in the Dockerfile and create a new Docker image with the name binfalse/skype. Feel free to choose a different name. As soon as that’s finished you can instantiate and run a new container using:

$ docker run -d -p 127.0.0.1:55757:22 --name skype_container binfalse/skype

This will start the container as a daemon (-d) with the name skype_container (--name skype_container) and the host’s port 55757 mapped to the container’s port 22 (-p 127.0.0.1:55757:22). Give it a moment to come up and then you should be able to connect to the container via SSH. From that shell you should be able to start and configure skype:

$ ssh -X -p 55757 docker@127.0.0.1

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Mon Jan  4 23:07:37 2016 from 172.17.42.1
$ skype

You can immediately go and do your chats and stuff, but you can also just configure skype. Set up everything just like you want to find it when starting skype, for example tick the auto-login button to get rid of the login screen etc. As soon as that’s done, commit the changes to build a new image reflecting your preferences:

$ docker commit skype_container binfalse/deb-skype

Now you’ll have an image called binfalse/deb-skype that contains a fully configured skype installation. Just kill the other container:

$ docker stop skype_container
$ docker rm skype_container

And now your typical workflow might look like:

docker run -d -p 127.0.0.1:55757:22 --name skype__ binfalse/deb-skype
sleep 1
ssh -X -p 55757 docker@127.0.0.1 skype && docker rm -f skype__

Feel free to cast it into a small script just as I did. The script is also available from my apt repo; its name is bf-skype-on-docker:

echo "deb http://apt.binfalse.de binfalse main" > /etc/apt/sources.list.d/binfalse.list
apt-get update && apt-get install bf-skype-on-docker

Getting into a new group

You know, … you just got this new floppy disk with very important material, but you cannot access it because you’re not in the system’s floppy group and, thus, you’re not allowed to access the floppy device. The solution is easy: add your current user to the floppy group! Sounds easy, doesn’t it? The annoying thing is that those changes won’t take effect in the current session. You need to log out and log in again – quite annoying, especially if you’re in the middle of something with lots of windows and stuff. Just happened to me with docker again…

However, there are two methods to get into the new groups without the need to kill the current session:

  • su yourself: let’s say your username is myname; you just need to su myname to get a prompt with the new group memberships.
  • ssh localhost: that also gives you a new session with updated affiliations.

That way, you do not need to start a new session. However, you still need to start all applications/tools from that terminal – might be odd for those who are used to the GNOME/KDE menus… :)

Supplemental material

Display group membership:

 groups USERNAME

Add a new system group:

 groupadd GROUPNAME

Add a user to a group:

 usermod -a -G GROUPNAME USERNAME