KDE file type actions

Annoyingly, KDE’s PDF viewer Okular always opened links to websites in an editor, presenting me the HTML source code. But I just figured out how to change this behavior.

kcmshell4 dialog to configure filetype-application mappings

KDE maintains a central configuration defining what to do with certain file types. Unfortunately, in my case the default application for HTML files was an editor (kwrite/kate). I don’t know who or what defined this stupid behavior, but there is a tool called kcmshell4 to edit the KDE configuration. To edit the filetype-application mapping, hand it the parameter filetypes :

usr@srv % kcmshell4 filetypes

You’ll get a dialog to define a mapping for each known file type. In my case I had to configure Okular to open links to HTML pages with Konqueror. Hope that helps you save some time ;-)
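By the way, if you’re not sure which MIME type a given file actually has (and therefore which entry to look for in the dialog), you can ask the `file` tool; a small sketch (the sample file is made up):

```shell
# create a sample HTML file and ask for its MIME type
echo '<html><body>hi</body></html>' > /tmp/page.html
# -b: brief output, just the type (prints something like text/html)
file --mime-type -b /tmp/page.html
```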

encfs: transparent crypto overlay

encfs is a cryptographic file system (encfs-website). The principle is very simple: you “bind-mount” one directory (containing the encrypted data) to another directory (where it appears unencrypted). Thus, you can share the encrypted data and nobody but you can read it. This makes the system an excellent fit for cloud services like Dropbox, which lure you with some space in the cloud “for free”, but want you to share your private data with them. In the following paragraphs we’ll see how to set up encfs for Dropbox, but let’s first take a look at encfs itself.


First of all you have to install encfs. Assuming you’re sitting in front of a Debian-based OS:

root@abakus ~ # aptitude install encfs

Since encfs is FUSE-based, the user who wants to use encfs has to be a member of the group fuse . You may use the groups command to make sure you belong to fuse :

martin@abakus ~ % groups
martin mail fuse

If you’re not yet a member of that group, edit the /etc/group file or use the adduser command (e.g. adduser martin fuse ). To apply the changes you need to re-login or use newgrp (man newgrp).
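If you want to check this in a script rather than by eyeballing the output of groups, here is a small sketch using `id` (the group name fuse is the standard Debian one):

```shell
# check whether the current user belongs to the "fuse" group
if id -nG | grep -qw fuse; then
    echo "fuse: ok"
else
    echo "fuse: you still need to add yourself to the group"
fi
```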

That’s it, now the way to use encfs is paved. Let’s say we want to work with our data in /dir/clear , while everything is stored encrypted in /dir/crypt . Setting up this environment is quite simple, just call encfs [crypt-dir] [clear-dir] :

martin@abakus ~ % encfs /dir/crypt /dir/clear
Creating new encrypted volume.
Please choose from one of the following options:
 enter "x" for expert configuration mode,
 enter "p" for pre-configured paranoia mode,
 anything else, or an empty line will select standard mode.

Give it a p and choose a password. This command will create an encrypted container in /dir/crypt and immediately mount it to /dir/clear . Feel free to create some files in /dir/clear (your new working directory) and compare this directory with /dir/crypt . You’ll see that you are not able to understand the files in /dir/crypt , unless you’re a genius or the setup failed. Thus, it’s no problem if anyone else has access to the contents of /dir/crypt .

To unmount the clear data use fusermount -u /dir/clear . To remount it just call encfs /dir/crypt /dir/clear again; it will simply ask you for the password to decrypt the data.

Of course it’s not very convenient to mount the directory manually every time, hence there is a way to automount your encfs directories on login. You need to install libpam-mount and libpam-encfs :

root@abakus ~ # aptitude install libpam-mount libpam-encfs

To automatically mount an encfs on login the password for the crypt-fs has to be the same as the password for your user account! If that’s the case, add a line like the following to /etc/security/pam_mount.conf.xml :

    <volume user="martin" fstype="fuse" path="encfs#/dir/crypt" mountpoint="/dir/clear" />

On your next login this directory will automatically be mounted. Very smart!

Using encfs for the cloud

Ok, let’s get to the main reason for this article, winking towards Norway ;) . As you might have heard, there are some file hosting services out there, like Dropbox or Ubuntu One. They provide some space in the cloud which can be mounted on different devices, so that your data is synced between your systems. Unfortunately, most of these services want to read your data. E.g. the Dropbox system wants to store a file used by multiple users only once. Even if they assure that nobody is able to read your private data, you all know that this is nearly impossible to guarantee! Hence, I strongly recommend not to push critical/private files to these kinds of providers unprotected.

But, tada, you’ve just learned how to sync your files using the cloud while keeping them private! Let’s assume the directory /home/martin/Dropbox is monitored by Dropbox. You just need to create two more directories, like /home/martin/Dropbox/private (for encrypted files to be synced) and /home/martin/Dropbox-decrypt (for the decrypted view). Mount /home/martin/Dropbox/private to /home/martin/Dropbox-decrypt using encfs and work in /home/martin/Dropbox-decrypt . As explained above, feel free to set up an automount using PAM ;-) Everything in /home/martin/Dropbox but not in the private directory will be synced unencrypted, so you won’t lose the opportunity to share some open data with [whoever] e.g. via web browser.
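Summed up, the whole setup boils down to a few commands; a sketch assuming your Dropbox lives in ~/Dropbox (adjust the paths to your installation):

```shell
# the directory Dropbox syncs; "private" will hold the encrypted data
mkdir -p ~/Dropbox/private
# the decrypted view; note that it lives outside the synced tree
mkdir -p ~/Dropbox-decrypt
# encfs asks for a password, so only run it from an interactive shell
[ -t 0 ] && encfs ~/Dropbox/private ~/Dropbox-decrypt \
  || echo "run encfs from an interactive shell"
```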

Of course, this method comes with some drawbacks:

  • It is a bit more work to set up every client before you can start working with your private data (fortunately the overhead is kept within reasonable limits)
  • You cannot access these files through the web browser, or using your mobile phone (unless your phone comes with encfs-support)

All in all, you need to decide on your own how much you trust Dropbox (and the like) and which kind of data you commit to Dropbox unencrypted.

Sync the clock w/o NTP

The network time protocol (NTP) is a really smart and useful protocol to synchronize the time of your systems, but even in two-thousand-whatever there are reasons why you might need to look for alternatives...

You may now have some kind of »what the [cussword of your choice]« in mind, but I have just been in an ugly situation: all UDP traffic is dropped and I don't have permission to adjust the firewall. And you might have heard about the consequences of time differences between servers. Long story short, there is a good solution to sync the time via TCP, using the Time Protocol and a tool called `rdate` .
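A short digression on why this works so simply: the Time Protocol (RFC 868) just returns a 32-bit integer counting the seconds since 1900-01-01 UTC, so converting it to Unix time (epoch 1970) only needs a fixed offset, which we can derive ourselves:

```shell
# seconds between the RFC 868 epoch (1900) and the Unix epoch (1970):
# 70 years, 17 of them leap years -> (70*365 + 17) days
echo $(( (70 * 365 + 17) * 24 * 3600 ))   # 2208988800
```

That constant, 2208988800, is exactly what tools like rdate subtract from the server's answer.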

Time Master

First of all you need another server with a correct time (e.g. NTP-synced), which can be reached on port 37. Let's call this server `$MASTER` . To enable the Time Protocol on `$MASTER` you have to enable the time service in (x)inetd. For instance, to enable the TCP service for a current `xinetd` you could create a file `/etc/xinetd.d/time` with the following contents:

service time
{
    disable     = no
    type        = INTERNAL
    id          = time-stream
    socket_type = stream
    protocol    = tcp
    user        = root
    wait        = no
}
Such a file may already exist, so you just have to change the value of the `disable` key to `no` . Still using plain inetd? I'm sure you'll find your way to enable the time service on your system :)

Time Slave

On the client, which is not allowed to use NTP (wtfh!?), you need to install `rdate` :

aptitude install rdate

Just call the following command to synchronize the time of the client with `$MASTER` :

rdate $MASTER

Since `rdate` immediately corrects the time of your system you need to be root to run this command.

Finally, to readjust the time periodically you might want to install a cronjob. Being root, call `crontab -e` to edit root's crontab and append a line like the following:

# m     h       dom     mon     dow     command
0       */6     *       *       *       [ -x /usr/bin/rdate ] && /usr/bin/rdate $MASTER >> /dev/null

This will synchronize the time of your client with the time of `$MASTER` every six hours. (Don't forget to substitute `$MASTER` using your desired server IP or DNS.)


Last but not least I want you to be aware that this workaround just keeps the difference in time between both systems below 0.5 secs. Compared to NTP that's beyond all doubt very poor. Nevertheless, 0.5 secs of drift is much better than several minutes or even hours!

If it is also not permitted to speak to port 37 you need to tunnel your connections, or you have to tell the time server to listen on another, more common port (e.g. 80, 443, or 993), as long as it isn't already allocated by another service.

Bash Wildcards

I wanted to publish this summary about wildcards in the bash (and similar shells) some time ago, but didn’t finish it. Now it finally gets published.

The shell handles words or patterns containing a wildcard as a template. Available filenames are tested to see if they fit this template. This evaluation is also called globbing. Let’s have a look at a small example:

me@kile /tmp/blog $ ls
aaa   aaa2  aaaa1  aaaa3  aaaa5  abbb  bbbb
aaa1  aaa3  aaaa2  aaaa4  aaab   acdc  caab
me@kile /tmp/blog $ ls *b
aaab  abbb  bbbb  caab

In this example * is replaced by appropriate characters, and the list of matching files is passed to the ls command. This set of files will be used in the following examples.

Encode for a single character: `?`

The question mark can be replaced by a single character. So if you want to get the files aaa1 , aaa2 , aaa3 and aaab you can use the following pattern:

me@kile /tmp/blog $ ls aaa?
aaa1  aaa2  aaa3  aaab

So you see, the ? is replaced by exactly one character. That is, both aaa and aaaa1 won’t match.

Encode for an arbitrary number of characters: `*`

To match any number of characters you can use the asterisk * . It can replace 0 to n characters, where n is only limited by the maximum length of the file name and depends on the file system you’re using. Adapting the previous snippet you’ll now also get aaa and aaaa1 :

me@kile /tmp/blog $ ls aaa*
aaa  aaa1  aaa2  aaa3  aaaa1  aaaa2  aaaa3  aaaa4  aaaa5  aaab

Encode for a set of characters: `[...]`

Most of the common tasks can be done with the previous templates, but there are cases when you need to define the characters that should be replaced. You can specify this set of characters using brackets, e.g. [3421] can be replaced by 3 , 4 , 2 or 1 and is the same as [1-4] :

me@kile /tmp/blog $ ls aaaa?
aaaa1  aaaa2  aaaa3  aaaa4  aaaa5
me@kile /tmp/blog $ ls aaaa[3421]
aaaa1  aaaa2  aaaa3  aaaa4
me@kile /tmp/blog $ ls aaaa[1-4]
aaaa1  aaaa2  aaaa3  aaaa4

As you can see, aaaa5 doesn’t match [3421] , and btw. the order of the specified characters doesn’t matter. And because it would be very annoying to match against any alphabetic character this way (you would need to type all 26 characters), you can specify character ranges using a hyphen ( a-z ). Here are some examples:

Template        Character set
`[xyz1]`        `x` , `y` , `z` or `1`
`[C-Fc-f]`      `C` , `D` , `E` , `F` , `c` , `d` , `e` or `f`
`[a-z0-9]`      Any lowercase character or digit
`[^b-d]`        Any character except `b` , `c` , `d`
`[Yy][Ee][Ss]`  Case-insensitive matching of yes
`[[:alnum:]]`   Alphanumeric characters, same as `A-Za-z0-9`
`[[:alpha:]]`   Alphabetic characters, same as `A-Za-z`
`[[:digit:]]`   Digits, same as `0-9`
`[[:lower:]]`   Lowercase alphabetic characters, same as `a-z`
`[[:upper:]]`   Uppercase alphabetic characters, same as `A-Z`
`[[:space:]]`   Whitespace characters (space, tab, etc.)

Btw. the files that match such a template are sorted before they are passed to the command.
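The classes from the table work directly on our sample files; a small sketch (re-creating the files in /tmp/blog first):

```shell
# recreate the sample files from the introduction
mkdir -p /tmp/blog && cd /tmp/blog
touch aaa aaa1 aaa2 aaa3 aaaa1 aaaa2 aaaa3 aaaa4 aaaa5 aaab abbb acdc bbbb caab
# everything ending in a digit:
ls -d *[[:digit:]]   # aaa1 aaa2 aaa3 aaaa1 aaaa2 aaaa3 aaaa4 aaaa5
# everything NOT starting with an "a":
ls -d [^a]*          # bbbb caab
```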

Validating XML files

In the scope of different projects I often have to validate XML files. Here is my solution to validate XML files in Java using a schema.

First of all, to validate XML files in Java you need to create a SchemaFactory for the W3C XML Schema language and compile the schema (let’s assume it’s located in /path/to/schema.xsd ):

SchemaFactory factory = SchemaFactory.newInstance ("http://www.w3.org/2001/XMLSchema");
Schema schema = factory.newSchema (new File ("/path/to/schema.xsd"));

Now you’re able to create a validator from the schema.

Validator validator = schema.newValidator ();

In order to validate an XML file you have to read it (let’s assume it’s located in /path/to/file.xml ):

Source source = new StreamSource (new File ("/path/to/file.xml"));

Last but not least you can validate the file:

try
{
	validator.validate (source);
	System.out.println ("file is valid");
}
catch (SAXException e)
{
	System.out.println ("file is invalid:");
	System.out.println (e.getMessage ());
}

Note that validate may additionally throw an IOException; here it is assumed to be declared by the surrounding method.
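If you just need a quick check from the command line instead of Java, libxml2's xmllint does the same job. A small sketch with a made-up minimal schema and instance, just to show the call:

```shell
# a minimal example schema declaring a single string element
cat > /tmp/schema.xsd <<'EOF'
<?xml version="1.0"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="note" type="xs:string"/>
</xs:schema>
EOF
# a matching example instance
cat > /tmp/file.xml <<'EOF'
<?xml version="1.0"?>
<note>hello</note>
EOF
# --noout suppresses the tree dump; the exit code signals valid/invalid
if command -v xmllint >/dev/null; then
    xmllint --noout --schema /tmp/schema.xsd /tmp/file.xml && echo "file is valid"
fi
```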
Download: Java: XMLValidator.java

HowTo Debug Bash Scripts

Even shell scripts may get very complex, so it is helpful to know how to debug them.

Let’s explain it with a small example:

#!/bin/bash

echo lets go

# some comment
DIR=/boot
/bin/ls -l $DIR | /bin/grep initrd  | wc -l

echo done

Executing it you’ll get an output like this:

usr@srv /tmp % bash test.sh
lets go

To debug the execution of scripts the bash provides a debugging mode. Use the option -x to trace the execution:

usr@srv /tmp % bash -x test.sh
+ echo lets go
lets go
+ DIR=/boot
+ wc -l
+ /bin/grep initrd
+ /bin/ls -l /boot
+ echo done

So you see, every line that is executed at runtime will be printed with a leading + ; comments are ignored. There is another option -v to enable verbose mode. In this mode each line read by the bash is printed before it is executed:

usr@srv /tmp % bash -v test.sh

#!/bin/bash

echo lets go
lets go

# some comment
DIR=/boot
/bin/ls -l $DIR | /bin/grep initrd  | wc -l

echo done

Of course you can combine both modes, so the script is sequentially printed and the commands are traced:

usr@srv /tmp % bash -vx test.sh

#!/bin/bash

echo lets go
+ echo lets go
lets go

# some comment
DIR=/boot
+ DIR=/boot
/bin/ls -l $DIR | /bin/grep initrd  | wc -l
+ /bin/ls -l /boot
+ wc -l
+ /bin/grep initrd

echo done
+ echo done

These modes will help you find errors. To modify the output of the tracing mode you may configure the PS4 variable:

export PS4='+${BASH_SOURCE}:${LINENO}:${FUNCNAME[0]}: '

This will additionally print the file name of the executed script, the line number of the command being executed and the respective function name:

usr@srv /tmp % export PS4='+${BASH_SOURCE}:${LINENO}:${FUNCNAME[0]}: '
usr@srv /tmp % bash -x test.sh
+test.sh:3:: echo lets go
lets go
+test.sh:6:: DIR=/boot
+test.sh:7:: /bin/ls -l /boot
+test.sh:7:: /bin/grep initrd
+test.sh:7:: wc -l
+test.sh:9:: echo done

If you don’t want to trace a whole script you can enable/disable tracing from within the script:

# [...]
echo no tracing
set -x
echo trace me
set +x
echo no tracing
# [...]

This will result in something like:

usr@srv /tmp % bash test.sh
no tracing
+test.sh:14:: echo trace me
trace me
+test.sh:15:: set +x
no tracing

It is of course also possible to enable/disable verbose mode inside the script with set -v and set +v , respectively.
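One more switch worth knowing in this context is -n, which makes the bash parse the script without executing anything, so you can catch syntax errors before a potentially destructive run; a small sketch:

```shell
# a tiny throwaway script for demonstration
cat > /tmp/test.sh <<'EOF'
#!/bin/bash
echo lets go
echo done
EOF
# -n: read commands but do not execute them (syntax check only)
bash -n /tmp/test.sh && echo "syntax OK"
```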

Absolute Path of a Servlet Installation

I’m currently developing some Java servlets and one of the tasks is to create images dynamically. But where to store them so that they are accessible for users?

If you want to show the user for example a graph of some stuff that changes frequently you need to generate the image dynamically. The rendering of the graphic is one thing, but where to store the picture so that the visitor can access it from the web?

There were many options to try, and I found that getServletContext().getRealPath (".") from the ServletContext was the result I’ve been looking for. So to spare you the tests I’ll provide the different options (download):

Let’s assume your webapps directory is /var/lib/tomcat6/webapps/ , your servlet context is project and the user requests the servlet test ; then the output probably looks like:

new File (".").getAbsolutePath () => /var/lib/tomcat6/.
request.getPathInfo () => null
request.getPathTranslated () => null
request.getContextPath () => /project
request.getRealPath (request.getServletPath ()) => /var/lib/tomcat6/webapps/project/test
request.getServletPath () => /test
getServletContext ().getContextPath () => /project
getServletContext ().getRealPath (".") => /var/lib/tomcat6/webapps/project/.

That’s it for the moment ;-)

Download: Java: ServletTest.java

MFC-9120CN Setup

I just bought a new printer, the Brother MFC-9120CN. It’s also able to scan, to copy documents and to send them by fax. Since the installation instructions are Win/Mac-only I’ll shortly explain how to set up the device in a Linux environment.

My new MFC-9120CN

Decision for this printer

First of all I was searching for a printer that is in any case compatible with Linux systems. You might also have experience with this driver f$ckup, or at least have heard about it. The manufacturers often only provide drivers for Win or Mac, so you generally get bugged if you want to integrate those peripherals into your environment. The MFC-9120CN scores at this point. It is able to print and scan via network. Drivers for the printer are available, and the scanned documents can be sent to any FTP server. So you don’t need special drivers for scanning, just set up a small FTP server. This model is also very cheap compared to other color-laser MFPs, and with the ADF it completely matches my criteria.


I already noticed some disadvantages. One is the speed, the printer is somewhat slow. Since I’m not printing thousands of pages that’s more or less minor to me, but you should be aware of it. Another issue is that the device forgets the date if it is turned off for a while. And the printer is a bit too noisy.


The printer comes with a large user manual (>200 pages). It explains the setup of the fax functionality well, but the installation of the network printer and scanner is only described for Win/Mac, so I’ll give you a small how-to for your Linux systems.

Network Setup

To use this device via network you have to connect it to a router. It should be able to request an IP via DHCP, but if you don’t provide a DHCP server you need to configure the network manually (my values are in parentheses):

  • IP: menu->5->1->2 ( )
  • Netmask: menu->5->1->3 ( )
  • Gateway: menu->5->1->4 ( )

If this is done you should be able to ping the printer:

usr@srv % ping
PING ( 56(84) bytes of data.
64 bytes from icmp_req=1 ttl=255 time=0.306 ms

If you browse to this IP using your web browser you’ll find a web interface for the printer. We’ll need this website later on.

Printer Setup

Big thanks to the CUPS project, it’s very easy to set up the network printer! If you haven’t installed CUPS yet, do it now:

aptitude install cups foomatic-db

Just browse to your CUPS server (e.g. http://localhost:631 if it is installed on your current machine) and install a new printer via Administration->Add Printer (you need to be root). Recent CUPS versions will detect the new printer automatically and you’ll find it in the list of discovered network printers. Just give it a name and some description, select a driver (I’m using Brother MFC-9120CN BR-Script3 (color, 2-sided printing)) and you’re done! Easy, isn’t it!? ;-) For those of you who have an older version of CUPS: the URI of my printer is dnssd://Brother%20MFC-9120CN._printer._tcp.local/ .

Scanner Setup

As explained above, the printer is able to send scanned documents to an FTP location. That is, there is no need for a scanner driver! Just install a small FTP server; I decided on ProFTPd:

aptitude install proftpd-basic

Make sure that /etc/proftpd/proftpd.conf contains the following lines:

DefaultRoot ~
RequireValidShell off
AuthOrder mod_auth_file.c  mod_auth_unix.c
AuthUserFile /etc/proftpd/ftpd.passwd
AuthPAM off

and create a new virtual FTP user:

ftpasswd --passwd --name YourPrinter --uid 10001 --home /PATH/TO/FILES --shell /bin/false

You will be asked for a password. The scanned documents will be stored in /PATH/TO/FILES . This command creates a file ftpd.passwd ; move it to /etc/proftpd/ if you didn’t execute the command in that directory. Then restart ProFTPd:

/etc/init.d/proftpd restart

You should be able to connect to your FTP server:

usr@srv % ftp localhost
Connected to localhost.
220 ProFTPD 1.3.4a Server (Debian) [::ffff:]
Name (localhost:you): YourPrinter
500 AUTH not understood
500 AUTH not understood
SSL not available
331 Password required for printer
230 User printer logged in
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> ls
200 PORT command successful
150 Opening ASCII mode data connection for file list
226 Transfer complete
ftp> quit
221 Goodbye.

If that was successful, let’s configure the scanner to use this FTP account. Use your web browser to open the interface of the printer and go to Administrator Settings->FTP/Network Scan Profile (you have to authenticate; the default login is admin and the password is access). Here you’ll find 10 different profiles that can be configured. Click for example on Profile Name 1 and modify the profile:

  • Host Address: The IP of the FTP server (e.g. )
  • Username: The username of the virtual FTP user you’ve created (e.g. YourPrinter )
  • Password and Retype Password: The password of the virtual FTP user
  • Store Directory: /

If you submit these values you’ll be able to scan to your FTP server. Just give it a try! ;-)

Additional Notes

I recommend configuring your firewall to drop all packets from your printer that try to leave your own network.

Conditionally autoscroll a JScrollPane

I’m currently developing some GUI stuff and was wondering how to let a JScrollPane scroll automatically if it’s already at the bottom and the size of its content increases.

For example, if you use a JTextArea to display some log or whatever, then it would be nice if the scroll bars moved down while new messages are produced, but it shouldn’t scroll down when the user just scrolled up to read a specific line. Scrolling down to the end of a JTextArea can be done by just setting the caret to the end of the text:

JTextArea log = new JTextArea (20, 20);
log.setEditable (false);
JScrollPane scroller = new JScrollPane ();
scroller.setViewportView (log);

// [...]

log.append ("your message");
log.setCaretPosition (log.getDocument ().getLength ());

But we first want to check whether the scroll bar is already at the bottom, and only in that case should it automatically scroll down to the new bottom when another message is inserted. To obtain the position data of the vertical scroll bar one can use the following code:

JScrollBar vbar = scroller.getVerticalScrollBar ();

// get the current position
int currentPosition = vbar.getValue ();

// getMaximum () gives maximum + extent.
int maxPosition = vbar.getMaximum () - vbar.getVisibleAmount ();

if (currentPosition == maxPosition)
{
	// in this case we want to scroll after the new text is appended
}
Unfortunately log.append ("some msg") won’t update the component immediately, so the size of the text area will not necessarily have changed before we ask for the new maximum position. To avoid reading a stale maximum value one can schedule the scroll event:

private void logText (String text)
{
	final JScrollBar vbar = scroller.getVerticalScrollBar ();
	// is the scroll bar at the bottom?
	boolean end = vbar.getMaximum () == vbar.getValue () + vbar.getVisibleAmount ();
	// append some new text to the text area
	// (or do something else that increases the contents of the JScrollPane)
	log.append (text + "\n");
	// if the scroll bar was already at the bottom we schedule
	// a new scroll event to again scroll to the bottom
	if (end)
		EventQueue.invokeLater (new Runnable ()
		{
			public void run ()
			{
				EventQueue.invokeLater (new Runnable ()
				{
					public void run ()
					{
						vbar.setValue (vbar.getMaximum ());
					}
				});
			}
		});
}

As you can see, a new event is put into the EventQueue, and this event is told to put another event into the queue that will finally do the scrolling. Correct, that’s a bit strange, but the Swing stuff is very lazy and it might take a while until the new maximum position of the scroll bar is calculated after the whole GUI is re-validated. So this makes sure that our event definitely happens after all dependent Swing events are processed.


galternatives: a GUI for Debian’s alternatives system

Some days ago I discovered galternatives, a GNOME tool to manage the alternatives system of Debian/Ubuntu. It’s really smart I think.

For example, to update the default editor for your system you need to update the alternatives system via:

update-alternatives --set editor /usr/bin/vim

There is also an interactive version available:

update-alternatives --config editor

To see the available browsers you can run:

update-alternatives --list x-www-browser

However, the alternatives system is a nice idea I think, but it’s a bit confusing sometimes. And installing a new group or adding another entry to an existing group is pretty complicated and requires information from multiple other commands beforehand.

With galternatives you get a graphical interface to manage all these things. That really brings light into the dark! Just install it via:

aptitude install galternatives

You’ll be astonished if you give it a try! ;-)