Change Title of moderncv Document

Once again I had to prepare a CV for an application. I’m using the moderncv package to create the CV, and I was always bothered by the title of the document. Today I spent some time to fix that.

Screenshot: moderncv produces an ugly title

Using moderncv you can produce really fancy CVs with very little effort. But unfortunately, by default it produces an ugly title (see the screenshot, taken from Okular). As you can see, there is some character that cannot be displayed by certain tools.

I guess most of my “CV-reviewers” don’t care about this little issue, if they recognize it at all, but it bothers me whenever I have to create a resumé. I already tried to override it using the hyperref package, but wherever I put the statement it seems to have no effect.

However, since moderncv is open source (yeah! love it) I took a look at the code to see how they produce the title. It was quite easy to find the relevant statement (in my case /usr/share/texlive/texmf-dist/tex/latex/moderncv/moderncv.cls:96 , texlive-latex-extra@2012.20120611-2):

\AtEndPreamble{
  \@ifpackageloaded{CJK}
    {\RequirePackage[unicode]{hyperref}}
    {\RequirePackage{hyperref}}
    \hypersetup{
      breaklinks,
      baseurl       = http://,
      pdfborder     = 0 0 0,
      pdfpagemode   = UseNone,% do not show thumbnails or bookmarks on opening
      pdfpagelabels = false,% to avoid a warning setting it automatically to false anyway, because hyperref detects \thepage as undefined (why?)
      pdfstartpage  = 1,
      pdfcreator    = {\LaTeX{} with `moderncv' package},
%      pdfproducer   = {\LaTeX{}},% will/should be set automatically to the correct TeX engine used
      bookmarksopen = true,
      bookmarksdepth= 2,% to show sections and subsections
      pdfauthor     = {\@firstname{}~\@familyname{}},
      pdftitle      = {\@firstname{}~\@familyname{} -- \@title{}},
      pdfsubject    = {Resum\'{e} of \@firstname{}~\@familyname{}},
      pdfkeywords   = {\@firstname{}~\@familyname{}, curriculum vit\ae{}, resum\'{e}}}
  \pagenumbering{arabic}% has to be issued after loading hyperref
}

As expected, the pdftitle contains a double hyphen that is converted by LaTeX to a dash, which is apparently a problem for some programs. To fix this issue you could modify this file with root permissions, but that’s of course messy. Better add something like the following to the end of your document’s preamble:

\AtEndPreamble{
\hypersetup{pdftitle={Your New Title}}
}

This will override the broken output of the package.
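
To double-check the result you can inspect the metadata of the compiled PDF, for example with pdfinfo from poppler-utils (assuming the tool is installed and your document is called cv.pdf):

usr@srv % pdfinfo cv.pdf | grep Title
Title:          Your New Title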

Check if certain Port is Open

Just needed to find out whether something listens on a certain TCP port of a particular host.

Here is my workaround using Perl:

use IO::Socket::INET;

# try to open a TCP connection; $sock stays undefined if that fails
my $sock = IO::Socket::INET->new (
	PeerAddr => "1.2.3.4",
	PeerPort => 1337,
	Proto => "tcp",
	Timeout => 1
);
print "closed\n" if !defined $sock;

Works at least for me. Any concerns or better solutions?
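
By the way, a similar check can be done straight from the shell, for example with netcat’s port-scan mode (assuming nc is installed; host and port are the placeholders from above):

usr@srv % nc -z -w 1 1.2.3.4 1337 && echo open || echo closed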

The Password Dilemma

Earlier this week I had a very small conversation with Pedro Mendes on twitter (well, in terms of twitter it might be a long discussion). It was initiated by him calling for suggestions for a password safe. I suggested using a system for your passwords instead, which he thought was a bad idea. So let’s have a look at both solutions.

You all know about these rules for choosing a password. It should contain a mix of lower and upper case letters, numerals, special characters, and punctuation. Moreover, it should be at least eight characters long and should be more or less random. Since our brain is limited in remembering such things we tend to use easy-to-remember passwords (e.g. replacing letters using leet speak). But of course hackers are aware of that, and it is quite easy to encode such rules in their cracking algorithms as well. Equally bad is using one strong password for all accounts. So, how to solve this problem?

Using a Password Safe

The first good idea is using very strong passwords for every account and writing them down, so you don’t have to remember them. You have probably often heard that writing passwords on a sheet of paper is a very stupid idea. And storing them in a document on your desktop is even worse. But there are lots of tools that help you with that problem, e.g. KeePass or KeePassX, just to name two open source solutions. You can organize your passwords and store them in an encrypted file. Thus, you just have to remember one single password to open this safe. These tools often include an excellent password generation functionality that helps you choose passwords. And if yet another website gets hacked, you just need to open your safe and replace the respective password with a new one. Very convenient.

Unfortunately, this solution also comes with some drawbacks. Since you no longer remember any of the actual passwords you always need to have this safe with you. I usually use five different machines, so I have to distribute this file (at least to have a backup). And of course you want to have it in sync, so you might want to store it in a cloud or something. But every copy of this safe increases the chance that an attacker gets access to it. Moreover, you cannot put a sub-safe containing only passwords for trivial accounts (like twitter) on your mobile phone (which I also do not trust). So, there are many weak points to get access to the safe (e.g. a design fail of the cloud, a bug in the cloud, an evil system administrator having access to your PCs at work, law enforcement etc.). And as soon as the attacker has access to this file he just has to crack one human-rememberable password to see the whole collection of your passwords, probably including login names and links to the web sites. Worst case scenario.

Using a Password System

The second idea is using a system to generate passwords for each account. You have to choose a very strong password p and a function f(p,n) that creates a unique password for every account using p and the (domain) name n of the related service. You just need to remember this very good p and f. Depending on your paranoia and your mental capabilities there are many options to choose f. An easy f might just put the 3rd and last letters of n at the 8th and 2nd positions in p (see example below). More paranoid mathematicians might choose an f that ASCII-adds the 3rd letter of n to the 8th position of p, puts another value derived from n at the 2nd position in p, and appends the base64 representation of the multiplicative digital root of the int values of the ASCII letters of n to p. Here you can see the examples:

p           n        easy f(p,n)    paranoid f(p,n)
u:M~a{em0   twitter  ur:M~a{eim0    u2.6:M~a{eW0Mi4yNDU2MjFlKzE0Cg==
u:M~a{em0   google   ue:M~a{eom0    u2.4:M~a{e]0MS40MjU4MjNlKzEyCg==

So, you see, if the password for twitter gets known the hacker isn’t able to log into your google account. To be honest, I guess that nobody will choose the paranoid f, but I think even the easy f is quite good and leaves some space for simple improvements.
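
Just to illustrate the easy f, here is a toy implementation in bash (nothing I actually use, just a sketch that reproduces the two examples above):

f () {
	local p=$1 n=$2
	local third=${n:2:1}              # 3rd letter of n
	local last=${n: -1}               # last letter of n
	local tmp=${p:0:7}$third${p:7}    # insert before the 8th char of p
	echo "${tmp:0:1}$last${tmp:1}"    # insert before the 2nd char of p
}

f 'u:M~a{em0' twitter   # -> ur:M~a{eim0
f 'u:M~a{em0' google    # -> ue:M~a{eom0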

However, as expected this solution also has some dramatic disadvantages. If one of your passwords gets compromised you need to change your system, at least choosing a different p and maybe also an alternative for f. As soon as a hacker is able to get two of these passwords he will immediately recognize the low entropy, and it is not difficult to derive the pattern behind your passwords, making it easy to guess all the others.

Conclusion

This is not to convince somebody to use one or the other solution, it’s more or less a comparison of the pros and cons. In my opinion the current password mechanism is sort of stupid, but we need to find the least bad solution until we have some alternatives. So what about creating a small two-factor auth system? You could combine the two above-mentioned solutions and use a password safe in combination with a password system: keep a short secret in mind which is necessary to unlock the passwords in the safe. Maybe something like 29A, which you have to add to every password (at some position of your choice, e.g. just append it). Thus, if a hacker breaks into one service only a single password is compromised and you just need to update this entry in your safe, and if your whole safe is cracked all passwords are useless crap. Of course you then have to create a new safe and update all your passwords, but the guy who knows your old “passwords” doesn’t know how to use them.

However, we are discussing this on a very high level. The mentioned scenarios are more or less targeted attacks against a particular person. I am a sysadmin, so I would already be very glad if users wouldn’t use passwords like mama123 and stopped sending passwords in clear-text mails!

Supplement: The Conversation

just for the logs (in twitter chronology: new -> old):

Pedro Mendes @gepasi at 1:13 PM - 30 May 13
@binfalse I agree, but using 30 character completely random ones seems to be the best.

martin scharm @binfalse at 5:40 PM - 29 May 13
@gepasi either using a password safe (which also has drawbacks) or a system with a strong p and a complex f.

martin scharm @binfalse at 5:39 PM - 29 May 13
@gepasi however, i support the attitude seeing every pw as compromised. so the most important rule is using unique pws for every service.

martin scharm @binfalse at 5:39 PM - 29 May 13
@gepasi even after reading this article i’d say that ur:M~a{eim0 is quite strong and i’d expect to find it within the 10% uncracked.

Pedro Mendes @gepasi at 1:18 PM - 29 May 13
@binfalse but thanks for the tip on KeePassX

Pedro Mendes @gepasi at 1:18 PM - 29 May 13
@binfalse a system is not recommended. Anything a human can remember is broken within 24h. Read http://arstechnica.com/security/2013/05/how-crackers-make-minced-meat-out-of-your-passwords/

martin scharm @binfalse at 1:03 PM - 29 May 13
@gepasi and even if someone breaks into twitter, your google passphrase (“ue:M~a{eom0”) isn’t compromised.

martin scharm @binfalse at 1:03 PM - 29 May 13
@gepasi quite easy to remember (when you know p), very hard to guess and brute-forcing the related hash really takes some time.

martin scharm @binfalse at 1:03 PM - 29 May 13
@gepasi e.g. p=”u:M~a{em0” and n=”twitter” would result in “ur:M~a{eim0” as a password for twitter.

martin scharm @binfalse at 1:02 PM - 29 May 13
@gepasi you just need to remember p and f, which may put the 3rd and last letter of n at the 8th and 2nd pos in p.

martin scharm @binfalse at 1:02 PM - 29 May 13
@gepasi choose a password p (as strong as possible) and a function f(p,n) that creates a unique password from p and a (domain) name n.

martin scharm @binfalse at 1:02 PM - 29 May 13
@gepasi afaik KeePassX is a good one. but i recommend to use a system!

Pedro Mendes @gepasi at 9:07 AM - 29 May 13
I need suggestions for a good password manager. Ideally only local storage (ie no cloud storage)

wp-login.php Brute Force Defense

Currently I’m observing a lot of brute force attacks trying to get access to my WordPress installation. Fortunately, I was aware of such cranks when I installed WordPress, and now I want to share my technique to prevent such attacks.

What's the problem?

There are some guys who try to get access to your website’s content to distribute even more spam and malware. Since they don’t have your credentials they need to guess them. Usually they randomly choose common login names (like admin or martin ) and popular passwords (like root123 or martin ) and try to log in to your web site. However, there are lots of possible combinations and only a few will work, so they usually need a lot of attempts. To prevent an intrusion you should choose an uncommon user name and a strong password (not only for your WordPress installation!). Nevertheless, there is still a chance to guess the credentials, so you’ll sleep much better if you make sure that there’s no chance for an attacker to break into your site.

Deny access to wp-login.php

The idea is to reject any login from anyone but you. For instance, using Apache (the most common web server) you may allow access to wp-login.php only from defined IP addresses:

<Location /wp-login.php>
   ErrorDocument 403 /
   Order deny,allow
   Deny from all
   Allow from 1.2.3.4
</Location>

This piece of code, included in a vhost or in an .htaccess file, will only allow connections from 1.2.3.4 to your wp-login.php . All other requests will be redirected to / . You need to have the module mod_access installed; for more information take a look at the documentation of mod_access . Other web servers like nginx or lighttpd have similar solutions. (And I hope even the Microsoft crap is able to do such basic stuff without much configuration overhead, but I’m too busy to read Microsoft documentation…)

Workaround for dynamic IPs

As long as you’re editing your articles from a static IP everything is fine. But what if you’re cursed with NAT? Indeed, it would be very annoying if you always had to adjust this config in order to log into your WordPress management interface! Fortunately, there is a small workaround if you have SSH access to that server. Simply restrict the access to the file to connections from the server’s own IP. Thus, only connections from the server itself are able to log in. In order to get access you then need to set up a tunnel to your server using SSH, providing a SOCKS proxy:

ssh -D8765 you@your.web.server

This command will create a tunnel from your local system to your.web.server . Connections to port 8765 on your system will be forwarded to your server, hence connections to your wp-login.php through the tunnel will be allowed. From now on only users having access to the server (physically or via SSH) are allowed to access your wp-login.php :-) There’s only one restriction left: you need to SSH to your server and you have to configure your browser to use this SOCKS proxy before you can access WordPress. I recommend using FoxyProxy.

Testing

Ok, let’s ensure that our config works. Try to access wp-login.php from an IP which is not allowed to access this file, e.g. using curl :

usr@client % curl -I /wp-login.php
HTTP/1.1 302 Found
[...]
Location: /
[...]

Since I’m not allowed to access this page I got a 302 and am redirected to / . Ok, what happens if I connect from an allowed host?

usr@srv % curl -I /wp-login.php
HTTP/1.1 200 OK
[...]

Excellent, 200 == allowed! If you want to verify your proxy connections using curl pass another parameter -x socks5://127.0.0.1:PORT to the command:

usr@client % curl -x socks5://127.0.0.1:8765 -I /wp-login.php
HTTP/1.1 200 OK
[...]

Great, everything’s fine :D

More Security

Of course you can add similar rules for other web sites or scripts. For example, to restrict access to the whole admin interface of WordPress, add another restriction to the vhost / .htaccess :

<LocationMatch ^/wp-admin>
   ErrorDocument 403 /
   Order deny,allow
   Deny from all
   Allow from 1.2.3.4
</LocationMatch>

I’m sure you’ll find even more reasonable rules.

KDE file type actions

Annoyingly, KDE’s PDF viewer Okular always opened links to websites with an editor, presenting the source code to me. But I just figured out how to change this behavior.

Screenshot: kcmshell4 dialog to configure filetype-application mappings

KDE maintains a central config defining what to do with certain file types. Unfortunately, in my case the default application for HTML files was an editor (kwrite/kate). I don’t know who or what defined this stupid behavior, but there is a tool called kcmshell4 to edit the KDE configuration. To edit the filetype-application mapping, hand it the parameter filetypes :

usr@srv % kcmshell4 filetypes

You’ll get a dialog to define a mapping for each known file type. In my case I had to configure okular to open links to HTML pages with konqueror. Hope that helps you to save some time ;-)
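
By the way, kcmshell4 can also list all available configuration modules, in case you want to tweak something else (worked for my KDE 4 at least):

usr@srv % kcmshell4 --list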

encfs: transparent crypto overlay

encfs is a cryptographic file system (encfs website). The principle is very easy: you “bind-mount” one directory (containing the encrypted stuff) to another directory (where it appears unencrypted). Thus, you can share the encrypted stuff and nobody but you can read your data. That makes this system an excellent fit for cloud services like Dropbox, which trap you with some space in the cloud “for free”, but want you to share your private data with them. In a few paragraphs we’ll see how to set up encfs for Dropbox, but let’s first take a look at encfs itself.

encfs

First of all you have to install encfs. Assuming you’re sitting in front of a Debian-based OS:

root@abakus ~ # aptitude install encfs

Since encfs is FUSE-based, the user who wants to use encfs has to be a member of the group fuse . You may use the groups command to make sure you belong to fuse :

martin@abakus ~ % groups
martin mail fuse

If you’re not yet a member of that group, edit the /etc/group file or use usermod (e.g. usermod -a -G fuse martin ). To apply the changes you need to re-login or use newgrp (man newgrp).

That’s it, now the way to use encfs is paved. Let’s say we want to work with our data in /dir/clear , while the whole stuff is stored encrypted in /dir/crypt . It’s quite simple to set up this environment, just call encfs [crypt-dir] [decrypt-dir] :

martin@abakus ~ % encfs /dir/crypt /dir/clear
Creating new encrypted volume.
Please choose from one of the following options:
 enter "x" for expert configuration mode,
 enter "p" for pre-configured paranoia mode,
 anything else, or an empty line will select standard mode.
?>

Give it a p and choose a password. This command will install an encrypted container in /dir/crypt and immediately mount it to /dir/clear . Feel free to create some files in /dir/clear (your new working directory) and compare this directory with /dir/crypt . You’ll see that you are not able to understand the files in /dir/crypt , unless you’re a genius or the setup failed. Thus, it’s no problem if anyone gets access to the content in /dir/crypt .

To unmount the clear data use fusermount -u /dir/clear . To remount it just call again encfs /dir/crypt /dir/clear , it will just ask you for the password to decrypt the data.
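
For the lazy, the unmount/remount cycle as commands:

martin@abakus ~ % fusermount -u /dir/clear      # unmount the clear view
martin@abakus ~ % encfs /dir/crypt /dir/clear   # remount, asks for your password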

Of course it’s not very convenient to mount the directory every time manually, hence there is a workaround to automount your encfs directories on login. You need to install libpam-mount and libpam-encfs :

root@abakus ~ # aptitude install libpam-mount libpam-encfs

To automatically mount an encfs on login the password for the crypt-fs has to be the same as the password for your user account! If that’s the case, add a line like the following to /etc/security/pam_mount.conf.xml :

<pam_mount>
    [...]
    <volume user="martin" fstype="fuse" path="encfs#/dir/crypt" mountpoint="/dir/clear" />
    [...]
</pam_mount>

On your next login this directory will automatically be mounted. Very smart!

Using encfs for the cloud

Ok, let’s get to the main reason for this article, winking towards Norway ;) . As you might have heard, there are some file hosting services out there, like Dropbox or Ubuntu One. They provide some space in the cloud which can be mounted on different devices, so that your data is sync’ed between your systems. Unfortunately, most of these services want to read your data. E.g. the Dropbox system wants to store a file used by multiple users only once. Even if they pretend to assure that nobody is able to read your private data, you all know that this is nearly impossible! Hence, I strongly recommend not pushing critical/private files to these kinds of providers.

But, tada, you’ve just learned how to sync your files using the cloud while keeping them private! Let’s assume the directory /home/martin/Dropbox is monitored by Dropbox; you just need to create two more directories, like /home/martin/Dropbox/private (for the encrypted files to be sync’ed) and /home/martin/Dropbox-decrypt (for the decrypted view). Mount /home/martin/Dropbox/private to /home/martin/Dropbox-decrypt using encfs and work in /home/martin/Dropbox-decrypt . As explained above, feel free to set up an automount using pam ;-) Everything in /home/martin/Dropbox but not in the private directory will be sync’ed unencrypted, so you won’t lose the opportunity to share some open data with [whoever], e.g. via web browser.
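
In commands, the setup could look roughly like this (just a sketch using the paths from above, assuming the Dropbox client already monitors /home/martin/Dropbox ):

martin@abakus ~ % mkdir Dropbox/private Dropbox-decrypt
martin@abakus ~ % encfs /home/martin/Dropbox/private /home/martin/Dropbox-decrypt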

Of course, this method comes with some drawbacks:

  • It is a bit more work to setup every client, before you can start working with your private data (fortunately the overhead is kept in reasonable limits)
  • You cannot access these files through the web browser, or using your mobile phone (unless your phone comes with encfs-support)

All in all, you need to decide on your own how much you trust Dropbox (and the like) and which kind of data you commit to Dropbox unencrypted.

Sync the clock w/o NTP

The network time protocol (NTP) is a really smart and useful protocol to synchronize the time of your systems, but even if we are in two-thousand-whatever there are reasons why you need to seek alternatives...

You may now have some kind of »what the [cussword of your choice]« in mind, but I have just been in an ugly situation: all UDP traffic is dropped and I don't have permission to adjust the firewall. And you might have heard about the consequences of time differences between servers. Long story short, there is a good solution to sync the time via TCP, using the Time Protocol and a tool called `rdate` .

Time Master

First of all you need another server having the correct time (e.g. NTP sync'ed), which can be reached at port 37. Let's call this server `$MASTER` . To enable the Time Protocol on `$MASTER` you have to enable the time service in (x)inetd. For instance, to enable the TCP service for a current `xinetd` you could create a file `/etc/xinetd.d/time` with the following contents:

service time 
{
    disable     = no 
    type        = INTERNAL
    id          = time-stream
    socket_type = stream
    protocol    = tcp
    user        = root 
    wait        = no 
}

Such a file may already exist, so you just have to change the value of the `disable` key to `no` . Still using inetd? I'm sure you'll find your way to enable the time server on your system :)
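
(Just as a pointer for the inetd people: the classic built-in time service is usually enabled with a line like the following in `/etc/inetd.conf` , followed by a restart of inetd.)

time	stream	tcp	nowait	root	internal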

Time Slave

On the client, which is not allowed to use NTP (wtfh!?), you need to install `rdate` :

aptitude install rdate

Just call the following command to synchronize the time of the client with `$MASTER` :

rdate $MASTER

Since `rdate` immediately corrects the time of your system you need to be root to run this command.

Finally, to readjust the time periodically you might want to install a cronjob. Being root, call `crontab -e` to edit root's crontab and append a line like the following:

# m     h       dom     mon     dow     command
0       */6     *       *       *       [ -x /usr/bin/rdate ] && /usr/bin/rdate $MASTER >> /dev/null

This will synchronize the time of your client with the time of `$MASTER` every six hours. (Don't forget to substitute `$MASTER` using your desired server IP or DNS.)

Notes

Last but not least I want you to be aware that this workaround just keeps the difference in time between both systems below 0.5 seconds. Compared to NTP that's beyond all doubt very poor. Nevertheless, 0.5 seconds of offset is much better than several minutes or even hours!

If it is also not permitted to speak to port 37 you need to tunnel your connections, or you have to tell the time server to listen on another, more common port (e.g. 80, 443, or 993), as long as it is not already allocated by another service.
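
The tunnel variant could look like the following (a rough sketch: it forwards the local, privileged port 37 to `$MASTER` 's time service, hence it has to run as root, and it assumes you are still allowed to SSH to `$MASTER` ):

root@client ~ # ssh -f -N -L 37:localhost:37 you@$MASTER
root@client ~ # rdate 127.0.0.1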

Bash Wildcards

I wanted to publish this summary about wildcards in the bash (and similar shells) some time ago, but never finished it. Now it finally gets published.

The shell handles words or patterns containing a wildcard as a template. Available filenames are tested to see if they fit this template. This evaluation is also called globbing. Let’s have a look at a small example:

me@kile /tmp/blog $ ls
aaa   aaa2  aaaa1  aaaa3  aaaa5  abbb  bbbb
aaa1  aaa3  aaaa2  aaaa4  aaab   acdc  caab
me@kile /tmp/blog $ ls *b
aaab  abbb  bbbb  caab

In this example * is replaced by appropriate characters, and the list of matching files is passed to the ls command. This set of files will be used in the following examples.

Encode for a single character: `?`

The question mark can be replaced by a single character. So if you want to get the files aaa1 , aaa2 , aaa3 and aaab you can use the following pattern:

me@kile /tmp/blog $ ls aaa?
aaa1  aaa2  aaa3  aaab

So you see, the ? is replaced by exactly one character. That is, neither aaa nor aaaa1 will match.

Encode for an arbitrary number of characters: `*`

To match any number of characters you can use the asterisk * . It can replace 0 to n characters, where n is limited by the maximum length of the file name and depends on the file system you’re using. Adapting the previous snippet you’ll now also get aaa and the aaaa* files:

me@kile /tmp/blog $ ls aaa*
aaa  aaa1  aaa2  aaa3  aaaa1  aaaa2  aaaa3  aaaa4  aaaa5  aaab

Encode for a set of characters: `[...]`

Most of the common tasks can be done with the previous templates, but there are cases where you need to define the set of characters to match. You can specify this set using brackets, e.g. [3421] can be replaced by 3 , 4 , 2 or 1 and is the same as [1-4] :

me@kile /tmp/blog $ ls aaaa?
aaaa1  aaaa2  aaaa3  aaaa4  aaaa5
me@kile /tmp/blog $ ls aaaa[3421]
aaaa1  aaaa2  aaaa3  aaaa4
me@kile /tmp/blog $ ls aaaa[1-4]
aaaa1  aaaa2  aaaa3  aaaa4

As you can see aaaa5 doesn’t match [3421] , and btw. the order of the specified characters doesn’t matter. And because it would be very annoying to match against any alphabetic character by typing all 26 characters, you can specify character ranges using a hyphen ( a-z ). Here are some examples:

Template         Character set
`[xyz1]`         `x`, `y`, `z` or `1`
`[C-Fc-f]`       `C`, `D`, `E`, `F`, `c`, `d`, `e` or `f`
`[a-z0-9]`       any lowercase character or digit
`[^b-d]`         any character except `b`, `c`, `d`
`[Yy][Ee][Ss]`   case-insensitive matching of "yes"
`[[:alnum:]]`    alphanumeric characters, same as `A-Za-z0-9`
`[[:alpha:]]`    alphabetic characters, same as `A-Za-z`
`[[:digit:]]`    digits, same as `0-9`
`[[:lower:]]`    lowercase alphabetic characters, same as `a-z`
`[[:upper:]]`    uppercase alphabetic characters, same as `A-Z`
`[[:space:]]`    whitespace characters (space, tab etc.)
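
For instance, combining the asterisk with one of these character classes (again using the files from the example above):

me@kile /tmp/blog $ ls *[[:digit:]]
aaa1  aaa2  aaa3  aaaa1  aaaa2  aaaa3  aaaa4  aaaa5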

Btw. the files that match such a template are sorted before they are passed to the command.

Validating XML files

In the scope of different projects I often have to validate XML files. Here is my solution to verify XML files using a schema.

First of all, to validate XML files in Java you need to create a SchemaFactory for the W3C XML Schema language, and you have to compile the schema (let’s assume it’s located in /path/to/schema.xsd ):

SchemaFactory factory = SchemaFactory.newInstance ("http://www.w3.org/2001/XMLSchema");
Schema schema = factory.newSchema (new File ("/path/to/schema.xsd"));

Now you’re able to create a validator from the schema.

Validator validator = schema.newValidator ();

In order to validate an XML file you have to read it (let’s assume it’s located in /path/to/file.xml ):

Source source = new StreamSource (new File ("/path/to/file.xml"));

Last but not least you can validate the file:

try
{
  validator.validate (source);
  System.out.println ("file is valid");
}
catch (SAXException e)
{
  System.out.println ("file is invalid:");
  System.out.println (e.getMessage ());
}
Download: JAVA: XMLValidator.java

HowTo Debug Bash Scripts

Even shell scripts may get very complex, so it is helpful to know how to debug them.

Let’s explain it with a small example:

#!/bin/bash

echo lets go

# some comment
DIR=/boot
/bin/ls -l $DIR | /bin/grep initrd  | wc -l

echo done

Executing it you’ll get an output like this:

usr@srv /tmp % bash test.sh
lets go
112
done

To debug the execution of scripts the bash provides a debugging mode. There is an option -x to trace the execution:

usr@srv /tmp % bash -x test.sh
+ echo lets go
lets go
+ DIR=/boot
+ wc -l
+ /bin/grep initrd
+ /bin/ls -l /boot
112
+ echo done
done

So you see, every line that is executed at runtime will be printed with a leading + ; comments are ignored. There is another option -v to enable verbose mode. In this mode each line that is read by the bash will be printed before it is executed:

usr@srv /tmp % bash -v test.sh
#!/bin/bash

echo lets go
lets go

# some comment
DIR=/boot
/bin/ls -l $DIR | /bin/grep initrd  | wc -l
112

echo done
done

Of course you can combine both modes, so the script is sequentially printed and the commands are traced:

usr@srv /tmp % bash -vx test.sh
#!/bin/bash

echo lets go
+ echo lets go
lets go

# some comment
DIR=/boot
+ DIR=/boot
/bin/ls -l $DIR | /bin/grep initrd  | wc -l
+ /bin/ls -l /boot
+ wc -l
+ /bin/grep initrd
112

echo done
+ echo done
done

These modes will help you to find errors. To modify the output of the tracing mode you may configure the PS4 variable:

export PS4='+${BASH_SOURCE}:${LINENO}:${FUNCNAME[0]}: '

This will also print the file name of the executing script, the line number of the current command that is executed and the respective function name:

usr@srv /tmp % export PS4='+${BASH_SOURCE}:${LINENO}:${FUNCNAME[0]}: '
usr@srv /tmp % bash -x test.sh
+test.sh:3:: echo lets go
lets go
+test.sh:6:: DIR=/boot
+test.sh:7:: /bin/ls -l /boot
+test.sh:7:: /bin/grep initrd
+test.sh:7:: wc -l
112
+test.sh:9:: echo done
done

If you don’t want to trace a whole script you can enable/disable tracing from within the script:

# [...]
echo no tracing
set -x
echo trace me
set +x
echo no tracing
# [...]

This will result in something like:

usr@srv /tmp % bash test.sh
[...]
no tracing
+test.sh:14:: echo trace me
trace me
+test.sh:15:: set +x
no tracing
[...]

It is of course also possible to enable/disable verbose mode inside the script with set -v and set +v , respectively.