I recently wrote an email with an attached LZMA archive. It was immediately answered with something like:

What are you doing? I had to boot Linux to open the file!

First of all, I don’t care whether users of proprietary systems are able to read open formats, but this answer made me curious about the differences between common compression methods in terms of compression ratio and time. So I had to test it!

This is nothing scientific! I just used the default parameters; you might tune each method on its own to save more space or time. Just have a look at the parameters -1..-9 of zip, as in the sketch below. But all in all this should give you a feeling for the methods.
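For example, zip (and gzip alike) takes levels from -1 to -9; the directory name here is just a placeholder:

    zip -1 -r archive.zip mydir   # fastest, larger archive
    zip -9 -r archive.zip mydir   # best compression, slowest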

Candidates

I’ve chosen some common compression methods; here is a short digest (more or less copied from the man pages):

  • gzip: uses Lempel-Ziv coding (LZ77), cmd: tar czf $1.pack.tar.gz $1
  • bzip2: uses the Burrows-Wheeler block sorting text compression algorithm and Huffman coding, cmd: tar cjf $1.pack.tar.bz2 $1
  • zip: analogous to a combination of the Unix commands tar(1) and compress(1) and is compatible with PKZIP (Phil Katz’s ZIP for MSDOS systems), cmd: zip -r $1.pack.zip $1
  • rar: proprietary archive file format, cmd: rar a $1.pack.rar $1
  • lha: based on Lempel-Ziv-Storer-Szymanski-Algorithm (LZSS) and Huffman coding, cmd: lha a $1.pack.lha $1
  • lzma: Lempel-Ziv-Markov chain algorithm, cmd: tar --lzma -cf $1.pack.tar.lzma $1
  • lzop: similar to gzip but favors speed over compression ratio, cmd: tar --lzop -cf $1.pack.tar.lzop $1

All times are user times, measured with the Unix time command. To visualize the results I plotted them using R: compression efficiency on the X axis vs. time on the Y axis. The best results are of course located near the origin.
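If you want to reproduce the numbers, here is a minimal sketch of such a benchmark run (GNU time is assumed for the -f option; the ls at the end just compares the sizes):

    #!/bin/sh
    # Benchmark sketch: pack directory $1 with every candidate and
    # print the user time in seconds after each run.
    # GNU time with -f '%U' reports the user time on stderr.
    T="/usr/bin/time -f %U"
    $T tar czf "$1.pack.tar.gz" "$1"
    $T tar cjf "$1.pack.tar.bz2" "$1"
    $T zip -r "$1.pack.zip" "$1"
    $T rar a "$1.pack.rar" "$1"
    $T lha a "$1.pack.lha" "$1"
    $T tar --lzma -cf "$1.pack.tar.lzma" "$1"
    $T tar --lzop -cf "$1.pack.tar.lzop" "$1"
    ls -l "$1".pack.*   # compare the resulting archive sizes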

Data

To test the different algorithms I collected several types of data, so that one can choose a method depending on the file types at hand.

Binaries

The first category is binaries: a collection of files in non-human-readable formats. I copied all files from /bin and /usr/bin, created a GPG-encrypted version of a big document, and added a copy of grml64-small_2010.12.iso. All in all 176.753.125 bytes.
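Just to sketch how that pile was assembled (the document name is hypothetical, and I’m assuming gpg’s symmetric mode here; the actual call may have looked different):

    mkdir binaries
    cp /bin/* /usr/bin/* binaries/
    gpg --symmetric --output binaries/bigdoc.gpg bigdoc.pdf   # hypothetical file names
    cp grml64-small_2010.12.iso binaries/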

Method | Compressed size in bytes | % of original | Time in s
gzip   | 161.999.804              | 91.65         | 10.18
bzip2  | 161.634.685              | 91.45         | 71.76
zip    | 179.273.428              | 101.43        | 13.51
rar    | 175.085.411              | 99.06         | 156.46
lha    | 180.357.628              | 102.04        | 35.82
lzma   | 157.031.052              | 88.84         | 129.22
lzop   | 165.533.609              | 93.65         | 4.16

Media

This is a bunch of media files: some audio data like Martin Luther King’s I have a dream speech and some music, plus video files like the Free Software Song and Clinton’s I did not have sexual relations with that woman. I made sure to include different formats, so there are audio files of the types ogg, mp3, mid, ram, smil and wav, and video files like avi, ogv and mp4. Altogether 95.393.277 bytes.

Method | Compressed size in bytes | % of original | Time in s
gzip   | 88.454.002               | 92.73         | 6.04
bzip2  | 87.855.906               | 92.10         | 37.82
zip    | 88.453.926               | 92.73         | 6.17
rar    | 87.917.406               | 92.16         | 70.69
lha    | 88.885.325               | 93.18         | 14.22
lzma   | 87.564.032               | 91.79         | 74.76
lzop   | 90.691.764               | 95.07         | 2.28

Office

The next category is office. It contains some PDFs from different journals and office files from LibreOffice and Microsoft Office (special thanks to @chschmelzer for providing the MS files). The complete size of these files is 10.168.755 bytes.

Method | Compressed size in bytes | % of original | Time in s
gzip   | 8.091.876                | 79.58         | 0.55
bzip2  | 8.175.629                | 80.40         | 8.58
zip    | 8.092.682                | 79.58         | 0.54
rar    | 7.880.715                | 77.50         | 3.72
lha    | 8.236.422                | 81.00         | 3.29
lzma   | 7.802.416                | 76.73         | 5.62
lzop   | 8.358.343                | 82.20         | 0.21

Pictures

To test the compression of pictures I downloaded 10 files of each of the formats bmp, eps, gif, jpg, png, svg and tif; they are simply the first ones I found with Google’s image search. In total 29.417.414 bytes.

Method | Compressed size in bytes | % of original | Time in s
gzip   | 20.685.809               | 70.32         | 1.65
bzip2  | 18.523.091               | 62.97         | 10.71
zip    | 20.668.602               | 70.26         | 1.72
rar    | 18.052.688               | 61.37         | 8.58
lha    | 20.927.949               | 71.14         | 5.97
lzma   | 18.310.032               | 62.24         | 21.09
lzop   | 23.489.611               | 79.85         | 0.57

Plain

This is the main category. As you know, ASCII content is not stored very space-efficiently, so here the tools can run riot! I downloaded some books from Project Gutenberg, for example Jules Verne’s Around the World in 80 Days and Homer’s The Odyssey, the source code of moon-buggy and OpenLDAP, and copied all text files from /var/log. Altogether 40.040.854 bytes.

Method | Compressed size in bytes | % of original | Time in s
gzip   | 11.363.931               | 28.38         | 1.88
bzip2  | 9.615.929                | 24.02         | 13.63
zip    | 12.986.153               | 32.43         | 1.6
rar    | 11.942.201               | 29.83         | 8.68
lha    | 13.067.746               | 32.64         | 8.86
lzma   | 8.562.968                | 21.39         | 30.21
lzop   | 15.384.624               | 38.42         | 0.38

Rand

This category is just to test the random generators: compressing truly random content shouldn’t decrease the size of the files. Here I used two files from random.org and dumped some bytes from /dev/urandom. 4.198.400 bytes in total.
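Such a dump is easily produced with dd; the amount below is illustrative, not the exact size I used:

    dd if=/dev/urandom of=urandom.dump bs=1M count=2   # 2 MiB of kernel randomness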

Method | Compressed size in bytes | % of original | Time in s
gzip   | 4.195.646                | 99.93         | 0.23
bzip2  | 4.213.356                | 100.36        | 1.83
zip    | 4.195.758                | 99.94         | 0.2
rar    | 4.205.389                | 100.17        | 1.65
lha    | 4.194.566                | 99.91         | 2.04
lzma   | 4.197.256                | 99.97         | 1.98
lzop   | 4.197.134                | 99.97         | 0.1

Everything

All files of the previous categories compressed together. Since the categories aren’t of the same size, this is of course not really fair; nevertheless it might be interesting. All files together require 355.971.825 bytes.

Method | Compressed size in bytes | % of original | Time in s
gzip   | 294.793.255              | 82.81         | 20.43
bzip2  | 290.093.007              | 81.49         | 141.89
zip    | 313.670.439              | 88.12         | 23.78
rar    | 305.083.648              | 85.70         | 246.63
lha    | 315.669.631              | 88.68         | 64.81
lzma   | 283.475.568              | 79.63         | 258.05
lzop   | 307.644.076              | 86.42         | 7.89

Conclusion

As you can see, the violet lzma dot is always located at the left side of the plots, meaning very good compression. Unfortunately it’s also always near the top, so it’s very slow. But if you want to compress files to send them via mail you won’t mind longer compression times; the file size might be the crucial factor. On the other hand, black, green and grey (gzip, zip and lzop) are often found at the bottom of the plots, so they are faster but don’t shrink the size as effectively.

All in all you have to choose the method on your own. Also think about compatibility: not everybody is able to unpack lzma or lzop. My upshot is to use lzma if I want to transfer data through networks or as attachments for advanced people, and to use gzip for everything else, like backups of configs or mails to Windows users.
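For reference, the matching unpack calls for my two picks (the --lzma switch requires a reasonably recent GNU tar; the archive names are placeholders):

    tar xzf backup.pack.tar.gz          # gzip, understood nearly everywhere
    tar --lzma -xf data.pack.tar.lzma   # lzma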



7 comments

Martin Steinbach

Very detailed benchmark, especially the choice of the media and plaintext files, nice. I’m sure I will frequently use this site in the future to choose the optimal compression algorithm.

But I have to add a note: in the rare cases I send compressed files to Windows users, I use the 7z container. 7z also uses the lzma algorithm and is free software, too.

Windows builds: http://7z.org, sources: http://p7zip.sourceforge.net/; it is also available in the Debian, grml and Ubuntu repositories.

Martin Scharm

Thanks for the hint about 7z! I knew I forgot something…

Tagir Valeev

The comparison is unfair and biased towards gzip/bzip2, because you first used tar (which practically joins all files into one big file). The other archivers compress each file independently, so they cannot take advantage of similarities between files (but they allow you to unpack a single file or remove/replace files without repacking the whole archive). If you used tar as the first step for all the other archivers as well, the results would be much fairer. Rar has a special option -s (solid archive) which enables inter-file compression. From your tests it seems that similarities between files can affect the result quite a lot (especially for the /var/log content and source code).

Foo Bar

Your “Binaries” test is invalid. The output of GPG is mathematically pseudo-random and therefore incompressible (as demonstrated). Hence, the result is random noise, identical to your “Rand” test.

Compression of actual binary code files yields substantially different results.

Bradley

I’m running Windows 8 and use quite a small amount of compression, and as of yet I’ve had no problem with lzma. As a comment above stated, 7z uses that algorithm, and there are numerous programs for Windows that will open and create such archives with no issue.

Robert Ruedisueli

I recommend 7z: it uses the LZMA algorithm but adds checksums and archiving capabilities. This adds a little overhead, but it saves you from having to use tar for archiving and gives you the reliability of built-in checksums.

As for single files that you don’t need checksums on, I recommend LZMA.

As a note, you can push many of these algorithms further with command-line options; most archivers stick to the default options because the higher-compression settings can take exponentially longer and use far more resources for only a small gain, and some of them can be very memory- and/or processor-intensive to decompress.

AndreasW

I’ve done some speed comparisons between 7z and zip lately. Zip came out much faster, in some test cases twenty times faster. See there: http://99-developer-tools.com/why-zip-is-better-than-7z/
