DNS look-ups are a very sensitive topic. Of course you want fast name-to-IP resolution, but should you always use Google’s DNS server? After all, they can track your complete browsing profile unless you surf by IP!
Today I read about the OpenNIC Project and ran some speed tests. It’s very interesting and worth knowing about!
OpenNIC (a.k.a. "The OpenNIC Project") is an organization of hobbyists who run an alternative DNS network. [...] Our goal is to provide you with quick and reliable DNS services and access to domains not administered by ICANN.
Ok, I gave it a try and implemented a Perl script that measures the speed. It rolls a die to pick one of my frequently used domains and digs each of my predefined DNS servers to save the query time. I tested the following DNS servers:
184.108.40.206 : one server of the OpenNIC project, located in Germany
220.127.116.11 : one server of the OpenNIC project, located in Germany (NRW)
18.104.22.168 : Google’s public DNS server, proven to be fast and reliable
172.16.20.53 : my ISP’s server
22.214.171.124 : name server of our university
Find the Perl code attached.
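The Perl code isn’t reproduced in this export, but the idea is simple enough to sketch, here in Python (server and domain lists are placeholders, and dig must be installed; this is not the original script):

```python
import re
import subprocess

# placeholders -- fill in your own resolvers and frequently used domains
SERVERS = ["8.8.8.8", "127.0.0.1"]
DOMAINS = ["example.org", "example.com"]

def parse_query_time(dig_output):
    """Extract the ';; Query time: N msec' value from dig's output."""
    match = re.search(r";; Query time: (\d+) msec", dig_output)
    return int(match.group(1)) if match else None

def measure(server, domain):
    """Dig one domain at one server and return the query time in ms."""
    out = subprocess.run(["dig", "@" + server, domain],
                         capture_output=True, text=True).stdout
    return parse_query_time(out)
```

Rolling the die (e.g. `random.choice(DOMAINS)`) and looping over `SERVERS` a few thousand times then gives a query-time sample per server.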
And here are the results after 10000 queries:
[Figure: distribution of query times per name server, among them the NS of uni-halle.de]
As you can see, my ISP’s DNS server is the fastest; they have presumably optimized their internal infrastructure to provide very fast look-ups to their customers. But it is also nice to see that one OpenNIC server is faster than Google! And this server comes with another feature: it doesn’t keep any logs! Isn’t that great!?
To find some servers near you, just check their server list. Some of them don’t keep logs or anonymize them, and of course all of them are independent of ICANN administration.
I can’t recommend one particular DNS server, but I advise you to test them and find the best one for your needs! Feel free to post your own test results via comment or trackback.
You may have heard about management consoles!? If a server is dead, you can revive it via the service console without driving the long way to the data center (often miles away).
While logged in to the service console you of course have the chance to reboot the machine itself. To know what it is doing while booting, you may want to see all the messages that are usually printed to the terminal on the attached monitor. Unfortunately you aren’t next to the machine, so there is no monitor attached to it, but you can force Grub to print all messages both to the terminal and to the serial console.
First of all you have to setup the serial console:
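In Grub legacy that is a single directive in menu.lst; matching the parameters explained below, it should look like this:

```
serial --unit=0 --speed=57600
```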
The --unit parameter determines the COM port; here it’s COM1, if you need COM2 you should use --unit=1 . --speed defines a baud rate of 57600 bps, see your manual. To learn more about the other parameters, have a look at the Grub manual for serial.
Next you have to tell Grub where to write the output:
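Again one directive, for example:

```
terminal --timeout=5 serial console
```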
This line tells Grub that there are two devices: the typical console on the attached screen and our previously defined serial console. With this directive Grub waits 5 seconds for any input from the serial console or the attached keyboard and will print its menu to the device where the input was generated. That means if you’re at home and press a key, Grub will send all output to your serial connection, but your student assistant (who had to go to the server, by bike, in the rain!!) isn’t able to see what’s happening. But if your assistant is faster than you and hits a key on the physically attached keyboard, he’ll see everything and you’ll be looking at a black window…
If nobody produces any input, the output is written to the device that is listed first.
Last but not least you have to modify the kernel sections of the boot menu and append something like that at the end of every kernel line:
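Matching the serial console defined above, the appended parameters might look like this:

```
console=tty0 console=ttyS0,57600n8
```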
That tells Grub that all kernel messages should be printed both to the real console on the attached screen and to the serial console. Keep in mind to modify ttyS0 to match your serial port (here it is COM1).
Grub picks the device that is listed last to also receive stdin/stdout/stderr of the init process, which means only the last device will act as an interactive terminal. E.g. checks of fsck are only printed to the last device, so stay calm if nothing happens for a long time on the other one ;-)
Here is a valid example for copy and paste:
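The original listing didn’t survive this export; a sketch of such a menu.lst (kernel image, root device and partition are of course hypothetical, adjust them to your system):

```
serial --unit=0 --speed=57600
terminal --timeout=5 serial console

title Linux (interactive terminal: local console)
    root (hd0,0)
    kernel /vmlinuz root=/dev/sda1 ro console=ttyS0,57600n8 console=tty0
    initrd /initrd.img

title Linux (interactive terminal: serial console)
    root (hd0,0)
    kernel /vmlinuz root=/dev/sda1 ro console=tty0 console=ttyS0,57600n8
    initrd /initrd.img
```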
Here both Grub entries boot the same kernel, but the first one uses the local console as interactive terminal whereas the other entry takes the serial console for interaction.
By default Xfce provides screen-locking via Xscreensaver. Here is how you change it.
Xfce runs a script called xflock4 to lock the screen; to change the default behavior, just foist another script on Xfce!
The default path settings for searching for this executable show that /usr/local/bin has a higher priority than /usr/bin (where the original xflock4 is located). The rest should be clear!
E.g. to use xtrlock instead of Xscreensaver you just have to link to the binary:
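Assuming xtrlock lives in /usr/bin, that is a single symlink (run as root):

```shell
ln -s /usr/bin/xtrlock /usr/local/bin/xflock4
```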
On a multiuser system you may allow each user to use their own locking solution. So just write a script that checks whether $HOME/.screenlock is executable and runs it, or falls back to a default screensaver:
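A minimal sketch of such a wrapper (assuming Xscreensaver as the fallback locker):

```shell
#!/bin/sh
# /usr/local/bin/xflock4: per-user screen locking
# run the user's own locker if it exists and is executable...
if [ -x "$HOME/.screenlock" ]; then
    exec "$HOME/.screenlock"
fi
# ...otherwise fall back to the default Xscreensaver lock
exec xscreensaver-command -lock
```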
Save it executable as /usr/local/bin/xflock4 - done…
I recently got very close to the floating point trap, again, so here is a little tribute with some small examples!
Because GNU R is very good at suppressing these errors, all examples are presented in R.
Those of you who are ignorant like me might think that dividing 0.3 by 3 gives exactly 0.1 and expect 0.3/3 == 0.1 to be TRUE. It isn’t! Just see the following:
You might think it comes from the division, so you might expect seq(0, 1, by=0.1) == 0.3 to contain exactly one value that is TRUE!? Harrharr, nothing like that!
Furthermore, what do you think is the size of unique(c(0.3, 0.4 - 0.1, 0.5 - 0.2, 0.6 - 0.3, 0.7 - 0.4)) !? Is it one? Not even close to it:
Your machine is so stupid that it isn’t able to store such simple numbers ;)
And another example should show you how these errors sum up:
As you can see, R tells you that you summed up to exactly one, suppressing the small numerical error. This error will increase with larger calculations! So be careful with any comparisons.
To not fail the next time, use for example the R built-in function all.equal for comparisons:
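By the way, this trap is not an R speciality; every language using IEEE 754 doubles behaves the same. For comparison, the same experiments in Python, where math.isclose plays the role of R’s all.equal:

```python
import math

# 0.1 has no exact binary representation, so arithmetic drifts:
print(0.3 / 3 == 0.1)            # False
print(0.1 + 0.2 == 0.3)          # False

# the error accumulates over many additions:
total = sum(0.1 for _ in range(10))
print(total == 1.0)              # False

# tolerance-based comparison, analogous to R's all.equal:
print(math.isclose(total, 1.0))  # True
```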
Or, if you’re dealing with integers, you should use round or as.integer to make sure they really are integers.
I hope I could prevent some of you from falling into this floating point trap! So stop arguing about numerical errors and start caring about logical failures ;-)
Those of you interested in further wondering are referred to [Mon08].
Welcome to my new category: ShortCut! Here I’ll shortly explain some smart features, unknown extensions or uncommon pathways of going for gold.
Today it’s about the Gnu R tool locator.
With locator you are able to detect the mouse position inside your plot. Just run locator() and click some points; when you’re finished, click the right button and locator will print the x and y values of the clicked positions.
With this tool it’s possible to visually validate some numerical calculation.
With a little bit more code, you can print the coordinates right into your plot:
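A sketch of how that could look (assuming an open plot window; the rounding is just cosmetic):

```r
# wait for one click inside the current plot
p <- locator(1)
# and write the (rounded) coordinates right at that position
text(p$x, p$y, labels = paste(round(p$x, 2), round(p$y, 2)))
```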
With a click into the plot you’ll be able to create a result like figure 1.
I’m currently attending a lecture with the great name RNA-Seq, dealing with next generation sequencing (NGS). I think the lecture is more or less addressed to biological scientists and people who work with genome analyzers, but there is no harm in visiting this lecture and getting to know the biologists’ point of view.
These scientists are using different sequencing platforms. Some popular examples are Roche 454, Illumina/Solexa, ABI SOLiD, Pacific Biosciences PacBio RS, Helicos HeliScope™ Single Molecule Sequencer or Polonator, but there are many more such platforms. If you are interested in these different techniques, see [Met09].
There is no standard, so all these machines produce output in different formats and quality. In general you’ll get a FASTQ file as the result of sequencing. This file contains lots of more or less short sequence reads together with a quality score for each recognized nucleotide. The quality score is encoded in ASCII characters, and each record consists of four line types.
Here is an example of such a file:
As you can see, the file contains an identifier line starting with @ , the recognized sequence, a comment line starting with + , followed by the quality score for each base. It’s a big problem that there is no common standard for these quality scores; they differ in range depending on the sequencing platform. The original Sanger format uses PHRED scores ([EG98] and [EHWG98]) in the ASCII range 33-126 ( ! - ~ ), Solexa uses Solexa scores encoded in the ASCII range 59-126 ( ; - ~ ), and with Illumina 1.3+ they introduced PHRED scores in the ASCII range 64-126 ( @ - ~ ). So you sometimes won’t be able to determine which format your FASTQ file comes from; a score in the Illumina range could stem from any of these three formats. If you want to learn more about FASTQ files and formats, you are referred to [CFGHR10].
Interested readers are free to translate the ASCII coded quality scores of my small example to numerical quality scores and post the solution to the comment!
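As a hint for that exercise: the translation is plain character arithmetic. A sketch in Python (the example quality strings are made up, not taken from the file above):

```python
def phred_sanger(qual):
    """Decode a Sanger FASTQ quality string (ASCII offset 33)."""
    return [ord(c) - 33 for c in qual]

def phred_illumina13(qual):
    """Decode an Illumina 1.3+ quality string (ASCII offset 64)."""
    return [ord(c) - 64 for c in qual]

# '!' is the lowest possible Sanger score (0), 'I' corresponds to PHRED 40
print(phred_sanger("!I"))        # [0, 40]
# the same scores in Illumina 1.3+ encoding: '@' is 0, 'h' is 40
print(phred_illumina13("@h"))    # [0, 40]
```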
There is a great tool established to work with these resulting FASTQ files (and this is just a small field of its applications): Galaxy. It is completely open source and written in Python. Those who have already worked with it told me that you can easily extend it with plug-ins. You can choose whether to run your own copy of this tool or to use the web platform of the Penn State. There’s a very huge ensemble of tools; I’ve only worked with a small set of it, but I like it. It seems that you are able to upload data of unlimited size and it will never get deleted!? Not bad, guys! You can share your data and working history, and you can create workflows to automate some jobs. Of course I’m excited to write an en- and decoder for other data like videos or music to and from FASTQ - let’s see if there’s some time ;-)
But this platform also has some inelegances. Raw data is often presented in a raw format. Have a look at figure 1: you can see there is a table whose columns are separated by tabs, but if one word in a column is much shorter than another one in this column, the table loses its human readability! Here I’ve colorized the columns, but if the background is completely white, you have no chance to read it.
So instead of getting angry I immediately wrote a userscript. It adds a button at the top of pages with raw data, and when it is clicked, it creates an HTML table of this data. You can see a resulting table in figure 2. If you think it is nice, just download it at the end of this article.
All in all, I can only guess what’s coming next!
[CFGHR10] Peter J. A. Cock, Christopher J. Fields, Naohisa Goto, Michael L. Heuer, and Peter M. Rice.
The Sanger FASTQ file format for sequences with quality scores, and the Solexa/Illumina FASTQ variants.
Nucleic Acids Research, 38(6):1767–1771, April 2010.
[EHWG98] Brent Ewing, LaDeana Hillier, Michael C. Wendl, and Phil Green.
Base-Calling of Automated Sequencer Traces Using Phred. I. Accuracy Assessment.
Genome Research, 8(3):175–185, March 1998.
Apart from an IMAP/POP service we provide a webmail front end to our mail server via SquirrelMail. This tool has a very annoying feature: search results are ordered by date, but the wrong way round: from old to new!
SquirrelMail is a front end that is very simple to administrate, not very pretty, but if my experimental Icedove doesn’t work I use it too. Furthermore we have staff members who only use this tool and aren’t impressed by real user-side clients like Icedove or Sylpheed. Whatever, I had to re-sort these search results!
Searching for a solution didn’t turn up anything, so I had three options: modifying the SquirrelMail code itself (very bad idea, I know), providing a plugin for SquirrelMail, or writing a userscript.
The layout of this website is lousy! I think they never heard of div’s or CSS; everything is managed by tables in tables of tables and inline layout specifications -.-
So detecting the right table wasn’t that easy. I had to find the table that contains a headline with the key From :
Once I’ve found such a table, all rows have to be sorted from last to first, except the first ones defining the headline of that table. So I modified the DOM:
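Stripped of the SquirrelMail-specific DOM look-ups, the core of that modification is just “keep the header rows, reverse the rest”; a sketch (the function name and arguments are mine, not from the actual userscript):

```javascript
// rows: the <tr> elements of the detected results table (as an array),
// headerCount: the number of leading header rows that must stay in place
function reverseResults(rows, headerCount) {
    var header = rows.slice(0, headerCount);
    var reversed = rows.slice(headerCount).reverse();
    return header.concat(reversed);
}
```

In the real DOM you would then re-append the returned rows to the table body; appendChild moves an existing node, so no cloning is needed.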
Ok, that’s it! Using this script the search results are ordered the correct way. Let’s wait for a response from these nerdy SquirrelMail users ;-)
Ladies and gentlemen, waiting is finally over. I’m proud to introduce Rumpel!
After more than one year of highly frequent persuasion he finally set up his own blog named RforRocks. A few minutes after its release there are already lots of myths surrounding the origin of that name. It may stand for:
Who knows? Not me! So try to grill Rumpel about this issue. And by the way, he’s in any case worth following ;-)
(so Rumpel, time for my payoff)
Today I attended a workshop on Shibboleth, organized by the AAI team of the DFN. There are several problems I’ll explain in this posting.
What the hell is Shibboleth!?
Shibboleth is a system that provides a single sign-on (SSO) solution for different services. It is split into two modules: the Identity Provider (IdP), which knows the authentication stuff via an Identity Management (IdM) system (e.g. a database like LDAP), and the Service Provider (SP), which (generally) has no knowledge about the accounts of the users that make use of its services. One example may be a university (school, company etc.) as IdP, which provides accounts for its students and staff members, and a scientific journal (mail provider, library, e-learning platform etc.) as SP, which offers copies to students. So the journal has to verify that the requesting user is a student or a staff member. The current system is either based on user authentication at each SP or on IP restrictions (e.g. only users from 141.48.x.x are allowed to download), so the users have to manage a lot of different accounts for all those services, or the SPs have to maintain IP black- or whitelists. Of course this is unsatisfying.
Here comes Shibboleth! It handles the communication between IdPs and SPs, so a single user only has to have one account at their IdP and is able to use all services of the SPs that have agreements with the user’s IdP.
I don’t want to go into detail. Just for the record: it is based on XML messages over the web, can be implemented in Java/C++/PHP, verification is done via certificates, a lot of restrictions…
However, figure 1 illustrates the working principle. First the user requests a service of a SP (1), there are two possibilities:
There is no active session on the SP, so the user is linked to a Discovery Service (DS) (2).
This DS lets the user choose their IdP from a pool of known IdPs (3). The DS may be implemented by the SP itself, or it is provided by someone like the DFN.
The user chooses one of the IdP's and is linked to the website of this IdP (4) to authenticate itself (5).
The IdP decides whether it is a valid user or not (authentication by form, session based recognition or something like that), so again two possibilities (6):
If it is a valid user, the IdP sends some user related stuff to the SP, so the SP knows it is a valid user.
Otherwise the IdP informs the SP that the authentication has failed.
If there is an active session (7), the SP already knows whether the user is allowed to request anything...
I’m not the person to evaluate that code, I haven’t seen any yet, but I see some other problems that are not concerned with code exploits.
The first problem isn’t that critical, but the current situation is that each SP (library, mail provider, computer pools etc.) has a separate account for every user. For historical reasons they are all disconnected, so it will be a hard job to combine all of them. But it’s possible nevertheless.
Basically (yes, the instructors always said basically) the SPs shouldn’t know anything about the user except their validity. But they also mentioned that e.g. an e-learning platform might want to know whether the user has a prediploma or something like that, so it has to receive this information from the IdP. Of course this has to be controlled via contracts, but what if the SP also wants to know some grades or a mail address to communicate with a user!? You may not want to provide that much information to every SP. That situation isn’t considered anywhere.
It’s also a terrible thing that the user doesn’t know what kind of information is offered by the IdP. In their demonstration one could see that the SP perfectly receives information from an IdP by displaying it (consisting of role at the IdP, mail address, given name, surname and so on). So the possibility of sending all LDAP attributes to the SP is undeniably there; who can promise that not all information will be transferred!?
Remember: The provided data is verified! No chance for trashmail or something like this!
I think a much better solution would be if the IdP told the user which attributes are requested by which SP before the user authenticates and thus commits the access to this data.
Yes, the good old phishing problem. I think it would be a very interesting experiment to build a fake SP, maybe called bamja, and pretend to offer free music to students. Just authenticate as a member of a university…
Yeah, cool thing! Just log in and I get any music I want!?
But of course the DS is also faked, and why not even the university website (maybe found at something like auth.uni-halle.de.whatever.de). We all know that not every student has the technological knowledge we have, so I think this attempt will catch a lot of people. And if there really is some music behind the faked authentication page, this user will probably tell their friends about this cool feature and you are able to catch a lot of accounts in a short period of time.
Ah, and because we have this new feature, with this account you can access their mail account, request books in a library, buy lectures in an e-learning tool and so on! Maybe you can even cancel their university registration!?
Micha immediately recognized the high DDoS potential. Imagine one single IdP (e.g. a university) and hundreds of SPs (journals, libraries, software providers etc.). Every time you request something from one of these SPs, it sends a big XML message to the IdP, containing lots of data (certificates, web addresses and so on). So you just have to request some stuff from different clients at any of these SPs and they will flood the IdP with so much data that the IdP may fail to parse everything. The SPs don’t recognize each other, and the IdP just sees different SPs until it parses the XML, so there is no chance to block a request!? Isn’t that a nice scenario? ;-)
Of course the idea of SSO is very smart, but I don’t like what they build…
And, by the way, I don’t really want that much cookie trash in my browser.