Welcome to my new category: ShortCut! Here I’ll briefly explain some smart features, little-known extensions or uncommon ways of going for gold.
Today it’s about the GNU R function locator.
With locator you are able to detect the mouse position inside your plot. Just run locator() and click some points; when you’re finished, click the right mouse button and locator will print the x and y values of the clicked positions.
With this tool it’s possible to visually validate some numerical calculations.
With a little bit more code, you can output the coordinates right in your plot:
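In GNU R that could look like this (a minimal sketch; the example data, rounding and label position are of course up to you):

```r
# Draw some example points, wait for one mouse click inside the
# plot, and annotate the clicked position with its coordinates.
plot(rnorm(20), rnorm(20))
p <- locator(1)   # click into the plot; returns a list with $x and $y
text(p$x, p$y, labels = paste(round(p$x, 2), round(p$y, 2)), pos = 3)
```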
With a click into the plot you’ll be able to create a result like figure 1.
I’m currently attending a lecture with the great name RNA-Seq, dealing with next generation sequencing (NGS). I think the lecture is more or less aimed at biological scientists and people who work with genome analyzers, but I think there is no harm in visiting this lecture and getting to know the biologists’ point of view.
These scientists use different sequencing platforms. Some popular examples are Roche 454, Illumina/Solexa, ABI SOLiD, Pacific Biosciences PacBio RS, Helicos HeliScope™ Single Molecule Sequencer or Polonator, but there are many more such platforms. If you are interested in these different techniques, you are referred to [Met09].
There is no standard, so all these machines produce output in different formats and qualities. In general you’ll get a FASTQ file as the result of sequencing. This file contains a large number of more or less short reads and a quality score for each recognized nucleotide. The quality scores are encoded as ASCII characters, and the file consists of four line types.
Here is an example of such a file:
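A (made-up) record with a ten-base read could look like this:

```
@SEQ_ID_001
GATTTGGGGT
+
!''*((((**
```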
As you can see, for each read the file contains an identifier line starting with @, the recognized sequence, a comment line starting with +, followed by the quality scores for all bases. It’s a big problem that there is no common standard for these quality scores; they differ in range depending on the sequencing platform. The original Sanger format uses PHRED scores ([EG98] and [EHWG98]) in the ASCII range 33-126 (! - ~), Solexa uses Solexa scores encoded in the ASCII range 59-126 (; - ~), and with Illumina 1.3+ PHRED scores in the ASCII range 64-126 (@ - ~) were introduced. So you sometimes won’t be able to determine which format your FASTQ file comes from; the Illumina range, for example, lies within all three of these encodings. If you want to learn more about FASTQ files and formats, you are referred to [CFGHR10].
Interested readers are free to translate the ASCII-coded quality scores of my small example into numerical quality scores and post the solution in the comments!
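For the impatient: the translation is a one-liner in e.g. Python, since for the PHRED encodings the numeric score is just the ASCII code minus the platform’s offset (33 for Sanger, 64 for Illumina 1.3+; Solexa additionally defines its scores differently):

```python
def phred_scores(quality, offset=33):
    """Decode an ASCII-encoded quality string into numeric scores."""
    return [ord(ch) - offset for ch in quality]

# '!' is ASCII 33, i.e. the worst Sanger score 0;
# 'I' is ASCII 73, i.e. a score of 40.
print(phred_scores("!I"))  # [0, 40]
```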
There is a great tool established for working with the resulting FASTQ files (that’s just one small field of application): Galaxy. It is completely open source and written in Python. Those who have already worked with it told me that you can easily extend it with plug-ins. You can choose whether to run your own copy of this tool or to use the web platform of Penn State. There’s a huge collection of tools; I’ve only worked with a small subset of it, but I like it. It seems you can upload data of unlimited size and it never gets deleted!? Not bad, guys! You can share your data and working history, and you can create workflows to automate some jobs. Of course I’m excited to write an encoder and decoder for other data like videos or music to and from FASTQ - let’s see if there’s some time ;-)
But this platform also has some inelegances. Raw data is often presented in an unformatted way. Have a look at figure 1: you can see there is a table whose columns are separated by tabs, but if one word in a column is much shorter than another one in this column, the table loses its human readability! Here I’ve colorized the columns, but if the background is completely white, you have no chance to read it.
So instead of getting angry I immediately wrote a userscript. It adds a button at the top of pages with raw data, and when it is clicked, it creates an HTML table from this data. You can see a resulting table in figure 2. If you think it’s nice, just download it at the end of this article.
All in all, I can only guess what’s coming next!
[CFGHR10] Peter J. A. Cock, Christopher J. Fields, Naohisa Goto, Michael L. Heuer, and Peter M. Rice. The Sanger FASTQ file format for sequences with quality scores, and the Solexa/Illumina FASTQ variants. Nucleic Acids Research, 38(6):1767–1771, April 2010.
[EHWG98] Brent Ewing, LaDeana Hillier, Michael C. Wendl, and Phil Green. Base-Calling of Automated Sequencer Traces Using Phred. I. Accuracy Assessment. Genome Research, 8(3):175–185, March 1998.
Apart from an IMAP/POP service we provide a webmail front end, SquirrelMail, to interact with our mail server. This tool has a very annoying feature: search results are ordered by date, but the wrong way around: from old to new!
SquirrelMail is a front end that is very simple to administrate. Not very pretty, but if my experimental Icedove doesn’t work, I use it too. Furthermore we have staff members who only use this tool and aren’t impressed by real user-side clients like Icedove or Sylpheed… Whatever, I had to re-sort these search results!
Searching for a solution didn’t yield one, so I had three options: modifying the SquirrelMail code itself (a very bad idea, I know), providing a plugin for SquirrelMail, or writing a userscript.
The layout of this website is lousy! I think they never heard of divs or CSS; everything is managed by tables in tables of tables and inline layout specifications -.-
So detecting the right table wasn’t that easy. I had to find the table that contains a header with the key From :
Once I’ve found such a table, all its rows have to be sorted from last to first, except the first ones defining the header of that table. So I modified the DOM:
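Stripped of the DOM plumbing (finding the table, removing and re-appending the `<tr>` elements), the reordering boils down to this sketch (names are mine, not from the actual userscript):

```javascript
// Reverse the data rows of a result table while keeping the header
// row(s) in place. `rows` is an array like a table's list of <tr>
// elements, `headerCount` the number of leading header rows to keep.
function resortRows(rows, headerCount) {
  var header = rows.slice(0, headerCount);
  var data = rows.slice(headerCount);
  data.reverse(); // old-to-new becomes new-to-old
  return header.concat(data);
}
```

In the userscript the same idea runs against the found table’s rows, which are then re-appended to the table body in the new order.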
Ok, that’s it! Using this script, the search results are ordered the right way. Let’s wait for a response from those nerdy SquirrelMail users ;-)
Ladies and gentlemen, waiting is finally over. I’m proud to introduce Rumpel!
After more than one year of highly frequent persuasion he finally set up his own blog, named RforRocks. A few minutes after its release there are already lots of myths surrounding the origin of that name. May it stand for:
Who knows? Not me! So try to grill Rumpel about this issue. And by the way, he’s definitely worth following ;-)
(so Rumpel, time for my payoff)
Today I attended a workshop on Shibboleth, organized by the AAI team of the DFN. There are several problems I’ll explain in this posting.
What the hell is Shibboleth!?
Shibboleth is a system that provides a single sign-on (SSO) solution for different services. It is split into two modules: the Identity Provider (IdP), which handles the authentication via an Identity Management (IdM) system (e.g. a database like LDAP), and the Service Provider (SP), which (generally) has no knowledge about the accounts of the users that make use of its services. One example may be a university (school, company etc.) as IdP, which manages the accounts of its students and staff members, and a scientific journal (mail provider, library, e-learning platform etc.) as SP, which offers copies to students. So the journal has to verify that the requesting user is a student or a staff member. The current system is either based on user authentication at each SP or on IP restrictions (e.g. only users from 141.48.x.x are allowed to download), so the users have to manage a lot of different accounts for the various services, or the SPs have to maintain IP black- or whitelists. Of course this is an unsatisfying situation.
Here comes Shibboleth! It handles the communication between the IdPs and SPs, so a single user only needs one account at his IdP and is able to use the services of all SPs that have arrangements with the user’s IdP.
I don’t want to go into detail. Just for the record: it is based on XML messages sent over the web, can be implemented in Java/C++/PHP, verification is done via certificates, there are a lot of restrictions…
However, figure 1 illustrates the working principle. First the user requests a service from an SP (1); there are two possibilities:
There is no active session on the SP, so the user is linked to a Discovery Service (DS) (2).
This DS lets the user choose his IdP from a pool of known IdPs (3). The DS may be implemented by the SP itself or be provided by someone like the DFN.
The user chooses one of the IdPs and is linked to the website of this IdP (4) to authenticate (5).
The IdP decides whether it is a valid user or not (authentication by form, session-based recognition or something like that), so again there are two possibilities (6):
If it is a valid user, the IdP sends some user-related attributes to the SP, so the SP knows it is a valid user.
Otherwise the IdP informs the SP that the authentication has failed.
If there is an active session (7), the SP already knows whether the user is allowed to request anything...
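The round trip above can be mimicked with a toy model (all names and the in-memory “IdM” are invented for illustration; this is not the real Shibboleth API, which speaks SAML over HTTP):

```python
# Toy model of the SSO round trip: the SP redirects unknown users to
# a discovery service, the chosen IdP authenticates against its IdM,
# and the SP trusts the IdP's assertion.
USERS = {"alice": "secret"}          # the IdP's identity management

def idp_authenticate(user, password):
    """Steps (5)/(6): the IdP validates the credentials."""
    return {"valid": USERS.get(user) == password, "user": user}

def sp_request(session, user=None, password=None, idp=idp_authenticate):
    """Steps (1)-(7) from the SP's point of view."""
    if session.get("authenticated"):              # (7) active session
        return "content"
    if user is None:                              # (2)/(3) no IdP chosen yet
        return "redirect to discovery service"
    assertion = idp(user, password)               # (4)/(5)/(6)
    if assertion["valid"]:
        session["authenticated"] = True           # SP trusts the IdP
        return "content"
    return "access denied"
```

A first request with an empty session yields the redirect; after a successful authentication the session carries the access.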
I’m not the person to evaluate the code (I haven’t seen any yet), but I see some other problems that are not related to code exploits.
The first problem isn’t that critical, but the current situation is that each SP (library, mail provider, computer pools etc.) maintains a separate account for every user. For historical reasons they are all disconnected, so it will be a hard job to combine all of them. But nevertheless it’s possible.
Basically (yes, the instructors kept saying basically) the SPs shouldn’t know anything about the user except his validity. But they also mentioned that e.g. an e-learning platform might want to know whether the user has a prediploma or something like that, so it has to receive this information from the IdP. Of course this has to be regulated via contracts, but what if the SP also wants to know some grades or a mail address to communicate with the user!? You may not want to provide that much information to every SP… That situation isn’t considered anywhere.
It’s also a terrible thing that the user doesn’t know what kind of information is passed on by the IdP. In their demonstration one could see that the SP receives the information from an IdP perfectly well, by displaying it (consisting of the role at the IdP, mail address, given name, surname and so on). So the possibility to send all LDAP attributes to the SP is undeniably there; who can promise that not all information will be transferred!?
Remember: the provided data is verified! No chance for trash mail addresses or anything like that!
I think a much better solution would be if the IdP told the user which attributes are requested by which SP before the user authenticates and thereby consents to the access to this data.
Yes, the good old phishing problem. I think it would be a very interesting experiment to build a fake SP, maybe called bamja, and pretend to offer free music to students. Just authenticate as a member of a university…
Yeah, cool thing! Just log in and I get any music I want!?
But of course the DS is faked as well, and, why not, even the university website (maybe found at something like auth.uni-halle.de.whatever.de). We all know that not every student has as much technological knowledge as we do, so I think this attempt would catch a lot of people. And if there really is some music behind the faked authentication page, the user will probably tell his friends about this cool feature and you’d be able to catch a lot of accounts in a short period of time.
Ah, and because we have this new feature, with this account you can access their mail accounts, request books in a library, buy lectures in an e-learning tool and so on! Maybe you can even cancel their university enrollment!?
Micha immediately recognized the high DDoS potential. Imagine one single IdP (e.g. a university) and hundreds of SPs (journals, libraries, software providers etc.). Every time you request something from one of these SPs, it sends a big XML message to the IdP, containing lots of data (certificates, web addresses and so on). So you just have to request some stuff from different clients via any of these SPs, and they will hit the IdP with so much data that the IdP may fail to parse everything. The SPs don’t know about each other, and the IdP just sees different SPs until it has parsed the XML, so there is no way to block a request!? Isn’t that a nice scenario? ;-)
Of course the idea of SSO is very smart, but I don’t like what they build…
And, by the way, I don’t really want that much cookie trash in my browser.
We often use GNU R to work on different things and to solve various exercises. It’s always a disgusting job to export e.g. a matrix with probabilities into a document to send it to our supervisors, but Rumpel just gave me a little hint.
The trick is called xtable and it can be found in the deb repository:
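On Debian that should be a one-liner (package name taken from the Debian archive; double-check for your release):

```shell
# install the xtable add-on for GNU R from the Debian repository
apt-get install r-cran-xtable
```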
It’s an add-on for R and does just what I need:
It is not limited to matrices and doesn’t only export to LaTeX; for further information take a look at ?xtable ;)
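For example (a small sketch with made-up data), exporting a matrix of probabilities as a LaTeX table:

```r
library(xtable)
# a 2x3 matrix of random "probabilities" with row and column names
m <- matrix(runif(6), nrow = 2,
            dimnames = list(c("A", "B"), c("p1", "p2", "p3")))
print(xtable(m, digits = 3))   # prints the LaTeX code for the table
```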
Btw. I just noticed that the GeSHi acronym for Gnu R syntax highlighting is rsplus …
Don’t ask me why or how, but I’ve lost gitosis’ post-update hook on my server. So, among other things, the .gitosis.conf wasn’t updated anymore…
Ok, let’s start at the beginning. I’m hosting some git repositories on my server using gitosis. Today I tried to create a new repository by editing the gitosis.conf in the gitosis-admin repo, but I couldn’t push the newly created repo to the server:
Damn, looking at the server, all permissions throughout the gitosis home seemed to be ok. But I was wondering why the $HOME/.gitosis.conf of the gitosis user wasn’t updated!? How could this happen?
After some thought I found out that the link $HOME/repositories/gitosis-admin.git/hooks/post-update was pointing to nirvana:
It seems that the file /usr/share/python-support/gitosis/gitosis-0.2-py2.5.egg/gitosis/templates/admin/hooks/post-update was deleted by an update (one week ago I updated gitosis 0.2+20090917-9 -> 0.2+20090917-10). I didn’t find an announcement of it or any workaround, but I identified a template in /usr/share/pyshared/gitosis/templates/admin/hooks/post-update. So the solution (hack) is to link to that file:
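With the paths from above, the hack is a single symlink (the hook must of course stay executable):

```shell
# recreate the dangling post-update symlink so that pushes to
# gitosis-admin update ~/.gitosis.conf again
ln -sf /usr/share/pyshared/gitosis/templates/admin/hooks/post-update \
    $GITOSISHOME/repositories/gitosis-admin.git/hooks/post-update
```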
Just replace $GITOSISHOME with the home directory of your gitosis user; mine, for example, lives in /home/git .
Now everything works fine. If anybody has a better solution, please tell me!
Suffix trees are a data structure that allows particularly fast implementations of many important string operations, like searching for a pattern or finding the longest common substring.
Introduction and definitions
I already introduced the Z-algorithm, which speeds up the search for a pattern by preprocessing the pattern. It is very useful if you have to search for one single pattern in a large number of words. But often you’ll try to find many patterns in a single text, so preprocessing each pattern is inefficient. Suffix trees instead come with a preprocessing of the text, to speed up the search for any pattern.
As expected, a suffix tree of a word $w$ (length $n$) is represented in the data structure of a rooted tree. Every path from the root to a leaf represents a suffix $w[i..n]\$$ of $w\$$ with $i \in \{1, \dots, n+1\}$. The union of all these suffixes is $S = \{w[1..n]\$, w[2..n]\$, \dots, \$\}$.
Every inner node (except the root) has at least two children. Every edge is labeled with a non-empty substring of $w\$$, the labels of the edges leaving a single node start with different symbols, and each leaf is indexed with $i \in \{1, \dots, n+1\}$.
The concatenation of all edge labels on a path from the root to the leaf with index $i$ represents the suffix $w[i..n]\$$.
For each node $v$:
The path from the root to $v$ is called $p(v)$.
The concatenation of the labels at all edges on $p(v)$ is $\overline{v}$.
$\overline{v}$ is the label of the path $p(v)$.
$\overline{v}$ is the path label to $v$.
Instead of $v$ we can also call this node $\overline{v}$.
A pattern $\alpha$ exists in the suffix tree of $w$ (further called $T_w$) if and only if there is a $\beta \in \Sigma^*$, so that $T_w$ contains a node $v$ with $\overline{v} = \alpha\beta$ ($\alpha$ is a prefix of $\overline{v}$).
A substring $\alpha$ ends in a node $v$ (if $\overline{v} = \alpha$) or in an edge to a node $v$ with $\alpha$ a proper prefix of $\overline{v}$.
An edge to a leaf is called a leaf-edge.
The tree contains all suffixes of the word extended with $. This tree is visualized in figure 1.
Building suffix trees: Write only top down
The write-only, top-down (WOTD) algorithm constructs the suffix tree in a top-down fashion.
Let $v$ be a node in $T_w$, then $\overline{v}$ denotes the concatenation of all edge labels on the path to $v$. Each node $\overline{v}$ in the suffix tree represents the set of all suffixes that have the prefix $\overline{v}$. So the set of path labels of the leaves below $\overline{v}$ can be written as $\overline{v}^{-1}S$ (all suffixes of the set $S$ of suffixes that start with $\overline{v}$, with this prefix removed).
This set is split into equivalence classes: for each symbol $c \in \Sigma$, the $c$-group of $\overline{v}^{-1}S$ contains the suffixes that start with $c$.
For groups that contain only one suffix we create a leaf with the corresponding suffix index and connect it to $\overline{v}$ with an edge labeled with that whole suffix.
In groups with a size of at least two we compute their longest common prefix $c\gamma$ (which starts with $c$) and create a node $\overline{v}c\gamma$. The connecting edge between $\overline{v}$ and $\overline{v}c\gamma$ gets the label $c\gamma$, and we continue recursively with this algorithm in the node $\overline{v}c\gamma$ with the set of remaining suffixes.
Exact pattern matching
All paths from the root of the suffix tree are labeled with prefixes of path labels. That is, they’re labeled with prefixes of suffixes of the string $w$. Or, in other words, they’re labeled with substrings of $w$.
To search for a pattern $\alpha$ in $w$, just walk through $T_w$, following the paths labeled by the characters of $\alpha$.
At any node $v$ with $\overline{v}$ a prefix of $\alpha$, find the edge whose label $l$ starts with the next unmatched symbol of $\alpha$. If such an edge doesn’t exist, $\alpha$ isn’t a substring of $w$. Otherwise try to match the remaining part of the pattern against $l$ towards the next node $u$. If $\overline{u}$ is not a prefix of $\alpha$ you’ll either get a mismatch, denoting that $\alpha$ isn’t a substring of $w$, or you run out of characters of $\alpha$ and have found it in the tree.
If $\overline{u}$ is a prefix of $\alpha$, continue searching at node $u$.
If you were able to find $\alpha$ in $T_w$, then $w$ contains $\alpha$ at every position denoted by the indexes of the leaves below your point of discovery.
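The WOTD construction and this matching walk can be condensed into a few lines of Python (my implementation for this post is in Java; this sketch represents the tree as nested dicts keyed by edge labels, stores no leaf indices, and assumes the word ends with the unique sentinel $):

```python
def wotd(suffixes):
    """Build a suffix (sub)tree from a set of suffixes, top-down."""
    tree = {}
    groups = {}
    for s in suffixes:                          # the c-groups
        groups.setdefault(s[0], []).append(s)
    for c, group in groups.items():
        if len(group) == 1:                     # single suffix: leaf edge
            tree[group[0]] = {}
        else:                                   # >= 2: split off the LCP
            lcp = group[0]
            for s in group[1:]:
                while not s.startswith(lcp):
                    lcp = lcp[:-1]
            tree[lcp] = wotd([s[len(lcp):] for s in group])
    return tree

def contains(tree, pattern):
    """Follow the edges labeled by the characters of `pattern`."""
    for label, subtree in tree.items():
        if pattern.startswith(label):           # pattern continues below
            return contains(subtree, pattern[len(label):])
        if label.startswith(pattern):           # pattern ends in this edge
            return True
    return pattern == ""

word = "abab$"
tree = wotd([word[i:] for i in range(len(word))])
print(contains(tree, "aba"), contains(tree, "bb"))  # True False
```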
Minimal unique substrings
$\alpha$ is a minimal unique substring of $w$ if and only if $w$ contains $\alpha$ exactly once and every proper prefix of $\alpha$ can be found at least twice in $w$.
To find such minimal unique substrings, walk through the tree to the inner nodes $v$ that have a leaf-edge. A minimal unique substring is $\overline{v}c$, where $c$ is the first character of the leaf-edge’s label: its prefix $\overline{v}$ isn’t unique ($v$ has at least two leaving edges) and every longer extension has a prefix ($\overline{v}c$) that is already unique.
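As a sanity check of the definition, here is a brute-force version in Python (the suffix tree does this much faster; the helper names are mine):

```python
def count(s, sub):
    """Number of (possibly overlapping) occurrences of sub in s."""
    return sum(s.startswith(sub, i) for i in range(len(s)))

def minimal_unique_substrings(s):
    """All substrings occurring exactly once whose longest proper
    prefix still occurs at least twice."""
    result = []
    for i in range(len(s)):
        for j in range(i + 1, len(s) + 1):
            sub = s[i:j]
            if count(s, sub) == 1 and count(s, sub[:-1]) >= 2:
                result.append(sub)
    return sorted(set(result))

print(minimal_unique_substrings("abab"))  # ['aba', 'ba']
```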
A maximal pair is a tuple $(i, j, l)$, so that $w[i..i+l-1] = w[j..j+l-1]$, but $w[i-1] \ne w[j-1]$ and $w[i+l] \ne w[j+l]$. A maximal repeat is the string represented by such a tuple.
If $\alpha$ is a maximal repeat, there is a node $\overline{v} = \alpha$ in $T_w$.
To find the maximal repeats, do a DFS on the tree. Label each leaf with the character to the left of the suffix that it represents. For each internal node:
If at least one child is marked as left-diverse, mark this node as left-diverse too.
Else, if its children carry different characters, mark this node as left-diverse.
Else all children carry the same character; copy it to the current node.
The path labels of left-diverse nodes are the maximal repeats.
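Again a brute-force illustration of the definition (the DFS above finds these via the tree instead of checking all pairs): $(i, j, l)$ is reported when the two occurrences match but can be extended neither to the left nor to the right.

```python
def maximal_pairs(s):
    """All maximal pairs (i, j, l) of s, found by exhaustive search."""
    pairs = []
    n = len(s)
    for l in range(1, n):
        for i in range(n - l + 1):
            for j in range(i + 1, n - l + 1):
                if s[i:i+l] != s[j:j+l]:
                    continue
                # not extendable to the left or right
                left_ok = i == 0 or s[i-1] != s[j-1]
                right_ok = i + l == n or j + l == n or s[i+l] != s[j+l]
                if left_ok and right_ok:
                    pairs.append((i, j, l))
    return pairs

# the repeated "ab" in "xabcab" can be extended in neither direction
print(maximal_pairs("xabcab"))  # [(1, 4, 2)]
```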
Generalized suffix trees
An extension of suffix trees is the generalized suffix tree. With it you can represent multiple words in one single tree.
Of course you have to modify the tree, so that you know which leaf index corresponds to which word. Just a little bit more to store in the leafs ;)
A generalized suffix tree is shown in figure 2.
There are a lot of other applications for a suffix tree structure. For example finding palindromes, search for regular expressions, faster computing of the Levenshtein distance, data compression and so on…
I’ve implemented a suffix tree in Java. The tree is constructed via WOTD and finds maximal repeats and minimal unique substrings. I also wanted pictures for this post, so I added a function that prints GraphViz code representing the tree.