I just had a confusing problem: instead of interpreting PHP scripts in our webserver's userdirs, Apache serves them for download!
It was caused by an upgrade from Lenny to Squeeze, and I spent a lot of hours fixing it.
This is really a serious problem: those pages are unreadable for people and search engines browsing them, and, even worse, if clients can access the PHP code of our students and staff they might discover security issues or passwords stored in these PHP files. So first of all I disabled public access to the webserver.
So what was the problem? When I noticed that phpMyAdmin and other stuff not related to userdirs still worked fine, I searched for things that are handled differently for userdirs. At long last I took a look into the libapache2-mod-php5 config file located in /etc/apache2/mods-available/php5.conf:
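The interesting part of that file looks roughly like this (the exact comments may differ in your version):

<IfModule mod_php5.c>
    <FilesMatch "\.ph(p3?|tml)$">
        SetHandler application/x-httpd-php
    </FilesMatch>
    <FilesMatch "\.phps$">
        SetHandler application/x-httpd-php-source
    </FilesMatch>
    # To re-enable php in user directories comment the following lines
    <IfModule mod_userdir.c>
        <Directory /home/*/public_html>
            php_admin_value engine Off
        </Directory>
    </IfModule>
</IfModule>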
As you can see, PHP is disabled as soon as the userdir module is enabled… Disgusting!
Commenting these lines out switched PHP back on for the userdirs. Very annoying!
Today I restarted my notebook to boot into another kernel. Unfortunately I couldn't log in to the desktop because neither the mouse nor the keyboard was working. The Xorg log gave me a hint about what had happened.
All of this hit my GRML installation. Unfortunately you can't switch to a virtual terminal while there is no keyboard control, so to change anything you have to connect via SSH or boot from a live CD or USB stick. The error reported in /var/log/Xorg.0.log looks like:
So you see, all input devices are turned off. Annoying!
To avoid this problem you have to add the following section to your /etc/X11/xorg.conf:
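Assuming the culprit is the input hotplugging of this Xorg version (the AllowEmptyInput/AutoAddDevices behaviour), a minimal version of that section looks like this:

Section "ServerFlags"
    Option "AllowEmptyInput" "off"
    Option "AutoAddDevices"  "off"
EndSection

With these options X falls back to the statically configured keyboard and mouse drivers instead of waiting for hotplugged devices.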
That should solve the problem. If you don’t have a xorg.conf yet you can create one with:
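For example, as root on a console while no X server is running:

Xorg -configure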
This will create /root/xorg.conf.new, so you just have to copy it to /etc/X11/xorg.conf.
Since it was the first reboot in about 30 days and I had updated/installed a lot of new software in the meantime, I'm not able to blame anyone for this bug. But if you are, feel free to do so ;-)
As you can see, I wasn't able to obtain even half of the points of Demel and Keiblinger; looking at the results of the individual games, there was no chance for my bot to beat them. But nevertheless I finished in second place! I couldn't find any contact information for these guys, so I wasn't able to congratulate them personally, but if they read this article: nice work, guys ;-)
Of course congratulations to the other programmers as well; even if you didn't win, taking part is what counts!
By the way, the organizer informed me about an IndexOutOfBoundsException in de.binfalse.martin.fmcontest.map.DMap.dirTo(DMap.java:192) that made my bot stop working 17 times. But I won't update my code since it makes no sense beyond this contest… it's just to let you know.
Last but not least, my thanks go to freiesMagazin itself. It was a very nice contest and I'm really happy about the voucher! I already have a good idea what to buy :-P
P.S.: Since the two programmers sharing the first place have to split their 50 € voucher, they each won a voucher of 25 €. That means that, with a voucher of 30 €, I won the biggest one ;-)
Rumpel frequently reminded me to do that, but I was too lazy to track down my own modifications to the WP core… But today I did! And, thinking ahead, this time I'm recording what I change in this version. Mainly for myself, but maybe you'll like it too ;-)
Display whole tag cloud in wp-admin
When you create an article, WP by default only displays the 45 most-used tags in the sidebar. I want to see all of them:
File to change: wp-admin/includes/meta-boxes.php
File to change: wp-admin/admin-ajax.php
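The relevant spot is the hard-coded limit of 45 in the query that builds the tag cloud. For the get-tagcloud handler in admin-ajax.php the change boils down to something like this (the exact context may differ between WP versions):

- $tags = get_terms( $taxonomy, array( 'number' => 45, 'orderby' => 'count', 'order' => 'DESC' ) );
+ $tags = get_terms( $taxonomy, array( 'number' => 0, 'orderby' => 'count', 'order' => 'DESC' ) );

Passing 0 (or an empty string) as 'number' tells get_terms() not to limit the result.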
If I want to insert a link into an article I often use the button above the textarea. It's very friendly of WP to remind users to start links with http://, but for me it's only annoying because I usually copy & paste the URL from the browser's address bar and then have to delete the http:// from the pop-up first…
To get rid of it permanently, edit wp-includes/js/quicktags.js. Unfortunately this script is just one line, so a diff won't help you, but I can give you a vim substitution command:
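A substitution along these lines should do it; it replaces every occurrence of the plain string literal 'http://' (or "http://") with an empty string:

:%s/\(['"]\)http:\/\/\1/\1\1/g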
Update, 7 July 2011: For WP > 3.2 you also need to apply this substitution to wp-includes/js/tinymce/plugins/wplink/js/wplink.js to eliminate this annoying http:// from the new link overlay as well…
When I write mails to people for the first time, they usually answer immediately with something like:
What is that crazy crypto stuff surrounding your mails? Wondering why I can't read it!?
There are lots of legends out there about this clutter; most of them are only fairy tales, so here is the one and only true explanation!
As a friend of security I always try to encrypt my mails with GPG. That is only possible if the recipient also uses GPG and I have his/her public key. If that is not the case, I just sign my mail to give the addressee the chance to verify that the mail is from me and that nobody has modified its content on the way. So the clutter is the electronic signature of the mail! It's simple ASCII, not readable for human eyes, but readable for some intelligent tools.
There are two kinds of signatures:
inline signature: it surrounds the message with cryptographic armor. The disadvantage is that you can't sign attachments or HTML mails, and the text is more or less hidden between PGP goodies.
attached signature: the crypto stuff is attached as signature.asc. The disadvantage is that mail servers may be alarmed by this attachment and drop the mail.
Since I usually write ASCII mails without attachments I sign them inline. Such a signed mail that reaches your inbox may look like:
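A schematic example (with the actual signature data left out):

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Hi,

this is the actual text of the mail.

Regards, Martin
-----BEGIN PGP SIGNATURE-----

[several lines of base64-encoded signature data]
-----END PGP SIGNATURE-----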
Depending on the mail client I use, I usually also attach my public key. So if you're using a mail client that is able to handle GPG-signed/encrypted mails, it should parse the crypto stuff and verify whether the signature is correct or not. In this case the mail will be collapsed, so that you'll see something like this (with an indication whether the signature was valid or not):
But if you're using a client that has never heard of GPG, it won't recognize the cryptographic parts and you'll only see lots of clutter. In that case I recommend changing your mail client! ;-)
I just developed a small crawler to check my online content at binfalse.de for W3C validity and the availability of external links. Here is the code along with some statistics…
The new year has just started and I wanted to check what I produced in my blog over the last year. Mainly I wanted to ensure more quality: my aim was to make sure that all my blog content is W3C valid and that all external resources I link to are still available.
First I thought about parsing the database content, but in the end I decided to check the real content as it is available to all of you. The easiest way to do something like this is with Perl, at least for me.
The following tasks had to be done for each page of my blog:
Check whether the W3C validator likes the page
For each link to an external resource: check whether it responds with 200 OK
For each internal link: check that page too, if not already checked
While checking each page I also save the number of outgoing links to a file to get an overview.
Here is the code:
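The full script is a bit longer; a minimal sketch of how it works, assuming the modules mentioned in the next paragraph, could look like this (line numbers mentioned further down refer to the full version):

#!/usr/bin/perl
# Sketch of the crawler: W3C-validate every page of the blog, follow internal
# links, and check that external links answer with 200 OK.
use strict;
use warnings;
use LWP::UserAgent;
use URI;
use XML::TreeBuilder;
use WebService::Validator::HTML::W3C;

my $domain = "binfalse.de";            # adjust this to your site
my @queue  = ("http://binfalse.de/");  # start page
my (%seen, %external);

my $ua  = LWP::UserAgent->new (timeout => 30);
my $w3c = WebService::Validator::HTML::W3C->new ();

open my $VAL,  '>', '/tmp/check-links.val'  or die $!;
open my $FAIL, '>', '/tmp/check-links.fail' or die $!;
open my $LOG,  '>', '/tmp/check-links.log'  or die $!;

while (my $url = shift @queue)
{
	next if $seen{$url}++;

	# ask the W3C validator about this page
	if ($w3c->validate ($url))
	{
		print $VAL ($w3c->is_valid
			? "valid: $url\n"
			: "invalid: $url (" . $w3c->num_errors . " errors)\n");
	}
	else { print $VAL "validator failed: $url\n"; }

	# fetch the page and extract all links
	my $response = $ua->get ($url);
	next unless $response->is_success;
	my $tree = XML::TreeBuilder->new ();
	eval { $tree->parse ($response->decoded_content); 1 } or next;

	my ($internal, $extern) = (0, 0);
	for my $a ($tree->find_by_tag_name ('a'))
	{
		my $href = $a->attr ('href') or next;
		$href = URI->new_abs ($href, $url)->as_string;  # resolve relative links
		if ($href =~ m/$domain/)
		{
			# internal link: only queue pages ending with a slash
			$internal++;
			push @queue, $href if $href =~ m/\/$/;
		}
		else
		{
			# external link: expect a 200 OK
			$extern++;
			next if $external{$href}++;
			my $check = $ua->get ($href);
			print $FAIL $check->code . " $href (found on $url)\n"
				unless $check->is_success;
		}
	}
	print $LOG "$url $internal $extern\n";
	$tree->delete ();
}
close $VAL; close $FAIL; close $LOG;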
You need LWP::UserAgent, XML::TreeBuilder and WebService::Validator::HTML::W3C installed. If you're sitting in front of a Debian-based distribution, just execute:
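Assuming the usual Debian package names for these modules, that would be:

aptitude install libwww-perl libxml-treebuilder-perl libwebservice-validator-html-w3c-perl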
The script checks all pages it can find whose URLs match the $domain pattern, so adjust the $domain variable at the start of the script to fit your needs.
It writes all W3C results to /tmp/check-links.val; the following line types may be found within that file:
So it should be easy to parse if you are searching for invalid pages.
Each external link that doesn't answer with 200 OK produces an entry in /tmp/check-links.fail of the form
Additionally, it writes the number of internal links and the number of external links of each page to /tmp/check-links.log.
If you want to try it on your site, keep in mind to change the content of $domain and take care of the pattern in line 65:
Because I don't want to check internal links to files like .png or .tgz, the URL has to end with /. All my pages containing parseable XML end with /; if yours don't, try to find a similar expression.
As I said, I've looked at the results a bit. Here are some statistics (as of 2011/Jan/06):
Pages containing W3C errors
Number of errors
Mean errors per page
Mean of internal/external links per page: 230.9833 / 15.39875
Median of internal/external links per page: 216 / 15
Dead external links
Dead external links w/o Twitter
Most of the errors are already fixed; the others are in progress.
The high number of links that no longer work is caused by the little Twitter buttons at the end of each article. My crawler is of course not authorized to tweet, so Twitter responds with 401 Unauthorized. One of the other five failures is due to a certificate problem; the administrators of the remaining dead links have been informed.
I also analyzed the outgoing links per page. I clustered them with k-means; the result can be seen in figure 1. How did I produce this graphic? Here is some R code:
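A sketch of the kind of R code that produces such a figure, assuming /tmp/check-links.log holds one line per page with the URL and the two link counts (the number of clusters here is an arbitrary choice):

# read the crawler's log: url, internal links, external links
links <- read.table ("/tmp/check-links.log", col.names = c ("url", "internal", "external"))

# cluster the pages by their numbers of internal and external links
set.seed (1)
cl <- kmeans (links[, c ("internal", "external")], centers = 3)

# scatter plot, colored by cluster membership, with the cluster centers marked
plot (links$internal, links$external, col = cl$cluster, pch = 19,
      xlab = "internal links", ylab = "external links",
      main = "outgoing links per page")
points (cl$centers, col = seq_len (nrow (cl$centers)), pch = 4, cex = 2, lwd = 2)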
You're right, there is a lot of stuff in the image that isn't essential, but take it as an example to show R beginners what is possible. Maybe you want to produce similar graphics!?