As far as I know, the OpenVZ-associated vzprocps tools are only available in version 2.0.11, at least for Debian. Unfortunately they are damn buggy in this version, and therefore unusable…
There are two smart tools included in this package: vzps and vztop. These programs help you a lot when dealing with the processes of your running containers. But in 2.0.11 they don’t work:
It seems there is an update, but it’s not available as a .deb yet. Here is an example of an alternative to vzps for finding zombies:
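One way to do it is a small shell sketch (an assumption on my side: it relies on the OpenVZ kernel adding an envID: line to each /proc/&lt;pid&gt;/status, where envID 0 means the host itself):

```shell
# Sketch of a vzps substitute: list zombie processes together with the
# OpenVZ container (CTID) they belong to. Assumes an OpenVZ kernel that
# exposes an "envID:" field in /proc/<pid>/status.
find_zombies () {
  for status in /proc/[0-9]*/status; do
    state=$(awk '/^State:/ {print $2}' "$status" 2>/dev/null) || continue
    [ "$state" = "Z" ] || continue
    pid=${status#/proc/}; pid=${pid%/status}
    ctid=$(awk '/^envID:/ {print $2}' "$status" 2>/dev/null)
    echo "zombie: PID $pid (CTID ${ctid:-unknown})"
  done
}
find_zombies
```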
It’s a bit complicated, but you can write a small script to grep for further things.
| Ranking | Programmer(s) | Score in 2500 games |
|---------|---------------|---------------------|
| 1. | Demel and Keiblinger | 232.595,79 |
As you can see, I wasn’t able to obtain half the points of Demel and Keiblinger; looking at the results of each game, my bot had no chance to beat them. But nevertheless I won second place! I couldn’t find any contact information for these guys, so I wasn’t able to congratulate them personally, but if they read this article: nice work, guys ;-) Of course congratulations to the other programmers too. Even if you didn’t win, taking part is what counts!
By the way, the organizer informed me about an exception at de.binfalse.martin.fmcontest.map.DMap.dirTo(DMap.java:192) that caused my bot to quit working 17 times. But I won’t update my code, since it makes no sense beyond this contest… It’s just to let you know.
Last but not least, my thanks go to freiesMagazin itself. It was a very nice contest and I’m really happy about the voucher! I already have a good idea what to buy :-P
P.S.: Since the two programmers in first place have to split their 50 € voucher, they each won a 25 € voucher. That means that, with my 30 € voucher, I won the biggest prize ;-)
Rumpel frequently reminded me to do this, but I was too lazy to track down my own modifications to the WP core… Today I finally did! And, thinking ahead, I’m recording here what I change in this version. Mainly for myself, but maybe you’ll like it ;-)
Display whole tag cloud in wp-admin
When you create an article, WP by default only displays the 45 most-used tags in the sidebar. I want to see all of them:
File to change:
File to change:
If I want to insert a link into an article, I often use the button above the textarea. It’s very friendly of WP to remind users to start links with http://, but for me it’s only annoying, because I usually copy & paste the URL from the browser’s address bar and then have to delete the http:// from the pop-up…
To delete it permanently, edit wp-includes/js/quicktags.js. Unfortunately this script is just one line, so a diff won’t help you, but I can give you a vim substitution command:
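If you prefer sed over vim, the same edit can be sketched like this. It is demonstrated on a scratch file standing in for wp-includes/js/quicktags.js, and the prompt line is my assumption about what the minified script contains; point the command at the real file to apply it:

```shell
# Demo file standing in for wp-includes/js/quicktags.js; the prompt
# call is an assumption about the minified source.
printf "b=prompt(quicktagsL10n.enterURL,'http://');" > quicktags-demo.js

# drop the pre-filled http:// (a .bak backup is kept)
sed -i.bak "s/'http:\\/\\/'/''/g" quicktags-demo.js
cat quicktags-demo.js
```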
Update, July 7th, 2011: For WP > 3.2 you also need to apply this regex to wp-includes/js/tinymce/plugins/wplink/js/wplink.js, to eliminate this annoying http:// from the new link overlay as well…
When I write mails to people for the first time, they usually answer immediately with something like:
What is that crazy crypto stuff surrounding your mails? Wondering why I can't read it!?
There are lots of legends out there about this clutter, and most of them are only fairy tales. Here is the one and only true explanation!
As a friend of security, I always try to encrypt my mails with GPG. That is only possible if the recipient also uses GPG and I have his/her public key. If that is not the case, I just sign the mail, giving the addressee the chance to verify that it is from me and that nobody modified its content on the way. So the clutter is the electronic signature of the mail! It’s simple ASCII code, not readable for human eyes, but readable for some intelligent tools.
There are two kinds of signatures:
- Inline signatures: the message is surrounded by cryptographic armor. The disadvantage: you can’t sign attachments or HTML mails, and the text is more or less hidden between PGP goodies.
- Attached signatures: the crypto stuff is attached as signature.asc, with the disadvantage that mail servers may be alarmed by this attachment and drop the mail.
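Both kinds can be produced with GnuPG itself. A minimal sketch, assuming gpg >= 2.1 and using a throwaway keyring so the demo doesn’t touch your real keys:

```shell
# Throwaway keyring: the demo key lives in a temp dir, not in ~/.gnupg
export GNUPGHOME="$(mktemp -d)"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Demo <demo@example.com>" default default never

echo "Hello, this is the mail body." > mail.txt
gpg --clearsign --output inline.asc mail.txt               # inline signature
gpg --armor --detach-sign --output signature.asc mail.txt  # attached signature
gpg --verify signature.asc mail.txt                        # check it
```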
Since I usually write ASCII mails without attachments, I sign them inline. Such a signed mail may look like this when it reaches your inbox:
Depending on the mail client I use, I usually also attach my public key. So if you’re using a mail client that is able to handle GPG-signed/encrypted mails, it should parse the crypto stuff and verify whether the signature is correct. In this case the mail is collapsed, so that you’ll see something like this (with an indication of whether the signature was valid):
But if you’re using a client that has never heard of GPG, it won’t recognize the cryptographic parts and you’ll only see lots of clutter. In this case I recommend changing your mail client! ;-)
To learn more about GPG take a look at gnupg.org.
Just developed a small crawler to check my online content at binfalse.de in terms of W3C validity and the availability of external links. Here is the code and some statistics…
The new year has just started and I wanted to check what I produced in my blog over the last year. Mainly I wanted to ensure more quality; my aim was to make sure all my blog content is W3C-valid and all external resources I link to are still available. First I thought about parsing the database content, but in the end I decided to check the real content as it is available to all of you. The easiest way to do something like this, at least for me, is Perl. The following tasks had to be done for each page of my blog:
- Check if W3C likes the site
- For each link to an external resource: check whether it responds with 200 OK
- For each internal link: check that page too, if not already checked
While checking each page I also saved the number of outgoing links to a file, to get an overview. Here is the code:
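The original is written in Perl; the steps above can also be sketched in Python. Everything here except the domain and the described behaviour is an assumption on my side:

```python
# Python sketch of the crawler's per-page steps (the original is Perl).
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

DOMAIN = "binfalse.de"  # adjust to your site

class LinkParser(HTMLParser):
    """Collect all href targets of <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def classify(page_url, href):
    """Resolve a link against its page and decide whether it stays on DOMAIN."""
    url = urljoin(page_url, href)
    kind = "internal" if urlparse(url).netloc.endswith(DOMAIN) else "external"
    return kind, url

def crawl(start):
    """Breadth-first walk over all internal pages, counting links per page."""
    seen, queue = set(), deque([start])
    while queue:
        page = queue.popleft()
        if page in seen:
            continue
        seen.add(page)
        parser = LinkParser()
        parser.feed(urlopen(page).read().decode("utf-8", "replace"))
        internal = external = 0
        for href in parser.links:
            kind, url = classify(page, href)
            if kind == "external":
                external += 1
                # here: fetch the url and log non-200 responses,
                # and run the page through the W3C validator
            else:
                internal += 1
                if url.endswith("/"):  # only follow "directory" URLs
                    queue.append(url)
        print(page, internal, external)
```

To run it for real, call crawl("https://binfalse.de/"); it is left uncalled here so the sketch does no network traffic on its own.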
You need to install WebService::Validator::HTML::W3C. If you’re sitting in front of a Debian-based distribution, just execute:
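Roughly like this; the package name is my assumption, derived from Debian’s lib&lt;module&gt;-perl naming convention, and CPAN works everywhere:

```shell
# Package name assumed from Debian's lib<module>-perl convention:
apt-get install libwebservice-validator-html-w3c-perl
# or, on any system with Perl:
cpan WebService::Validator::HTML::W3C
```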
The script checks all pages that it can find and that match the configured domain, so adjust the $domain variable at the start of the script to fit your needs.
It writes all W3C results to /tmp/check-links.val; the following line types may be found within that file:

So it should be easy to parse if you are searching for invalid pages.
Each external link that doesn’t answer with 200 OK produces an entry in /tmp/check-links.fail of the form
Additionally it writes for each website the number of internal links and the number of external links to
If you want to try it on your site, keep in mind to change the content of $domain and take care of the pattern in line 65:
Because I don’t want to check internal links to files like .tgz, the URL has to end with a /. All my pages containing parseable XML end with /; if yours don’t, try to find a similar expression.
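To illustrate the rule (the domain is the post’s; the exact Perl pattern may differ from this assumed one):

```python
import re

# Follow only URLs on the blog's domain that end with a slash;
# anything else (e.g. .tgz downloads) is skipped.
follow = re.compile(r'^https?://(www\.)?binfalse\.de(/.*)?/$')

print(bool(follow.match("http://binfalse.de/2011/01/06/")))   # True  (followed)
print(bool(follow.match("http://binfalse.de/some/file.tgz"))) # False (skipped)
```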
As I said, I’ve looked at the results a bit. Here are some statistics (as of 2011/Jan/06):
| Statistic | Value |
|-----------|-------|
| Sites containing W3C errors | 38 |
| Number of errors | 63 |
| Mean errors per site | 0.1309771 |
| Mean internal/external links per site | 230.9833 / 15.39875 |
| Median internal/external links per site | 216 / 15 |
| Dead external links | 82 |
| Dead external links w/o Twitter | 5 |
Most of the errors are already fixed; the rest are in progress.
The high number of links that no longer work comes from the little Twitter buttons at the end of each article. My crawler is of course not authorized to tweet, so Twitter responds with 401 Unauthorized. One of the other five fails because of a certificate problem; the administrators of the other dead links have been informed.
I also analyzed the outgoing links per page. I clustered them with k-means; the result can be seen in figure 1. How did I produce this graphic? Here is some R code:
You’re right, there is a lot of stuff in the image that is not essential, but take it as an example to show R beginners what is possible. Maybe you want to produce similar graphics!?