Native SSH server on LineageOS
September 6th, 2018
I finally trashed my shitty Shift5.2 and got a spare OnePlus One from a good colleague.
tldr: scroll down to Setup of SSH on LineageOS.
I strongly discourage everyone from buying a ShiftPhone. The phone was/is on an Android security patch level from 2017-03-05 – which is one and a half years ago! Not to mention that it was still running Android 5.1.1 in 2018… With soo many bugs and security issues, in my opinion this phone is a danger to the community! And nobody at Shift seemed to really care…
However, I now have a OnePlus One, which is supported by LineageOS - the successor of CyanogenMod. So, first action was installing LineageOS. Immediately followed by installing SU to get root access.
Next, I’d like to have SSH access to the phone. I did love the native SSH server on my Galaxy S2, which used to run CyanogenMod for 5+ years. Using the SSH access I was able to integrate it in my backup infrastructure and it was much easier to quickly copy stuff from the phone w/o a cable :)
The original webpage including a how-to for installing SSH on CyanogenMod has unfortunately vanished. There is a copy available from the WayBackMachine (thanks a lot guys!!). I still thought dumping up-to-date step-by-step instructions here may be a good idea :)
Setup of SSH on LineageOS
The setup of the native SSH server on LineageOS seems to be pretty similar to the CyanogenMod version. First you need a shell on the phone, e.g. through adb, and you need to become root (su). Then just follow these three steps:
Create SSH daemon configuration
You do not need to create a configuration file from scratch; you can use /system/etc/ssh/sshd_config as a template. Just copy the configuration file to /data/ssh/sshd_config:
cp /system/etc/ssh/sshd_config /data/ssh/sshd_config
Just make sure you set the following things:
PermitRootLogin without-password
PubkeyAuthentication yes
PermitEmptyPasswords no
ChallengeResponseAuthentication no
Subsystem sftp internal-sftp
Update: Ed Huott reported:
There was one additional step I needed to make it work. It was necessary to set StrictModes no in /data/ssh/sshd_config in order to keep sshd from failing to start due to bad file ownership/permissions on the /data/.ssh directory and files, as well as the parent /data directory. This is because the owner:group of /data is system:system, which doesn't match either the root or shell owner:group used for /data/.ssh and its contents. I felt that setting StrictModes no was a better solution than messing with the owner:group of the /data directory!
Setup SSH keys
We’ll be using SSH-keys to authenticate to the phone. If you don’t know what SSH keys are, or how to create them, you may go to an article that I wrote in 2009 (!!) or use an online search engine.
First, we need to create /data/.ssh on the phone (note the dot!) and give it to the shell user:
mkdir -p /data/.ssh
chmod 700 /data/.ssh
chown shell:shell /data/.ssh
Second, we need to store our public SSH key (probably stored in ~/.ssh/id_rsa.pub on your local machine) in /data/.ssh/authorized_keys on the phone. If that file already exists, just append your public key on a new line. Afterwards, hand the authorized_keys file over to the shell user:
chmod 600 /data/.ssh/authorized_keys
chown shell:shell /data/.ssh/authorized_keys
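If you have adb access, getting the key onto the phone could look like this (just a sketch, executed on your local machine; the shell user that adb runs as may write to /data/.ssh because we handed the directory to it above, and newer adb versions forward stdin):
# append the local public key to the phone's authorized_keys (hypothetical one-liner)
adb shell 'cat >> /data/.ssh/authorized_keys' < ~/.ssh/id_rsa.pub
Don't forget to apply the chmod/chown from above afterwards.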
Create a start script
Last but not least, we need a script to start the SSH service.
There is again a template available in /system/bin/start-ssh. Just copy the script to /data/local/userinit.d/:
mkdir /data/local/userinit.d/
cp /system/bin/start-ssh /data/local/userinit.d/99sshd
chmod 755 /data/local/userinit.d/99sshd
Finally, we just need to update the location of the sshd_config to /data/ssh/sshd_config in our newly created /data/local/userinit.d/99sshd script (in the template it points to /system/etc/ssh/sshd_config; there are 2 occurrences: one for running the daemon with and one without debugging).
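A quick way to patch both occurrences at once could be a sed one-liner like this (a sketch; it assumes the path appears verbatim in the script and that your Android build ships a sed with -i support, as toybox does):
# replace the config path in both invocations of sshd
sed -i 's|/system/etc/ssh/sshd_config|/data/ssh/sshd_config|g' /data/local/userinit.d/99sshd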
That’s it
You can now run /data/local/userinit.d/99sshd and the SSH server should be up and running :)
Earlier versions of Android/CyanogenMod auto-started the scripts stored in /data/local/userinit.d/ right after the boot, but this feature was removed with CM12. Thus, at the moment it is not that easy to automatically start the SSH server with a reboot of your phone. But having the SSH daemon running all the time may also be a bad idea, in terms of security and battery…
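Until there is a proper autostart mechanism again, one workaround could be to kick the daemon off manually after a reboot, e.g. from your computer via adb (a sketch, assuming su -c works with the su you installed):
adb shell su -c /data/local/userinit.d/99sshd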
Regain RSS feeds for the University of Rostock
September 3rd, 2018
I'm consuming quite some input from the internet every day. A substantial amount of information arrives through podcasts, but much more essential are the 300+ RSS feeds that I'm subscribed to. I love RSS, it's one of the best inventions of the world wide web!
However, there are alarming rumors and activities trying to get rid of RSS… We probably should all get our news filtered by Facebook or something..!? The importance of RSS, which allows users to keep track of updates on many different websites, seems to be continuously ignored… It certainly was ignored for the new website of our University, where official RSS feeds aren't provided anymore :(
Apparently, many people have already asked for RSS feeds of the University's webpage. At least that's what they told me when I asked… But the company that built the pages won't integrate RSS anymore - probably it wasn't listed in the requirements.. And the University wouldn't touch the expensive website.
“Fortunately,” they stayed with Typo3 as the CMS, which we had been using as well - before we decided to switch. And this Typo3 platform can output a page's content as an RSS feed out of the box; you just need to know how! ;-)
And… I'll tell you: Just append ?type=9818 to the URL.
That’s it! Really. It’s so easy.
Here are a few examples:
- Press releases as RSS feed: https://www.uni-rostock.de/universitaet/aktuelles/pressemeldungen/?type=9818
- Events as RSS feed: https://www.uni-rostock.de/universitaet/aktuelles/veranstaltungen/?type=9818
- Open positions as RSS feed: https://www.uni-rostock.de/stellen/wissenschaftliches-und-nichtwissenschaftliches-personal/?type=9818
- Open professorships as RSS feed: https://www.uni-rostock.de/stellen/professuren/?type=9818
- Events of the institute of computer science as RSS feed: https://www.informatik.uni-rostock.de/veranstaltungen/alle-veranstaltungen/?type=9818
Sure, it doesn’t work everywhere. If the editors maintain news as static HTML pages, Typo3 fails to export a proper RSS feed. It’s still better than nothing. And maybe it helps a few people…
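If you want to check whether a given page exposes the feed, a quick curl may help (just a sketch; a working feed should report an XML content type):
curl -sI 'https://www.uni-rostock.de/universitaet/aktuelles/pressemeldungen/?type=9818' | grep -i '^content-type'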
The RSS icon was adapted from commons:Generic Feed-icon.svg.
Proper Search Engine for a Static Website powered by DuckDuckGo (and similar)
June 23rd, 2018
Static websites are great and popular, see for example Brunch, Hexo, Hugo, Jekyll, Octopress, Pelican, and many more. They are easy to maintain and their performance is unbeatable. But… As they are static, they cannot dynamically handle user input, which is an obvious requirement for every search engine.
Outsource the task
Lucky us, there are already other guys doing the search stuff pretty convincingly. So it's just plausible to not reinvent the wheel, but instead make use of their services. There are a number of search engines, e.g. Baidu, Bing, Dogpile, Ecosia, Google, StartPage, Yahoo, Yippy, and more (list sorted alphabetically, see also Wikipedia::List of search engines). They all have pros and cons, but typically it boils down to a trade-off between coverage, up-to-dateness, monopoly, and privacy. You probably also have your favourite. However, it doesn't really matter: while this guide focusses on DuckDuckGo, the proposed solution is basically applicable to all search engines.
Theory
The idea is that you add a search form to your website, but do not handle the request yourself; instead you redirect to an endpoint of a public search engine. All the search engines have some way to provide the search phrase encoded in the URL. Typically, the search phrase is stored in the GET variable q; for example, example.org/?q=something would search for something at example.org. Thus, your form would redirect to example.org/?q=....
However, that would of course start a search for the given phrase on the whole internet! Instead, you probably want to restrict the search results to pages from your domain. Fortunately, the search engines typically also provide means to limit search results to a domain, or similar. In the case of DuckDuckGo it is, for example, the site: operator, see also DuckDuckGo's syntax. That is, for my blog I'd prefix the search phrase with site:binfalse.de.
Technical realisation
Implementing the workaround is no magic, even though you need to touch your webserver’s configuration.
First thing you need to do is adding a search form to your website. That form may look like this:
<form action="/search" method="get">
<input name="q" type="text" />
<button type="submit">Search</button>
</form>
As you see, the form just consists of a text field and a submit button. The data will be submitted to /search on your website. Sure, /search doesn't exist on your website (if it exists, you need to use a different endpoint), but we'll configure your web server to do the remaining work. The web server needs to do two things: (1) it needs to prefix the phrase with site:your.domain and (2) it needs to redirect the user to the search engine of your choice. Depending on the web server you're using, the configuration of course differs. My Nginx configuration, for example, looks like this:
location ~ ^/search {
return 302 https://duckduckgo.com/?q=site%3Abinfalse.de+$arg_q;
}
So it sends the user to duckduckgo.com, with the string site:binfalse.de prepended to the submitted search phrase ($arg_q is the q variable of the original GET request). If you're running an Apache web server, you probably know how to achieve the same over there. Otherwise it's a good opportunity to look into the manual again ;-)
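For Apache, a rough equivalent using mod_rewrite might look like the following (a sketch, untested; it assumes mod_rewrite is enabled and reuses the /search endpoint from above):
# redirect /search?q=... to DuckDuckGo, restricted to this domain
RewriteEngine On
RewriteCond %{QUERY_STRING} (?:^|&)q=([^&]+)
RewriteRule ^/?search$ https://duckduckgo.com/?q=site%3Abinfalse.de+%1 [R=302,L]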
Furthermore, the results pages of DuckDuckGo can be customised to look more like your site. You just need to send a few more URL parameters with the query, such as kj for the header color or k7 for the background color. The full list of available configuration options is available from DuckDuckGo settings via URL parameters.
In conclusion, if you use my search form to search for docker, you'll be guided to https://binfalse.de/search?q=docker. The Nginx delivering my website will then redirect you to https://duckduckgo.com/?q=site%3Abinfalse.de+docker. Try it yourself: search for docker!
This of course also works for dynamic websites with WordPress, Contao or similar…
Run Baïkal through Docker
June 7th, 2018
Baïkal is a quite popular Calendar+Contacts server. It supports CalDAV as well as CardDAV.
I've been using it for my calendars and address books for more than 4 years now. However, I initially installed it as a plain PHP application with a MySQL database. The developers also announced quite early that they were working on a Docker image, but there is nothing useful as of mid-2018. So far they just provide a quite inconvenient how-to and a list of issues that apparently prevent them from providing a proper Docker image. Thus, I just dockerised the application myself :)
The Docker image
Actually, creating a Docker image for Baïkal was super easy. In the end, it is “only” a PHP application ;-) The corresponding Dockerfile can be found in the root directory of Baïkal’s git repository (at least in my fork). The latest version at the time of writing is:
FROM php:apache
MAINTAINER martin scharm <https://binfalse.de/contact>
# we're working from /var/www, not /var/www/html
# the html directory will come with baikal
WORKDIR /var/www
# install tools necessary for the setup
RUN apt-get update \
&& apt-get install -y -q --no-install-recommends \
unzip \
git \
libjpeg62-turbo \
libjpeg62-turbo-dev \
libpng-dev \
libfreetype6-dev \
ssmtp \
&& apt-get clean \
&& rm -r /var/lib/apt/lists/* \
&& a2enmod expires headers
# for mail configuration see https://binfalse.de/2016/11/25/mail-support-for-docker-s-php-fpm/
# install php db extensions
RUN docker-php-source extract \
&& docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
&& docker-php-ext-install -j$(nproc) pdo pdo_mysql \
&& docker-php-source delete
# install composer
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" \
&& php -r "if (hash_file('SHA384', 'composer-setup.php') === '544e09ee996cdf60ece3804abc52599c22b1f40f4323403c44d44fdfdd586475ca9813a858088ffbc1f233e9b180f061') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;" \
&& mkdir -p composer/packages \
&& php composer-setup.php --install-dir=composer \
&& php -r "unlink('composer-setup.php');" \
&& chown -R www-data: composer
# prepare destination
RUN rm -rf /var/www/html && chown www-data /var/www/
ADD composer.json /var/www/
ADD Core /var/www/Core/
ADD html /var/www/html/
# install dependencies etc
USER www-data
RUN composer/composer.phar install
USER root
# the Specific dir is supposed to come from some persistent storage
VOLUME /var/www/Specific
So, it basically
- installs some dependencies through apt-get,
- installs the PDO-MySQL extension,
- installs composer,
- adds the Baikal sources into the image,
- and finally installs remaining Baikal dependencies through composer.
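If you want to build the image yourself, a plain docker build from the repository root should do (the tag is of course up to you):
docker build -t binfalse/baikal .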
I distribute the image as binfalse/baikal.
Using the Docker image
Using the image is fairly simple. Basically, you only need to mount some persistent space to /var/www/Specific:
docker run -it --rm -p 80:80 -v /path/to/persistent:/var/www/Specific binfalse/baikal
Please make sure that the directory /path/to/persistent has proper permissions. In the container an Apache2 is serving the contents, so make sure the user www-data (UID 33) is allowed to rwx that directory.
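On the host, that could for example look like this (a sketch; UID 33 corresponds to the www-data user inside the container):
sudo chown -R 33:33 /path/to/persistent
sudo chmod -R u+rwX /path/to/persistent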
To start with, you can use the original Specific directory from the Baïkal repository. Then head to your Baikal instance (which will probably redirect to BASEURL/admin/install) and set up your server. Every configuration will be stored in the mounted volume at /path/to/persistent.
SSL
To support encrypted connections you would need to mount the certificates as well as a modified Apache configuration into the container. However, I recommend running it behind a reverse proxy, such as binfalse/nginx-proxy, and letting the proxy handle all SSL connections (as for all other containers). This way, you just need one proper SSL configuration.
MySQL
The default SQLite database is perfect for a first test, but it is slow and only allows for a limited number of SQL variables. If you, for example, have more than 999 contacts, the first sync of a clean WebDAV device will result in an exception such as:
PDOException: SQLSTATE[HY000]: General error: 1 too many SQL variables
Thus, for production you may want to switch to a proper database, such as MariaDB. Lucky you, the Docker image supports MySQL! ;-)
To reproducibly assemble both containers, I recommend Docker Compose. Here is a sample config with two containers, baikal and baikal-db:
version: '2'
services:
  baikal:
    restart: always
    image: binfalse/baikal
    container_name: baikal
    volumes:
      - /srv/baikal/config:/var/www/Specific
    links:
      - baikal-db
  baikal-db:
    restart: always
    image: mariadb
    container_name: baikal-db
    volumes:
      - /srv/baikal/database:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: roots-difficult-password
      MYSQL_DATABASE: baikal
      MYSQL_USER: baikal
      MYSQL_PASSWORD: baikals-difficult-password
This assumes that your Baikal configuration can be found in /srv/baikal/config. The database will be stored in /srv/baikal/database. Also note the database credentials for configuring Baikal. If you're not running a reverse proxy in front of the application, you also need to add some port forwarding for the baikal container:
version: '2'
services:
  baikal:
    restart: always
    image: binfalse/baikal
    [...]
    ports:
      - "80:80"
      - "443:443"
  [...]
Mail support
I’m not sure why, but Baikal’s list of issues included support for mail. However, adding mail support should also be fairly easy if needed. I already wrote a How-To for PHP-mail in Docker.
PLEASE NOTE: sSMTP is not maintained anymore! Please switch to msmtp, for example, as I explained in Migrating from sSMTP to msmtp.
Logging with Docker
February 21st, 2018
In a typical Docker environment you'll have plenty of containers (probably in multiple networks?) on the same machine. Let's assume you need to debug some problems of a container, e.g. because it doesn't send mails anymore. What would you do? Correct, you'd go and check the logs.
By default, Docker logs the messages of every container into a JSON file. On a Debian-based system you'll probably find the file at /var/lib/docker/containers/CONTAINERID/CONTAINERID-json.log. However, to properly look into the logs you would use Docker's logs tool. This will print the logs, just as you would expect cat to dump the logs in /var/log. docker logs can also filter for time spans using --since and --until, and it is able to emulate a tail -f with --follow.
However, the logs are only available for existing containers. That means, if you recreate the application (i.e. you recreate the container), you'll typically lose the log history… If your workflow includes the --rm flag, you will immediately trash the log of a container when it's stopped.
Fortunately, Docker provides other logging drivers, to e.g. log to AWS, fluentd, GCP, and to good old syslog! :)
Here I’ll show how to use the host’s syslog to manage the logs of your containers.
Log to Syslog
Telling Docker to log to the host's syslog is really easy. You just need to use the built-in syslog driver:
docker run --log-driver syslog [other options etc]
Voilà, the container will log to the syslog and you'll probably find the messages in /var/log/syslog.
Here is an example of an Nginx, that I just started to serve my blog on my laptop:
Feb 21 16:06:32 freibeuter af6dcace59a9[5606]: 172.17.0.1 - - [21/Feb/2018:15:06:32 +0000] "GET /2018/02/21/logging-with-docker/ HTTP/1.1" 304 13333 "http://localhost:81/" "Mozilla/5.0 (X11; Linux x86_64; rv:57.0) Gecko/20100101 Firefox/57.0" "-"
By default, the syslog driver uses the container's ID as the syslog tag (here it is af6dcace59a9), but you can further configure the logging driver and, for example, set a proper syslog tag:
docker run --log-driver syslog --log-opt tag=binfalse-blog [other options etc]
This way, it is easier to distinguish between messages from different containers and to track the logs of an application even if the container gets recreated:
Feb 21 16:11:16 freibeuter binfalse-blog[5606]: 172.17.0.1 - - [21/Feb/2018:15:11:16 +0000] "GET /2018/02/21/logging-with-docker/ HTTP/1.1" 200 13333 "http://localhost:81/" "Mozilla/5.0 (X11; Linux x86_64; rv:57.0) Gecko/20100101 Firefox/57.0" "-"
If you're using Docker Compose, you can use the logging keyword to configure logging:
version: '2'
services:
  website:
    restart: unless-stopped
    image: nginx
    container_name: website
    volumes:
      - /srv/web/default/:/usr/share/nginx/html
    logging:
      driver: syslog
      options:
        tag: docker/website
Here, I configured an nginx that just serves the contents from /srv/web/default. The interesting part, however, is that the container uses the syslog driver and the syslog tag docker/website. I always prefix the tag with docker/, to distinguish between log entries of the host machine and entries from Docker containers.
Store Docker logs separately
The workaround so far will probably substantially spam your /var/log/syslog, which may become very annoying… ;-) Therefore, I recommend writing Docker's logs to a separate file. If you're, for example, using Rsyslog, you may want to add the following configuration:
if $syslogtag contains 'docker/' then /var/log/docker
& ~
Just dump the snippet into a new file /etc/rsyslog.d/docker.conf and restart Rsyslog. This rule tells Rsyslog to write messages that are tagged with docker/* to /var/log/docker, and not to the default syslog file anymore. Thus, your /var/log/syslog stays clean and it's easier to monitor the Docker containers.
Disentangle the Container logs
Since version 8.25, Rsyslog can also be used to split the docker logs into individual files based on the tag.
So you can create separate log files, one per container, which is even cleaner!
The idea is to use the tag name of containers to implement the desired directory structure. That means, I would tag the webserver of a website with docker/website/webserver and the database with docker/website/database, as shown in the Compose sketch below.
We can then tell Rsyslog to allow slashes in program names (see the programname section at www.rsyslog.com/doc/master/configuration/properties.html) and create a template target path for Docker log messages, which is based on the programname:
global(parser.PermitSlashInProgramname="on")
$template DOCKER_TEMPLATE,"/var/log/%programname%.log"
if $syslogtag contains 'docker/' then ?DOCKER_TEMPLATE
&~
Using that configuration, our website will log to /var/log/docker/website/webserver.log and /var/log/docker/website/database.log.
Neat, isn’t it? :)
Inform Logrotate
Even though all the individual logfiles will be smaller than a combined one, they will still grow in size. So we should tell logrotate about their existence! Fortunately, this is easy as well. Just create a new file /etc/logrotate.d/docker containing something like the following:
/var/log/docker/*.log
/var/log/docker/*/*.log
/var/log/docker/*/*/*.log
{
rotate 7
daily
missingok
notifempty
delaycompress
compress
postrotate
invoke-rc.d rsyslog rotate > /dev/null
endscript
}
This will rotate the files ending in .log in /var/log/docker/ and its subdirectories every day and keep compressed logs for 7 days. Here I'm using a maximum depth of 3 subdirectories – if you need to create a deeper hierarchy of directories, just add another /var/log/docker/*/*/*/*.log line to the beginning of the file.
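To check the new rules without actually rotating anything, you can use logrotate's debug mode:
logrotate -d /etc/logrotate.d/docker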