As I’m working on multiple machines (two desks at work, one desk at home, laptop, …) I’ve always been looking for a way to sync my browsers… Of course, I knew about Firefox’ sync, but I obviously don’t want to store my private browsing data in Mozilla’s cloud! Every once in a while I stumbled upon articles and posts suggesting to run a private syncserver. However, every time I looked into that project it left an uncomfortable impression: (i) you need to manually compile some 3rd party software, (ii) the whole thing seems very complex/unclean, as it requires an account server and a sync server and may work with Mozilla’s account server (but how?), and (iii) the sync project was once already abandoned (Firefox Weave was discontinued because it was too complex and unreliable)… Therefore, I never dared to give it a try.
Today, when I was again frustrated with that fragmented situation, I saw that Mozilla’s syncserver sources contain a Dockerfile! It has probably been there for ages, but I never noticed it.. Even if that project may be a mess, in a container environment it’s pretty easy to give it a try (and to clean up, if unsatisfied)! That changes everything! :P
Get the Syncserver Running
Running your own syncserver using Docker is pretty straightforward. This how-to is based on the project’s readme at GitHub:mozilla-services/syncserver, but I’m using docker-compose and I deployed the service behind an Nginx proxy. You can of course skip the proxy settings and run it locally or something.
Get the Code
Just clone the sources from GitHub:
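```
git clone https://github.com/mozilla-services/syncserver.git
```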
You should now see a new directory syncserver containing all the sources, including a Dockerfile.
Build a Docker Image
Change into the project’s directory that contains the Dockerfile, and build a new Docker image using:
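```
# the image name "syncserver" is just my choice, pick whatever you like
docker build -t syncserver .
```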
That will take a while, but when it’s finished you’ll find a new image (double-check with docker images).
The provided Dockerfile is basically sufficient, but in my scenario I also need to properly declare an exposed port. So I edited that file and added an EXPOSE 5000 instruction.
See also the diff of my commit.
I decided to take port 5000, as the user running the syncserver is unprivileged (so :80 and :443 are not an option) and :5000 is the example in the project’s readme ;-)
Create a Docker-Compose Configuration
Docker-Compose makes it easier to assemble and handle multiple containers in a moderately complex environment.
My compose config looks like this:
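A sketch of such a configuration; the image name matches the build above, while the domain firefox-sync.example.com, the secret, the host path, and the database file name are placeholders you’d need to adapt:

```
version: "2"
services:
  firefox-sync:
    image: syncserver
    container_name: firefox-sync
    restart: unless-stopped
    volumes:
      # keep the sync database on the host
      - /path/to/mozilla-sync/share:/syncshare
    environment:
      - SYNCSERVER_PUBLIC_URL=https://firefox-sync.example.com
      - SYNCSERVER_SECRET=<long-random-string>
      - SYNCSERVER_SQLURI=sqlite:////syncshare/firefox-sync.db
      - SYNCSERVER_BATCH_UPLOAD_ENABLED=true
      - SYNCSERVER_FORCE_WSGI_ENVIRON=true
      # only needed for my nginx-proxy setup, see below
      - VIRTUAL_HOST=firefox-sync.example.com
      - VIRTUAL_PORT=5000
```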
This snippet defines a container named firefox-sync, which is based on the image we just built.
It mounts the host’s directory /path/to/mozilla-sync/share into the container as /syncshare (I’d like to store my stuff outside of the container).
In addition, it declares some environment variables:
- SYNCSERVER_PUBLIC_URL tells the service the actual URL to your instance.
- SYNCSERVER_SECRET should be complicated, as it is used to generate internal certificates and stuff.
- SYNCSERVER_SQLURI tells the service which database to use. I point it to the directory (/syncshare) that was mounted into the container, so it will actually store the database on the host.
- SYNCSERVER_BATCH_UPLOAD_ENABLED is, if I understand correctly, an option to allow for uploading everything immediately…?
- SYNCSERVER_FORCE_WSGI_ENVIRON must be set to true if SYNCSERVER_PUBLIC_URL doesn’t match the actual URL seen by the python tool. In my case, I would connect to the SYNCSERVER_PUBLIC_URL, which is however the Nginx proxy, which forwards the traffic to the syncserver. Thus, the syncserver will see a different request (e.g. it’s internally not https anymore) and complain.
The last two variables (VIRTUAL_HOST and VIRTUAL_PORT) just configure the reverse proxy that I’m using.
Feel free to drop these lines if you want to expose the service directly to the network, but then you need to add a port forwarding for that container, which forwards traffic at your machine’s HTTP port (:80, use a different port if you’re already running a web server) to the service’s port in the container (:5000):
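```
# add this to the firefox-sync service instead of VIRTUAL_HOST/VIRTUAL_PORT
ports:
  - "80:5000"
```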
If you have a proper Docker-Compose configuration, just run docker-compose up -d to start the service.
Et voilà, you should be able to access the service at the configured SYNCSERVER_PUBLIC_URL.
Configure Firefox to use your Private Sync Server
First make sure you’re signed out in the browser!
about:preferences#sync should not show your identity and instead provide a button to sign in.
Then open about:config and search for identity.sync.tokenserver.uri. By default, it will be set to Mozilla’s sync server (https://token.services.mozilla.com/1.0/sync/1.5). Edit that field and point it to your own instance: your SYNCSERVER_PUBLIC_URL followed by /token/1.0/sync/1.5. Thus, in our example above I’d set it to https://firefox-sync.example.com/token/1.0/sync/1.5.
Now go back to about:preferences#sync and sign in with your Mozilla account.
Yes, correct. You still need an account at Mozilla!
But that is just for authentication…
There is an option to also run a private account server (see Run your own Firefox Accounts Server), but that’s even more complicated.
And as I need a Mozilla account anyway to develop my AddOns, I skipped that additional hassle..
Open Issues and Troubleshooting
There are still a few issues with different clients. For example, I don’t know how to tell Epiphany to use my private syncserver instead of Mozilla’s public instance.. In addition, there is apparently no Firefox in the F-Droid repository that properly supports sync…
For general debugging and troubleshooting, search engines are a good start..
In addition, I learnt that there is about:sync-log, which contains very detailed error messages in case of problems.
… I got my sync! #hooray
The setup is still fresh and I didn’t test it too much, but so far it’s looking pretty good.
Some days ago, @email@example.com convinced me on Mastodon to give BTRFS a try. That’s actually been a feature on my list for some time already, and now that I need to switch PCs at work I’m going for it. However, this post wouldn’t exist if everything had worked straight away.. ;-)
I have a 1TB SSD that I want to encrypt. It should automatically get decrypted and mounted to certain places when I log in. pam_mount can do that for you, and I’ve already been using it a lot in different scenarios. However, with BTRFS it’s a bit different. With any other file system you would create a partition on the hard drive, which is then LUKS-encrypted. This has the drawback that you need to decide on the partition’s size beforehand!
With BTRFS you can just encrypt the whole drive and use so-called subvolumes on top of it. Thus, you’re a bit more flexible by creating and adjusting quotas as required at any point in time (if at all…), but (or and!) the subvolumes are not visible unless the device is decrypted.
Let’s have a look into that and create the scenario.
I assume that the SSD is available as /dev/sdb. Then we can create an encrypted container on it using LUKS:
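```
# creates a LUKS container on the SSD -- this wipes all data on /dev/sdb!
# cipher and key size are my choice, see the benchmark hint below
cryptsetup luksFormat -c aes-xts-plain64 -s 512 /dev/sdb
```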
You’re not sure which cipher or key-size to choose? Run cryptsetup benchmark to see which settings perform best for you.
My CPU, for example, comes with hardware support for AES, thus the AES ciphers show a significantly higher throughput.
If you’re still feeling uncomfortable with that step, I recommend reading the comprehensive article on dm-crypt/Device encryption in the ArchLinux wiki.
We can now open the encrypted device using cryptsetup:
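```
# open the LUKS container and map it to /dev/mapper/mydrive
cryptsetup luksOpen /dev/sdb mydrive
```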
This will create a node in /dev/mapper/mydrive, which represents the decrypted device.
Next, we’ll create a BTRFS on that device:
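```
# this also reports the new file system's UUID, see below
mkfs.btrfs /dev/mapper/mydrive
```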
That’s indeed super fast, isn’t it!? I also couldn’t believe it.. ;-)
We can now mount the device, for example to /mnt:
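```
mount /dev/mapper/mydrive /mnt
```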
So far, the file system is completely empty.
But as it’s a BTRFS, we can create some subvolumes.
Let’s say, we want to create a volume for our $HOME, and as we’re developing this website, we also want to create a second volume for its sources.
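A sketch with the (hypothetical) subvolume names home and www:

```
btrfs subvolume create /mnt/home
btrfs subvolume create /mnt/www
```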
So we have two subvolumes in that file system:
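```
btrfs subvolume list /mnt
# ID 257 gen 7 top level 5 path home
# ID 258 gen 8 top level 5 path www
```

(The IDs will of course differ on your system.)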
We could now mount them manually, using mount’s subvol option:
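```
# mount the home subvolume; /home/user is just an example mount point
mount -o subvol=home /dev/mapper/mydrive /home/user
```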
But we want the system to do it automatically for us when we log in.
So unmount everything and close the LUKS container:
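```
umount /mnt
cryptsetup luksClose mydrive
```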
PamMount can Decrypt and Mount Automatically
I’ve been using pam_mount for ages already! It is super convenient.
To get your home automatically decrypted and mounted, you would just need to add the following lines to your /etc/security/pam_mount.conf.xml:
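```
<!-- a sketch of a pam_mount volume entry: the user name, the mount
     point, and fstype="auto" are assumptions you'd need to adapt -->
<volume user="you" fstype="auto"
        path="/dev/disk/by-uuid/a1b20e2f-049c-..."
        mountpoint="/home/you"
        options="subvol=home" />
```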
Here, I am using UUIDs to identify the disks.
You can still use /dev/sdb (or similar), but there is a chance that the disks are recognised in a different sequence with the next boot (and /dev/sdb may become /dev/sdc or something…).
Plus, the UUID is invariant to the system – you can put the disk in any other machine and it will have the same UUID.
To find the UUID of your disk you can use blkid:
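```
blkid /dev/sdb
# /dev/sdb: UUID="a1b20e2f-049c-..." TYPE="crypto_LUKS"
```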
As said above, with BTRFS you’ll have your partitions (called subvolumes) right in the filesystem – invisible unless decrypted.
So, what is PAM doing?
It discovers the first entry in the pam_mount.conf.xml configuration, which basically says “mount the device with UUID a1b20e2f-049c-... with some extra options to the configured mount point”.
PAM is also smart enough to understand that a1b20e2f-049c-... is a LUKS encrypted device, and it decrypts it using your login password.
This will then create a node in /dev/mapper/_dev_sdb, representing the decrypted device.
And eventually, PAM mounts the requested subvolume to the configured mount point.
So far so perfect.
But as soon as PAM discovers the second entry, it tries to do the same! Again it detects a LUKS device and tries to decrypt that. But unfortunately, there is already a node at /dev/mapper/_dev_sdb, as the device was already decrypted for the first entry. Thus, opening the LUKS drive fails, and you’ll find corresponding error messages in your syslog.
At first it seems annoying that it doesn’t work out of the box, but at least it sounds reasonable that PAM cannot do what you want it to do..
The solution is quite easy, even though it took me a while to figure things out…
As soon as the first subvolume is mounted (and the device is decrypted and available through /dev/mapper/_dev_sdb), we have direct access to the file system!
Thus, we do not need to tell PAM to mount /dev/disk/by-uuid/a1b20e2f-049c-..., but we can use /dev/mapper/_dev_sdb instead. Or even better, we can use the file system’s UUID now, to become invariant to the device’s name.
If you run blkid with the device being decrypted you’ll find an entry like this:
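```
# the btrfs UUID is made up for this example, yours will differ
/dev/mapper/_dev_sdb: UUID="d6bacd9c-..." TYPE="btrfs"
```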
You see, the new node /dev/mapper/_dev_sdb also carries a UUID, actually representing the BTRFS :)
This UUID was by the way also reported by the mkfs.btrfs call above.
What does that mean for our setup? When we first need a subvolume of an encrypted drive we need to use the UUID of the parent LUKS container. For every subsequent subvolume we can use the UUID of the internal FS.
Transferred to the above scenario, we’d create an /etc/security/pam_mount.conf.xml like that:
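A sketch with hypothetical user names and mount points; note that the second entry uses the UUID of the BTRFS (as found by blkid above), not the one of the LUKS container:

```
<!-- first entry: PAM decrypts the LUKS container (identified by its
     LUKS UUID) using the login password and mounts subvolume home -->
<volume user="you" fstype="auto"
        path="/dev/disk/by-uuid/a1b20e2f-049c-..."
        mountpoint="/home/you"
        options="subvol=home" />

<!-- second entry: the device is already decrypted, so we address the
     btrfs file system inside by its own UUID and mount subvolume www -->
<volume user="you" fstype="auto"
        path="/dev/disk/by-uuid/d6bacd9c-..."
        mountpoint="/var/www"
        options="subvol=www" />
```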
Note the different UUIDs? Even though both mounts originate from the same FS :)
Actually, I wanted to have my home in a raid of two devices, but I don’t know how to tell pam_mount to decrypt two devices to make BTRFS handle the raid..?
The only option seems to be using mdadm to create the raid, but then BTRFS just sees a single device and, therefore, cannot do its extra raid magic…
If anyone has an idea on that issue, I’m all ears :)
I’m running Thunderbird to read emails on my desktops. And I’m using the Lightning plugin to manage calendars, events, and tasks.
However, since I updated to Thunderbird 60 some weeks ago, Lightning strangely seems to be broken. The Add-ons manager still lists Lightning as properly installed, but the “Events and Tasks” menu is missing, as well as the calendar/tasks tabs and the calendar settings in the preferences. As I’ve been pretty busy with many other things, I didn’t study the problem - hoping that the bug would get fixed in the meantime - but living without the calendar addon is cumbersome. And today it became annoying enough to make me investigate this…
There seem to be various issues with calendars in the new Thunderbird version: Mozilla provides an extensive support page dedicated to this topic. Sadly, none of these helped in my case..
I then made sure that the versions of Thunderbird and Lightning are compatible (both are 1:60.0-3~deb9u1 for me):
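```
# both packages should report the same version
dpkg -l thunderbird lightning
```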
Eventually, I stumbled upon a thread in the German Debian forums: Thunderbird 60 - Lightning funktioniert nicht. And they figured out that it may be caused by missing language packs for Lightning… Indeed, I do have language packs for Thunderbird installed (de and en-gb) that are not installed for Lightning:
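```
# list installed language packs; in my case only the
# thunderbird-l10n packages showed up
dpkg -l | grep -e thunderbird-l10n -e lightning-l10n
```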
And it turns out, that this was a problem!
Thunderbird apparently wouldn’t run Lightning unless it has all required language packs installed.
After installing the missing language packs (aptitude install lightning-l10n-de lightning-l10n-en-gb), the extension is again fully working in Thunderbird!
All that may be caused by a missing dependency..? thunderbird-l10n-de (and similar) do not recommend the corresponding lightning-l10n packages. Not exactly sure how, but maybe the dependencies should be remodelled…?
tldr: scroll down to Setup of SSH on LineageOS.
I strongly discourage everyone from buying a ShiftPhone. The phone was/is on an Android patch level from 2017-03-05 – which is one and a half years ago! Not to mention that it was running Android 5.1.1 in 2018… With so many bugs and security issues, this phone is, in my opinion, a danger to the community! And nobody at Shift seemed to really care…
Next, I’d like to have SSH access to the phone. I did love the native SSH server on my Galaxy S2, which used to run CyanogenMod for 5+ years. Using the SSH access I was able to integrate it in my backup infrastructure and it was much easier to quickly copy stuff from the phone w/o a cable :)
The original webpage including a how-to for installing SSH on CyanogenMod has unfortunately vanished. There is a copy available from the WayBackMachine (thanks a lot, guys!!). I still thought dumping up-to-date step-by-step instructions here may be a good idea :)
Setup of SSH on LineageOS
The setup of the native SSH server on LineageOS seems to be pretty similar to the CyanogenMod version. First you need a shell on the phone, e.g. through adb, and become root (su). Then just follow these three steps:
Create SSH daemon configuration
You do not need to create a configuration file from scratch; you can use /system/etc/ssh/sshd_config as a template. Just copy the configuration file to /data/ssh/sshd_config:
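```
# assuming /data/ssh does not exist yet
mkdir -p /data/ssh
cp /system/etc/ssh/sshd_config /data/ssh/sshd_config
```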
Just make sure you set the following things:
Subsystem sftp internal-sftp
Update: Ed Huott reported:
There was one additional step I needed to make it work. It was necessary to set StrictModes no in /data/ssh/sshd_config in order to keep sshd from failing to start due to bad file ownership/permissions on the /data/.ssh directory and files as well as the parent /data directory. This is because the owner:group of /data is system:system, which doesn’t match the shell owner:group used for /data/.ssh and its contents. I felt that setting StrictModes no was a better solution than messing with the owner:group of the /data directory.
Setup SSH keys
First, we need to create /data/.ssh on the phone (note the .!) and give it to the shell user:
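```
mkdir -p /data/.ssh
chmod 700 /data/.ssh
# the shell:shell owner:group is an assumption based on the notes above
chown shell:shell /data/.ssh
```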
Second, we need to store our public SSH key (probably stored in ~/.ssh/id_rsa.pub on your local machine) in /data/.ssh/authorized_keys on the phone. If that file already exists, just append your public key on a new line.
Afterwards, hand over the authorized_keys file to the shell user:
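```
chown shell:shell /data/.ssh/authorized_keys
chmod 600 /data/.ssh/authorized_keys
```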
Create a start script
Last but not least, we need a script to start the SSH service. There is again a template available in the system partition. Just copy the script to /data/local/userinit.d/99sshd:
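A sketch; the path to the template in the system partition is left as a placeholder:

```
mkdir -p /data/local/userinit.d
cp <path-to-template> /data/local/userinit.d/99sshd
chmod 755 /data/local/userinit.d/99sshd
```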
Finally, we just need to update the location of the sshd_config in our newly created /data/local/userinit.d/99sshd script: in the template it points to /system/etc/ssh/sshd_config, but it should now point to /data/ssh/sshd_config (there are 2 occurrences: for running the daemon w/ and w/o debugging).
You can now run /data/local/userinit.d/99sshd and the SSH server should be up and running :)
Earlier versions of Android/CyanogenMod auto-started the scripts stored in /data/local/userinit.d/ right after the boot, but this feature was removed with CM12..
Thus, at the moment it is not that easy to automatically start the SSH server with a reboot of your phone.
But having the SSH daemon running all the time may also be a bad idea, in terms of security and battery…
I’m consuming quite some input from the internet every day. A substantial amount of information arrives through podcasts, but much more essential are the 300+ RSS feeds that I’m subscribed to. I love RSS, it’s one of the best inventions in the world wide web!
However, there are alarming rumors and activities trying to get rid of RSS… We probably should all get our news filtered by Facebook or something..!? The importance of RSS, which allows users to keep track of updates on many different websites, seems to be continuously ignored.. The new website of our University is no exception: official RSS feeds aren’t provided anymore :(
Apparently, many people have already been asking for RSS feeds of the University’s webpage. At least that’s what they told me when I asked… But the company that built the pages won’t integrate RSS anymore - it probably wasn’t listed in the requirements.. And the University wouldn’t touch the expensive website.
“Fortunately,” they stayed with Typo3 as the CMS, which we’ve been using as well - before we decided to switch. And this Typo3 platform can output a page’s content as an RSS feed out of the box, you just need to know how! ;-)
And… I’ll tell you: Just append ?type=9818 to the URL.
That’s it! Really. It’s so easy.
Here are a few examples:
- Press releases as RSS feed: https://www.uni-rostock.de/universitaet/aktuelles/pressemeldungen/?type=9818
- Events as RSS feed: https://www.uni-rostock.de/universitaet/aktuelles/veranstaltungen/?type=9818
- Open positions as RSS feed: https://www.uni-rostock.de/stellen/wissenschaftliches-und-nichtwissenschaftliches-personal/?type=9818
- Open professorships as RSS feed: https://www.uni-rostock.de/stellen/professuren/?type=9818
- Events of the institute of computer science as RSS feed: https://www.informatik.uni-rostock.de/veranstaltungen/alle-veranstaltungen/?type=9818
Sure, it doesn’t work everywhere. If the editors maintain news as static HTML pages, Typo3 fails to export a proper RSS feed. It’s still better than nothing. And maybe it helps a few people…
The RSS icon was adapted from commons:Generic Feed-icon.svg.