Creating and maintaining vagrant base boxes

I finally got the hang of vagrant base box creation. I used to download images from people I trusted, but then I was always caught having to either wait for new releases to get captured into a box, or just not use a certain OS in vagrant at all.

But it's not complex at all to create your own, as the vagrant site documents.

That documentation page got me started. So here are the commands and instructions I use to create boxes (unfortunately I couldn't finish the instructions for CentOS, since I'm still hitting a bug where the network interface doesn't show up when starting instances once the box is imported).

Creating a base box

The idea here is really simple. You need to manually create a VM and install the OS of your choice in it. Make sure it's using DHCP, has OpenSSH installed along with the configuration manager of your choice, that the insecure vagrant public key lets you log in as the "vagrant" user inside the VM, and finally that you can sudo to root from the "vagrant" user without a password.

We also perform some other tricks and install other things that might be useful for our use cases. My base boxes are used for testing puppet modules, so I want them to be as untouched as possible, but you can install whatever you need to make them smell exactly like your production setups.

Debian box (currently jessie)

Start by downloading a netinst iso from the debian web site. If you want to have a box using the testing branch of packages you'll need to install a stable release first and then upgrade to testing right before cleaning up and packaging up into a box. The OS upgrade is out of the scope of this document.

In virt-manager, I usually create a VM with 512 MB of RAM and 20 GB of disk (qcow2, since I'm using vagrant-libvirt). Then I just follow the instructions from the installer. In tasksel, uncheck all the options except for "SSH server" and "Common system utils". Place all files in one partition (not encrypted, otherwise it's impractical to update packages in the base box later by merging a snapshot). Set the root password to "vagrant", then choose "vagrant" as the user name and set its password to "vagrant".

On the host:

# You'll have to know which IP the VM configured once booted up after install
ssh-copy-id -o UserKnownHostsFile=/dev/null -i ~/.vagrant.d/ vagrant@

Inside the VM:

su -
sed -i 's/^\(GRUB_TIMEOUT\)=.*$/\1=1/' /etc/default/grub
echo UseDNS no >> /etc/ssh/sshd_config
apt install -y sudo
echo "vagrant ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/vagrant
logout; sudo -i  # test that sudo is working OK and without password

Here you can optionally upgrade the OS if you want to use testing or sid instead.

Still in the VM:

apt install -y puppet rsync
systemctl disable puppet.service
# Now you can install whatever else that you need. I usually install vim-nox here.
apt-get clean
dd if=/dev/zero of=/EMPTY; rm -f /EMPTY  # reclaim empty space; this operation needs 20 GB free on the host
history -c; history -w
logout # go back to the vagrant user
history -c; history -w
sudo -i
shutdown -h now

The main part of the work is done. Now follow instructions in the section below about packaging and importing the base box.

FreeBSD box

This procedure was put together using FreeBSD 11.

Notice: I use the ports system to install software, which takes insanely long and needs constant attention since some ports make you choose compilation options. There is probably a better way, but for now I'm steering away from FreeBSD packages because of their nasty choices in default compilation options.

Start by downloading an image that ends with -bootonly.iso. In the installer, choose the keyboard layout of your preference. Choose guided ZFS partitioning (or, if you don't want ZFS, guided normal). Don't set up any crypto, since that would make upgrading software in the base box by merging a snapshot impractical later. Set the network to DHCP and type in a hostname that'll be somewhat valid (e.g. a real FQDN even if it won't resolve). Don't activate IPv6 (that choice might be reviewed in the future, depending on whether the local network on the laptop is IPv6). Don't activate any hardening features. Choose sshd in the list of software to install to the system. Set the root password to "vagrant". Choose to create a user named "vagrant" with a password of "vagrant".

On the host:

# You'll have to know which IP the VM configured once booted up after install
ssh-copy-id -o UserKnownHostsFile=/dev/null -i ~/.vagrant.d/ vagrant@

Inside the VM:

su -
echo 'autoboot_delay="1"' >> /boot/loader.conf
echo UseDNS no >> /etc/ssh/sshd_config
cd /usr/ports
# Installing bash is optional and might be avoided to have a system that's more
# "pure" or "vanilla". But I personally hate csh
(cd shells/bash; make install clean)
# Following line needed for bash
echo "fdesc /dev/fd fdescfs rw 0 0" >> /etc/fstab; mount /dev/fd
chsh -s /usr/local/bin/bash; chsh -s /usr/local/bin/bash vagrant
(cd security/sudo; make install clean)
echo "vagrant ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/vagrant
logout; sudo -i  # test that sudo is working OK and without password
# This is SO annoying. why is that enabled by default?
sed -i '' '/freebsd-tips/d' ~vagrant/.profile
(cd net/rsync; make install clean)
# Here you can choose other available versions of puppet. currently 3.7, 3.8 or 4
(cd sysutils/puppet38; make install clean)
# Now you can install whatever you want. I usually do: (cd editors/vim-lite; make install clean)
history -c; history -w
shutdown -p now

Note: since we're using ZFS, cleaning up disk space by writing a huge zero-filled file and deleting it doesn't work, since data compression is enabled by default.

You're done with installing your box. Now follow the instructions in the section below about Packaging and importing the base box.

Packaging and importing the base box

Once the VM is installed we need to package it into a box and then import that as a base box.

These instructions are made for my setup that uses vagrant-libvirt. You'll have to find out how to perform disk space reclaiming and box export with virtualbox or other providers. I believe these instructions are very easy to find at least for virtualbox.

Obviously the image name and path where you store the final copy of the image must be changed to fit your current setup.

On the host:

sudo -i
cd /var/lib/libvirt/images
qemu-img convert -O qcow2 stretch.qcow2 ~myusername/dev/vm/stretch.qcow2
chown myusername:mygroup ~myusername/dev/vm/stretch.qcow2
cd ~/dev/vm
~/.vagrant.d/gems/gems/vagrant-libvirt-*/tools/ stretch.qcow2
vagrant box add --name stretch
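
For the curious, here's roughly what the vagrant-libvirt packaging script does under the hood (a sketch: the truncate line only creates a placeholder so the commands run end to end; use your real image instead, and virtual_size is the disk size in GB):

```shell
# Manual vagrant-libvirt box packaging sketch: a box is just a tarball
# containing the qcow2 image renamed to box.img plus a small metadata.json.
truncate -s 1M stretch.qcow2   # placeholder; use your real qcow2 image
cp stretch.qcow2 box.img
cat > metadata.json <<'EOF'
{"provider": "libvirt", "format": "qcow2", "virtual_size": 20}
EOF
tar czf box.img metadata.json
```

The resulting can then be fed to vagrant box add as above.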

Now you can create a vagrant project with the new base box. Test that it works correctly, and then you can remove the qcow2 image in your home dir. You can also remove the base box, or you can store it for future uses or even publish it so that others can use it!

You can also remove the VM that was manually created.

Upgrade packages inside of the box

From time to time it's useful to upgrade the packages/software inside the base boxes, to avoid downloading too much during your tests or even hitting "package not found" errors when the version you're requesting doesn't exist anymore.

Again, these instructions are meant for vagrant-libvirt users. However, if I remember correctly it's even easier to do with virtualbox and snapshots.

I've scripted the following procedure since it's very mechanical and has no real variability.

To perform upgrades, make sure you're using a vagrant project that:

  • doesn't install stuff inside the VM with a configuration manager
  • doesn't use any additional network interface
  • doesn't have any files in the vagrant project other than the Vagrantfile


Yes, you are possibly going to break the base box, so it's a great idea to take a backup of the base box disk image before starting:

sudo cp /var/lib/libvirt/images/jessie_vagrant_box_image_0.img .

Warning: Don't place the image backup inside the directory of the vagrant project you're using for running the upgrades. This directory gets rsync'ed to the VM when starting up.

In case of a total meltdown of the base box, run "vagrant destroy" on all VMs that use this base box, then overwrite the file inside /var/lib/libvirt/images/ with the backup you've taken.

Perform the upgrade

These instructions are written for Debian, but the upgrade commands run inside the VM can easily be adapted to perform upgrades on any system.

On the host:

vagrant up
vagrant ssh

Inside the VM:

sudo sh -c "apt update && apt -y upgrade && apt -y dist-upgrade && apt-get clean"
sudo bash -c "history -c; history -w"; history -c; history -w

On the host:

cat ~/.vagrant.d/ | vagrant ssh -c "cat > ~/.ssh/authorized_keys"
vagrant halt
# The image file name must be the snapshot that corresponds to the vagrant
# instance you've spun up. This should be shown in the output of vagrant when
# running vagrant up at the beginning of this procedure.
sudo qemu-img commit /var/lib/libvirt/images/jessiepuppet_jessiepuppet.img
vagrant destroy

Done! Now you can start any VM using that base box and the upgrades will be available to the new instances.

Debugging crashes on debian - divide and conquer


Last month I was suffering from chronic laptop crashes after I ran a long-overdue apt dist-upgrade (I run sid). I knew software was causing the issue: before the dist-upgrade, the laptop had been happily running fine for the last couple of years. My question, though, was: what exactly was causing such a painful thing as seemingly random but often-recurring crashes?

I wanted to diagnose. The situation got to a point where it was so aggravating that I was using a desktop at the office instead of the laptop and I would ssh into the laptop to access important information. This had to stop.


After that fated dist-upgrade, my laptop started to show visual flashes, as if the screen blacked out and refreshed. Those flashes would only occur when I typed on the keyboard, and I found they were easiest to reproduce when I changed from one terminal to another in Terminator and started typing.

After a certain amount of time (or typing) which seemed random to me, the screen would turn black exactly when I typed any letter on the keyboard. A one-pixel vertical line would go crazy, zigzagging colours on the left side. And then, after about 10 seconds, the whole machine would shut down by itself.

Frequency (or repetitiveness)

After some time suffering from this, I could see a clear pattern: the more I used the keyboard, the higher the chances of a crash.

I then found out that if I purposefully caused visual flashes by going back and forth between two terminal windows and typing all the time, I could reproduce the crash fairly easily.

Great: with repetitiveness comes easy testing.


My first hypothesis was that it might be caused by the video driver. However, when I downgraded it to the latest version installed before the dist-upgrade that brought the crashes, the symptoms didn't go away.

I tried a couple more packages like gnome components. I tried running Xwayland instead of Xorg. I tried using fluxbox instead of gnome. Nothing would cut it: the crashes were still there.

I also tried installing a fresh debian jessie from a debian live image, and then upgrading that to debian sid. The issue still reproduced in this setup.

The problem was that during the dist-upgrade mentioned in the diagnosis section above, so many packages were upgraded at once that I'd go crazy triaging them all, especially since some of them would refuse to downgrade because of dependency issues.

Strong medicine

So I was completely fed up with this issue, and I had a means of easily reproducing the crash. Strong medicine would be needed: bisecting between the last known good state (the upgrade before last) and the first known bad state (the last upgrade).

The main tool for this bisection would be, the awesome debian service that keeps 4 snapshots per day of the whole debian package repository.

I used the fresh debian sid install that reproduced the crashes so that I wouldn't be messing around too much with my main setup.

Setting up debian sid for bisection

In order to be able to jump back and forth in time, some preparation was needed. First, since I would be upgrading and downgrading, I needed to tell apt to just follow along. The unstable release would be moving around as I switched snapshot sources, so a preferences file would suit the purpose nicely:

Package: *
Pin: release a=unstable
Pin-Priority: 1001

Then to make things easier, I needed the bisection process to be somewhat scripted. First, I listed all of the days in between the good and bad states (including the good and bad state days to mark boundaries). Then I marked the oldest date by adding " GOOD" to the right of the date, and similarly the last date would be marked with " BAD". This was done manually in a text file called dpkg-bisect.
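
Generating that list of days is easily scripted too (a sketch using GNU date; the years are assumptions for illustration, since only month-day pairs end up in the file):

```shell
# Build dpkg-bisect: one MM-DD line per day between the known-good and
# known-bad upgrades, with the boundary lines marked GOOD and BAD.
start=2016-06-14
end=2016-08-07
d="$start"
: > dpkg-bisect
while [ "$(date -d "$d" +%s)" -le "$(date -d "$end" +%s)" ]; do
  date -d "$d" +%m-%d >> dpkg-bisect
  d=$(date -d "$d + 1 day" +%F)
done
sed -i '1s/$/ GOOD/' dpkg-bisect   # first line is the GOOD boundary
sed -i '$s/$/ BAD/'  dpkg-bisect   # last line is the BAD boundary
```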

The following figure shows a shortened version of the file (assume dates continue sequentially where content is ellipsized):

06-14 GOOD
...
08-07 BAD

Then I wrote two functions in .bashrc to make it easy to mark a date as either good or bad, respectively. Those functions would accept a date as argument and, depending on which function I called, simply add a mark of " GOOD" or of " BAD" on the line that started with that date:

function good () {
  sed -i "s/^\($1\).*\$/\1 GOOD/" dpkg-bisect
}
function bad () {
  sed -i "s/^\($1\).*\$/\1 BAD/" dpkg-bisect
}

Then I created aliases in .bashrc to make it easier to update package lists and to clean up after an upgrade/downgrade. The cleanup alias lists packages whose installed version doesn't come from the currently configured archive, so it shows packages left over from a different day (e.g. a different snapshot source). The update alias gives apt-get an option to disregard errors about a source being outdated: sources on are only valid for a dozen or so days:

alias cleanup="aptitude search '?narrow(?not(?archive(\"^[^n][^o].*$\")),?version(CURRENT))'"
alias update='sudo apt-get -o Acquire::Check-Valid-Until=false update'

Finally, to make bisection easier, I wrote another alias in .bashrc that removes all lines up to the last occurrence of a "GOOD" marker and all lines after the first occurrence of a "BAD" marker, and prints the date right in the middle of what's left:

alias bisect="sed '/BAD\$/Q' dpkg-bisect | tac | sed '/GOOD\$/Q' | tac | awk '{ lines[NR]=\$0; } END { print lines[int(NR/2)+1] }'"
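
Here's a worked example of that pipeline on a tiny dpkg-bisect file, to show what it picks:

```shell
# One GOOD boundary, three untested dates, one BAD boundary: the
# pipeline drops the boundaries and prints the middle untested date.
printf '%s\n' '06-14 GOOD' '06-15' '06-16' '06-17' '06-18 BAD' > dpkg-bisect
sed '/BAD$/Q' dpkg-bisect | tac | sed '/GOOD$/Q' | tac \
  | awk '{ lines[NR]=$0 } END { print lines[int(NR/2)+1] }'
# prints: 06-16
```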

Running bisection

With all this in place, I could start the process, which boils down to:

  • call bisect and find the given date on For consistency, I would always choose the first snapshot of that day.
    • if the result is empty, then stop: the file would have one good and one bad line just next to each other. This bad date would be the first point where the issue was introduced.
  • change /etc/apt/sources.list to list only the URL corresponding to the date from previous point
  • call update, then upgrade the packages (apt-get dist-upgrade)
  • call cleanup and figure out how to make state consistent (old libs can be removed, conflicts should be fixed)
  • reboot
  • try to reproduce the bug
  • mark the date as either good or bad depending on the result of previous point
  • repeat process

When I finally found the fateful day that introduced the error, I drilled down into that day by adding the times of the 4 snapshots between the last good line and the first bad line, e.g.:

07-18 GOOD
07-19-04:18:30 GOOD
07-20-04:39:05 BAD
07-22 BAD

Then I continued to bisect to find the exact snapshot within that day that brought the problem. Arguably, I could have just listed all snapshots at the beginning of the process.


The snapshot that introduced the crashes upgraded only 5 packages. Among these, one stood out from the rest as a possible cause, so I tried to downgrade it first, and voilà! I had found the source of the bug. It was xserver-xorg-core.

So I downgraded this package on my main setup and the bug was gone. I put it on hold so it wouldn't re-upgrade automatically, and finally I reported a bug on the package, #837451. Apparently, a patch for Intel-specific hardware was introduced in the version that brought the crashes, and I suspect this patch was causing my woes.

This issue was really annoying, but learning how to run a binary search on debian archive snapshots was very interesting. It's a great tool for finding a remedy to software bugs that make you lose hair.

Migrating to gpg2.1

Last month the debian package for gnupg in the unstable branch was changed from providing version 1.4 to providing 2.1. Users of debian jessie, and in principle even stretch, shouldn't be concerned by this change. But for sid users this is a major change, and some people might need a bit of guidance through the process... well, at least I needed some. Here's a post about the issues I encountered, and how I fixed them or what I understand to be their cause.

The change needs some readjusting, but it is for good reasons. The use of agents is interesting for security, and the new storage format is way faster than the one in gpg 1.4. On top of this, the codebase for 2.1 is apparently in much better shape.

First and foremost, people who read this post should probably familiarize themselves with the changes that the new version brings, described in the GnuPG 2.1 release notes.

Auto-migration of keys happened long ago

The first thing I got sorted out was this: I had had gnupg2 installed on my computer for quite a while already. It was installed automatically as a dependency of password-store. Normally gpg2 migrates your public and secret keyrings to the new storage format for you. This is super neat, but it happened automatically when I first used password-store, and at the time I didn't switch to using gpg2 (oops).

Re-importing secret keyring

During the time I persisted in using 1.4, I created a new authentication subkey for use with monkeysphere. This subkey was not migrated when I finally switched to gpg 2.1 via the package change. This is normal: the automatic migration had happened a while ago.

The fix was easy: re-import the secret key material:

gpg --import ~/.gnupg/secring.gpg

Using the new, faster public key storage

You also probably want to move your public keyring to the new storage format which is way faster. As described in the GnuPG 2.1 release notes linked above, you can achieve this with the following series of commands:

cd ~/.gnupg
gpg --export-ownertrust >otrust.lst
mv pubring.gpg publickeys
gpg --import-options import-local-sigs --import publickeys
gpg --import-ownertrust otrust.lst
mv publickeys pubring.gpg

This will create a file named pubring.kbx which is the new storage file. The above commands ensure that you properly import all public keys, public and local signatures and keep your ownertrust intact. The file pubring.gpg is then kept in place so that you can still use it with gpg1.


GnuPG 2.1 now relies heavily on agents. This is actually nice, since only one process holds your key material and the others simply send requests to the agent. It also means that network access is totally segmented off into a process of its own. So you have at least two agents: gpg-agent and dirmngr. Both are started automatically by gpg commands, which is convenient.

Dirmngr configuration

In order to contact the wild, wild, Internet, gpg will ask a new agent called dirmngr to access the network and report its findings. You'll be using it among other things to search for keys and to publish your newly updated key for those signatures you've acquired, or UIDs you added.

For this you need to configure dirmngr; see the dirmngr(8) man page for a list of options. You can set the same options in ~/.gnupg/dirmngr.conf. For example, you can set the keyserver to an hkps:// URL.
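
A minimal config is a single line (a sketch; is just one example of an hkps-capable keyserver pool):

```shell
# Write a one-line dirmngr configuration; on a real system this file
# lives at ~/.gnupg/dirmngr.conf.
echo "keyserver hkps://" > dirmngr.conf
```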

If dirmngr doesn't want to start, the only info you'll get when trying to search for keys with gpg is that the connection to the dirmngr timed out. This is pretty annoying: you get no detail about why it won't work.

In order to debug what's happening, you can run the dirmngr manually and it should tell you what's wrong. For example, here we have a syntax error on a config option:

$ dirmngr
dirmngr[29289.0]: /home/gabster/.gnupg/dirmngr.conf:13: invalid option

Once it starts and tells you OK Dirmngr 2.1.15 at your service, you can quit it and retry the gpg network operation you wanted to do.

gpg-agent and SSH keys

I've been using monkeysphere for some time now to provide access to servers. With gpg 1.4, in order to use the key material it was necessary to export it to the ssh-agent process. Now it's possible to ask gpg-agent to expose the keys you want to SSH, and to have it act as the ssh-agent process. This means there's no need to use monkeysphere subkey-to-ssh-agent anymore!

In order to do this, you first need to enable this support in the agent. In ~/.gnupg/gpg-agent.conf, add the line:

enable-ssh-support

Then restart the gpg-agent (you can kill the process and start it again with gpgconf --launch gpg-agent; if you're using systemd, see below).

Once this is done, you should see a socket file at /var/run/user/$UID/gnupg/S.gpg-agent.ssh. We want to point SSH at this socket so that it communicates directly with the gpg-agent. Add this to your preferred shell initscript, in my case ~/.bashrc:

if [ "${gnupg_SSH_AUTH_SOCK_by:-0}" -ne $$ ]; then
  export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
fi

The if block is there because if you run, for example, gpg-agent --daemon /bin/bash, the environment will be set by the agent itself and the variable we're checking against will be set. The SSH_AUTH_SOCK variable tells SSH which socket to use to communicate with the ssh-agent.

SSH doesn't communicate the current TTY name to gpg-agent either, so we need to set another variable for that. This way gpg-agent knows where to send password prompts. Again in your favorite shell initscript, add the following:

export GPG_TTY=$(tty)

Now we only have two more details to sort out.

Configure which keys/subkeys are exposed to SSH

The gpg-agent doesn't expose all of your key material to SSH. In fact, you need to specify which subkeys should be exposed, and for that you need to find the subkey's keygrip. Run this to find it:

$ gpg -k --with-keygrip <>
pub   rsa4096/0xC4ADA67875247FCF 2011-04-25 [SC] [expires: 2016-11-03]
      Key fingerprint = 5F73 EFCB 02FC E345 C107  7477 C4AD A678 7524 7FCF
      Keygrip = BF3DB7C51C596974DF58DC5860BE9F88A12CA19F
uid                   [ unknown] Perception <>
uid                   [  undef ] Network operations center <>
uid                   [  full  ] Koumbit frontdesk <>
uid                   [  full  ] Koumbit support <>
uid                   [ unknown] Production <>
uid                   [ unknown] Services <>
uid                   [ unknown] Koumbit sales <>
uid                   [ unknown] Facturation <>
sub   rsa4096/0x32873884B600AD97 2011-04-25 [E] [expires: 2016-11-03]
      Keygrip = CD56911C4CE3173BD4D0AE5DCDBA29F738F14B39
sub   rsa4096/0xDC837CFBE0D1124E 2015-08-20 [A]
      Keygrip = C87589642DE00D1306350DD5C20F35C409427D45

In this example the key has one subkey with authentication capability (A). We'll expose it to SSH by dropping the hexadecimal keygrip value on a line in ~/.gnupg/sshcontrol:

echo "C87589642DE00D1306350DD5C20F35C409427D45 0" >> ~/.gnupg/sshcontrol

The trailing 0 in the echo above is the TTL of the key before SSH needs to revalidate with gpg-agent. In this case, we set it to 0 so that it doesn't expire: you'll get prompted for the key's password only upon first use.

Once this is done, you can verify that the key is exposed by listing keys with ssh-add:

$ ssh-add -l
4096 SHA256:7mb85m/biclOdJ4JB62rmrWe8nfV/Nwcmwop/Svdo3k (none) (RSA)


Once the configuration is established, you can also import SSH keys from the default files (e.g. id_ed25519) by running ssh-add without parameters.

Automatically starting the gpg-agent

When using gpg-agent to provide SSH with key material, you need to somehow start the gpg-agent yourself: GnuPG commands will auto-start the agent for you, but SSH doesn't know (and probably doesn't care) how to do this.

The simplest method is to call gpg-connect-agent /bye in your shell initscript.

But you can also let systemd do this for you, which means the agent gets started even if you don't have a terminal open. First you need to ensure the following package is installed, since without it communication between gpg-agent and your X session will be impossible:

sudo apt install dbus-user-session

With this in place, you can enable the service for your user:

gpgconf --kill gpg-agent
systemctl --user enable gpg-agent.service
systemctl --user start gpg-agent.service

This is not the terminal you are looking for

Now that you have a gpg-agent process running, you might notice that if you move from your X session to an SSH connection into your computer (some people do weird things... hey! my laptop was crashing all the time, but if I sshed in it didn't... don't judge!), the password prompts will not show up, or will show up in the wrong place. gpg commands normally pass the display and TTY information to the agent so that things just work, but SSH doesn't pass this information along. In such a case you'll probably end up getting this utterly useless message:

sign_and_send_pubkey: signing failed: agent refused operation

So you might need to call this command to make gpg-agent send its prompts to the right place:

gpg-connect-agent updatestartuptty /bye

After this you should get prompted for passwords and you should be able to connect to hosts with your keys.

DebConf17 will be happening in Montreal

It's just been decided: Montreal will be hosting DebConf in 2017!

After trying our luck last year, the Montreal bid team was chosen this time to organise the event. The competing bid from Prague was really strong too, so the decision from the organising chairs only became clear towards the end of the meeting.

Our team currently counts nine people, which is more than last year, and we hope to enlist more help as the conference approaches.

I'm thrilled to be able to bring such an interesting event to this city, and I look forward to working with everybody involved. I'm hoping this will be a great experience for myself and everyone else involved. At least for me, it's one way of giving back to the community around the distribution I use on my personal computers and in my everyday job.

If you feel like giving a hand with organization, you can come and say hi in the local debian users group mailing list, or join our IRC channel: #debian-quebec on the OFTC network.

Using password-store with an alternate directory

pass (or password-store) is a neat program that helps you manage passwords in files encrypted with gpg.

The command as it is already offers a lot: you can encrypt different directories to different sets of keys for finer-grained control over password sharing, and you can use git to fetch and push password changes between people.

However, it's not built with the idea of multiple password repositories in mind. It is possible, but you have to know a little trick. This post describes that trick, which is already well known and published out there, and adds the possibility of using bash-completion with it.

The trick

It's super simple, and it's documented in the pass(1) man page. In order to use an alternate password store, you need to set an environment variable:

export PASSWORD_STORE_DIR=~/.password-store-alternate

Then the next calls to pass will interact with this alternate store.

Setting up an alias to make it easier to interact with multiple stores

Then you think: I don't want to have to set and unset the environment variable all the time!

Easily fixed: just create an alias that sets the variable only for each call to pass:

alias altpass='PASSWORD_STORE_DIR=~/.password-store-alternate pass'

Using bash-completion with your alias

Ok, here comes the new detail (what was above is common knowledge in the pass user community). That alias can't auto-complete sub-command names or password entry/directory names. You can enable completion by adding the following to the ~/.bash_completion file (create it if it doesn't exist):

# Add alias for alternate password-store
. /usr/share/bash-completion/completions/pass
_altpass() {
  # trailing / is required for the password-store dir.
  PASSWORD_STORE_DIR=~/.password-store-alternate/ _pass
}

complete -o filenames -o nospace -F _altpass altpass

There you have it. Now start a new terminal and try using tab to auto-complete. The original pass command will still auto-complete against the default password store.

SSH key rotation with monkeysphere

It's said to be good practice to sometimes ro-ro-rotate your keys. It shortens the time span during which your communications might be snooped upon if your key was compromised without your knowledge.

It's especially interesting to do whenever there's a security issue like the ones disclosed last week, CVE-2016-0777 and CVE-2016-0778, through which keys might have been exposed for 5 years to being extracted by a malicious server.

I use monkeysphere to tie my public key material to the PGP web of trust. This makes it super easy to make the public key available, and to have servers verify that the key they're getting was actually validated by some peers.

Here's how I rotated my key pair with monkeysphere.

Generate new subkey

First things first. Since we want to rotate keys, we need a new key. Monkeysphere does this for us and makes it super easy.

Before actually doing it, though, let's take a look at my key before the process starts (for comparison afterwards):

pub   4096R/0xC28592496BABC122 2014-06-11 [expires: 2016-06-10]
      Key fingerprint = C1CC 7A4B 7FBE 8ED3 7C00  F8B5 C285 9249 6BAB C122
uid                 [ultimate] Gabriel Filion <>
uid                 [ultimate] Gabriel Filion <>
sub   4096R/0x59BC891D96B9EF51 2014-06-11 [expires: 2016-06-10]
sub   4096R/0xC613C0506BBF1403 2014-09-18

You can see that I already have a subkey on the last line. That's the one I want to replace. So let's create the new key:

monkeysphere gen-subkey -l 4096

After this operation is complete, you should be able to notice a new subkey on your PGP key:

pub   4096R/0xC28592496BABC122 2014-06-11 [expires: 2016-06-10]
      Key fingerprint = C1CC 7A4B 7FBE 8ED3 7C00  F8B5 C285 9249 6BAB C122
uid                 [ultimate] Gabriel Filion <>
uid                 [ultimate] Gabriel Filion <>
sub   4096R/0x59BC891D96B9EF51 2014-06-11 [expires: 2016-06-10]
sub   4096R/0xC613C0506BBF1403 2014-09-18
sub   4096R/0x595B733A8B95E6F1 2016-01-23

Export new subkey to ssh-agent alongside old one

This part should be super simple if you have only one secret key in your keyring: just launch the command at the end of this section and you're done. In my case, however, I have a revoked key, and monkeysphere tries to export material from that key too. To prevent this, I use the MONKEYSPHERE_SUBKEYS_FOR_AGENT environment variable, which I set in my ~/.bashrc file.

Let's get each subkey's fingerprint. Some users might need to use this super intuitive gpg call. For others, only one --fingerprint argument is needed; I haven't yet determined what influences this, but in all cases using the argument twice will work for everyone.

$ gpg --fingerprint --fingerprint
pub   4096R/0xC28592496BABC122 2014-06-11 [expires: 2016-06-10]
      Key fingerprint = C1CC 7A4B 7FBE 8ED3 7C00  F8B5 C285 9249 6BAB C122
uid                 [ultimate] Gabriel Filion <>
uid                 [ultimate] Gabriel Filion <>
sub   4096R/0x59BC891D96B9EF51 2014-06-11 [expires: 2016-06-10]
      Key fingerprint = CB3D 48CE 55CD 1FAB B1E4  D0C3 59BC 891D 96B9 EF51
sub   4096R/0xC613C0506BBF1403 2014-09-18
      Key fingerprint = 39C9 47C6 48F4 664C FFBB  C83A C613 C050 6BBF 1403
sub   4096R/0x595B733A8B95E6F1 2016-01-23
      Key fingerprint = D480 05C9 0B18 ABF7 965C  7E01 595B 733A 8B95 E6F1

Now that we have this information, we can adjust the environment variable. Monkeysphere's man page says that the variable takes a space-separated list of fingerprints, so I removed all spaces from the fingerprints in the output above:

export MONKEYSPHERE_SUBKEYS_FOR_AGENT="D48005C90B18ABF7965C7E01595B733A8B95E6F1 39C947C648F4664CFFBBC83AC613C0506BBF1403"
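If you'd rather not remove the spaces by hand, the shell can do it for you. This little sketch just strips spaces from a fingerprint string copied from gpg's output (the fingerprint is the new subkey's, from the listing above):

```shell
#!/bin/sh
# Strip the spaces from a fingerprint as copied from gpg's output, so it
# can be pasted into MONKEYSPHERE_SUBKEYS_FOR_AGENT.
fpr="D480 05C9 0B18 ABF7 965C  7E01 595B 733A 8B95 E6F1"
fpr_nospace=$(echo "$fpr" | tr -d ' ')
echo "$fpr_nospace"
```

This prints D48005C90B18ABF7965C7E01595B733A8B95E6F1, ready for the export line above.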

Finally, we can export the new subkey to the ssh agent:

monkeysphere s

After this, the output of ssh-add -L should show the same key material as monkeysphere u ''

Revoke old subkey

Now we can revoke the old subkey. Doing so will not stop monkeysphere from exporting it to ssh-agent. It simply creates a public revocation object on the subkey's public part.

gpg --edit-key 0xC28592496BABC122
> key 2
> revkey
> save

(optional) publish updated PGP key to key servers

If you are using the public key servers for publishing your public key material, now is a good time to send your updated key.

gpg --send-keys 0xC28592496BABC122

Revoked subkeys that are published to key servers won't get imported by monkeysphere-authentication update-users anymore; they will actually get removed from computers (after all, that's the point of monkeysphere: to import only public keys that are valid). So once this runs on computers to which you should have access, only your new subkey should be present on them.

Of course, for this to actually happen you will have to wait for propagation to happen between the key servers.

If you use other means of publishing keys, you'll have to send your updated public key through the right communication channel for monkeysphere to pick up the update.

Install new key everywhere

If you're only using your subkey for monkeysphere-enabled computers, then you're all done! But if you're installing this same public key on computers that are not using monkeysphere (e.g. the traditional authorized_keys way), you'll have to install your new key everywhere and remove the old one.

You can get your new public key in a format that's usable with authorized_keys with:

monkeysphere u ''

Clear out old subkey

Once you're certain that the old key is not installed anywhere anymore, you can stop exporting it to your ssh-agent. For this, we'll change the environment variable again and remove the old subkey's fingerprint.

export MONKEYSPHERE_SUBKEYS_FOR_AGENT="D48005C90B18ABF7965C7E01595B733A8B95E6F1"

You can then clean out the old key from your running agent. First, manually export the new value of the variable you just set in your ~/.bashrc. Then remove keys from your agent (you might want to be more careful here and use ssh-add -d to remove individual keys if you have other identities present in your ssh-agent). If, like me, you blasted everything out, re-export the key material from monkeysphere:

ssh-add -D
monkeysphere s

Rotation completed!

How should I order things in ssh config

This may seem super obvious for some people, but I've actually just discovered this for myself and I think documentation doesn't make this super easy to know. This discovery solved some of my woes with configuring my ssh client.

Here's a motto that you should keep in mind when modifying your ssh_config file:

In ssh_config, specific comes first and generic last.

OpenSSH parses the ssh_config file from top to bottom, and for each option it keeps the first value obtained; blocks further down that match the same host cannot override it. In that sense, having wildcard blocks at the end of the file (or at least after all other hosts they can match) makes sense, since such a block will only set an option for matching hosts if it hasn't already been set above.
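As a made-up example (the host names below are invented for illustration), a block for a specific host must come before the wildcard block for its domain, otherwise its settings would never win:

```
# ~/.ssh/config -- specific comes first, generic last
Host special.example.com
    User git
    Port 2222

# Catch-all: only supplies options not already set above
Host *.example.com
    User admin
```

With this ordering, connecting to special.example.com uses user git on port 2222, while any other host under example.com falls through to user admin, because the first obtained value for each option is the one kept.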

Debian jessie live image

A quick note to people who want to use the Debian live jessie image (standard, no X environment):

Auto-login has been broken since wheezy, so you now need to log in manually for that image to be useful. The credentials are:

  • User: user
  • Password: live

With this login, you can then sudo to perform any task you want.

Bash random stuff

With bash, there are lots of things that you can do. Some of them make GUIs look like interfaces for kids. Some others are not super useful, but intellectually fun.

Here's a random dump of things I've kept around as notes.

Who's using that?

To know if files or directories are being used by programs, two commands are super useful: fuser and lsof.


To know the PIDs of programs that have a certain file in their file descriptors, and the user names under which they are running:

fuser -u /var/log/mail.log

To show PIDs using any file under a mounted filesystem:

fuser -m /srv/

Create a screen session on a serial device, but only if nothing is already using it. Otherwise, try reconnecting to the current screen session:

if fuser -s /dev/ttyUSB2; then screen -x; else screen /dev/ttyUSB2 115200; fi


To list all files that are open by a certain process ID:

lsof -p 4194

To get all files open by a certain user:

lsof -u joejane

To see all established IPv4 connections from a certain process ID:

lsof -i 4 -a -p 31936

List all processes that have an open connection or listening socket on the SSH port:

lsof -i :22

Getting rid of all spaces in a tree of files

# this trick depends on bash features
# this command doesn't take any argument. it'll work on the current working
# directory and all of its subdirectories
nomorespace () {
  local i j
  for i in *; do
    j=${i// /_}
    [ "$i" = "$j" ] || mv "$i" "$j"
  done
  for i in */; do
    [ -d "$i" ] && { pushd "$i" > /dev/null; nomorespace; popd > /dev/null; }
  done
}
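To see the rename logic in action without risking real files, here's a self-contained sketch that exercises the same space-to-underscore recursion in a throwaway directory (the file names are invented):

```shell
#!/bin/bash
# Same rename logic as nomorespace above, run in a scratch directory.
nomorespace () {
  local i j
  for i in *; do
    j=${i// /_}
    if [ "$i" != "$j" ]; then mv "$i" "$j"; fi
  done
  for i in */; do
    if [ -d "$i" ]; then
      pushd "$i" > /dev/null
      nomorespace
      popd > /dev/null
    fi
  done
}

# Build a throwaway tree containing spaces, then clean it up.
tmp=$(mktemp -d)
mkdir "$tmp/sub dir"
touch "$tmp/a file.txt" "$tmp/sub dir/another file"
( cd "$tmp" && nomorespace )
find "$tmp"
```

After the run, find shows a_file.txt and sub_dir/another_file: every space in the tree has been replaced by an underscore.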

Switching file encoding

Sometimes it's useful to switch files that you get from the internet from one encoding to another that's more useful for you.

Transform flac files into ogg files

# This expects files to have the track number at the start of the file
# name, followed by a dash, like this:
# 01-Track_title.flac
for i in *.flac; do
  track=$(echo "$i" | sed -e 's/^\([0-9][0-9]\).*/\1/')
  title=$(basename "$i" .flac | sed -e 's/^[0-9]\+-//' -e 's/_/ /g')
  flac -sdc "$i" | oggenc -a "Ali Farka Touré" -l "The river" -N "$track" -t "$title" -o "$(basename "$i" .flac).ogg" -
done

Transform m4a files into ogg files

# Same expectations for the file name as above
for i in *.m4a; do
  track=$(echo "$i" | sed -e 's/^\([0-9][0-9]\).*/\1/')
  title=$(basename "$i" .m4a | sed -e 's/^[0-9]\+-//' -e 's/_/ /g')
  mplayer -quiet -vo null -vc dummy -ao pcm:waveheader:file="rawaudio.wav" "$i"
  oggenc -a "Aphex twin" -l "Drukqs" -N "$track" -t "$title" -o "${track}-$(echo "$title" | sed -e 's/ /_/g').ogg" rawaudio.wav
  rm -f rawaudio.wav
done

Redefining builtin commands

This is rather more fun than useful, but I found it on a site that was instructing about what you can do when the infamous "rm -rf /" was run on a server and you need to salvage what you can from the remains of the explosion.

ls() {
  [ "x$1" == "x-a" ] && ALL=".*"
  for i in $ALL *; do echo "$i"; done
}

cat() {
  while read line; do echo "$line"; done < "$1"
}

Bash ugliness of the day

Here's the Bash ugliness of the day:

Say you've got a list of items separated by newlines, and you'd like to concatenate them into a single string, enclosing each item in single quotes so that spaces and weird characters are preserved safely.

You do the following:

for i in $list_of_things; do
   resulting_string="${resulting_string}'${i}' "
done
# All good here, look at the contents.
# It should be giving you what you expect:
echo "$resulting_string"

Now try supplying that to a command:

some_command $resulting_string
# OOPS, some_command actually receives the single quotes as part of its
# arguments: word splitting doesn't re-parse the quote characters.

How do you fix that? Obvious, isn't it: you either eval or enclose the whole thing in another shell!

sh -c "some_command $resulting_string"
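The whole gotcha as a runnable sketch, with a made-up list of items and printf standing in for some_command:

```shell
#!/bin/bash
# Build the quoted string as above, from a made-up list of items.
list_of_things="one two three"
resulting_string=""
for i in $list_of_things; do
  resulting_string="${resulting_string}'${i}' "
done
echo "$resulting_string"   # looks right when printed

# Unquoted expansion keeps the quote characters inside the arguments:
printf '<%s>' $resulting_string
echo
# Re-parsing through another shell strips them, as expected:
sh -c "printf '<%s>' $resulting_string"
echo
```

The first printf receives the literal quotes ( <'one'><'two'><'three'> ), while the one run through sh -c gets clean arguments ( <one><two><three> ).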

I really don't like shell scripting...


This blog is powered by ikiwiki.