lelutin.ca
https://lelutin.ca//
ikiwiki
2018-01-02T23:41:19Z
Creating and maintaining vagrant base boxes
https://lelutin.ca//posts/Creating_and_maintaining_vagrant_base_boxes/
2017-12-14T21:51:16Z
2016-12-13T08:09:19Z
<p>I finally got the hang of vagrant base box creation. I used to download images
from people I trusted, but then I was always stuck either waiting for new
releases to get captured into a box or simply not using a certain OS in vagrant
at all.</p>
<p>But it's not complex at all to create your own, as the vagrant site documents:</p>
<p><a href="https://www.vagrantup.com/docs/boxes/base.html">https://www.vagrantup.com/docs/boxes/base.html</a></p>
<p>This documentation page got me started. So here are the commands and
instructions I use to create boxes (unfortunately I couldn't finish the
instructions for CentOS since I'm still hitting a bug where <code>ip</code> doesn't show
the network interface when starting up instances once the box is imported).</p>
<h1>Creating a base box</h1>
<p>The idea here is really simple. You need to manually create a VM and install
the OS of your choice in it. Make sure it uses DHCP, has OpenSSH and the
configuration manager of your choice installed, that the insecure vagrant
public key lets you log in as the "vagrant" user inside the VM, and finally
that you can sudo to root from the "vagrant" user without a password.</p>
<p>We also perform some other tricks and install other things that might be
useful for our use cases. My base boxes are used for testing puppet modules so
I want them to be as untouched as possible, but you can install whatever you
need to make them smell exactly like your production setups.</p>
<h2>Debian box (currently jessie)</h2>
<p>Start by downloading a <code>netinst</code> iso from the debian web site. If you want a
box using the <code>testing</code> branch of packages, you'll need to install a stable
release first and then upgrade to testing right before cleaning up and
packaging it into a box. The OS upgrade is out of the scope of this document.</p>
<p>In virt-manager, I usually create a VM with 512MB of RAM and 20GB of disk
(qcow2, since I'm using vagrant-libvirt). Then I just follow the installer's
instructions. In tasksel, uncheck all options except "SSH server" and "standard
system utilities". Place all files in one partition (not encrypted, otherwise
it's impractical to update packages in the base box by merging a snapshot). Set
the root password to "vagrant", then choose "vagrant" as the user name and set
its password to "vagrant".</p>
<p>On the host:</p>
<pre><code># You'll have to know which IP the VM configured once booted up after install
ssh-copy-id -o UserKnownHostsFile=/dev/null -i ~/.vagrant.d/insecure_private_key.pub vagrant@192.168.122.56
</code></pre>
<p>Inside the VM:</p>
<pre><code>su -
sed -i 's/^\(GRUB_TIMEOUT\)=.*$/\1=1/' /etc/default/grub
update-grub2
echo UseDNS no >> /etc/ssh/sshd_config
apt install -y sudo
echo "vagrant ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/vagrant
logout; sudo -i # test that sudo is working OK and without password
</code></pre>
<p>Here you can optionally upgrade the OS if you want to use testing or sid
instead.</p>
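As a rough sketch, the upgrade boils down to pointing apt at testing and dist-upgrading. The snippet below shows the sources.list rewrite on a scratch copy so nothing on the host is touched; the mirror URLs are illustrative. On the real VM you would edit <code>/etc/apt/sources.list</code> itself and then run <code>apt update &amp;&amp; apt -y dist-upgrade</code>:

```shell
# Sketch only: rewrite a jessie sources.list to track testing.
# Done on a scratch file here; on the VM, edit /etc/apt/sources.list directly.
scratch=$(mktemp)
cat > "$scratch" <<'EOF'
deb http://httpredir.debian.org/debian jessie main
deb http://security.debian.org/ jessie/updates main
EOF
sed -i 's/jessie/testing/g' "$scratch"
cat "$scratch"
```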
<p>Still in the VM:</p>
<pre><code>apt install -y puppet rsync
systemctl disable puppet.service
# Now you can install whatever else that you need. I usually install vim-nox here.
apt-get clean
dd if=/dev/zero of=/EMPTY # reclaim empty space; this operation needs 20GB on the host
rm /EMPTY
history -c; history -w
logout # go back to the vagrant user
history -c; history -w
sudo -i
shutdown -h now
</code></pre>
<p>The main part of the work is done. Now follow instructions in the section below
about packaging and importing the base box.</p>
<h2>FreeBSD box</h2>
<p>This procedure was put together using FreeBSD 11.</p>
<p>Notice: I use the ports system to install software, which takes insanely long
and needs constant attention since some software requires you to choose
compilation options. There is probably a better way, but I'm steering away from
FreeBSD packages for now because of their nasty default compilation options.</p>
<p>Start by downloading an image that ends with <code>-bootonly.iso</code>. In the installer,
choose the keyboard layout of your preference. Choose guided ZFS partitioning
(or, if you don't want ZFS, guided normal). Don't set up any crypto, since that
would make upgrading software in the base box by merging a snapshot impractical
later. Set the network to DHCP and type in a hostname that'll be somewhat valid
(e.g. a real FQDN even if it won't resolve). Don't activate IPv6 (that choice
might be reviewed in the future, depending on whether the local network on the
laptop is IPv6). Don't activate any hardening features. Choose sshd in the list
of software to install on the system. Set the root password to "vagrant".
Choose to create a user named "vagrant" with a password of "vagrant".</p>
<p>On the host:</p>
<pre><code># You'll have to know which IP the VM configured once booted up after install
ssh-copy-id -o UserKnownHostsFile=/dev/null -i ~/.vagrant.d/insecure_private_key.pub vagrant@192.168.122.56
</code></pre>
<p>Inside the VM:</p>
<pre><code>su -
echo 'autoboot_delay="1"' >> /boot/loader.conf
echo UseDNS no >> /etc/ssh/sshd_config
cd /usr/ports
# Installing bash is optional and might be avoided to have a system that's more
# "pure" or "vanilla". But I personally hate csh
(cd shells/bash; make install clean)
# Following line needed for bash
echo "fdesc /dev/fd fdescfs rw 0 0" >> /etc/fstab; mount /dev/fd
chsh -s /usr/local/bin/bash; chsh -s /usr/local/bin/bash vagrant
(cd security/sudo; make install clean)
echo "vagrant ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/vagrant
logout; sudo -i # test that sudo is working OK and without password
# This is SO annoying. Why is that enabled by default?
# Note: FreeBSD's sed requires a backup suffix after -i (use '' for none)
sed -i '' -e '/freebsd-tips/d' ~vagrant/.profile
(cd net/rsync; make install clean)
# Here you can choose other available versions of puppet. currently 3.7, 3.8 or 4
(cd sysutils/puppet38; make install clean)
# Now you can install whatever you want. I usually do: (cd editors/vim-lite; make install clean)
history -c; history -w
shutdown -p now
</code></pre>
<p>Note: since we're using ZFS, cleaning up disk space by writing a huge file
and then deleting it doesn't work, because compression is enabled by default.</p>
<p>You're done with installing your box. Now follow the instructions in the
section below about Packaging and importing the base box.</p>
<h2>Packaging and importing the base box</h2>
<p>Once the VM is installed we need to package it into a box and then import that
as a base box.</p>
<p>These instructions are made for my setup that uses vagrant-libvirt. You'll have
to find out how to perform disk space reclaiming and box export with virtualbox
or other providers. I believe these instructions are very easy to find at least
for virtualbox.</p>
<p>Obviously the image name and path where you store the final copy of the image
must be changed to fit your current setup.</p>
<p>On the host:</p>
<pre><code>sudo -i
cd /var/lib/libvirt/images
qemu-img convert -O qcow2 stretch.qcow2 ~myusername/dev/vm/stretch.qcow2
chown myusername:mygroup ~myusername/dev/vm/stretch.qcow2
logout
cd ~/dev/vm
~/.vagrant.d/gems/gems/vagrant-libvirt-*/tools/create_box.sh stretch.qcow2
vagrant box add stretch.box --name stretch
</code></pre>
<p>Now you can create a vagrant project with the new base box. Test that it works
correctly, and then you can remove the qcow2 image in your home dir. You can
also remove the base box, or you can store it for future uses or even publish
it so that others can use it!</p>
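For the smoke test, a minimal Vagrantfile along these lines should be enough (the box name matches the <code>vagrant box add</code> command above; adapt it if you named yours differently):

```ruby
# Minimal Vagrantfile sketch for testing the freshly imported base box.
Vagrant.configure("2") do |config|
  config.vm.box = "stretch"
end
```

Then <code>vagrant up</code> followed by <code>vagrant ssh</code> should drop you into a new instance of the box.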
<p>You can also remove the VM that was manually created.</p>
<h1>Upgrade packages inside of the box</h1>
<p>From time to time it's useful to upgrade packages/software inside the base
boxes, to avoid downloading too much during your tests or even hitting "package
not found" errors when the version you're requesting doesn't exist anymore.</p>
<p>Again, these instructions are meant for vagrant-libvirt users. However, if I
remember correctly it's even easier to do with virtualbox and snapshots.</p>
<p>I've scripted the following procedure since it's very mechanical and has no
real variability. Check out <a href="https://lelutin.ca//posts/files/box_update.sh">box update.sh</a>.</p>
<p>To perform upgrades, make sure you're using a vagrant project that:</p>
<ul>
<li>doesn't install stuff inside the VM with a configuration manager</li>
<li>doesn't use any additional network interface</li>
<li>doesn't have any files in the vagrant project other than the Vagrantfile</li>
</ul>
<h2>Backup</h2>
<p>Yes, you are possibly going to break the base box. So it's a great idea to
start by taking a backup of the base box disk image before starting:</p>
<pre><code>sudo cp /var/lib/libvirt/images/jessie_vagrant_box_image_0.img .
</code></pre>
<p>Warning: Don't place the image backup inside the directory of the vagrant
project you're using for running the upgrades. This directory gets rsync'ed to
the VM when starting up.</p>
<p>In case of a total meltdown of the base box, run "vagrant destroy" on all VMs
that use this base box, then overwrite the file inside <code>/var/lib/libvirt/images/</code>
with the backup you took.</p>
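The restore itself is just an overwrite of the image file. Here's a stand-in demonstration using scratch files in place of the real paths; on the host you would use sudo and the file under <code>/var/lib/libvirt/images/</code>:

```shell
# Stand-in demonstration of restoring the backup over the broken image.
image=$(mktemp)    # stands in for /var/lib/libvirt/images/jessie_vagrant_box_image_0.img
backup=$(mktemp)   # stands in for the backup copy taken before upgrading
echo "broken state" > "$image"
echo "pristine state" > "$backup"
# On the real host: sudo cp jessie_vagrant_box_image_0.img /var/lib/libvirt/images/
cp "$backup" "$image"
cat "$image"
```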
<h2>Perform the upgrade</h2>
<p>These instructions are written for Debian, but the upgrade commands run
inside the VM can easily be changed to perform upgrades on any system.</p>
<p>On the host:</p>
<pre><code>vagrant up
vagrant ssh
</code></pre>
<p>Inside the VM:</p>
<pre><code>sudo sh -c "apt update && apt -y upgrade && apt -y dist-upgrade && apt-get clean"
sudo bash -c "history -c; history -w"; history -c; history -w
</code></pre>
<p>On the host:</p>
<pre><code>cat ~/.vagrant.d/insecure_private_key.pub | vagrant ssh -c "cat > ~/.ssh/authorized_keys"
vagrant halt
# The image file name must be the snapshot that corresponds to the vagrant
# instance you've spun up. This should be shown in the output of vagrant when
# running vagrant up at the beginning of this procedure.
sudo qemu-img commit /var/lib/libvirt/images/jessiepuppet_jessiepuppet.img
vagrant destroy
</code></pre>
<p>Done! Now you can start any VM using that base box and the upgrades will be
available to the new instances.</p>
Debugging crashes on debian - divide and conquer
https://lelutin.ca//posts/Debugging_crashes_on_debian_-_divide_and_conquer/
2018-01-02T23:41:19Z
2016-09-20T03:23:10Z
<h1>Prognosis</h1>
<p>Last month I was suffering from chronic laptop crashes after I decided to run
a long-overdue <code>apt dist-upgrade</code> (I run sid). I knew that software was causing
this issue: before the dist-upgrade, the laptop had been happily running fine
for the last couple of years. My issue, though, was: what exactly was causing
such a painful thing as seemingly random but often-recurring crashes?</p>
<p>I wanted to diagnose. The situation got to a point where it was so aggravating
that I was using a desktop at the office instead of the laptop and I would ssh
into the laptop to access important information. This had to stop.</p>
<h2>Symptoms</h2>
<p>After that fateful <code>dist-upgrade</code>, my laptop started to show some visual
flashes, as if the screen blacked out and refreshed. Those flashes would only
occur when I typed on the keyboard, and I found they reproduced most easily
when I changed from one terminal to another in <code>Terminator</code> and started
typing.</p>
<p>After a certain amount of time (or typing) which to me seemed random, the
screen would turn black exactly when I typed any letter on the keyboard. A
one-pixel vertical line would go crazy zigzagging colours on the left side, and
then after about 10s the whole machine would shut down by itself.</p>
<h2>Frequency (or repetitiveness)</h2>
<p>After some time suffering from this, I could see a clear pattern: the more I
used the keyboard, the bigger the chances of a crash.</p>
<p>I then found out that if I purposefully caused visual flashes by going back
and forth between two terminal windows and typing all the time, I could
reproduce the crash fairly easily.</p>
<p>Great: with repetitiveness comes easy testing.</p>
<h1>Treatment</h1>
<p>My first hypothesis was that it might be caused by the video driver. However,
when I downgraded it to the latest version installed before the
<code>dist-upgrade</code> that brought the symptoms, they didn't go away.</p>
<p>I tried a couple more packages like gnome components. I tried running Xwayland
instead of Xorg. I tried using fluxbox instead of gnome. Nothing would cut it:
the crashes were still there.</p>
<p>I also tried installing a fresh debian jessie from a debian live image, and
then upgrading that to debian sid. The issue reproduced in this setup too.</p>
<p>The problem was that during the <code>dist-upgrade</code> mentioned in the prognosis
section above, so many packages were upgraded at once that I'd go crazy triaging
them all, especially since some of them would refuse to downgrade because of
dependency issues.</p>
<h2>Strong medicine</h2>
<p>So I was completely fed up with this issue and I had a means of easily
reproducing the crash. Strong medicine would be needed: bisecting between the
last known good state (the upgrade before last) and the first known bad state
(the last upgrade).</p>
<p>The main tool for this bisection would be the awesome service from debian that
keeps 4 snapshots per day of the whole debian package repository:</p>
<p>http://snapshot.debian.org/</p>
<p>I used the fresh debian sid install that reproduced the crashes so that I
wouldn't be messing around too much with my main setup.</p>
<h2>Setting up debian sid for bisection</h2>
<p>In order to be able to jump back and forth in time, some preparation was
needed. First, since I would be upgrading <em>and</em> downgrading, I needed to tell
apt to just follow along. The unstable release would be moving around with
sources from snapshot.debian.org, so a preferences file would suit the purpose
nicely:</p>
<pre><code>Package: *
Pin: release a=unstable
Pin-Priority: 1001
</code></pre>
<p>Then to make things easier, I needed the bisection process to be somewhat
scripted. First, I listed all of the days in between the good and bad states
(including the good and bad state days to mark boundaries). Then I marked the
oldest date by adding " GOOD" to the right of the date, and similarly the last
date would be marked with " BAD". This was done manually in a text file called
<code>dpkg-bisect</code>.</p>
<p>The following figure shows a shortened version of the file (assume dates
continue sequentially where content is ellipsized):</p>
<pre><code>06-14 GOOD
06-15
06-16
06-17
[...]
08-04
08-05
08-06
08-07 BAD
</code></pre>
<p>Then I wrote two functions in <code>.bashrc</code> to make it easy to mark a date as
either good or bad, respectively. Those functions would accept a date as
argument and, depending on which function I called, simply add a mark of
" GOOD" or of " BAD" on the line that started with that date:</p>
<pre><code>function good () {
sed -i "s/^\($1\).*\$/\1 GOOD/" dpkg-bisect
}
function bad () {
sed -i "s/^\($1\).*\$/\1 BAD/" dpkg-bisect
}
</code></pre>
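To illustrate, here is what those helpers do to a miniature <code>dpkg-bisect</code> file (scratch directory and made-up dates):

```shell
# Worked example of marking dates with the good/bad helpers defined above.
good () { sed -i "s/^\($1\).*\$/\1 GOOD/" dpkg-bisect; }
bad ()  { sed -i "s/^\($1\).*\$/\1 BAD/" dpkg-bisect; }

cd "$(mktemp -d)"
printf '06-14 GOOD\n06-15\n06-16\n06-17 BAD\n' > dpkg-bisect
good 06-15
bad 06-16
cat dpkg-bisect
# 06-14 GOOD
# 06-15 GOOD
# 06-16 BAD
# 06-17 BAD
```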
<p>Then I created aliases in <code>.bashrc</code> to make it easier to update the package
lists and to clean up after an upgrade/downgrade. The cleanup alias shows
packages whose installed version doesn't come from the current archives, i.e.
packages left over from a different day's snapshot source. The update alias
passes an option to <code>apt-get</code> to disregard errors about a source being
outdated: sources on snapshot.debian.org are only valid for a dozen or so
days:</p>
<pre><code>alias cleanup="aptitude search '?narrow(?not(?archive(\"^[^n][^o].*$\")),?version(CURRENT))'"
alias update='sudo apt-get -o Acquire::Check-Valid-Until=false update'
</code></pre>
<p>Finally, to make bisection easier, I wrote another alias in <code>.bashrc</code> that
would remove all lines until the last occurrence of a "GOOD" marker and all
lines after the first occurrence of a "BAD" marker, and print the date that was
right in the middle of what was left:</p>
<pre><code>alias bisect="sed '/BAD\$/Q' dpkg-bisect | tac | sed '/GOOD\$/Q' | tac | awk '{ lines[NR]=\$0; } END { print lines[int(NR/2)+1] }'"
</code></pre>
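Running the pipeline from that alias over a small sample file shows the idea: everything up to the newest GOOD and from the oldest BAD is trimmed away, and the middle of the remaining dates is printed:

```shell
# The bisect pipeline, run directly (aliases don't expand in scripts).
cd "$(mktemp -d)"
printf '%s\n' '06-14 GOOD' 06-15 06-16 06-17 08-04 08-05 08-06 '08-07 BAD' \
  > dpkg-bisect
sed '/BAD$/Q' dpkg-bisect | tac | sed '/GOOD$/Q' | tac \
  | awk '{ lines[NR]=$0 } END { print lines[int(NR/2)+1] }'
# prints: 08-04
```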
<h2>Running bisection</h2>
<p>With all this in place, I could start the process, which boils down to:</p>
<ul>
<li>call <code>bisect</code> and find the given date on snapshot.debian.org. For
consistency, I would always choose the first snapshot of that day.
<ul>
<li>if the result is empty, then stop: the file would have one good and one bad
line just next to each other. This bad date would be the first point
where the issue was introduced.</li>
</ul>
</li>
<li>change <code>/etc/apt/sources.list</code> to list only the snapshot.debian.org URL
corresponding to the date from previous point</li>
<li>call <code>update</code>, then upgrade the system</li>
<li>call <code>cleanup</code> and figure out how to make state consistent (old libs can be
removed, conflicts should be fixed)</li>
<li>reboot</li>
<li>try to reproduce the bug</li>
<li>mark the date as either good or bad depending on the result of previous point</li>
<li>repeat process</li>
</ul>
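For reference, the sources.list for one bisection step contains a single snapshot URL. A hypothetical example, using the timestamp format that snapshot.debian.org serves (the timestamp comes from the snapshot you picked for that day):

```
deb http://snapshot.debian.org/archive/debian/20160719T041830Z/ unstable main
```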
<p>When I finally found the fateful day that introduced the error, I drilled
down into that day by adding the times of the 4 snapshots between the last good
line and the first bad line, e.g.:</p>
<pre><code>[...]
07-16
07-17
07-18 GOOD
07-19-04:18:30 GOOD
07-19-10:13:20
07-19-16:21:50
07-19-22:21:00
07-20-04:39:05 BAD
07-21
07-22 BAD
07-23
[...]
</code></pre>
<p>Then I continued to bisect to find the exact snapshot within that day that
brought the problem. Arguably, I could have just listed all snapshots at the
beginning of the process.</p>
<h1>Diagnosis</h1>
<p>The snapshot that introduced the crashes was upgrading only 5 packages. Among
these, one stood out from the rest as a possible cause, so I tried to downgrade
it first, and voilĂ ! I had found the source of the bug. It was
<code>xserver-xorg-core</code>.</p>
<p>So I downgraded this package on my main setup and the bug was gone. I put it
on hold so it wouldn't re-upgrade automatically, and finally I reported a bug
on the package,
<a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=837451">#837451</a>.
Apparently, a patch for Intel-specific hardware was introduced in the version
that brought the crashes, and I suspect this patch is what was causing my
woes.</p>
<p>This issue was really annoying, but learning how to run a binary search on
debian archive snapshots was very interesting. It's a great tool for finding a
remedy to software bugs that make you lose hair.</p>
Migrating to gpg2.1
https://lelutin.ca//posts/Migrating_to_gpg2.1/
2016-09-02T17:11:45Z
2016-09-02T07:35:42Z
<p>Last month the debian package for gnupg in the unstable branch was changed
from providing version 1.4 to providing 2.1. Users of debian jessie, and in
principle even stretch, shouldn't be concerned by this change. But for sid
users this is a major change, and some people might need a bit of guidance
through the process... well, at least <em>I</em> needed some. Here's a post about the
issues I encountered and how I fixed them, or my understanding of their cause.</p>
<p>The change needs some readjusting, but it is for good reasons. The use of
agents is interesting for security reasons, and the new storage format is way
faster than the one in gpg 1.4. On top of that, the codebase for 2.1 is
apparently in much better shape.</p>
<p>First and foremost, people who read this post should probably familiarize
themselves with the
<a href="https://gnupg.org/faq/whats-new-in-2.1.html">changes that the version brings</a>.</p>
<h1>Auto-migration of keys happened long ago</h1>
<p>First thing that I got sorted out was this: I had had gnupg2 installed on my
computer for quite a while already. It was installed automatically as a
dependency of password-store. Normally gpg2 migrates your public and secret
keyrings to the new storage format for you. This is super neat, but it happened
when I first used password-store, and I didn't switch to using gpg2 at that
point (oops).</p>
<h2>Re-importing secret keyring</h2>
<p>During the time I persisted in using 1.4, I created a new authentication
subkey for use with monkeysphere. This subkey was not automatically migrated
when I finally switched to gpg 2.1 through the package change. This is normal:
the automatic migration had happened a long while before.</p>
<p>The fix was easy: re-import the secret key material:</p>
<pre><code>gpg --import ~/.gnupg/secring.gpg
</code></pre>
<h2>Using the new, faster public key storage</h2>
<p>You also probably want to move your public keyring to the new storage format
which is way faster. As described in the GnuPG 2.1 release notes linked above,
you can achieve this with the following series of commands:</p>
<pre><code>cd ~/.gnupg
gpg --export-ownertrust >otrust.lst
mv pubring.gpg publickeys
gpg --import-options import-local-sigs --import publickeys
gpg --import-ownertrust otrust.lst
mv publickeys pubring.gpg
</code></pre>
<p>This will create a file named <code>pubring.kbx</code> which is the new storage file. The
above commands ensure that you properly import all public keys, public and
local signatures and keep your ownertrust intact. The file <code>pubring.gpg</code> is
then kept in place so that you can still use it with gpg1.</p>
<h1>Agents</h1>
<p>GnuPG 2.1 now relies heavily on agents. This is actually nice since only one
process holds onto your key material and the others simply ask the agent for
it. It also means that network access is totally segmented off into a process
of its own. So you have at least two agents: gpg-agent and dirmngr. Both are
started automatically by gpg commands, which is convenient.</p>
<h2>Dirmngr configuration</h2>
<p>In order to contact the wild, wild Internet, gpg will ask a new agent called
dirmngr to access the network and report its findings. You'll use it, among
other things, to search for keys and to publish your newly updated key with
those signatures you've acquired or UIDs you've added.</p>
<p>For this you need to configure dirmngr; see dirmngr(8) for a list of
options. You can set the same options in <code>~/.gnupg/dirmngr.conf</code>. For example
you can set the keyserver to <code>hkps://hkps.pool.sks-keyservers.net</code>.</p>
<p>If dirmngr doesn't want to start, the only info you'll get when trying to
search for keys with gpg is that the connection to dirmngr timed out. This is
pretty annoying: you get no detail about why it just won't work.</p>
<p>In order to debug what's happening, you can run the dirmngr manually and it
should tell you what's wrong. For example, here we have a syntax error on a
config option:</p>
<pre><code>$ dirmngr
dirmngr[29289.0]: /home/gabster/.gnupg/dirmngr.conf:13: invalid option
</code></pre>
<p>Once it starts and tells you <code>OK Dirmngr 2.1.15 at your service</code> then you can
quit and try doing that gpg network operation you wanted to do.</p>
<h2>gpg-agent and SSH keys</h2>
<p>I've been using <a href="http://monkeysphere.info/">Monkeysphere</a> for
some time now to provide access to servers. With gpg 1.4, in order to be able
to use the key material, it was necessary to export it to the ssh-agent
process. Now it's possible to ask gpg-agent to expose the keys you want to SSH
and have it act as the ssh-agent process. This means there's no need to use
<code>monkeysphere s</code> anymore!</p>
<p>In order to do this you first need to enable this support for the agent. In
<code>~/.gnupg/gpg-agent.conf</code>, add the line:</p>
<pre><code>enable-ssh-support
</code></pre>
<p>Then restart the gpg-agent (you can kill the process and then start it again
with <code>gpgconf --launch gpg-agent</code>; if you're using systemd, see below).</p>
<p>Once this is done, you should see a socket file in
<code>/var/run/user/$uid/gnupg/S.gpg-agent.ssh</code>. We want to point ssh to this socket
so that it communicates directly with the gpg-agent. Add this to your preferred
shell initscript; in my case <code>~/.bashrc</code>:</p>
<pre><code>if [ "${gnupg_SSH_AUTH_SOCK_by:-0}" -ne $$ ]; then
export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
fi
</code></pre>
<p>The if block is there because if you run <code>gpg-agent --daemon /bin/bash</code> for
example, the environment will be set by the agent itself and the variable that
we're checking against will be set. The SSH_AUTH_SOCK variable tells SSH to
talk to this socket for communicating with the ssh-agent.</p>
<p>Since SSH doesn't pass the current TTY name to gpg-agent either, we need to
set another variable to communicate it. This is useful so that gpg-agent knows
where to send password prompts. Again in your favorite shell initscript, add
the following:</p>
<pre><code>export GPG_TTY=$(tty)
</code></pre>
<p>Now we only have two more details to sort out.</p>
<h3>Configure which keys/subkeys are exposed to SSH</h3>
<p>The gpg-agent doesn't expose all of your key material to the SSH process. In
fact you need to specify which sub-keys should be exposed. For this you need to
find the subkey's keygrip. Run this to find it:</p>
<pre><code>$ gpg -k --with-keygrip <your@uid.com>
pub rsa4096/0xC4ADA67875247FCF 2011-04-25 [SC] [expires: 2016-11-03]
Key fingerprint = 5F73 EFCB 02FC E345 C107 7477 C4AD A678 7524 7FCF
Keygrip = BF3DB7C51C596974DF58DC5860BE9F88A12CA19F
uid [ unknown] Perception <perception@rt.koumbit.net>
uid [ undef ] Network operations center <noc@koumbit.org>
uid [ full ] Koumbit frontdesk <info@koumbit.org>
uid [ full ] Koumbit support <support@koumbit.org>
uid [ unknown] Production <prod@rt.koumbit.net>
uid [ unknown] Services <services@rt.koumbit.net>
uid [ unknown] Koumbit sales <ventes@koumbit.org>
uid [ unknown] Facturation <facturation@rt.koumbit.net>
sub rsa4096/0x32873884B600AD97 2011-04-25 [E] [expires: 2016-11-03]
Keygrip = CD56911C4CE3173BD4D0AE5DCDBA29F738F14B39
sub rsa4096/0xDC837CFBE0D1124E 2015-08-20 [A]
Keygrip = C87589642DE00D1306350DD5C20F35C409427D45
</code></pre>
<p>In this example the key has one subkey with authentication capability (A).
We'll expose it to SSH by dropping the hexadecimal keygrip value on a line in
<code>~/.gnupg/sshcontrol</code>:</p>
<pre><code>echo "C87589642DE00D1306350DD5C20F35C409427D45 0" >> ~/.gnupg/sshcontrol
</code></pre>
<p>The trailing 0 in the echo above is the TTL of the key before SSH needs to
revalidate with gpg-agent. In this case, we set it to 0 so that it doesn't
expire: you'll get prompted for the key's password only upon first use.</p>
<p>Once this is done, you can verify that the key is exposed by listing keys with
<code>ssh-add</code>:</p>
<pre><code>$ ssh-add -l
4096 SHA256:7mb85m/biclOdJ4JB62rmrWe8nfV/Nwcmwop/Svdo3k (none) (RSA)
</code></pre>
<p>Success!</p>
<p>Once configuration is correctly established, you can also import ssh keys from
default files (e.g. <code>id_ed25519</code>) by using <code>ssh-add</code> without parameters.</p>
<h2>Automatically starting the gpg-agent</h2>
<p>When using gpg-agent to provide SSH with key material, you need to somehow
start the gpg-agent yourself. GnuPG commands will auto-start the agent for you,
but SSH doesn't know (and probably doesn't care) how to do this.</p>
<p>The simplest method is to call <code>gpg-connect-agent /bye</code> in your shell
initscript.</p>
<p>But you can also use systemd to do this for you which means the agent will be
started even if you don't have a terminal open. First you need to ensure you
have a package installed, since without it communication between gpg-agent and
your X session will be impossible:</p>
<pre><code>sudo apt install dbus-user-session
</code></pre>
<p>With this in place, you can enable the service for your user:</p>
<pre><code>gpgconf --kill gpg-agent
systemctl --user enable gpg-agent.service
systemctl --user start gpg-agent.service
</code></pre>
<h2>This is not the terminal you are looking for</h2>
<p>Now that you have a gpg-agent process running, you might notice that if you
move around from your X session to an SSH connection towards your computer
(some people do weird things... hey! my laptop is crashing all the time, but if
I ssh in it doesn't... don't judge!), the password prompts will not show up, or
will show up in the wrong place. gpg commands normally pass the display and tty
information to the agent so that things just work, but SSH doesn't pass this
information. In such a case you'll probably end up getting this utterly useless
message:</p>
<pre><code>sign_and_send_pubkey: signing failed: agent refused operation
</code></pre>
<p>So you might need to call this command to make the gpg-agent point to the right
place for the prompts:</p>
<pre><code>gpg-connect-agent updatestartuptty /bye
</code></pre>
<p>After this you should get prompted for passwords and you should be able to
connect to hosts with your keys.</p>
DebConf17 will be happening in Montreal
https://lelutin.ca//posts/Debcon17_will_be_happening_in_Montreal/
2016-02-29T22:18:18Z
2016-02-29T22:13:30Z
<p>It's just been decided: Montreal will be hosting DebConf in 2017!</p>
<p>After trying our luck last year, the <a href="https://wiki.debconf.org/wiki/DebConf17/Bids/Montreal">current Montreal bid
team</a> was chosen to
organise the event. The competing bid from Prague was really strong too, so the
decision from the organising chairs only became clear towards the end of the
meeting.</p>
<p>Our team currently counts nine people, which is more than last year, and we
hope to enroll more help as time advances towards the conference.</p>
<p>I'm thrilled to be able to bring such an interesting event to this city and
look forward to working with everybody involved. I'm hoping this will be a
great experience for myself and all of the others involved. At least for
myself, it's one way of giving back to the community around the distribution I
use on my personal computers and in my everyday job.</p>
<p>If you feel like giving a hand with organization, you can come and say hi in
the <a href="https://lists.debian.org/debian-dug-quebec/">local debian users group mailing
list</a>, or join our
<a href="https://webchat.oftc.net/?channels=debian-quebec">IRC channel: #debian-quebec on the OFTC network</a>.</p>
Using password-store with an alternate directory
https://lelutin.ca//posts/Using_password-store_with_an_alternate_directory/
2018-01-02T23:41:19Z
2016-02-21T00:57:54Z
<p><a href="https://www.passwordstore.org/">pass</a> (or password-store) is a neat program
that helps you manage passwords in files encrypted with gpg.</p>
<p>The command as it is already offers a lot: you can encrypt different
directories to different sets of keys to have finer-grained control over
sharing passwords; you can also use git to fetch and push password changes
between people.</p>
<p>However, it's built without the idea of having multiple password
repositories. It is possible, but you have to know a little trick. This post
describes that trick, which is already well known and published out there, but
adds to it the possibility of using bash-completion with it.</p>
<h2>The trick</h2>
<p>That's super simple and it's documented in the pass(1) man page. In order to
use an alternative password store, you need to set an environment variable:</p>
<pre><code>export PASSWORD_STORE_DIR=~/.password-store-alternate
</code></pre>
<p>Then subsequent calls to <code>pass</code> will interact with this alternate store.</p>
<h2>Setting up an alias to make it easier to interact with multiple stores</h2>
<p>Then you think: I don't want to always set and unset the environment variable!</p>
<p>Easily fixed: just create an alias that sets the variable only for each call
to pass:</p>
<pre><code>alias altpass='PASSWORD_STORE_DIR=~/.password-store-alternate pass'
</code></pre>
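<p>The reason this works without leaking: the <code>VAR=value command</code> form sets the variable for that single invocation only. A minimal sketch of the behaviour, using a made-up stand-in function instead of <code>pass</code>:</p>

```shell
# Stand-in for pass: just reports which store directory it would use.
show_dir() { echo "store: ${PASSWORD_STORE_DIR:-<default>}"; }

# The assignment applies to this one call only...
PASSWORD_STORE_DIR=~/.password-store-alternate show_dir

# ...so a plain call afterwards is back to the default.
show_dir
```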
<h2>Using bash-completion with your alias</h2>
<p>Ok here comes the new detail (what was above is common knowledge within the
pass users community). That alias suffers from not being able to auto-complete
sub-command names or password entry/directory names. You can enable it by
adding the following contents to the <code>~/.bash_completion</code> file (create it if it
doesn't exist):</p>
<pre><code># Add alias for alternate password-store
. /usr/share/bash-completion/completions/pass
_altpass() {
# trailing / is required for the password-store dir.
PASSWORD_STORE_DIR=~/.password-store-alternate/ _pass
}
complete -o filenames -o nospace -F _altpass altpass
</code></pre>
<p>There you have it. Now start a new terminal, and try using tab to
auto-complete. The original <code>pass</code> command will still be auto-completing for
the default password store.</p>
SSH key rotation with monkeysphere
https://lelutin.ca//posts/SSH_key_rotation_with_monkeysphere/
2018-01-02T23:41:19Z
2016-01-25T06:48:06Z
<p>It's said to be a good practice to sometimes ro-ro-rotate your keys. It
shortens the time span during which your communications might be snooped upon
if your key was compromised without your knowledge.</p>
<p>It's especially interesting to do it whenever there's a security issue like the
one that was disclosed last week,
<a href="https://www.qualys.com/2016/01/14/cve-2016-0777-cve-2016-0778/openssh-cve-2016-0777-cve-2016-0778.txt">cve-2016-0777 and cve-2016-0778</a>,
for which keys might have been exposed for 5 years to extraction by a
malicious server.</p>
<p>I use monkeysphere to link my pubkey material to the PGP web of trust. This
makes it super easy to make the pubkey available, and to have servers verify
that the key they're getting was actually validated by some peers.</p>
<p>Here's how I rotated my key pair with monkeysphere.</p>
<h1>Generate new subkey</h1>
<p>First things first. Since we want to rotate keys, we need a new key.
Monkeysphere does this for us and makes it super easy.</p>
<p>Before actually doing so, though, let's take a look at my key before the process
starts (for comparison afterwards):</p>
<pre><code>pub 4096R/0xC28592496BABC122 2014-06-11 [expires: 2016-06-10]
Key fingerprint = C1CC 7A4B 7FBE 8ED3 7C00 F8B5 C285 9249 6BAB C122
uid [ultimate] Gabriel Filion <gabster@lelutin.ca>
uid [ultimate] Gabriel Filion <gabriel@koumbit.org>
sub 4096R/0x59BC891D96B9EF51 2014-06-11 [expires: 2016-06-10]
sub 4096R/0xC613C0506BBF1403 2014-09-18
</code></pre>
<p>You can see that I already have a subkey on the last line. That's the one I
want to replace. So let's create the new key:</p>
<pre><code>monkeysphere gen-subkey -l 4096
</code></pre>
<p>After this operation is complete, you should be able to notice a new subkey on
your PGP key:</p>
<pre><code>pub 4096R/0xC28592496BABC122 2014-06-11 [expires: 2016-06-10]
Key fingerprint = C1CC 7A4B 7FBE 8ED3 7C00 F8B5 C285 9249 6BAB C122
uid [ultimate] Gabriel Filion <gabster@lelutin.ca>
uid [ultimate] Gabriel Filion <gabriel@koumbit.org>
sub 4096R/0x59BC891D96B9EF51 2014-06-11 [expires: 2016-06-10]
sub 4096R/0xC613C0506BBF1403 2014-09-18
sub 4096R/0x595B733A8B95E6F1 2016-01-23
</code></pre>
<h1>Export new subkey to ssh-agent alongside old one</h1>
<p>This part should be super simple if you have only one secret key in your
keyring. Just launch the command at the end of this section and you're done.
However, in my case I have a key that's revoked and monkeysphere tries to
export material from this key. In order to prevent this, I use the environment
variable <code>MONKEYSPHERE_SUBKEYS_FOR_AGENT</code> that I set in my <code>~/.bashrc</code> file.</p>
<p>Let's get each subkey's fingerprint. Some users might need to repeat the
argument in this super intuitive gpg call; for others, a single one is enough. I
haven't yet determined what influences this, but using the argument twice will
work for everyone.</p>
<pre><code>$ gpg --fingerprint --fingerprint gabster@lelutin.ca
[...]
pub 4096R/0xC28592496BABC122 2014-06-11 [expires: 2016-06-10]
Key fingerprint = C1CC 7A4B 7FBE 8ED3 7C00 F8B5 C285 9249 6BAB C122
uid [ultimate] Gabriel Filion <gabster@lelutin.ca>
uid [ultimate] Gabriel Filion <gabriel@koumbit.org>
sub 4096R/0x59BC891D96B9EF51 2014-06-11 [expires: 2016-06-10]
Key fingerprint = CB3D 48CE 55CD 1FAB B1E4 D0C3 59BC 891D 96B9 EF51
sub 4096R/0xC613C0506BBF1403 2014-09-18
Key fingerprint = 39C9 47C6 48F4 664C FFBB C83A C613 C050 6BBF 1403
sub 4096R/0x595B733A8B95E6F1 2016-01-23
Key fingerprint = D480 05C9 0B18 ABF7 965C 7E01 595B 733A 8B95 E6F1
</code></pre>
<p>Now that we have this information, we can adjust the environment variable.
Monkeysphere's man page says that the variable takes a space-separated list of
fingerprints, so I removed all spaces from the output above:</p>
<pre><code>export MONKEYSPHERE_SUBKEYS_FOR_AGENT="D48005C90B18ABF7965C7E01595B733A8B95E6F1 39C947C648F4664CFFBBC83AC613C0506BBF1403"
</code></pre>
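<p>A possible shortcut instead of stripping spaces by hand (a sketch, not part of the original workflow): gpg's machine-readable output already prints fingerprints without spaces, in field 10 of the <code>fpr</code> records:</p>

```shell
# --with-colons emits one 'fpr' record per key and subkey; the bare
# fingerprint (no spaces) sits in the 10th colon-separated field.
gpg --with-colons --fingerprint gabster@lelutin.ca | awk -F: '/^fpr/ {print $10}'
```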
<p>Finally, we can export the new subkey to the ssh agent:</p>
<pre><code>monkeysphere s
</code></pre>
<p>After this, you should see in the output of <code>ssh-add -L</code> the same thing
as in <code>monkeysphere u 'gabster@lelutin.ca'</code>.</p>
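<p>A quick way to check that (a sketch; both listings are sorted so key ordering doesn't matter):</p>

```shell
# If the two sorted listings are identical, the agent holds exactly
# the key material that monkeysphere exports.
if [ "$(ssh-add -L | sort)" = "$(monkeysphere u 'gabster@lelutin.ca' | sort)" ]; then
    echo 'agent matches monkeysphere export'
fi
```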
<h1>Revoke old subkey</h1>
<p>Now we can revoke the old subkey. Doing so will not stop monkeysphere from
exporting it to ssh-agent. It simply creates a public revocation object on the
subkey's public part.</p>
<pre><code>gpg --edit-key gabster@lelutin.ca
> key 2
> revkey
> save
</code></pre>
<h1>(optional) publish updated PGP key to key servers</h1>
<p>If you are using the public key servers for publishing your public key
material, now is a good time to send your updated key.</p>
<pre><code>gpg --send-keys gabster@lelutin.ca
</code></pre>
<p>Revoked subkeys that are published to key servers won't get imported by
<code>monkeysphere-authentication update-users</code> anymore; they will actually get
removed from computers (after all that's the point of monkeysphere, to import
only public keys that are valid). So once this runs on computers to which you
should have access, only your new subkey should be present on them.</p>
<p>Of course, for this to actually happen you will have to wait for propagation to
happen between the key servers.</p>
<p>For people that use other means of publishing keys, you'll have to send your
updated public key to the right communication channel for your key to end up
getting updated by monkeysphere.</p>
<h1>Install new key everywhere</h1>
<p>If you're only using your subkey for monkeysphere-enabled computers, then
you're all done! But if you're installing this same public key on computers
that are not using monkeysphere (e.g. the traditional <code>authorized_keys</code> way),
you'll have to install your new key everywhere and remove the old one.</p>
<p>You can get your new public key in a format that's usable with
<code>authorized_keys</code> with:</p>
<pre><code>monkeysphere u 'gabster@lelutin.ca'
</code></pre>
<h1>Clear out old subkey</h1>
<p>Once you're certain that the old key is not installed anywhere anymore, you can
stop exporting it to your ssh-agent. For this, we'll change the environment
variable again and remove the old subkey's fingerprint.</p>
<pre><code>export MONKEYSPHERE_SUBKEYS_FOR_AGENT="D48005C90B18ABF7965C7E01595B733A8B95E6F1"
</code></pre>
<p>You can then clean out the old key from your running agent. First manually
export the new value of the variable you just set in your <code>~/.bashrc</code>. Then
remove keys from your agent (for this part you might want to be more careful
and use <code>-d</code> to remove single keys if you have other identities present in your
ssh-agent) and, if you blasted everything out like I did, re-export key material
from monkeysphere:</p>
<pre><code>ssh-add -D
monkeysphere s
</code></pre>
<p>Rotation completed!</p>
How should I order things in ssh config
https://lelutin.ca//posts/How_should_I_order_things_in_ssh_config/
2016-01-24T02:59:19Z
2016-01-24T02:59:19Z
<p>This may seem super obvious for some people, but I've actually just discovered
this for myself and I think documentation doesn't make this super easy to know.
This discovery solved some of my woes with configuring my ssh client.</p>
<p>Here's a motto that you should keep in mind when modifying your <code>ssh_config</code>
file:</p>
<pre><code>In ssh_config, specific comes first and generic last.
</code></pre>
<p>OpenSSH parses the <code>ssh_config</code> file from top to bottom, and for each
option it keeps the first value it obtains; blocks further down that match the
same host can't set that same option again. In that sense, having wildcard
blocks at the end of the file (or at least after all other hosts they can
match) makes sense, since such a block will set an option for all matching
hosts only if it hasn't already been set above.</p>
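<p>As an illustration (host names and option values here are made up), a specific block placed first wins over the wildcard below it:</p>

```
# Matched first for this host: User is locked to "git" here.
Host gitserver.example.com
    User git

# Matched by every host, but only fills in options not already set above:
# gitserver.example.com keeps "User git" and still gets the keepalive.
Host *
    User gabriel
    ServerAliveInterval 60
```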
Debian jessie live image
https://lelutin.ca//posts/Debian_jessie_live_image/
2015-10-27T15:46:04Z
2015-10-27T15:46:04Z
<p>A quick note to people who want to use the Debian live jessie image (standard, no
X environment):</p>
<p>Auto-login has been broken since wheezy, so you now need to log in manually for that image to be useful. The credentials are:</p>
<ul>
<li>User: user</li>
<li>Password: live</li>
</ul>
<p>With this login, you can then sudo to perform any task you want.</p>
Bash random stuff
https://lelutin.ca//posts/Bash_random_stuff/
2018-01-02T23:41:19Z
2015-07-04T04:08:51Z
<p>With bash, there are lots of things that you can do. Some of them make GUIs
look like interfaces for kids. Some others are not super useful, but
intellectually fun.</p>
<p>Here's a random dump of things I've kept around as notes.</p>
<h1>Who's using that?</h1>
<p>To know if files or directories are being used by programs, two commands are
super useful: fuser and lsof.</p>
<h2>fuser</h2>
<p>To know the PIDs of programs that have a certain file in their file
descriptors, and the user names under which they are running:</p>
<pre><code>fuser -u /var/log/mail.log
</code></pre>
<p>To show PIDs using any file under a mounted filesystem:</p>
<pre><code>fuser -m /srv/
</code></pre>
<p>Create a screen session on a serial device, but only if nothing is already
using it. Otherwise, try reconnecting to the current screen session:</p>
<pre><code>if fuser -s /dev/ttyUSB2; then screen -x; else screen /dev/ttyUSB2 115200; fi
</code></pre>
<h2>lsof</h2>
<p>To list all files that are open by a certain process ID:</p>
<pre><code>lsof -p 4194
</code></pre>
<p>To get all files open by a certain user:</p>
<pre><code>lsof -u joejane
</code></pre>
<p>To see all established IPv4 connections from a certain process ID:</p>
<pre><code>lsof -i 4 -a -p 31936
</code></pre>
<p>List all processes that have established an SSH connection:</p>
<pre><code>lsof -i :22
</code></pre>
<h1>Getting rid of all spaces in a tree of files</h1>
<pre><code># this trick depends on bash features
# this function doesn't take any argument. it'll work on the current working
# directory and all of its subdirectories
nomorespace () {
    ls -1 | while IFS= read -r i; do
        j=${i// /_}
        if [ "$i" != "$j" ]; then mv "$i" "$j"; fi
    done
    # by now nothing in the current directory contains spaces anymore
    for i in $(find . -maxdepth 1 -type d -not -name .); do
        pushd "$i" >/dev/null; nomorespace; popd >/dev/null
    done
}
</code></pre>
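<p>For what it's worth, here's a sketch of a variant that avoids parsing <code>ls</code> output and explicit recursion (the function name is mine): <code>find -depth</code> prints children before their parent directory, so a renamed path is never invalidated by a later rename.</p>

```shell
# Rename every file and directory containing spaces under $1 (default: .),
# replacing spaces with underscores. -depth guarantees a file is renamed
# before the directory that contains it.
strip_spaces() {
    find "${1:-.}" -depth -name '* *' | while IFS= read -r path; do
        dir=$(dirname "$path")
        base=$(basename "$path")
        mv "$path" "$dir/$(printf '%s' "$base" | tr ' ' '_')"
    done
}
```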
<h1>Switching file encoding</h1>
<p>Sometimes it's useful to switch files that you get from the internet from one
encoding to another that's more useful for you.</p>
<h2>Transform flac files into ogg files</h2>
<pre><code># This expects files to have track number at the start of the file followed by
# a dash like this:
# 01-Track_title.flac
for i in *.flac; do track=$(echo $i|sed -e 's/\([0-9][0-9]\).*/\1/'); title=$(basename $i .flac|sed -e 's/^[0-9]\+-//' -e 's/_/ /g'); flac -sdc $i | oggenc -a "Ali Farka Touré" -l "The river" -N "$track" -t "$title" -o $(basename $i .flac).ogg -; done
</code></pre>
<h2>Transform m4a files into ogg files</h2>
<pre><code># Same expectations for the filename as above
for i in *.m4a; do track=$(echo $i |sed -e 's/\([0-9][0-9]\).*/\1/'); title=$(basename $i .m4a|sed -e 's/^[0-9]\+-//' -e 's/_/ /g'); mplayer -quiet -vo null -vc dummy -ao pcm:waveheader:file="rawaudio.wav" "$i"; oggenc -a "Aphex twin" -l "Drukqs" -N "$track" -t "$title" -o ${track}-$(echo $title | sed -e 's/ /_/g').ogg rawaudio.wav; rm -f rawaudio.wav; done
</code></pre>
<h1>Redefining builtin commands</h1>
<p>This is rather more fun than useful, but I found it on a site that was
instructing about what you can do when the infamous "rm -rf /" was run on a
server and you need to salvage what you can from the remains of the explosion.</p>
<pre><code>ls() {
    [ "x$1" == "x-a" ] && ALL=".*"
    for i in $ALL *; do echo "$i"; done
}
cat() {
    while read -r line; do echo "$line"; done < "$1"
}
</code></pre>
Bash uglyness of the day
https://lelutin.ca//posts/Bash_uglyness_of_the_day/
2018-01-02T23:41:19Z
2014-11-13T07:49:57Z
<p>Here's the Bash ugliness of the day:</p>
<p>Say you've got a list of items separated by new lines, and you'd like to
concatenate them into a string while enclosing each item in single quotes so
that spaces and weird characters are carried along safely.</p>
<p>You do the following:</p>
<pre><code>resulting_string=
for i in $list_of_things; do
resulting_string="${resulting_string}'${i}' "
done
# All good here, look at the contents.
# should be giving you what you expect
echo "$resulting_string"
</code></pre>
<p>Now try supplying that to a command:</p>
<pre><code>some_command $resulting_string
# OOPS, some_command has an argv that would look like this (notice how it
# actually receives the single quotes with the arguments):
argv[0]=some_command
argv[1]='blah'
argv[2]='blih'
</code></pre>
<p>How do you fix that? Obvious, isn't it: you either eval or enclose the whole
thing in another shell!</p>
<pre><code>sh -c "some_command $resulting_string"
</code></pre>
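<p>For completeness (this is my addition, not in the original rant): the way around both <code>eval</code> and a sub-shell is a bash array, where each element stays a single argument no matter what it contains:</p>

```shell
# Two items, one of which contains a space.
list_of_things='blah blih
bluh'

# Collect one array element per line.
args=()
while IFS= read -r line; do
    args+=("$line")
done <<< "$list_of_things"

# "${args[@]}" expands to exactly one word per element:
# argv[1]='blah blih', argv[2]='bluh' -- no stray quotes, no word splitting.
printf 'argv: >%s<\n' "${args[@]}"
```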
<p>I really don't like shell scripting...</p>