Sierra introduced a restriction in ssh-agent (a newer version of OpenSSH): the PKCS#11 libraries it will load are limited to a whitelist of directories. By now this is common knowledge, since I must be one of the last people to upgrade from El Capitan to Sierra. Yes, I am not an early adopter!
Until now I had an alias in my .bashrc that loaded the SSH key on my Yubikey into the ssh-agent. The alias itself was fine, but the library now sits outside the trusted path, which is “/usr/lib:/usr/local/lib”.
alias load_key="ssh-add -s /Library/OpenSC/lib/pkcs11/opensc-pkcs11.so"
alias unload_key="ssh-add -e /Library/OpenSC/lib/pkcs11/opensc-pkcs11.so"
As always, the error message is anything but useful:
$ ssh-add -s /Library/OpenSC/lib/pkcs11/opensc-pkcs11.so
Enter passphrase for PKCS#11:
Could not add card "/Library/OpenSC/lib/pkcs11/opensc-pkcs11.so": agent refused operation
I had to run ssh-agent in debug mode to understand what was happening (Google is your friend), and the output said: provider not whitelisted.
$ ssh-agent -d -a /tmp/agent.socket
SSH_AUTH_SOCK=/tmp/agent.socket; export SSH_AUTH_SOCK;
echo Agent pid 2918;
debug2: fd 3 setting O_NONBLOCK
debug3: fd 4 is O_NONBLOCK
debug1: type 20
refusing PKCS#11 add of "/Library/OpenSC/lib/opensc-pkcs11.so": provider not whitelisted
debug1: XXX shrink: 3 < 4
$ SSH_AUTH_SOCK="/tmp/agent.socket" ssh-add -s /Library/OpenSC/lib/pkcs11/opensc-pkcs11.so
Enter passphrase for PKCS#11:
Could not add card "/Library/OpenSC/lib/pkcs11/opensc-pkcs11.so": agent refused operation
Googling again, the first hit is the bug report that was opened last year.
In the end, I had to point my aliases at the copy of the library located in the trusted path.
alias load_key="ssh-add -s /usr/local/lib/opensc-pkcs11.so"
alias unload_key="ssh-add -e /usr/local/lib/opensc-pkcs11.so"
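If you want to fail early with a clear message instead of the opaque "agent refused operation", a minimal sketch of a trusted-path check (the path list is the one Sierra enforces; the variable name is my own):

```shell
# Check whether a PKCS#11 provider falls inside Sierra's trusted path
# ("/usr/lib:/usr/local/lib") before handing it to ssh-add.
PKCS11_LIB="/usr/local/lib/opensc-pkcs11.so"
case "$PKCS11_LIB" in
  /usr/lib/*|/usr/local/lib/*) trusted=yes ;;
  *)                           trusted=no  ;;
esac
echo "trusted=$trusted"
```

With the old /Library/OpenSC path the check prints `trusted=no`, which is exactly why the agent refused the add.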
The original post, where I set up SSH public key authentication with security tokens.
I am using runwhen together with daemontools to launch and monitor the backup. The run script used by the svc service executes runwhen commands to sleep until the next run (every hour) and then launches the backup script. The service runs in a dedicated jail.
The run script listed below uses some runwhen commands (rw-add, rw-match and rw-sleep) to wake up every hour, and setuidgid to run the service as an unprivileged user.
#!/bin/sh
exec 2>&1
exec setuidgid gitbackup \
  rw-add n d1S now1s \
  rw-match \$now1s ,M=00 wake \
  rw-sleep \$wake \
  /home/gitbackup/update.sh
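The log read later in this post carries tai64n timestamps, which come from daemontools' multilog. A minimal sketch of a matching log/run script, assuming the standard daemontools layout under /var/service/backups/log (this is my reconstruction, not necessarily the exact script used):

```shell
#!/bin/sh
# Hypothetical log/run for /var/service/backups/log: multilog prefixes
# each line with a tai64n timestamp ("t") and rotates it into ./main,
# which is what tai64nlocal decodes when reading main/current.
exec setuidgid gitbackup multilog t ./main
```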
The actual backup script iterates over all the git repos and fetches the changes:
#!/bin/sh
exec 2>&1
cd /usr/home/gitbackup/backup
echo "===="
date
echo "===="
for repo in `ls -d1 *.git`; do
  cd $repo && /usr/local/bin/git fetch --all
  cd -
done
echo "===="
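For completeness, the bare repositories the loop walks over would have been created once per project with a mirror clone; a sketch (the URL is one of the repos that appears in the log output, used here as an example):

```shell
# One-time setup sketch: create a bare mirror that later runs of
# `git fetch --all` can refresh. Run once per repository to back up.
cd /usr/home/gitbackup/backup
git clone --mirror https://github.com/xgarcias/ansible-macbook ansible-macbook.git
```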
Checking the output log:
$ cat /var/service/backups/log/main/current | tai64nlocal
2018-02-05 18:00:00.098641500 ====
2018-02-05 18:00:00.150083500 Mon Feb 5 18:00:00 CET 2018
2018-02-05 18:00:00.180056500 ====
2018-02-05 18:00:00.211689500 Fetching origin
2018-02-05 18:00:01.073738500 From https://github.com/xgarcias/ansible-cmdb-freebsd-template
2018-02-05 18:00:01.073743500  * branch HEAD -> FETCH_HEAD
2018-02-05 18:00:01.091577500 Fetching origin
2018-02-05 18:00:02.185366500 From https://github.com/xgarcias/ansible-daemontools
2018-02-05 18:00:02.185371500  * branch HEAD -> FETCH_HEAD
2018-02-05 18:00:02.203049500 Fetching origin
2018-02-05 18:00:04.180310500 From https://github.com/xgarcias/ansible-macbook
2018-02-05 18:00:04.180315500  * branch HEAD -> FETCH_HEAD
2018-02-05 18:00:04.198104500 Fetching origin
2018-02-05 18:00:06.448429500 From https://github.com/xgarcias/daemontools-dyndns
2018-02-05 18:00:06.448434500  * branch HEAD -> FETCH_HEAD
2018-02-05 18:00:06.466266500 Fetching origin
2018-02-05 18:00:08.299785500 From https://github.com/xgarcias/daemontools-poudriere
2018-02-05 18:00:08.299790500  * branch HEAD -> FETCH_HEAD
2018-02-05 18:00:08.321755500 Fetching origin
2018-02-05 18:00:09.749956500 From https://github.com/xgarcias/daemontools-unbound-sinkhole
2018-02-05 18:00:09.749961500  * branch HEAD -> FETCH_HEAD
2018-02-05 18:00:09.771744500 Fetching origin
2018-02-05 18:00:11.113934500 From https://github.com/xgarcias/elasticsearch-plugin-readonlyrest
2018-02-05 18:00:11.113939500  * branch HEAD -> FETCH_HEAD
2018-02-05 18:00:11.135774500 Fetching origin
2018-02-05 18:00:12.703191500 From https://github.com/xgarcias/freebsd_local_ports
2018-02-05 18:00:12.703197500  * branch HEAD -> FETCH_HEAD
2018-02-05 18:00:12.724967500 Fetching origin
2018-02-05 18:00:13.583204500 From https://github.com/xgarcias/xgarcias.github.io
2018-02-05 18:00:13.583209500  * branch HEAD -> FETCH_HEAD
2018-02-05 18:00:13.601461500 ====
Querying ASN/IP records via a non-rate-limited, unauthenticated REST API.
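A minimal sketch of the lookups; the endpoint paths are assumptions based on the public RIPEstat Data API and ARIN's Whois-RWS, both of which answer unauthenticated requests:

```shell
# Build the lookup URLs for an IP; neither API requires an API key.
ip="8.8.8.8"
ripe_url="https://stat.ripe.net/data/prefix-overview/data.json?resource=${ip}"
arin_url="https://whois.arin.net/rest/ip/${ip}"
echo "$ripe_url"
echo "$arin_url"
# Then fetch with e.g.: curl -s "$ripe_url"
```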
Also, you can use @DuckDuckGo to get the same results with the !Arin and !Ripe bang searches.
“You can also use !Arin and !Ripe bang searches on @DuckDuckGo to quickly lookup IP information” — Greg Bray (@GBrayUT), January 27, 2018
At work we’ve been using Bhyve for a while to run non-critical systems. It is a really nice and stable hypervisor, even though we are using the older version available in FreeBSD 10.3. This means we lack Windows and VNC support, among other things, but it is not a big deal.
After some iterations in our internal tools, we realised that the installation process was too slow and we always repeated the same steps. Of course, any good sysadmin will scream “AUTOMATION!” and so did we. Therefore, we started looking for different ways to improve our deployments.
We had a look at existing frameworks that manage Bhyve, but none of them had a feature that we find really important: having a centralized repository of VM images. For instance, SmartOS applies this method successfully by having a backend server that stores a catalog of VMs and Zones, meaning that new instances can be deployed in a minute at most. This is a game changer if you are really busy in your day-to-day operations.
Since we are not great programmers, we decided to leverage existing tools to achieve the same result: a centralized repository of Bhyve images in our data centers. The following building blocks are used:
- The ZFS snapshot of an existing VM. This will be our VM template.
- A modified version of oneoff-pkg-create to package the ZFS snapshots.
- pkg-ssh and pkg-repo to host a local FreeBSD repo in a FreeBSD jail.
- libvirt to manage our Bhyve VMs.
- The ansible modules virt, virt_net and virt_pool.
- We write a YAML dictionary to define the parameters needed to create a new VM:
- VM template (name of the pkg that will be installed in /bhyve/images)
- VM name, cpu, memory, domain template, serial console, etc.
- This dictionary will be kept in the corresponding host_vars definition that configures our Bhyve host server.
- The Ansible playbook:
- installs the package named after the VM template (ZFS snapshot), e.g. pkg install FreeBSD-10.3-RELEASE-ZFS-20G-20170515.
- uses cat and zfs receive to load the ZFS snapshot in a new volume.
- calls the libvirt modules to automatically configure and boot the VM.
- The sysadmin logs in to the new VM and adjusts the hostname and network settings.
- Run a separate Ansible playbook to configure the new VM as usual.
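The first two playbook steps can be sketched in shell; the template name, image path and target dataset below are examples, not our exact tooling:

```shell
# Hedged sketch: install the template package, then load its ZFS
# snapshot stream into a fresh dataset for the new guest.
template="FreeBSD-10.3-RELEASE-ZFS-20G-20170515"
pkg install -y "$template"          # drops the image under /bhyve/images
cat "/bhyve/images/${template}.zfs" \
  | zfs receive zroot/vm/newguest   # creates the new volume from the stream
```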
Once automated, the installation process takes at most 2 minutes, compared with the roughly 30 minutes of a manual install, and it also allows us to deploy many guests in parallel.
- Sample config for FreeBSD https://people.freebsd.org/~rodrigc/libvirt-bhyve/libvirt-bhyve.html
- bhyve driver for libvirt http://libvirt.org/drvbhyve.html
- virsh examples https://wiki.libvirt.org/page/VM_lifecycle#Creating_a_domain
- migrating VMs w/o shared storage https://hgj.hu/live-migrating-a-virtual-machine-with-libvirt-without-a-shared-storage/
- xml reference http://libvirt.org/formatdomain.html
- Virtual networking https://wiki.libvirt.org/page/VirtualNetworking