Outdoors laptop (part 2)

Some time ago I posted about wanting an outdoor laptop. The first option I listed was a Panasonic Toughbook. Recently (a year and a half later) I finally ordered one. I ordered from bobjohnson.com, because the people there are a class act who’ve been calmly answering my questions for a long time.

Some highlights:

* It has a transflective display. This means that it is emissive, but also uses reflected sunlight to boost brightness, up to 6000 nits. In comparison, my previous thinkpad was 350 nits (unusable sometimes even in shade), and my macbook was 500 nits. With this laptop, I can leave the display on 25% brightness and move from a dark basement to hurt-my-skin bright sunlight.

* It’s ‘fully rugged’, so using it in rain or dust storms should not be an issue. (I lost 4 GB of RAM in my thinkpad to dust.)

* It has a shoulder strap ($20 extra) screwed on solidly.

* It has a touchscreen with stylus. (To use this under Ubuntu 18.04 I had to install xserver-xorg-input-evdev and remove xserver-xorg-input-libinput; note that just installing evdev was not enough – see the snippet below.) I may look like a dweeb, but I prefer this to smudging my screen.
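For reference, the package swap looked roughly like this (a sketch from memory, using Ubuntu 18.04’s package names; X needs a restart afterwards):

sudo apt install xserver-xorg-input-evdev
sudo apt remove xserver-xorg-input-libinput
# then restart X (or reboot) so the evdev driver takes over the touchscreen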

The laptop I got is a CF-19 MK6. This is several years old and refurbished. The reason I went with this instead of a new toughbook (besides price) is because, as far as I can tell, only the CF-19 MK5 through MK8 have the transflective display. The replacement for the CF-19 (the CF-20) may have a better screen (I’ve not seen it), but it is not transflective and comes in at “only” 1000 nits. Same with the slightly larger non-convertible laptops.

Mind you, there is (I trust) a reason these screens did not take off – the colors are kind of washed out, and it’s low resolution. But for reading kernel code by the pool without draining the battery in an hour, the only thing I can imagine being better is an e-ink screen.

The CF-19 is compact: it’s a 10″ (convertible) netbook. This keyboard is more cramped than on my old s100 netbook. I do actually kind of like the keys – they have a good travel depth and a nice click. But it’s weird going back to a full-size keyboard.

The first time I measured the battery life, it shut down when the battery listed 36% remaining, after a mere 3 hours. Panasonic had advertised 10 hours for this laptop. 3 was unacceptable, and I was about ready to send it back. But, reading the powertop output, I noticed that the sound card was listed as taking tons of battery power. So for my next run I did a powertop --auto-tune, and got over 4 hours of battery life. Then I noticed the bluetooth radio doing the same, so I did rmmod btusb. These are now all done on startup by systemd. The battery still stops at 35%, which takes getting used to, but it’s acceptable.
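For the curious, here is a minimal sketch of how that can be wired up as a oneshot systemd unit (the unit name and binary paths here are illustrative, not my exact setup):

cat << EOF | sudo tee /etc/systemd/system/power-tweaks.service
[Unit]
Description=Apply power tweaks at boot

[Service]
Type=oneshot
ExecStart=/usr/sbin/powertop --auto-tune
# the leading '-' tells systemd not to fail if btusb is already gone
ExecStart=-/sbin/rmmod btusb

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl enable power-tweaks.service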

4.5 hours is still limiting, so I picked up a second battery and an external charger. I can charge one battery while using the other, or take both batteries along for a longer trip.

In summary – I may have found my outdoors laptop. I’d still prefer it be thinner, with a slightly larger and mechanical keyboard, and have 12 hour battery life, delivered on a unicorn…

(Here is an attempt to show the screen in very bright sunlight. It’s hard to get a good photo, since the camera wants to play its own games.)


TPM 2.0 in qemu

If you want to test software which exploits TPM 2.0 functionality inside the qemu-kvm emulator, this can be challenging because the software stack is still quite new. Here is how I did it.

First, you need a new enough qemu. The version in Ubuntu xenial does not suffice; the 2.11 version in Ubuntu bionic does. I believe the 2.10 version in artful is also too old, but I might be mis-remembering – I haven’t tested that recently.
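As a quick sanity check before going further, you can ask qemu what version you have (nothing TPM-specific about this):

kvm --version    # or: qemu-system-x86_64 --version; you want 2.11 or newer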

The two pieces of software I needed were libtpms and swtpm. For libtpms I used the tpm2-preview.rev146.v2 branch, and for swtpm I used the tpm2-preview.v2 branch.

apt -y install libtool autoconf tpm-tools expect socat libssl-dev
git clone https://github.com/stefanberger/libtpms
( cd libtpms &&
  git checkout tpm2-preview.rev146.v2 &&
  ./bootstrap.sh &&
  ./configure --prefix=/usr --with-openssl --with-tpm2 &&
  make && make install)
git clone https://github.com/stefanberger/swtpm
(cd swtpm &&
  git checkout tpm2-preview.v2 &&
  ./bootstrap.sh &&
  ./configure --prefix=/usr --with-openssl --with-tpm2 &&
  make &&
  make install)

For each qemu instance, I create a tpm device. The relevant part of the script I used looks like this:

#!/bin/bash

i=0
# find an unused tpm state directory: /tmp/mytpm0, /tmp/mytpm1, ...
while [ -d /tmp/mytpm$i ]; do
    let i=i+1
done
tpm=/tmp/mytpm$i

mkdir $tpm
echo "Starting $tpm"
sudo swtpm socket --tpmstate dir=$tpm --tpm2 \
             --ctrl type=unixio,path=$tpm/swtpm-sock &
sleep 2 # this should be changed to a netstat query

next_vnc() {
    vncport=0
    port=5900
    while nc -z 127.0.0.1 $port; do
        port=$((port + 1))
        vncport=$((vncport + 1))
    done
    echo $vncport
}

nextvnc=$(next_vnc)
sudo kvm -drive file=${disk},format=raw,if=virtio,cache=none \
     -chardev socket,id=chrtpm,path=$tpm/swtpm-sock \
     -tpmdev emulator,id=tpm0,chardev=chrtpm \
     -device tpm-tis,tpmdev=tpm0 \
     -vnc :$nextvnc -m 2048
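Once the guest is booted, a quick way to confirm it sees the emulated TPM is (run inside the guest; the last line assumes tpm2-tools is installed there):

dmesg | grep -i tpm
ls -l /dev/tpm0
tpm2_getrandom 8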

Tagged window manager views

I find myself talking about these pretty frequently, and it seems many people have never actually heard about them, so a blog post seems appropriate.

Window managers traditionally present (for “advanced” users) “virtual” desktops and/or “multiple” desktops. Different window managers will have slightly different implementations and terminology, but typically I think of virtual desktops as being an MxN matrix of screen-sized desktops, and multiple desktops as being some number of disjoint MxN matrices. (In some cases there are only multiple 1×1 desktops.) If you’re a MacOS user, I believe you’re limited to a linear array (say, 5 desktops), but even tvtwm back in the early 90s did matrices.

In the late 90s Enlightenment offered a cool way of combining virtual and multiple desktops: as usual, you could go left/right/up/down to switch between virtual desktops, but in addition you had a bar across one edge of the screen which you could use to drag the current desktop so as to reveal the underlying desktop. Then you could do it again to see the next underlying one, etc. So you could peek at, and move windows between, the multiple desktops.

Now, if you are using a tiling window manager like dwm, wmii, or awesome, you may think you have the same kinds of virtual desktops. But in fact what you have is a clever ‘tagged view’ implementation. This lets you pretend that you have virtual desktops, but tagged views are far more powerful.

In a tagged view, you define ‘tags’ (like desktop names), and assign one or more tags to each window. Your current screen is a view of one or more tags. In this way you can dynamically switch the set of windows displayed.

For instance, you could assign tag ‘1’ or ‘mail’ to your mail window; ‘2’ or ‘web’ to your browser; ‘3’ or ‘work’ as well as ‘1’ to one terminal, and ‘4’ or ‘notes’ to another terminal. Now if you view tag ‘1’, you will see the mail and first terminal; if you view 1+2, you will see those plus your browser. If you view 2+3, you will see the browser and first terminal but not the mail window.

As you can see, if you don’t know about this, you can continue to use tagged views as though they were simply multiple desktops. But you can’t get this flexibility with regular multiple or virtual desktops.

(This may be a case where a video would be worth more than a bunch of text.)

So in conclusion – when I complain about the primitive window manager on MacOS, I’m not just name-calling. A four-finger gesture to expose the 1xN virtual desktops just isn’t nearly as useful as being able to precisely control which windows I see together on the desktop.


History of containers

I’ve researched these dates several times now over the years, in preparation for several talks. So I’m posting it here for my own future reference.

There are some…  “interesting” claims and posts out there about the history of containers.   In fact, the order appears to be:

  1.  The Virtuozzo product development (or at least the idea) started in 1999. The initial release was in 2002.
  2.  FreeBSD Jails were described in a SANE paper in 2000. They were first available in the 4.0 FreeBSD release in March 2000. (I implemented jails as a Linux LSM in 2004, but this was – rightly – deemed an abuse of the LSM infrastructure). I’m not sure when development started on them (which would be the fairer comparison to what I listed as #1). I do know they were not started by Daniel in 1996 🙂 despite some online claims which I won’t link here.
  3.  The first public open source implementation for linux was linux-vserver, announced on the linux-kernel mailing list in 2001.

After this we have the following sequence of events:

  1. Columbia University had a project called ZAP which placed processes into “Process Domains (PODS)”, which are like containers. The paper is from 2002, which likely means PODS were in development for some time before that, but they were also never considered for “real” upstream use.
  2. According to wikipedia, Solaris Zones were first released in 2004.
  3. Meiosys had a container-like feature which was specifically geared toward being able to checkpoint and restart applications. Theirs worked on linux, AIX, and Solaris. (I believe WPARs were based on this work.)
  4. In 2005 I was asked to work on getting this functionality upstream. Luckily I had a great team. After reviewing the viability of upstreaming any of the existing implementations, we decided we’d have to start from scratch. Our first straw-man proposal was here. This really was intended as a straw man to start discussion, as we knew there would be great ideas in response to this, including this one by Eric. And of course there was also some resistance.

From here it would seem appropriate to credit some of the awesome people who did much of the implementation of the various features. I keep starting lists of names, but if I list some, then I’ll miss some and feel bad. So I won’t. If you want to know about a specific feature, you can look at the git log and lkml archives.


GTD tools

I’ve been using GTD to organize projects for a long time. The “tickler file” in particular is a crucial part of how I handle scheduling of upcoming and recurring tasks. I’ve blogged in the past about some of the scripts I’ve written to help me do so, at https://s3hh.wordpress.com/2013/04/19/gtd-managing-projects/ and https://s3hh.wordpress.com/2011/12/10/tickler/. This week I combined these tools, updated them slightly, added an install script, and put them on github at http://github.com/hallyn/gtdtools.

Disclaimer

The opinions expressed in this blog are my own views and not those of Cisco.


Pockyt and edbrowse

I use r2e and pocket to follow tech-related rss feeds. To read these I sometimes use the nook, sometimes the pocket website, but often I use edbrowse and pockyt in a terminal. I tend to prefer this because I can see more entries more quickly, delete them en masse, use the terminal theme already set for the right time of day (dark and light for night/day), and just do less clicking.

My .ebrc has the following:

# pocket get
function+pg {
e1
!pockyt get -n 40 -f '{id}: {link} - {excerpt}' -r newest -o ~/readitlater.txt > /dev/null 2>&1
e98
e ~/readitlater.txt
1,10n
}

# pocket delete
function+pd {
!awk -F: '{ print $1 }' ~/readitlater.txt > ~/pocket.txt
!pockyt mod -d -i ~/pocket.txt
}

It’s not terribly clever, but it works – both on linux and macos. To use these, I start up edbrowse, and type <pg. This will show me the latest 10 entries. Any which I want to keep around, I delete (5n). Any which I want to read, I open (4g) and move to a new workspace (M2).

When I'm done, any references which I want deleted are still in ~/readitlater.txt. Those which I want to keep are deleted from that file. (Yeah, a bit backwards from normal 🙂 ) At that point I make sure to save (w), then run <pd to delete them from pocket.

Disclaimer

The opinions expressed in this blog are my own views and not those of Cisco.


Genoci and Lpack

Introduction

I’ve been working on a pair of tools for manipulating OCI images:

  • genoci, for GENerating OCI images, builds images according to a recipe in yaml format.
  • lpack, the layer unpacker, unpacks an OCI image’s layers onto either btrfs subvolumes or thinpool LVs.

See each project’s README.md for more detailed usage.

The two can be used together to speed up genoci’s builds by reducing the number of root filesystem unpacks and repacks. (See genoci’s README.md for details)

Example

While the projects’ READMEs give examples, here is a somewhat silly one just to give an idea. Copy the following into recipe.yaml:

cirros:
  base: empty
  expand: https://download.cirros-cloud.net/0.3.5/cirros-0.3.5-i386-lxc.tar.gz
weird:
  base: cirros
  pre: mount -t proc proc %ROOT%/proc
  post: umount %ROOT%/proc
  run: ps -ef > /processlist
  run: |
    cat > /usr/bin/startup << EOF
    #!/bin/sh
    echo "Starting up"
    nc -l -4 9999
    EOF
    chmod 755 /usr/bin/startup
  entrypoint: /usr/bin/startup

Then run “./genoci recipe.yaml”. You should end up with a directory “oci”, which you can interrogate with

$ umoci ls --layout oci
empty
cirros
cirros-2017-11-13_1
weird
weird-2017-11-13_1

You can unpack one of the containers with:

$ umoci unpack --image oci:weird weird
$ ls -l weird/rootfs/usr/bin/startup
-rwxr-xr-x 1 root root 43 Nov 13 04:27 weird/rootfs/usr/bin/startup
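As an aside, the unpacked directory is a standard OCI runtime bundle, so if you have runc installed you should be able to start it directly (an untested sketch; “weird-test” is just an arbitrary container id) and get the /usr/bin/startup entrypoint:

$ sudo runc run -b weird weird-test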

Upcoming

I’m about to begin the work to replace both with a single tool, written in golang, and based on an API exported by umoci.

Disclaimer

The opinions expressed in this blog are my own views and not those of Cisco.


CNI for LXC

It’s now possible to use CNI (container networking interface) with lxc. Here is an example. This requires some recent upstream patches, so for simplicity let’s use the lxc packages for zesty in ppa:serge-hallyn/atom. Set up a zesty host with that ppa, i.e.

sudo add-apt-repository ppa:serge-hallyn/atom
sudo add-apt-repository ppa:projectatomic/ppa
sudo apt update
sudo apt -y install lxc1 skopeo skopeo-containers jq

(To run the oci template below, you’ll also need to install git://github.com/openSUSE/umoci. Alternatively, you can use any standard container; the oci template is not strictly needed, just a nice point to make.)

Next setup CNI configuration, i.e.

cat << EOF | sudo tee /etc/lxc/simplebridge.cni
{
  "cniVersion": "0.3.1",
  "name": "simplenet",
  "type": "bridge",
  "bridge": "cnibr0",
  "isDefaultGateway": true,
  "forceAddress": false,
  "ipMasq": true,
  "hairpinMode": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.10.0.0/16"
  }
}
EOF

The way lxc will use CNI is to call out to it using a start-host hook, that is, a program (hook) which is called in the host namespaces right before the container starts. We create the hook using:

cat << "EOF" | sudo tee /usr/share/lxc/hooks/cni
#!/bin/sh

CNIPATH=/usr/share/cni

CNI_COMMAND=ADD CNI_CONTAINERID=${LXC_NAME} CNI_NETNS=/proc/${LXC_PID}/ns/net CNI_IFNAME=eth0 CNI_PATH=${CNIPATH} ${CNIPATH}/bridge < /etc/lxc/simplebridge.cni
EOF

This tells the ‘bridge’ CNI program our container name and the network namespace in which the container is running, and sends it the contents of the configuration file which we wrote above.
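One detail that is easy to miss: lxc will only run the hook if it is executable, so mark it as such:

sudo chmod 755 /usr/share/lxc/hooks/cni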

Now create a container,

sudo lxc-create -t oci -n a1 -- -u docker://alpine

We need to edit the container configuration file, telling it to use our new hook,

sudo sed -i '/^lxc.net/d' /var/lib/lxc/a1/config
cat << EOF | sudo tee -a /var/lib/lxc/a1/config
lxc.net.0.type = empty
lxc.hook.start-host = /usr/share/lxc/hooks/cni
EOF

Now we’re ready! Just start the container with

sudo lxc-execute -n a1

and you’ll get a shell in the alpine container with networking configured.
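To sanity-check the result from inside that shell (the addresses depend on the subnet in simplebridge.cni; with the config above, host-local ipam should hand the bridge the 10.10.0.1 gateway address):

ip addr show eth0
ping -c 1 10.10.0.1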

Disclaimer

The opinions expressed in this blog are my own views and not those of Cisco.


Namespaced File Capabilities

As of this past week, namespaced file capabilities are available in the upstream kernel. (Thanks to Eric Biederman for many review cycles and for the final pull request)

TL;DR

Some packages install binaries with file capabilities, and fail to install if you cannot set the file capabilities. Such packages could not be installed from inside a user namespace. With this feature, that problem is fixed.

Yay!

What are they?

POSIX capabilities are pieces of root’s privilege which can be individually used.

File capabilities are POSIX capability sets attached to files. When files with associated capabilities are executed, the resulting task may end up with privilege even if the calling user was unprivileged.

What’s the problem

In single-user-namespace days, POSIX capabilities were completely orthogonal to userids. You could be a non-root user with CAP_SYS_ADMIN, for instance. This could happen by starting as root, setting PR_SET_KEEPCAPS through prctl(2), dropping the capabilities you don’t want, and changing your uid. Or, it could happen by a non-root user executing a file with file capabilities. In order to set such a capability on a file, you need the CAP_SETFCAP capability.
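As a quick host-side illustration of the mechanism (./myping is just a hypothetical binary; getcap output format varies a bit between libcap versions):

sudo setcap cap_net_raw+ep ./myping    # let it open raw sockets without being setuid root
getcap ./myping                        # prints something like: ./myping = cap_net_raw+ep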

User namespaces had several requirements, including:

  1. an unprivileged user should be able to create a user namespace
  2. root in a user namespace should be privileged against its resources
  3. root in a user namespace should be unprivileged against any resources which it does not own.

So in a post-user-namespace age, an unprivileged user can “have privilege” with respect to files they own. However, if we allowed them to write a file capability on one of their files, then they could execute that file as an unprivileged user on the host, thereby gaining that privilege. This violates the third user namespace requirement, and is therefore not allowed.

Unfortunately – and fortunately – some software wants to be installed with file capabilities. On the one hand that is great, but on the other hand, if the package installer isn’t able to handle the failure to set file capabilities, then package installs are broken. This was the case for some common packages – for instance httpd on centos.

With namespaced file capabilities, file capabilities continue to be orthogonal with respect to userids mapped into the namespace. However, the capabilities are tagged as belonging to the host uid mapped to the container’s root id (0). (If uid 0 is not mapped, then file capabilities cannot be assigned.) This prevents the namespace owner from gaining privilege in a namespace against which they should not be privileged.
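Here is a rough sketch of what this looks like in practice, assuming a 4.14+ kernel, util-linux’s unshare, and a reasonably recent libcap (the exact getcap output will vary):

# as an unprivileged user, map ourselves to root in a new user namespace
unshare --user --map-root-user bash
cp /bin/true mytest
setcap cap_net_raw+ep mytest    # allowed: we hold CAP_SETFCAP in this namespace
getcap mytest
exit
# back on the host, a recent getcap shows the v3 (namespaced) capability,
# tagged with the host uid that was mapped to root, for example:
#   mytest cap_net_raw=ep [rootid=1000]
getcap mytest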

 

Disclaimer

The opinions expressed in this blog are my own views and not those of Cisco.


Containers micro-conference

The deadline for the CFP for the containers microconference at Plumber’s is coming up next week. See https://discuss.linuxcontainers.org/t/containers-micro-conference-at-linux-plumbers-2017/262 for more information.
