July 22, 2014

Slide embedding from Commons

A friend of a friend asked this morning:

I suggested Wikimedia Commons, but it turns out she wanted something like Slideshare’s embedding. So here’s a test of how that works (timely, since soon Wikimanians will be uploading dozens of slide decks!)

This is what happens when you use the default Commons “Use this file on the web -> HTML/BBCode” option on a slide deck pdf:

Wikimedia Legal overview 2014-03-19

Not the worst outcome – clicking gets you to a clickable deck. No controls inline in the embed, though. And importantly nothing to show that it is clickable :/

Compare with the same deck, uploaded to Slideshare:

Some work to be done if we want to encourage people to upload to Commons and share later.

Update: a commenter points me at viewer.js, which conveniently includes a wordpress plugin! The plugin is slightly busted (I had to move some files around to get it to work in my install) but here’s a demo:

July 16, 2014

Designers and Creative Commons: Learning Through Wikipedia Redesigns

tl;dr: Wikipedia redesigns mostly ignore attribution of Wikipedia authors, and none approach the problem creatively. This probably says as much or more about Creative Commons as it does about the designers.

disclaimer-y thing: so far, this is for fun, not work; haven’t discussed it at the office and have no particular plans to. Yes, I have a weird idea of fun.

A mild refresh from interfacesketch.com.

It is no longer surprising when a new day brings a new redesign of Wikipedia. After seeing one this weekend with no licensing information, I started going back through seventeen of them (most of the ones listed on-wiki) to see how (if at all) they dealt with licensing, attribution, and history. Here’s a summary of what I found.

Completely missing

Perhaps not surprisingly, many designers completely remove attribution (i.e., history) and licensing information in their designs. Seven of the seventeen redesigns I surveyed were in this camp. Some of them were in response to a particular, non-licensing-related challenge, so it may not be fair to lump them into this camp, but good designers still deal with real design constraints, and licensing is one of them.

History survives – sometimes

The history link is important, because it is how we honor the people who wrote the article, and comply with our attribution obligations. Five of the seventeen redesigns lacked any licensing information, but at least kept a history link.

Several of this group included some legal information, such as links to the privacy policy, or in one case, to the Wikimedia Foundation trademark page. This suggests that our current licensing information may be presented worse than some of our other legal information, since it seems to be getting cut even by designers who are tolerant of some of our other legalese.

Same old, same old

Four of the seventeen designs keep the same old legalese, though one fails to comply by making it impossible to get to the attribution (history) page. Nothing wrong with keeping the existing language, but it could reflect a sad conclusion that licensing information isn’t worth the attention of designers; or (more generously) that they don’t understand the meaning/utility of the language, so it just gets cargo-culted around. (Credit to Hamza Erdoglu, who was the only mockup designer who specifically went out of his way to show the page footer in one of his mockups.)

A winner, sort of!

Of the seventeen sites I looked at, exactly one did something different: Wikiwand. It is pretty minimal, but it is something. The one thing: as part of the redesign, it adds a big header/splash image to the page, and then adds a new credit specifically for the author of the header/splash image down at the bottom of the page with the standard licensing information. Arguably it isn’t that creative, just complying with their obligations from adding a new image, but it’s at least a sign that not everyone is asleep at the wheel.

Observations

This is surely not a large or representative sample, so all my observations from this exercise should be taken with a grain of salt. (They’re also speculative since I haven’t talked to the designers.) That said, some thoughts besides the ones above:

  • Virtually all of the designers who wrote about why they did the redesign mentioned our public-edit-nature as one of their motivators. Given that, I expected history to be more frequently/consistently addressed. Not clear whether this should be chalked up to designers not caring about attribution, or the attribution role of history being very unclear to anyone who isn’t an expert. I suspect the latter.
  • It was evident that some of these designers had spent a great deal of time thinking about the site, and yet were unaware of licensing/attribution. This suggests that people who spend less time with the site (i.e., 99.9% of readers) are going to be even more ignorant.
  • None of the designers felt attribution and licensing were even important enough to experiment on or mention in their writeups. As I said above, this is understandable but sort of sad, and I wonder how to change it.

Postscript, added next morning:

I think it’s important to stress that I didn’t link to the individual sites here, because I don’t want to call out particular designers or focus on their failures/oversights. The important (and as I said, sad) thing to me is that designers are, historically, a culture concerned with licensing and attribution. If we can’t interest them in applying their design talents to our problem, in the context of the world’s most famously collaborative project, we (lawyers and other Commoners) need to look hard at what we’re doing, and how we can educate and engage designers to be on our side.

I should also add that the WMF design team has been a real pleasure to work with on this problem, and I look forward to doing more of it. Some stuff still hasn’t made it off the drawing board, but they’re engaged and interested in this challenge. Here is one example.

morituri 0.2.3 ‘moved’ released!

It’s two weeks shy of a year since the last morituri release. It’s been a pretty crazy year for me, getting married and moving to New York, and I haven’t had much time throughout the year to do any morituri hacking at all. I miss it, and it was time to do something about it, especially since there’s been quite a bit of activity on github since I migrated the repository to it.

I wanted to get this release out to combine all of the bug fixes since the last release before I tackle one of the most-requested issues – not ripping the hidden track one audio if it’s digital silence. There are patches floating around that hopefully will be good enough so I can quickly do another release with that feature, and there are a lot of minor issues that should be easy to fix still floating around.

But the best way to get back into the spirit of hacking and to remove that feeling of it’s-been-so-long-since-a-release-so-now-it’s-even-harder-to-do-one is to just Get It Done.

I look forward to my next hacking stretch!

Happy ripping everybody.


Closure on some old bugs.

Closure on some old bugs. is a post from: codemonkey.org.uk

July 15, 2014

catch-up after a brief hiatus.

Yikes, almost a month since I last posted.
In that time, I’ve spent pretty much all my time heads down chasing memory corruption bugs in Trinity, and whacking a bunch of smaller issues as I came across them. Some of the bugs I’ve been chasing have taken a while to reproduce, so I’ve deliberately held off on changing too much at once these last few weeks, choosing instead to dribble changes in a few commits at a time, just to be sure things weren’t actually getting worse. Every time I thought I’d finally killed the last bug, I’d do another run for a few hours, and then see the same corrupted structures. Incredibly frustrating. After a process of elimination (I found a hundred places where the bug wasn’t), I think I’ve finally zeroed in on the problematic code, in the functions that generate random filenames.
I pretty much gutted that code today, which should remove both the bug, and a bunch of unnecessary operations that never found any kernel bugs anyway. I’m glad I spent the time to chase this down, because the next bunch of features I plan to implement leverage this code quite heavily, and would have only caused even more headache later on.

The one plus side of chasing this bug the last month or so has been all the added debugging code I’ve come up with. Some of it has been committed for re-use later, while some of the more intrusive debug features (like storing backtraces for every locking operation) I opted not to commit, but will keep the diff around in case it comes in handy again sometime.

Spent the afternoon clearing out my working tree by committing all the clean-up patches I’ve done while doing this work. Some of them were a little tangled and needed separating into multiple commits. Next, removing some lingering code that hasn’t really done anything useful for a while.

I’ve been hesitant to start on the newer features until things calmed down, but that should hopefully be pretty close.

catch-up after a brief hiatus. is a post from: codemonkey.org.uk

July 04, 2014

FUDCON + GNOME.Asia Beijing 2014

Thanks to the funding from FUDCON I had the chance to attend and keynote at the combined FUDCON Beijing 2014 and GNOME.Asia 2014 conference in Beijing, China.

My talk was about systemd's present and future: what we have achieved and where we are going. I tried to explain a bit where we are coming from, and how we changed focus from being purely an init system to being a set of basic building blocks to build an OS from. For most of the talk I discussed where we still intend to take systemd, which areas we believe should be covered by systemd, and of course the always difficult question of where to draw the line and what is clearly outside of systemd's focus. You can find the slides of my talk online. (No video recording that I am aware of, sorry.)

The combined conferences were a lot of fun, and, as usual, I had the best discussions in the hallway track, talking about Linux and systemd.

A number of pictures of the conference are now online. Enjoy!

After the conference I stayed for a few more days in Beijing, doing a bit of sightseeing. What a fantastic city! The food was amazing, we tried all kinds of fantastic stuff, from Peking duck to bullfrog Sichuan style. Yummy. And one of these days I am sure I will find the time to actually sort my photos and put them online, too.

I am really looking forward to the next FUDCON/GNOME.Asia!

June 29, 2014

mach 1.0.3 ‘moved’ released

It’s been very long since I last posted something. Getting married, moving across the Atlantic, enjoying the city, it’s all taken its time. And the longer you don’t do something, the harder it is to get back into.

So I thought I’d start simple – I updated mach to support Fedora 19 and 20, and started rebuilding some packages.

Get the source, update from my repository, or wait until updates hit the Fedora repository.

Happy packaging!


June 21, 2014

Democracy and Software Freedom

As part of a broader discussion of democracy as the basis for a just socio-economic system, Séverine Deneulin summarizes Robert Dahl’s Democracy, which says democracy requires five qualities:

First, democracy requires effective participation. Before a policy is adopted, all members must have equal and effective opportunities for making their views known to others as to what the policy should be.

Second, it is based on voting equality. When the moment arrives for the final policy decision to be made, every member should have an equal and effective opportunity to vote, and all votes should be counted as equal.

Third, it rests on ‘enlightened understanding’. Within reasonable limits, each member should have equal and effective opportunities for learning about alternative policies and their likely consequences.

Fourth, each member should have control of the agenda, that is, members should have the exclusive opportunity to decide upon the agenda and change it.

Fifth, democratic decision-making should include all adults. All (or at least most) adult permanent residents should have the full rights of citizens that are implied by the first four criteria.

From “An Introduction to the Human Development and Capability Approach”, Ch. 8 – “Democracy and Political Participation”.

“Poll worker explains voting process in southern Sudan referendum” by USAID Africa Bureau, via Wikimedia Commons.

It is striking that, despite talking a lot about freedom, and often being interested in the question of who controls power, these five criteria might as well be (Athenian) Greek to most free software communities and participants – the question of liberty begins and ends with source code, and has nothing to say about organizational structure and decision-making – critical questions serious philosophers always address.

Our licensing, of course, means that in theory points #4 and #5 are satisfied, but saying “you can submit a patch” is, for most people, roughly as satisfying as saying “you could buy a TV ad” to an American voter concerned about the impact of wealth on our elections. Yes, we all have the theoretical option to buy a TV ad/edit our code, but for most voters/users of software that option will always remain theoretical. We’re probably even further from satisfying #1, #2, and #3 in most projects, though one could see the Ada Initiative and GNOME OPW as attempts to deal with some aspects of #1, #3, and #4.

This is not to say that voting is the right way to make decisions about software development, but simply to ask: if we don’t have these checks in place, what are we doing instead? And are those alternatives good enough for us to have certainty that we’re actually enhancing freedom?

We’ll Build A Dream House Of Net

(Note: the final 0.9.10 will be out later this week…  read on for the awesome that it will contain)

Hey wouldn’t it be great if NetworkManager did X and made my life so awesome I could retire to a private island surrounded by things I love?  Like kittens and teddy bears and bright copper kettles and Domaine Leflaive Montrachet Grand Cru?

If only your dream was reality…  Oh wait!  NetworkManager 0.9.10 will be your genie from Aladdin, granting every wish you dream of, except this time you can wish for more wishes.  But you still can’t bring awful networking back from the dead because it’s just not pretty.  Don’t do it.

 


Tons of new features, yet somehow smaller and nimbler! (via cuxclipper, CC BY 2.0)

What is pretty is NetworkManager 0.9.10; it’s like the lightning-quick racing yacht that Larry Ellison doesn’t have and really, really wants, but which somehow also adds a Triple-E-Class-worth of new features just for you.  Somebody (maybe you!) wished for every single thing you’re about to see.  And then a magic genie showed up, snapped its fingers, and gave it to them.

nmtui

We found a usability gap between full-fledged CLI tools like nmcli and GUI-based ones, and thus nmtui was born.  Sometimes you don’t want to remember esoteric commands and options, but you also don’t want to run X. Boom, first wish granted: a curses-based tool for configuring and managing your network, no X involved:

(nmtui screenshots)

nmcli

The command-line still rules with divine mandate, and we’re here to please so nmcli was a huge focus for this release.  We’ve added interactive editing support, single-command editing, detailed help, tab completion, and enhanced bash completion.  You really need to check this out; almost anything you can do with GUI tools can now be done with nmcli, and there’s even some stuff nmcli can do that the GUI tools can’t.  If you’re comfortable with terminals, NetworkManager 0.9.10 is right up your alley.
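
A few illustrative commands, just for flavor (the connection and interface names here are hypothetical, and exact property syntax can vary between releases, so check nmcli(1) on your system):

nmcli connection show
nmcli connection add type ethernet ifname em1 con-name office
nmcli connection modify office connection.autoconnect yes
nmcli connection up office
nmcli connection edit office          # drops into the new interactive editor, tab completion included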


Size Does Matter

Continuing on the quest to be more nimble and streamlined, we’ve split Wi-Fi, WWAN, Bluetooth, ADSL, and WiMAX device support into plugins which you don’t need to install if you like a minimal system.  Distributions should package these separately so they can be added/removed independently of NetworkManager itself, which reduces disk usage, runtime memory usage, and packaging dependency chains.  We’ve also spent time slimming down and optimizing the code.  The core NetworkManager daemon is now just over 1MB in size!

dbus-daemon is also no longer required for root-only or early-boot operation, with communication using a private root-only Unix socket. Similarly, PolicyKit is no longer used for root operation, though it could always be disabled at build-time anyway.

To facilitate remote and SSH-based management, the “at_console” D-Bus permission has been removed, which also helpfully harmonizes authorization settings between Fedora and Debian-based distributions.  All permissions authorization now happens through PolicyKit instead.


NetworkManager works here (via scobleizer, CC BY 2.0)

The Enterprise

When you Absolutely Positively MUST have your ethernet frames delivered on-time and without loss you turn to Data Center Bridging.  DCB provides the reliability and robustness that iSCSI and FibreChannel over Ethernet (FCoE) need so you don’t have to keep shovelling money into a proprietary SAN.  Since users requested it, we snapped our fingers and added support to NetworkManager 0.9.10 for configuring DCB on your ethernet interfaces.

We’ve also upped our game with IP-level configuration support for many more software interfaces like GRE, macvlan, macvtap, tun, tap, veth, and vxlan.  And when you have services that aren’t yet network-aware, the NetworkManager-wait-online systemd service is more reliable to ensure your legacy services start up with the resources they require.

Customization Galore

You dreamed, we listened.  Creepy, no?  Yeah, we know what you want.  And top of the list was more flexible configuration:

  • Connection configuration files are no longer watched for changes by default, which used to cause problems with backups, filesystem copies, half-configured connections, etc.  If you want that behavior you can turn it back on (monitor-connection-files=true), but instead, edit them as much as you want and when you’re done, “nmcli con reload”.
  • Connections can now be locked to interface names instead of just MAC addresses
  • A new “ignore-carrier” option is available to ensure your critical app doesn’t fail just because you got drunk on Captain Morgan + Coke, and tripped over a cable
  • Want to manage /etc/resolv.conf yourself?  You can!  “dns=none” is your new best friend.
  • Configuration file snippets can be dropped into /etc/NetworkManager/conf.d to change smaller sets of configuration options (see the sample snippet just after this list)
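
Pulling a few of those options together, a hypothetical drop-in like /etc/NetworkManager/conf.d/99-local.conf could look roughly like this (the ignore-carrier device-spec syntax in particular is an assumption here; see NetworkManager.conf(5) for the authoritative format):

[main]
# opt back in to automatic reloading of connection files (off by default in 0.9.10)
monitor-connection-files=true
# hand /etc/resolv.conf management back to the administrator
dns=none
# keep these interfaces configured even if carrier drops
ignore-carrier=interface-name:em1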

The NetworkManager dispatcher got some enhancements too.  It now has a “pre-up” event that allows scripts to execute before NetworkManager announces connectivity to applications.  We also added a “pre-down” event that lets network filesystems flush data before the interface is actually disconnected from the network.
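
As a sketch of what such a hook might look like (the directory where pre-up/pre-down scripts must live, and the script name, are assumptions here; dispatcher scripts conventionally receive the interface and the action as their two arguments):

#!/bin/sh
# Hypothetical /etc/NetworkManager/dispatcher.d/pre-down.d/50-flush-nfs
# $1 = interface, $2 = action ("pre-up", "up", "pre-down", "down", ...)
case "$2" in
    pre-down)
        # give network filesystems a chance to flush before the link goes away
        sync
        ;;
esac
exit 0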

Seamless Cooperation

Do you love /sbin/ip?  ifconfig?  brctl?  vconfig?  Keep using them!  Changes you make outside of NetworkManager get picked up, respected, and reflected in the D-Bus API.  NetworkManager 0.9.10 also goes to great lengths to read the existing configuration of interfaces and not touch them.  Most network interfaces known to the kernel are now exposed in the D-Bus API, and you can even change their IP configuration right from NetworkManager.  There’s more work to do here but we hope you’ll appreciate the new situational awareness as much as we do.

Get Your VPN On

We’ve improved support for routing-only VPNs like Openswan/Libreswan/Strongswan.  We’ve added full details of the VPN’s IP configuration to the D-Bus API.  And best yet, VPN plugins can now request additional passwords during the connection process if the ones you previously gave them are wrong or changed.

All the Rest

For clients, more properties are exposed in the D-Bus API.  We’ve added support for custom IP ranges to the Internet Connection Sharing functionality.  We’ve added WWAN autoconnect support and more reliable airplane mode behavior.  Fatal connection errors now more reliably block reconnect, which means better handling of wrong Wi-Fi passwords and access point failures.  Captive portal/hotspot support is moving forward, as are DNSSEC enhancements.

Geez, are we done yet?

Not even close!  Seriously, there’s more but I’m kinda tired of typing.  Try it out (the final release will be out later this week) and tell us what you think.  Then tell us what you want.  Don’t be afraid to dream a little bigger, darling!

(via Alexandra Guerson, CC BY-NC-ND 2.0)

June 20, 2014

Transferring maintainership of x86info.

On Mon Feb 26 2001, I committed the first version of x86info. 13 years later, I’m pretty much done with it.
Despite a surge of activity back in February, when I reached a new record monthly commit count, it’s been a project I’ve had little time for over the last few years. Luckily, someone has stepped up to take over. Going forward, Nick Black will be maintaining it here.

I might still manage the occasional commit, but I won’t be feeling guilty about neglecting it any more.

An unfinished splinter project that I started a while back, x86utils, was intended to be a replacement for x86info of sorts, moving to multiple small utilities rather than the one monolithic blob that x86info is. I’m undecided if I’ll continue to work on this, especially given my time commitments to Trinity and related projects.

Transferring maintainership of x86info. is a post from: codemonkey.org.uk

June 19, 2014

PSA: Fedora 21, NetworkManager, and DNF

Recently we posted a Fedora 21 update delivering the huge new awesome that is NetworkManager 0.9.10-beta1.  Among a literal Triple E Class boatload of enhancements and fixes, this update continues our fine tradition of making the core of NetworkManager smaller and more flexible by splitting out Wi-Fi support into the NetworkManager-wifi package.  If you don’t have or don’t use Wi-Fi on your system, you don’t need to install stuff for it and you can save some disk space and RAM.

The Problem

To ensure upgrades worked correctly and people didn’t unexpectedly lose Wi-Fi support, we set RPM Obsoletes so that on upgrade the new package gets installed even though it didn’t exist before.  Unfortunately this didn’t work for those using DNF instead of yum for package management.

It turns out that DNF treats RPM obsoletes differently than yum, and it’s unclear right now how package splitting is actually supposed to work with DNF.  You can track the issue here.

The Workaround

If you suddenly find yourself without Wi-Fi, find a wired network connection and:

dnf install NetworkManager-wifi
systemctl restart NetworkManager

and harmony will return to the Universe.  We understand the pain, will continue to monitor the situation, and will update the Fedora NetworkManager packages when DNF has a solution.

June 17, 2014

Factory Reset, Stateless Systems, Reproducible Systems & Verifiable Systems

(Just a small heads-up: I don't blog as much as I used to, I nowadays update my Google+ page a lot more frequently. You might want to subscribe that if you are interested in more frequent technical updates on what we are working on.)

In the past weeks we have been working on a couple of features for systemd that enable a number of new usecases I'd like to shed some light on. Taking advantage of the /usr merge that a number of distributions have completed, we want to bring runtime behaviour of Linux systems to the next level. With the /usr merge completed, most static vendor-supplied OS data is found exclusively in /usr; only a few additional bits in /var and /etc are necessary to make a system boot. On this we can build to enable a couple of new features:

  1. A mechanism we call Factory Reset shall flush out /etc and /var, but keep the vendor-supplied /usr, bringing the system back into a well-defined, pristine vendor state with no local state or configuration. This functionality is useful across the board from servers, to desktops, to embedded devices.
  2. A Stateless System goes one step further: a system like this never stores /etc or /var on persistent storage, but always comes up with pristine vendor state. On systems like this every reboot acts as a factory reset. This functionality is particularly useful for simple containers or systems that boot off the network or read-only media, and receive all configuration they need during runtime from vendor packages or protocols like DHCP, or are capable of discovering their parameters automatically from the available hardware or periphery.
  3. Reproducible Systems multiply a vendor image into many containers or systems. Only local configuration or state is stored per-system, while the vendor operating system is pulled in from the same, immutable, shared snapshot. Each system hence has its private /etc and /var for receiving local configuration, however the OS tree in /usr is pulled in via bind mounts (in case of containers) or technologies like NFS (in case of physical systems), or btrfs snapshots from a golden master image. This is particularly interesting for containers where the goal is to run thousands of container images from the same OS tree. However, it also has a number of other usecases, for example thin client systems, which can boot the same NFS share a number of times. Furthermore this mechanism is useful to implement very simple OS installers, that simply unserialize a /usr snapshot into a file system, install a boot loader, and reboot.
  4. Verifiable Systems are closely related to stateless systems: if the underlying storage technology can cryptographically ensure that the vendor-supplied OS is trusted and in a consistent state, then it must be made sure that /etc or /var are either included in the OS image, or simply unnecessary for booting.

Concepts

A number of Linux-based operating systems have tried to implement some of the schemes described above in one way or another. Particularly interesting are GNOME's OSTree, CoreOS and Google's Android and ChromeOS. They generally found different solutions for the specific problems you have when implementing schemes like this, sometimes taking shortcuts that keep only the specific case in mind, and cannot cover the general purpose. With systemd now being at the core of so many distributions and deeply involved in bringing up and maintaining the system we came to the conclusion that we should attempt to add generic support for setups like this to systemd itself, to open this up for the general purpose distributions to build on. We decided to focus on three kinds of systems:

  1. The stateful system, the traditional system as we know it with machine-specific /etc, /usr and /var, all properly populated.
  2. Startup without a populated /var, but with configured /etc. (We will call these volatile systems.)
  3. Startup without either /etc or /var. (We will call these stateless systems.)

A factory reset is just a special case of the latter two modes, where the system boots up without /var and /etc but the next boot is a normal stateful boot like the first described mode. Note that a mode where /etc is flushed but /var is not is not something we intend to cover (why? well, the user ID question becomes much harder, see below, and we simply saw no usecase for it worth the trouble).

Problems

Booting up a system without a populated /var is relatively straight-forward. With a few lines of tmpfiles configuration it is possible to populate /var with its basic structure in a way that is sufficient to make a system boot cleanly. systemd version 214 and newer ship with support for this. Of course, support for this scheme in systemd is only a small part of the solution. While a lot of software reconstructs the directory hierarchy it needs in /var automatically, much software does not. In cases like this it is necessary to ship a couple of additional tmpfiles lines that set up, at boot time, the necessary files or directories in /var to make the software operate, similar to what RPM or DEB packages would set up at installation time.
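
As an illustration, a package that keeps state under /var might ship a tmpfiles.d snippet along these lines (the package name and paths are made up; the column layout follows tmpfiles.d(5)):

# /usr/lib/tmpfiles.d/myapp.conf -- hypothetical example
# Type  Path              Mode  User   Group  Age
d       /var/lib/myapp    0750  myapp  myapp  -
d       /var/log/myapp    0755  root   root   -
d       /var/cache/myapp  0755  myapp  myapp  -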

Booting up a system without a populated /etc is a more difficult task. In /etc we have a lot of configuration bits that are essential for the system to operate, for example and most importantly system user and group information in /etc/passwd and /etc/group. If the system boots up without /etc there must be a way to replicate the minimal information necessary in it, so that the system manages to boot up fully.

To make this even more complex, in order to support "offline" updates of /usr that are replicated into a number of systems possessing private /etc and /var, there needs to be a way for these directories to be upgraded transparently when necessary, for example by recreating caches like /etc/ld.so.cache or adding missing system users to /etc/passwd on next reboot.

Starting with systemd 215 (yet unreleased, as I type this) we will ship with a number of features in systemd that make /etc-less boots functional:

  • A new tool systemd-sysusers has been added. It introduces a new drop-in directory /usr/lib/sysusers.d/. Minimal descriptions of necessary system users and groups can be placed there. Whenever the tool is invoked it will create these users in /etc/passwd and /etc/group should they be missing. It is only suitable for creating system users and groups, not for normal users. It will write to the files directly via the appropriate glibc APIs, which is the right thing to do for system users. (For normal users no such APIs exist, as the users might be stored centrally on LDAP or suchlike, and they are out of focus for our usecase.) The major benefit of this tool is that system user definition can happen offline: a package simply has to drop in a new file to register a user. This makes system user registration declarative instead of imperative -- which is how system users are traditionally created from RPM or DEB installation scripts. By being declarative it is easy to replicate the users on next boot to a number of system instances. (A small example of such a file appears after this list.)

    To make this new tool interesting for packaging scripts we make it easy to alternatively invoke it during package installation time, thus being a good alternative to invocations of useradd -r and groupadd -r.

    Some OS designs use a static, fixed user/group list stored in /usr as the primary database for users/groups, with fixed UID/GID mappings. While this works for specific systems, this cannot cover the general purpose. As the UID/GID range for system users/groups is very small (only containing 998 users and groups on most systems), the best has to be made from this space and only UIDs/GIDs necessary on the specific system should be allocated. This means allocation has to be dynamic and adjust to what is necessary.

    Also note that this tool has one very nice feature: in addition to fully dynamic, and fully static UID/GID assignment for the users to create, it supports reading UID/GID numbers off existing files in /usr, so that vendors can make use of setuid/setgid binaries owned by specific users.

  • We also added a default user definition list which creates the most basic users the system and systemd need. Of course, very likely downstream distributions might need to alter this default list, add new entries and possibly map specific users to particular numeric UIDs.
  • A new condition ConditionNeedsUpdate= has been added. With this mechanism it is possible to conditionalize execution of services depending on whether /usr is newer than /etc or /var. The idea is that various services that need to be added into the boot process on upgrades make use of this to not delay boot-ups on normal boots, but run as necessary should /usr have been updated since the last boot. This is implemented based on the mtime timestamp of /usr: if the OS has been updated the packaging software should touch the directory, thus informing all instances that an upgrade of /etc and /var might be necessary.
  • We added a number of service files, that make use of the new ConditionNeedsUpdate= switch, and run a couple of services after each update. Among them are the aforementioned systemd-sysusers tool, as well as services that rebuild the udev hardware database, the journal catalog database and the library cache in /etc/ld.so.cache.
  • If systemd detects an empty /etc at early boot it will now use the unit preset information to enable all services by default that the vendor or packager declared. It will then proceed booting.
  • We added a new tmpfiles snippet that is able to reconstruct the most basic structure of /etc if it is missing.
  • tmpfiles also gained the ability to copy entire directory trees into place should they be missing. This is particularly useful for copying certain essential files or directories into /etc without which the system refuses to boot. Currently the most prominent candidates for this are /etc/pam.d and /etc/dbus-1. In the long run we hope that packages can be fixed so that they always work correctly without configuration in /etc. Depending on the software this means that they should come with compiled-in defaults that just work should their configuration file be missing, or that they should fall back to static vendor-supplied configuration in /usr that is used whenever /etc doesn't have any configuration. Both the PAM and the D-Bus case are probably candidates for the latter. Given that there are probably many cases like this we are working with a number of folks to introduce a new directory called /usr/share/etc (name is not settled yet) to major distributions, that always contain the full, original, vendor-supplied configuration of all packages. This is very useful here, so that there's an obvious place to copy the original configuration from, but it is also useful completely independently as this provides administrators with an easy place to diff their own configuration in /etc against, to see what local changes are in place.
  • We added a new --tmpfs= switch to systemd-nspawn to make testing of systems with unpopulated /etc and /var easy. For example, to run a fully state-less container, use a command line like this:

    # systemd-nspawn -D /srv/mycontainer --read-only --tmpfs=/var --tmpfs=/etc -b

    This command line will invoke the container tree stored in /srv/mycontainer in a read-only way, but with a (writable) tmpfs mounted to /var and /etc. With a very recent git snapshot of systemd, invoking a Fedora rawhide system should mostly work OK, modulo the D-Bus and PAM problems mentioned above. A later version of systemd-nspawn is likely to gain a high-level switch --mode={stateful|volatile|stateless} that combines these options into a single switch, reusing the vocabulary introduced earlier.
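
To make the systemd-sysusers mechanism mentioned above a little more concrete, here is a hypothetical /usr/lib/sysusers.d/ file for a made-up package (the exact set of columns follows sysusers.d(5) and may grow in later versions):

# /usr/lib/sysusers.d/myapp.conf -- hypothetical example
# Type  Name    ID  GECOS
u       myapp   -   "myapp daemon"
g       mydata  -
m       myapp   mydata

An ID of "-" asks for dynamic allocation from the system UID/GID range; at package installation time the very same file can be fed to systemd-sysusers instead of calling useradd -r and groupadd -r.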

What's Next

Pulling this all together we are very close to making boots with empty /etc and /var on general purpose Linux operating systems a reality. Of course, while doing the groundwork in systemd gets us some distance, there's a lot of work left. Most importantly: the majority of Linux packages are simply incompatible with this scheme the way they are currently set up. They do not work without configuration in /etc or state directories in /var; they do not drop system user information in /usr/lib/sysusers.d. However, we believe it's our job to do the groundwork, and to start somewhere.

So what does this mean for the next steps? Of course, currently very little of this is available in any distribution (if only because 215 isn't even released yet). However, this will hopefully change quickly. As soon as that is accomplished we can start working on making the other components of the OS work nicely in this scheme. If you are an upstream developer, please consider making your software work correctly if /etc and/or /var are not populated. This means:

  • When you need a state directory in /var and it is missing, create it first. If you cannot do that, because you dropped privileges or suchlike, please consider dropping in a tmpfiles snippet that creates the directory with the right permissions early at boot, should it be missing.
  • When you need configuration files in /etc to work properly, consider changing your application to work nicely when these files are missing, and automatically fall back to either built-in defaults, or to static vendor-supplied configuration files shipped in /usr, so that administrators can override configuration in /etc but if they don't the default configuration counts.
  • When you need a system user or group, consider dropping in a file into /usr/lib/sysusers.d describing the users. (Currently documentation on this is minimal, we will provide more docs on this shortly.)

If you are a packager, you can also help on making this all work:

  • Ask upstream to implement what we describe above, possibly even preparing a patch for this.
  • If upstream will not make these changes, then consider dropping in tmpfiles snippets that copy the bare minimum of configuration files to make your software work from somewhere in /usr into /etc.
  • Consider moving from imperative useradd commands in packaging scripts, to declarative sysusers files. Ideally, this is shipped upstream too, but if that's not possible then simply adding this to packages should be good enough.

Of course, before moving to declarative system user definitions you should consult with your distribution whether their packaging policy even allows that. Currently, most distributions will not, so we have to work to get this changed first.

Anyway, so much about what we have been working on and where we want to take this.

Conclusion

Before we finish, let me stress again why we are doing all this:

  1. For end-user machines like desktops, tablets or mobile phones, we want a generic way to implement factory reset, which the user can make use of when the system is broken (saves you support costs), or when he wants to sell it and get rid of his private data, and renew that "fresh car smell".
  2. For embedded machines we want a generic way to reset devices. We also want a way for every single boot to be identical to a factory reset, in a stateless system design.
  3. For all kinds of systems we want to centralize vendor data in /usr so that it can be strictly read-only, and fully cryptographically verified as one unit.
  4. We want to enable new kinds of OS installers that simply deserialize a vendor OS /usr snapshot into a new file system, install a boot loader and reboot, leaving all first-time configuration to the next boot.
  5. We want to enable new kinds of OS updaters that build on this, and manage a number of vendor OS /usr snapshots in verified states, and which can then update /etc and /var simply by rebooting into a newer version.
  6. We want to scale container setups naturally, by sharing a single golden master /usr tree with a large number of instances that simply maintain their own private /etc and /var for their private configuration and state, while still allowing clean updates of /usr.
  7. We want to make thin clients that share /usr across the network work by allowing stateless bootups. During all discussions on how /usr was to be organized this was frequently mentioned. A setup like this so far only worked in very specific cases; with this scheme we want to make it work in the general case.

Of course, we have no illusions, just doing the groundwork for all of this in systemd doesn't make this all a real-life solution yet. Also, it's very unlikely that all of Fedora (or any other general purpose distribution) will support this scheme for all its packages soon; however, we are quite confident that the idea is convincing, that we need to start somewhere, and that getting the most core packages adapted to this shouldn't be out of reach.

Oh, and of course, the concepts behind this are really not new, we know that. However, what's new here is that we try to make them available in a general purpose OS core, instead of special purpose systems.

Anyway, let's get the ball rolling! Let's make stateless systems a reality!

And that's all I have for now. I am sure this leaves a lot of questions open. If you have any, join us on IRC on #systemd on freenode or comment on Google+.

Daily log June 16th 2014

Catch up from the last week or so.
Spent a lot of time turning a bunch of code in trinity inside out. After splitting the logging code into “render” and “write out buffer”, I had a bunch of small bugs to nail down. While chasing those I kept occasionally hitting some strange bug where the list of pids would get corrupted. Two weeks on, and I’m still no closer to figuring out the exact cause, but I’ve got a lot more clues (mostly, after a lot of code rewriting, by now knowing what it _isn’t_).
Some of that rewriting has been on my mind for a while: the shared ‘shm’ structure between all processes was growing to the point that I was having trouble remembering why certain variables were shared there rather than just being globals in main, inherited at fork time. So I moved a bunch of variables from the shm to another structure named ‘childdata’; the shm now holds an array of pointers to these.

Some of the code then looked a little convoluted with references that used to be shm->data being replaced with shm->children[childno].data, which I remedied by having each child instantiate a ‘this_child’ var when it starts up, allowing this_child->data to point to the right thing.
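
Roughly, the layout described above looks like this (a sketch with invented names, not trinity's actual declarations):

#include <sys/types.h>

#define MAX_CHILDREN 64

struct childdata {
    pid_t pid;
    void *data;                 /* per-child state that used to live directly in the shm */
};

struct shm_s {
    struct childdata *children[MAX_CHILDREN];   /* array of pointers, one per child */
    /* ... the remaining genuinely shared state ... */
};

static struct shm_s *shm;               /* shared mapping set up before fork() */
static struct childdata *this_child;    /* set once per child at startup */

static void init_this_child(int childno)
{
    /* lets child code write this_child->data instead of shm->children[childno]->data */
    this_child = shm->children[childno];
}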

All of this code churn served in part to refresh my memory on how some of this worked, hoping that I’d stumble across the random scribble that occasionally happens. I also added a bunch of code to things like dump backtraces, which has been handy for turning up 1-2 unrelated bugs.

On Friday afternoon I discovered that the bug only shows up if MALLOC_PERTURB_ is set. Which is curious, because the struct in question that gets scribbled over is never free()’d. It’s allocated once at startup, and inherited in each child when it fork()’s.
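
(For anyone unfamiliar with it: MALLOC_PERTURB_ is a glibc feature; when set to a non-zero value, malloc poisons freshly allocated and freed memory with byte patterns derived from that value, so reads of uninitialized or stale heap memory show up much sooner. A typical invocation, with the value picked at random per run:)

MALLOC_PERTURB_=$((RANDOM % 255 + 1)) ./trinity    # plus whatever options you normally pass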

Debugging continues..

Daily log June 16th 2014 is a post from: codemonkey.org.uk

June 12, 2014

Building clean docker images

I’ve been doing some experiments recently with ways to build docker images. As a test bed for this I’m building an image with the Gtk+ Broadway backend, and some apps. This will both be useful (have a simple way to show off Broadway) as well as being a complex real-world use case.

I have two goals for the image. First of all, building of custom code has to be repeatable and controlled. Secondly, the produced image should be clean and have a minimal size. In particular, we don’t want any of the build dependencies or intermediate results in any layer in the final docker image.

The approach I’m using involves a build Dockerfile, which installs all the build tools and the development dependencies. Inside this container I build a set of rpms. Then another Dockerfile creates a runtime image which just installs the necessary runtime rpms from the rpms built in the first container.

However, there is a complication to this. The runtime image is created using a Dockerfile, but there is no way for a Dockerfile to access the files produced from the build container, as volumes don’t work during docker build. We could extract the rpms from the container and use ADD in the dockerfile to insert the files into the container before installing them, but this would create an extra layer in the runtime image that contained the rpms, making it unnecessarily large.

To solve this I made the build container construct a yum repository of all the built rpms, and then start lighttpd to export it. So, by doing:

docker build -t broadway-repo .
docker run -d -p 9999:80 broadway-repo

I get a yum repository server at port 9999 that can be used by the second container. Ideally we would want to use docker links to get access to the build container, but unfortunately links don’t work with docker build. Instead we use a hack in the Makefile to generate a yum repo file with the host ip address and add that to the container.

Here is the Dockerfile used for the build container. There are some interesting things to notice in it (a condensed sketch follows the list):

  • Instead of starting by adding all the source files we add them one by one as needed. This allows us to use the caching that docker build does, so that if you change something at the end of the Dockerfile docker will reuse the previously built images all the way up to the first line that has changed.
  • We try to do multiple commands in each RUN invocation, chaining them together with &&. This way we can avoid the current limit of 127 layers in a docker image.
  • At the end we use createrepo to create the repository and set the default command to run lighttpd on the repository at port 80.
  • We rebuild all packages in the Gtk+ stack, so there are no X11 dependencies in the final image. Additionally, we strip a bunch of dependencies and use some other tricks to make the rpms themselves a bit smaller.
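
Condensed to its bones, the build Dockerfile looks something like the sketch below. The spec, tarball and lighttpd.conf names are placeholders (the real one builds the whole Gtk+ stack), and lighttpd.conf is assumed to be a small config pointing the document root at /srv/repo:

FROM fedora:20

# Build tools, packaging tools, and the web server that will export the repo.
RUN yum install -y rpm-build createrepo lighttpd tar make gcc && yum clean all

# Add sources one at a time so docker's build cache stays useful.
ADD broadway-gtk3.spec /root/rpmbuild/SPECS/
ADD gtk3.tar.xz /root/rpmbuild/SOURCES/

# Chain related steps with && to keep the layer count down.
RUN rpmbuild -ba /root/rpmbuild/SPECS/broadway-gtk3.spec && \
    mkdir -p /srv/repo && \
    cp /root/rpmbuild/RPMS/*/*.rpm /srv/repo/ && \
    createrepo /srv/repo

# Serve the finished repository on port 80.
ADD lighttpd.conf /etc/lighttpd/lighttpd.conf
EXPOSE 80
CMD ["lighttpd", "-D", "-f", "/etc/lighttpd/lighttpd.conf"]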

Here is the Dockerfile for the runtime container. It installs the required rpms, creates a user and sets up an init script and a super-simple panel/launcher app written in javascript. We also use a simple init as pid 1 in the container so that processes that die are reaped.
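
The runtime side, again as a rough sketch with made-up names (broadway.repo being the generated repo file that points at the build container's exported port):

FROM fedora:20

# Repo file generated by the Makefile, pointing at the build container.
ADD broadway.repo /etc/yum.repos.d/broadway.repo

# Install only the runtime rpms; no build dependencies end up in any layer.
RUN yum install -y broadway-gtk3 broadway-apps && yum clean all

# Unprivileged user, a tiny init to reap dead processes, and the launcher.
RUN useradd -m broadway
ADD init.sh /usr/local/bin/init.sh
EXPOSE 8080
CMD ["/usr/local/bin/init.sh"]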

Here is a screenshot of the final results:

Broadway

You can try it yourself by running “docker run -d -p 8080:8080 alexl/broadway” and loading http://127.0.0.1:8080 in a browser.

June 10, 2014

Chromium revisited

It's been more than a year since I've had a successful build of Chromium that I was willing to share with anyone else, but last night I pushed out a Fedora 20 x86_64 build of the current stable Chromium. Here's where you can go and get it:

1) Get this repo file: http://repos.fedorapeople.org/repos/spot/chromium-stable/fedora-chromium-stable.repo and put it in /etc/yum.repos.d/
2) EDIT - I've signed the packages with my personal GPG key, upon request. This means you also need to download my public key. You can either get it from:
http://repos.fedorapeople.org/repos/spot/chromium-stable/spot.gpg
or by running:
gpg --recv-key 93054260 ; gpg --export --armor 93054260 > spot.gpg
Then, copy it (as root) to /etc/pki/rpm-gpg/
cp -a spot.gpg /etc/pki/rpm-gpg/
3) Run yum install chromium chromium-v8

Why has it been so long?

A) I really believe in building things from source code. At the very least, I should be able to take the source code and know that I can follow instructions (whether in an elaborate README or in src.rpm) and get a predictable binary that matches it. This is one of the core reasons that I got involved with FOSS oh so many years ago.

B) I'm a bit of a perfectionist. I wanted any Chromium builds to be as fully functional as possible, and not just put something out there with disabled features.

C) Chromium's not my day job. I don't get paid to do it by Red Hat (they don't mind me doing it as long as I also do the work they _do_ pay me for).

D) I also build Chromium using as many system libraries as possible, because I really get annoyed at Google's policy of bundling half the planet. I also pull out and package independently some components that are reasonably independent (webrtc, ffmpegsumo).

If my schedule is reasonably clear, it takes me about 7-10 _days_ to get a Chromium build up and going, with all of its components. Why did it take a year this time around? Here's the specifics:

AA) I changed jobs in November, and that didn't leave me very much time to do anything else. It also didn't leave me very motivated to drill down into Chromium for quite some time. I've gotten my feet back underneath me though, so I made some time to revisit the problem....

BB) ... and the core problem was the toolchains that Chromium requires. Chromium relies upon two independent (sortof) toolchains to build its Native Client (NaCl) and Portable Native Client (PNaCl) support. NaCl is binutils/gcc/newlib (it's also glibc and other things, but Chromium doesn't need those so I don't build them), and PNaCl is binutils/llvm/clang/a whole host of other libs. NaCl was reasonably easy to figure out how to package, even if you do have to do a bootstrap pass through gcc to do it, but for a very long time, I had no success getting PNaCl to build whatsoever. I tried teasing it apart into its components, but while it depends on the NaCl toolchain to be built, it also builds and uses incompatible versions of libraries that conflict with that toolchain. Eventually, I tried just building it from the giant "here is all the PNaCl source in one git checkout" that Google loosely documents, but it never worked (and it kept trying to download pre-built NaCl binaries to build itself which I didn't want to use).

** deep breath **

After a few months not looking at Chromium or NaCl or PNaCl, I revisited it with fresh eyes and brain. Roland McGrath was very helpful in giving me advice and feedback as to where I was going wrong with my efforts, and I finally managed to build a working PNaCl package. It's not done the way I'd want it to be (it uses the giant git checkout of all PNaCl sources instead of breaking it out into components), but it is built entirely from source and it uses my NaCl RPMs.

The next hurdle was the build system inside Chromium. The last time I'd done a build, I used gyp to generate Makefiles, because, for all of make's eccentricities, it is the devil we understand. I bet you can guess the next part... Makefile generation no longer works. Someone reported a bug on it, and Google's response is paraphrased as "people use that? we should disable it." They've moved the build tool over to something called "ninja", which was written by Google for Chromium. It's not the worst tool ever, but it's new, and learning new things takes time.

Make packages, test packages, build, repeat. Namespace off the v8 that chromium needs into a chromium-v8 package that doesn't conflict with the v8 in Fedora proper that node.js uses. Discover that Google has made changes to namespace zlib (to be fair, it's the same hack Firefox uses), so we have to use their bundled copy. Discover that Google has added code assuming that icu is bundled and no longer works with the system copy. Discover that Google's fork of libprotobuf is not compatible with the system copy, but the API looks identical, so it builds against the system copy but does not work properly (and coredumps when you try to set up sync). Add all the missing files that need to go into the package (there is no "make install" equivalent).

Then, we test. Discover that NaCl/PNaCl sortof works, but nothing graphical does. Figure out that we need to enable the "Override software rendering list" in chrome://flags because intel graphics are blacklisted on Linux (it works fine once I do that, at least on my Thinkpad T440s, your mileage may vary). Test WebRTC (seems to work). Push packages and hope for the best. Wait for the inevitable bugs to roll in.

******

I didn't do an i686 build (but some of the libraries that are likely to be multilib on an x86_64 system are present in i686 builds as well), because I'm reasonably sure there are not very many chromium users for that arch. If I'm wrong, let me know. I also haven't built for any older targets.

June 06, 2014

Monthly Fedora kernel bug statistics – May 2014

                             19    20   rawhide   Total
Open:                        88   308       158   (554)
Opened since 2014-05-01:      9   112        20   (141)
Closed since 2014-05-01:     26    68        17   (111)
Changed since 2014-05-01:    49   293        31   (373)

Monthly Fedora kernel bug statistics – May 2014 is a post from: codemonkey.org.uk

June 03, 2014

Daily log June 2nd 2014

Spent the day making some progress on trinity’s “dump buffers after we detect tainted kernel” code, before hitting a roadblock. It’s possible for a fuzzing child process to allocate a buffer to create (for example) a mangled filename. Because this allocation is local to the child, attempts to reconstruct what it pointed to from the dumping process will fail.
I spent a while thinking about this before starting on a pretty severe rewrite of how the existing logging code works. Instead of just writing the description of what is in the registers to stdout/logs, it now writes it to a buffer, which can be referenced directly by any of the other processes. A lot of the output code actually looks a lot easier to read as an unintended consequence.
No doubt I’ve introduced a bunch of bugs in the process, which I’ll spend time trying to get on top of tomorrow.
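
The shape of the split is roughly this (a sketch with invented names, not trinity's real code):

#include <stdio.h>

#define POSTBUFFER_LEN 4096

/* One record per child, living in shared memory so the main process can
 * still read it even if the child has wedged or died. */
struct logrecord {
    char postbuffer[POSTBUFFER_LEN];
    int len;
};

/* Step 1: render the register description into the buffer; no I/O here. */
static void render_regs(struct logrecord *rec, const char *name, unsigned long arg)
{
    rec->len = snprintf(rec->postbuffer, POSTBUFFER_LEN, "%s(0x%lx)\n", name, arg);
}

/* Step 2: flush the buffer to the tty or a log file, possibly from another process. */
static void flush_record(FILE *out, const struct logrecord *rec)
{
    fwrite(rec->postbuffer, 1, rec->len, out);
    fflush(out);
}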

Daily log June 2nd 2014 is a post from: codemonkey.org.uk

May 28, 2014

Daily log May 27th 2014

Spent much of the day getting on top of the inevitable email backlog.
Later started cleaning up some of the code in trinity that dumps out syscall parameters to tty/log files.
One of the problems that has come up with trinity is that many people run it with logging disabled, because it’s much faster when you’re not constantly spewing to files, or scrolling text. This unfortunately leads to situations where we get an oops and we have no idea what actually happened. We have some information that we could dump in that case, which we aren’t right now.
So I’ve been spending time lately working towards a ‘post-mortem dump’ feature. The two missing pieces are 1. moving the logging functions away from being special-cased to be called from within a child process, so that they can just chew on a syscall record (this work is now almost done), and 2. maintaining a ringbuffer of these records per child (not a huge amount of work once everything else is in place).
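
The per-child ringbuffer part could be as simple as this sketch (again, invented names rather than trinity's actual structures):

#define NR_POSTMORTEM_RECORDS 32

struct syscallrec {
    unsigned int nr;            /* syscall number */
    unsigned long args[6];      /* arguments as passed */
    unsigned long retval;
};

struct childring {
    struct syscallrec records[NR_POSTMORTEM_RECORDS];
    unsigned int head;          /* next slot to overwrite */
};

/* Called after each syscall: remember it, silently overwriting the oldest entry. */
static void remember_syscall(struct childring *ring, const struct syscallrec *rec)
{
    ring->records[ring->head] = *rec;
    ring->head = (ring->head + 1) % NR_POSTMORTEM_RECORDS;
}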

Hopefully I’ll have this implemented soon, and then I can get back on track with some of the more interesting fuzzing ideas I’ve had for improving it.

Daily log May 27th 2014 is a post from: codemonkey.org.uk

May 27, 2014

Next stop, SMT

The little hardware project, mentioned previously, continues to chug along. A prototype is now happily blinking LEDs on a veroboard:

(photo of the prototype)

Now it's time to use a real technology, so the thing could be used onboard a car or airplane. Since the heart of the design is a chip in an LGA-14 package, we are looking at soldering a surface-mounted element with 0.35 mm pitch.

I reached out to members of a local hackerspace for advice, and they suggested forgetting what people post to the Internet about wave-soldering in an oven. Instead, buy a cheap 10x microscope, a fine tip for my iron, and some flux-solder paste, then do it all by hand. As Yoda said, there is only do and do not, there is no try.

May 25, 2014

Digital life in 2014

I do not get out often and my clamshell cellphone incurs a bill of $3/month. So, imagine my surprise when at a dinner with colleagues everyone pulled out a charger brick and plugged their smartphone into it. Or, actually, one guy did not have a brick with him, so someone else let him tap into a 2-port brick (pictured above). The same ritual repeated at every dinner!

I can offer a couple of explanations. One is that it is much cheaper to buy a smartphone with a poor battery life and a brick than to buy a cellphone with a decent battery life (decent being >= 1 day, so you could charge it overnight). Or, that market forces are aligned in such a way that there are no smartphones on the market that can last for a whole day (anymore).

UPDATE: My wife is a smartphone user and explained it to me. Apparently, sooner or later everyone hits an app which is an absolute energy hog for no good reason, and is unwilling to discard it (in her case it was Pomodoro, but it could be anything). Once that happens, no technology exists to pack enough battery into a smartphone. Or so the story goes. In other words, they buy a smartphone hoping that they will not need the brick, but inevitably they do.

May 23, 2014

OpenStack Atlanta 2014

The best part, I think, came during the Swift Ops session, when our PTL, John Dickinson, asked for a show of hands. For example -- without revealing any specific proprietary information -- how many people run clusters of less than 10 nodes, 10 to 100, 100 to 1000, etc. He also asked how old the production clusters were, and if anyone deployed Swift in 2014. Most clusters were older, and only 1 man raised his hand. John asked him what were the difficulties setting it up, and the man said: "we had some, but then we paid Red Hat to fix it up for us, and they did, so it works okay now." I felt so useful!

The hardware show was pretty bare, compared to Portland, where Dell and OCP brought out cool things.

HGST showed their Ethernet drive, but they used a chassis so dire, I don't even want to post a picture. OCP did the same this year: they brought a German partner who demonstrated a storage box that looked built in a basement in Kherson, Ukraine, while under siege by Kiev forces.

Here's a poor pic of cute Mellanox Ethernet wares: a switch, NICs, and some kind of modern equivalent of GBICs for fiber.

Interestingly enough, although Mellanox displayed Ethernet only, I heard in other sessions that Infiniband was not entirely dead. Basically if you need to step beyond bonded 10GbE, there's nothing else for your overworked Swift proxies: it's Infiniband or nothing. My interlocutor from SwiftStack implied that a router existed into which you could plug your Infiniband pipe, I think.

DisplayPort MST dock support for Fedora 20 copr repo.

So I've created a copr

http://copr.fedoraproject.org/coprs/airlied/mst/

With a kernel + intel driver that should provide support for DisplayPort MST on Intel Haswell hardware. It doesn't do any of the fancy Dell monitor stuff yet; it's primarily for people who have Lenovo or Dell docks and laptops that can't currently drive multiple heads.

The kernel source is from this branch which backports a chunk of stuff to v3.14 to support this.

http://cgit.freedesktop.org/~airlied/linux/log/?h=drm-i915-mst-v3.14

It might still have some bugs and crashes, but the basics should in theory work.

Daily log May 22nd 2014

Found another hugepage VM bug.
This one is VM_BUG_ON_PAGE(PageTail(page), page). The filemap.c bug has gone into hiding since last weekend. Annoying.

On the trinity list, Michael Ellerman reported a couple of bugs that looked like memory corruption since my changes yesterday to make fd registration more dynamic. So I started the day trying out -fsanitize=address, without much success (I had fixed a number of bugs recently using it). However, reading the gcc man page, I found out about -fsanitize=undefined, which I was previously unaware of. It's new in gcc 4.9, so it ships with Fedora rawhide, but my test box is running F20. Luckily, clang has supported it even longer.

So I spent a while fixing up a bunch of silly bugs, like (1<<31) where it should have been (1UL<<31), and some not-so-obvious alignment bugs. A dozen or so fixes for bugs that had been there for years. Sadly though, nothing that explained the corruption Michael has been seeing. I spent some more time looking over yesterday's changes without success. Annoying that I don't see the corruption when I run it.
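
For illustration, here's a minimal standalone example (not trinity code) of the kind of shift bug -fsanitize=undefined catches at runtime:

#include <stdio.h>

int main(void)
{
	int bits = 31;

	/* Undefined behaviour: the 1 is a signed int, and shifting a bit
	 * into the sign position is UB, which UBSan flags at runtime. */
	unsigned long bad = 1 << bits;

	/* Well-defined: do the shift on an unsigned long instead. */
	unsigned long good = 1UL << bits;

	printf("%lx %lx\n", bad, good);
	return 0;
}

Built with something like gcc -fsanitize=undefined shift.c, the first shift should produce a runtime error report while the program otherwise carries on.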

Decided not to take on anything too ambitious for the rest of the day, given I'm getting an early start on the long weekend by taking tomorrow off. Hopefully when I return with fresh eyes on Tuesday I'll have some idea of what I can do to reproduce the bug.

Daily log May 22nd 2014 is a post from: codemonkey.org.uk

May 21, 2014

Daily log May 21st 2014

Fell out of the habit of blogging regularly again.
Recap: Took Monday off, to deal with fallout from a flooded apartment that happened at 2am Sunday night. Not fun in the least.

Despite this, I've found time to do a bunch of trinity work this last week. The biggest highlight was today's work gutting the generation of file descriptors on startup, moving it away from a nasty tangled mess of gotos and special cases to something more manageable, involving dynamic registration. The code is a lot more readable now, and hopefully as a result more maintainable.
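
For the curious, the general shape of dynamic registration is something like this sketch (names simplified, not the actual trinity interfaces):

struct fd_provider {
	const char *name;
	int (*open_fds)(void);		/* called once at startup */
	struct fd_provider *next;
};

static struct fd_provider *fd_providers;

static void register_fd_provider(struct fd_provider *provider)
{
	provider->next = fd_providers;
	fd_providers = provider;
}

/* Replaces the old tangle of gotos: just walk whatever got registered. */
static void open_startup_fds(void)
{
	struct fd_provider *provider;

	for (provider = fd_providers; provider; provider = provider->next)
		provider->open_fds();
}

Each fd type (files, sockets, and so on) then just registers itself, instead of being special-cased in one big startup function.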

This work will continue over the next few days, extending to the filename cache code. For a while, I’ve felt that maintaining one single filename cache of /dev, /sys & /proc wasn’t so great, so I’ll be moving that to also be a list of caches.

Eventually, the network socket code also needs the same kind of attention, moving it away from a single array of sockets, to per-protocol buckets. But that’s pretty low down my list right now.

Aside from the work mentioned above, there have been lots of commits changing code to use syscallrecord structs to make things clearer, and making sure functions get passed the parameters they need rather than having them recompute them.

Daily log May 21st 2014 is a post from: codemonkey.org.uk

May 15, 2014

Seagate Kinetic and SMR

In the trivial-things-I-never-noticed department: Kinetic might have a technical merit. Due to the long history of vendors selling high-margin snake oil, I was somewhat sceptical of object-addressed storage. However, they had a session with Joe Arnold of SwiftStack and a corporate person from Seagate (with an excessively complicated title), where they mentioned off-hand that all this "intelligent" stuff is actually supposed to help with SMR. As everyone probably knows, shingled drives implement complicated read-modify-write cycles to support the traditional sector-addressed model, and the performance penalty is worse than that of a 4K drive with a 512-byte interface.

I cannot help thinking that it would be even better to find a more natural model to expose the characteristics of the drive to the host. Perhaps some kind of "log-structured drive". I bet there are going to be all sorts of overheads in the drive's filesystem that negate the increase in areal density. After all, shingles only give you about 2x. The long-term performance of any object-addressed drive is also in doubt as the fragmentation mounts.

BTW, a Seagate guy swore to me that Kinetic is not patent-encumbered and that they really want other drive vendors to jump on the bandwagon.

UPDATE: Jeff Darcy brought up HGST on Twitter. The former-Hitachi guys (owned by WD now) do something entirely different: they allow apps, such as the Swift object server, to run directly on the drive. It's cute, but it does nothing to address the block-addressing API being insufficient to manage a shingled drive. When software runs on the drive, it still has to talk to the rest of the drive somehow, and HGST did not add a different API to the kernel. All it does is kick the can down the road in the hope that a solution comes along.

UPDATE: Wow even Sage.

May 14, 2014

Daily log May 13th 2014

After yesterday's Trinity release, not too much fallout. Michael Ellerman reported a situation that causes the watchdog to hang indefinitely after the main process has exited. Hopefully this won't affect too many people. It should now be fixed in the git tree anyway; it doesn't seem worth doing another point release over right now.

Thomas Gleixner finally got to the bottom of the futex bugs that Trinity has been hitting for a while.

The only other kernel bugs I’m still seeing over and over are the filemap.c:202 BUG_ON, and a perf bug that has been around for a while.

Sasha seems to be hitting a ton of VM related issues on linux-next. Hopefully some of that stuff gets straightened out before the next merge window.

Daily log May 13th 2014 is a post from: codemonkey.org.uk

May 12, 2014

Trinity 1.4 release.

As predicted last week, I made v1.4 tarball release of Trinity today. The run I left going over the weekend seemed ok, so I started prepping a release. Then at the last minute, found a bug that would only show up if you had a partial socket cache. This caused trinity to exit immediately on startup. Why it waited until today to show up is a mystery, but it’s fixed now.

A few other last minute “polishing” patches, hushing some of the excessive spew etc, but it’s otherwise mostly the same as what I had prepared on Friday.

The weekend test run found the mm/filemap.c:202 BUG_ON again, where we have a page mapped that shouldn’t be. Really need to focus on trying to get a better reproducer for that sometime soon, as it’s been around for a while now.

Trinity 1.4 release. is a post from: codemonkey.org.uk

May 09, 2014

Getting closer to Trinity 1.4

I plan on doing a tarball release of Trinity on Monday. It’s been about six months since the last release, and I know I’m overdue a tarball release because I find myself asking more often “Are you using a build from the tarball, or git?”

Since the last release it’s got a lot better at tickling VM related bugs, in part due to the tracking and recycling of mmap() results from the child processes, but also from a bunch of related work to improve the .sanitise routines of various VM syscalls. There have also of course been a ton of other changes and fixes. I’ve spent the last week mostly doing last minute cleanups & fixes, plugging memory leaks etc, and finally fixing the locking routines to behave as I had originally intended.

So, touch wood, barring any last minute surprises, what’s in git today will be 1.4 on Monday morning.

I’m already looking forward to working on some new features I’ve had in mind for some time. (Some of which I originally planned on working on as a standalone tool, but after last months LSF/MM summit, I realised the potential for integrating those ideas into trinity). More on that stuff next week.

Getting closer to Trinity 1.4 is a post from: codemonkey.org.uk

May 08, 2014

Mental note

I went through the architecture manual for Ceph and penciled down a few ideas that could be applied to Swift. The biggest one is that we could benefit from some kind of massive proxy or a PACO setup.

Unfortunately, I see problems with a large PACO. Memcached efficiency will nosedive, for one. But also, how are we going to make sure clients are spread right? There's no cluster map, and thus clients can't know which proxy in the PACO is closer to the location. In fact, we deliberately prevent them from knowing too much: they don't even know the cluster's partition size.

The reason this matters is that I feel that EC should increase requirements for CPU on proxies, which are CPU bound in most clusters already. Of course what I feel may not be what actually occurs, so maybe it does not matter.

May 07, 2014

Random link dump

A random link dump of some videos related to performance, scalability, and code optimization that I found interesting recently.

  • Two really good videos from Microsoft events on native code performance from the perspective of the compiler. A processor architecture oriented one, and
    a memory/caches oriented one.
    The latter half of the first video tells the story of a 60% performance regression that Microsoft saw when running code on Haswell due to store buffer stalls. Pretty horrific.
    The second video gave me flashbacks to this optimization book. So much so that I went to dig it out, and remembered I loaned my copy to someone who I’ve long since forgotten about. So I ended up buying it again and have been re-reading it the last few days. If you ignore the Pentium 4 specific parts, it’s still one of my favorite books. I’d love to see a 3rd edition one day.
    There’s a bonus compiler video by the same guy that’s pretty good too, covering vectorization and bunch of other optimizations that modern compilers do.
  • scaling to 10 million concurrent connections. Again, a year old, but somehow I've only now gotten around to watching it. Some interesting stuff in there about userspace network stacks.
  • address sanitizer in llvm presentation. From the 2013 llvm meeting, a good overview of the Asan project. I keep meaning to find time to play with the kernel variant. I’m sure it and Trinity would make for interesting results.

Random link dump is a post from: codemonkey.org.uk

May 05, 2014

Coverity, heartbleed, and the kernel.

I was a little surprised last week when I did the Coverity scan on the 3.15-rc3 kernel, and 118 new issues appeared. Usually once we’re past -rc1, the number of new issues between builds is in low double figures. It turned out to be due to Coverity’s new checks that flag byte swaps as potential sources of tainted data.

There's a lot to go through, but from what I've seen so far there hasn't been anything as scary as what happened with OpenSSL. Though given my unfamiliarity with some of the drivers, some of them may turn out to need fixing even if they aren't exploitable.

What is pretty neat though is that the scanner doesn't just detect calls to the usual byte-swapping routines; it even picks up on open-coded variants, such as:

n_blocks = (p_src[0] << 8) | p_src[1];

The tricky part is determining, in various drivers, whether (i.e., in the example above) 'p_src' is actually something a user can control, or whether it's just reading something back from hardware. Quite a few of them seem to be in things like video BIOS parsers, firmware loaders, etc. Even handling packets from a joystick involves byte-swapping.

Even though the hardware should limit the range of values these routines see, we do seem to do range checking in the kernel in most cases. In the cases where we don't (that I've seen so far), I'm not sure we care, as the code looks like it would still work correctly.
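
To make that concrete, here's a minimal userspace sketch (not from any real driver) of the pattern in question: the byte-swapped value is tainted until it gets range checked.

#include <stdint.h>
#include <stddef.h>

#define MAX_BLOCKS 64	/* hypothetical limit, just for this sketch */

int parse_header(const uint8_t *p_src, size_t len)
{
	unsigned int n_blocks;

	if (len < 2)
		return -1;

	/* Open-coded 16-bit big-endian read; p_src may be user-controlled. */
	n_blocks = (p_src[0] << 8) | p_src[1];

	/* Range check the tainted value before using it as a loop bound,
	 * allocation size, etc. */
	if (n_blocks > MAX_BLOCKS)
		return -1;

	return (int)n_blocks;
}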

Somewhat surprising was how many of these new warnings turned up in mostly unmaintained or otherwise crappy parts of the kernel. ISDN. Staging wireless drivers. Megaraid. The list goes on..

Regardless of whether or not these new checks find bugs, re-reviewing some of this code can't really be a bad thing, as a lot of it probably never had a lot of review when it was merged 10-15 years ago.

Coverity, heartbleed, and the kernel. is a post from: codemonkey.org.uk

May 02, 2014

Daily log May 1st 2014

Rough couple days. Didn’t get much done yesterday due to food poisoning.
Still chasing the futex bug I started hitting a few days ago.

More polishing towards a point release of Trinity. Fixed some code in the locking routines that never really worked by rewriting it. After spending an hour or so trying to iron out bugs in my rewritten code, I realized things could be done in a much simpler manner, and also a few hundred lines less code. As a result the locking routines got simpler, and the watchdog code got some long overdue cleanup.

If things shake out well over the next few days, I’ll call this 1.4 on Monday.

Daily log May 1st 2014 is a post from: codemonkey.org.uk

May 01, 2014

Get off my lawn

My only contact with Erasure Codes previously was in the field of data transmission (naturally). While a student at Lomonosov MSU, I worked in a company that developed a network for PCs, called "Micross" and led by Andrey Kinash, IIRC. Originally it ran on top of MS-DOS 3.30, and was ported to 4.01 and later versions over time.

Ethernet was absurdly expensive in the country back in 1985, so the hardware used the built-in serial port. A small box with a relay attached the PC to a ring, where a software token was circulated. The primary means of keeping the ring's integrity was the relay, controlled by the DTR signal. In case that was not enough, the system implemented a double-ring isolation not unlike FDDI's.

The baud rate was 115.2 kbit/s, and that interfered with MS-DOS. I should note that notionally the system permitted data transfer while the PCs ran their usual office applications. But even if a PC was otherwise idle, a hit by the system timer would block out interrupts long enough to drop 2 or 3 characters. The solution was to implement a kind of erasure coding, which recovered not only from corruption, but also from loss of data. The math and the implementation in Modula-2 were done by Mr. Vladimir Roganov.

I remember that at the time, the ability to share directories over the LAN without a dedicated server was what impressed customers the most, but I thought that Roganov's EC was perhaps the most praiseworthy part, from a nerdiness perspective. Remember that all of that ran on a 12 MHz 80286.

Monthly Fedora kernel bug statistics – April 2014

  19  20  rawhide  (total)
Open: 104 234 153 (491)
Opened since 2014-04-01 7 81 29 (117)
Closed since 2014-04-01 15 46 18 (79)
Changed since 2014-04-01 26 107 42 (175)

Monthly Fedora kernel bug statistics – April 2014 is a post from: codemonkey.org.uk

April 30, 2014

Daily log April 29th 2014

Started the day by hitting an assertion in the rtmutex-debug code.

Fixed a long-standing bug in trinity's memory allocation routines.
On occasion, it would spew messages when it failed to allocate even a single page of memory. I've been meaning to dig into this for a while, wondering if there was a memory leak somewhere. The actual cause was a lot more dumb: if we randomly happened to call mlockall() in a child process, after a while we'd fill up memory, as none of those allocations could be paged out. Now that trinity is doing multi-megabyte allocations when it tries to map huge pages, this is a problem.
The answer, I decided, was to call munlockall() if the allocator fails, and then retry. It seems to be holding up so far.
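
In simplified form, the fallback looks something like this sketch (alloc_shared() is a stand-in for the real allocation path, not the actual trinity code):

#include <sys/mman.h>
#include <stddef.h>

/* Stand-in for the real allocation path. */
static void *alloc_shared(size_t len)
{
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_ANONYMOUS | MAP_SHARED, -1, 0);
	return p == MAP_FAILED ? NULL : p;
}

void *alloc_with_retry(size_t len)
{
	void *p = alloc_shared(len);

	if (p == NULL) {
		/* A child may have called mlockall(); undo it and retry. */
		munlockall();
		p = alloc_shared(len);
	}
	return p;
}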

I'm spending the rest of the week wrapping up loose ends like this, and then cutting a new point release, as it's been six months or so since the last one, and I want to try some more experimental things with file IO soon, which might upset its ability to trigger some of the existing kernel bugs for a while.

Daily log April 29th 2014 is a post from: codemonkey.org.uk

April 27, 2014

gEDA

[photo: zag-avr-1]

For the next stage, I switched from Fritzing to gEDA and was much happier for it. Certainly, gEDA is old-fashioned. The UI of gschem is really weird, and even has problems dealing with click-to-focus. But it works.

I put the phase onto a Veroboard, as I'm not ready to deal with ordering PCBs yet.

April 25, 2014

Weekly Fedora kernel bug statistics – April 25th 2014

  19  20  rawhide  (total)
Open: 102 223 148 (473)
Opened since 2014-04-18 1 25 3 (29)
Closed since 2014-04-18 3 16 3 (22)
Changed since 2014-04-18 7 38 15 (60)

Weekly Fedora kernel bug statistics – April 25th 2014 is a post from: codemonkey.org.uk

April 24, 2014

server move.

Lots of behind-the-scenes block stacking yesterday moving this web site to another machine. What should have been a simple move turned out to be slightly more involved, but things should be ok now. Took the opportunity to reduce some unnecessary services, and as a result git.codemonkey.org.uk is no more. You can now find trinity, x86info and more on github. (I’m only using it for git hosting, please don’t send me github pull requests)

server move. is a post from: codemonkey.org.uk

April 22, 2014

Daily log April 22nd 2014

Did some investigation into why the move_pages() sanitize routine in trinity seems to routinely fail to allocate a single page of memory. This led to some cleanup work on the code that creates iovec structs that get passed to syscalls.
While cleaning up this code to plug some memory leaks, I accidentally ended up triggering a new kernel bug.

I made a couple of small changes to how iovec structs are created:

  • Instead of a mix of mallocs and mmaps, it now only does mmaps, so I don’t have to track the malloc for later freeing.
  • I now limit the maximum number of entries in an iovec to 256 (see the sketch after this list).
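
A rough sketch of what that ends up looking like (simplified, with error handling and the real randomisation elided; not the actual trinity code):

#include <sys/mman.h>
#include <sys/uio.h>
#include <stdlib.h>

#define MAX_IOVECS 256

/* Build an iovec array whose buffers all come from mmap(), so nothing
 * needs to be tracked for free() later; just munmap() the lot. */
static struct iovec *random_iovecs(unsigned int *nr_out)
{
	unsigned int i, nr = (rand() % MAX_IOVECS) + 1;
	struct iovec *iov;

	iov = mmap(NULL, nr * sizeof(*iov), PROT_READ | PROT_WRITE,
		   MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
	if (iov == MAP_FAILED)
		return NULL;

	for (i = 0; i < nr; i++) {
		iov[i].iov_base = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
				       MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
		iov[i].iov_len = 4096;
	}

	*nr_out = nr;
	return iov;
}

Because everything comes from mmap(), cleanup is just a matter of munmapping the buffers and the array itself; there is no malloc bookkeeping to track.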

When I rebased my test box to use the latest trinity code, I noticed that it got ‘stuck’ when it should have exited.
Some profiling ensued, which showed it was spinning in kernel space. We've had bugs like this before, where we weren't marking a page as writable, and were continuously page faulting.

Thanks to the function tracer output, Linus was able to narrow in on a fix pretty quickly.
For reasons I can’t fathom right now, the bug doesn’t reproduce as quickly on 3.14 as it did on 3.15rc2, even though the code is apparently broken there too. Very odd.

Despite all this, still seeing the move_pages alloc failures. More debugging on this tomorrow.

Aside from the above busy-work, I've been trying to get my head around how I'm going to implement something in trinity to ease debugging when things go wrong. A plan is starting to come together, which could lead to interesting features like enhanced statistics gathering, and runtime configuration changes without needing to restart a run. We'll see how that pans out in the coming weeks.

Daily log April 22nd 2014 is a post from: codemonkey.org.uk

April 18, 2014

Weekly Fedora kernel bug statistics – April 18th 2014

  19  20  rawhide  (total)
Open: 103 204 149 (456)
Opened since 2014-04-11 3 14 8 (25)
Closed since 2014-04-11 6 9 6 (21)
Changed since 2014-04-11 6 29 14 (49)

Weekly Fedora kernel bug statistics – April 18th 2014 is a post from: codemonkey.org.uk

April 17, 2014

Daily log April 16th 2014

Added some code to trinity to use random open flags on the fds it opens on startup.

Spent most of the day hitting the same VM bugs as yesterday, or others that Sasha had already reported.
Later in the day, I started seeing this bug after applying a not-yet-merged patch to fix a leak that Coverity had picked up on recently. Spent some time looking into that, without making much progress.
Rounded out the day by trying out latest builds on my freshly reinstalled laptop, and walked into this.

Daily log April 16th 2014 is a post from: codemonkey.org.uk

April 16, 2014

Daily log April 15th 2014

Spent all of yesterday attempting recovery (and failing) on the /home partition of my laptop.
On the weekend, I decided I'd unsuspend it to send an email, and just got a locked-up desktop. The disk IO light was stuck on, but it was completely dead to input; I couldn't switch to a console. Powered it off and back on. XFS wanted me to run xfs_repair. So I did. It complained that there was pending metadata in the log, and that I should mount the partition to replay it first. I tried. It failed miserably, so I re-ran xfs_repair with -L to zero the log. Pages and pages of scrolly text zoomed up the screen.
Then I rebooted and.. couldn't log in any more. Investigating as root showed that /home/davej was now /home/lost+found, and within it were a couple dozen numbered directories containing mostly uninteresting files.

So that’s the story about how I came to lose pretty much everything I’ve written in the last month that I hadn’t already pushed to github. I’m still not entirely sure what happened, but I point the finger of blame more at dm-crypt than at xfs at this point, because the non-encrypted partitions were fine.

Ultimately I gave up, reformatted and reinstalled. Kind of a waste of a day (and a half).

Things haven't been entirely uneventful though:

So there are still some fun VM/FS horrors lurking. Sasha has been hitting a bunch more huge page bugs too. It never ends.

Daily log April 15th 2014 is a post from: codemonkey.org.uk

April 11, 2014

Weekly Fedora kernel bug statistics – April 11th 2014

  19  20  rawhide  (total)
Open: 102 197 143 (442)
Opened since 2014-04-04 3 19 7 (29)
Closed since 2014-04-04 7 13 6 (26)
Changed since 2014-04-04 9 32 10 (51)

Weekly Fedora kernel bug statistics – April 11th 2014 is a post from: codemonkey.org.uk

April 04, 2014

Weekly Fedora kernel bug statistics – April 04 2014

  19  20  rawhide  (total)
Open: 99 186 142 (427)
Opened since 2014-03-28 4 17 9 (30)
Closed since 2014-03-28 12 17 4 (33)
Changed since 2014-03-28 15 29 10 (54)

Weekly Fedora kernel bug statistics – April 04 2014 is a post from: codemonkey.org.uk

Monthly Fedora kernel bug statistics – March 2014

  19  20  rawhide  (total)
Open: 99 186 142 (427)
Opened since 2014-03-01 32 92 19 (143)
Closed since 2014-03-01 156 183 99 (438)
Changed since 2014-03-01 83 111 31 (225)

Monthly Fedora kernel bug statistics – March 2014 is a post from: codemonkey.org.uk

April 02, 2014

why I suck at finishing stuff , or how I learned to stop working and love DisplayPort MST

DisplayPort 1.2 Multi-stream Transport is a feature that allows daisy chaining of DP devices that support MST into all kinds of wonderful networks of devices. It's been on the TODO list for many developers for a while, but the hardware never quite materialised near a developer.

At the start of the week my local Red Hat IT guy asked me if I knew anything about DP MST. It turns out the Lenovo T440s and T540s docks have started to use DP MST, so there is one DP port to the dock, and the dock then has DP->VGA, DP->DVI/DP, and DP->HDMI/DP ports on it, all using MST. So when they bought some of these laptops and plugged two monitors into the dock, it fell back to SST mode and only showed one image. This is not optimal; I'd call it a bug :)

Now I have a damaged-in-transit T440s (the display panel is in pieces) with a dock, and have spent a couple of days with the DP 1.2 spec in one hand (monitor), and a lot of my hair in the other. DP MST has a network topology discovery process that is built on sideband msgs sent over the auxch, which in normal DP is used to read/write a bunch of registers on the plugged-in device. You can then send auxch msgs over the sideband msgs over the auxch to read/write registers on other devices in the hierarchy!

Today I achieved my first goal of correctly encoding the topology discovery message and getting a response from the dock:
[ 2909.990743] link address reply: 4
[ 2909.990745] port 0: input 1, pdt: 1, pn: 0
[ 2909.990746] port 1: input 0, pdt: 4, pn: 1
[ 2909.990747] port 2: input 0, pdt: 0, pn: 2
[ 2909.990748] port 3: input 0, pdt: 4, pn: 3

There are a lot more steps to take before I can produce anything, along with dealing with the fact that KMS doesn't handle dynamic connectors so well. It should make for a fun tangent away from the job I should be doing, which is finishing virgil.

I've ordered another DP MST hub that I can plug into AMD and nvidia GPUs, which should prove useful later, also for doing deeper topologies and producing loops.

Some 4k monitors also use DP MST, as they are really two panels, but I don't have one of them, so unless one appears I'm mostly going to concentrate on the Lenovo docks for now.

March 31, 2014

Linux 3.14 coverity stats

Date         Rev        Outstanding  Fixed  Defect density
Jan/20/2014 v3.13 5096 5705 0.59
Feb/03/2014 v3.14-rc1 4904 5789 0.56
Feb/09/2014 v3.14-rc2 4886 5810 0.56
Feb/16/2014 v3.14-rc3 4816 5836 0.55
Feb/23/2014 v3.14-rc4 4792 5841 0.55
Mar/03/2014 v3.14-rc5 4779 5842 0.55
Mar/10/2014 v3.14-rc6 4755 5852 0.54
Mar/17/2014 v3.14-rc7 4934 6123 0.56
Mar/27/2014 v3.14-rc8 4809 6126 0.55
Mar/31/2014 v3.14 4811 6126 0.55

The big thing that stands out this cycle is that the defect ratio was going down until we hit around 3.14-rc7, and then we got a few hundred new issues. What happened?
Nothing in the kernel, thankfully. This was due to a server-side upgrade to a new version of Coverity, which has some new checkers. Some of the existing ones got improved too, so a bunch of false positives we had sitting around in the database are no longer reported. Unfortunately, the number of new issues was greater than the number of known false positives[1]. In the days following, I did a first sweep through these and closed out the easy ones, bringing the defect density back down.

Note: I stopped logging the 'dismissed' totals. With Coverity 7.0, the number can go backwards: if a file gets deleted, the dismissed issues against that file also disappear. Given this happens fairly frequently, the number isn't really indicative of anything useful.

With the 3.15 merge window now open, I’m hoping a bunch of the queued fixes I sent over the last few weeks get merged, but I’m fully expecting to need to do some resending.

[1] It was actually worse than this, the ratio went back up to 0.57 right before rc7

Linux 3.14 coverity stats is a post from: codemonkey.org.uk

LSF/MM collaboration summit recap.

It’s been a busy week.
A week ago I flew out to Napa, CA for two days of discussions with various kernel people (ok, and some postgresql people too) about all things VM and FS/IO related. I learned a lot. These short, focussed conferences have way more value to me these days than the conferences of years ago, with a bunch of tracks and day after day of presentations.

I gave two sessions relating to testing; there are some good write-ups on lwn. It was more of an extended Q&A than a presentation, so I got a lot of useful feedback (especially afterwards in the hallway sessions). A couple of people asked if trinity was doing certain things yet, which led to some code walkthroughs, and a lot of brainstorming about potential solutions.

By the end of the week I was overflowing with ideas for new things it could be doing, and have started on some of the code for this already. One feature I’d had in mind for a while (children doing root operations) but hadn’t gotten around to writing could be done in a much simpler way, which opens the doors to a bunch more interesting things. I might end up rewriting the current ioctl fuzzing (which isn’t finding a huge amount of bugs right now anyway) once this stuff has landed, because I think it could be doing much more ‘targeted’ things.

It was good to meet up with a bunch of people that I’ve interacted with for a while online and discuss some things. Was surprised to learn Sasha Levin is actually local to me, yet we both had to fly 3000 miles to meet.

Two sessions at LSF/MM were especially interesting outside of my usual work.
The postgresql session where they laid out their pain points with the kernel IO was enlightening, as they started off with a quick overview of postgresql’s process model, and how things interact. The session felt like it went off in a bunch of random directions at once, but the end goal (getting a test case kernel devs can run without needing a full postgresql setup) seemed to be reached the following day.

The second session I found interesting was the “Facebook linux problems” session. As mentioned in the lwn write-up, one of the issues was this race in the pipe code. “This is *very* hard to trigger in practice, since the race window is very small”. Facebook were hitting it 500 times a day. Gave me thoughts on a whole bunch of “testing at scale” problems. A lot of the testing I do right now is tiny in comparison. I do stress tests & fuzz runs on a handful of machines, and most of it is all done by hand. Doing this kind of thing on a bigger scale makes it a little impractical to do in a non-automated way. But given I’ve been buried alive in bugs with just this small number, it has left me wondering “would I find a load more bugs with more machines, or would it just mean the mean time between reproducing issues gets shorter”. (Given the reproducibility problems I’ve had with fuzz testing sometimes, the latter wouldn’t necessarily be a bad thing). More good thoughts on this topic can be found in a post google made a few years ago.

Coincidentally, I’m almost through reading How google tests software, which is a decent book, but with not a huge amount of “this is useful, I can apply this” type knowledge. It’s very focussed on the testing of various web-apps, with no real mention of testing of Android, Chrome etc. (The biggest insights in the book aren’t actually testing related, but more the descriptions of googles internal re-hiring processes when people move between teams).

Collaboration summit followed from Wednesday onwards. One highlight for me was learning that the tracing code has something coming in 3.15/3.16 that I've been hoping for for a while. At last year's kernel summit, Andi Kleen suggested it might be interesting if trinity had some interaction with ftrace to get traces of "what the hell just happened". The tracing changes landing over the next few months will allow that to be a bit more useful. Right now, we can only do that on a global system-wide basis, but with that moving to be per-process, things can get a lot more useful.

Another interesting talk was the llvmlinux session. I haven’t checked in on this project in a while, so was surprised to learn how far along they are. Apparently all the necessary llvm changes to build the kernel are either merged, or very close to merging. The kernel changes still have a ways to go, but this too has improved a lot since I last looked. Some good discussion afterwards about the crossover between things like clang’s static analysis warnings and the stuff I’m doing with Coverity.

Speaking of, I left early on Friday to head back to San Francisco to meet up with Coverity. Lots of good discussion about potential workflow improvements, false positive/heuristic improvements etc. A good first meeting if only to put faces to names I’ve been dealing with for the last year. I bugged them about a feature request I’ve had for a while (that a few people the days preceding had also nagged me about); the ability to have per-subsystem notification emails instead of the one global email. If they can hook this up, it’ll save me a lot of time having to manually craft mails to maintainers when new issues are detected.

Busy, busy week, with so many new ideas I felt like my head was full by the time I got on the plane to head back.
Taking it easy for a day or two, before trying to make progress on some of the things I made notes on last week.

LSF/MM collaboration summit recap. is a post from: codemonkey.org.uk

March 29, 2014

I am the CADT; and advice on NEEDINFOing old bugs en masse

[Attention conservation notice: probably not of interest to lawyers; this is about my previous life in software development.]

Bugsquad barnstar, under MPL 1.1

Someone recently mentioned JWZ's old post on the CADT (Cascade of Attention Deficit Teenagers) development model, and that finally has pushed me to say:

I am the CADT.

I did the bug closure that triggered Jamie's rant, and I wrote the text he quotes in his blog post.[1]

Jamie got some things right, and some things wrong. The main thing he got right is that it is entirely possible to get into a cycle where instead of seriously trying to fix bugs, you just do a rewrite and cross your fingers that it fixes old bugs. And yes, this can particularly happen when you’re young and writing code for fun, where the joy of a from-scratch rewrite can overwhelm some of your other good senses. Jamie also got right that I communicated the issue pretty poorly. Consider this post a belated explanation (as well as a reference for the next time I see someone refer to CADT).

But that wasn’t what GNOME was doing when Jamie complained about it, and I doubt it is actually something that happens very often in any project large enough to have a large bug tracking system (BTS). So what were we doing?

First, as Brendan Eich has pointed out, sometimes a rewrite really is a good idea. GNOME 2 was such a rewrite – not only was a lot of the old code a hairy mess, we decided (correctly) to radically revise the old UI. So in that sense, the rewrite was not a “CADT” decision – the core bugs being fixed were the kinds of bugs that could only be fixed with massive, non-incremental change, rather than “hey, we got bored with the old code”. (Immediately afterwards, GNOME switched to time-based releases, and stuck to that schedule for the better part of a decade, which should be further proof we weren’t cascading.)

This meant there were several thousand old bugs that had been filed against UIs that no longer existed, and often against code that no longer existed or had been radically rewritten. So you’ve got new code and old bugs. What do you do with the old bugs?

It is important to know that open bugs in a BTS are not free. Old bugs impose a cost on developers, because when they are trying to search relevant bugs, old bugs can make it harder to find the things they really should be working on. In the best case, this slows them down; in the worst case, it drives them to use other tools to track the work they want to do – making the BTS next to useless. This violates rule #1 of a BTS: it must be useful for developers, or else it all falls apart.

So why did we choose to reduce these costs by closing bugs filed against the old codebase as NEEDINFO (and asking people to reopen if they were still relevant) instead of re-testing and re-triaging them one-by-one, as Jamie would have suggested? A few reasons:

  • number of triagers v. number of bugs: there were, at the time, around a half-dozen active bug volunteers, and thousands of pre-GNOME 2 bugs. It was simply unlikely that we’d ever be able to review all the old bugs even if we did nothing else.
  • focus on new bugs: new bugs are where triagers and developers are much more likely to be relevant – those bugs are against fresh code; the original filer is much more likely to respond to clarifying questions; etc. So all else being equal, time spent on new bugs was going to be much better for the software than time spent on old bugs.
  • steady flow of new bugs: if you’ve got a small number of new bugs coming in, perhaps you split your time – but we had no shortage of new bugs, nor of motivated bug reporters. So we may have paid some cost (by demotivating some reporters) but our scarce resource (developers) greatly appreciated it.
  • relative burden: with thousands of open bugs from thousands of reporters, it made sense to ask them to test their old bugs against the new code. Reviewing their old bugs was a small burden for each of them, once we distributed it.

So when isn't it a good idea to close old bugs and ask for more information?

  • You're already great at keeping old bugs triaged/relevant: if you have a very small number of old bugs that haven't been touched in a long time, then they aren't putting much of a burden on developers.
  • Slow code turnover: If your development process is such that it is highly likely that old bugs are still relevant (e.g., core has remained mostly untouched for many years, or effective use of TDD has kept the number of accidental new bugs low) this might not be a good idea.
  • No triggering event: In GNOME, there was a big event, plus a new influx of triagers, that made it make sense to do radical change. I wouldn’t recommend this “just because” – it should go hand-in-hand with other large changes, like a major release or important policy changes that will make future triaging more effective.

Relatedly, the team practices mailing list has been discussing good practices for migrating bug tracking systems in the past few days, which has been interesting to follow. I don’t take a strong position on where Wikimedia’s bugzilla falls on this point – Mediawiki has a fairly stable core, and the volume of incoming bugs may make triage of old bugs more plausible. But everyone running a very large bugzilla for an active project should remember that this is a part of their toolkit.

[1] Both had help from others, but it was eventually my decision.

March 28, 2014

VPN versus DNS

For years, I did my best to ignore the problem, but CKS inspired me to blog the curious networking banality, in case anyone has wisdom to share.

The deal is simple: I have a laptop with a VPN client (I use vpnc). The client creates a tun0 interface and some RFC 1918 routes. My home RFC 1918 routes are more specific, so routing works great. The name service does not.

Obviously, if we trust the DHCP-supplied nameserver, it has no work-internal names in it. The stock solution is to let vpnc install an /etc/resolv.conf pointing to work-internal nameservers. Unfortunately this does not work for me, because I have a home DNS zone, zaitcev.lan. The work-internal DNS does not know about that one.

Thus I would like some kind of solution that routes DNS requests according to a configuration. Requests for work-internal namespaces (such as *.redhat.com) would go to the nameservers delivered by vpnc (I think I can make it write something like /etc/vpnc/resolv.conf that does not conflict). Other requests would go to the infrastructure name service, be it a hotel network or the home network. The home network is capable of serving its own private authoritative zones and forwarding the rest. That's the ideal, so how to accomplish it?

I attempted to apply a local dnsmasq, but could not figure out whether it can do what I want, and if so, how.

For now, I have some scripting that caches work-internal hostnames in /etc/hosts. That works, somewhat. Still, I cannot imagine that nobody thought of this problem. Surely, thousands are on VPNs, and some of them have home networks. And... nobody? (I know that a few people just run VPN on the home infrastructure; that does not help my laptop, unfortunately).

UPDATE: Several people commented with interesting solutions. You can count on Mr. robbat2 to be on the bleeding edge and use unbound manually. I went with the NM magic as suggested by Mr. nullr0ute. In F20 you need to edit /etc/NetworkManager/NetworkManager.conf and add "dns=dnsmasq" there. Then NM runs dnsmasq with the following magic /var/run/NetworkManager/dnsmasq.conf:

server=/redhat.com/10.11.5.19
server=/10.in-addr.arpa/10.11.5.19
server=/redhat.com/10.5.30.160
server=/10.in-addr.arpa/10.5.30.160
server=192.168.128.1
server=fd2d:acfb:74cc:1::1

It is exactly the syntax Ewen tried to impart with his comment, but I'm too stupid to add 2 and 2 this way, so I have NM do it.

NM also starts vpnc in such a way that it does not need to damage any of my old hand-made config in /etc/vpnc, which is a nice touch.

See also: bz#842037.

See also: Chris using unbound.

March 21, 2014

Weekly Fedora kernel bug statistics – March 21st 2014

  19  20  rawhide  (total)
Open: 98 167 135 (400)
Opened since 2014-03-14 6 19 2 (27)
Closed since 2014-03-14 6 133 84 (223)
Changed since 2014-03-14 18 46 14 (78)

Weekly Fedora kernel bug statistics – March 21st 2014 is a post from: codemonkey.org.uk

March 19, 2014

Fedora rawhide should have GL 3.3 on radeonsi supported hardware

Enabling OpenGL 3.3 on radeonsi required some patches backported to llvm 3.4. I managed to get some time to do this and rebuilt mesa against the new llvm, so if you have an AMD GPU that is supported by radeonsi, you should now see GL 3.3.

For F20 this isn't an option, as backporting llvm is a bit tricky there, though I'm considering doing a copr that has a private llvm build in it. It might screw up some apps, but for most use cases it should be fine.

March 14, 2014

Weekly Fedora kernel bug statistics – March 14th 2014

  19  20  rawhide  (total)
Open: 96 274 136 (506)
Opened since 2014-03-07 8 18 7 (33)
Closed since 2014-03-07 133 20 6 (159)
Changed since 2014-03-07 79 35 11 (125)

Weekly Fedora kernel bug statistics – March 14th 2014 is a post from: codemonkey.org.uk

Daily log March 13th 2014

High (low?) point of the day was taking delivery of my new remote power switch.
You know that weird PCB chemical smell some new electronics have? Once I got this thing out of the box it smelled so strong I almost gagged. I let it 'air' for a little while, hoping the smell would dissipate. It didn't. Or if it did, I couldn't tell, because now the whole room stunk. Then I made the decision to plug it in anyway. Within about a minute the smell went away. Well, not so much "went away". More like, "was replaced with the smell of burning electronics".
So that’s another fun “hardware destroys itself as soon as I get a hold of it” story, and yet another Amazon return.
(And I guess I’m done with ‘digital loggers’ products).

In non hardware-almost-burning-down-my-house news:

  • More chasing the VM bugs. Specifically the bad rss-counter messages.
  • Wrote some code to fix up a trinity deadlock I introduced. It fixes the problem, but I'm not happy with it, so I haven't committed it yet. Should be done tomorrow.

Daily log March 13th 2014 is a post from: codemonkey.org.uk

March 13, 2014

Daily log March 12th 2014

I’ve been trying to chase down the VM crashes I’ve been seeing. I’ve managed to find ways to reproduce some of them a little faster, but not really getting any resolution so far. Hacked up a script to run a subset of random VM related syscalls. (Trinity already has ‘-g vm’, but I wanted something even more fine-grained). Within minutes I was picking up VM traces that so far I’ve only seen Sasha reporting on -next.

Daily log March 12th 2014 is a post from: codemonkey.org.uk

March 12, 2014

Okay, it's all broken. Now what?

A rant in ;login was making the rounds recently (h/t @jgarzik), which I thought was not all that relevant... until I remembered that Swift Power Calculator has mysteriously stopped working for me. Its creator is powerless to do anything about it, and so am I.

So, it's relevant all right. We're in big trouble even if Gmail kind of works most of the time. But the rant makes no recommendations, only observations, so it's quite unsatisfying.

BTW, it reminds me of a famous preso by Jeff Mogul, "What's wrong with HTTP and why it does not matter". Except Mogul's rant was more to the point. They don't make engineers like they used to, apparently. Also notably, I think, Mogul prompted the development of RESTful improvements. But there's nothing we can do about the excessive thickness of our stacks (that I can see). It's just spiralling out of control.

March 11, 2014

Daily log March 10th 2014

Been hitting a number of VM related bugs the last few days.

The first bug is the one that concerns me most right now, though the 2nd is feasibly something that some non-fuzzer workloads may hit too. Other than these bugs, 3.14rc6 is working pretty well for me.

Daily log March 10th 2014 is a post from: codemonkey.org.uk

March 10, 2014

AVR, Fritzing, Inkscape

[schematic: zag-avr-0-sch]

I suppose everyone has to pass through a hardware phase, and mine is now, for which I implemented an LED blinker with an AVRtiny2313. I don't think it even merits the usual blog laydown. Basically, all it took was following tutorials to the letter.

For the initial project, I figured that learning gEDA would take too much time, so I unleashed my inner hipster and used Fritzing. Hey, it lets you plan breadboards, so there. And well, it was a learning experience and no mistake. Crashes, impossible-to-undo changes, UI elements outside of the screen, everything. Black magic everywhere: I could never figure out how to merge wires, dedicate a ground wire/plane, or edit labels (so all of them are incorrect in the schematic above). The biggest problem was the lack of library support, together with an awful parts editor. Editing schematics in Inkscape was so painful that I resigned myself to doing a piss-poor job, evident in all the crooked lines around the AVRtiny2313. I understand that Fritzing's main focus is the iPad, but this is just at the level of a typical outsourced Windows application.

Inkscape deserves a special mention due to the way Fritzing requires SVG files to be in a particular format. If you load and edit some of those, the grouping defeats Inkscape's features, so one cannot even select elements at times. And editing the raw XML causes the weirdest effects, so it's not like LyX-on-TeX, edit and visualize. At least our flagship vector graphics package didn't crash.

The avr-gcc is awesome though. 100% turnkey: yum install and you're done. Same for avrdude. No muss, no fuss, everything works.

March 08, 2014

Weekly Fedora kernel bug statistics – March 07 2014

  19  20  rawhide  (total)
Open: 225 263 135 (623)
Opened since 2014-02-28 12 29 9 (50)
Closed since 2014-02-28 8 41 8 (57)
Changed since 2014-02-28 18 54 16 (88)

Weekly Fedora kernel bug statistics – March 07 2014 is a post from: codemonkey.org.uk

March 07, 2014

Suddenly, Python Magic

Looking at a review by Solly today, I saw something deeply disturbing. A simplified version that I tested follows:


import unittest

class Context(object):
    def __init__(self):
        self.func = None
    def kill(self):
        self.func(31)

class TextGuruMeditationMock(object):

    # The .run() normally is implemented in the report.Text.
    def run(self):
        return "Guru Meditation Example"

    @classmethod
    def setup_autorun(cls, ctx, dump_with=None):
        ctx.func = lambda *args: cls.handle_signal(dump_with,
                                                   *args)

    @classmethod
    def handle_signal(cls, dump_func, *args):
        try:
            res = cls().run()
        except Exception:
            dump_func("Unable to run")
        else:
            dump_func(res)

class TestSomething(unittest.TestCase):

    def test_dump_with(self):
        ctx = Context()

        class Writr(object):
            def __init__(self):
                self.res = ''

            def go(self, out):
                self.res += out

        target = Writr()
        TextGuruMeditationMock.setup_autorun(ctx,
                                dump_with=target.go)
        ctx.kill()
        self.assertIn('Guru Meditation', target.res)

Okay, obviously we're setting a signal handler, which is a little lambda, which invokes the dump_with, which ... is a class method? How does it receive its self?!

I guess that the deep Python magic occurs in how the method target.go is prepared to become an argument. The only explanation I see is that Python creates some kind of activation record for this, which includes the instance (target) and the method, and that record is the object being passed down as dump_with. I knew that Python did it for scoped functions, where we have a global dict, a local dict, and all that good stuff. But this is different, isn't it? How does it even know that target.go belongs to target? In what part of the Python spec is it described?

UPDATE: Commenters provided hints with the key idea being a "bound method" (a kind of user-defined method).

A user-defined method object combines a class, a class instance (or None) and any callable object (normally a user-defined function).

When a user-defined method object is created by retrieving a user-defined function object from a class, its im_self attribute is None and the method object is said to be unbound. When one is created by retrieving a user-defined function object from a class via one of its instances, its im_self attribute is the instance, and the method object is said to be bound.

Thanks, Josh et al.!

UPDATE: See also Chris' explanation and Peter Donis' comment re. unbound methods gone from py3.