A Quick Introduction to Mistral Usage in TripleO (Newton), for Developers

Since Newton, Mistral has become a central component of the TripleO project, handling many of the back-end operations. I recently gave a few people a short crash course on Mistral, what it is and how we use it, and thought it might be useful to share some of my bag of tricks here as well.

What is Mistral?

It's a workflow service. You describe what you want as a series of steps (tasks) in YAML, and it will coordinate things for you, usually asynchronously.

Link: Mistral overview.
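To make that concrete, here is a minimal, hypothetical workbook using the built-in std.echo action (names are illustrative, only std.echo is a real action):

```yaml
---
version: '2.0'

name: tutorial
description: A minimal workbook containing one direct workflow.

workflows:
  say_hello:
    type: direct
    input:
      - name
    tasks:
      greet:
        # YAQL expressions like <% $.name %> reference the workflow input.
        action: std.echo output="Hello <% $.name %>!"
        on-success: done
      done:
        action: std.echo output="All done."
```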

We are using it for a few reasons:

  • it lets us manage long-running processes (e.g. introspection) and track their state
  • it acts as a common interface/API that is currently used by both the TripleO CLI and UI, thus avoiding duplication, and can also be consumed directly by external non-OpenStack consumers (e.g. ManageIQ).


A workbook contains multiple workflows. (The TripleO workbooks live at https://github.com/openstack/tripleo-common/tree/master/workbooks).

A workflow contains a series of 'tasks', which can be thought of as steps. We use the default 'direct' type of workflow in TripleO, which means tasks are executed in the order written, moving around based on the on-success and on-error values.

Every task calls to an action (or to another workflow), which is where the work actually gets done.

OpenStack services are automatically mapped into actions thanks to the mappings defined in Mistral, so we get a ton of actions for free already.

Useful tip: with the following command you can see locally which actions are available for a given project.

$ mistral action-list | grep $projectname

You can of course create your own actions. Which we do. Quite a lot.

$ mistral action-list | grep tripleo

An execution is what an instance of a running workflow is called, once you have started one.

Link: Mistral terminology (very detailed, with diagrams and examples).

Where the TripleO Mistral workflows live


Let's look at a couple of examples.

A short one to start with, scaling down


It takes some input, starts with the 'delete_node' task and continues on to on-success or on-error depending on the action result.

Note: You can see we always end the workflow with send_message, which is a convention we use in the project. Even if an action fails and moves to on-error, the workflow itself should be successful (a failed workflow would indicate a problem at the Mistral level). We end with send_message because we want to let the caller know what the result was.

How will the consumer get that result? We associate every workflow with a Zaqar queue. This is a TripleO convention, not a Mistral requirement. Each of our workflows takes a queue_name as input, and the clients are expected to listen to the Zaqar socket for that queue in order to receive the messages.
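Putting those conventions together, the scale-down workflow has roughly this shape. This is a simplified sketch reconstructed from the description above, not the exact tripleo-common source; only the tripleo.scale.delete_node and zaqar.queue_post action names are taken from the real workflow:

```yaml
scale_down:
  type: direct
  input:
    - container
    - nodes
    - queue_name: tripleo
  tasks:
    delete_node:
      action: tripleo.scale.delete_node container=<% $.container %> nodes=<% $.nodes %>
      on-success: send_message
      on-error: set_status_failed
    set_status_failed:
      on-success: send_message
      publish:
        status: FAILED
        message: <% task(delete_node).result %>
    send_message:
      # Always the last task, success or failure: tell the caller how it went.
      action: zaqar.queue_post
      input:
        queue_name: <% $.queue_name %>
        messages:
          body:
            type: tripleo.scale.v1.delete_node
            payload:
              status: <% $.get('status', 'SUCCESS') %>
              message: <% $.get('message', '') %>
```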

Another point, about the action itself on line 20: tripleo.scale.delete_node is a TripleO-specific action, as indicated in the name. If you were interested in finding the code for it, you should look at the entry_points in setup.cfg for tripleo-common (where all the workflows live):
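From memory, the relevant entry looks something like this (the mistral.actions group is the real mechanism; treat the exact module and class names as illustrative):

```ini
[entry_points]
mistral.actions =
    tripleo.scale.delete_node = tripleo_common.actions.scale:ScaleDownAction
```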


which would lead you to the code at:


A bit more complex: node configuration


It's "slightly more complex" in that it has a couple more tasks, and it also calls to another workflow (line 426). You can see it starts with a call to ironic.node_list in its first task at line 417, which comes for free with Mistral. No need to reimplement it.

Debugging notes on workflows and Zaqar

Each workflow creates a Zaqar queue, to send progress information back to the client (CLI, web UI, other...).

Sometimes these messages get lost and the process hangs. It doesn't mean the action didn't complete successfully.

  • Check the Zaqar processes are up and running: $ sudo systemctl | grep zaqar (this has happened to me after reboots)
  • Check Mistral for any errored workflow: $ mistral execution-list
  • Check the Mistral logs (executor.log and engine.log are usually where the interesting errors are)
  • Ocata has timeouts for some of the commands now, so this is getting better

Following a workflow through its execution via CLI

This particular example runs fairly fast, so it's more a case of tracing back what happened afterwards.

$ openstack overcloud plan create my-new-overcloud
Started Mistral Workflow. Execution ID: 05d550f2-5d13-4782-be7f-a775a1d86a84
Default plan created

The CLI nicely tells you which execution ID to look for, so let's use it:

$ mistral task-list 05d550f2-5d13-4782-be7f-a775a1d86a84

| ID                                   | Name                            | Workflow name                              | Execution ID                         | State   | State info                   |
| c6e0fef0-4e65-4ee6-9ae4-a6d9e8451fd0 | verify_container_doesnt_exist   | tripleo.plan_management.v1.create_default_ | 05d550f2-5d13-4782-be7f-a775a1d86a84 | ERROR   | Failed to run action [act... |
|                                      |                                 | deployment_plan                            |                                      |         |                              |
| 72c1310d-8379-4869-918e-62eb04530e46 | verify_environment_doesnt_exist | tripleo.plan_management.v1.create_default_ | 05d550f2-5d13-4782-be7f-a775a1d86a84 | ERROR   | Failed to run action [act... |
|                                      |                                 | deployment_plan                            |                                      |         |                              |
| 74438300-8b18-40fd-bf73-62a1d90f71b3 | create_container                | tripleo.plan_management.v1.create_default_ | 05d550f2-5d13-4782-be7f-a775a1d86a84 | SUCCESS | None                         |
|                                      |                                 | deployment_plan                            |                                      |         |                              |
| 667c0e4b-6f6c-447d-9325-ab6c20c8ad98 | upload_to_container             | tripleo.plan_management.v1.create_default_ | 05d550f2-5d13-4782-be7f-a775a1d86a84 | SUCCESS | None                         |
|                                      |                                 | deployment_plan                            |                                      |         |                              |
| ef447ea6-86ec-4a62-bca2-a083c66f96d3 | create_plan                     | tripleo.plan_management.v1.create_default_ | 05d550f2-5d13-4782-be7f-a775a1d86a84 | SUCCESS | None                         |
|                                      |                                 | deployment_plan                            |                                      |         |                              |
| f37ebe9f-b39c-4f7a-9a60-eceb80782714 | ensure_passwords_exist          | tripleo.plan_management.v1.create_default_ | 05d550f2-5d13-4782-be7f-a775a1d86a84 | SUCCESS | None                         |
|                                      |                                 | deployment_plan                            |                                      |         |                              |
| 193f65fb-502a-4e4c-9a2d-053966500990 | plan_process_templates          | tripleo.plan_management.v1.create_default_ | 05d550f2-5d13-4782-be7f-a775a1d86a84 | SUCCESS | None                         |
|                                      |                                 | deployment_plan                            |                                      |         |                              |
| 400d7e11-aea8-45c7-96e8-c61523d66fe4 | plan_set_status_success         | tripleo.plan_management.v1.create_default_ | 05d550f2-5d13-4782-be7f-a775a1d86a84 | SUCCESS | None                         |
|                                      |                                 | deployment_plan                            |                                      |         |                              |
| 9df60103-15e2-442e-8dc5-ff0d61dba449 | notify_zaqar                    | tripleo.plan_management.v1.create_default_ | 05d550f2-5d13-4782-be7f-a775a1d86a84 | SUCCESS | None                         |
|                                      |                                 | deployment_plan                            |                                      |         |                              |

This gives you an idea of what Mistral did to accomplish the goal. You can also map it back to the workflow defined in tripleo-common to follow through the steps and find out what exactly was run. If the workflow stopped too early, this can give you an idea of where the problem occurred.

Side-note about plans and the ERRORed tasks above

As of Newton, information about deployment is stored in a "Plan" which is implemented as a Swift container together with a Mistral environment. This could change in the future but for now that is what a plan is.

To create a new plan, we need to make sure there isn't already a container or an environment with that name. We could implement this as an action in Python, or, since Mistral already has commands to get a container / get an environment, we can be clever about this and reverse the on-error and on-success actions compared to the usual pattern:


If we do get a container back, it means one already exists and so does the plan, so we cannot reuse that name. 'on-success' therefore becomes the error condition.
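As a sketch, the check looks something like this (swift.head_container is one of the auto-generated Swift actions; the task names follow the execution listing above, the rest is illustrative):

```yaml
verify_container_doesnt_exist:
  action: swift.head_container container=<% $.container %>
  # Finding the container means the plan name is already taken...
  on-success: notify_plan_exists
  # ...while a 404 "error" is the good outcome, so the workflow continues.
  on-error: verify_environment_doesnt_exist
```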

I sometimes slightly regret going this way, because it leaves exception tracebacks in the logs, which is misleading when folks go to the Mistral logs for the first time in order to debug some other issue.

Finally I'd like to end all this by mentioning the Mistral Quick Start tutorial, which is excellent. It takes you from creating a very simple workflow to following its journey through the execution.

How to create your own action/workflow in TripleO

Mistral documentation:

In short:

  • Start writing your python code, probably under tripleo_common/actions
  • Add an entry point referencing it to setup.cfg
  • /!\ Restart Mistral /!\ Action code is only picked up when Mistral starts
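A custom action is just a Python class with a run method; in Newton it subclasses mistral.actions.base.Action. Here is a toy sketch, using a stand-in base class so the example is self-contained (real TripleO actions would call OpenStack clients instead):

```python
# Stand-in for mistral.actions.base.Action, so this sketch runs without Mistral.
class Action(object):
    def run(self):
        raise NotImplementedError


class AddNumbersAction(Action):
    """A toy action: Mistral passes the action's input as constructor args."""

    def __init__(self, a, b):
        self.a = a
        self.b = b

    def run(self):
        # The return value becomes the task's result in the workflow.
        return self.a + self.b


print(AddNumbersAction(2, 3).run())
```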

This is summarised in the TripleO common README (personally I put this in a script to easily rerun it all).

Back to deployments: what's in a plan

As mentioned earlier, a plan is the combination of a Swift container and a Mistral environment. In theory this is an implementation detail which shouldn't matter to deployers. In practice, knowing this gives you access to a few more debugging tricks.

For example, the templates you initially provided will be accessible through Swift.

$ swift list $plan-name

Everything else will live in the Mistral environment. This contains:

  • The default passwords (which is a potential source of confusion)
  • The parameter_defaults, a.k.a. overridden parameters (these take priority and would override the passwords above)
  • The list of enabled environments (this looks nicer for plans created from the UI, as they are all munged into one user-environment.yaml file when deploying from CLI - see bug 1640861)
$ mistral environment-get $plan-name

For example, with an SSL-deployment done from the UI:

$ mistral environment-get ssl-overcloud
| Field       | Value                                                                             |
| Name        | ssl-overcloud                                                                     |
| Description | <none>                                                                            |
| Variables   | {                                                                                 |
|             |     "passwords": {                                                                |
|             |         "KeystoneFernetKey1": "V3Dqp9MLP0mFvK0C7q3HlIsGBAI5VM1aW9JJ6c5lLjo=",     |
|             |         "KeystoneFernetKey0": "ll6gbwcbhyAi9jNvBnpWDImMmEAaW5dog5nRQvzvEz4=",     |
|             |         "HAProxyStatsPassword": "NXgvwfJ23VHJmwFf2HmKMrgcw",                      |
|             |         "HeatPassword": "Fs7K3CxR636BFhyDJWjsbAQZr",                              |
|             |         "ManilaPassword": "Kya6gr2zp2x8ApD6wtwUUMcBs",                            |
|             |         "NeutronPassword": "x2YK6xMaYUtgn8KxyFCQXfzR6",                           |
|             |         "SnmpdReadonlyUserPassword": "5a81d2d83ee4b69b33587249abf49cd672d08541",  |
|             |         "GlancePassword": "pBdfTUqv3yxpH3BcPjrJwb9d9",                            |
|             |         "AdminPassword": "KGGz6ApEDGdngj3KMpy7M2QGu",                             |
|             |         "IronicPassword": "347ezHCEqpqhmANK4fpWK2MvN",                            |
|             |         "HeatStackDomainAdminPassword": "kUk6VNxe4FG8ECBvMC6C4rAqc",              |
|             |         "ZaqarPassword": "6WVc8XWFjuKFMy2qP2qqqVk82",                             |
|             |         "MysqlClustercheckPassword": "M8V26MfpJc8FmpG88zu7p3bpw",                 |
|             |         "GnocchiPassword": "3H6pmazAQnnHj24QXADxPrguM",                           |
|             |         "CephAdminKey": "AQDloEFYAAAAABAAcCT546pzZnkfCJBSRz4C9w==",               |
|             |         "CeilometerPassword": "6DfAKDFdEFhxWtm63TcwsEW2D",                        |
|             |         "CinderPassword": "R8DvNyVKaqA44wRKUXEWfc4YH",                            |
|             |         "RabbitPassword": "9NeRMdCyQhekJAh9zdXtMhZW7",                            |
|             |         "CephRgwKey": "AQDloEFYAAAAABAACIfOTgp3dxt3Sqn5OPhU4Q==",                 |
|             |         "TrovePassword": "GbpxyPdnJkUCjXu4AsjmgqZVv",                             |
|             |         "KeystoneCredential0": "1BNiiNQjthjaIBnJd3EtoihXu25ZCzAYsKBpPQaV12M=",    |
|             |         "KeystoneCredential1": "pGZ4OlCzOzgaK2bEHaD1xKllRdbpDNowQJGzJHo6ETU=",    |
|             |         "CephClientKey": "AQDloEFYAAAAABAAoTR3S00DaBpfz4cyREe22w==",              |
|             |         "NovaPassword": "wD4PUT4Y4VcuZsMJTxYsBTpBX",                              |
|             |         "AdminToken": "hdF3kfs6ZaCYPUwrTzRWtwD3W",                                |
|             |         "RedisPassword": "2bxUvNZ3tsRfMyFmTj7PTUqQE",                             |
|             |         "MistralPassword": "mae3HcEQdQm6Myq3tZKxderTN",                           |
|             |         "SwiftHashSuffix": "JpWh8YsQcJvmuawmxph9PkUxr",                           |
|             |         "AodhPassword": "NFkBckXgdxfCMPxzeGDRFf7vW",                              |
|             |         "CephClusterFSID": "3120b7cc-b8ac-11e6-b775-fa163e0ee4f4",                |
|             |         "CephMonKey": "AQDloEFYAAAAABAABztgp5YwAxLQHkpKXnNDmw==",                 |
|             |         "SwiftPassword": "3bPB4yfZZRGCZqdwkTU9wHFym",                             |
|             |         "CeilometerMeteringSecret": "tjyywuf7xj7TM7W44mQprmaC9",                  |
|             |         "NeutronMetadataProxySharedSecret": "z7mb6UBEHNk8tJDEN96y6Acr3",          |
|             |         "BarbicanPassword": "6eQm4fwqVybCecPbxavE7bTDF",                          |
|             |         "SaharaPassword": "qx3saVNTmAJXwJwBH8n3w8M4p"                             |
|             |     },                                                                            |
|             |     "parameter_defaults": {                                                       |
|             |         "OvercloudControlFlavor": "control",                                      |
|             |         "ComputeCount": "2",                                                      |
|             |         "ControllerCount": "3",                                                   |
|             |         "OvercloudComputeFlavor": "compute",                                      |
|             |         "NtpServer": "my.ntp-server.example.com"                                  |
|             |     },                                                                            |
|             |     "environments": [                                                             |
|             |         {                                                                         |
|             |             "path": "overcloud-resource-registry-puppet.yaml"                     |
|             |         },                                                                        |
|             |         {                                                                         |
|             |             "path": "environments/inject-trust-anchor.yaml"                       |
|             |         },                                                                        |
|             |         {                                                                         |
|             |             "path": "environments/tls-endpoints-public-ip.yaml"                   |
|             |         },                                                                        |
|             |         {                                                                         |
|             |             "path": "environments/enable-tls.yaml"                                |
|             |         }                                                                         |
|             |     ],                                                                            |
|             |     "template": "overcloud.yaml"                                                  |
|             | }                                                                                 |
| Scope       | private                                                                           |
| Created at  | 2016-12-02 16:27:11                                                               |
| Updated at  | 2016-12-06 21:25:35                                                               |

Note: 'environment' is an overloaded word in the TripleO world, so be careful. It can refer to a Heat environment, a Mistral environment, specific templates (e.g. TLS/SSL, Storage...), your whole setup, ...

Bonus track

There is documentation on going from zero (no plan, no nodes registered) till running a deployment, directly using Mistral: http://tripleo.org/mistral-api/mistral-api.html.

Also, with the way we work with Mistral and Zaqar, you can switch between the UI and the CLI, or even use Mistral directly, at any point in the process.


Thanks to Dougal for his feedback on the initial outline!


EFI, Linux and other boot-loading fun a.k.a. where's my Grub gone

I've been gaming more recently, on hardware that is decent-in-some-contexts-but-definitely-not-in-this-one which has meant pathetic frame rates as low as 7~12FPS as my save files grew bigger. A kind soul took pity on me and installed a better graphics card in my machine while I wasn't looking - which of course is when everything started going wrong.

I'll gloss over the "Not turning on" part because that was due to mislabelled wires - the real problem began when Windows for some reason picked up that something had changed at boot time, and promptly overwrote the boot loader.

Cannot boot from USB keys

None of my LiveUSB sticks would boot.

This turned out to be due to the device (or the system on it?) not being compatible with EFI. I'm not sure how to make an EFI-compatible Live USB system and didn't need to in the end - if you absolutely need to, enabling CSM mode in the BIOS ("Compatibility Support Module") was useful there, but likely wouldn't have helped with fixing my boot-loader. EFI and legacy OSes shouldn't be dual-booted in parallel - you can read more about this at the beginning of that excellent page.

Side-note: "chroot: cannot execute /bin/sh"

That was because the Live USB stick turned out to be a 32-bit system, while my desktop OS is 64-bit.

Where's my EFI partition anyway

I found an old install CD for Debian testing 7.10 (64 bits) lying around that turned out to have a Rescue mode option.

To prepare the system for the chroot that would fix All My Problems, first I had to figure out what was on which partition. The rescue mode let me mount them one by one and take a peek, though using parted would have been much faster.

# parted /dev/sda print
Model: Blah
Disk /dev/sda: 1000GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system     Name                          Flags
 1      1049kB  420MB   419MB   ntfs            Basic data partition          hidden, diag
 2      420MB   735MB   315MB   fat32           EFI system partition          boot, esp
 3      735MB   869MB   134MB                   Microsoft reserved partition  msftres
 4      869MB   247GB   247GB   ntfs            Basic data partition          msftdata
 6      247GB   347GB   100GB   ext4            debian-root                   msftdata
etc etc etc

So my Debian partition is on /dev/sda6, the EFI stuff is on /dev/sda2. I need to make a Debian chroot and reinstall Grub from there - that's the kind of stuff I learnt last time I broke a lot of things.

Let's chroot and grub!

After selecting the "Execute a shell in Installer environment" option:

# mount /dev/sda6 /mnt
# mount -o bind /dev /mnt/dev
# chroot /mnt

Error: update-grub: device node not found

This one I think happened because I only bind mounted /dev, when you also need things like /dev/pts, /sys, /proc.

# for i in /dev /dev/pts /proc /sys ; do mount -o bind $i /mnt$i ; done

Error: grub-install /dev/sda: cannot find EFI directory

That one was before I figured out I also needed to mount the special EFI boot partition - sda2, as shown in the parted print output above.

# mount /dev/sda2 /mnt/boot/efi

From then on I read about the efibootmgr tool and decided to try and use that instead.

Error: efibootmgr: EFI variables are not supported on this system

Outside the chroot, you need to load the efivars module:

# modprobe efivars

How are things looking by now

# modprobe efivars
# mount /dev/sda6 /mnt
# mount /dev/sda2 /mnt/boot/efi
# for i in /dev /dev/pts /proc /sys ; do mount -o bind $i /mnt$i ; done
# chroot /mnt

Usually I can never remember the mount syntax (does the mount point come first or second?!) but I typed these commands so many times today, I bet I'll remember the syntax for at least a week.

Playing with efibootmgr

I tried to use the following command from the "Managing EFI Boot Loaders for Linux" page but it didn't quite work for me.

# efibootmgr -c -d /dev/sda -p 2 -l \\EFI\\debian\\grubx64.efi -L Linux

While this did add a Linux entry to my F12 BIOS boot menu (yay!), that booted straight into a black screen ("Reboot and Select proper Boot Device or Insert Boot Media" etc etc). Later on I learnt about the efibootmgr --verbose command which shows the difference between the working entry and the non-working one:

# efibootmgr --verbose
Boot0000* debian    HD(2,c8800,96000,0123456789-abcd-1234)File(\EFI\debian\grubx64.efi)
Boot0005  Linux    HD(2,c8800,96000,0123456789-abcd-1234)File(\EFI\redhat\grub.efi)\EFI\debian\grubx64.efi

I'm not quite sure how the path ended up looking like that. It could be a default somewhere, or I'm quite willing to believe I did something wrong - I also made a mistake on the partition number when I first ran the command.

But how did you fix it?!

Despite showing all the options I wanted in the efibootmgr output within the chroot, running grub-install and update-grub multiple times did nothing: I'd still boot straight into Windows or straight into a black screen. The strange thing is that even though only "Windows Boot Manager" and my new "Linux" entry were in the F12 boot menu, the BIOS setup did offer a 'debian' entry (created automatically at install time a long time ago) in the boot ordering options. Moving it around didn't change a thing though.

The efibootmgr man page talks of a "BootNext" option. With the 'debian' entry right in front of me, why not try it? It's entry Boot0000 on my list, therefore:

# efibootmgr -n 0

Ta-dah! I rebooted straight into Grub then Debian. From there, grub-install /dev/sda and update-grub worked just fine.

Things I still don't know

  • Why did this happen in the first place? Can I prevent it from happening again?
  • I'm not sure why the grub install properly worked that last time. Did I miss something when working from the chroot?
  • Where does the "redhat\grub.efi" line come from, and can I amend that?
  • Why does Windows take so long to restart each time, when I don't even log in?


  • Linux on UEFI: A quick installation guide - I found this site incredibly informational.
  • The efibootmgr man page is very nice and contains useful examples.
  • GrubEFIReinstall - I probably would have tried that tool at some point. I postponed because I didn't have an easy way to burn a CD and wasn't sure about how to boot from USB without enabling CSM.
  • Booting with EFI, a blog entry about fallback boot loaders. While this didn't help in my case, I found the article very clear and enjoyed getting an explanation of the context around the problem in addition to the solution.

Punch line: the graphics card wasn't the bottleneck and my frame rate still hovers around 9FPS.


Switching locale (Gnome 3, Fedora 20) and making sure all the menus switch too

In the spirit of immersion, I switched the laptop's language to Japanese; however, most of the Gnome menus remained in English even after switching the System Language altogether, logging out, etc. Another profile on the laptop shows the menus for Activities etc. in Japanese just fine, so it wasn't a missing package. I found the following in ~/.profile, which seems a bit heavy-handed but does do the trick. For future reference:
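Something along these lines, assuming the usual LANG/LC_ALL override (heavy-handed because LC_ALL trumps every per-category setting):

```shell
# Force the whole session to the Japanese locale, overriding everything else.
export LANG=ja_JP.UTF-8
export LC_ALL=ja_JP.UTF-8
```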


Note: on restarting X, one is asked whether to change the names of the default folders like Documents, etc. Personally I find it makes using the CLI a pain and don't recommend it.



I'm taking a year out to travel in Japan on a working holiday visa, starting with a couple of months of studying the language some more. I landed about 2 weeks ago. The school put me at a slightly higher level than I had dared to hope and I am now learning new words, new expressions, new nuances every day and generally having a tremendous time :) I'm exactly where I want to be, doing and learning what I want to.

I'm experimenting with posting pictures on Flickr as I go, we'll see how that works out!

Also, a note for future-self and other folks who are about to travel with a cold: the so-called "ear planes" work great to avoid ear pain on descent. I put them in about 1h30 before landing on the second flight and it was painless, quite a contrast with the previous flight 10 hours earlier. Nasal spray helps a bit for comfort too.




Last week-end I attended FOSDEM for the 7th time. It's kinda strange to say and think - if someone tells me they've been going to this or that open-source conference for 7 years I tend to assume they're hardcore and totally know what they're doing. I go to hang out with cool folks and learn new things. This year's FOSDEM didn't disappoint in that regard!

As usual, the conference was packed and most rooms filled up quickly, but I was happily surprised to see it was still possible to squeeze into some of the more popular rooms regardless. I think many devroom organisers are well aware of the frustration of not being able to get in, and they did a great job of encouraging/demanding that folks use all seats rather than leave spaces in the middle, which really helped (special kudos to the Legal devroom, which was in a smaller room in H). Also, the main conference organisers appear quite good at trying to adjust room sizes based on popularity year to year (e.g. the Mozilla room used to be utterly impossible to get into).

Some of the conference highlights from my perspective:

Identity Crisis: Are we who we say we are?

This was the first keynote on Saturday morning, which I think did a good job of bringing up many possible ambiguities hidden in the "we" we use when contributing to a project. One of the strengths of open-source is that we're quick to say "we" and include everyone, but sometimes it bears more thinking or clarification of who we actually mean by "we" - sometimes two "we" can describe different subgroups of contributors even in the same sentence. Taking the time to think explicitly about who we mean, and to avoid unintended conflicts of interest, is important.


Fog of War - The GNOME Trademark Battle

The story of what was happening in the background during Gnome's battle with Groupon over their trademark last year, told by a Gnome board member and the lawyer who helped them on the case. Interesting insights, particularly thanks to the lawyer's perspective; he also took a guess at what possibly happened in the Groupon lawyers' minds during their risk analysis, and at the consequences (e.g. "Groupon was dealing with an animal they'd never seen before": a charitable org not willing to be silenced or take a big donation). Not a kind reflection on Groupon.


Why Samba moved to GPLv3: Why we moved, what we gained, what we lost.

Emboldened by having managed to get a seat in the Legal devroom, I decided to also stick around for the next talk. I hadn't attended a talk on GPLv3 in a few years and I wasn't to be disappointed. It was a very honest and funny talk - I knew of Jeremy Allison aka the Samba guy, but I didn't know he was such an entertaining speaker. Overall Samba seems very happy with the move to GPLv3: it simplified a lot of things for them, especially in terms of copyright management (some companies are just nasty), and most of the contributors and users they initially lost ended up returning (multiple closed-source vendors being bought out and leaving their customers in the cold likely helped). They felt really let down that the FSF didn't force their own projects to move as well (though I understand that is not the case anymore), and of course the Linux kernel being GPLv2-only is hurtful too. The speaker is convinced that all the scary stuff around GPLv3 is FUD and that everyone should switch to GPLv3+ right now if they don't have to link to v2 code. An audience member did raise an issue/unclear point with the v3 licence, for when a company rents a device to a customer (who doesn't actually own it and thus perhaps shouldn't be allowed to modify it).


Participation metrics at Mozilla: A story of systems and data

For projects that depend as heavily on volunteer contributions as Mozilla does, understanding who the community is made of and where/when people are being lost or leave is really important. The speaker started by showing us some of the ways they tried and failed to measure participation, and what they ended up with. They defined what "participation" means by formalising paths a contributor might take across their systems (e.g. file a bug, comment on a bug, translate a string, etc.) and they extract and map the data they have onto these paths. This also enables them to deduplicate contributor information: for instance, having 100 translators and 300 developers doesn't mean you have 400 contributors, since people can do more than one thing, and it also lets them identify more clearly whether someone is leaving the project altogether or simply moving to another area. Very interesting stuff!

This is work in progress but their current results and reports are available at Are we a million yet.


Maintaining & growing a technical community: Mozilla Developer Network

The other Mozilla talk I attended explored the meaning of community and the motivations behind why people start contributing, why they continue to contribute and how to help folks feel involved and want to contribute. The speaker made some really good points, one that really stuck with me being that contributors ≠ community. It's really important to connect contributors to your community or they will not stick around! The example she used was getting people to contribute at hackathons-like events, but then disappear - as someone who's run such events that certainly rang true, simply showing folks they can make a positive impact easily is not enough to make them come back or feel part of the community.


Retooling Fedora: A Retrospective on Fedora 21 (and looking to 22)

I knew Fedora had been changing their model since the previous release but I hadn't been following closely. This talk clarified the goals and the why, and I was very impressed with the beginning, where the speaker (Matthew Miller, the current Fedora Project Leader) took a really hard look at where distributions are today and why they appear to be becoming less relevant - for instance contrasting the number of open-source projects available on platforms like GitHub with what is actually packaged in the distro. People used to care about getting their software into the major distributions, but it doesn't seem to matter as much nowadays. In that light, the "ring" graph shown toward the end - suggesting that the apps at the outer layer may not need inclusion criteria as strict and stringent as those for the more core OS components - totally makes sense, and the future looks interesting.



I continue to be hugely impressed by how much Mozilla cares about improving the experience for new and existing contributors (impressed but not surprised! Their "Get Involved" page remains excellent, letting you get in touch with real people while showing at a glance all the different ways you can help, and having a mentored-bugs process for new contributors is an awesome step up from simply tagging easy bugs. Keep rocking and showing us all how it's done, Mozilla!)

Videos of the talks should be available in time on the FOSDEM website.


PyCon Ireland 2014

Together with a couple of colleagues, I wrote a short report about PyCon Ireland 2014. Once again the conference was a lot of fun and I'm looking forward to the 2015 edition! :)


DK House Sapporo

I wanted to take a few minutes to give a shout out to DK House in Sapporo, as a great option to get affordable housing for mediumish to longer stays in Japan. I spent a month there in September.

DK house front

They specialise in short-term accommodation, normally starting from 3 months, but if you want to stay for a shorter time like I did, you can get in touch with them in advance and ask if it's ok. I sent my queries in both Japanese and English and they replied in English, so you don't need to be overly concerned about the language if your Japanese isn't up to snuff yet :-)

The facilities are clean and the staff is always really helpful. As I was staying only a few weeks I couldn't get a room with a private bathroom, but it didn't turn out to be a problem. I never had to queue for a shower. There's also a shared kitchen and a convenient laundry room.

The rooms have a desk and a LAN port but no wireless, and I didn't have any major issues with the Internet. As a backup I'd rented a "pocket wifi", and I was glad to have it for network-intensive operations (ahem, devstack) or when there was a bit too much contention. (If you're planning to spend a lot of time at that desk, you may want to mentally prepare yourself for the fact that the chair was not picked for its ergonomic qualities!)

There are some totally optional activities from time to time in the common room, if you want to meet the other residents. You're a few minutes away from the tramway (市電), which takes you straight to the city centre in 15 minutes, 3 konbinis (Lawson, Seven Eleven) and a super tasty ramen place (てつや - I recommend the しょうゆ).

Now if you're used to luxurious super comfortable hotel mattresses and spacious rooms I suppose you may be disappointed.

DK House - Room

Personally I'm really happy with my experience and heartily recommend the place :)


Training at EuroPython 2014: Making your first contribution to OpenStack

OpenStack logo

Last week I ran a 3-hour training on how to get started contributing to OpenStack at EuroPython. The aim was to give a high-level overview of how the contribution process works in the project and guide people through making their first contribution, from account creation to submitting a patch.


The session starts with an extremely fast overview of OpenStack, geared toward giving the participants an idea of the different components and possible areas for contribution. We then go through creating the accounts, why they're all needed, and how to work with DevStack for the people who have installed it. From there we finally start talking about the contribution process itself, some general points on open-source and OpenStack culture then go through a number of ideas for small tasks suitable for a first contribution. After that it's down to the participants to work on something and prepare a patch. Some people chose to file and triage/confirm bugs. The last part is about making sure the patch matches the community standards, submitting it, and talking about what happens next both to the patch and to the participant as a new member of the community.


During the weeks preceding the event, I ran two pilot workshops with small groups (fewer than 10 people) in my local hackerspace, in preparation for the big one in Berlin. That was absolutely invaluable for making the material more understandable, refining the content for items I didn't initially think of covering (e.g. screen, openrc files) and topics that could use more in-depth explanations (e.g. how to find your first task), checking timings, and generally getting a feel for what's reasonably achievable within a 3-hour intro workshop.


I think it went well, despite some issues at the beginning due to lack of Internet connectivity (always a problem during hands-on workshops!). About 70 people had signed up to attend (a.k.a. about 7 times too many); thankfully, other members of the OpenStack community stepped up and offered their help as mentors - thanks again everyone! In the end, about half the participants showed up in the morning, and we lost another dozen to the Internet woes. The people who stayed were mostly enthusiastic and seemed happy with the experience. According to the session etherpad, at least 5 new contributors uploaded a first patch :) Three are merged so far.

Distributing the slides early proved popular and useful. For an interactive workshop with lots of links and references, it's really helpful for people to be able to go back to something they missed or want to check again.


The start of the workshop is a bit lecture-heavy and could be titled "Things I Desperately Wish I Knew When Starting Out," and although there's some quizzes/discussions/demoing I'd love to make it more interactive in the future.

The information requested in order to join the Foundation tends to surprise people, I think because they come at it from the perspective of "I want to submit a patch" rather than "I am preparing to join a Foundation." At the hackerspace sessions in particular (maybe because it was easier to have candid discussions in such a small group), people weren't impressed with being forced to state an affiliation. The lack of an obvious answer for volunteers gave the impression that the project cares more about contributions from companies. "Tog" might make an appearance in the company stats in the future :-)

On the sign-up form, the "Statement of Interest" is intimidating and confusing for some people (I certainly remember being uncertain over mine and what was appropriate, back when I was new and joining the Foundation was optional!). I worked around this after the initial session by offering suggestions/tips for both these fields, and spoke a bit more about their purpose.

A few people suggested I simply tell people to sign up for all these accounts in advance so there's more time during the workshop to work on the contribution itself. It's an option, though a number of people still hit non-obvious issues with Gerrit that are difficult to debug (some we opened bugs for, others we added to the etherpad). During one of the pilot sessions at the hackerspace, 6 of the 7 participants had errors when running git review -s  - I'm still not sure why, as it Worked On My Machine (tm) just fine at the same time.

Overall, I'm glad I did this! It was interesting to extract all this information from my brain, wiki pages and docs and attempt to make it as consumable as possible. It's really exciting when people come back later to say they've made a contribution and that the session helped to make the process less scary and more comprehensible. Thanks to all the participants who took the time to offer feedback as well! I hope I can continue to improve on this.

Leave a comment

Adventures with Steam on Linux Today: Mark of the Ninja doesn't start

I've been getting back into PC gaming for the last couple of months, and that has involved a lot of checking out what Steam on Linux looks like nowadays (i.e. playing lots of games). Most of the time everything works just fine and smoothly, but sometimes there are hiccups and yesterday I was motivated to learn how to debug them. Our story begins: Mark of the Ninja wouldn't start when clicking on the "Play" button from within the Steam client.

For context, I'm running Steam on Fedora 19, 64-bit. I have a separate "Library" folder on another partition where I install the games, instead of Steam's default location in ~/.local/share.

Running the game from the command-line

Launching from the Steam client gives me zero information, just a brief black screen, so I thought I would see what happens when attempting to launch the game from the command-line.

jpichon@localhost:~/games/steam/SteamApps/common/mark_of_the_ninja/bin$ ./ninja.sh 
dlopen failed trying to load:
/home/jpichon/.local/share/Steam/ubuntu12_32/steamclient.so with error:
libtier0_s.so: cannot open shared object file: No such file or directory
[S_API FAIL] SteamAPI_Init() failed; Sys_LoadModule failed to load: /home/jpichon/.local/share/Steam/ubuntu12_32/steamclient.so
[S_API FAIL] SteamAPI_Init() failed; unable to locate a running instance of Steam, or a local steamclient.dll.
./ninja.sh: line 3:  6477 Segmentation fault      (core dumped) ./ninja-bin32

Note that the Steam client must be started in order to even get that far. That library does exist at that location, so let's see what's preventing it from being loaded:

$ ldd /home/jpichon/.local/share/Steam/ubuntu12_32/steamclient.so
    linux-gate.so.1 =>  (0xf77cb000)
    libtier0_s.so => not found
    libvstdlib_s.so => not found
    librt.so.1 => /lib/librt.so.1 (0xf67f4000)
    libX11.so.6 => /lib/libX11.so.6 (0xf66ba000)
    libusb-1.0.so.0 => /lib/libusb-1.0.so.0 (0xf66a1000)
    libopenal.so.1 => /lib/libopenal.so.1 (0xf664a000)
    libpulse.so.0 => /lib/libpulse.so.0 (0xf65fa000)
    libgobject-2.0.so.0 => /lib/libgobject-2.0.so.0 (0xf65aa000)
    libglib-2.0.so.0 => /lib/libglib-2.0.so.0 (0xf647b000)
    libdbus-glib-1.so.2 => not found
    libnm-glib.so.4 => not found
    libnm-util.so.2 => not found
    libudev.so.0 => not found
    libm.so.6 => /lib/libm.so.6 (0xf6437000)
    libdl.so.2 => /lib/libdl.so.2 (0xf6432000)

A number of these libraries already exist in Steam's ubuntu12_32 directory. Let's add it to our library path.

$ export LD_LIBRARY_PATH=/home/jpichon/.local/share/Steam/ubuntu12_32:/home/jpichon/.local/share/Steam/linux32
jpichon@localhost:~/games/steam/SteamApps/common/mark_of_the_ninja/bin$ ldd /home/jpichon/.local/share/Steam/ubuntu12_32/steamclient.so
    linux-gate.so.1 =>  (0xf7749000)
    libtier0_s.so => /home/jpichon/.local/share/Steam/ubuntu12_32/libtier0_s.so (0xf676b000)
    libvstdlib_s.so => /home/jpichon/.local/share/Steam/ubuntu12_32/libvstdlib_s.so (0xf6727000)
    libdbus-glib-1.so.2 => not found
    libnm-glib.so.4 => not found
    libnm-util.so.2 => not found
    libudev.so.0 => not found

Yup, that does seem to help. Let's add the rest:

$ export LD_LIBRARY_PATH=/home/jpichon/.local/share/Steam/ubuntu12_32:/home/jpichon/.local/share/Steam/linux32:
$ ldd /home/jpichon/.local/share/Steam/ubuntu12_32/steamclient.so | grep not

Excellent! Let's see if the game can run from the CLI now:

$ ./ninja-bin32 
[S_API FAIL] SteamAPI_Init() failed; no appID found.
Either launch the game from Steam, or put the file steam_appid.txt containing the correct appID in your game folder.
Segmentation fault (core dumped)

How to find a Steam appID?

That one's easy to find a solution for: you can either look at the ID in the store URL as linked earlier, or check out steamdb. Let's create a file with the correct ID in that directory and try again.
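
(Rather than opening an editor, the file can also be created straight from the shell - 214560 being Mark of the Ninja's appID, as the log output below confirms:)

```shell
# Put the game's appID (214560 for Mark of the Ninja) in steam_appid.txt,
# in the same directory as the game binary:
echo 214560 > steam_appid.txt
cat steam_appid.txt
```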

$ emacs -nw steam_appid.txt
jpichon@localhost:~/games/steam/SteamApps/common/mark_of_the_ninja/bin$ ./ninja-bin32 
Setting breakpad minidump AppID = 214560
Steam_SetMinidumpSteamID:  Caching Steam ID:  76561198074768941 [API loaded no]
ERROR: Missing required OpenGL extensions.
ERROR: Missing required OpenGL extensions.
ERROR: Missing required OpenGL extensions.
ERROR: Missing required OpenGL extensions.

Missing required OpenGL extensions

At first I thought that was it - my laptop simply wasn't powerful enough to play the game. But fear not, ArchLinux came to the rescue and thanks to them I learnt about the handy -enablelog flag for the game.

$ ./ninja-bin32 -enablelog
$ less ~/.klei/ninja/log/rendering.log 
[16:34.09] (4144580416) EXT_texture_compression_s3tc required

The solution is to install the libtxc_dxtn package (available in rpmfusion-free) and/or set force_s3tc_enable=true as an environment variable (discovered in a cached version of the developer's official forum, as it's currently showing blank pages for me).
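
Putting the two fixes together, a minimal sketch (package and repository names as on my Fedora 19 setup; force_s3tc_enable is a Mesa environment variable):

```shell
# Fix 1: install the S3TC texture compression library
# (on Fedora, from the rpmfusion-free repository):
#   sudo yum install libtxc_dxtn

# Fix 2 (and/or): force-enable S3TC in Mesa for this session,
# then relaunch the game (./ninja-bin32 -enablelog):
export force_s3tc_enable=true
echo "$force_s3tc_enable"
```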

I think I had to restart X to make sure the new library was picked up correctly, and then success!

Additional notes: the game still requires the Steam client to be running, and the Steam overlay won't work. However, your Steam status will correctly show as in-game, and the played time etc. should update correctly.

Starting the game from Steam

Unfortunately, starting the game from Steam still didn't work, and I also happen to quite like the overlay especially for games that don't react well to Alt-tabbing. I modified the ninja.sh script to add the new paths and environment variables, with no luck.

To help with troubleshooting: right-click on the game name, go to Properties, and there is a "Set launch options" button. There we can add the friendly -enablelog flag discovered earlier. Trying and failing to launch the game again gives us some helpful logs in the same location as before, in ~/.klei.

[17:25.42] (4144023360) Failed to CreateDevice
[17:25.42] (4144023360) KGraphics failed to initialize.

[17:25.42] (4144023360) EXT_texture_compression_s3tc required

Sadly, the same problem as before - it turns out ninja.sh is likely not used at all when launching from Steam so the extra environment variables are not being picked up.

If Steam isn't using ninja.sh, how can I find out what it is using and if I can change it?

In the end, installing libtxc_dxtn.i686 (alongside the .x86_64 version) resolved the problem. I'm not sure why the game insists on using 32-bit libraries when articles around the web make it clear it supports 64-bit, but either way that did the trick and the game now behaves correctly like any other Steam game.

I'm still somewhat unhappy about that last part because it was more guesswork than debugging, and I don't feel equipped to properly gather information next time a similar issue occurs. How can I know which binary / path / file Steam is trying to launch and with what flags?

Hopefully, as I continue my Steam journey, this will have been helpful to someone else. Happy gaming!

Bonus track: the game is sloooow

After all this, it turns out my laptop is indeed a bit underpowered for this particular game. Deactivating blur, bloom and displacement in the options helped, and so did greatly lowering the resolution (windowed mode would help too, but it became pretty much unreadable to me then, so I favoured fullscreen 640x480 instead. Your tolerance levels may vary!)


Train the Trainer course FETAC Level 6

Last week I participated in a 3-day "Train the Trainer" course at Hibernian College in Dublin. My goals were to get a better understanding of how to teach adults and learn how to design and deliver effective training sessions, in order to improve my technical courses and workshops and teach better. It's something I had wanted to do for a couple of years and I'm glad I finally jumped in and did it! I found the course both very enjoyable and very useful.

Choosing the course

Originally I was waiting for Engineers Ireland to put their 5-day "Train the Trainer" course back on their CPD calendar but after contacting them, it turns out they are not planning on running it again in the short term.

Trawling the interwebs, there are a lot of training providers out there and it can be difficult to figure out which ones are legit (perhaps I was lucky!). I decided to go with a FETAC-approved course even though I don't truly need the certification at this stage, because I figured that if a course is approved to offer a government-backed certification, it'll be somewhat serious. After that, it was down to the luck of SEO and how helpful people were when I requested information. The Hibernian folks answered all my questions promptly and were lovely to deal with, both by email and face-to-face.

The course

The advantage of a 3-day course is that you don't need to take so much time off work. The disadvantage is that it is pretty intense! I would recommend having a couple of hours available in the evening for homework, particularly on day 2. The course I did was taught by Maura O'Toole, who also happens to have some background in computing from early in her career, which made for a few funny stories. As a professional trainer with a couple of decades of experience, she had a lot of anecdotes to illustrate her content and advice and make them memorable. I also liked that although we did go through all the FETAC material, she made sure to contrast it with how the real world works whenever relevant, based on her own knowledge and experience.

It must be a somewhat stressful course to teach because as you explain how to do things, people are analysing what you're doing and checking how it matches against what you are saying - and will call you out on any discrepancy!

The first day is the most intense, I think, which is probably just as well since people come in full of energy and expectations. I learnt a ton right from the icebreaker and icebreaker analysis - not only about training in general but also things I specifically want to try in my next course. That got me enthused for the day! By day 3 it was more difficult to keep the energy levels very high, despite the interest still being there.

The class size is kept small, which is actually a FETAC requirement (max 8 people) and makes for a nice environment, particularly as people need to get up and present quite regularly. The participants' backgrounds were quite varied and we all heard presentations about horticulture, clinical trials, powerboat exams, TV production, programming...


The assessment is in two parts: a "skills demonstration" video where you spend 10 minutes - exactly 10 minutes - showing off your mad training/presentation skills, and a ~2600-word written assignment showing how you would prepare, design and deliver (or have already delivered) a particular training session, to return within 4 weeks of completing the course. You need a pass mark (40%) in each component to get the certificate.

The requirements for the video are such that they contradict a lot of what is learnt during the course. You shouldn't really be interacting with your students in the video (it's about showing your skills) - and it would probably be quite fake anyway. It's important to stick closely to the time. It was strange to pretend to teach programming with slides.

I still did my best though - I couldn't help wanting to try and plant the idea of maybe learning programming in the heads of my fellow trainees :-) They kindly indulged me. No one was particularly fond of the camera, and the atmosphere was really supportive during the day, no matter the number of retakes.

According to the instructor, we all passed the skills demonstration (yay!), although the videos still need to go to the external FETAC examiner to confirm the final grades.

It was strange for me to go from nothing to doing a graded presentation on camera within 3 days. I'm used to preparing and rehearsing and rererererehearsing a lot more than that. It's probably good to just get it out of the way quickly, though perfectionists who want to shoot for the highest grades (ahem) will find it weird. I thought it was good.

Material-wise I was able to reuse content from my previous courses, so that was one less concern in terms of preparation; I only had to adapt a little. People who have never taught or trained before may have a bit of a rougher time (then again, if they're deeply knowledgeable on a topic, maybe not). Likewise, if you're not comfortable with public speaking (as in, at the level of standing up and introducing yourself/talking shite with a handful of supportive people in the room, without having a panic attack) I would carefully consider whether to attend such a fast-paced course.

A few things I learnt

It's comforting when best practices are described and this happens to match how I've been doing things :-) (For instance my way of doing training evaluation is probably not too crap).

It dawned on me at some point during day 2 that what I was trying to fix - the way my sessions are designed - is not my main problem. My main issue is that the objectives for my courses are not defined clearly - I kinda try to fit in as much as I can without overwhelming people, within the time I have available. Working backwards from the results I want (using the useful "Training Analysis Framework" I now possess!) would allow me to clarify exactly what I want to achieve, which will then help me organise and (re)arrange my sessions as needed, rather than follow chronological order. As a very important bonus point, it will help me write course descriptions that are more specific and should help "weed out" the people the course is not designed for (i.e. non-beginners). I am really happy about this; it is a big problem each time.

I also learnt that what I thought was my "happy smiley presentation face" is actually my "serious face." Must calibrate better! :-)

There'll be a lot more once I go through my notes again, but the fact that I was trying to fix the wrong thing was a huge revelation and I'm really happy I figured it out. Sadly I got sick right after the course (better than being sick for the filming, I guess!) and I haven't had a chance to sort out the rest of my notes - or, er, start on the assignment.


I have a couple of weeks to complete the assessment. Hopefully I can get a big chunk of it done over the week-end, once the cold is gone. From what I understand, it is quite time-consuming and the instructor suggested doing it for a course we're actually planning on teaching so that the output is useful and usable, rather than a simple academic exercise. I plan to use my intro to programming course for this, except on a different timeline (I'll imagine it's taught over a couple of days rather than in the evening - and who knows, if I'm inspired I might just go and try teaching it in that format too!)

After that, I want to schedule the course and teach again in Tog. I "paused" teaching because I felt my course wasn't good enough and I wanted to overhaul it first. As with many "let's redo it from scratch!" projects, it's just meant I haven't taught in over a year now... So, time to fix this: as I refamiliarise myself with the course, the only major change I'll make the first time around will be to update the examples to Python 3. It's time!

Updated in May to add: I passed! :) Let's go and teach all the things!

