OpenStack Client in Queens - Notes from the PTG

Here are some notes about the OpenStack Client, taken while dropping in and out of the room during the OpenStack PTG in Denver a couple of weeks ago.


The original plan was simply to get rid of deprecated code, change a few names here and there, and keep compatibility-breaking changes to a minimum.

However, shade may now adopt the SDK and move some of shade's contents into it. Then shade would consume the SDK, and OSC would consume it as well. That would be pretty clean and easy to use, but it would mean major breaking changes for OSC4. OSC would become a shim layer over osc-lib. The plugin interface is going to change because loading is slow: every command currently requires loading all of the plugins, which accounts for over half of the start-up time even though the commands themselves load quickly. (There will be more communication once we understand what the new plugin interface will look like.) OSC4 would rip out global argument processing and adopt os-client-config (a breaking change). It would also adopt the SDK and stop using the individual client libraries.

Note that this may all change depending on how the SDK situation evolves.

From the end-user perspective, some option names will change. There is some old cruft left around for compatibility reasons that will disappear (e.g. "ip floating" will be gone; it changed to "floating ip" a year ago). The column output will handle structured data better, and some of this is already committed to the osc4 feature branch.

The order of commands will not be changed.

For authentication, the behaviour may change a bit between the CLI and clouds.yaml. os-client-config came along and changed a few things, notably with regard to precedence. The OSC way of doing things will be removed and replaced with OCC's.

Best effort will be made not to break scripts. The "configuration show" command shows your current configuration but not where it comes from - it's a bit hard to do because of all the merging of parameters going on.

The conversation continued about auth, how shade uses adapters and may change the SDK to use them as well: would sessions or adapters make the most sense? I had to attend another session and missed the discussion and conclusions.

Command aliases

There was a long discussion around command aliases, as some commands are very long to type (e.g. healthmonitor). It was very clear that OSC does not want to get into the business of managing aliases itself (maintaining a master list of collisions, etc.), so it would be up to individual plugins. There could be an individual .osc config file that does the short-to-long name mapping, similar to a shell alias. It shouldn't be part of the official plugin (otherwise, "why don't we just use those names to begin with?"), but it could be another plugin that sets up alias mappings to the short names, or a second set of entry points, or a "list of shortcuts we found handy" included in the documentation. Perhaps there should be a community-wide discussion about this.
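
In the meantime, the shell-alias workaround is already available to anyone today. A sketch (the short name here is entirely made up, not an OSC feature):

```shell
# Hypothetical user-defined shortcut, e.g. in ~/.bashrc - a plain
# shell alias mapping a short name to the long command:
alias oshm='openstack loadbalancer healthmonitor'
# after which "oshm list" runs "openstack loadbalancer healthmonitor list"
```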

Collisions are to be managed by users, not by OSC. Having one master list to manage the initial set of keywords is already an unfortunate compromise.

Filtering and others

It's not currently possible to do filtering on lists, or any kind of complex filtering. The recommendation - and what people currently do - is to output JSON and pipe it through jq to do what they need. The documentation should be extended to show how to do this.
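
For example, a pipeline along these lines (hedged: the exact format flag and column names depend on the OSC version and command, and jq is a separate tool to install):

```shell
# Illustrative only: print the names of servers in ACTIVE state.
# "-f json" asks OSC for JSON output; jq then does the filtering.
openstack server list -f json \
  | jq -r '.[] | select(.Status == "ACTIVE") | .Name'
```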

At the moment filtering varies wildly between APIs and none of them are very expressive, so there isn't a lot OSC can do.


TripleO Deep Dive: Internationalisation in the UI

Yesterday, as part of the TripleO Deep Dives series I gave a short introduction to internationalisation in TripleO UI: the technical aspects of it, as well as a quick overview of how we work with the I18n team.

You can catch the recording on BlueJeans or YouTube, and below is a transcript.


Life and Journey of a String

Internationalisation was added to the UI during Ocata - just a release ago. Florian implemented most of it and did the lion's share of the work, as can be seen on the blueprint if you're curious about the nitty-gritty details.

Addition to the codebase

Here's an example patch from during the transition. On the left you can see how things were hard-coded, and on the right you can see the new defineMessages() interface we now use. Obviously, new patches should simply look like the right-hand side nowadays.

The defineMessages() dictionary requires a unique id and default English string for every message. Optionally, you can also provide a description if you think there could be confusion or to clarify the meaning. The description will be shown in Zanata to the translators - remember they see no other context, only the string itself.

For example, a string might sound active, as if it were related to an action or button, but actually be a descriptive help string. Some expressions are also known to be confusing in English - "provide a node" has been the source of multiple discussions, on the list and in person - so we might as well pre-empt questions and offer additional context to help the translators decide on an appropriate translation.
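
To make that concrete, here's a sketch of a message descriptor as passed to defineMessages() (react-intl); the id, string and description are all made up for illustration:

```javascript
// Shape of one entry in the defineMessages() dictionary:
const messages = {
  provideNode: {
    // unique id, namespaced by component
    id: 'NodesPage.provideNode',
    // default English string
    defaultMessage: 'Provide Node',
    // optional context for translators, shown in Zanata
    description: 'Button label; "provide" means make the node available for deployment'
  }
};
```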

Extraction & conversion

Now we know how to add an internationalised string to the codebase - how do these get extracted into a file that will be uploaded to Zanata?

All of the following steps are described in the translation documentation in the tripleo-ui repository. Assuming you've already run the installation steps (basically, npm install):

$ npm run build

This does a lot more than just extracting strings - it prepares the code for being deployed in production. Once this ends you'll be able to find your newly extracted messages under the i18n directory:

$ ls i18n/extracted-messages/src/js/components

You can see the directory structure is kept the same as the source code. And if you peek into one of the files, you'll note the content is basically the same as what we had in our defineMessages() dictionary:

$ cat i18n/extracted-messages/src/js/components/Login.json
[
  {
    "id": "UserAuthenticator.authenticating",
    "defaultMessage": "Authenticating..."
  },
  {
    "id": "Login.username",
    "defaultMessage": "Username"
  },
  {
    "id": "Login.usernameRequired",
    "defaultMessage": "Username is required."
  }
]

However, JSON is not a format that Zanata understands by default. I think the latest version we upgraded to, or perhaps the next one, has some support for it, but since there is no i18n JSON standard it's somewhat limited. In open-source software projects, po/pot files are generally the standard to go with.

$ npm run json2pot

> tripleo-ui@7.1.0 json2pot /home/jpichon/devel/tripleo-ui
> rip json2pot ./i18n/extracted-messages/**/*.json -o ./i18n/messages.pot

> [react-intl-po] write file -> ./i18n/messages.pot ✔️

$ cat i18n/messages.pot 
msgid ""
msgstr ""
"POT-Creation-Date: 2017-07-07T09:14:10.098Z\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"MIME-Version: 1.0\n"
"X-Generator: react-intl-po\n"

#: ./i18n/extracted-messages/src/js/components/nodes/RegisterNodesDialog.json
#. [RegisterNodesDialog.noNodesToRegister] - undefined
msgid "No Nodes To Register"
msgstr ""

#: ./i18n/extracted-messages/src/js/components/nodes/NodesToolbar/NodesToolbar.json
#. [Toolbar.activeFilters] - undefined
#: ./i18n/extracted-messages/src/js/components/validations/ValidationsToolbar.json
#. [Toolbar.activeFilters] - undefined
msgid "Active Filters:"
msgstr ""

#: ./i18n/extracted-messages/src/js/components/nodes/RegisterNodesDialog.json
#. [RegisterNodesDialog.addNew] - Small button, to add a new Node
msgid "Add New"
msgstr ""

#: ./i18n/extracted-messages/src/js/components/plan/PlanFormTabs.json
#. [PlanFormTabs.addPlanName] - Tooltip for "Plan Name" form field
msgid "Add a Plan Name"
msgstr ""

This messages.pot file is what will be automatically uploaded to Zanata.

Infra: from the git repo, to Zanata

The following steps are done by the infrastructure scripts. There's infra documentation on how to enable translations for your project; in our case, as the first internationalised JavaScript project, we had to update the scripts a little as well. This is useful to know if an issue happens with the infra jobs; debugging will probably bring you here.

The scripts live in the project-config infra repo and there are three files of interest for us:

In this case, the file of interest to us simply sets up the project on line 76, then sends the pot file up to Zanata on line 115.

What does "setting up the project" entail? It's a function that pretty much runs the steps we talked about in the previous section, and also creates a config file to talk to Zanata.

Monitoring the post jobs

Post jobs run after a patch has already merged - usually to upload tarballs where they should be, update the documentation pages, etc., and also to upload message catalogues to Zanata. Being a 'post' job, however, means that if something goes wrong there is no notification on the original review, so it's easy to miss.

Here's the OpenStack Health page to monitor 'post' jobs related to tripleo-ui. Scroll to the bottom - hopefully tripleo-ui-upstream-translation-update is still green! It's good to keep an eye on it although it's easy to forget. Thankfully, AJaeger from #openstack-infra has been great at filing bugs and letting us know when something does go wrong.

Debugging when things go wrong: an example

We had a couple of issues whereby a line break got introduced into one of the strings, which works fine in JSON but breaks our pot file. If you look at the content from the bug (the full logs are no longer accessible):

2017-03-16 12:55:13.468428 | + zanata-cli -B -e push --copy-trans False
2017-03-16 12:55:15.391220 | [INFO] Found source documents:
2017-03-16 12:55:15.391405 | [INFO]            i18n/messages
2017-03-16 12:55:15.531164 | [ERROR] Operation failed: missing end-quote

You'll notice the first line is the last function we call in the script; for debugging that gives you an idea of the steps to follow to reproduce. The upstream Zanata instance also lets you create toy projects, if you want to test uploads yourself (this can't be done directly on the OpenStack Zanata instance.)

This particular newline issue has popped up a couple of times already. We're treating it with band-aids at the moment, ideally we'd get a proper test on the gate to prevent it from happening again: this is why this bug is still open. I'm not very familiar with JavaScript testing and haven't had a chance to look into it yet; if you'd like to give it a shot that'd be a useful contribution :)
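
One possible shape for such a gate check, as a sketch (this is hypothetical, not the actual CI job; it assumes each extracted file is a JSON array of {id, defaultMessage} objects and that jq is available):

```shell
# Flag any extracted message whose defaultMessage contains a literal
# newline, since that is what later breaks pot file generation.
for f in i18n/extracted-messages/src/js/components/*.json; do
  if jq -e '[.[].defaultMessage | contains("\n")] | any' "$f" >/dev/null; then
    echo "Literal newline found in $f"
  fi
done
```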

Zanata, and contributing translations

The translators do their work on the OpenStack Zanata instance. Here's the page for tripleo-ui: you can see there is one project per branch (stable/ocata and master, for now). Sort by "Percent Translated" to see the languages currently translated. Here's an example of the translator's view, for Spanish: you can see the English string on the left, and the translator fills in the right side. No context! Just strings.

At this stage of the release cycle, the focus would be on 'master,' although it is still early to do translations; there is a lot of churn still.

If you'd like to contribute translations, the I18n team has good documentation about how to get started. The short version: sign up on Zanata, request to join your language team, and once you're approved - you're good to go!

Return of the string

Now that we have our strings available in multiple languages, it's time for another infra job to kick in and bring them into our repository: we pull the po files from Zanata, convert them to JSON, then do a git commit that will be proposed to Gerrit.

The cleanup step does more than it might seem. It checks if files are translated over a certain ratio (~75% for code), which avoids adding new languages when there might only be one or two words translated (e.g. someone just testing Zanata to see how it works). Switching to your language and yet having the vast majority of the UI still appear in English is not a great user experience.

In theory, files that were added but are now below 40% should get automatically removed, however this doesn't quite work for JavaScript at the moment - another opportunity to help! Manual cleanups can be done in the meantime, but it's a rare event so not a major issue.

Monitoring the periodic jobs

Zanata is checked once a day, every morning; there is an OpenStack Health page for this as well. You can see there are two jobs at the moment (hopefully green!), one per branch: tripleo-ui-propose-translation-update and tripleo-ui-propose-translation-update-ocata. The job should run every day even if there are no updates - it simply means there might not be a git review proposed at the end.

We haven't had issues with the periodic job so far, though the debugging process would be the same: figure out based on the failure if it is happening at the infra script stage or in one of our commands (e.g. npm run po2json), try to reproduce and fix. I'm sure super-helpful AJaeger would also let us know if he were to notice an issue here.

Automated patches

You may have seen the automated translation updates pop up on Gerrit. The commit message has some tips on how to review these: basically, don't agonise over the translation contents, as problems there should be handled in Zanata anyway; just make sure the format looks good and is unlikely to break the code. A JSON validation tool runs during the infra prep step in order to "prettify" the JSON blob and limit the size of the diffs, so once the patch makes it out to Gerrit we know the JSON is at least well-formed.

Try to review these patches quickly, out of respect for the translators' work. It's not very nice to spend a lot of time translating a project and yet not have your work included because no one bothered to merge it :)

A note about new languages...

If the automated patch adds a new language, there'll be an additional step required after merging the translations in order to enable it: adding a string with the language name to a constants file. Until recently, this took 3 or 4 steps - thanks to Honza for making it much simpler!

This concludes the technical journey of a string. If you'd like to help with i18n tasks, we have a few related bugs open. They go from very simple low-hanging-fruits you could use to make your first contribution to the UI, to weird buttons that have translations available yet show in English but only in certain modals, to the kind of CI resiliency tasks I linked to earlier. Something for everyone! ;)

Working with the I18n team

It's really all about communication. Starting with...

Release schedule and string freezes

String freezes are noted on the main schedule but tend to fit the regular cycle-with-milestones work. This is a problem for a cycle-trailing project like tripleo-ui as we could be implementing features up to 2 weeks after the other projects, so we can't freeze strings that early.

There were discussions at the Atlanta PTG around whether the I18n team should care at all about projects that don't respect the freeze deadlines. That would have made it impossible for projects like ours to ever make it onto the I18n official radar. The compromise was that cycle-trailing projects should have an I18n cross-project liaison who communicates with the I18n PTL and team to inform them of deadlines, and that they ignore the Soft Freeze and only do a Hard Freeze.

This will all be documented under an i18n governance tag; while waiting for it the notes from the sessions are available for the curious!

What's a String Freeze again?

The two are defined on the schedule: soft freeze means not allowing changes to strings, as it invalidates the translator's work and forces them to retranslate; hard freeze means no additions, changes or anything else in order to give translators a chance to catch up.

When we looked at Zanata earlier, there were translation percentages beside each language: the goal is always to reach 100%. If we keep adding new strings, the goalposts keep moving, which is discouraging and unfair.

Of course there's also an "exception process" when needed, to ask for permission to merge a string change with an explanation or at least a heads-up, by sending an email to the openstack-i18n mailing list. Not to be abused :)

Role of the I18n liaison

...Liaise?! Haha. The role is defined briefly on the Cross-Projects Liaison wiki page. It's much more important toward the end of the cycle, when the codebase starts to stabilise: there are fewer changes, and translators start their work so it can be included in the release.

In general it's good to hang out on the #openstack-i18n IRC channel (very low traffic), attend the weekly meeting (it alternates times), be available to answer questions, and keep the PTL informed of the I18n status of the project. In the case of cycle-trailing projects (quite a new release model still), it's also important to be around to explain the deadlines.

A couple of examples having an active liaison helps with:

  • Toward the end or after the release, once translations into the stable branch have settled, the stable translations get copied into the master branch on Zanata. The strings should still be fairly similar at that point and it avoids translators having to re-do the work. It's a manual process, so you need to let the I18n PTL know when there are no longer changes to stable/*.
  • Last cycle, because the cycle-trailing status of tripleo-ui was not correctly documented, a Zanata upgrade was planned right after the main release - which for us ended up being right when the codebase had stabilised and several translators had planned to be most active. It would have been avoided with better, earlier communication :)


After the Ocata release, I sent a few screenshots of tripleo-ui to the i18n list so translators could see the result of their work. I don't know if anybody cared :-) But unlike Horizon, which has an informal test system available for translators to check their strings during the RC period, most of the people who volunteered translations had no idea what the UI looked like. It'd be cool if we could offer a test system with regular string updates next release - maybe just an undercloud on the new RDO cloud? Deployment success/failures strings wouldn't be verifiable but the rest would, while the system would be easier to maintain than a full dev TripleO environment - better than nothing. Perhaps an idea for the Queens cycle!

The I18n team has a priority board on the Zanata main page (only visible when logged in I think). I'm grateful to see TripleO UI in there! :) Realistically we'll never move past Low or perhaps Medium priority which is fair, as TripleO doesn't have the same kind of reach or visibility that Horizon or the installation guides do. I'm happy that we're included! The OpenStack I18n team is probably the most volunteer-driven team in OpenStack. Let's be kind, respect string freezes and translators' time! \o/



git status & unicode paths

I've had to set this up on three computers in the last couple of weeks, and I forgot both how to do it and how to google for it effectively every single time - so here it is, as a memo to my future self for next time.

If your git status shows gibberish like this for non-ASCII file names:

$ git status
On branch master

    modified:   "\350\265\244\343\201\204\346\214\207"

The solution is to disable quotes in paths:

$ git config --global core.quotePath false


$ git status
On branch master

    modified:  赤い指

OpenStack Pike PTG: TripleO, TripleO UI - Some highlights

For the second part of the PTG (vertical projects), I mainly stayed in the TripleO room, moving around a couple of times to attend cross-project sessions related to i18n.

Although I always wish I understood more/everything, in the end my areas of interest (and current understanding!) in TripleO are around the UI, installing and configuring it, the TripleO CLI, and the tripleo-common Mistral workflows. Therefore the couple of thoughts in this post are mainly relevant to these - if you're looking for a more exhaustive summary of the TripleO discussions and decisions made during the PTG, I recommend reading the PTL's excellent thread about this on the dev list, and the associated etherpads.

Random points of interest

  • Containers is the big topic and had multiple sessions dedicated to it, both single and cross-projects. Many other sessions ended up revisiting the subject as well, sometimes with "oh that'll be solved with containers" and sometimes with "hm good but that won't work with containers."
  • A couple of API-breaking changes may need to happen in TripleO Heat Templates (e.g. for NFV, passing a role mapping rather than a role name around). The recommendation is to get these in as early as possible (by the first milestone) and to communicate them well for out-of-tree services.
  • When needing to test something new on the CI, look at the existing scenarios and prioritise adding/changing something there to test for what you need, as opposed to trying to create a brand new job.
  • Running Mistral workflows as part of or after the deployment came up several times, and was even a topic during a cross-project Heat / Mistral / TripleO session. Things can get messy, switching between Heat, Mistral and Puppet. Where should these workflows live (THT, tripleo-common)? Service-specific workflows (pre/post-deploy) are definitely something people want, and there's a need to standardise how to do that. Ceph's likely to be the first to try their hand at this.
  • One lively cross-project session with OpenStack Ansible and Kolla was about parameters in configuration files. Currently whenever a new feature is added to Nova or whatever service, Puppet and so on need to be updated manually. The proposal is to make a small change to oslo.config to enable it to give an output in machine-readable YAML which can then be consumed (currently the config generated is only human readable). This will help with validations, and it may help to only have to maintain a structure as opposed to a template.
  • Heat folks had a feedback session with us about the TripleO needs. They've been super helpful with e.g. helping to improve our memory usage over the last couple of cycles. My takeaway from this session was "beware/avoid using YAQL, especially in nested stacks." YAQL is badly documented, and everyone ends up reading the source code and tests to figure out how to do things. Bringing Jinja2 into Heat, or some kind of way to have repeated patterns from resources (e.g. based on a file), also came up and was cautiously acknowledged.
  • Predictable IP assignment on the control plane is a big enough issue that some people are suggesting dropping Neutron in the undercloud over it. We'd lose so many other benefits though, that it seems unlikely to happen.
  • Cool work incoming allowing built-in network examples to Just Work, based on a sample configuration. Getting the networking stuff right is a huge pain point and I'm excited to hear this should be achievable within Pike.

Python 3

Python 3 is an OpenStack community goal for Pike.

Tripleo-common and python-tripleoclient both have voting unit test jobs for Python 3.5, though I trust them only moderately, for a number of reasons. For example, many of the tests tend to focus on the happy path, and I've seen and fixed Python 3-incompatible code in exception handling several times (the missing 'message' attribute on exceptions seems an easy one to get caught by), despite the unit testing jobs being all green. Apparently there are coverage jobs we could enable for the client, to ensure the coverage ratio doesn't drop.

Python 3 for functional tests was also brought up. We don't have functional tests in any of our projects, and it's not clear what value we would get out of them (mocking servers) compared to the unit testing and all the integration testing we already do. Increasing unit test coverage was deemed a more valuable goal to pursue for now.

There are other issues around functional/integration testing with Python 3 which will need to be resolved (though likely not during Pike). For example our integration jobs run on CentOS and use packages, which won't be Python 3 compatible yet (cue SCL and the need to respin dependencies). If we do add functional tests, perhaps it would be easier to have them run on a Fedora gate (although if I recall correctly gating on Fedora was investigated once upon a time at the OpenStack level, but caused too many issues due to churn and the lack of long-term releases).

Another issue with Python 3 support and functional testing is that the TripleO client depends on Mistral server (due to the Series Of Unfortunate Dependencies I also mentioned in the last post). That means Mistral itself would need to fully support Python 3 as well.

Python 2 isn't going anywhere just yet so we still have time to figure things out. The conclusions, as described in Emilien's email seem to be:

  • Improve the unit test coverage
  • Enable the coverage job in CI
  • Investigate functional testing for python-tripleoclient to start with, see if it makes sense and is feasible

Sample environment generator

Currently environment files in THT are written by hand and quite inconsistent. This is also important for the UI, which needs to display this information. For example currently the environment general description is in a comment at the top of the file (if it exists at all), which can't be accessed programmatically. Dependencies between environment files are not described either.

To make up for this, currently all that information lives in the capabilities map but it's external to the template themselves, needs to be updated manually and gets out of sync easily.

The sample environment generator to fix this has been out there for a year, and currently has two blockers. First, it needs a way to determine which parameters are private (that is, parameters that are expected to be passed in by another template and shouldn't be set by the user).

One way could be to use a naming convention, perhaps an underscore prefix similar to Python. Parameter groups cannot be used because of a historical limitation, there can only be one group (so you couldn't be both Private and Deprecated). Changing Heat with a new feature like Nested Groups or generic Parameter Tags could be an option. The advantage of the naming convention is that it doesn't require any change to Heat.

From the UI perspective, validating whether an environment or template redefines parameters already defined elsewhere also matters: if a parameter has the same name in two places, it needs to be set to the same value everywhere, or it's uncertain what the final value will end up as.

I think the second issue was that the general environment description can only be a comment at the moment; there is no Heat parameter for it. The Heat experts in the room seemed confident this is non-controversial as a feature and should be easy to get in.

Once the existing templates are updated to match the new format, the validation should be added to CI to make sure that any new patch with environments does include these parameters. Having "description" show up as an empty string when generating a new environment will make it more obvious that something can/should be there, while it's easy to forget about it with the current situation.

The agreement was:

  • Use underscores as a naming convention to start with
  • Start with a comment for the general description
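
A minimal sketch of what a template's parameters might look like under this agreement (all names invented for illustration):

```yaml
# Enables the hypothetical FeatureX service.
# (The general description lives in a comment until Heat grows a
# description attribute for it.)
parameters:
  FeatureXTimeout:
    type: number
    description: User-settable timeout, in seconds
  _FeatureXInternalData:
    type: string
    description: Private; expected to be passed in by another template
```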

Once we get the new Heat description attribute we can move things around. If parameter tags get accepted, likewise we can automate moving things around. Tags would also be useful to the UI, to determine which subset of relevant parameters to display to the user in smaller forms (easier to understand than one form with several dozen fields showing up all at once). Tags, rather than parameter groups, are required because of the aforementioned issue: groups are already used for deprecation, and a parameter can only have one group.

Trusts and federation

This was a cross-project session together with Heat, Keystone and Mistral. A "Trust" lets you delegate or impersonate a user with a subset of their rights. From my experience in TripleO, this is particularly useful with long-running Heat stacks, as an authentication token expires after a few hours, which means you lose the ability to do anything in the middle of an operation.

Trusts have been working very well for Heat since 2013. Before that they had to encrypt the user password in order to ask for a new token when needed, which all agreed was pretty horrible and not anything people want to go back to. Unfortunately with the federation model and using external Identity Providers, this is no longer possible. Trusts break, but some kind of delegation is still needed for Heat.

There were a lot of tangents about security issues (obviously!), revocation, expiration, role syncing. From what I understand Keystone currently validates Trusts to make sure the user still has the requested permissions (that the relevant role hasn't been removed in the meantime). There's a desire to have access to the entire role list, because the APIs currently don't let us check which role is necessary to perform a specific action. Additionally, when Mistral workflows launched from Heat get in, Mistral will create its own Trusts and Heat can't know what that will do. In the end you always kinda end up needing to do everything. Karbor is running into this as well.

No solution was discovered during the session, but I think all sides were happy enough that the problem and use cases have been clearly laid out and are now understood.

TripleO UI

Some of the issues relevant to the UI were brought up in other sessions, like standardising the environment files. Other issues brought up were around plan management, for example why do we use the Mistral environment in addition to Swift? Historically it looks like it's because it was a nice drop-in replacement for the defunct TripleO API and offered a similar API. Although it won't have an API by default, the suggestion is to move to using a file to store the environment during Pike and have a consistent set of templates: this way all the information related to a deployment plan will live in the same place. This will help with exporting/importing plans, which itself will help with CLI/UI interoperability (for instance there are still some differences in how and when the Mistral environment is generated, depending on whether you deployed with the CLI or the UI).

A number of other issues were brought up around networking, custom roles, tracking deployment progress, and a great many other topics, but I think the larger problems around plan management were the only ones expected to turn into a spec, now proposed for review.

I18n and release models

After the UI session I left the TripleO room to attend a cross-project session about i18n, horizon and release models. The release model point is particularly relevant because the TripleO UI is a newly internationalised project as of Ocata and the first to be cycle-trailing (TripleO releases a couple of weeks after the main OpenStack release).

I'm glad I was able to attend this session. For one, it was really nice to collaborate directly with the i18n and release management teams, and catch up with a couple of Horizon people. For another, it turns out tripleo-ui was not properly documented as cycle-trailing (fixed now!), and that had led to other issues.

Having different release models is a source of headaches for the i18n community, already stretched thin. It means string freezes happen at different times and stable branches are cut at different points, which creates a lot of tracking work for the i18n PTL to figure out which project is ready and to do the required manual work to update Zanata upon branching. One part of the solution is likely to figure out if we can script the manual parts of this workflow, so that when the release patch (which creates the stable branch) is merged, the change is automatically reflected in Zanata.

For the non-technical aspects of the work (mainly keeping track of deadlines and string freezes), the decision was that if you want to be translated, you need to respect the same deadlines as the cycle-with-milestones projects on the main schedule; if a project doesn't want to, that is, if it doesn't respect the freezes or cut branches when expected, then it will be dropped from the i18n priority dashboard in Zanata. This was particularly relevant for Horizon plugins, as there are about a dozen of them now with varying degrees of diligence when it comes to doing releases.

These expectations will be documented in a new governance tag, something like i18n:translated.

Obviously this would mean that cycle-trailing projects would likely never be able to get the tag. The work we depend on lands late, so we continue making changes up to two weeks after each of the documented deadlines. ianychoi, the i18n PTL, seemed happy to take these projects under the i18n wing and do the manual work required, as long as there is an active i18n liaison for the project communicating with the i18n team to keep them informed about freezes and new branches. This seemed to work ok for us during Ocata so I'm hopeful we can keep that model. It's not entirely clear to me if this will also be documented/included in the governance tag so I look forward to reading the spec once it is proposed! :)

In the case of tripleo-ui we're not a priority project for translations nor looking to be, but we still rely on the i18n PTL to create Zanata branches and merge translations for us, and would like to continue with being able to translate stable branches as needed.


The CI Q&A session on Friday morning was amazingly useful, and it was unanimously agreed that the content should be moved to the documentation (not done yet). If you've ever scratched your head about something related to TripleO CI, have a look at the etherpad!


OpenStack Pike PTG: OpenStack Client Tips and background for interested contributors

Last week I went off to Atlanta for the first OpenStack Project Teams Gathering, for a productive week discussing all sort of issues and cross-projects concerns with fellow OpenStack contributors.

For the first two days, I decided to do something you're not supposed to and attend the OpenStack Client (OSC) sessions despite not being a contributor yet. From my perspective it was incredibly helpful, as I got to hear about technical choices, historical decisions, current pain points and generally increase my understanding of the context around the project. I expect this will come in very handy as I become more involved, so I thought I'd document some of it here, or more accurately keep a giant braindump I can reference in the future. These are the things that were of interest to me during the meetings; this is not meant to be an authoritative representation of everything that was discussed, or of other participants' thoughts and decisions.

The etherpad for these sessions lives at

Issue: Microversions

Microversions are a big topic that I think came up in multiple rooms. From the client's perspective, a huge problem is that most of the microversion stuff is hidden within the various clients and little of it surfaces back to the user.

There's a need for better negotiation, for the client to determine what is available. The path forward seems to be for the client to do auto-discovery of the latest available version.

The issue was mainly discussed while Cinder folks were around, so a lot of the discussion centred on that project. For example, they do microversions like Nova does, where everything is in the headers. OSC will rely on what's available in cinderclient itself and is unlikely to ever develop more capabilities than already exist there. However, in Cinder's case, microversions are only available from v3 on, and v3 support hasn't merged in OSC yet.

Other problems with microversions: how to keep track of what's available in each version? The plan is to use "latest" and cap specific commands if they are known to break. However there will still be issues with the interactive mode, as the conversation isn't reinitialised with each call.
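To make the "use latest, cap known-broken commands" plan more concrete, here is a small sketch of that negotiation logic. All function names, commands and version numbers are made up for illustration; this is not OSC's actual implementation.

```python
# Hypothetical sketch of client-side microversion negotiation: pick the
# highest version supported by both ends, then apply a per-command cap
# for commands known to break on newer versions.

def parse(version):
    """Turn '3.27' into a comparable tuple (3, 27)."""
    major, minor = version.split(".")
    return (int(major), int(minor))

def negotiate(client_max, server_max, command_caps=None, command=None):
    """Return the microversion string the client should send."""
    chosen = min(parse(client_max), parse(server_max))
    if command_caps and command in command_caps:
        chosen = min(chosen, parse(command_caps[command]))
    return "%d.%d" % chosen

# The client supports up to 3.27, the server only up to 3.12:
print(negotiate("3.27", "3.12"))  # -> 3.12
# A command known to break past 3.8 stays capped there:
print(negotiate("3.27", "3.12", {"volume list": "3.8"}, "volume list"))  # -> 3.8
```

The interactive-mode problem mentioned above doesn't go away with this: the negotiated version would need to be re-checked per call, not once per session.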

Issue: Listing all the commands vs only available ones

This appears to be a topic where the answer is "we gave up a long time ago," for technical and other reasons. Many service endpoints don't even let you get to the version without being authenticated.

But even without that, there is a reluctance to do a server round-trip just to display the help.

It's low priority for OSC, although it may be more important for other projects like Horizon.

Terminology (Cinder): manage/unmanage vs adopt/abandon

One of the specific issues that was brought up and resolved concerned the terminology used by a couple of Cinder commands.

UX studies have been done on the client, and the feedback is clear that what we currently have is not obvious. In the case of "manage", the term is probably too generic, and not a standard one that every storage expert (outside of Cinder) would instantly recognise.

For a similar use case, both Heat and Ironic use the "adopt"/"abandon" terminology, so an agreement was reached to use it here as well. It's up to the Cinder folks to decide whether they wish to do the same deprecation in their own client.

To help people familiar with the existing command find the new, clearer name, the usual approach is to include the old name in the help string so that users can grep for it.
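The grep-ability trick can be illustrated with plain argparse (command and help text below are made up, not the actual Cinder wording): mention the old verb in the new command's help string so it still shows up in a search.

```python
import argparse

# Hypothetical sketch: keep the old command name ("manage") in the help
# string of the new command ("adopt") so that piping the help output
# through `grep manage` still leads users to the renamed command.
parser = argparse.ArgumentParser(prog="volume")
sub = parser.add_subparsers()
sub.add_parser("adopt", help="Adopt (formerly 'manage') an existing volume")

help_text = parser.format_help()
print("manage" in help_text)  # the old name remains searchable -> True
```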

Terminology (Cinder): Consistency groups & deprecation policy

Consistency group is being replaced by volume_group. If the command is mostly similar from the user's perspective, the recommendation is to simply alias it. Both commands should be kept for a while, then the old one can be deprecated. As much as possible, don't break scripts.

The logic can also be hidden within the client, like it is for Neutron and nova-network: many commands look the same from the user's perspective, the branching to use either as required is done internally.
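A sketch of what that internal branching amounts to (backend names and return values are made up): the user always types the same command, and the client picks the right backend behind the scenes, the way OSC hides the Neutron vs nova-network split.

```python
# Hypothetical sketch: one user-facing "network list" command that
# branches internally to whichever backend the cloud actually runs.

def list_networks_neutron():
    # Stand-in for a call to the Neutron API.
    return ["neutron-net"]

def list_networks_nova():
    # Stand-in for a call to the legacy nova-network API.
    return ["nova-net"]

def network_list(has_neutron):
    # The branch is invisible to the user; they just run "network list".
    backend = list_networks_neutron if has_neutron else list_networks_nova
    return backend()

print(network_list(True))   # -> ['neutron-net']
print(network_list(False))  # -> ['nova-net']
```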

Issue: NoAuth requirements

The requirement for a --no-auth option came up, which doesn't seem currently possible. It's important for projects like Ironic that support a stand-alone mode where there is no Keystone.

There might be a "Token 'off' type" already available in some cases, though it still requires other "unnecessary" parameters like auth_url.

OSC doesn't currently support that, though it may be already in the new layer that Monty is creating.
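To illustrate what a "no auth" mode boils down to, here is a sketch (the class name and endpoint are hypothetical): a session that never fetches a token and needs no auth_url, which is what a standalone Ironic deployment without Keystone requires.

```python
# Hypothetical sketch of a "no auth" session: it skips token
# negotiation entirely and talks straight to a known endpoint.

class NoAuthSession:
    def __init__(self, endpoint):
        self.endpoint = endpoint  # no auth_url, no token, no Keystone

    def request_headers(self):
        # An authenticated session would add X-Auth-Token here;
        # in no-auth mode there is simply nothing to add.
        return {"Content-Type": "application/json"}

s = NoAuthSession("http://ironic.example.com:6385")
headers = s.request_headers()
print("X-Auth-Token" in headers)  # -> False
```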

A new layer

That new layer came up a few times. I think this is shade? Apparently the (long term) end goal is to destroy the Python libraries and replace them with straight REST calls aka using requests for everything, to get rid of some of the issues with the existing Python libraries.

Some examples of the issues:

  • The Neutron client has a strange architecture and was never brought into OSC because of this.
  • The Glance client has unique dependencies that no one else requires, like openssl, which makes it hard to install on Windows.

There were some discussions around why this new layer is not using the SDK. This may be because the SDK has a big ORM data layer, which doesn't fit with the "simple" strategy.

So the goal becomes building a thin layer on top of the REST APIs, which of course has its own set of concerns, e.g. how to keep track of what's available in every release, given the microversion stuff.

How about OSC using shade? It does seem to have filled a need and gained traction. However it is opinionated, there's duplication on both sides, and it is both slightly higher-level and lower-level than needed. I didn't get the impression it's on the cards at this point.

New meta-package

There is a new meta-package, openstackclient (as opposed to python-openstackclient), to install the client together with all the plugins. A downside of the plugin architecture that some people dislike is that it requires multiple steps to get access to everything. The meta-package resolves that.

This project is basically a requirement file, with some docs and a few tests (more may migrate there). It is ready for a "0.1" release, although the recommendation from the release team is to keep the versioning in sync with the main OSC.

I was wondering why python-tripleoclient wasn't in there yet, it turns out for now only projects that are in the global requirements are included. I got distracted with wanting to fix this before remembering that due to A Series Of Unfortunate Dependencies, installing the TripleO client also brings in the Mistral server... and cutting that chain is not as straightforward as I'd hoped (now documented in a Launchpad bug). Though if the other clients are there with their current set of issues, maybe it'd be ok to add it anyway.

From now on, projects should add themselves to the meta-package when creating a new plugin - this will be added to the checklist.

Another note, for this to work individual clients should not depend on OSC itself. They shouldn't anyway, because it brings in too many things.

Deprecating ALL the clients

OSC is meant to deprecate all of the existing clients. However there is no timeline; this is specific to each project. At the moment, Cinder and Neutron are getting up to speed, but it's up to them to decide when to get rid of the old client, if at all.

In the case of Keystone, it took two releases for the old client to be deprecated, during which it kept a very strict security-fixes-only policy - even typo fixes were refused!

Migration path for users

There's a need for docs to help users familiar with the old clients learn the new commands. stevemar already documented most of them, and amotoki provided a mapping for Neutron commands as well.

Since the work is already done, these will be added to the docs. In theory it shouldn't get too stale, since no new commands are to be added to the old CLIs (?). The mapping should start with the old commands first, as this is how people will be searching.
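The old-command-first structure could look like the sketch below. The three entries shown are well-known equivalences, but the table itself is illustrative, not the documented mapping.

```python
# Illustrative sketch of the migration mapping, keyed by the legacy
# command because that's what users will search for.
LEGACY_TO_OSC = {
    "nova list": "openstack server list",
    "neutron net-list": "openstack network list",
    "cinder list": "openstack volume list",
}

def new_command(old):
    return LEGACY_TO_OSC.get(old, "no mapping documented yet")

print(new_command("nova list"))  # -> openstack server list
```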

Cliff topics: entry point freedom, adding to existing commands

dhellman dropped by on Tuesday morning to address Cliff-specific issues and feature requests. I missed the beginning but learnt a bunch of interesting things by listening in. Cliff is what gives us the base class structure and maps entry points.

Some security issues were brought up, and so was the possibility for users to lock down namespaces, giving them some control over what gets added. It would be nice if there were a tool to check which entry point was hooked from where. Cliff could be modified to include additional information on where a command comes from, and an independent tool could be written that uses that. I don't think a bug is open for it yet, though the goal seems to be to get it in for Pike.
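The "which entry point comes from where" tool can be sketched with the standard library alone: for a given entry point group, map each name to the distribution providing it. The runnable example uses the "console_scripts" group because it exists on virtually any Python install; an OSC plugin group would be inspected the same way.

```python
# Sketch of an entry-point provenance tool using importlib.metadata.
from importlib import metadata

def entry_point_origins(group):
    """Map each entry point name in a group to the distribution that provides it."""
    origins = {}
    for dist in metadata.distributions():
        for ep in dist.entry_points:
            if ep.group == group:
                origins[ep.name] = dist.metadata["Name"]
    return origins

# Show a few console scripts and where they come from:
for name, provider in sorted(entry_point_origins("console_scripts").items())[:5]:
    print("%-20s <- %s" % (name, provider))
```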

Another feature request related to Cliff was to add to existing commands, which comes up a lot especially for quotas. "Purge" may be another candidate.

Pike community goal: Python 3

Python 3 should be supported by the client already. The only thing left to do is to modify devstack to load OSC with Python 3 by default.

Blueprints in OSC

In passing I learnt that OSC doesn't really use blueprints, nor wants to - they are used for implementing Neutron commands at the moment, but the team would rather they not get used for anything else.

If someone wishes to implement a feature, it's better to open it as a bug instead, so that it doesn't get lost.

osc_lib and feature branches

It took a long time to implement osc_lib. What should be done next time big changes are expected, e.g. removing things for the next big version?

Using feature branches is generally not recommended as deleting them and integrating them at the end requires a lot of effort.

In this case since removals will break things, working with infra to create a feature branch seems to make sense.

Glance v2

Old operations like create are supported, but the new commands aren't (mainly around metadata and namespaces). We spent a bit of time trying to figure out what metadata definitions are (they're related to Glare, pre-Big Tent, and expanded the scope of Glance a bit: they could be used as artifacts for Nova, hardware affinity, or identifying what flags actually mean). They're currently exposed through Horizon.

Current verdict: they won't be implemented till someone asks for them.

Adding your project to the client, creating your own plugin

Finally, a couple of PTLs dropped by to ask for advice on how to get started creating their client plugin (and avoid folks trying to do it in-tree). The most important and difficult part seems to be naming resources. There is no process to determine terminology, but try to keep it consistent with the existing commands (verbs are easier than nouns, e.g. new thing = create).

There's no good answer for quotas yet, though this may change. For now, taking a new 'load-balancer' resource as an example, this would look like:

$ openstack load-balancer quotas

In the future, we may get something similar to:

$ openstack quotas --load-balancer

though the extension to do this doesn't exist yet.

There is no OSC convention for cascade-delete (e.g. adding a flag). A purge command is underway. It would delete all the resources under a project.

With regard to versions/microversions, give the client a way to detect what it needs to.

Good examples to learn from: Identity v3 is cleanly done. Volume v2 is very recent and includes a lot of lessons learnt.

On the other hand Object may not be such a good example, it's a bit different and taken directly from the Swift CLI.

I think the documentation for writing new plugins is over there. Most OSC developers and devs from related libraries hang out on #openstack-sdks. It's a good place to ask any client-related question!


FOSDEM 2017 From the community devroom

A few weeks ago I happily went off to FOSDEM. Saturday I caught up with people, Sunday I spent most of the day in the incredibly busy Community devroom - right until I had to give up my seat for food. I really enjoyed the talks and hope the track manages to get a bigger devroom next year, dozens of people had to be kept out after every session ended (fire safety regulations).

Most talks lasted about 30 minutes and the videos should be available on the FOSDEM website. Here are some of the highlights from my perspective, and my notes.

Closing Loops (Greg Sutcliffe)

Link to talk

This talk was more about raising questions and figuring out best practices based on audience participation rather than offering The One Solution. In this context, a "loop" means an RFC still undecided, a discussion dying out, patches unreviewed. How do you decide when a topic has come to a conclusion?

It's important to remember that everybody has the best interests of the project at heart; this is about blaming the systems we use, not the people.

Say, in a Request For Comments discussing significant changes on a mailing list, the discussion tends to peter out after a while, and it can be difficult to track how a proposal evolved. Foreman (Greg's project) decided to move the discussions to an RFC repository instead, but 6 months later it seems to have only moved the problem.

From the audience:

  • For Debian, the discussion ends "when someone starts the work" :)
  • LibreOffice has a "technical council" type meeting/discussion with about 40 people, on the phone. A wiki documents the process to join.
  • In the PHP world, the RFC author gets to choose when to call a vote and the deadline for it (at least 2 weeks ahead). Greg wondered if that might get divisive if there are too many 50/50 situations, but the audience member indicated that it wasn't the case, because most problems get solved during the discussions that precede the vote so they are usually overwhelmingly accepted. Every contributor gets a vote from a certain contribution threshold.

The speaker seemed interested in trying out the technical council direction for his project.

Data science for community management (Manrique Lopez)

Link to talk

Communities need to have a common vision and self-awareness to see if they are walking toward it: what kind of people are part of the community, what are they doing... This helps with governance and transparency (fairness, trust).

What is the role of a community manager? Do communities even need to be managed? "Manage" is a strong word.

A community manager is interested in keeping their community alive, active, and productive. Having visibility into community health is important. There are a lot of tools out there with data relevant to this: JIRA, git, IRC, slack, StackOverflow, ... A lot of places, which individually are only one part of the bigger picture.

Rather than cobbling a bunch of scripts together to get at all that data, GrimoireLab does all the work for you and presents it nicely using graphs. One example was the "Pony Factor," which I hadn't heard of: how many people are doing 50% of the commits? For example in the kernel, the pony factor is about 200 people/10 companies. That's a lot of people!

When you have that data you can see the evolution of the project over time: reviews, gender, demography, pony factor, etc. A hosted version, currently in beta, lets you try it out for free for up to 5 GitHub organisations.
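The Pony Factor described above is simple to compute yourself: the smallest number of authors that together account for half the commits. A small sketch, with made-up commit data:

```python
# Sketch of the "Pony Factor" computation: the smallest number of
# authors that together account for 50% of the commits.
from collections import Counter

def pony_factor(commit_authors, threshold=0.5):
    counts = Counter(commit_authors)
    target = threshold * sum(counts.values())
    covered = 0
    for rank, (_, n) in enumerate(counts.most_common(), start=1):
        covered += n
        if covered >= target:
            return rank
    return len(counts)

# Made-up history: one dominant contributor and a long tail.
authors = ["alice"] * 60 + ["bob"] * 25 + ["carol"] * 10 + ["dan"] * 5
print(pony_factor(authors))  # -> 1 (alice alone has 60% of the commits)
```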

Getting your issues fixed (Tomer Brisker)

Link to talk

I think anyone trying to get help or start contributing as a user to an open-source project needs to watch this. Tomer summarised a ton of best practices that experienced open-source people "just know" and would likely never think to document.

A super rough summary:

  • Try to ask for help on IRC
  • Try the mailing list. If no one replies and you solve it yourself or find a workaround later, please self-reply. (Mandatory xkcd link)
  • Search the bug tracker. If there is a bug, add your input, your use case. If there are none, create a new issue and include as much information as possible: traces, logs, version (including of the plugin if relevant), screenshots... Anything to help the developers reproduce the issue. If this is a feature request, explain your use case.


  • Tabs vs spaces: follow the coding guidelines for the project. If there aren't any follow the standards and conventions of the existing code base.
  • Make the commit message good. Explain "why" as completely as possible. Apparently Github even lets you include screenshots, so that the patch can be reviewed without having to pull the code. "Why" did you do it this way and not another way, explained in such a way people in the future can understand the decision.
  • Include tests. This protects against regressions, so that you don't have to fix the same bug two releases later.
  • Be kind. We're building a community. No one wants to be shouted at. Be patient.

Handle conflict, like a boss! (Deb Nicholson)

Link to talk

Conflicts often come from a piece of information that is missing. Conflict is natural. If you don't handle it, it will come back worse.

How do we minimise it? Try to understand the other people, where they're coming from.

For example:

  • Passion. When people get so excited about something they forget why they're doing it. Mismatched goals.
  • Identity. When someone's identity is linked to a task, not a goal. For example, the person known to be super helpful with wrangling the complex bug tracker, who'll end up sad when you finally upgrade to something better.

Don't leave conflict unattended or you'll end up in a snake pit. You don't owe your mental health to any FOSS project.

There are 3 styles of handling conflict. We tend to use them all, depending on the circumstances.

  • Avoidance. This is not good in a project! It festers. It looks like you don't care, like you're not connected to the community and its concerns.
  • Accommodation. Doing stuff you don't really want to in order to keep the group happy. For bigger issues it can make it look like you don't mind the way things are going. It works for small things.
  • Assertion. Can also be called Aggression. Name calling, etc. Not a good strategy. It wears out other contributors.

Conversations happen in many places. Document the decisions so that it doesn't feel like they're coming out of nowhere.

If you notice in passing a problem in the community you're probably missing some information. Gather more info in order to understand the source of conflict: misunderstanding goals, stepping on toes (e.g. that person always writes the release notes but someone else went ahead and did it this time instead, without asking), ... Is it new members clashing with the old ones? Go back to the historical vision, step out of the tactical discussion. Are there sekrit goals underway? For example a motivation you're not aware of, someone with different goals for their participation like a new job or internal deadlines...

If the conflict is based on fear, it can be harder to get at, e.g. someone afraid they can't learn a new skill if switching language. Create a back channel where it's safe to discuss, to try to deal with it.

Planning for future conflict:

  • Assume the best.
  • Have a code of conduct. Set expectations.
  • No name calling/ad hominem, only discuss technical merits.
  • Set an example.

We could do so much more in the FOSS world if we didn't spend so much time slagging each other :)

Some references:

Mentoring 101 (Brian Proffitt)

Link to talk

This was a lively talk on both formal mentoring via programs such as the Google Summer of Code as well as daily mentoring.

Onboarding is important. This is not just about keeping people in but helping people get in.

As a software project, should you want to participate in formal mentoring: have a plan. Formal programs have their own rules and structure.

As a participant, be a member of the community, pick a project that interests you. Be ready to commit. Be flexible - it takes time but is rewarding. Be ready to ask for help.

Undermentoring is a problem. Have clear, set goals for the mentee. Keep an eye on them while also giving them space.

There is an issue with mentees disappearing... They really should keep a line of communication open, even if only to say, "sorry, I can't participate anymore." (From personal experience mentoring, this is quite stressful, and it was a relief when the mentee admitted they could no longer participate.) You need to mentor proactively to avoid this.

  • Be consistent. E.g. your style may be authoritative (do it this way), democratic (collaboration) or coaching (guidance). Stick to one. (...with Brian noting that the last one is best for mentoring!)
  • Set expectations. Make sure the goals are aligned, define the process, avoid confusion.
  • Take an interest in the mentee. Listen actively. Build a relationship. Learn their style. Don't be a stalker though.
  • Wait before giving advice. It's easy to try to fix things when someone comes at you with a problem. You might be able to solve it, but that won't help them. Hit the pause button. Ask more questions, get to the heart of it, guide them to the solution. It takes time. You may not solve the problem. As long as it's not time-sensitive, that's ok.
  • Don't assume anything. Avoid stereotypes. Learn the details of a situation. Emphasise communication.

Remember this one thing: they aren't you!! Understand the mentee. This is not only about getting the project done.

A question from the audience was, how can I help my mentor as a mentee? Once again it's about being proactive: this is what I want to do, how do you help me do this? If it isn't working, reconsider the relationship.

How do you recruit mentors? Some projects are easier to get into than others. In oVirt, there's a separate mailing list and no formal process. Spread the word, see who's interested. What about a project where there are a lot of mentees yet no one ever answers the call for mentors? If it is part of a corporate project, a solution could be to make mentoring part of the job.

Impromptu community talk

One of the talks was cancelled, so a couple of people started a conversation around community and community events (too many pizzas, too much alcohol is the feedback I remember from that part).

The need for "cruel empathy" was brought up: sometimes meeting organisers should be jerks for good reasons. Loud, demeaning people are toxic and need to be called out. Tell them, in private: "I appreciate that you're here, etc, but you're sucking air out of the community." They may not realise they are doing it, so be nice about it.

The Shotwell community was brought up as one with great first contact.

Overcoming culture clash (Dave Neary)

Link to talk

This was a very interesting talk, mainly summarising information and graphs from books, which I'm going to list here. And hopefully read soon!

Basically a lot of what we do and how we interpret things depends on our upbringing (country, culture, ...)

  • The Tragedy of the Commons - Garrett Hardin.
  • Moral Tribes - Joshua Greene.
  • Culture and Organizations - Geert Hofstede. That one brings up 6 dimensions of culture, for example tending to be uncomfortable with silence and filling it instantly, vs. leaving a couple seconds pause to make sure the person is finished.
  • Getting to Yes - Roger Fisher. This is more psychology-based than about hard negotiation.

There are no easy solutions. Focus on the relationship. Take small steps: one conversation at a time. Encourage local user groups - though keep them connected e.g. via sending mentors/leaders to these meetups to avoid silos. Have short mentoring programs. Learn about the cultural norms and sensitivity of the community. Avoid real-time meetings.

I contributed! But what now? (Bee Padalkar)

Link to talk

Based on the Fedora community, this talk was about improving contributor retention rates by finding patterns about successful contributors.

Why do people stay? According to a survey they ran, usually thanks to the strong, positive and inclusive community. Other reasons were identifying with the Fedora foundations (core values), giving back, constantly learning.

No matter how interesting the project is, if the community is poor people don't stick around.

  • Have a defined onboarding process.
  • Encourage people to speak up.
  • Assign action items early on, to create a sense of responsibility.
  • Let people know they are doing good work, give feedback (e.g. IRC karma).
  • Use inclusive and motivating language.
  • Encourage attending events to meet people in person.
  • Encourage consistency over activity. Let people contribute according to their own time.
  • Inspire, interact, give feedback.
  • Help people identify with your mission.


Looking forward to next year's event!


A Quick Introduction to Mistral Usage in TripleO (Newton) For developers

Since Newton, Mistral has become a central component of the TripleO project, handling many of the operations in the back-end. I recently gave a few people a short crash course on Mistral, what it is and how we use it, and thought it might be useful to share some of my bag of tricks here as well.

What is Mistral?

It's a workflow service. You describe what you want as a series of steps (tasks) in YAML, and it will coordinate things for you, usually asynchronously.

Link: Mistral overview.

We are using it for a few reasons:

  • it lets us manage long-running processes (e.g. introspection) and track their state
  • it acts as a common interface/API that is currently used by both the TripleO CLI and UI, thus avoiding duplication, and can also be consumed directly by external non-OpenStack consumers (e.g. ManageIQ).


A workbook contains multiple workflows. (The TripleO workbooks live at

A workflow contains a series of 'tasks', which can be thought of as steps. We use the default 'direct' type of workflow in TripleO, which means tasks are executed in the order written, moving around based on the on-success and on-error values.

Every task calls an action (or another workflow), which is where the work actually gets done.
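The task-hopping of a 'direct' workflow can be sketched in a few lines. The task and action names below are made up; this mini-interpreter just shows how execution follows on-success and on-error until there is nowhere left to go.

```python
# Minimal sketch of how a Mistral 'direct' workflow moves between
# tasks: run the current task's action, then jump to its on-success
# or on-error target.

def failing_action():
    raise RuntimeError("node not found")

def run_workflow(tasks, start):
    trace = []
    name = start
    while name:
        trace.append(name)
        task = tasks[name]
        try:
            task["action"]()
            name = task.get("on-success")
        except Exception:
            name = task.get("on-error")
    return trace

tasks = {
    "delete_node": {"action": failing_action,
                    "on-success": "send_success", "on-error": "send_failure"},
    "send_success": {"action": lambda: None},
    "send_failure": {"action": lambda: None},
}
print(run_workflow(tasks, "delete_node"))  # -> ['delete_node', 'send_failure']
```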

OpenStack services are automatically mapped into actions thanks to the mappings defined in Mistral, so we get a ton of actions for free already.

Useful tip: with the following commands you can see locally which actions are available, for a given project.

$ mistral action-list | grep $projectname

You can of course create your own actions. Which we do. Quite a lot.

$ mistral action-list | grep tripleo

An execution is what an instance of a running workflow is called, once you've started one.

Link: Mistral terminology (very detailed, with diagrams and examples).

Where the TripleO Mistral workflows live

Let's look at a couple of examples.

A short one to start with, scaling down

It takes some input, starts with the 'delete_node' task and continues on to on-success or on-error depending on the action result.

Note: You can see we always end the workflow with send_message, which is a convention we use in the project. Even if an action fails and the workflow moves to on-error, the workflow itself should be successful (a failed workflow would indicate a problem at the Mistral level). We end with send_message because we want to let the caller know what the result was.

How will the consumer get to that result? We associate every workflow with a Zaqar queue. This is a TripleO convention, not a Mistral requirement. Each of our workflows takes a queue_name as input, and the clients are expected to listen to the Zaqar socket for that queue in order to receive the messages.
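The shape of that convention can be sketched without a real Zaqar: the workflow posts one final status message, and the caller blocks on the queue to learn the outcome. Zaqar is simulated here with an in-process queue, and the message fields are made up, not the exact TripleO payload.

```python
# Sketch of the send_message convention: the caller learns the real
# result from the final message, not from the workflow state itself.
import queue

def fake_workflow(q):
    # ...tasks would run here. Even when a task fails, the workflow
    # still "succeeds"; the failure travels in the message instead.
    q.put({"status": "FAILED", "message": "delete_node: node not found"})

def wait_for_result(q, timeout=1):
    # A real client would listen on the Zaqar socket for queue_name.
    return q.get(timeout=timeout)

zaqar_queue = queue.Queue()
fake_workflow(zaqar_queue)
result = wait_for_result(zaqar_queue)
print(result["status"])  # -> FAILED
```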

Another point, about the action itself on line 20: tripleo.scale.delete_node is a TripleO-specific action, as indicated in the name. If you were interested in finding the code for it, you should look at the entry_points in setup.cfg for tripleo-common (where all the workflows live):

which would lead you to the code at:

A bit more complex: node configuration

It's "slightly more complex" in that it has a couple more tasks, and it also calls another workflow (line 426). You can see it starts with a call to ironic.node_list in its first task at line 417, which comes for free with Mistral. No need to reimplement it.

Debugging notes on workflows and Zaqar

Each workflow creates a Zaqar queue, to send progress information back to the client (CLI, web UI, other...).

Sometimes these messages get lost and the process hangs. It doesn't mean the action didn't complete successfully.

  • Check the Zaqar processes are up and running: $ sudo systemctl | grep zaqar (this has happened to me after reboots)
  • Check Mistral for any errored workflow: $ mistral execution-list
  • Check the Mistral logs (executor.log and engine.log are usually where the interesting errors are)
  • Ocata has timeouts for some of the commands now, so this is getting better

Following a workflow through its execution via CLI

This particular example runs fairly fast, so it's more a case of tracing back what happened afterwards.

$ openstack overcloud plan create my-new-overcloud
Started Mistral Workflow. Execution ID: 05d550f2-5d13-4782-be7f-a775a1d86a84
Default plan created

The CLI nicely tells you which execution ID to look for, so let's use it:

$ mistral task-list 05d550f2-5d13-4782-be7f-a775a1d86a84

| ID                                   | Name                            | Workflow name                                             | Execution ID                         | State   | State info                   |
| c6e0fef0-4e65-4ee6-9ae4-a6d9e8451fd0 | verify_container_doesnt_exist   | tripleo.plan_management.v1.create_default_deployment_plan | 05d550f2-5d13-4782-be7f-a775a1d86a84 | ERROR   | Failed to run action [act... |
| 72c1310d-8379-4869-918e-62eb04530e46 | verify_environment_doesnt_exist | tripleo.plan_management.v1.create_default_deployment_plan | 05d550f2-5d13-4782-be7f-a775a1d86a84 | ERROR   | Failed to run action [act... |
| 74438300-8b18-40fd-bf73-62a1d90f71b3 | create_container                | tripleo.plan_management.v1.create_default_deployment_plan | 05d550f2-5d13-4782-be7f-a775a1d86a84 | SUCCESS | None                         |
| 667c0e4b-6f6c-447d-9325-ab6c20c8ad98 | upload_to_container             | tripleo.plan_management.v1.create_default_deployment_plan | 05d550f2-5d13-4782-be7f-a775a1d86a84 | SUCCESS | None                         |
| ef447ea6-86ec-4a62-bca2-a083c66f96d3 | create_plan                     | tripleo.plan_management.v1.create_default_deployment_plan | 05d550f2-5d13-4782-be7f-a775a1d86a84 | SUCCESS | None                         |
| f37ebe9f-b39c-4f7a-9a60-eceb80782714 | ensure_passwords_exist          | tripleo.plan_management.v1.create_default_deployment_plan | 05d550f2-5d13-4782-be7f-a775a1d86a84 | SUCCESS | None                         |
| 193f65fb-502a-4e4c-9a2d-053966500990 | plan_process_templates          | tripleo.plan_management.v1.create_default_deployment_plan | 05d550f2-5d13-4782-be7f-a775a1d86a84 | SUCCESS | None                         |
| 400d7e11-aea8-45c7-96e8-c61523d66fe4 | plan_set_status_success         | tripleo.plan_management.v1.create_default_deployment_plan | 05d550f2-5d13-4782-be7f-a775a1d86a84 | SUCCESS | None                         |
| 9df60103-15e2-442e-8dc5-ff0d61dba449 | notify_zaqar                    | tripleo.plan_management.v1.create_default_deployment_plan | 05d550f2-5d13-4782-be7f-a775a1d86a84 | SUCCESS | None                         |

This gives you an idea of what Mistral did to accomplish the goal. You can also map it back to the workflow defined in tripleo-common to follow the steps and find out exactly what was run. If the workflow stopped too early, this can give you an idea of where the problem occurred.
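Once you have spotted a suspicious task, you can drill further down. The subcommands below are from the python-mistralclient of that era; exact names may differ on other releases.

```shell
#!/bin/sh
# Sketch: drilling into one task from the listing above.
# The ID is the errored verify_container_doesnt_exist task.
TASK_ID=c6e0fef0-4e65-4ee6-9ae4-a6d9e8451fd0

if command -v mistral >/dev/null 2>&1; then
    mistral task-get "$TASK_ID"          # state and state info
    mistral task-get-result "$TASK_ID"   # full result, traceback included
fi
```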

Side-note about plans and the ERRORed tasks above

As of Newton, information about deployment is stored in a "Plan" which is implemented as a Swift container together with a Mistral environment. This could change in the future but for now that is what a plan is. (Edited to add: this changed in Pike. The plan environment is now stored in Swift as well, in a file named plan-environment.yaml.)

To create a new plan, we need to make sure there isn't already a container or an environment with that name. We could implement this check as a Python action, but since Mistral already has commands to get a container / get an environment, we can be clever and reverse the usual on-error and on-success actions:

If we do get a container back, it means the name is already taken and a plan already exists, so we cannot reuse this name. So 'on-success' becomes the error condition.
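In the workflow YAML, that inversion looks roughly like this. This is a sketch, not the exact tripleo-common code; the action and task names here are assumptions for illustration:

```yaml
verify_container_doesnt_exist:
  # Getting the container back means the name is already taken,
  # so "success" is actually our error condition.
  action: swift.head_container container=<% $.container %>
  on-success: fail_plan_already_exists
  on-error: create_container
```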

I sometimes slightly regret going this way, because it leaves exception tracebacks in the logs, which is misleading when folks look at the Mistral logs for the first time in order to debug some other issue.

Finally, I'd like to mention the Mistral Quick Start tutorial, which is excellent. It takes you from creating a very simple workflow to following its journey through execution.

How to create your own action/workflow in TripleO

Mistral documentation:

In short:

  • Start writing your python code, probably under tripleo_common/actions
  • Add an entry point referencing it to setup.cfg
  • /!\ Restart Mistral /!\ Action code is only picked up when Mistral starts

This is summarised in the TripleO common README (personally I put this in a script to easily rerun it all).
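The script I mention is nothing fancy; something along these lines, where the checkout location and the Mistral unit names are assumptions to adapt to your undercloud:

```shell
#!/bin/sh
# Sketch: re-register tripleo-common actions and restart Mistral so the
# new/changed action code is picked up.
set -e
TRIPLEO_COMMON=~/tripleo-common   # assumption: your checkout location

if [ -d "$TRIPLEO_COMMON" ]; then
    cd "$TRIPLEO_COMMON"
    sudo pip install -e .            # pick up the new setup.cfg entry point
    sudo mistral-db-manage populate  # re-register the action definitions
    for unit in openstack-mistral-engine openstack-mistral-executor; do
        sudo systemctl restart "$unit"
    done
fi
```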

Back to deployments: what's in a plan

As mentioned earlier, a plan is the combination of a Swift container and a Mistral environment. In theory this is an implementation detail that shouldn't matter to deployers. In practice, knowing this gives you access to a few more debugging tricks.

For example, the templates you initially provided will be accessible through Swift.

$ swift list <plan-name>

Everything else will live in the Mistral environment. This contains:

  • The default passwords (which is a potential source of confusion)
  • The parameter_defaults, a.k.a. overridden parameters (these take priority and would override the passwords above)
  • The list of enabled environments (this looks nicer for plans created from the UI, as they are all munged into one user-environment.yaml file when deploying from CLI - see bug 1640861)
$ mistral environment-get <plan-name>

For example, with an SSL-deployment done from the UI:

$ mistral environment-get ssl-overcloud
| Field       | Value                                                                             |
| Name        | ssl-overcloud                                                                     |
| Description | <none>                                                                            |
| Variables   | {                                                                                 |
|             |     "passwords": {                                                                |
|             |         "KeystoneFernetKey1": "V3Dqp9MLP0mFvK0C7q3HlIsGBAI5VM1aW9JJ6c5lLjo=",     |
|             |         "KeystoneFernetKey0": "ll6gbwcbhyAi9jNvBnpWDImMmEAaW5dog5nRQvzvEz4=",     |
|             |         "HAProxyStatsPassword": "NXgvwfJ23VHJmwFf2HmKMrgcw",                      |
|             |         "HeatPassword": "Fs7K3CxR636BFhyDJWjsbAQZr",                              |
|             |         "ManilaPassword": "Kya6gr2zp2x8ApD6wtwUUMcBs",                            |
|             |         "NeutronPassword": "x2YK6xMaYUtgn8KxyFCQXfzR6",                           |
|             |         "SnmpdReadonlyUserPassword": "5a81d2d83ee4b69b33587249abf49cd672d08541",  |
|             |         "GlancePassword": "pBdfTUqv3yxpH3BcPjrJwb9d9",                            |
|             |         "AdminPassword": "KGGz6ApEDGdngj3KMpy7M2QGu",                             |
|             |         "IronicPassword": "347ezHCEqpqhmANK4fpWK2MvN",                            |
|             |         "HeatStackDomainAdminPassword": "kUk6VNxe4FG8ECBvMC6C4rAqc",              |
|             |         "ZaqarPassword": "6WVc8XWFjuKFMy2qP2qqqVk82",                             |
|             |         "MysqlClustercheckPassword": "M8V26MfpJc8FmpG88zu7p3bpw",                 |
|             |         "GnocchiPassword": "3H6pmazAQnnHj24QXADxPrguM",                           |
|             |         "CephAdminKey": "AQDloEFYAAAAABAAcCT546pzZnkfCJBSRz4C9w==",               |
|             |         "CeilometerPassword": "6DfAKDFdEFhxWtm63TcwsEW2D",                        |
|             |         "CinderPassword": "R8DvNyVKaqA44wRKUXEWfc4YH",                            |
|             |         "RabbitPassword": "9NeRMdCyQhekJAh9zdXtMhZW7",                            |
|             |         "CephRgwKey": "AQDloEFYAAAAABAACIfOTgp3dxt3Sqn5OPhU4Q==",                 |
|             |         "TrovePassword": "GbpxyPdnJkUCjXu4AsjmgqZVv",                             |
|             |         "KeystoneCredential0": "1BNiiNQjthjaIBnJd3EtoihXu25ZCzAYsKBpPQaV12M=",    |
|             |         "KeystoneCredential1": "pGZ4OlCzOzgaK2bEHaD1xKllRdbpDNowQJGzJHo6ETU=",    |
|             |         "CephClientKey": "AQDloEFYAAAAABAAoTR3S00DaBpfz4cyREe22w==",              |
|             |         "NovaPassword": "wD4PUT4Y4VcuZsMJTxYsBTpBX",                              |
|             |         "AdminToken": "hdF3kfs6ZaCYPUwrTzRWtwD3W",                                |
|             |         "RedisPassword": "2bxUvNZ3tsRfMyFmTj7PTUqQE",                             |
|             |         "MistralPassword": "mae3HcEQdQm6Myq3tZKxderTN",                           |
|             |         "SwiftHashSuffix": "JpWh8YsQcJvmuawmxph9PkUxr",                           |
|             |         "AodhPassword": "NFkBckXgdxfCMPxzeGDRFf7vW",                              |
|             |         "CephClusterFSID": "3120b7cc-b8ac-11e6-b775-fa163e0ee4f4",                |
|             |         "CephMonKey": "AQDloEFYAAAAABAABztgp5YwAxLQHkpKXnNDmw==",                 |
|             |         "SwiftPassword": "3bPB4yfZZRGCZqdwkTU9wHFym",                             |
|             |         "CeilometerMeteringSecret": "tjyywuf7xj7TM7W44mQprmaC9",                  |
|             |         "NeutronMetadataProxySharedSecret": "z7mb6UBEHNk8tJDEN96y6Acr3",          |
|             |         "BarbicanPassword": "6eQm4fwqVybCecPbxavE7bTDF",                          |
|             |         "SaharaPassword": "qx3saVNTmAJXwJwBH8n3w8M4p"                             |
|             |     },                                                                            |
|             |     "parameter_defaults": {                                                       |
|             |         "OvercloudControlFlavor": "control",                                      |
|             |         "ComputeCount": "2",                                                      |
|             |         "ControllerCount": "3",                                                   |
|             |         "OvercloudComputeFlavor": "compute",                                      |
|             |         "NtpServer": ""                                  |
|             |     },                                                                            |
|             |     "environments": [                                                             |
|             |         {                                                                         |
|             |             "path": "overcloud-resource-registry-puppet.yaml"                     |
|             |         },                                                                        |
|             |         {                                                                         |
|             |             "path": "environments/inject-trust-anchor.yaml"                       |
|             |         },                                                                        |
|             |         {                                                                         |
|             |             "path": "environments/tls-endpoints-public-ip.yaml"                   |
|             |         },                                                                        |
|             |         {                                                                         |
|             |             "path": "environments/enable-tls.yaml"                                |
|             |         }                                                                         |
|             |     ],                                                                            |
|             |     "template": "overcloud.yaml"                                                  |
|             | }                                                                                 |
| Scope       | private                                                                           |
| Created at  | 2016-12-02 16:27:11                                                               |
| Updated at  | 2016-12-06 21:25:35                                                               |

Note: 'environment' is an overloaded word in the TripleO world, so be careful. It can refer to a Heat environment, a Mistral environment, specific templates (e.g. TLS/SSL, Storage...), your whole setup, ...

Bonus track

There is documentation on going from zero (no plan, no nodes registered) to running a deployment, directly using Mistral:

Also, with the way we work with Mistral and Zaqar, you can switch between the UI and the CLI, or even use Mistral directly, at any point in the process.


Thanks to Dougal for his feedback on the initial outline!


EFI, Linux and other boot-loading fun a.k.a. where's my Grub gone

I've been gaming more recently, on hardware that is decent-in-some-contexts-but-definitely-not-in-this-one which has meant pathetic frame rates as low as 7~12FPS as my save files grew bigger. A kind soul took pity on me and installed a better graphics card in my machine while I wasn't looking - which of course is when everything started going wrong.

I'll gloss over the "Not turning on" part because that was due to mislabelled wires - the real problem began when Windows for some reason picked up that something had changed at boot time, and promptly overwrote the boot loader.

Cannot boot from USB keys

None of my LiveUSB sticks would boot.

This turned out to be due to the device (or the system on it?) not being compatible with EFI. I'm not sure how to make an EFI-compatible Live USB system, and in the end I didn't need to - if you absolutely must, enabling CSM mode ("Compatibility Support Module") in the BIOS was useful there, but it likely wouldn't have helped with fixing my boot loader: EFI and legacy OSes shouldn't be dual-booted in parallel - you can read more about this at the beginning of that excellent page.

Side-note: "chroot: cannot execute /bin/sh"

That was because the Live USB stick turned out to be a 32-bit system, while my desktop OS is 64-bit.

Where's my EFI partition anyway

I found an old install CD for Debian testing 7.10 (64 bits) lying around that turned out to have a Rescue mode option.

To prepare the system for the chroot that would fix All My Problems, first I had to figure out what was on which partition. The rescue mode let me mount them one by one and take a peek, though using parted would have been much faster.

# parted /dev/sda print
Model: Blah
Disk /dev/sda: 1000GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system     Name                          Flags
 1      1049kB  420MB   419MB   ntfs            Basic data partition          hidden, diag
 2      420MB   735MB   315MB   fat32           EFI system partition          boot, esp
 3      735MB   869MB   134MB                   Microsoft reserved partition  msftres
 4      869MB   247GB   247GB   ntfs            Basic data partition          msftdata
 6      247GB   347GB   100GB   ext4            debian-root                   msftdata
etc etc etc

So my Debian partition is on /dev/sda6, the EFI stuff is on /dev/sda2. I need to make a Debian chroot and reinstall Grub from there - that's the kind of stuff I learnt last time I broke a lot of things.

Let's chroot and grub!

After selecting the "Execute a shell in Installer environment" option:

# mount /dev/sda6 /mnt
# mount -o bind /dev /mnt/dev
# chroot /mnt

Error: update-grub: device node not found

This one, I think, happened because I had only bind-mounted /dev, when you also need things like /dev/pts, /sys and /proc.

# for i in /dev /dev/pts /proc /sys ; do mount -o bind $i /mnt$i ; done

Error: grub-install /dev/sda: cannot find EFI directory

That one was before I figured out I also needed to mount the special EFI boot partition - sda2, as shown in the parted output above.

# mount /dev/sda2 /mnt/boot/efi

From then on I read about the efibootmgr tool and decided to try and use that instead.

Error: efibootmgr: EFI variables are not supported on this system

Outside the chroot, you need to load the efivars module:

# modprobe efivars

How are things looking by now

# modprobe efivars
# mount /dev/sda6 /mnt
# mount /dev/sda2 /mnt/boot/efi
# for i in /dev /dev/pts /proc /sys ; do mount -o bind $i /mnt$i ; done
# chroot /mnt

Usually I can never remember the mount syntax (does the mount point come first or second?!) but I typed these commands so many times today, I bet I'll remember the syntax for at least a week.

Playing with efibootmgr

I tried to use the following command from the "Managing EFI Boot Loaders for Linux" page but it didn't quite work for me.

# efibootmgr -c -d /dev/sda -p 2 -l \\EFI\\debian\\grubx64.efi -L Linux

While this did add a Linux entry to my F12 BIOS boot menu (yay!), that booted straight into a black screen ("Reboot and Select proper Boot Device or Insert Boot Media" etc etc). Later on I learnt about the efibootmgr --verbose command which shows the difference between the working entry and the non-working one:

# efibootmgr --verbose
Boot0000* debian    HD(2,c8800,96000,0123456789-abcd-1234)File(\EFI\debian\grubx64.efi)
Boot0005  Linux    HD(2,c8800,96000,0123456789-abcd-1234)File(\EFI\redhat\grub.efi)\EFI\debian\grubx64.efi

I'm not quite sure how the path ended up looking like that. It could be a default somewhere, or I'm quite willing to believe I did something wrong - I also made a mistake on the partition number when I first ran the command.

But how did you fix it?!

Despite efibootmgr showing all the options I wanted within the chroot, running grub-install and update-grub multiple times did nothing: I'd still boot straight into Windows, or straight into a black screen. The strange thing is that even though only "Windows Boot Manager" and my new "Linux" entry appeared in the F12 boot menu, the BIOS setup did offer a 'debian' entry (created automatically at install time, a long time ago) in the boot ordering options. Moving it around didn't change a thing though.

The efibootmgr man page talks of a "BootNext" option. With the 'debian' entry right in front of me, why not try it? It's entry Boot0000 on my list, therefore:

# efibootmgr -n 0

Ta-dah! I rebooted straight into Grub then Debian. From there, grub-install /dev/sda and update-grub worked just fine.

Things I still don't know

  • Why did this happen in the first place? Can I prevent it from happening again?
  • I'm not sure why the grub install properly worked that last time. Did I miss something when working from the chroot?
  • Where does the "redhat\grub.efi" line come from, and can I amend that?
  • Why does Windows take so long to restart each time, when I don't even log in?


  • Linux on UEFI: A quick installation guide - I found this site incredibly informational.
  • The efibootmgr man page is very nice and contains useful examples.
  • GrubEFIReinstall - I probably would have tried that tool at some point. I postponed because I didn't have an easy way to burn a CD and wasn't sure about how to boot from USB without enabling CSM.
  • Booting with EFI, a blog entry about fallback boot loaders. While this didn't help in my case, I found the article very clear and enjoyed getting an explanation of the context around the problem in addition to the solution.

Punch line: the graphics card wasn't the bottleneck and my frame rate still hovers around 9FPS.


Switching locale (Gnome 3, Fedora 20) and making sure all the menus switch too

In the spirit of immersion, I switched the laptop's language to Japanese; however, most of the Gnome menus remained in English even after switching the System Language altogether, logging out, etc. Another profile on the laptop shows the menus for Activities etc. in Japanese just fine, so it wasn't a missing package. I found the following in ~/.profile, which seems a bit heavy-handed but does do the trick. For future reference:


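For reference, forcing the locale from ~/.profile generally looks like this (a sketch of the standard variables, not necessarily my exact lines):

```shell
# Sketch: force the whole session to Japanese from ~/.profile.
# Heavy-handed, but it catches the menus that ignore the System Language setting.
export LANG=ja_JP.UTF-8
export LANGUAGE=ja_JP
export LC_ALL=ja_JP.UTF-8
```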
Note: on restarting X, you are asked whether you want to change the names of the default folders (Documents, etc.) to the new language. Personally I find that makes using the CLI a pain and don't recommend it.



I'm taking a year out to travel in Japan on a working holiday visa, starting with a couple of months of studying the language some more. I landed about 2 weeks ago. The school put me at a slightly higher level than I had dared to hope and I am now learning new words, new expressions, new nuances every day and generally having a tremendous time :) I'm exactly where I want to be, doing and learning what I want to.

I'm experimenting with posting pictures on Flickr as I go, we'll see how that works out!

Also, a note for future-self and other folks about to travel with a cold: the so-called "ear planes" work great to avoid ear pain on descent. I put them in about 1h30 before landing on the second flight and it was painless, quite a contrast with the previous flight 10 hours earlier. Nasal spray helps a bit for comfort too.

