TripleO Deep Dive: Internationalisation in the UI

Yesterday, as part of the TripleO Deep Dives series I gave a short introduction to internationalisation in TripleO UI: the technical aspects of it, as well as a quick overview of how we work with the I18n team.

You can catch the recording on BlueJeans or YouTube, and below's a transcript.

~

Life and Journey of a String

Internationalisation was added to the UI during Ocata - just a release ago. Florian implemented most of it and did the lion's share of the work, as can be seen on the blueprint if you're curious about the nitty-gritty details.

Addition to the codebase

Here's an example patch from the transition period. On the left you can see how things were hard-coded, and on the right you can see the new defineMessages() interface we now use. Obviously, new patches nowadays should directly use the style on the right-hand side.

The defineMessages() dictionary requires a unique id and a default English string for every message. Optionally, you can also provide a description if you think there could be confusion or if the meaning needs clarifying. The description will be shown in Zanata to the translators - remember they see no other context, only the string itself.

For example, a string might sound active, as if it were related to an action or button, but actually be a descriptive help string. And some expressions are known to be confusing in English - "provide a node" has been the source of multiple discussions on the mailing list and in person - so we might as well pre-empt questions and offer additional context to help the translators decide on an appropriate translation.

Extraction & conversion

Now we know how to add an internationalised string to the codebase - how do these get extracted into a file that will be uploaded to Zanata?

All of the following steps are described in the translation documentation in the tripleo-ui repository. Assuming you've already run the installation steps (basically, npm install):

$ npm run build

This does a lot more than just extracting strings - it prepares the code for being deployed in production. Once it completes, you'll find your newly extracted messages under the i18n directory:

$ ls i18n/extracted-messages/src/js/components

You can see the directory structure is kept the same as the source code. And if you peek into one of the files, you'll note the content is basically the same as what we had in our defineMessages() dictionary:

$ cat i18n/extracted-messages/src/js/components/Login.json 
[
  {
    "id": "UserAuthenticator.authenticating",
    "defaultMessage": "Authenticating..."
  },
  {
    "id": "Login.username",
    "defaultMessage": "Username"
  },
  {
    "id": "Login.usernameRequired",
    "defaultMessage": "Username is required."
  },
[...]

However, JSON is not a format that Zanata understands by default. I think the latest version we upgraded to, or the next one, might have some support for it, but since there's no i18n JSON standard it's somewhat limited. In open-source software projects, po/pot files are generally the standard to go with.

$ npm run json2pot

> tripleo-ui@7.1.0 json2pot /home/jpichon/devel/tripleo-ui
> rip json2pot ./i18n/extracted-messages/**/*.json -o ./i18n/messages.pot

> [react-intl-po] write file -> ./i18n/messages.pot ✔️

$ cat i18n/messages.pot 
msgid ""
msgstr ""
"POT-Creation-Date: 2017-07-07T09:14:10.098Z\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"MIME-Version: 1.0\n"
"X-Generator: react-intl-po\n"


#: ./i18n/extracted-messages/src/js/components/nodes/RegisterNodesDialog.json
#. [RegisterNodesDialog.noNodesToRegister] - undefined
msgid ""No Nodes To Register""
msgstr ""

#: ./i18n/extracted-messages/src/js/components/nodes/NodesToolbar/NodesToolbar.json
#. [Toolbar.activeFilters] - undefined
#: ./i18n/extracted-messages/src/js/components/validations/ValidationsToolbar.json
#. [Toolbar.activeFilters] - undefined
msgid "Active Filters:"
msgstr ""

#: ./i18n/extracted-messages/src/js/components/nodes/RegisterNodesDialog.json
#. [RegisterNodesDialog.addNew] - Small button, to add a new Node
msgid "Add New"
msgstr ""

#: ./i18n/extracted-messages/src/js/components/plan/PlanFormTabs.json
#. [PlanFormTabs.addPlanName] - Tooltip for "Plan Name" form field
msgid "Add a Plan Name"
msgstr ""
[...]

This messages.pot file is what will be automatically uploaded to Zanata.

Infra: from the git repo, to Zanata

The following steps are done by the infrastructure scripts. There's infra documentation on how to enable translations for your project; in our case, as the first internationalised JavaScript project, we had to update the scripts a little as well. This is useful to know if an issue happens with the infra jobs; debugging will probably bring you here.

The scripts live in the project-config infra repo, and there are three files of interest for us: upstream_translation_update.sh, common_translations_update.sh, and propose_translation_update.sh.

In this case, upstream_translation_update.sh is the file of interest to us: it simply sets up the project on line 76, then sends the pot file up to Zanata on line 115.

What does "setting up the project" entails? It's a function in common_translations_update.sh, that pretty much runs the steps we talked about in the previous section, and also creates a config file to talk to Zanata.

Monitoring the post jobs

Post jobs run after a patch has already merged - usually to upload tarballs where they should be, update the documentation pages, etc., and also to upload message catalogues onto Zanata. Being a 'post' job, however, means that if something goes wrong there is no notification on the original review, so it's easy to miss.

Here's the OpenStack Health page to monitor 'post' jobs related to tripleo-ui. Scroll to the bottom - hopefully tripleo-ui-upstream-translation-update is still green! It's good to keep an eye on it although it's easy to forget. Thankfully, AJaeger from #openstack-infra has been great at filing bugs and letting us know when something does go wrong.

Debugging when things go wrong: an example

We had a couple of issues where a line break got introduced into one of the strings, which works fine in JSON but breaks our pot file. If you look at the content from the bug (the full logs are no longer accessible):

2017-03-16 12:55:13.468428 | + zanata-cli -B -e push --copy-trans False
[...]
2017-03-16 12:55:15.391220 | [INFO] Found source documents:
2017-03-16 12:55:15.391405 | [INFO]            i18n/messages
2017-03-16 12:55:15.531164 | [ERROR] Operation failed: missing end-quote

You'll notice the first line is the last function we call in the upstream_translation_update.sh script; for debugging that gives you an idea of the steps to follow to reproduce. The upstream Zanata instance also lets you create toy projects, if you want to test uploads yourself (this can't be done directly on the OpenStack Zanata instance.)

This particular newline issue has popped up a couple of times already. We're treating it with band-aids at the moment; ideally we'd get a proper test on the gate to prevent it from happening again, which is why this bug is still open. I'm not very familiar with JavaScript testing and haven't had a chance to look into it yet; if you'd like to give it a shot that'd be a useful contribution :)
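
In the meantime, if you have gettext installed you can sanity-check the generated pot file locally before it ever reaches Zanata - a manual band-aid rather than a gate check:

$ npm run json2pot
$ msgfmt --check -o /dev/null i18n/messages.pot

msgfmt should complain about malformed entries (such as a stray line break inside a string) without you having to do a test upload.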

Zanata, and contributing translations

The OpenStack Zanata instance lives at https://translate.openstack.org. This is where the translators do their work. Here's the page for tripleo-ui; you can see there is one project per branch (stable/ocata and master, for now). Sort by "Percent Translated" to see the languages currently translated. Here's an example of the translator's view, for Spanish: you can see the English string on the left, and the translator fills in the right side. No context! Just strings.

At this stage of the release cycle, the focus would be on 'master,' although it is still early to do translations; there is a lot of churn still.

If you'd like to contribute translations, the I18n team has good documentation about how to get started. The short version: sign up on Zanata, request to join your language team, and once you're approved - you're good to go!

Return of the string

Now that we have our strings available in multiple languages, it's time for another infra job to kick in and bring them into our repository. This is where propose_translation_update.sh comes in. We pull the po files from Zanata, convert them to JSON, then do a git commit that will be proposed to Gerrit.
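
As a rough sketch, the job boils down to something like the following (the real script has more logic around cleanups, thresholds and commit handling):

$ zanata-cli -B -e pull
$ npm run po2json
$ git commit -a -m "Imported Translations from Zanata"
$ git review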

The cleanup step does more than it might seem. It checks if files are translated over a certain ratio (~75% for code), which avoids adding new languages when there might only be one or two words translated (e.g. someone just testing Zanata to see how it works). Switching to your language and yet having the vast majority of the UI still appear in English is not a great user experience.

In theory, files that were added but have since dropped below 40% should get automatically removed; however, this doesn't quite work for JavaScript at the moment - another opportunity to help! Manual cleanups can be done in the meantime, but it's a rare event so not a major issue.

Monitoring the periodic jobs

Zanata is checked once a day, every morning; there is an OpenStack Health page for this as well. You can see there are two jobs at the moment (hopefully green!), one per branch: tripleo-ui-propose-translation-update and tripleo-ui-propose-translation-update-ocata. The job should run every day even if there are no updates - it simply means there might not be a git review proposed at the end.

We haven't had issues with the periodic job so far, though the debugging process would be the same: figure out based on the failure if it is happening at the infra script stage or in one of our commands (e.g. npm run po2json), try to reproduce and fix. I'm sure super-helpful AJaeger would also let us know if he were to notice an issue here.

Automated patches

You may have seen the automated translation updates pop up on Gerrit. The commit message has some tips on how to review these: basically, don't agonise over the translation contents, as problems there should be handled in Zanata anyway; just make sure the format looks good and is unlikely to break the code. A JSON validation tool runs during the infra prep step in order to "prettify" the JSON blob and limit the size of the diffs, so once the patch makes it out to Gerrit we know the JSON is at least well-formed.
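
If you want to double-check the JSON locally while reviewing, any validator will do; for instance (the locale file path here is just an illustration):

$ python -m json.tool i18n/locales/de.json > /dev/null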

Try to review these patches quickly, out of respect for the translators' work. It's not very nice to spend a lot of time translating a project and yet not have your work included because no one could be bothered to merge it :)

A note about new languages...

If the automated patch adds a new language, there'll be an additional step required after merging the translations in order to enable it: adding a string with the language name to a constants file. Until recently, this took 3 or 4 steps - thanks to Honza for making it much simpler!

This concludes the technical journey of a string. If you'd like to help with i18n tasks, we have a few related bugs open. They range from very simple low-hanging fruit you could use to make your first contribution to the UI, to weird buttons that have translations available yet still show in English (but only in certain modals), to the kind of CI resiliency tasks I linked to earlier. Something for everyone! ;)

Working with the I18n team

It's really all about communication. Starting with...

Release schedule and string freezes

String freezes are noted on the main schedule but tend to assume the regular cycle-with-milestones work. This is a problem for a cycle-trailing project like tripleo-ui, as we could be implementing features up to 2 weeks after the other projects, so we can't freeze strings that early.

There were discussions at the Atlanta PTG around whether the I18n team should care at all about projects that don't respect the freeze deadlines. That would have made it impossible for projects like ours to ever make it onto the I18n official radar. The compromise was that cycle-trailing projects should have an I18n cross-project liaison who communicates with the I18n PTL and team to inform them of deadlines, and that they should ignore the Soft Freeze and only do a Hard Freeze.

This will all be documented under an i18n governance tag; in the meantime, the notes from the sessions are available for the curious!

What's a String Freeze again?

The two are defined on the schedule: a soft freeze means not allowing changes to existing strings, as that invalidates the translators' work and forces them to retranslate; a hard freeze means no additions, changes or anything else, in order to give translators a chance to catch up.

When we looked at Zanata earlier, there were translation percentages beside each language: the goal is always the satisfaction of reaching 100%. If we keep adding new strings then the goalpost keeps moving, which is discouraging and unfair.

Of course there's also an "exception process" when needed, to ask for permission to merge a string change with an explanation or at least a heads-up, by sending an email to the openstack-i18n mailing list. Not to be abused :)

Role of the I18n liaison

...Liaise?! Haha. The role is defined briefly on the Cross-Projects Liaison wiki page. It's much more important toward the end of the cycle, when the codebase starts to stabilise, there are fewer changes, and translators start their work so it can be included in the release.

In general it's good to hang out on the #openstack-i18n IRC channel (very low traffic), attend the weekly meeting (it alternates times), be available to answer questions, and keep the PTL informed of the I18n status of the project. In the case of cycle-trailing projects (quite a new release model still), it's also important to be around to explain the deadlines.

A couple of examples of where having an active liaison helps:

  • Toward the end of or after the release, once translations on the stable branch have settled, the stable translations get copied into the master branch on Zanata. The strings should still be fairly similar at that point, and it saves translators from having to redo the work. It's a manual process, so you need to let the I18n PTL know when there are no longer changes to stable/*.
  • Last cycle, because the cycle-trailing status of tripleo-ui was not correctly documented, a Zanata upgrade was planned right after the main release - which for us ended up being right when the codebase had stabilised enough and several translators had planned to be most active. It would have been avoided with better, earlier communication :)

Post-release

After the Ocata release, I sent a few screenshots of tripleo-ui to the i18n list so translators could see the result of their work. I don't know if anybody cared :-) But unlike Horizon, which has an informal test system available for translators to check their strings during the RC period, most of the people who volunteered translations had no idea what the UI looked like. It'd be cool if we could offer a test system with regular string updates next release - maybe just an undercloud on the new RDO cloud? Deployment success/failure strings wouldn't be verifiable but the rest would be, and the system would be easier to maintain than a full dev TripleO environment - better than nothing. Perhaps an idea for the Queens cycle!

The I18n team has a priority board on the Zanata main page (only visible when logged in I think). I'm grateful to see TripleO UI in there! :) Realistically we'll never move past Low or perhaps Medium priority which is fair, as TripleO doesn't have the same kind of reach or visibility that Horizon or the installation guides do. I'm happy that we're included! The OpenStack I18n team is probably the most volunteer-driven team in OpenStack. Let's be kind, respect string freezes and translators' time! \o/

</braindump>


A Quick Introduction to Mistral Usage in TripleO (Newton) - for developers

Since Newton, Mistral has become a central component of the TripleO project, handling many of the operations in the back-end. I recently gave a few people a short crash course on Mistral, what it is and how we use it, and thought it might be useful to share some of my bag of tricks here as well.

What is Mistral?

It's a workflow service. You describe what you want as a series of steps (tasks) in YAML, and it will coordinate things for you, usually asynchronously.

Link: Mistral overview.

We are using it for a few reasons:

  • it lets us manage long-running processes (e.g. introspection) and track their state
  • it acts as a common interface/API, which is currently used by both the TripleO CLI and UI (thus avoiding duplication), and can also be consumed directly by external non-OpenStack consumers (e.g. ManageIQ).

Terminology

A workbook contains multiple workflows. (The TripleO workbooks live at https://github.com/openstack/tripleo-common/tree/master/workbooks).

A workflow contains a series of 'tasks', which can be thought of as steps. We use the default 'direct' type of workflow in TripleO, which means tasks are executed in the order written, moving around based on the on-success and on-error values.
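
A related tip, similar to the one for actions below: you can list the workflows registered on your undercloud and dump a workflow's definition to see its tasks. The command names are from python-mistralclient, quoted from memory:

$ mistral workflow-list | grep tripleo
$ mistral workflow-get-definition <workflow-name>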

Every task calls to an action (or to another workflow), which is where the work actually gets done.

OpenStack services are automatically mapped into actions thanks to the mappings defined in Mistral, so we get a ton of actions for free already.

Useful tip: with the following commands you can see locally which actions are available, for a given project.

$ mistral action-list | grep $projectname

You can of course create your own actions. Which we do. Quite a lot.

$ mistral action-list | grep tripleo

An execution is what an instance of a running workflow is called, once you've started one.
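
Executions can be listed and inspected from the undercloud too - the same commands come in handy for debugging later on (execution-get is quoted from memory):

$ mistral execution-list
$ mistral execution-get <execution-id>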

Link: Mistral terminology (very detailed, with diagrams and examples).

Where the TripleO Mistral workflows live

https://github.com/openstack/tripleo-common/tree/master/workbooks
https://github.com/openstack/tripleo-common/tree/master/tripleo_common/actions

Let's look at a couple of examples.

A short one to start with: scaling down

https://github.com/openstack/tripleo-common/blob/156d2c/workbooks/scale.yaml#L8

It takes some input, starts with the 'delete_node' task and continues on to on-success or on-error depending on the action result.

Note: You can see we always end the workflow with send_message, which is a convention we use in the project. Even if an action failed and moves to on-error, the workflow itself should be successful (a failed workflow would indicate a problem at the Mistral level). We end with send_message because we want to let the caller know what was the result.

How will the consumer get to that result? We associate every workflow with a Zaqar queue. This is a TripleO convention, not a Mistral requirement. Each of our workflows takes a queue_name as input, and the clients are expected to listen to the Zaqar socket for that queue in order to receive the messages.
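
As an illustration of that convention, starting a workflow by hand looks something like this - the workflow name and the rest of the input are placeholders, the point is that queue_name is part of the input:

$ mistral execution-create <tripleo-workflow-name> '{"queue_name": "my-queue"}'

The CLI or UI then watches that Zaqar queue for the final send_message.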

Another point, about the action itself on line 20: tripleo.scale.delete_node is a TripleO-specific action, as indicated by the name. If you're interested in finding the code for it, you should look at the entry_points in setup.cfg for tripleo-common (the same repository where the workbooks live):

https://github.com/openstack/tripleo-common/blob/156d2c/setup.cfg#L81

which would lead you to the code at:

https://github.com/openstack/tripleo-common/blob/156d2c/tripleo_common/actions/scale.py#L52
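
Another way to find the mapping is to grep setup.cfg for the mistral.actions entry point group; the line for our action looks roughly like this (class name quoted from memory, so double-check against the repo):

$ grep 'tripleo.scale' setup.cfg
    tripleo.scale.delete_node = tripleo_common.actions.scale:ScaleDownAction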

A bit more complex: node configuration

https://github.com/openstack/tripleo-common/blob/156d2c/workbooks/baremetal.yaml#L402

It's "slightly more complex" in that it has a couple more tasks, and it also calls to another workflow (line 426). You can see it starts with a call to ironic.node_list in its first task at line 417, which comes for free with Mistral. No need to reimplement it.

Debugging notes on workflows and Zaqar

Each workflow creates a Zaqar queue, to send progress information back to the client (CLI, web UI, other...).

Sometimes these messages get lost and the process hangs. It doesn't mean the action didn't complete successfully.

  • Check the Zaqar processes are up and running: $ sudo systemctl | grep zaqar (this has happened to me after reboots)
  • Check Mistral for any errored workflow: $ mistral execution-list
  • Check the Mistral logs (executor.log and engine.log are usually where the interesting errors are)
  • Ocata has timeouts for some of the commands now, so this is getting better

Following a workflow through its execution via CLI

This particular example runs fairly fast, so it's more a case of tracing back what happened afterwards.

$ openstack overcloud plan create my-new-overcloud
Started Mistral Workflow. Execution ID: 05d550f2-5d13-4782-be7f-a775a1d86a84
Default plan created

The CLI nicely tells you which execution ID to look for, so let's use it:

$ mistral task-list 05d550f2-5d13-4782-be7f-a775a1d86a84

+--------------------------------------+---------------------------------+--------------------------------------------+--------------------------------------+---------+------------------------------+
| ID                                   | Name                            | Workflow name                              | Execution ID                         | State   | State info                   |
+--------------------------------------+---------------------------------+--------------------------------------------+--------------------------------------+---------+------------------------------+
| c6e0fef0-4e65-4ee6-9ae4-a6d9e8451fd0 | verify_container_doesnt_exist   | tripleo.plan_management.v1.create_default_ | 05d550f2-5d13-4782-be7f-a775a1d86a84 | ERROR   | Failed to run action [act... |
|                                      |                                 | deployment_plan                            |                                      |         |                              |
| 72c1310d-8379-4869-918e-62eb04530e46 | verify_environment_doesnt_exist | tripleo.plan_management.v1.create_default_ | 05d550f2-5d13-4782-be7f-a775a1d86a84 | ERROR   | Failed to run action [act... |
|                                      |                                 | deployment_plan                            |                                      |         |                              |
| 74438300-8b18-40fd-bf73-62a1d90f71b3 | create_container                | tripleo.plan_management.v1.create_default_ | 05d550f2-5d13-4782-be7f-a775a1d86a84 | SUCCESS | None                         |
|                                      |                                 | deployment_plan                            |                                      |         |                              |
| 667c0e4b-6f6c-447d-9325-ab6c20c8ad98 | upload_to_container             | tripleo.plan_management.v1.create_default_ | 05d550f2-5d13-4782-be7f-a775a1d86a84 | SUCCESS | None                         |
|                                      |                                 | deployment_plan                            |                                      |         |                              |
| ef447ea6-86ec-4a62-bca2-a083c66f96d3 | create_plan                     | tripleo.plan_management.v1.create_default_ | 05d550f2-5d13-4782-be7f-a775a1d86a84 | SUCCESS | None                         |
|                                      |                                 | deployment_plan                            |                                      |         |                              |
| f37ebe9f-b39c-4f7a-9a60-eceb80782714 | ensure_passwords_exist          | tripleo.plan_management.v1.create_default_ | 05d550f2-5d13-4782-be7f-a775a1d86a84 | SUCCESS | None                         |
|                                      |                                 | deployment_plan                            |                                      |         |                              |
| 193f65fb-502a-4e4c-9a2d-053966500990 | plan_process_templates          | tripleo.plan_management.v1.create_default_ | 05d550f2-5d13-4782-be7f-a775a1d86a84 | SUCCESS | None                         |
|                                      |                                 | deployment_plan                            |                                      |         |                              |
| 400d7e11-aea8-45c7-96e8-c61523d66fe4 | plan_set_status_success         | tripleo.plan_management.v1.create_default_ | 05d550f2-5d13-4782-be7f-a775a1d86a84 | SUCCESS | None                         |
|                                      |                                 | deployment_plan                            |                                      |         |                              |
| 9df60103-15e2-442e-8dc5-ff0d61dba449 | notify_zaqar                    | tripleo.plan_management.v1.create_default_ | 05d550f2-5d13-4782-be7f-a775a1d86a84 | SUCCESS | None                         |
|                                      |                                 | deployment_plan                            |                                      |         |                              |
+--------------------------------------+---------------------------------+--------------------------------------------+--------------------------------------+---------+------------------------------+

This gives you an idea of what Mistral did to accomplish the goal. You can also map it back to the workflow defined in tripleo-common to follow through the steps and find out what exactly was run. If the workflow stopped too early, this can give you an idea of where the problem occurred.
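
To dig further into one specific task - say, one of the ERRORed ones above - you can ask Mistral for its details and full result. Again, the command names are from python-mistralclient, quoted from memory:

$ mistral task-get c6e0fef0-4e65-4ee6-9ae4-a6d9e8451fd0
$ mistral task-get-result c6e0fef0-4e65-4ee6-9ae4-a6d9e8451fd0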

Side-note about plans and the ERRORed tasks above

As of Newton, information about deployment is stored in a "Plan" which is implemented as a Swift container together with a Mistral environment. This could change in the future but for now that is what a plan is. (Edited to add: this changed in Pike. The plan environment is now stored in Swift as well, in a file named plan-environment.yaml.)

To create a new plan, we need to make sure there isn't already a container or an environment with that name. We could implement this in an action in Python, or since Mistral already has commands to get a container / get an environment we can be clever about this and reverse the on-error and on-success actions compared to usual:

https://github.com/openstack/tripleo-common/blob/156d2c/workbooks/plan_management.yaml#L129

If we do get a 'container' back, it means a plan with that name already exists, so we cannot reuse the name: 'on-success' becomes the error condition.

I sometimes regret that we went this way, because it leaves exception tracebacks in the logs, which is misleading when folks look at the Mistral logs for the first time in order to debug some other issue.

Finally I'd like to end all this by mentioning the Mistral Quick Start tutorial, which is excellent. It takes you from creating a very simple workflow to following its journey through the execution.

How to create your own action/workflow in TripleO

Mistral documentation:

In short:

  • Start writing your python code, probably under tripleo_common/actions
  • Add an entry point referencing it to setup.cfg
  • /!\ Restart Mistral /!\ Action code is only picked up when Mistral starts

This is summarised in the TripleO common README (personally I put this in a script to easily rerun it all).
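
For reference, such a script is roughly along these lines - service and command names as found on a Newton/Ocata undercloud, so treat this as a sketch and defer to the README:

$ sudo python setup.py install
$ sudo systemctl restart openstack-mistral-executor
$ sudo systemctl restart openstack-mistral-engine
$ sudo mistral-db-manage --config-file /etc/mistral/mistral.conf populate
$ mistral action-list | grep <your-action-name>

The populate step is what re-registers the actions from the entry points, and the final action-list is just to confirm the new action was picked up.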

Back to deployments: what's in a plan

As mentioned earlier, a plan is the combination of a Swift container and a Mistral environment. In theory this is an implementation detail which shouldn't matter to deployers. In practice, knowing this gives you access to a few more debugging tricks.

For example, the templates you initially provided will be accessible through Swift.

$ swift list $plan-name
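
Any of those objects can be fetched for inspection, e.g. (pick an object name from the listing above):

$ swift download $plan-name overcloud.yaml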

Everything else will live in the Mistral environment. This contains:

  • The default passwords (which is a potential source of confusion)
  • The parameter_defaults, aka overridden parameters (these take priority and would override the passwords above)
  • The list of enabled environments (this looks nicer for plans created from the UI, as they are all munged into one user-environment.yaml file when deploying from CLI - see bug 1640861)

$ mistral environment-get $plan-name

For example, with an SSL-deployment done from the UI:

$ mistral environment-get ssl-overcloud
+-------------+-----------------------------------------------------------------------------------+
| Field       | Value                                                                             |
+-------------+-----------------------------------------------------------------------------------+
| Name        | ssl-overcloud                                                                     |
| Description | <none>                                                                            |
| Variables   | {                                                                                 |
|             |     "passwords": {                                                                |
|             |         "KeystoneFernetKey1": "V3Dqp9MLP0mFvK0C7q3HlIsGBAI5VM1aW9JJ6c5lLjo=",     |
|             |         "KeystoneFernetKey0": "ll6gbwcbhyAi9jNvBnpWDImMmEAaW5dog5nRQvzvEz4=",     |
|             |         "HAProxyStatsPassword": "NXgvwfJ23VHJmwFf2HmKMrgcw",                      |
|             |         "HeatPassword": "Fs7K3CxR636BFhyDJWjsbAQZr",                              |
|             |         "ManilaPassword": "Kya6gr2zp2x8ApD6wtwUUMcBs",                            |
|             |         "NeutronPassword": "x2YK6xMaYUtgn8KxyFCQXfzR6",                           |
|             |         "SnmpdReadonlyUserPassword": "5a81d2d83ee4b69b33587249abf49cd672d08541",  |
|             |         "GlancePassword": "pBdfTUqv3yxpH3BcPjrJwb9d9",                            |
|             |         "AdminPassword": "KGGz6ApEDGdngj3KMpy7M2QGu",                             |
|             |         "IronicPassword": "347ezHCEqpqhmANK4fpWK2MvN",                            |
|             |         "HeatStackDomainAdminPassword": "kUk6VNxe4FG8ECBvMC6C4rAqc",              |
|             |         "ZaqarPassword": "6WVc8XWFjuKFMy2qP2qqqVk82",                             |
|             |         "MysqlClustercheckPassword": "M8V26MfpJc8FmpG88zu7p3bpw",                 |
|             |         "GnocchiPassword": "3H6pmazAQnnHj24QXADxPrguM",                           |
|             |         "CephAdminKey": "AQDloEFYAAAAABAAcCT546pzZnkfCJBSRz4C9w==",               |
|             |         "CeilometerPassword": "6DfAKDFdEFhxWtm63TcwsEW2D",                        |
|             |         "CinderPassword": "R8DvNyVKaqA44wRKUXEWfc4YH",                            |
|             |         "RabbitPassword": "9NeRMdCyQhekJAh9zdXtMhZW7",                            |
|             |         "CephRgwKey": "AQDloEFYAAAAABAACIfOTgp3dxt3Sqn5OPhU4Q==",                 |
|             |         "TrovePassword": "GbpxyPdnJkUCjXu4AsjmgqZVv",                             |
|             |         "KeystoneCredential0": "1BNiiNQjthjaIBnJd3EtoihXu25ZCzAYsKBpPQaV12M=",    |
|             |         "KeystoneCredential1": "pGZ4OlCzOzgaK2bEHaD1xKllRdbpDNowQJGzJHo6ETU=",    |
|             |         "CephClientKey": "AQDloEFYAAAAABAAoTR3S00DaBpfz4cyREe22w==",              |
|             |         "NovaPassword": "wD4PUT4Y4VcuZsMJTxYsBTpBX",                              |
|             |         "AdminToken": "hdF3kfs6ZaCYPUwrTzRWtwD3W",                                |
|             |         "RedisPassword": "2bxUvNZ3tsRfMyFmTj7PTUqQE",                             |
|             |         "MistralPassword": "mae3HcEQdQm6Myq3tZKxderTN",                           |
|             |         "SwiftHashSuffix": "JpWh8YsQcJvmuawmxph9PkUxr",                           |
|             |         "AodhPassword": "NFkBckXgdxfCMPxzeGDRFf7vW",                              |
|             |         "CephClusterFSID": "3120b7cc-b8ac-11e6-b775-fa163e0ee4f4",                |
|             |         "CephMonKey": "AQDloEFYAAAAABAABztgp5YwAxLQHkpKXnNDmw==",                 |
|             |         "SwiftPassword": "3bPB4yfZZRGCZqdwkTU9wHFym",                             |
|             |         "CeilometerMeteringSecret": "tjyywuf7xj7TM7W44mQprmaC9",                  |
|             |         "NeutronMetadataProxySharedSecret": "z7mb6UBEHNk8tJDEN96y6Acr3",          |
|             |         "BarbicanPassword": "6eQm4fwqVybCecPbxavE7bTDF",                          |
|             |         "SaharaPassword": "qx3saVNTmAJXwJwBH8n3w8M4p"                             |
|             |     },                                                                            |
|             |     "parameter_defaults": {                                                       |
|             |         "OvercloudControlFlavor": "control",                                      |
|             |         "ComputeCount": "2",                                                      |
|             |         "ControllerCount": "3",                                                   |
|             |         "OvercloudComputeFlavor": "compute",                                      |
|             |         "NtpServer": "my.ntp-server.example.com"                                  |
|             |     },                                                                            |
|             |     "environments": [                                                             |
|             |         {                                                                         |
|             |             "path": "overcloud-resource-registry-puppet.yaml"                     |
|             |         },                                                                        |
|             |         {                                                                         |
|             |             "path": "environments/inject-trust-anchor.yaml"                       |
|             |         },                                                                        |
|             |         {                                                                         |
|             |             "path": "environments/tls-endpoints-public-ip.yaml"                   |
|             |         },                                                                        |
|             |         {                                                                         |
|             |             "path": "environments/enable-tls.yaml"                                |
|             |         }                                                                         |
|             |     ],                                                                            |
|             |     "template": "overcloud.yaml"                                                  |
|             | }                                                                                 |
| Scope       | private                                                                           |
| Created at  | 2016-12-02 16:27:11                                                               |
| Updated at  | 2016-12-06 21:25:35                                                               |
+-------------+-----------------------------------------------------------------------------------+

Note: 'environment' is an overloaded word in the TripleO world, so be careful: it could be a Heat environment, a Mistral environment, specific templates (e.g. TLS/SSL, Storage...), your whole setup, ...

Bonus track

There is documentation on going from zero (no plan, no nodes registered) to running a deployment, directly using Mistral: http://tripleo.org/mistral-api/mistral-api.html.

Also, given the way we work with Mistral and Zaqar, you can switch between the UI and CLI, or even use Mistral directly, at any point in the process.

~

Thanks to Dougal for his feedback on the initial outline!


Talk transcript: Making your first open-source contribution

Slides available on SlideShare (I need to fix an embedding bug on the blog, oops).

Most of you know what Open Source Software is and hopefully think it's awesome. Have you ever contributed a patch? A translation? A bug report? Would you like to do any of this?

(If this is the first time you hear the term, roughly open-source is software for which the source code is freely available, that can be redistributed and modified.)

This talk is for people who like open-source and would like to give back, but are not sure where to start, or if they're smart enough or if their skills would be valued.

And my goal is to encourage you to make the jump from 0 to 1 contribution (or one more!) :)

Because I'm a software developer, this talk focuses a bit more on code contributions. (Don't think this means other types of contributions aren't needed, help is always welcome everywhere!) For many years, I wasn't strongly attached to any particular project but got patches in here and there (Sugar, Gnome, Django...) Nowadays you're more likely to find me in the land of OpenStack.

First, let's clarify something important: you don't need to be a genius to contribute. It's not as difficult as it may initially appear. All skill levels and all types of skills are needed and appreciated.

As to where to start, first I'm going to tell you about a wonderful shortcut, and then I'll go through how it usually works.

OpenHatch is an organisation whose mission statement is: "helping newcomers find their way into free software projects." They aim to bring together all the simple bugs, help requests and "help wanted" tasks, and documentation, all sorted by project and by language. It's a great way to get started.

Now, the way it usually works. When you're looking to contribute to a project, you should first look around for contributor guidelines. If a project has contributor guidelines, it's an excellent sign. It means that they care and that they've thought about it, which is very important when it comes to first contributions.

Read them! You don't have to remember it all. It will give you a sense of direction and an idea of what matters for the project, what to watch out for too (patch format, coding conventions, where/who to talk to, etc.)

As an example, here's a screenshot of the Gnome documentation for new contributors. There's a lot in there! They care a lot about welcoming new people.

In any case, whether a project has contributor guidelines or not, the workflow tends to be the same. Read the guidelines if they exist :) From then on, look at the bug tracker to find something that you'd be interested in working on. Now that you know which part of the project you will be working on, build it. Find the bug, fix it and submit your patch!

I'm going to be using the first contribution I made to Gnome as an example throughout this presentation. The Gnome folks are a very friendly community, and have a special tag for bugs that are good for newcomers, called "gnome-love". In other projects this may be called "easy picking", "bite-sized", "low-hanging fruit", etc. It should be mentioned in the contributor guidelines.

Look at the list of bugs tagged with that keyword and find one you like that is easy. (Avoid choosing a big feature as a first contribution. You need to earn some trust first, as well as pick up the project technical and cultural conventions.)

Once you know what you will be working on, it's time to build the software. This is very important, you will be learning new tools, perhaps even new languages. For instance I chose a bug from gnome-terminal which is written in C and uses autotools, which I wasn't very familiar with. This is why it's important to select a small first bug, because you have a lot more to learn to get there.

If you can't find instructions to build the software, you have a couple of options. Sometimes a mentor comes together with the "easy pickings" tag, or the community has its own mentorship program. You can also go back to the magical portal mentioned earlier, OpenHatch, where "training missions" are available to get people up to speed with some of the tools commonly used in open-source.

People forget they had to learn all this in the first place. Don't sweat it. This is a stepping stone.

Once you have a working build, it's time to reproduce the bug, find it and fix it! If you're lucky, the location of the bug may have been mentioned in the bug report. As said earlier, the person who marked the bug as an easy task may be willing to help mentor you through it. It's ok to ask for help if you're having trouble.

As an example of a first task, I was lucky, and I indeed picked a very simple one - changing a colour in the gnome terminal!

Once your fix is ready and tested (you have rebuilt the software with your fix in it), it's time to submit the patch. Now is a good time to refer to the contributor guidelines once again: they will tell you what your patch should look like (coding guidelines), what command you should use to create the patch, and where you should send it. If there are no such instructions, the safest path is to attach the patch to the bug report in the tracker.

And... congratulations! You've made your first contribution to open-source. Take the time to pat yourself on the back and be proud, you deserve it.

The process isn't quite finished yet: the next step is to wait. Perhaps your patch will be accepted as-is, since it's a small contribution. It's just as likely you might get some feedback on how to improve it - everybody wants to make their project better, so don't take it personally; take the feedback on board and resubmit the patch. If your patch gets lost in the ether, you should consider pinging the component or project maintainer.

In my case... I was lucky :) The patch was accepted and merged in the main codebase about an hour later!

If the only answer you get is silence - don't take it personally, really. People want you to stay; a healthy project knows it needs new people in order to stay alive. Open-source projects are always short on resources.

If the people involved in the project you contributed to have a bad attitude, don't be discouraged. And don't stick around either, there are a ton of cool, useful projects with friendly communities who could use your contributions. Life's too short.

If you've done your research and read the guidelines, there won't be any dragons. Most people are very nice! It's great to become part of a community that cares about creating the most awesome project they can.

And as a closing word before opening for questions: remember OpenHatch.org. They are people dedicated to helping newcomers find their way.
