A Quick Introduction to Mistral Usage in TripleO (Newton): For developers

Since Newton, Mistral has become a central component of the TripleO project, handling many of the operations in the back-end. I recently gave a few people a short crash course on Mistral, what it is and how we use it, and thought it might be useful to share some of my bag of tricks here as well.

What is Mistral?

It's a workflow service. You describe what you want as a series of steps (tasks) in YAML, and it will coordinate things for you, usually asynchronously.

Link: Mistral overview.

We are using it for a few reasons:

  • it lets us manage long-running processes (e.g. introspection) and track their state
  • it acts as a common interface/API, currently used by both the TripleO CLI and UI (thus avoiding duplication), and can also be consumed directly by external non-OpenStack consumers (e.g. ManageIQ).

Terminology

A workbook contains multiple workflows. (The TripleO workbooks live at https://github.com/openstack/tripleo-common/tree/master/workbooks).

A workflow contains a series of 'tasks', which can be thought of as steps. We use the default 'direct' type of workflow in TripleO, which means tasks are executed in the order written, with transitions driven by each task's on-success and on-error values.

Every task calls an action (or another workflow), which is where the work actually gets done.

OpenStack services are automatically mapped into actions thanks to the mappings defined in Mistral, so we get a ton of actions for free already.

Useful tip: with the following command you can see locally which actions are available for a given project.

$ mistral action-list | grep $projectname

You can of course create your own actions. Which we do. Quite a lot.

$ mistral action-list | grep tripleo

An execution is what an instance of a running workflow is called, once you start one.
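
For instance, starting a workflow from Python hands you back an execution to track. A rough sketch with python-mistralclient (the workflow name, input, endpoint and token below are all illustrative placeholders):

from mistralclient.api import client

mistral = client.client(
    mistral_url='http://192.0.2.1:8989/v2',  # placeholder endpoint
    auth_token='TOKEN',                      # placeholder token
)

execution = mistral.executions.create(
    'tripleo.scale.v1.delete_node',             # illustrative workflow name
    workflow_input={'container': 'overcloud'},  # illustrative input
)
print(execution.id, execution.state)            # e.g. RUNNING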

Link: Mistral terminology (very detailed, with diagrams and examples).

Where the TripleO Mistral workflows live

https://github.com/openstack/tripleo-common/tree/master/workbooks
https://github.com/openstack/tripleo-common/tree/master/tripleo_common/actions

Let's look at a couple of examples.

A short one to start with: scaling down

https://github.com/openstack/tripleo-common/blob/156d2c/workbooks/scale.yaml#L8

It takes some input, starts with the 'delete_node' task and continues on to on-success or on-error depending on the action result.

Note: You can see we always end the workflow with send_message, which is a convention we use in the project. Even if an action fails and moves to on-error, the workflow itself should be successful (a failed workflow would indicate a problem at the Mistral level). We end with send_message because we want to let the caller know what the result was.

How will the consumer get to that result? We associate every workflow with a Zaqar queue. This is a TripleO convention, not a Mistral requirement. Each of our workflows takes a queue_name as input, and the clients are expected to listen on the Zaqar socket for that queue in order to receive the messages.
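
A rough sketch of such a consumer with python-zaqarclient (the endpoint, auth options and exact client calls here are written from memory, so treat them as illustrative rather than authoritative):

from zaqarclient.queues import client

zaqar = client.Client(
    url='http://192.0.2.1:8888',  # placeholder Zaqar endpoint
    version=2,
    conf={'auth_opts': {'options': {'os_auth_token': 'TOKEN',
                                    'os_project_id': 'PROJECT_ID'}}},
)

queue = zaqar.queue('my-queue-name')     # the workflow's queue_name input
for message in queue.claim(ttl=60, grace=60):
    print(message.body)                  # e.g. the payload from send_message
    message.delete()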

Another point, about the action itself on line 20: tripleo.scale.delete_node is a TripleO-specific action, as indicated by the name. If you want to find the code for it, look at the entry_points in setup.cfg for tripleo-common (where all the workflows live):

https://github.com/openstack/tripleo-common/blob/156d2c/setup.cfg#L81

which would lead you to the code at:

https://github.com/openstack/tripleo-common/blob/156d2c/tripleo_common/actions/scale.py#L52

A bit more complex: node configuration

https://github.com/openstack/tripleo-common/blob/156d2c/workbooks/baremetal.yaml#L402

It's "slightly more complex" in that it has a couple more tasks, and it also calls into another workflow (line 426). You can see it starts with a call to ironic.node_list in its first task at line 417, which comes for free with Mistral: no need to reimplement it.

Debugging notes on workflows and Zaqar

Each workflow creates a Zaqar queue, to send progress information back to the client (CLI, web UI, other...).

Sometimes these messages get lost and the process appears to hang; it doesn't necessarily mean the action didn't complete successfully.

  • Check the Zaqar processes are up and running: $ sudo systemctl | grep zaqar (this has happened to me after reboots)
  • Check Mistral for any errored workflow: $ mistral execution-list
  • Check the Mistral logs (executor.log and engine.log are usually where the interesting errors are)
  • Ocata has timeouts for some of the commands now, so this is getting better

Following a workflow through its execution via CLI

This particular example runs fairly fast, so it's more a case of tracing back what happened afterwards.

$ openstack overcloud plan create my-new-overcloud
Started Mistral Workflow. Execution ID: 05d550f2-5d13-4782-be7f-a775a1d86a84
Default plan created

The CLI nicely tells you which execution ID to look for, so let's use it:

$ mistral task-list 05d550f2-5d13-4782-be7f-a775a1d86a84

+--------------------------------------+---------------------------------+--------------------------------------------+--------------------------------------+---------+------------------------------+
| ID                                   | Name                            | Workflow name                              | Execution ID                         | State   | State info                   |
+--------------------------------------+---------------------------------+--------------------------------------------+--------------------------------------+---------+------------------------------+
| c6e0fef0-4e65-4ee6-9ae4-a6d9e8451fd0 | verify_container_doesnt_exist   | tripleo.plan_management.v1.create_default_ | 05d550f2-5d13-4782-be7f-a775a1d86a84 | ERROR   | Failed to run action [act... |
|                                      |                                 | deployment_plan                            |                                      |         |                              |
| 72c1310d-8379-4869-918e-62eb04530e46 | verify_environment_doesnt_exist | tripleo.plan_management.v1.create_default_ | 05d550f2-5d13-4782-be7f-a775a1d86a84 | ERROR   | Failed to run action [act... |
|                                      |                                 | deployment_plan                            |                                      |         |                              |
| 74438300-8b18-40fd-bf73-62a1d90f71b3 | create_container                | tripleo.plan_management.v1.create_default_ | 05d550f2-5d13-4782-be7f-a775a1d86a84 | SUCCESS | None                         |
|                                      |                                 | deployment_plan                            |                                      |         |                              |
| 667c0e4b-6f6c-447d-9325-ab6c20c8ad98 | upload_to_container             | tripleo.plan_management.v1.create_default_ | 05d550f2-5d13-4782-be7f-a775a1d86a84 | SUCCESS | None                         |
|                                      |                                 | deployment_plan                            |                                      |         |                              |
| ef447ea6-86ec-4a62-bca2-a083c66f96d3 | create_plan                     | tripleo.plan_management.v1.create_default_ | 05d550f2-5d13-4782-be7f-a775a1d86a84 | SUCCESS | None                         |
|                                      |                                 | deployment_plan                            |                                      |         |                              |
| f37ebe9f-b39c-4f7a-9a60-eceb80782714 | ensure_passwords_exist          | tripleo.plan_management.v1.create_default_ | 05d550f2-5d13-4782-be7f-a775a1d86a84 | SUCCESS | None                         |
|                                      |                                 | deployment_plan                            |                                      |         |                              |
| 193f65fb-502a-4e4c-9a2d-053966500990 | plan_process_templates          | tripleo.plan_management.v1.create_default_ | 05d550f2-5d13-4782-be7f-a775a1d86a84 | SUCCESS | None                         |
|                                      |                                 | deployment_plan                            |                                      |         |                              |
| 400d7e11-aea8-45c7-96e8-c61523d66fe4 | plan_set_status_success         | tripleo.plan_management.v1.create_default_ | 05d550f2-5d13-4782-be7f-a775a1d86a84 | SUCCESS | None                         |
|                                      |                                 | deployment_plan                            |                                      |         |                              |
| 9df60103-15e2-442e-8dc5-ff0d61dba449 | notify_zaqar                    | tripleo.plan_management.v1.create_default_ | 05d550f2-5d13-4782-be7f-a775a1d86a84 | SUCCESS | None                         |
|                                      |                                 | deployment_plan                            |                                      |         |                              |
+--------------------------------------+---------------------------------+--------------------------------------------+--------------------------------------+---------+------------------------------+

This gives you an idea of what Mistral did to accomplish the goal. You can also map it back to the workflow defined in tripleo-common to follow the steps and find out exactly what was run. If the workflow stopped too early, this can give you an idea of where the problem occurred.
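
You can do the same trace from Python. A rough sketch with python-mistralclient (placeholder endpoint and token; the keyword argument name is from memory):

from mistralclient.api import client

mistral = client.client(
    mistral_url='http://192.0.2.1:8989/v2',  # placeholder endpoint
    auth_token='TOKEN',                      # placeholder token
)

execution_id = '05d550f2-5d13-4782-be7f-a775a1d86a84'
for task in mistral.tasks.list(workflow_execution_id=execution_id):
    print(task.name, task.state, task.state_info)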

Side-note about plans and the ERRORed tasks above

As of Newton, information about a deployment is stored in a "Plan", which is implemented as a Swift container together with a Mistral environment. This could change in the future, but for now that is what a plan is.

To create a new plan, we need to make sure there isn't already a container or an environment with that name. We could implement this check as an action in Python, but since Mistral already has actions to get a container or get an environment, we can be clever about this and swap the usual on-error and on-success transitions:

https://github.com/openstack/tripleo-common/blob/156d2c/workbooks/plan_management.yaml#L129

If we do get a container back, it means a plan with that name already exists and we cannot reuse it: 'on-success' becomes the error condition.

I sometimes slightly regret going this way, because it leaves exception tracebacks in the logs, which is misleading when folks check the Mistral logs for the first time while debugging some other issue.

Finally I'd like to end all this by mentioning the Mistral Quick Start tutorial, which is excellent. It takes you from creating a very simple workflow to following its journey through the execution.

How to create your own action/workflow in TripleO

The Mistral documentation covers this in detail. In short:

  • Start writing your python code, probably under tripleo_common/actions
  • Add an entry point referencing it to setup.cfg
  • /!\ Restart Mistral /!\ Action code is only loaded when Mistral starts

This is summarised in the tripleo-common README (personally I put these steps in a script so I can easily rerun them all).
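
To make the shape concrete, here is a minimal sketch of a custom action, assuming the Newton-era mistral.actions base class (the class, module and entry point below are illustrative; TripleO's real actions extend their own helpers under tripleo_common/actions):

from mistral.actions import base


class GreetAction(base.Action):
    # whatever run() returns becomes the action's result
    def __init__(self, name):
        self.name = name

    def run(self):
        return 'Hello, %s!' % self.name

The matching setup.cfg entry would go under the mistral.actions entry point group, something like tripleo.greet = tripleo_common.actions.greet:GreetAction, after which Mistral needs a restart (and, if I remember correctly, a mistral-db-manage populate) before the action shows up in mistral action-list.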

Back to deployments: what's in a plan

As mentioned earlier, a plan is the combination of a Swift container and a Mistral environment. In theory this is an implementation detail which shouldn't matter to deployers. In practice, knowing this gives you access to a few more debugging tricks.

For example, the templates you initially provided will be accessible through Swift:

$ swift list $plan-name
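
Or programmatically, a rough sketch with python-swiftclient (placeholder endpoint and token):

from swiftclient import client as swift_client

conn = swift_client.Connection(
    preauthurl='http://192.0.2.1:8080/v1/AUTH_tenant',  # placeholder
    preauthtoken='TOKEN',                               # placeholder
)

headers, objects = conn.get_container('overcloud')  # the plan name
for obj in objects:
    print(obj['name'])                              # e.g. overcloud.yaml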

Everything else will live in the Mistral environment. This contains:

  • The default passwords (which is a potential source of confusion)
  • The parameter_defaults, a.k.a. overridden parameters (these take priority and would override the passwords above)
  • The list of enabled environments (this looks nicer for plans created from the UI, as they are all munged into one user-environment.yaml file when deploying from the CLI - see bug 1640861)

$ mistral environment-get $plan-name

For example, with an SSL-deployment done from the UI:

$ mistral environment-get ssl-overcloud
+-------------+-----------------------------------------------------------------------------------+
| Field       | Value                                                                             |
+-------------+-----------------------------------------------------------------------------------+
| Name        | ssl-overcloud                                                                     |
| Description | <none>                                                                            |
| Variables   | {                                                                                 |
|             |     "passwords": {                                                                |
|             |         "KeystoneFernetKey1": "V3Dqp9MLP0mFvK0C7q3HlIsGBAI5VM1aW9JJ6c5lLjo=",     |
|             |         "KeystoneFernetKey0": "ll6gbwcbhyAi9jNvBnpWDImMmEAaW5dog5nRQvzvEz4=",     |
|             |         "HAProxyStatsPassword": "NXgvwfJ23VHJmwFf2HmKMrgcw",                      |
|             |         "HeatPassword": "Fs7K3CxR636BFhyDJWjsbAQZr",                              |
|             |         "ManilaPassword": "Kya6gr2zp2x8ApD6wtwUUMcBs",                            |
|             |         "NeutronPassword": "x2YK6xMaYUtgn8KxyFCQXfzR6",                           |
|             |         "SnmpdReadonlyUserPassword": "5a81d2d83ee4b69b33587249abf49cd672d08541",  |
|             |         "GlancePassword": "pBdfTUqv3yxpH3BcPjrJwb9d9",                            |
|             |         "AdminPassword": "KGGz6ApEDGdngj3KMpy7M2QGu",                             |
|             |         "IronicPassword": "347ezHCEqpqhmANK4fpWK2MvN",                            |
|             |         "HeatStackDomainAdminPassword": "kUk6VNxe4FG8ECBvMC6C4rAqc",              |
|             |         "ZaqarPassword": "6WVc8XWFjuKFMy2qP2qqqVk82",                             |
|             |         "MysqlClustercheckPassword": "M8V26MfpJc8FmpG88zu7p3bpw",                 |
|             |         "GnocchiPassword": "3H6pmazAQnnHj24QXADxPrguM",                           |
|             |         "CephAdminKey": "AQDloEFYAAAAABAAcCT546pzZnkfCJBSRz4C9w==",               |
|             |         "CeilometerPassword": "6DfAKDFdEFhxWtm63TcwsEW2D",                        |
|             |         "CinderPassword": "R8DvNyVKaqA44wRKUXEWfc4YH",                            |
|             |         "RabbitPassword": "9NeRMdCyQhekJAh9zdXtMhZW7",                            |
|             |         "CephRgwKey": "AQDloEFYAAAAABAACIfOTgp3dxt3Sqn5OPhU4Q==",                 |
|             |         "TrovePassword": "GbpxyPdnJkUCjXu4AsjmgqZVv",                             |
|             |         "KeystoneCredential0": "1BNiiNQjthjaIBnJd3EtoihXu25ZCzAYsKBpPQaV12M=",    |
|             |         "KeystoneCredential1": "pGZ4OlCzOzgaK2bEHaD1xKllRdbpDNowQJGzJHo6ETU=",    |
|             |         "CephClientKey": "AQDloEFYAAAAABAAoTR3S00DaBpfz4cyREe22w==",              |
|             |         "NovaPassword": "wD4PUT4Y4VcuZsMJTxYsBTpBX",                              |
|             |         "AdminToken": "hdF3kfs6ZaCYPUwrTzRWtwD3W",                                |
|             |         "RedisPassword": "2bxUvNZ3tsRfMyFmTj7PTUqQE",                             |
|             |         "MistralPassword": "mae3HcEQdQm6Myq3tZKxderTN",                           |
|             |         "SwiftHashSuffix": "JpWh8YsQcJvmuawmxph9PkUxr",                           |
|             |         "AodhPassword": "NFkBckXgdxfCMPxzeGDRFf7vW",                              |
|             |         "CephClusterFSID": "3120b7cc-b8ac-11e6-b775-fa163e0ee4f4",                |
|             |         "CephMonKey": "AQDloEFYAAAAABAABztgp5YwAxLQHkpKXnNDmw==",                 |
|             |         "SwiftPassword": "3bPB4yfZZRGCZqdwkTU9wHFym",                             |
|             |         "CeilometerMeteringSecret": "tjyywuf7xj7TM7W44mQprmaC9",                  |
|             |         "NeutronMetadataProxySharedSecret": "z7mb6UBEHNk8tJDEN96y6Acr3",          |
|             |         "BarbicanPassword": "6eQm4fwqVybCecPbxavE7bTDF",                          |
|             |         "SaharaPassword": "qx3saVNTmAJXwJwBH8n3w8M4p"                             |
|             |     },                                                                            |
|             |     "parameter_defaults": {                                                       |
|             |         "OvercloudControlFlavor": "control",                                      |
|             |         "ComputeCount": "2",                                                      |
|             |         "ControllerCount": "3",                                                   |
|             |         "OvercloudComputeFlavor": "compute",                                      |
|             |         "NtpServer": "my.ntp-server.example.com"                                  |
|             |     },                                                                            |
|             |     "environments": [                                                             |
|             |         {                                                                         |
|             |             "path": "overcloud-resource-registry-puppet.yaml"                     |
|             |         },                                                                        |
|             |         {                                                                         |
|             |             "path": "environments/inject-trust-anchor.yaml"                       |
|             |         },                                                                        |
|             |         {                                                                         |
|             |             "path": "environments/tls-endpoints-public-ip.yaml"                   |
|             |         },                                                                        |
|             |         {                                                                         |
|             |             "path": "environments/enable-tls.yaml"                                |
|             |         }                                                                         |
|             |     ],                                                                            |
|             |     "template": "overcloud.yaml"                                                  |
|             | }                                                                                 |
| Scope       | private                                                                           |
| Created at  | 2016-12-02 16:27:11                                                               |
| Updated at  | 2016-12-06 21:25:35                                                               |
+-------------+-----------------------------------------------------------------------------------+
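
The same data is reachable programmatically too. A rough sketch with python-mistralclient (placeholder endpoint and token; the .variables attribute name is from memory):

from mistralclient.api import client

mistral = client.client(
    mistral_url='http://192.0.2.1:8989/v2',  # placeholder endpoint
    auth_token='TOKEN',                      # placeholder token
)

env = mistral.environments.get('ssl-overcloud')
print(env.variables['parameter_defaults'])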

Note: 'environment' is an overloaded word in the TripleO world, so be careful: it can mean a Heat environment, a Mistral environment, a specific template (e.g. TLS/SSL, Storage...), or your whole setup.

Bonus track

There is documentation on going from zero (no plan, no nodes registered) till running a deployment, directly using Mistral: http://tripleo.org/mistral-api/mistral-api.html.

Also, with the way we work with Mistral and Zaqar, you can switch between the UI and the CLI, or even use Mistral directly, at any point in the process.

~

Thanks to Dougal for his feedback on the initial outline!

Training at EuroPython 2014: Making your first contribution to OpenStack

Last week I ran a 3-hour training on how to get started contributing to OpenStack at EuroPython. The aim was to give a high-level overview of how the contribution process works in the project and guide people through making their first contribution, from account creation to submitting a patch.

Overview

The session starts with an extremely fast overview of OpenStack, geared toward giving the participants an idea of the different components and possible areas for contribution. We then go through creating the accounts, why they're all needed, and how to work with DevStack for the people who have installed it. From there we finally start talking about the contribution process itself and some general points on open-source and OpenStack culture, then go through a number of ideas for small tasks suitable for a first contribution. After that it's down to the participants to work on something and prepare a patch; some people chose to file and triage/confirm bugs instead. The last part is about making sure the patch matches the community standards, submitting it, and talking about what happens next, both to the patch and to the participant as a new member of the community.

Preparing

During the weeks preceding the event, I ran two pilot workshops with small groups (fewer than 10 people) at my local hackerspace, in preparation for the big one in Berlin. That was absolutely invaluable in making the material more understandable: it surfaced items I hadn't thought of covering initially (e.g. screen, openrc files), topics that could use more in-depth explanations (e.g. how to find your first task), and timings, and it generally gave me a feel for what's reasonably achievable within a 3-hour intro workshop.

Delivering

I think it went well, despite some issues at the beginning due to lack of Internet connectivity (always a problem during hands-on workshops!). About 70 people had signed up to attend (a.k.a. about 7 times too many); thankfully other members of the OpenStack community stepped up and offered their help as mentors - thanks again everyone! In the end, about half the participants showed up in the morning, and we lost another dozen to the Internet woes. The people who stayed were mostly enthusiastic and seemed happy with the experience. According to the session etherpad, at least 5 new contributors uploaded a first patch :) Three are merged so far.

Distributing the slides early proved popular and useful. For an interactive workshop with lots of links and references it's really helpful for people to go back on something they missed or want to check again.

Issues

The start of the workshop is a bit lecture-heavy and could be titled "Things I Desperately Wish I Knew When Starting Out", and although there are some quizzes/discussions/demos, I'd love to make it more interactive in the future.

The information requested in order to join the Foundation tends to surprise participants, I think because they come at it from the perspective of "I want to submit a patch" rather than "I am preparing to join a Foundation." At the hackerspace sessions in particular (maybe because it was easier to have candid discussions in such a small group), people weren't impressed with being forced to state an affiliation. The lack of an obvious answer for volunteers gave the impression that the project cares more about contributions from companies. "Tog" might make an appearance in the company stats in the future :-)

On the sign-up form, the "Statement of Interest" is intimidating and confusing for some people (I certainly remember being uncertain over mine and what was appropriate, back when I was new and joining the Foundation was optional!). I worked around this after the initial session by offering suggestions/tips for both these fields, and spoke a bit more about their purpose.

A few people suggested I simply tell people to sign up for all these accounts in advance so there's more time during the workshop to work on the contribution itself. It's an option, though a number of people still hit non-obvious issues with Gerrit that are difficult to debug (some we opened bugs for, others we added to the etherpad). During one of the pilot sessions at the hackerspace, 6 of the 7 participants had errors when running git review -s  - I'm still not sure why, as it Worked On My Machine (tm) just fine at the same time.


Overall, I'm glad I did this! It was interesting to extract all this information from my brain, wiki pages and docs and attempt to make it as consumable as possible. It's really exciting when people come back later to say they've made a contribution and that the session helped to make the process less scary and more comprehensible. Thanks to all the participants who took the time to offer feedback as well! I hope I can continue to improve on this.

Testing in Horizon: Unit testing for the OpenStack Dashboard

Belatedly, here are the notes for the design session/tutorial I gave about testing in Horizon at the OpenStack Summit in Portland, back in April. The etherpad is available over there. Session description:

The main aspect: the Horizon unit tests can be quite complex for new contributors and people extending Horizon to wrap their head around. Mocking with mox, the django unit testing framework, the openstack-specific parts of the testing framework, selenium, fixtures/test data handling, qunit... This session could work as a tutorial/tips and tricks on the different testing components. Common errors being thrown and how to debug them. If people could bring up their pain points, that would also be useful.

If there is time, it would be interesting to also address the issue from another angle and think on how to improve what we have, particularly on the Selenium front which has been quite unstable.

General structure

There are 3 main parts to Horizon testing (4 if you include the bits that come from the Python unit testing framework, but we won't get into those here: if you've done unit testing before, they're the usual set of assertions and scaffolding that come with any unit testing framework).

As an example to map all of this onto, I recommend keeping InstancesTest.test_index in the background.

Django unit testing

Docs: https://docs.djangoproject.com/en/1.4/topics/testing/

At the moment Horizon is compatible with 1.4 onwards. The django documentation is excellent and I recommend having a look. Thanks to django we get a lot of goodies for free to help with testing a web application. Among other things:

  • A test client, which mimics a very simple browser (no Javascript) to do GET and POST requests, and built-in tooling to have interesting interactions with the responses.
  • A bunch of additional assertions, to check the HTML, templates, etc., all documented in the link above.
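
As a tiny illustration, here is roughly what a view test using the test client and those assertions looks like, in the Horizon style (URL and template names are illustrative, and a real Horizon test would also record its API mocks first, as covered in the Tools section below):

from django.core.urlresolvers import reverse

from openstack_dashboard.test import helpers as test


class IndexViewTests(test.TestCase):
    def test_index(self):
        # GET the page through the fake browser and inspect the response
        response = self.client.get(
            reverse('horizon:project:instances:index'))
        self.assertEqual(response.status_code, 200)
        self.assertTemplateUsed(response, 'project/instances/index.html')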

If you're familiar with django already, or while you're reading the django docs, there are a couple of things to watch out for:

  • Horizon does not use models.py, and does not have a database
  • Horizon doesn't use fixtures either (actually it does, but they're very different since they're not done the django way - cf. no models)

Horizon unit testing

Docs: http://docs.openstack.org/developer/horizon/topics/testing.html

Helper classes: https://github.com/openstack/horizon/blob/d4b0ab4aa395bf4df2964efcc358100117efdaa0/horizon/test/helpers.py#L65

There are some docs for testing in Horizon, which contain useful advice for writing good tests in general; only a few sections are specific to Horizon.

Now let's have a look at helpers.py, where the TestCase classes we extend in Horizon tests are defined.

The setUp() and tearDown() methods do the housekeeping for mox/mocking so that we don't have to worry about it when writing tests. The aforementioned Horizon-specific assertions are also defined in this class. It extends the django TestCase class, so all of the django unit test goodness is available.

In general, this class is the best documentation available of what happens in the tests and how they are set up.

OpenStack Dashboard unit testing

APIs: https://github.com/openstack/horizon/tree/d4b0ab4aa395bf4df2964efcc358100117efdaa0/openstack_dashboard/api

Test data: https://github.com/openstack/horizon/tree/d4b0ab4aa395bf4df2964efcc358100117efdaa0/openstack_dashboard/test/test_data

Helper classes: https://github.com/openstack/horizon/blob/d4b0ab4aa395bf4df2964efcc358100117efdaa0/openstack_dashboard/test/helpers.py#L98

The Horizon tips and tricks mentioned earlier also apply, but there is no specific documentation page about this topic.

A quick overview of openstack_dashboard/ and the sections that matter to us in the context of unit testing:

  • APIs

The API directory is the only place that talks directly with the outside world, that is, the various OpenStack clients. This is also why Horizon doesn't have a database: it doesn't store any data itself.
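
To illustrate the boundary (this is not Horizon's real code, just the shape of it, with era-appropriate novaclient names): the view calls a thin wrapper, the wrapper builds the client, and tests mock exactly at the wrapper:

from novaclient.v1_1 import client as nova_client


def novaclient(request):
    # build a nova client from the credentials on the incoming request
    return nova_client.Client(username=request.user.username,
                              api_key=request.user.token.id,
                              project_id=request.user.tenant_id,
                              auth_url='http://192.0.2.1:5000/v2.0')  # placeholder


def flavor_list(request):
    # the single line tests stub out: no client leaks past this module
    return novaclient(request).flavors.list()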

  • Test Data

The test data is also stored in a single directory, and contains the fixtures that are used to represent (mock) the data returned by the different clients.

  • Helper classes

As in the "framework" part of Horizon, a helpers.py file defines the TestCase classes we extend later in the unit tests. This is where a lot of the magic happens: the TestCase extends the Horizon TestCase helper class described earlier, loads the test data, sets up mox, and creates a fake user to log in. There are also a couple of useful assertions defined here that are used all over the place.

There are other TestCase classes in there, for tests that may require an Admin user, testing the APIs, or Selenium.

A quick look at the example

The flavours returned by self.flavors.list() come from the test data.

We'll look at the mocking stuff in the Tools section. The APIs being mocked all live in the API directory, so this is the only place that needs to be mocked.

self.client is the default django test client; reverse() and assertTemplateUsed() also come from django.

self.assertItemsEqual() is a Python assertion.

Tools

Mox

In Horizon, mocks are used a lot, everywhere, because otherwise running the unit tests would require a fully set up, running OpenStack environment.

I found mox a bit difficult to get used to. It has a specific terminology that translates to a different set of steps than is common in other mocking tools like mock.

First you record. That's the part of the test where you create the stubs (in a decorator in the example) and "record" what you expect will happen (that's the place in the example that says: "when api.nova.flavor_list() is called with these exact arguments, return self.flavors.list()").

Then you replay, with self.mox.ReplayAll(), so that the rest of the test gets the mocked data it expects.

Finally, the verify step is done in the parent TestCase class' tearDown() method, which calls self.mox.VerifyAll() and ensures the recorded functions were all called, in the order defined.
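
Put together, the cycle looks roughly like this in a Horizon-style test (a condensed sketch; the stub setup mirrors the instances example but is written from memory):

from django import http
from mox import IsA

from openstack_dashboard import api
from openstack_dashboard.test import helpers as test


class FlavorTests(test.TestCase):
    @test.create_stubs({api.nova: ('flavor_list',)})
    def test_flavor_list(self):
        # 1. record: declare the expected call and its canned result
        api.nova.flavor_list(IsA(http.HttpRequest)) \
            .AndReturn(self.flavors.list())

        # 2. replay: switch mox from recording to playback
        self.mox.ReplayAll()

        # ... exercise the view or api wrapper under test here ...

        # 3. verify: done for us in tearDown(), via self.mox.VerifyAll()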

There are lots of catches in mox; it's quite strict. Order matters. By default it assumes a mocked function will only be called once and fails otherwise (that's a big one that can be difficult to track down). MultipleTimes() will save you if a function needs to be called more than once.

Stubbing can be done via a decorator (which is the favoured way going forward) or the StubOutWithMock function, which can still be found in places.

Mox errors can be confusing, and I recommend reading the Horizon docs about understanding mox output, which have a couple of paragraphs explaining the different errors that may be encountered: the dreaded Expected and Unexpected calls.

Selenium

Helper classes: https://github.com/openstack/horizon/blob/d4b0ab4aa395bf4df2964efcc358100117efdaa0/openstack_dashboard/test/helpers.py#L326

We use Selenium for testing Javascript interactions. It's a bit heavyweight since it requires starting a browser, so Python unit tests are preferred when possible.

It's more stable now (thanks Kieran), so hopefully we can write a few more tests for the places where it's needed.

qUnit (briefly)

qUnit is used for some of the pure Javascript tests.

It's not used a lot in Horizon. The handmade fixtures take a lot of effort to create, so maybe it's better to use Selenium in most cases.

Tips and tricks

  • See the Tips and tricks from the Horizon testing topic
  • Use pdb to check the environment status (see the snippet after this list)
  • Anything else? From the session:
    • Mock everything, and if it doesn't work mock it again.
    • Selenium tests: having a flag to turn off/on mocking? So we can run them as integration tests when needed and make sure we still match the correct APIs - cf. blueprint
    • Using Selenium tests as integration tests: build more tests (start a VM, ssh into it)
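
For the pdb tip above, the snippet is just the standard library debugger dropped anywhere into a test body, giving an interactive prompt with the whole test environment (self.flavors, the mocked APIs, the response object) in scope:

import pdb; pdb.set_trace()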

Discussion

Unfortunately the day was running late (and I was speaking at the very next session), so the discussion part didn't have time to happen.

I'm disappointed about that and would welcome people discussing their experience and pain points, particularly from a newcomer's perspective.

Fortunately, when it comes to the Selenium issues, Kieran Spear had fixed them right before the Summit :)
