Testing in Horizon | Unit testing for the OpenStack Dashboard

Belatedly, here are the notes for the design session/tutorial I gave about testing in Horizon at the OpenStack Summit in Portland, back in April. The etherpad is available over there. Session description:

The main aspect: the Horizon unit tests can be quite complex for new contributors and people extending Horizon to wrap their head around. Mocking with mox, the django unit testing framework, the openstack specific parts of the testing framework, selenium, fixtures/test data handling, qunit... This session could work as a tutorial/tips and tricks on the different testing components. Common errors being thrown and how to debug them. If people could bring up their pain points, that would also be useful.
If there is time, it would be interesting to also address the issue from another angle and think on how to improve what we have, particularly on the Selenium front which has been quite unstable.

General structure

There are 3 main parts to Horizon testing (4 if you include the bits that come from the Python unit testing framework, but we won't get into those here. If you've done unit testing before, it's the usual set of assertions and scaffolding that come with any unit testing framework).

As a concrete example to map all of this onto, I recommend keeping InstancesTest.test_index open in the background.

Django unit testing

Docs: https://docs.djangoproject.com/en/1.4/topics/testing/

At the moment Horizon is compatible with django 1.4 onwards. The django documentation is excellent and I recommend having a look. Thanks to django we get a lot of goodies for free to help with testing a web application. Among other things:

  • A test client, which mimics a very simple browser (no Javascript) to do GET and POST requests, and built-in tooling to have interesting interactions with the responses (a short sketch follows this list).
  • A bunch of additional assertions, to check the HTML, templates, etc., all documented in the link above.
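
For instance, a minimal django-style test using the test client could look like the sketch below. The 'home' URL name and template are only placeholders for illustration, not real Horizon names:

    from django.core.urlresolvers import reverse
    from django.test import TestCase


    class ExampleViewTests(TestCase):
        def test_index_get(self):
            # 'home' and 'home.html' are placeholders; the test client
            # performs the GET without needing a real browser.
            res = self.client.get(reverse('home'))

            self.assertEqual(res.status_code, 200)
            # One of the extra django assertions mentioned above:
            self.assertTemplateUsed(res, 'home.html')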

If you're familiar with django already, or while you're reading the django docs, there are a couple of things to watch out for:

  • Horizon does not use models.py, and does not have a database
  • Horizon doesn't use fixtures either (actually it does, but they're very different since they're not done the django way - cf. no models)

Horizon unit testing

Docs: http://docs.openstack.org/developer/horizon/topics/testing.html

Helper classes: https://github.com/openstack/horizon/blob/d4b0ab4aa395bf4df2964efcc358100117efdaa0/horizon/test/helpers.py#L65

There are some docs for testing in Horizon, which contain useful advice for writing good tests in general; only a few sections are specific to Horizon.

Now let's have a look at helpers.py, where the TestCase classes we extend in Horizon tests are defined.

The setUp() and tearDown() methods do the housekeeping for mox/mocking so that we don't have to worry about it when writing tests. The Horizon-specific assertions are also defined in this class. It extends the django TestCase class, so all of the django unit test goodness is available.

In general, this class is the best documentation available of what happens in the tests and how they are set up.
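
For illustration, here is roughly what a test built on that helper looks like (the class and test names below are made up):

    from horizon.test import helpers as test


    class ExampleHorizonTests(test.TestCase):
        """Made-up test class, just to show the shape of a Horizon test."""

        def test_mox_is_ready(self):
            # setUp() has already created self.mox for us, and tearDown()
            # will call VerifyAll() afterwards, so no mox housekeeping is
            # needed in the test body itself.
            self.mox.ReplayAll()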

OpenStack Dashboard unit testing

APIs: https://github.com/openstack/horizon/tree/d4b0ab4aa395bf4df2964efcc358100117efdaa0/openstack_dashboard/api

Test data: https://github.com/openstack/horizon/tree/d4b0ab4aa395bf4df2964efcc358100117efdaa0/openstack_dashboard/test/test_data

Helper classes: https://github.com/openstack/horizon/blob/d4b0ab4aa395bf4df2964efcc358100117efdaa0/openstack_dashboard/test/helpers.py#L98

The Horizon tips and tricks mentioned earlier also apply, but there is no specific documentation page about the topic.

A quick overview of openstack_dashboard/ and the sections that matter to us in the context of unit testing:

  • APIs

The API directory is the only place that talks directly to the outside world, that is, the various OpenStack clients. This is also why Horizon doesn't have a database: it doesn't store any data itself.

  • Test Data

The test data is also stored in a single directory and contains the fixtures that are used to represent (mock) the data returned by the different clients.

  • Helper classes

Like in the "framework" part of Horizon, a helpers.py file defines the TestCases we extend later in the unit tests. This is where a lot of the magic happens: the TestCase extends the Horizon TestCase helper class described earlier, loads the test data, sets up mox, and creates a fake user to log in. There are also a couple of useful assertions defined there that are used all over the place.

There are other TestCase classes in there, for tests that may require an Admin user, testing the APIs, or Selenium.
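
As a hedged illustration of what the dashboard helper gives you (the class and test names are invented, but self.flavors is one of the real test data attributes):

    from openstack_dashboard.test import helpers as test


    class ExampleDashboardTests(test.TestCase):
        """Made-up test showing what the dashboard helper's setUp() provides."""

        def test_fixtures_are_loaded(self):
            # The fixtures from openstack_dashboard/test/test_data are
            # attached to the test case, e.g. self.flavors, and look like
            # the objects the API clients would normally return.
            self.assertTrue(len(self.flavors.list()) > 0)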

A quick look at the example

The flavours returned by self.flavors.list() come from the test data.

We'll look at the mocking stuff in the Tools section. The APIs being mocked all live in the API directory, so this is the only place that needs to be mocked.

self.client is the default django test client; reverse() and assertTemplateUsed() also come from django.

self.assertItemsEqual() comes from the standard Python unittest framework.
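
Putting all of that together, here is a simplified sketch of what such a test looks like. It is not a verbatim copy of InstancesTest.test_index: the real test stubs out more API calls (glance images among others) and passes more precise arguments, so treat this as an outline of the flow rather than the actual test:

    from django import http
    from django.core.urlresolvers import reverse

    from mox import IsA

    from openstack_dashboard import api
    from openstack_dashboard.test import helpers as test

    INDEX_URL = reverse('horizon:project:instances:index')


    class InstancesExampleTests(test.TestCase):
        """Simplified sketch only; see the real test for the full
        list of stubbed calls and arguments."""

        @test.create_stubs({api.nova: ('flavor_list', 'server_list')})
        def test_index(self):
            # Record: declare the calls the view is expected to make, and
            # have them return data from the fixtures.
            api.nova.flavor_list(IsA(http.HttpRequest)) \
                .AndReturn(self.flavors.list())
            api.nova.server_list(IsA(http.HttpRequest)) \
                .AndReturn(self.servers.list())
            # Replay: from now on the mocks answer as recorded.
            self.mox.ReplayAll()

            # Drive the view with the django test client.
            res = self.client.get(INDEX_URL)

            self.assertTemplateUsed(res, 'project/instances/index.html')
            self.assertItemsEqual(res.context['instances_table'].data,
                                  self.servers.list())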

Tools

Mox

In Horizon, mocks are used a lot, everywhere; otherwise running the unit tests would require a fully set up, running OpenStack environment.

I found mox a bit difficult to get used to. It has its own terminology, which translates to a different set of steps than what is common in other mocking tools like mock.

First you record. That's the part in the tests where you create the stubs (in a decorator in the example) and "record" what you expect will happen (that's the place in the example that says: "when api.nova.flavor_list() is called with these exact arguments as described, return self.flavors.list()").

Then you replay, with self.mox.ReplayAll(), which makes sure the rest of the test will get the data it expects, as you just mocked it.

Finally, the verify step is done in the parent TestCase class' tearDown() method, which calls self.mox.VerifyAll() and ensures the recorded functions were all called, and in the order defined.
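
As a standalone illustration of the three steps with plain mox, outside the Horizon helpers (the return value here is just a placeholder, not real flavor data):

    import mox

    from openstack_dashboard import api


    def three_step_example():
        """Record / replay / verify with plain mox (no Horizon helpers)."""
        m = mox.Mox()

        # 1. Record: stub out the function and declare the expected call.
        m.StubOutWithMock(api.nova, 'flavor_list')
        api.nova.flavor_list(mox.IgnoreArg()).AndReturn(['m1.tiny'])

        # 2. Replay: switch the mocks from recording to answering.
        m.ReplayAll()
        assert api.nova.flavor_list(None) == ['m1.tiny']

        # 3. Verify: fail if a recorded call was never made. In Horizon
        #    this is what tearDown() does for you via self.mox.VerifyAll().
        m.VerifyAll()
        m.UnsetStubs()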

There are lots of catches in mox; it's quite strict. Order matters. By default it assumes the mocked function will only be called once and fails otherwise (that's a big one that can be difficult to track down). MultipleTimes() will save you if a function needs to be called more than once.
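
For example, allowing repeated calls to a mocked function could look like the sketch below (an invented test, following the same pattern as the earlier example):

    from django import http

    from mox import IsA

    from openstack_dashboard import api
    from openstack_dashboard.test import helpers as test


    class MultipleTimesExampleTests(test.TestCase):
        @test.create_stubs({api.nova: ('flavor_list',)})
        def test_flavor_list_called_twice(self):
            # Without MultipleTimes() the second call below would fail
            # with an "Unexpected method call" error.
            api.nova.flavor_list(IsA(http.HttpRequest)) \
                .MultipleTimes().AndReturn(self.flavors.list())
            self.mox.ReplayAll()

            api.nova.flavor_list(http.HttpRequest())
            api.nova.flavor_list(http.HttpRequest())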

Stubbing can be done via a decorator (which is the favoured way going forward) or by calling StubOutWithMock directly, which can still be found in places.

Mox errors can be confusing and I recommend reading the Horizon docs about understanding mox output, which have a couple of paragraphs explaining different errors that may be encountered, the dreaded Expected and Unexpected calls.

Selenium

Helper classes: https://github.com/openstack/horizon/blob/d4b0ab4aa395bf4df2964efcc358100117efdaa0/openstack_dashboard/test/helpers.py#L326

We use Selenium for testing Javascript interactions. It's a bit heavyweight since it requires starting a browser, so Python unit tests are preferred when possible.

It's more stable now (thanks Kieran), so hopefully we can write a few more tests for the places where it's needed.
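
A minimal sketch of what such a test can look like, assuming the SeleniumTestCase helper class linked above (the page content checked at the end is just an example):

    from openstack_dashboard.test import helpers as test


    class ExampleSeleniumTests(test.SeleniumTestCase):
        """Illustrative only; the string checked below is made up."""

        def test_login_page_renders(self):
            # self.selenium is a real browser driven by WebDriver;
            # live_server_url comes from django's LiveServerTestCase,
            # which the helper class builds on.
            self.selenium.get(self.live_server_url)
            self.assertIn("Log In", self.selenium.page_source)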

qUnit (briefly)

qUnit is used for some of the pure Javascript tests.

It's not used a lot in Horizon. The handmade fixtures take a lot of effort to create, so it may be better to use Selenium in most cases.

Tips and tricks

  • See the Tips and tricks from the Horizon testing topic
  • Use pdb to check the environment status (see the snippet after this list)
  • Anything else? From the session:
    • Mock everything, and if it doesn't work mock it again.
    • Selenium tests: having a flag to turn off/on mocking? So we can run them as integration tests when needed and make sure we still match the correct APIs - cf. blueprint
    • Using Selenium tests as integration tests: build more tests (start a VM, ssh into it)
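
For the pdb tip, the one-liner is simply:

    # Drop this wherever you want to inspect the state of the test, then
    # run the tests without output capturing to get an interactive prompt:
    import pdb; pdb.set_trace()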

Discussion

Unfortunately, the day was running late (and I was speaking at the very next session), so the discussion part didn't have time to happen.

I'm disappointed about that and would welcome people discussing their experience and pain points, particularly from a newcomer's perspective.

Fortunately, when it comes to the Selenium issues, Kieran Spear had successfully fixed them right before the Summit :)
