Belatedly, here are the notes for the design session/tutorial I gave about testing in Horizon at the OpenStack Summit in Portland, back in April. The etherpad is available over there. Session description:
There are 3 main parts to Horizon testing (4 if you include the bits that come from the Python unit testing framework, but we won't get into those here. If you've done unit testing before, they're the usual set of assertions and scaffolding that come with any unit testing framework).
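As a reminder of what that baseline looks like, here is a minimal, self-contained example using only Python's unittest module (the class and data here are mine for illustration, not from Horizon):

```python
import unittest


class FlavorNameTest(unittest.TestCase):
    """Plain unittest scaffolding: no Django, no Horizon involved."""

    def setUp(self):
        # Runs before every test method.
        self.flavors = ["m1.tiny", "m1.small"]

    def test_contains_tiny(self):
        self.assertIn("m1.tiny", self.flavors)

    def test_flavor_count(self):
        self.assertEqual(len(self.flavors), 2)
```

Everything below builds on this: Django, Horizon and the dashboard each layer their own setup and assertions on top of this kind of TestCase.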
As a concrete example to map all of this back to, I recommend keeping InstancesTest.test_index open in the background.
Django unit testing
At the moment Horizon is compatible with Django 1.4 onwards. The django documentation is excellent and I recommend having a look. Thanks to django we get a lot of goodies for free to help with testing a web application. Among other things:
- A bunch of additional assertions, to check the HTML, templates, etc., all documented in the link above.
If you're familiar with django already, or while you're reading the django docs, there are a couple of things to watch out for:
- Horizon does not use models.py, and does not have a database
- Horizon doesn't use fixtures either (actually it does, but they're very different since they're not done the django way - cf. no models)
Horizon unit testing
There are some docs for testing in Horizon, which contain useful advice for writing good tests in general. A few sections only are specific to Horizon:
- A couple of Horizon-specific assertions
- The debugging tips and common pitfalls also contain useful, concrete tips
Now let's have a look at helpers.py, where the TestCase classes we extend in Horizon tests are defined.
The setUp() and tearDown() methods do the housekeeping for mox/mocking so that we don't have to worry about it when writing tests. The aforementioned Horizon-specific assertions are also defined in this class. It extends the django TestCase class, so all of the django unit test goodness is available.
In general, this class is the best documentation available of what happens in the tests and how they are set up.
Openstack Dashboard unit testing
The Horizon tips and tricks mentioned earlier also apply, but there is no documentation page specific to this topic.
A quick overview of openstack_dashboard/ and the sections that matter to us in the context of unit testing:
The API directory is the only place that talks directly to the outside world, that is, the various openstack clients. This is also why Horizon doesn't have a database: it doesn't store any data itself.
- Test Data
The test data also lives in a single directory and contains the fixtures that are used to represent (mock) the data returned by the different clients.
- Helper classes
Like in the "framework" part of Horizon, a helpers.py file defines the TestCases we extend later in the unit tests. This is where a lot of the magic happens: the TestCase extends the Horizon TestCase helper class described earlier, loads the test data, sets up mox, creates a fake user to log in. There's also a couple of useful assertions defined that are used all over the place.
There are other TestCase classes in there, for tests that may require an Admin user, testing the APIs, or Selenium.
A quick look at the example
The flavours returned by self.flavors.list() come from the test data.
We'll look at the mocking stuff in the Tools section. The APIs being mocked all live in the API directory, so this is the only place that needs to be mocked.
self.client is the default django test client; reverse() and assertTemplateUsed() also come from django.
self.assertItemsEqual() is a standard Python unittest assertion.
Tools

mox

In Horizon, mocks are used extensively; otherwise running the unit tests would require a fully set up, running Openstack environment.
I found mox a bit difficult to get used to. It has its own terminology, which translates to a different set of steps than is common in other mocking tools like mock.
First you record. That's the part in the tests where you create the stubs (in a decorator in the example) and "record" what you expect will happen (that's the place in the example that says: "when api.nova.flavor_list() is called with these exact arguments as described, return self.flavors.list()").
Then you replay, with self.mox.ReplayAll() which will make sure the rest of the test will get the data it expects, that you just mocked.
Finally, the verify step is done in the parent TestCase class' tearDown() method, which calls self.mox.VerifyAll() and ensures the recorded functions were all called, in the order defined.
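The three phases can be illustrated with a small toy mock that enforces the same lifecycle (to be clear, this is not mox's real API, just the record/replay/verify discipline it imposes):

```python
class RecordReplayMock:
    """Toy mock enforcing the record -> replay -> verify lifecycle
    that mox uses. Not mox itself."""

    def __init__(self):
        self._expected = []     # recorded (args, return_value) pairs, in order
        self._replaying = False

    def expect_call(self, args, returns):
        # Record phase: declare the exact call and its canned result.
        assert not self._replaying, "cannot record after replay()"
        self._expected.append((args, returns))

    def replay(self):
        # Equivalent in spirit to self.mox.ReplayAll().
        self._replaying = True

    def __call__(self, *args):
        # Replay phase: each call must match the recording, in order.
        assert self._replaying, "called before replay()"
        assert self._expected, "unexpected extra call: %r" % (args,)
        expected_args, returns = self._expected.pop(0)
        assert args == expected_args, (
            "expected %r, got %r" % (expected_args, args))
        return returns

    def verify(self):
        # Verify phase: like VerifyAll() in tearDown(); everything
        # recorded must actually have been called.
        assert not self._expected, "recorded calls were never made"
```

A test would record an expected call with its canned return value, call replay(), exercise the view, and rely on verify() to fail loudly if the recorded call never happened.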
There are lots of catches in mox; it's quite strict. Order matters. By default it assumes a mocked function will be called exactly once and fails otherwise (that's a big one that can be difficult to track down). MultipleTimes() will save you if a function needs to be called more than once.
Stubbing can be done via a decorator (which is the favoured way going forward) or the StubOutWithMock function, which can still be found in places.
Mox errors can be confusing and I recommend reading the Horizon docs about understanding mox output, which have a couple of paragraphs explaining different errors that may be encountered, the dreaded Expected and Unexpected calls.
Selenium

It's more stable now (thanks Kieran), so hopefully we can write a few more tests for the places where it's needed.
qunit

It's not used a lot in Horizon. The handmade fixtures take a lot of effort to create, so maybe it's better to use Selenium in most cases.
Tips and tricks
- See the Tips and tricks from the Horizon testing topic
- Use pdb to check the environment status
- I recommend Victoria's introduction to debugging in OpenStack if you're not familiar with pdb
- Anything else? From the session:
- Mock everything, and if it doesn't work mock it again.
- Selenium tests: having a flag to turn off/on mocking? So we can run them as integration tests when needed and make sure we still match the correct APIs - cf. blueprint
- Using Selenium tests as integration tests: build more tests (start a VM, ssh into it)
Unfortunately, the day was running late (and I was speaking at the very next session), so there wasn't time for the discussion part.
I'm disappointed about that and would welcome people sharing their experience and pain points, particularly from a newcomer's perspective.
Fortunately, when it comes to the Selenium issues, Kieran Spear had successfully fixed them right before the Summit :)