OpenStack Pike PTG: OpenStack Client | Tips and background for interested contributors

Last week I went off to Atlanta for the first OpenStack Project Teams Gathering, for a productive week discussing all sorts of issues and cross-project concerns with fellow OpenStack contributors.

For the first two days, I decided to do something you're not supposed to and attend the OpenStack Client (OSC) sessions despite not being a contributor yet. From my perspective it was incredibly helpful, as I got to hear about technical choices, historical decisions and current pain points, and generally increase my understanding of the context around the project. I expect this will come in very handy as I become more involved, so I thought I'd document some of this here, or more accurately leave a giant braindump I can reference in the future. These are the things that were of interest to me during the meetings; this is not meant to be an authoritative representation of everything that was discussed, or of other participants' thoughts and decisions.

The etherpad for these sessions lives at

Issue: Microversions

Microversions are a big topic that I think came up in multiple rooms. From the client's perspective, a huge problem is that most of the microversion stuff is hidden within the various clients and little of it surfaces back to the user.

There's a need for better negotiation, for the client to determine what is available. The path forward seems to be for the client to do auto-discovery of the latest available version.

The issue was mainly discussed while Cinder folks were around, so a lot of the discussion centred around that project. For example, Cinder implements microversions like Nova does, with everything in the headers. OSC will be relying on what's available in cinderclient itself and is unlikely to ever develop more capabilities than already exist there. However, in Cinder's case microversions are only available from v3 onwards, and v3 support hasn't merged in OSC yet.

Other problems with microversions: how to keep track of what's available in each version? The plan is to use "latest" and cap specific commands if they are known to break. However there will still be issues with the interactive mode, as the conversation isn't reinitialised with each call.
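
The "use latest, but cap known-broken commands" idea can be sketched in a few lines. This is purely illustrative (the function names and version strings are made up, not actual OSC internals): negotiate the highest microversion both sides understand, then apply a per-command cap where a command is known to break past some version.

```python
# Illustrative sketch, not actual OSC code: pick "latest" and cap per command.

def _key(version):
    """Turn '3.27' into a sortable (3, 27) tuple."""
    major, minor = version.split(".")
    return (int(major), int(minor))

def negotiate(client_max, server_min, server_max):
    """Highest microversion both client and server support, or None."""
    if _key(client_max) < _key(server_min):
        return None  # server's minimum is newer than this client understands
    # "latest" from the client's point of view, bounded by the server's max
    return min(client_max, server_max, key=_key)

def effective_version(negotiated, command_cap=None):
    """Apply a per-command cap for commands known to break past a version."""
    if command_cap is None:
        return negotiated
    return min(negotiated, command_cap, key=_key)

# e.g. server advertises 3.0-3.27, client understands up to 3.40
v = negotiate("3.40", "3.0", "3.27")
capped = effective_version(v, command_cap="3.12")
```

The interactive-mode caveat from above still applies: a long-lived session would negotiate once and keep the result, even if it then talks to a different endpoint.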

Issue: Listing all the commands vs only available ones

This appears to be a topic where the answer is "we gave up a long time ago," for technical and other reasons. Many service endpoints don't even let you get to the version without being authenticated.

But even without that, there is reluctance to do a server round-trip just to display the help.

It's low priority for OSC, although it may be more important for other projects like Horizon.

Terminology (Cinder): manage/unmanage vs adopt/abandon

One of the specific issues that was brought up and resolved was about the terminology used by a couple of Cinder commands.

UX studies have been done on the client and the feedback is clear that what we currently have is not obvious. In the case of "manage", it is probably too generic, and not a standard term that every storage expert would instantly recognise (outside of Cinder).

For a similar use case, both Heat and Ironic are using the "adopt"/"abandon" terminology therefore an agreement was reached to use it here as well. It's up to the Cinder folks to decide if they wish to do the same deprecation or not in their own client.

To help people familiar with the existing command to find the new, clearer name, the usual way to do it is to include the old name in the help string so that users can grep for it.
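
As a minimal sketch of that convention (plain argparse rather than actual OSC/cliff code, with an invented command name), mentioning the old name in the description keeps it grep-able from the help output:

```python
import argparse

# Illustrative only: keep the old command name ("manage") in the help text
# so "openstack help | grep manage" still finds the renamed command.
parser = argparse.ArgumentParser(
    prog="openstack volume adopt",
    description='Adopt a volume not currently managed by Cinder '
                '(formerly "volume manage").')

help_text = parser.format_help()
```

Anyone searching the help for the familiar name lands on the new command.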

Terminology (Cinder): Consistency groups & deprecation policy

Consistency group is being replaced by volume_group. If the command is mostly similar from the user perspective the recommendation is to simply alias it. Both commands should be kept for a while, then the old one can be deprecated. As much as possible, don't break scripts.
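
Since OSC commands are wired up through setuptools entry points, one way to alias is simply to register both names against the same command class. The namespace and module names below are made up for illustration:

```
# setup.cfg (illustrative): both names resolve to the same cliff command
# class, so existing scripts keep working while the new name is promoted.
[entry_points]
openstack.volume.v3 =
    consistency_group_create = myclient.v3.volume_group:CreateVolumeGroup
    volume_group_create = myclient.v3.volume_group:CreateVolumeGroup
```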

The logic can also be hidden within the client, like it is for Neutron and nova-network: many commands look the same from the user's perspective, the branching to use either as required is done internally.

Issue: NoAuth requirements

The requirement for a --no-auth option came up, which doesn't currently seem possible. It's important for projects like Ironic that support a stand-alone mode where there is no Keystone.

There might be a "Token 'off' type" already available in some cases, though it still requires other "unnecessary" parameters like auth_url.

OSC doesn't currently support that, though it may be already in the new layer that Monty is creating.
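
For reference, keystoneauth grew a "none"-style auth plugin around this time; with a new enough os-client-config, a clouds.yaml entry for a stand-alone Ironic might look roughly like this (illustrative, the endpoint is made up, and check what your installed versions actually support):

```
# clouds.yaml (illustrative): talk to a stand-alone Ironic without Keystone.
clouds:
  standalone-ironic:
    auth_type: none
    endpoint: http://203.0.113.10:6385
```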

A new layer

That new layer came up a few times. I think this is shade? Apparently the (long term) end goal is to destroy the Python libraries and replace them with straight REST calls aka using requests for everything, to get rid of some of the issues with the existing Python libraries.

Some examples of the issues:

  • The Neutron client has a strange architecture and was never brought into OSC because of this.
  • The Glance client has unique dependencies that no one else requires, like openssl, which makes it hard to install on Windows.

There were some discussions around why this new layer is not using the SDK. This may be because the SDK has a big ORM data layer, which doesn't fit with the "simple" strategy.

So the goal becomes building a thin REST layer on top of the REST APIs, which of course has its own set of concerns, e.g. how to keep track of what's available in every release, with the microversion stuff.
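
To make the "straight REST calls" idea concrete, here is a sketch of what such a thin layer might do: build the request by hand (here just the URL and headers, which you would then hand to requests) instead of going through a per-project Python client. The endpoint, token and function name are invented for the example; the `OpenStack-API-Version` header is how Cinder v3 carries microversions:

```python
# Illustrative sketch of a thin REST layer: no python-cinderclient, just
# construct the request and (elsewhere) send it with requests.

def build_volume_list_request(endpoint, token, microversion=None):
    """Return (url, headers) for a GET on the volumes collection."""
    headers = {
        "X-Auth-Token": token,
        "Accept": "application/json",
    }
    if microversion:
        # Cinder-style microversion negotiation travels in a header
        headers["OpenStack-API-Version"] = "volume %s" % microversion
    return "%s/volumes/detail" % endpoint.rstrip("/"), headers

url, headers = build_volume_list_request(
    "http://203.0.113.10:8776/v3/PROJECT_ID", "TOKEN", microversion="3.27")
# resp = requests.get(url, headers=headers)  # not executed here
```

The microversion concern from above shows up immediately: this layer still has to know which header format and which versions each service expects, release by release.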

How about OSC using shade? It does seem to have filled a need and gained traction. However it is opinionated, and there's duplication on both sides. It is both slightly higher-level and lower-level than needed. I didn't get the impression it's on the cards at this point.

New meta-package

There is a new meta-package, openstackclient (as opposed to python-openstackclient) to install the client together with all the plugins. A downside of the plugin architecture that some people don't like is that it requires multiple steps to get access to everything. The meta-package would resolve that.

This project is basically a requirement file, with some docs and a few tests (more may migrate there). It is ready for a "0.1" release, although the recommendation from the release team is to keep the versioning in sync with the main OSC.

I was wondering why python-tripleoclient wasn't in there yet, it turns out for now only projects that are in the global requirements are included. I got distracted with wanting to fix this before remembering that due to A Series Of Unfortunate Dependencies, installing the TripleO client also brings in the Mistral server... and cutting that chain is not as straightforward as I'd hoped (now documented in a Launchpad bug). Though if the other clients are there with their current set of issues, maybe it'd be ok to add it anyway.

From now on, projects should add themselves to the meta-package when creating a new plugin - this will be added to the checklist.

Another note, for this to work individual clients should not depend on OSC itself. They shouldn't anyway, because it brings in too many things.

Deprecating ALL the clients

OSC is meant to deprecate all of the existing clients. However there is no timeline, this is specific to each project. At the moment, Cinder and Neutron are getting up to speed but it's up to them to decide when to get rid of the old client - if at all.

In the case of Keystone, it took 2 releases to get deprecated, during which it established a very strict security fixes-only policy - even typo fixes were refused!

Migration path for users

There's a need for docs to help users familiar with the old clients to learn the new commands. stevemar already documented most of them, amotoki provided a mapping for Neutron commands as well.

Since the work is already done, these will be added to the docs. In theory it shouldn't get too stale since no new commands are to be added to the old CLIs (?). The mapping should start with the old commands first, as this is how people will be searching.

Cliff topics: entry point freedom, adding to existing commands

dhellman dropped by on Tuesday morning to address Cliff-specific issues and feature requests. I missed the beginning but learnt a bunch of interesting things by listening in. Cliff is what gives us the base class structure and maps entry points.

Some security issues were brought up, and so was the possibility for users to lock down namespaces and gain some control over what gets added. It would be nice if there was a tool to check which entry point was hooked from where. Cliff could be modified to include additional information on where a command comes from, and an independent tool could be written that uses that. I don't think a bug's open for it yet, though the goal seems to be to get it in for Pike.
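
The independent tool could be as small as walking an entry-point namespace and reporting which installed distribution registered each command. A minimal sketch (the namespace string is just an example; the actual cliff namespaces vary per plugin):

```python
# Illustrative sketch: report which installed package provides each command
# entry point in a given namespace.
import pkg_resources

def command_sources(namespace):
    """Map entry point name -> distribution that registered it."""
    return {
        ep.name: str(ep.dist)
        for ep in pkg_resources.iter_entry_points(namespace)
    }

# e.g. the namespace OSC plugins hook into (exact name varies per setup)
sources = command_sources("openstack.cli.extension")
```

With the extra provenance information discussed above added to Cliff itself, such a tool could also flag commands that were overridden or shadowed by another package.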

Another feature request related to Cliff was to add to existing commands, which comes up a lot especially for quotas. "Purge" may be another candidate.

Pike community goal: Python 3

Python 3 should be supported by the client already. The only thing left to do is to modify devstack to load OSC with Python 3 by default.

Blueprints in OSC

In passing I learnt that OSC doesn't really use blueprints, nor does it want to - these are used for implementing Neutron commands at the moment, but the team would rather they not get used for anything else.

If someone wishes to implement a feature, better to open it as a bug instead so that it doesn't get lost.

osc_lib and feature branches

It took a long time to implement osc_lib. What to do next time big changes are expected, e.g. removing things for the next big version?

Using feature branches is generally not recommended as deleting them and integrating them at the end requires a lot of effort.

In this case since removals will break things, working with infra to create a feature branch seems to make sense.

Glance v2

Old operations like create, etc. are supported but the new commands aren't (mainly around metadata and namespaces). We spent a bit of time trying to figure out what metadata definitions are (they're related to Glare, pre-Big Tent, and expanded the scope of Glance a bit; they could be used as artifacts for Nova, for hardware affinity, or for identifying what flags actually mean). They are currently exposed through Horizon.

Current verdict: they won't be implemented till someone asks for them.

Adding your project to the client, creating your own plugin

Finally, a couple of PTLs dropped by to ask for advice on how to get started creating their client (and avoid folks trying to do it in-tree). The most important and difficult part seems to be naming resources. There is no process to determine terminology, but try to keep it consistent with the existing commands (verbs are easier than nouns, e.g. new thing = create).

There's no good answer for quotas yet, though this may change. For now, taking a new 'load-balancer' resource as an example, this would look like:

$ openstack load-balancer quotas

In the future, we may get something similar to:

$ openstack quotas --load-balancer

though the extension to do this doesn't exist yet.

There is no OSC convention for cascade-delete (e.g. adding a flag). A purge command is underway. It would delete all the resources under a project.

With regard to versions/microversions, make sure the client has a way to detect what it needs to.

Good examples to learn from: Identity v3 is cleanly done. Volume v2 is very recent and includes a lot of lessons learnt.

On the other hand Object may not be such a good example, it's a bit different and taken directly from the Swift CLI.

I think the documentation for writing new plugins is over there. Most OSC developers and devs from related libraries hang out on #openstack-sdks. It's a good place to ask any client-related question!