OpenStack Pike PTG: TripleO, TripleO UI - Some highlights

For the second part of the PTG (vertical projects), I mainly stayed in the TripleO room, moving around a couple of times to attend cross-project sessions related to i18n.

Although I always wish I understood more/everything, in the end my areas of interest (and current understanding!) in TripleO are around the UI, installing and configuring it, the TripleO CLI, and the tripleo-common Mistral workflows. Therefore the couple of thoughts in this post are mainly relevant to these - if you're looking for a more exhaustive summary of the TripleO discussions and decisions made during the PTG, I recommend reading the PTL's excellent thread about this on the dev list, and the associated etherpads.

Random points of interest

  • Containers were the big topic, with multiple sessions dedicated to them, both single-project and cross-project. Many other sessions ended up revisiting the subject as well, sometimes with "oh, that'll be solved with containers" and sometimes with "hm, good, but that won't work with containers."
  • A couple of API-breaking changes may need to happen in TripleO Heat Templates (e.g. for NFV, passing a role mapping rather than a role name around). The recommendation is to get these in as early as possible (by the first milestone) and communicate them well for out-of-tree services.
  • When needing to test something new on the CI, look at the existing scenarios and prioritise adding/changing something there to test for what you need, as opposed to trying to create a brand new job.
  • Running Mistral workflows as part of or after the deployment came up several times and was even a topic during a cross-project Heat / Mistral / TripleO session. Things can get messy switching between Heat, Mistral and Puppet. Where should these workflows live (THT, tripleo-common)? Service-specific workflows (pre/post-deploy) are definitely something people want and there's a need to standardise how to do that. Ceph's likely to be the first to try their hand at this.
  • One lively cross-project session with OpenStack Ansible and Kolla was about parameters in configuration files. Currently, whenever a new feature is added to Nova or any other service, Puppet and friends need to be updated manually. The proposal is a small change to oslo.config to let it emit its options in machine-readable YAML, which could then be consumed directly (currently the generated config is only human-readable). This would help with validations, and it may mean only having to maintain a structure as opposed to a template (see the sketch after this list).
  • Heat folks had a feedback session with us about TripleO's needs. They've been super helpful with e.g. improving our memory usage over the last couple of cycles. My takeaway from this session was "beware/avoid using YAQL, especially in nested stacks." YAQL is badly documented and everyone ends up reading the source code and tests to figure out how to do things. Bringing Jinja2 into Heat, or some kind of way to repeat patterns from resources (e.g. based on a file), also came up and was cautiously acknowledged.
  • Predictable IP assignment on the control plane is a big enough issue that some people are suggesting dropping Neutron in the undercloud over it. We'd lose so many other benefits though, that it seems unlikely to happen.
  • Cool work incoming allowing built-in network examples to Just Work, based on a sample configuration. Getting the networking stuff right is a huge pain point and I'm excited to hear this should be achievable within Pike.
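
To make the oslo.config point above more concrete, here is a rough sketch of how a deployment tool might consume such machine-readable output. Everything here is hypothetical - the file name and structure are made up, since the oslo.config change was only a proposal at this point:

import yaml

# Hypothetical: a future oslo.config generator emits the option schema as YAML.
with open('nova-options.yaml') as f:
    schema = yaml.safe_load(f)

# A tool (Puppet, a validation job, ...) could then walk the structure
# instead of being updated by hand for every new option.
for group, options in schema.items():
    for opt in options:
        print(group, opt['name'], opt.get('type'), opt.get('default'))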

Python 3

Python 3 is an OpenStack community goal for Pike.

Tripleo-common and python-tripleoclient both have voting unit test jobs for Python 3.5, though I trust them only moderately, for a number of reasons. For example, many of the tests tend to focus on the happy path, and I've seen and fixed Python 3 incompatible code in exception handling several times (the removed 'message' attribute seems an easy one to get caught by), despite the unit testing jobs being all green. Apparently there are coverage jobs we could enable for the client, to ensure the coverage ratio doesn't drop.
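
As a minimal illustration of that particular trap:

# Python 2 exceptions had a (deprecated) 'message' attribute; Python 3 removed it.
try:
    raise ValueError("deployment failed")
except ValueError as exc:
    # print(exc.message)   # fine on Python 2, AttributeError on Python 3
    print(str(exc))        # safe on both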

Python 3 for functional tests was also brought up. We don't have functional tests in any of our projects and it's not clear what value we would get out of them (mocking servers) compared to the unit testing and all the integration testing we already do. Increasing unit test coverage was deemed a more valuable goal to pursue for now.

There are other issues around functional/integration testing with Python 3 which will need to be resolved (though likely not during Pike). For example our integration jobs run on CentOS and use packages, which won't be Python 3 compatible yet (cue SCL and the need to respin dependencies). If we do add functional tests, perhaps it would be easier to have them run on a Fedora gate (although if I recall correctly gating on Fedora was investigated once upon a time at the OpenStack level, but caused too many issues due to churn and the lack of long-term releases).

Another issue with Python 3 support and functional testing is that the TripleO client depends on Mistral server (due to the Series Of Unfortunate Dependencies I also mentioned in the last post). That means Mistral itself would need to fully support Python 3 as well.

Python 2 isn't going anywhere just yet so we still have time to figure things out. The conclusions, as described in Emilien's email, seem to be:

  • Improve the unit test coverage
  • Enable the coverage job in CI
  • Investigate functional testing for python-tripleoclient to start with, see if it makes sense and is feasible

Sample environment generator

Currently, environment files in THT are written by hand and quite inconsistent. This matters for the UI too, which needs to display this information. For example, the environment's general description currently lives in a comment at the top of the file (if it exists at all), which can't be accessed programmatically. Dependencies between environment files are not described either.

To make up for this, all that information currently lives in the capabilities map, but it's external to the templates themselves, needs to be updated manually and gets out of sync easily.

The sample environment generator meant to fix this has been around for a year, and currently has two blockers. First, it needs a way to determine which parameters are private (that is, parameters that are expected to be passed in by another template and shouldn't be set by the user).

One way could be to use a naming convention, perhaps an underscore prefix similar to Python's. Parameter groups cannot be used because of a historical limitation: a parameter can only belong to one group (so it couldn't be both Private and Deprecated). Changing Heat with a new feature like Nested Groups or generic Parameter Tags could be an option. The advantage of the naming convention is that it doesn't require any change to Heat.
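
As a sketch of what honouring that convention might look like (this helper is illustrative, not the actual generator code):

def public_parameters(template):
    """Return only user-facing parameters, treating '_Foo' names as private."""
    parameters = template.get('parameters', {})
    return {name: spec for name, spec in parameters.items()
            if not name.startswith('_')}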

From the UI perspective, validating whether an environment or template redefines parameters already defined elsewhere also matters: if a parameter has the same name in several places, it needs to be set to the same value everywhere, or it's uncertain what the final value will end up being.
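
A validation along those lines could start as simple as this sketch, assuming each environment has already been parsed into a dict ('parameter_defaults' being the usual section in THT environments):

from collections import defaultdict

def conflicting_defaults(environments):
    """Report parameters given different values in different environments."""
    values = defaultdict(set)
    for env in environments:
        for name, value in env.get('parameter_defaults', {}).items():
            values[name].add(repr(value))
    return {name: vals for name, vals in values.items() if len(vals) > 1}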

I think the second issue was that the general environment description can currently only be a comment; there is no Heat parameter for it. The Heat experts in the room seemed confident this is non-controversial as a feature and should be easy to get in.

Once the existing templates are updated to match the new format, the validation should be added to CI to make sure that any new patch with environments does include these parameters. Having "description" show up as an empty string when generating a new environment will make it more obvious that something can/should be there, while it's easy to forget about it with the current situation.

The agreement was:

  • Use underscores as a naming convention to start with
  • Start with a comment for the general description

Once we get the new Heat description attribute we can move things around. If parameter tags get accepted, we can likewise automate moving things around. Tags would also be useful to the UI, to determine which subset of relevant parameters to display to the user in smaller forms (easier to understand than one form with several dozen fields showing up all at once). Tags, rather than parameter groups, are required because of the aforementioned issue: groups are already used for deprecation and a parameter can only have one group.

Trusts and federation

This was a cross-project session together with Heat, Keystone and Mistral. A "Trust" lets you delegate or impersonate a user with a subset of their rights. From my experience in TripleO, this is particularly useful with long-running Heat stacks, as an authentication token expires after a few hours, which means you lose the ability to do anything in the middle of an operation.
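
For the curious, creating a trust with python-keystoneclient looks roughly like this (all the IDs, URLs and credentials are illustrative):

from keystoneauth1.identity import v3
from keystoneauth1 import session
from keystoneclient.v3 import client

auth = v3.Password(auth_url='http://keystone.example.com:5000/v3',
                   username='admin', password='secret', project_name='admin',
                   user_domain_name='Default', project_domain_name='Default')
keystone = client.Client(session=session.Session(auth=auth))

# Delegate the 'admin' role on a project from the trustor to the trustee.
trust = keystone.trusts.create(trustor_user='trustor-user-id',
                               trustee_user='trustee-user-id',
                               project='project-id',
                               role_names=['admin'],
                               impersonation=True)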

Trusts have been working very well for Heat since 2013. Before that they had to encrypt the user password in order to ask for a new token when needed, which all agreed was pretty horrible and not anything people want to go back to. Unfortunately with the federation model and using external Identity Providers, this is no longer possible. Trusts break, but some kind of delegation is still needed for Heat.

There were a lot of tangents about security issues (obviously!), revocation, expiration, role syncing. From what I understand Keystone currently validates Trusts to make sure the user still has the requested permissions (that the relevant role hasn't been removed in the meantime). There's a desire to have access to the entire role list, because the APIs currently don't let us check which role is necessary to perform a specific action. Additionally, when Mistral workflows launched from Heat get in, Mistral will create its own Trusts and Heat can't know what that will do. In the end you always kinda end up needing to do everything. Karbor is running into this as well.

No solution was discovered during the session, but I think all sides were happy enough that the problem and use cases have been clearly laid out and are now understood.

TripleO UI

Some of the issues relevant to the UI were brought up in other sessions, like standardising the environment files. Other issues brought up were around plan management, for example: why do we use the Mistral environment in addition to Swift? Historically, it looks like it's because it was a nice drop-in replacement for the defunct TripleO API, offering a similar interface. Although it won't have an API by default, the suggestion is to move to using a file to store the environment during Pike and have a consistent set of templates: this way all the information related to a deployment plan will live in the same place. This will help with exporting/importing plans, which itself will help with CLI/UI interoperability (for instance there are still some differences in how and when the Mistral environment is generated, depending on whether you deployed with the CLI or the UI).

A number of other issues were brought up around networking, custom roles, tracking deployment progress, and a great many other topics, but I think the larger problems around plan management were the only ones expected to turn into a spec, now proposed for review.

I18n and release models

After the UI session I left the TripleO room to attend a cross-project session about i18n, horizon and release models. The release model point is particularly relevant because the TripleO UI is a newly internationalised project as of Ocata and the first to be cycle-trailing (TripleO releases a couple of weeks after the main OpenStack release).

I'm glad I was able to attend this session. For one, it was really nice to collaborate directly with the i18n and release management teams, and catch up with a couple of Horizon people. For another, it turns out tripleo-ui was not properly documented as cycle-trailing (fixed now!) and that led to other issues.

Having different release models is a source of headaches for the i18n community, which is already stretched thin. It means string freezes happen at different times and stable branches are cut at different points, which creates a lot of tracking work for the i18n PTL to figure out which projects are ready and to do the required manual work to update Zanata upon branching. One part of the solution is likely to figure out if we can script the manual parts of this workflow, so that when the release patch (which creates the stable branch) is merged, the change is automatically reflected in Zanata.

For the non-technical aspects of the work (mainly keeping track of deadlines and string freezes), the decision was that if you want to be translated, you need to respect the same deadlines as the cycle-with-milestones projects on the main schedule; if a project doesn't want to - if it doesn't respect the freezes or cut branches when expected - it will be dropped from the i18n priority dashboard in Zanata. This was particularly relevant for Horizon plugins, as there are about a dozen of them now with various degrees of diligence when it comes to doing releases.

These expectations will be documented in a new governance tag, something like i18n:translated.

Obviously this means that cycle-trailing projects would likely never be able to get the tag. The work we depend on lands late, so we keep making changes up to two weeks after each of the documented deadlines. ianychoi, the i18n PTL, seemed happy to take these projects under the i18n wing and do the manual work required, as long as there is an active i18n liaison communicating with the i18n team to keep them informed about freezes and new branches. This seemed to work ok for us during Ocata so I'm hopeful we can keep that model. It's not entirely clear to me if this will also be documented/included in the governance tag, so I look forward to reading the spec once it is proposed! :)

In the case of tripleo-ui we're not a priority project for translations nor looking to be, but we still rely on the i18n PTL to create Zanata branches and merge translations for us, and would like to continue with being able to translate stable branches as needed.

CI Q&A

The CI Q&A session on Friday morning was amazingly useful and unanimously agreed it should be moved to the documentation (not done yet). If you've ever scratched your head about something related to TripleO CI, have a look at the etherpad!


OpenStack Pike PTG: OpenStack Client - Tips and background for interested contributors

Last week I went off to Atlanta for the first OpenStack Project Teams Gathering, for a productive week discussing all sort of issues and cross-projects concerns with fellow OpenStack contributors.

For the first two days, I decided to do something you're not supposed to and attend the OpenStack Client (OSC) sessions despite not being a contributor yet. From my perspective it was incredibly helpful, as I got to hear about technical choices, historical decisions and current pain points, and generally increase my understanding of the context around the project. I expect this will come in very handy as I become more involved, so I thought I'd document some of it here, or more accurately produce a giant braindump I can reference in the future. These are the things that were of interest to me during the meetings; this is not meant to be an authoritative representation of everything that was discussed or of other participants' thoughts and decisions.

The etherpad for these sessions lives at https://etherpad.openstack.org/p/osc-ptg-pike.

Issue: Microversions

Microversions are a big topic that I think came up in multiple rooms. From the client's perspective, a huge problem is that most of the microversion stuff is hidden within the various clients and little of it surfaces back to the user.

There's a need for better negotiation, for the client to determine what is available. The path forward seems to be for the client to do auto-discovery of the latest available version.

The issue was mainly discussed while Cinder folks were around, so a lot of the discussion centred on that project. For example, they do microversions like Nova, where everything is in the headers. OSC will rely on what's available in cinderclient itself and is unlikely to ever develop more capabilities than already exist there. However, in the Cinder case microversions are only available from v3, and v3 support hasn't merged in OSC yet.

Other problems with microversions: how to keep track of what's available in each version? The plan is to use "latest" and cap specific commands if they are known to break. However there will still be issues with the interactive mode, as the conversation isn't reinitialised with each call.
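
A rough sketch of the "use latest, cap known-broken commands" idea (all the version numbers and command names here are made up):

# The client pins individual commands below the negotiated maximum when
# they're known to break; everything else gets the newest common version.
CLIENT_MAX = (3, 27)
KNOWN_CAPS = {'volume group create': (3, 13)}

def pick_microversion(server_max, command):
    negotiated = min(server_max, CLIENT_MAX)
    return min(negotiated, KNOWN_CAPS.get(command, negotiated))

print(pick_microversion((3, 40), 'volume group create'))  # -> (3, 13)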

Issue: Listing all the commands vs only available ones

This appears to be a topic where the answer is "we gave up a long time ago," for technical and other reasons. Many service endpoints don't even let you get to the version without being authenticated.

But even without that, there is a reluctance to do a server round-trip just to display the help.

It's low priority for OSC, although it may be more important for other projects like Horizon.

Terminology (Cinder): manage/unmanage vs adopt/abandon

One of the specific issues that was brought up and resolved was about the terminology used by a couple of Cinder commands.

UX studies have been done on the client and the feedback is clear that what we currently have is not obvious. In the case of "manage", it is probably too generic, and not a standard term that every storage expert would instantly recognise (outside of Cinder).

For a similar use case, both Heat and Ironic use the "adopt"/"abandon" terminology, so an agreement was reached to use it here as well. It's up to the Cinder folks to decide whether they wish to do the same deprecation in their own client.

To help people familiar with the existing command to find the new, clearer name, the usual way to do it is to include the old name in the help string so that users can grep for it.
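
With cliff, that can be as simple as mentioning the old name in the command's docstring, which becomes its help text (a sketch, not the actual OSC code):

from cliff.command import Command

class AdoptVolume(Command):
    """Adopt (previously "manage") an existing volume."""

    def take_action(self, parsed_args):
        # The volume API call would go here; meanwhile, grepping the help
        # output for "manage" still surfaces this command.
        pass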

Terminology (Cinder): Consistency groups & deprecation policy

Consistency group is being replaced by volume_group. If the command is mostly similar from the user perspective the recommendation is to simply alias it. Both commands should be kept for a while, then the old one can be deprecated. As much as possible, don't break scripts.

The logic can also be hidden within the client, like it is for Neutron and nova-network: many commands look the same from the user's perspective, the branching to use either as required is done internally.

Issue: NoAuth requirements

The requirement for a --no-auth option came up, which doesn't currently seem possible. It's important for projects like Ironic that support a stand-alone mode where there is no Keystone.

There might be a "Token 'off' type" already available in some cases, though it still requires other "unnecessary" parameters like auth_url.

OSC doesn't currently support that, though it may be already in the new layer that Monty is creating.
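
For reference, one way to talk to a standalone service today is keystoneauth's token_endpoint plugin, which skips Keystone entirely (the endpoint and token below are made up):

from keystoneauth1 import session
from keystoneauth1.token_endpoint import Token

auth = Token(endpoint='http://ironic.example.com:6385', token='fake-token')
sess = session.Session(auth=auth)
print(sess.get('http://ironic.example.com:6385/v1/nodes').json())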

A new layer

That new layer came up a few times. I think this is shade? Apparently the (long term) end goal is to destroy the Python libraries and replace them with straight REST calls aka using requests for everything, to get rid of some of the issues with the existing Python libraries.

Some examples of the issues:

  • The Neutron client has a strange architecture and was never brought into OSC because of this.
  • The Glance client has unique dependencies that no one else requires, like openssl, which makes it hard to install on Windows.

There were some discussions around why this new layer is not using the SDK. This may be because the SDK has a big ORM data layer, which doesn't fit with the "simple" strategy.

So the goal becomes building a thin layer on top of the REST APIs, which of course has its own set of concerns, e.g. how to keep track of what's available in every release, given the microversion situation.
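
In its crudest form, the thin-layer idea is just plain requests plus the microversion header (endpoint and token made up):

import requests

resp = requests.get('http://cinder.example.com:8776/v3/volumes',
                    headers={'X-Auth-Token': 'a-valid-token',
                             'OpenStack-API-Version': 'volume 3.0'})
print(resp.json())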

How about OSC using shade? It does seem to have filled a need and gained traction. However, it is opinionated and there's duplication on both sides. It is both slightly higher-level and lower-level than needed. I didn't get the impression it's on the cards at this point.

New meta-package

There is a new meta-package, openstackclient (as opposed to python-openstackclient) to install the client together with all the plugins. A downside of the plugin architecture that some people don't like is that it requires multiple steps to get access to everything. The meta-package would resolve that.

This project is basically a requirement file, with some docs and a few tests (more may migrate there). It is ready for a "0.1" release, although the recommendation from the release team is to keep the versioning in sync with the main OSC.

I was wondering why python-tripleoclient wasn't in there yet, it turns out for now only projects that are in the global requirements are included. I got distracted with wanting to fix this before remembering that due to A Series Of Unfortunate Dependencies, installing the TripleO client also brings in the Mistral server... and cutting that chain is not as straightforward as I'd hoped (now documented in a Launchpad bug). Though if the other clients are there with their current set of issues, maybe it'd be ok to add it anyway.

From now on, projects should add themselves to the meta-package when creating a new plugin - this will be added to the checklist.

Another note, for this to work individual clients should not depend on OSC itself. They shouldn't anyway, because it brings in too many things.

Deprecating ALL the clients

OSC is meant to deprecate all of the existing clients. However there is no timeline, this is specific to each project. At the moment, Cinder and Neutron are getting up to speed but it's up to them to decide when to get rid of the old client - if at all.

In the case of Keystone, it took 2 releases to get deprecated, during which it established a very strict security fixes-only policy - even typo fixes were refused!

Migration path for users

There's a need for docs to help users familiar with the old clients to learn the new commands. stevemar already documented most of them, amotoki provided a mapping for Neutron commands as well.

Since the work is already done, these will be added to the docs. In theory they shouldn't get too stale, since no new commands are to be added to the old CLIs (?). The mapping should start with the old commands first, as this is how people will be searching.

Cliff topics: entry point freedom, adding to existing commands

dhellman dropped by on Tuesday morning to address Cliff-specific issues and feature requests. I missed the beginning but learnt a bunch of interesting things by listening in. Cliff is what gives us the base class structure and maps entry points.

Some security issues were brought up, and so was the possibility for users to lock down namespaces, giving them some control over what gets added. It would be nice if there were a tool to check which entry point was hooked from where. Cliff could be modified to include additional information on where a command comes from, and an independent tool could be written that uses that. I don't think a bug is open for it yet, though the goal seems to be to get it in for Pike.
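
Such a tool doesn't exist yet, but here's a rough sketch of the idea, using a couple of the entry point groups OSC plugins typically register under:

import pkg_resources

# Show which installed package provides each command in a given namespace.
for group in ('openstack.compute.v2', 'openstack.volume.v2'):
    for ep in pkg_resources.iter_entry_points(group):
        print('%s: %s -> %s' % (group, ep.name, ep.dist))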

Another feature request related to Cliff was to add to existing commands, which comes up a lot especially for quotas. "Purge" may be another candidate.

Pike community goal: Python 3

Python 3 should be supported by the client already. The only thing left to do is to modify devstack to load OSC with Python 3 by default.

Blueprints in OSC

In passing, I learnt that OSC doesn't really use blueprints nor want to - they are used for implementing Neutron commands at the moment, but the team would rather they don't get used for anything else.

If someone wishes to implement a feature, better to open it as a bug instead so that it doesn't get lost.

osc_lib and feature branches

It took a long time to implement osc_lib. What to do next time big changes are expected, e.g. removing things for the next big version?

Using feature branches is generally not recommended as deleting them and integrating them at the end requires a lot of effort.

In this case since removals will break things, working with infra to create a feature branch seems to make sense.

Glance v2

Old operations like create etc. are supported, but the new commands aren't (mainly around metadata and namespaces). We spent a bit of time trying to figure out what metadata definitions are (they're related to Glare, pre-Big Tent, and expanded the scope of Glance a bit; they could be used as artifacts for Nova, hardware affinity, or identifying what flags actually mean). They're currently exposed through Horizon.

Current verdict: they won't be implemented till someone asks for them.

Adding your project to the client, creating your own plugin

Finally, a couple of PTLs dropped by to ask for advice on how to get started creating their client (and avoid folks trying to do it in-tree). The most important and difficult part seems to be naming resources. There is no process to determine terminology, but try to keep it consistent with the existing commands (verbs are easier than nouns, e.g. new thing = create).

There's no good answer for quotas yet, though this may change. For now, taking a new 'load-balancer' resource as an example, this would look like:

$ openstack load-balancer quotas

In the future, we may get something similar to:

$ openstack quotas --load-balancer

though the extension to do this doesn't exist yet.

There is no OSC convention for cascade-delete (e.g. adding a flag). A purge command is underway. It would delete all the resources under a project.

With regard to versions/microversions, give the client a way to detect what it needs.

Good examples to learn from: Identity v3 is cleanly done. Volume v2 is very recent and includes a lot of lessons learnt.

On the other hand Object may not be such a good example, it's a bit different and taken directly from the Swift CLI.

I think the documentation for writing new plugins is over there. Most OSC developers and devs from related libraries hang out on #openstack-sdks. It's a good place to ask any client-related question!


FOSDEM 2017 - From the community devroom

A few weeks ago I happily went off to FOSDEM. Saturday I caught up with people; Sunday I spent most of the day in the incredibly busy Community devroom - right until I had to give up my seat for food. I really enjoyed the talks and hope the track manages to get a bigger devroom next year; dozens of people had to be kept out after every session ended (fire safety regulations).

Most talks lasted about 30 minutes and the videos should be available on the FOSDEM website. Here are some of the highlights and notes from my perspective.

Closing Loops (Greg Sutcliffe)

Link to talk

This talk was more about raising questions and figuring out best practices based on audience participation rather than offering The One Solution. In this context, a "loop" means an RFC still undecided, a discussion dying out, patches unreviewed. How do you decide when a topic has come to a conclusion?

It's important to remember that everybody has the best interests of the project at heart, this is more about blaming the systems we use.

Say, in a Request For Comments discussing significant changes on a mailing list, the discussion tends to peter out after a while, and it can be difficult to track how a proposal evolved. Foreman (Greg's project) decided to move the discussions to an RFC repository instead, but 6 months later it seems to have only moved the problem.

From the audience:

  • For Debian, the discussion ends "when someone starts the work" :)
  • LibreOffice has a "technical council" type meeting/discussion with about 40 people, on the phone. A wiki documents the process to join.
  • In the PHP world, the RFC author gets to choose when to call a vote and the deadline for it (at least 2 weeks ahead). Greg wondered if that might get divisive if there were too many 50/50 situations, but the audience member indicated that it wasn't the case, because most problems get solved during the discussions that precede the vote, so proposals are usually overwhelmingly accepted. Every contributor above a certain contribution threshold gets a vote.

The speaker seemed interested in trying out the technical council direction for his project.

Data science for community management (Manrique Lopez)

Link to talk

Communities need to have a common vision and self-awareness to see if they are walking toward it: what kind of people are part of the community, what are they doing... This helps with governance and transparency (fairness, trust).

What is the role of a community manager? Do communities even need to be managed? "Manage" is a strong word.

A community manager is interested in keeping their community alive, active, and productive. Having visibility into community health is important. There are a lot of tools out there with data relevant to this: JIRA, git, IRC, slack, StackOverflow, ... A lot of places, which individually are only one part of the bigger picture.

Rather than cobbling a bunch of scripts together to get at all that data, GrimoireLab does the work for you and presents it nicely using graphs. One example metric was the "Pony Factor," which I hadn't heard of: who's doing 50% of the commits? For example in the kernel, the pony factor is about 200 people/10 companies. That's a lot of people!

When you have that data you can see the evolution of the project over time: reviews, gender, demography, pony factor, etc, etc. Cauldron.io, in beta, lets you try it out for free for up to 5 Github organisations.

Getting your issues fixed (Tomer Brisker)

Link to talk

I think anyone trying to get help or start contributing as a user to an open-source project needs to watch this. Tomer summarised a ton of best practices that experienced open-source people "just know" and would likely never think to document.

A super rough summary:

  • Try to ask for help on IRC
  • Try the mailing list. If no one replies and you solve it yourself or find a workaround later, please self-reply. (Mandatory xkcd link)
  • Search the bug tracker. If there is a bug, add your input, your use case. If there are none, create a new issue and include as much information as possible: traces, logs, version (including of the plugin if relevant), screenshots... Anything to help the developers reproduce the issue. If this is a feature request, explain your use case.

Patches:

  • Tabs vs spaces: follow the coding guidelines for the project. If there aren't any follow the standards and conventions of the existing code base.
  • Make the commit message good. Explain "why" as completely as possible. Apparently Github even lets you include screenshots, so that the patch can be reviewed without having to pull the code. "Why" did you do it this way and not another way, explained in such a way people in the future can understand the decision.
  • Include tests. This protects against regressions, so that you don't have to fix the same bug two releases later.
  • Be kind. We're building a community. No one wants to be shouted at. Be patient.

Handle conflict, like a boss! (Deb Nicholson)

Link to talk

Conflicts often come from a piece of information that is missing. Conflict is natural. If you don't handle it, it will come back worse.

How do we minimise it? Try to understand the other people, where they're coming from.

For example:

  • Passion. When people get so excited about something they forget why they're doing it. Mismatched goals.
  • Identity. When someone's identity is linked to a task, not a goal. For example, the person known to be super helpful with wrangling the complex bug tracker, who'll end up sad when you finally upgrade to something better.

Don't leave conflict unattended or you'll end up in a snake pit. You don't owe your mental health to any FOSS project.

There are 3 styles of handling conflict. We tend to use them all, depending on the circumstances.

  • Avoidance. This is not good in a project! It festers. It looks like you don't care, like you're not connected to the community and its concerns.
  • Accommodation. Doing stuff you don't really want to in order to keep the group happy. For bigger issues it can make it look like you don't mind the way things are going. It works for small things.
  • Assertion. Can also be called Aggression. Name calling, etc. Not a good strategy. It wears out other contributors.

Conversations happen in many places. Document the decisions so that it doesn't feel like they're coming out of nowhere.

If you notice in passing a problem in the community you're probably missing some information. Gather more info in order to understand the source of conflict: misunderstanding goals, stepping on toes (e.g. that person always writes the release notes but someone else went ahead and did it this time instead, without asking), ... Is it new members clashing with the old ones? Go back to the historical vision, step out of the tactical discussion. Are there sekrit goals underway? For example a motivation you're not aware of, someone with different goals for their participation like a new job or internal deadlines...

If the conflict is based on fear, it can be harder to get at, e.g. someone afraid they can't learn a new skill if switching language. Create a back channel where it's safe to discuss, to try to deal with it.

Planning for future conflict:

  • Assume the best.
  • Have a code of conduct. Set expectations.
  • No name calling/ad hominem, only discuss technical merits.
  • Set an example.

We could do so much more in the FOSS world if we didn't spend so much time slagging each other :)


Mentoring 101 (Brian Proffitt)

Link to talk

This was a lively talk on both formal mentoring via programs such as the Google Summer of Code as well as daily mentoring.

Onboarding is important. This is not just about keeping people in but helping people get in.

As software project, should you want to participate in formal mentoring: have a plan. Formal programs have their own rules and structure.

As a participant, be a member of the community, pick a project that interests you. Be ready to commit. Be flexible - it takes time but is rewarding. Be ready to ask for help.

Undermentoring is a problem. Have clear, set goals for the mentee. Keep an eye on them while also giving them space.

There's an issue with mentees disappearing... They really should keep a line of communication open, even if only to say, "sorry, I can't participate anymore." (From personal experience mentoring, this is quite stressful, and it was a relief when the mentee admitted they could no longer participate.) You need to mentor proactively to avoid this.

  • Be consistent. E.g. your style may be authoritative (do it this way), democratic (collaboration) or coaching (guidance). Stick to one. (...with Brian noting that the last one is best for mentoring!)
  • Set expectations. Make sure the goals are aligned, define the process, avoid confusion.
  • Take an interest in the mentee. Listen actively. Build a relationship. Learn their style. Don't be a stalker though.
  • Wait before giving advice. It's easy to try to fix things when someone comes to you with a problem. You might be able to solve it, but that won't help them. Hit the pause button. Ask more questions, get to the heart of it, guide them to the solution. It takes time. You may not solve the problem. As long as it's not time-sensitive, that's ok.
  • Don't assume anything. Avoid stereotypes. Learn the details of a situation. Emphasise communication.

Remember this one thing: they aren't you!! Understand the mentee. This is not only about getting the project done.

A question from the audience was, how can I help my mentor as a mentee? Once again it's about being proactive: this is what I want to do, how do you help me do this? If it isn't working, reconsider the relationship.

How do you recruit mentors? Some projects are easier to get into than others. In oVirt, there's a separate mailing list and no formal process. Spread the word, see who's interested. What about a project where there are a lot of mentees yet no one ever answers the call for mentors? If it's part of a corporate project, a solution could be to make mentoring part of the job.

Impromptu community talk

One of the talks was cancelled, so a couple of people started a conversation around community and community events (too many pizzas, too much alcohol is the feedback I remember from that part).

The need for "cruel empathy" was brought up: sometimes meeting organisers should be jerks for good reasons. Loud, demeaning people are toxic and need to be called out. Tell them, in private: "I appreciate that you're here, etc, but you're sucking air out of the community." They may not realise they are doing it, so be nice about it.

The Shotwell community was brought up as one with great first contact.

Overcoming culture clash (Dave Neary)

Link to talk

This was a very interesting talk, mainly summarising information and graphs from books, which I'm going to list here. And hopefully read soon!

Basically a lot of what we do and how we interpret things depends on our upbringing (country, culture, ...)

  • The Tragedy of the Commons - Garrett Hardin.
  • Moral Tribes - Joshua Greene.
  • Cultures and Organizations - Geert Hofstede. That one brings up 6 dimensions of culture, for example tending to be uncomfortable with silence and filling it instantly, vs. leaving a couple of seconds' pause to make sure the person is finished.
  • Getting to Yes - Roger Fisher. This is more psychology-based than about hard negotiation.

There are no easy solutions. Focus on the relationship. Take small steps: one conversation at a time. Encourage local user groups - though keep them connected e.g. via sending mentors/leaders to these meetups to avoid silos. Have short mentoring programs. Learn about the cultural norms and sensitivity of the community. Avoid real-time meetings.

I contributed! But what now? (Bee Padalkar)

Link to talk

Based on the Fedora community, this talk was about improving contributor retention rates by finding patterns about successful contributors.

Why do people stay? According to a survey they ran, usually thanks to the strong, positive and inclusive community. Other reasons were identifying with the Fedora foundations (core values), giving back, constantly learning.

No matter how interesting the project is, if the community is poor people don't stick around.

  • Have a defined onboarding process.
  • Encourage people to speak up.
  • Assign action items early on, to create a sense of responsibility.
  • Let people know they are doing good work, give feedback (e.g. IRC karma).
  • Use inclusive and motivating language.
  • Encourage attending events to meet people in person.
  • Encourage consistency over activity. Let people contribute according to their own time.
  • Inspire, interact, give feedback.
  • Help people identify with your mission.

~

Looking forward to next year's event!


FOSDEM 2015


Last week-end I attended FOSDEM for the 7th time. It's kinda strange to say and think - if someone tells me they've been going to this or that open-source conference for 7 years I tend to assume they're hardcore and totally know what they're doing. I go to hang out with cool folks and learn new things. This year's FOSDEM didn't disappoint in that regard!

As usual, the conference was packed and most rooms filled up quickly, but I was happily surprised to see it was still possible to squeeze into some of the more popular rooms regardless. I think many devroom organisers are well aware of the frustration of not being able to get in, and they did a great job at encouraging/demanding that folks use all seats rather than leave spaces in the middle, which really helped (special kudos to the Legal devroom, which was in a smaller room in H). The main conference organisers also appear quite good at adjusting room sizes based on popularity year to year (e.g. the Mozilla room used to be utterly impossible to get into).

Some of the conference highlights from my perspective:

Identity Crisis: Are we who we say we are?

This was the first keynote on Saturday morning, which I think did a good job of bringing up the many possible ambiguities hidden in the "we" we use when contributing to a project. One of the strengths of open-source is that we're quick to say "we" and include everyone, but sometimes it bears more thinking or clarifying who we actually mean by "we" - sometimes two "we"s can describe different subgroups of contributors, even in the same sentence. Taking the time to think explicitly about who we mean, and to avoid unintended conflicts of interest, is important.

Link

Fog of War - The GNOME Trademark Battle

The story of what happened in the background during Gnome's trademark battle with Groupon last year, told by a Gnome board member and the lawyer who helped them on the case. There were interesting insights thanks to the lawyer's perspective in particular; he also took a guess at what possibly happened in the Groupon lawyers' minds during their risk analysis, and the consequences (e.g. "Groupon was dealing with an animal they'd never seen before": a charitable org not willing to be silenced or take a big donation). Not a kind reflection on Groupon.

Link

Why Samba moved to GPLv3: Why we moved, what we gained, what we lost.

Emboldened by having managed to get a seat in the Legal devroom, I decided to also stick around for the next talk. I hadn't attended a talk on GPLv3 in a few years and I wasn't to be disappointed. It was a very honest and funny talk - I knew of Jeremy Allison aka the Samba guy, but I didn't know he was such an entertaining speaker. Overall Samba seems very happy with the move to GPLv3; it simplified a lot of things for them, especially in terms of copyright management (some companies are just nasty), and most of the contributors and users they initially lost ended up returning (multiple closed-source vendors being bought out and leaving their customers in the cold likely helped). They felt really let down that the FSF didn't force their own projects to move as well (though I understand that is not the case anymore) and of course the Linux kernel being GPLv2-only is hurtful too. The speaker is convinced that all the scary stuff around GPLv3 is FUD and everyone should switch to using GPLv3+ right now if they don't have to link to v2 code. An audience member did raise an issue/unclear point with the v3 licence, for when a company rents a device to a customer (who doesn't actually own the device and thus perhaps? shouldn't be allowed to modify it).

Link

Participation metrics at Mozilla: A story of systems and data

For projects that depend as heavily on volunteer contributions as Mozilla does, understanding who the community is made of and where/when people are being lost or leave is really important. The speaker started by showing us some of the ways they tried and failed to measure participation, and what they ended up with. They defined what "participation" means by formalising paths a contributor might take across their systems (e.g. file a bug, comment on a bug, translate a string, etc.) and they extract and map the data they have onto these paths. This also enables them to deduplicate contributor information: for instance, having 100 translators and 300 developers doesn't mean you have 400 contributors, as people can do more than one thing. It also lets them identify more clearly whether someone is leaving the project altogether or simply moving to another area. Very interesting stuff!

This is work in progress but their current results and reports are available at Are we a million yet.

Link

Maintaining & growing a technical community: Mozilla Developer Network

The other Mozilla talk I attended explored the meaning of community, the motivations behind why people start contributing and why they continue to contribute, and how to help folks feel involved and want to contribute. The speaker made some really good points, and one that really stuck with me is that contributors ≠ community. It's really important to connect contributors to your community or they will not stick around! The example she used was getting people to contribute at hackathon-like events, but then seeing them disappear - as someone who's run such events, that certainly rang true: simply showing folks they can easily make a positive impact is not enough to make them come back or feel part of the community.

Link

Retooling Fedora: A Retrospective on Fedora 21 (and looking to 22)

I knew Fedora had been changing their model since the last release but I hadn't been following closely. This talk clarified the goals and the why, and I was very impressed with the beginning, where the speaker (Matthew Miller, the current Fedora Project Leader) took a really hard look at where distributions are today and why they appear to be becoming less relevant - for instance looking at the contrast between the number of open-source projects available on platforms like Github compared to what is actually packaged in the distro. People used to care about getting their software into the major distributions but it doesn't seem to matter as much nowadays. In that light, the "ring" graph shown toward the end, suggesting that the apps at the outer layer may not need inclusion criteria as strict and stringent as the more core OS components, totally makes sense, and the future looks interesting.

Link

¤

I continue to be hugely impressed by how much Mozilla cares about improving the experience for new and existing contributors (impressed but not surprised!). Their "Get Involved" page remains excellent, letting you get in touch with real people while showing at a glance all the different ways you can help, and having a mentored bugs process for new contributors is an awesome step up from simply tagging easy bugs. Keep rocking and showing us all how it's done, Mozilla!

Videos of the talks should be available in time on the FOSDEM website.


PyCon Ireland 2014

Together with a couple of colleagues, I wrote a short report about PyCon Ireland 2014. Once again the conference was a lot of fun and I'm looking forward to the 2015 edition! :)


Evolving Open-Source Night - June

The monthly Open-Source Night experiment continues. On Wednesday, June 19th, we had the June edition of Open-Source Night. Rory gave us a slightly-longer-than-15-minutes talk on OpenStreetMap and the kind of contributions the project welcomes (data data data, really!). I was the only one to volunteer for a lightning talk, in stark contrast with the last event, where we had multiple ones, both by people attending open-source night and by network people who happened to be visiting the hackerspace at the time.

Rather than doing an on-topic talk about an open-source project, I did a meta-talk about Open-Source Night itself and different ways in which it could evolve.

I make no secret that I don't think open-source night works very well in its current format. My goal (and measure for success) is to help people actually get started contributing.

Note: if you were at my deep dive on contributing to open-source and planning on attending the next open-source night, please do come!! This all just means that I think there are things that can be improved :)

Reminder: The current format

We're currently meeting on the 3rd Wednesday of the month. The event usually starts with two 15-minute talks, ideally one on the life of open-source (e.g. licences, version control, IRC, using the command-line...) and one on how to contribute to a specific project. Then there are a number of 5-minute-or-less lightning talks where people can introduce the project they will be working on during the evening. Then people break into groups or independently work on an open-source project of their choice.

Or that's the idea anyway. The talks usually work well and are inspiring, though they tend to run overtime and then people have trouble sitting at a table and doing things.

The future

These are my thoughts and ideas as to where open-source night might go in the future. These are not plans. I would like to hear feedback from interested people - attendees, would-be attendees, other organisers and thoughtful passersby.

Topics: General open-source vs Project-specific

I see value in both topics, but perhaps attendees and would-be attendees favour one or the other, both, or neither? I haven't really heard much about which people find most useful. I think the project-specific talks at least are interesting for both newcomers and established contributors, to see how other projects do things.

I believe 15 minutes is a good amount of time to get an overview, get inspired and get ideas. And not get too bored if already familiar with the topic.

I can see the value in focusing on one project for a full session and guiding people through contributing to it, and how efficient that would be. However, I can also see how it could be off-putting to people not particularly interested in the project (e.g. "I don't use that distro, why should I bother attending?"). Perhaps this would work as separate, off-shoot events, if people are interested in leading that kind of session (get in touch!).

Topics: All kind of contributions vs Focusing on code

At the moment, I try really hard to emphasise (and encourage speakers to do the same) all the kinds of contributions that can be made - and are well needed! - even though my own experience lies in code-based contributions. Maybe I should give up on being inclusive and focus on what I know best, rather than try to be all things to all people? Code-focused open-source events could go from learning to program to fixing a bug.

Initially I was hoping people would step up to talk and encourage people to join their area of interest (I think we have e.g. experienced open-source translators around...) but that hasn't happened.

This is actually a point where someone came up to me after the talk and said straight off they really liked the breadth of contributions demoed during the talks, in particular mentioning last month's talk by Guilherme on OpenMandriva that did a great job showing how someone can help, in a multitude of ways, even if they're not all that confident in their coding skills.

Format: More course-like?

A couple of days before the June OSS Night, I was pointed to this article on a really, really interesting summer course on learning to program the open-source way. That's a really cool concept. Should we entertain the thought of doing something similar during open-source night? E.g. a very workshop-like session on learning how to use version control, with exercises, one month, some other topic the next, etc.?

Probably not, but something to consider as a separate course with a shorter timeline. A course with one session a month will lose people and have little momentum. Is this the kind of things people have an interest in learning?

Format: Doing vs Listening/Talking

My goal with this event is still to get people started contributing. I'm not interested in organising a monthly night of talks. Finding speakers is stressful. If the talks aren't followed by some contributing action, to me the event is failing and I'm not interested in continuing to organise it. There are plenty of events around Dublin already where people can meet, talk tech and shoot the breeze.

With regard to open-source related talks, I think that's already handled well by the ILUG folks, who are now keen to set them up regularly again under the new chairmanship :) And we can join forces if that's the most attractive part of the event to folks. If your main interest is in having a regular night of open-source talks, get in touch and I'll be happy to help you make it happen in Tog. I'd attend with pleasure anyway, I'm just not interested in organising it and going speaker-hunting every 3 weeks.

I still believe we can make something really cool happen by putting in the same room people experienced in open-source together with newcomers interested in contributing. So I'm not giving up yet!

There's of course also the timeframe issue: with or without talks, an evening of maybe 3 hours is not a lot of time to accomplish something. Maybe we could (also/instead?) have events on Saturdays? And/or week-end workshops, Friday eve to Sunday?

HOWEVER, in any case an evening is still enough time to accomplish something, get started, get the momentum going, get unblocked and finish your contribution later at the week-end, in your own time.

You: Why are you here anyway*? ;)

* Or why weren't you? :) I'm just as interested in the answer to that!

Are you interested in learning how to contribute? Interested in helping and mentoring newcomers? What were you hoping this evening would be about?

I then invited people to have a productive discussion with me about this should they wish to, somewhat contradicting my own doing vs talking rant :-)

Please feel free and welcome to continue the discussion in the comments or by email, I would love to pick more brains and exchange ideas about this.

Django challenge

To avoid the talk being entirely meta (and in case people didn't care that much about all the blah blah blah and more about the doing!), I issued a challenge to attendees as well: this evening, run the Django unit test suite. If that's something you're already set up for, it'll take 2 minutes. If you're familiar with the concept but don't have all the dependencies set up yet, it'll take 20 minutes. If this is all new to you it might take 2 hours, but what you learn you'll be able to reuse when you start working on a project you care about in the future, and it'll take you less time then.
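
For reference, the gist of the challenge looks something like this (Django's contributing docs have the canonical, up-to-date steps):

$ git clone https://github.com/django/django.git
$ cd django
$ pip install -e .
$ cd tests
$ ./runtests.py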

One person took me up on it and it took them 10 minutes. This shows how possible it is to actually get the ball rolling during open-source night, get people to realise they're not that far away from a first contribution.

I feel I should give the disclaimer that since the last time I talked about how to contribute to Django, the Django folks added to their docs a tutorial on how to make your first contribution, which just makes the project that little bit more awesome (and this challenge, easier to solve!)

Next Open-Source Night: July

So next month. That'll be on the 17th of July. Are you interested in giving a talk? :)

If no one volunteers we'll have a session similar to the first event except with more lightning talks. Lightning talks don't have to be prepared, there's no need for slides or anything you don't fancy. It's as simple as chatting about what you plan on doing or would like to do during the evening, inviting others to join you if they'd like.

It can be like:

"Hello, I'm Chris, a contributor to AwesomeProject which is a project that does this and that and also that thing. At the moment we're looking for help in $area1, $area2, $area41, if you think that's cool and you'd like to help, I'll be sitting at that table over here, come and chat with me. Maybe I can help you find a good first task. Otherwise I'll be working on the defroglirnator for the project -- er if you have any experience in that area I'd love to chat to you too."

or maybe

"Hey, I'm J. Bloggs, I've been using Wordpress for a few years, I think it's an awesome project and I'd like to start giving back. Tonight my plan is to figure out the new contributor process - if you're interested too, we can do this together. Come and chat with me."

It doesn't matter if there is no existing contributor to the specific project in the room. Since there are people familiar with the way open-source projects and communities usually tick, they will be able to help you if you get stuck.


Ok, that's it! I'm hoping to also have the time to find a few good first tasks in a new project, maybe LibreOffice. If not, then we can figure it out together on the day :)

I'm very much looking forward to hearing your thoughts, suggestions and ideas about all this, and perhaps also see you on the 17th.


Open-Source Night #2: March 2013

On Wednesday the 20th, we had the 2nd edition of Open-Source Night in Tog. I think it went well. Once again there were about a dozen attendees, many of whom had never contributed to open-source before. A third of them were also in Tog for the first time. It might be too early to matter, but there was also very little overlap with the audience from last time.

Talks

We started the evening with 2 talks, meant to be about 15 minutes long each. Mark opened with a talk about open-source licences and the philosophy they encapsulate/were born from. Then I walked through how one would go about contributing to Django, basically clicking through the Django website and explaining the different tasks the project needs help with, particularly bug-fixing contributions.

After this, we had 2 lightning talks that were meant to last 2 to 5 minutes, to give people a chance to talk about a project they contribute to and get others to join in. This time the talks were more about ideas, which is fine, but both also ran overtime, which is less cool. I'm not sure either found additional contributors or would-be contributors for the evening.

Hands-on

The second part of the evening, the part that should be hands-on, didn't go so well. After the talks (which lasted 1h30 instead of 45mins) and a tour of the hackerspace for the new people, most attendees continued chatting instead of sitting down and getting things done. This especially saddened me for the ones who had never contributed before: the goal of the event is to help newcomers get started contributing while they have experienced people at hand to ask questions of.

Next time

I'm not sure how to improve this next time and help attendees actually get started doing stuff. Running overtime on the talks really hurt the rest of the evening, which is already such a short time in which to accomplish something. One idea: after my talk I was asked "How long would it take someone to go from nothing to being able to run the Django unit test suite?", and maybe this kind of well-defined, self-contained task would be a good way to help people get started. It's not a contribution yet, but it's a first, necessary step toward one (for code contributions in any case), and it could be fun to try and mix this with some sort of open badge.

Somewhat related announcement: open-source night won't happen on April 17th next month but probably on April 24th instead. Check the tog.ie calendar for confirmation. If you're interested in speaking on a topic relating to the life of open-source or a project in particular, please get in touch :)


Open-Source Night: Event #1 February 20th

On Wednesday, Tog hosted the first monthly Open-Source Night.

It's an event I'd been wanting to organise for a while, with an eye on it being hands-on and slanted toward helping interested people get started in open-source, but I wasn't sure what format would work best. I'm still not sure, but in the spirit of release early, release often, I thought I'd give it a shot for a few months and iterate.

For the first event, about a dozen people showed up. About 7-9 of them had already made some kind of contribution; most had a clear idea of the project they wanted to contribute to for the evening; and 3 were hesitant, not sure what they wanted to do.

Blackboard with project names

We started with 2 super short talks: an ill-prepared one from myself about what to do that night -- basically, find the contributor guidelines for the project you're interested in, and speak to the person next to you for help, since we had such a high ratio of experienced contributors. Triona followed with a talk on what she planned to do in the evening with Free Penguin, an open-source sewing pattern for Tux plushes. The maintainer hasn't updated it or responded to emails in years, so it seems the project will need to be forked in order to start improving the documentation. Open-source projects aren't all about code! :)

I directed the hesitant, project-less people toward Cheryl and the Dreamwidth project, which has an excellent reputation for being friendly to newcomers. Even without an experienced contributor around, I thought figuring things out together would be a fun learning experience. It may not have been that effective though: people were interested and looked around, but nothing got accomplished (perhaps that is to be expected of a first couple of hours getting acquainted with a project and open-source?). Further efforts were then thwarted by technical problems (Bugzilla being down). Cheryl's thought about this is that it's difficult to get into a project one doesn't feel strongly about (a similar downside applies to projects discovered via OpenHatch, as someone else mentioned to me).

There were a couple of serendipitous meetings, like the person wanting to get started with Debian packaging who happened to be sitting beside a Debian Developer.

But overall, I think encouraging people to come along with a project already in mind made it difficult to form groups and encourage collaboration, because people ended up working alone on the project they had planned for. It may not have been a great experience, particularly for those who didn't know anyone or hadn't been in Tog before.

I also need to become more familiar with projects that have good, specific non-programming tasks for newcomers. I had a general idea but wasted time trying to find the details. We had a graphic designer interested in contributing his design skills, translating, or participating in testing efforts, but I wasn't able to quickly find a good "here's a concrete task you can do now" for any of the better-known projects. He did discover Inkscape and became eager to learn it, so I hope to see him again in Tog in a few weeks for an intro workshop on Inkscape :-) (Thanks Borud!!)

Ideas on how to evolve the format for next time:

Choose one project and make it the main focus of the evening, at least at the start. That means only one presentation, a bit longer (ideally 20 minutes, max 30 -- we still need time to actually do stuff!), giving specific, step-by-step instructions: "this is where you go to find something to work on, this is how you choose a task." Afterwards, the people interested in the project work on it together -- several people on one task can work well, to encourage learning together and avoid getting stuck. People are still welcome to work on whatever else they want, of course. This was suggested by Ulrich, based on the recent Debian Bug Squashing Parties he attended.

Becky said there was interest in a GitHub pair-programming type of exercise. People upload code to GitHub that they never touch again; working on someone else's code with the help of the author could turn into an instructive experience. It would also be cool to see what a pull request looks like from both sides.

I think we can try both of these next time; the GitHub pair programming could start after the presentation, for people not interested in working on the highlighted project.

Now. The next step is to find a project to highlight and a willing contributor who'd like to present and guide, for the next session on March 20th. Ideas, volunteers? :)

Feedback and general thoughts on evolving the format are warmly welcome as well.


"Making your First Open-Source Contribution" Ireland Girl Geek Dinners

Hello good people who attended my talk on "making your first open-source contribution" for the January edition of the Girl Geek Dinners in Dublin. I hope you enjoyed it!

The slides are available on Slideshare - this version is actually missing a couple of recent updates, the main one being the mention of the Open-Source Night starting in Tog on Wednesday, February 20th. Come make your first contribution or help others get there!

If you're looking for a copy of the hand-out, you can download it here.

I'll be posting a transcript of the talk in the next couple of days. It still needs a bit of proof-reading :-) (It's posted now)

Many thanks to Christina for organising GGD!

Picture of the talk
(Thanks to @Imisaninja for the picture)


Interested in open-source? Come to Tog!

Are you an experienced open-source contributor interested in recruiting new people for your project?

Are you a fan of open-source who would be interested in contributing at any level but isn't sure how to?

Come to Tog's first Open-Source Night on February 20th!

These hands-on sessions aim to bring together experienced open-source contributors with people who would like to get started but aren’t sure where to start or would generally benefit from having someone to ask questions to.
[...]
Every month we will start with a couple of people speaking for 5-10 minutes each, introducing the project they are working on, the usual path for contributing, and where they are currently looking for help. Then we will form groups and work on making a contribution for the rest of the evening.


I'm hoping to make this into a regular monthly event. The current plan is to try it for a few months and see what it becomes. This will heavily depend on who attends, so help me recruit lots of interested people from both sides of the contributor spectrum in Dublin :)


Python meet-up, November 2012: DevOps 101

Shame on me for not blogging about PyCon Ireland this year -- another success, where I happily spent a big chunk of time greeting attendees at the registration desk and attended some interesting talks.

The stylish volunteers:

PyCon Ireland 2012 Volunteers

In the meantime, I took better notes at the November Python Ireland meet-up, where Ulrich took us through DevOps 101, going from the basics of what it is to all sorts of related tools that can help, mostly Python-based.

I thought it interesting that Ulrich worked from a specific definition of DevOps that differs from the ones I'd heard. There was a time when I used to call myself "a devop" because in addition to development work I was also maintaining production and test servers, handling deployments, etc. However, at the time I learnt from other sources that "a devop" was actually a system admin who also writes code to make their job easier... so I just stopped using the word ;) Ulrich based his talk on the premise of development and operations working together.

Development tends to move fast while operations is more stable and conservative, so why would you put these groups together?

The need to scale is an important one. You need the knowledge that the developers have of the application itself, but you also need to understand infrastructure. For instance if you deploy something and suddenly the number of requests spikes, you want devs and ops to be close together to solve the problem.

Following a "release early, release often" principle where you might deploy several times a day makes that collaboration important.

Environments are complicated and it's important for people to integrate; if the departments remain separate, people tend to stick with their own group.

DevOps enables better communication and understanding. Developers can move fast and deploy faster because they know what ops need. It also helps mitigate risk thanks to the regular deployments: fewer changes are deployed at once (helpful for determining what breaks) and deploying becomes easier.

The talk then moved on to tools, including advice on how to create a simple Django app to keep track of which versions of software are deployed where -- very simple, and yet usually enough.
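
I can imagine something like this -- my own hypothetical sketch of the kind of model he meant, not code from the talk:

    # models.py -- record which version of which application runs where
    from django.db import models

    class Deployment(models.Model):
        application = models.CharField(max_length=100)
        version = models.CharField(max_length=50)
        server = models.CharField(max_length=100)
        deployed_at = models.DateTimeField(auto_now_add=True)

A simple list view ordered by most recent deployment then answers "what is running where?" at a glance.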

Some of the projects that were mentioned:

  • Using Sword instead of Puppet (...which is still best, apparently) if you want to stick to Python, though it's a very young project. With such tools you can start versioning your configuration and environment, and record the reasons for changes in commit messages.
  • Fabric for ssh the Python way :) (see the sketch just after this list)
  • Virtualenv to isolate Python and manage dependencies
  • Buildout, like Maven for Python projects: used to maintain dependencies and deploy, though it can be overkill for many projects...
  • Augeas, which is not Python-specific and supports many configuration formats
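
To give a flavour of Fabric, here is a minimal sketch against the Fabric 1.x API of the time (the hostnames are made up):

    # fabfile.py -- run with `fab uptime`
    from fabric.api import env, run

    env.hosts = ['web1.example.com', 'web2.example.com']

    def uptime():
        # Executes the command over ssh on each host in env.hosts
        run('uptime')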

Ulrich then went through some of Martin Fowler's principles for refactoring, among other things, and also talked about Jenkins -- not Python, but known as the "Wordpress of Continuous Integration" because it has so many plugins. Sonar can be used for code quality.


We tried something different for the second part of the evening and watched a video of a Pyramid talk. I don't think it quite worked; even at 30 minutes it felt too long. Perhaps we should try again with a video more likely to appeal to a wider range of people, or simply shorter talks. Fair play to Diarmuid for standing up and taking the heat of answering questions afterwards ;)


EuroPython 2012 Let's go

I'm very happy to be going to EuroPython again this year :) Once more the talks I'm most looking forward to are mainly related to scalability, perhaps with a dash of internationalisation/encoding on the side. The tutorial on Django testing with Selenium should be interesting too.

Say hi if you see me! :)

Python Logo


Python meetup, June 2012: Message brokers & Python ActiveMQ

Last Wednesday was June's monthly Python Ireland meet-up, this time with a talk by Fernando Ciciliati about message brokers and Python.

After a general introduction to the advantages of and concepts behind messaging, the main part of the talk was about ActiveMQ, a message broker under the Apache umbrella. Fernando explained that this broker is mostly used by Java people, and it required a bit of work on their part to get it to play nicely with Python: they had to make significant changes and improvements to the existing libraries.

ActiveMQ is written in Java and designed for high performance as opposed to reliability, and the default configuration reflects this. If you want your messages to persist across reboots and never be dropped, you will have to tweak the default setup.

The community is very active... which also means lots of bugs! Fernando recommends reading the tracker, as there will certainly be a couple of bugs that are relevant to your situation.

Thanks to the provided binaries, ActiveMQ is easy to install. You get a nice web interface to manage and monitor your queues and messages.

By default, STOMP is not enabled and ActiveMQ is only configured to talk to Java via OpenWire. Enabling STOMP support is simply a matter of adding one line to an XML configuration file, and STOMP is the easiest way to talk to ActiveMQ from Python.

There are several Python libraries available -- stomp.py, stompy, and Twisted also has support... They selected stomp.py.
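
To give an idea, sending a message with stomp.py looks roughly like this (a sketch only -- the exact calls vary between stomp.py versions, so check the docs of the one you install):

    import stomp

    # Assumes a local ActiveMQ broker with the STOMP connector
    # enabled, listening on the default port 61613
    conn = stomp.Connection([('localhost', 61613)])
    conn.connect(wait=True)
    conn.send(body='hello world', destination='/queue/test')
    conn.disconnect()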

To test your ActiveMQ setup, you can simply telnet into the port and send messages, while using the web interface to see what's happening. The protocol is similar to SMTP.

The terminology used by ActiveMQ differs a bit from that of other message brokers, so be careful when comparing them. (I wonder if this is due to using STOMP vs. using AMQP?) For instance, "topics" seem to be a way to broadcast in ActiveMQ, rather than a way to filter interesting messages.

As usual, the technical talks continued in the pub afterwards :)


CESI 2012 "TEACHnology: Merging teaching and technology in schools"

Last Saturday I attended CESI 2012, a conference organised by the Computers in Education Society of Ireland. It's aimed at teachers, and this was the first time I attended. Even though as an IT professional I wasn't really the target audience, I enjoyed the talks and left the day with my head full of ideas, happy with the interesting conversations I'd had.

Random highlights and talk summaries from the day...

Opening talks

As part of the opening talks, Gerard McHugh told us that a lack of engaging content explains why western schools are failing. Even though many things in life have become more engaging over the decades, school hasn't: we need it to be more interactive, more collaborative, and to encourage participatory, active learning.

For the keynote, Stephen Howell animatedly encouraged school teachers to do the PR for third-level courses :-) and help students find the spark to discover whether they would enjoy a career in IT. The "3 Ds" should be taught in school: Design, Develop and Debug -- not all of them require computers or knowing how to code. Using software such as Scratch, students can be moved into a producer role, as opposed to what they do with most wonderful modern devices such as the iPad or Xbox, which are slanted toward consuming content. The goal is to get them to make their own games. The Scratch + Kinect demo was quite impressive :)

A brief history of the near future

John Hefferman looked at what technologies are currently being created, citing an interesting quote: "The future is already here, it's just not evenly distributed." If we look at what R&D departments are working on right now, there will be less of a surprise when it arrives in the classroom in 10 or 20 years. "The Horizon Report" tracks technological innovations that are coming up. John ended the talk with examples of how all this will affect history teaching and the classroom in general.

One of the questions was about how to bring some of these tools into the classroom, given school and curriculum constraints. The answer was that it's better to ask for forgiveness than permission, and to start under the radar initially.

Game-based learning in Irish education

Patrick Felicia is doing research on the impact of games in education. His surveys indicate that most teachers agree games are a good learning tool that improves plenty of skills (with a bit of hesitation regarding social skills, likely because people have different types of game in mind). However, despite agreeing on the benefits, only a tiny percentage have actually used any in their classroom. Most of the talk took the form of a conversation with the audience, aiming to figure out why this is and what people think about it.

The main problems strongly relate to the curriculum and to time constraints. There is not a lot of content tailored for the Irish market. Teachers suggested a portal of suitable games, and bringing workshops on how to use them into the schools, to make sure people attend and learn about them.

Someone suggested, inspired by Stephen Howell's keynote, having the children develop the content :-) This way they get to use their creativity, meaningful content is created, and they take responsibility for it. Teach to create!

The iGBL is an Irish conference on game-based learning.

The LiveScribe pen in action

Adrienne Webb explained to us what the LiveScribe pen is, how it can be used, and how she uses it to provide additional resources to her students. The pen is a very interesting piece of technology that records what you're writing along with your voice, which you can then upload or send as a video. There are cool little extra features, such as clicking an element of the video to hear the playback of what was said while that particular element was being drawn.

She used it to provide sample exam answers, so the students can focus on what they want and ask questions only about what they're having difficulty with. This worked better and more efficiently than trying to cover every question for everyone over 40 minutes. The students - in an exam year - really took to it.

Social networking with our students

Catherine Cronin related her very interesting experience of using social media such as Twitter and Google Plus to interact with students on a third-level module, touching on themes such as digital identity, privacy and authenticity.

There is a tension between the current model of delivering education (standardised, static and stale) and a student-centred model. Meaningful learning occurs with knowledge construction, not knowledge reproduction.

There are 5 stages to go through:

  • Awareness (of what is going on)
  • Commitment (which requires time and learning)
  • Access (to the appropriate technology)
  • Authority (to change things, which is easier at the 3rd level since you can do what you want for the modules you teach)
  • Design

This is about challenging students while still honouring who they are and how they work.

The Google Plus experiment was leaky (in that G+ makes it easy to re-share stuff that was submitted privately to a circle), though this accidentally made the conversation more authentic by allowing the author of a book they were studying, Jeff Jarvis, to join in.

There is of course a dilemma between being graded and having an authentic discussion. From the students' point of view, there was a wide spectrum of opinions with regard to the usefulness of social networks, thoughts on privacy, and comfort expressing opinions in public.

Student comments covered a wide range in terms of appreciation of the experiment. The general opinion seems to be that it was useful, but/and messy!

The talk's Q&A reflected on the dangers of having students participate in that kind of public discussion at a stage where they're likely still trying on different identities to find themselves. This is going to happen anyway, so it might as well happen within the context of a class, with someone to offer guidance.

Moodle in the classroom

Declan Donnelly gave a nice introduction to Moodle by explaining to us how his primary school uses it -- all of which is also applicable to the secondary level.

The presentation mainly focused on the possibilities offered by interactive exercises, notably by linking with SCORM-compatible software like Hot Potatoes, which is better than the basic Moodle quizzes.

Thanks to Moodle it's easier to share resources and do grading and assessment. It acts as a digital link between home and school, both to do the work (including drafts) and show off accomplishments to the family.

This link also applies to staff, as the school uses Moodle to publish policies and meeting notes in a section only accessible to staff.

Enhanced Learning Futures

After a full day of listening to tremendously interesting talks coming straight from the classroom trenches, I felt the "capstone address" rang a bit hollow -- it was an excellent presentation, but it nearly felt too polished!

Still, Steve Wheeler brought up plenty of good ideas and food for thought regarding the direction society, technology and education are heading. The more we use tools, the more they shape our behaviour.

Some examples of societal shifts: Amazon now sells more Kindle books than paper books. 1.5 billion mobile phones have been sold. Girls are catching up in terms of gaming trends. The gamification of learning can lead to deeper learning because we want to repeat the experience.

Learners need to acquire "digital wisdom". Lovely new term: darwikinism! The survival of the fittest content.


Looking forward to next year's event :-)


Electro-sewing workshop in Tog

Cheryl will be teaching the first electro-sewing workshop in Tog, on October 21st! Have a look at the Tog post to learn more, including how to sign up. This will likely be followed by more electro-fashion workshops in the future; keep an eye out for them. I'm really looking forward to it: combining technology with artistic fields is bound to result in wonderful projects.

Come along and sign up for the workshop: learn how to use conductive thread and create a small circuit to make your very own LED flower :)

~ ~ ~

Do you work on something cool, in open-source or open culture or general tech? Would you like to teach a workshop about it, give people a taste of why it is cool and interesting? Please get in touch!
