Recapping Agile Tour Toronto (part 2)

[Part 1 of my recap (“the morning”) is in a previous blog entry.]

After lunch I decided to investigate the Open Spaces area, and the title “Painless Agile Adoption” caught my eye. Having told past managers that Agile adoption is necessarily painful and that not everyone will make it through the transition, I knew this was going to be an interesting discussion.

It was just Gil Broza and myself initially, but shortly after I started listing off some of the pain points I’d seen in a recent transition we were joined by Gino Marckx. I didn’t take any notes because the discussion was too engrossing, but from what I recall we distilled many of the specific pains down to a few causes (e.g. a lack of up-front preparation/education; the fact that changing any established pattern causes pain) and then got into a great discussion about whether the pain was avoidable.

For me, one of the revelations was that the team going through the transition needs to have a problem to solve, i.e. a pain that they want to reduce/remove – otherwise why would they want to adopt Agile? “If it ain’t broke, don’t fix it!” We came up with the idea of a pain baseline, i.e. the pre-adoption level. We agreed there’s the “fixed cost” of the growing pain (because any change involves some pain) but that there are also “unnecessary” pains, e.g. making a Scrum team take a complex route to solving a problem rather than investing (time and/or money) in helping them solve it the most efficient way.

Certainly on my last project the team didn’t have any pain/problem with the status quo, so introducing some elements of Agile must have felt like a lot of unnecessary pain. However, if the senior management (who directed the team to adopt Agile) had shared their rationale, and maybe even pushed down some of their pain, then the team might have been more accepting. One motivation was clearly to save money, so how could the team share that pain? Well, at the same time as trying to introduce Agile, the decision was made to stop paying overtime. If that decision had been taken sooner, the team would have wanted to find ways to be more efficient – most of the team didn’t mind working extra hours when they were being paid for it, but very few were willing to do it for free. That’s a pain that Agile could have helped them address. Instead it appeared to some that the Agile transition was the cause of them losing OT – definitely an unnecessary pain, which hindered adoption and, I believe, led to some resentment towards Agile.

We concluded that there is no such thing as “Painless Agile Adoption”, just degrees of pain and hopefully the team see their baseline pain levels reduce as they begin to adopt Agile practices. Gil kept the sheet with our notes on it and he plans to write something on this topic soon, so I’m looking forward to reading that.


I was torn as to which session to attend next: “An introduction to Agile Through the Theory of Constraints” (J. B. Rainsberger) and “An Introduction to Business Value Engineering” (Joseph Little) both looked interesting but I thought “Project Vital Signs” (Stelios Pantazopoulos of ThoughtWorks) would be most useful as metrics is an area where I’ve encountered challenges during a transition from waterfall to more Agile methods.

Stelios started by saying that the “State of the Union” for project management was one of a poor track record with too many failures, and that many status reports simply show whether a project is on track – we need to show how & why a project is on/off track.

He used the analogy of the project as a patient: we need to track its medical history, apply some tests, track its vital signs, and apply our experience in order to reach a diagnosis and then recommend the appropriate treatment. His definition of a project’s vital signs is the four PMBOK project levers (scope, quality, schedule & budget) plus the team. He proposed that the vital signs be measured and shared in terms of scope burn-up, the current state of delivery, budget burn-down, delivery quality metrics, and team dynamics. These can be measured as:

  • Scope burn-up: the size of the scope (product backlog), for which I would suggest using total story points, shown as a graph of the total backlog, the story points delivered so far, and a trend line. (There’s a rough sketch of how these burn numbers could be computed after this list.)
  • Current state of delivery: a snapshot of the story board (product backlog), showing who is working on which story. This seemed to me to be too much detail unless the whole project team is only a few people; I need to think some more about how this could work for a large project (50+ people).
  • Budget burn-down: similar to a sprint burn-down, except it shows the initial budget, the current remaining budget and a trend based on the “velocity”.
  • Delivery quality: the usual defect-tracking stats, i.e. the number of bugs tallied by severity & priority. Stelios also suggested using test coverage stats, but I think bug stats are far more commonly used.
  • Team dynamics: the team’s assessment of their “maturity” as defined by Tuckman’s stages (forming, storming, norming, performing). Stelios said he collects this as an anonymous assessment as part of the team’s Retrospective. Looking at his example data, I’m concerned that after just a couple of iterations the team felt they had hit the Performing stage – this seems too quick, so I question the validity / usefulness of this data.

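As an aside, here is a minimal sketch (my own illustration, not something Stelios presented) of how the raw series for the scope burn-up and budget burn-down charts could be derived from per-iteration data before charting them; the Iteration fields and the simple velocity-based projection are my own assumptions.

```python
# My own sketch (not from Stelios' slides): compute the series behind a
# scope burn-up chart and a budget burn-down chart from per-iteration data.
from dataclasses import dataclass
from typing import List

@dataclass
class Iteration:
    total_backlog_points: int  # total scope (story points) known at end of iteration
    points_delivered: int      # story points accepted during the iteration
    budget_spent: float        # spend during the iteration

def burn_series(iterations: List[Iteration], initial_budget: float):
    """Return the burn-up series (total backlog vs cumulative delivered),
    the budget burn-down series, and a naive velocity-based projection."""
    backlog_line, delivered_line, budget_line = [], [], []
    delivered, remaining = 0, initial_budget
    for it in iterations:
        delivered += it.points_delivered
        remaining -= it.budget_spent
        backlog_line.append(it.total_backlog_points)
        delivered_line.append(delivered)
        budget_line.append(remaining)

    n = len(iterations)
    velocity = delivered / n if n else 0.0                      # average points per iteration
    spend_rate = (initial_budget - remaining) / n if n else 0.0  # average spend per iteration
    projection = {
        "delivered_next_iteration": delivered + velocity,
        "budget_remaining_next_iteration": remaining - spend_rate,
    }
    return backlog_line, delivered_line, budget_line, projection

# Example with made-up numbers:
history = [
    Iteration(120, 18, 20000.0),
    Iteration(130, 22, 21000.0),
    Iteration(130, 20, 19500.0),
]
print(burn_series(history, initial_budget=250000.0))
```

On a real project wall you would plot these series as the burn-up and burn-down charts rather than print them, and the trend/projection could be as simple or as sophisticated as you like.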
One of Stelios’ slides was a project wall which showed all the vital signs – it was a dashboard view of the project’s status. (It was quite similar to things I’ve done in the past, so it’s reassuring to see we’re in sync.)

During the Q&A section, someone asked which (if any) tools were used to maintain the charts; Stelios said it was currently done by manually entering the data into Excel but agreed that it should be possible to pull at least some of the info from other tools such as the build system.


For the last session of the day, I went to Declan Whelan’s session, entitled “Building a learning culture on your agile team”. This is an area where I feel my recent projects were quite weak, although part of the reason was management pressure to develop (produce code) rather than develop (grow) as a team. [There were a lot of references in this presentation, so my notes are more like pointers for additional research rather than specific things I learned in the session.]

Declan began by mentioning the Satir Change Model and how short-circuiting the learning curve could lead to a lower “new status quo” than could otherwise have been achieved – the summary being “don’t rush it”.

We were given an exercise to do in pairs (or in our case a group of 3): ask your partner their name, how many siblings they have, and their biggest challenge as a kid. However, we were told, if it’s too painful then either pick another challenge or pass – this was important because it gave people an “out” if they needed it. It was an interesting ice-breaker and probably would be a good exercise at the start of a training session or retrospective.

A few people joked about how Declan was giving us lots of things to add to our Amazon wishlists.

I agree with Declan’s assertion that it’s important to focus learning on bottlenecks and challenges – it makes sense to address those areas where the team needs help. He also said that an expert is not necessarily the best teacher because they can’t think like a novice, and I’ve certainly seen this myself. “In the beginner’s mind there are many possibilities, but in the expert’s there are few.” – Shunryu Suzuki.

As for how people learn, Declan said they pass through three stages of behaviour: Shu Ha Ri, or following, detaching, and fluent (as per Alistair Cockburn). He also referred to the Dreyfus model of skill acquisition but didn’t go into any detail.

He also mentioned that there is some research to support the idea of “promiscuous pairing”, in which pair programmers frequently switch partners, sometimes daily or even more often. I can see the theoretical benefits, but I suspect it requires a very mature team for it to work efficiently.

We watched a clip from TED called “Tinkering School”, which was fun and was one of the few things I tweeted about during the day – it seems like such a great idea. A useful tool that I hadn’t encountered before is the “gold card”: a “fun” story in the product backlog, for example investigating a new technology and reporting back to the team.


The wrap up (“Highlights of the Day”) was done in an interesting way I’d not seen before: the organisers named each session in turn, had the attendees stand up and asked people to give some brief feedback.


I really enjoyed the day and have just two suggestions for next time:

  1. declare the tags we should use – I saw people tweeting using any (sometimes all!) of #agiletourtoronto, #agiletour and #agiletoronto
  2. find a way to encourage attendees to mingle at the end – it was a shame many people left as soon as the last session ended instead of staying for the wrap up and socialising afterwards, but I understand some people had travelled a long way or had other commitments.

I’ll definitely be at the next event; hopefully I’ll even have something I can present!
