I wrote a brief post saying I was looking forward to the one-day event, and I definitely wasn’t disappointed. I was so engrossed I failed to take any photos, but I did take some notes during the day, although I had hoped the presenters’ material would be made available sooner. (Speaking of sooner, I know it’s taken me forever to post this recap – I’ve been kept really busy by both my photography and job hunting.) Anyway, on with the show…
After a very simple sign-in and grabbing breakfast, the first challenge was picking which sessions to attend. I really liked the idea of using Kanban cards – it was a simple introduction for anyone not familiar with Kanban, it gave attendees a physical reminder of where & when the sessions were, and it gave the organisers an easy way to control the number of attendees in each session (once all the cards for a session are gone, it’s full).
First up was the keynote: “Agility at Scale: Agile Software Development in the Real-World” by Scott Ambler. He started by pointing out that “scale” doesn’t just mean large teams, and it’s not limited to software development, despite the talk’s title. Scott had some good points (e.g. about 80% of Agile teams are working with legacy systems) and suggestions (in the daily stand-up, ask “what issues do you foresee?”), but his tone felt quite adversarial. He proposed that Agile projects should use some time before the first sprint to create a vision statement and sketch out the user interface, as well as creating the initial product backlog stories. A couple of quotes I jotted down because they seemed quite extreme: “Giving an estimate up front is unethical” and “Using Excel for burndown charts is just bureaucracy”; I wonder if he was just being provocative to wake people up. I definitely agree with a couple of his other thoughts, though: coordinating projects means testing against other systems which are still in development and trying to predict the status of those other releases when your project is ready to go live; also, his advice to prove the architecture early – “fail quickly” – definitely resonated with me.
For the first session, I went to Gil Broza‘s talk: “A Product Backlog Is Not Enough“. Having been a project manager / scrum master on some big projects, I sometimes felt we lacked a more cohesive overview, a “big picture” vision of what we were doing (and why!), so I hoped to pick up some tips from Gil and the attendees.
We started by discussing mission statements, project objectives and SMART goals. One acronym that was new to me was “SUCCESS”: Simple, Unexpected, Concrete, Credible, Emotional, Story. (I’m not sure if Gil mentioned the source* but a quick Google search reveals it’s from “Made To Stick: Why Some Ideas Survive and Others Die” by Chip Heath and Dan Heath.) I found a good summary of SUCCESS in a review of the book:
- Simplicity: Don’t just shorten a message, simplify it so that it’s easy to remember and easily repeated. The authors write, “The Golden Rule is the ultimate model of simplicity: a one-sentence statement so profound that an individual could spend a lifetime learning to follow it.”
- Unexpectedness: Use surprising statistics to wake up a meeting and then generate interest and curiosity.
- Concreteness: Explain ideas in terms of human actions and speak in concrete language. An example of abstract thought in concrete language is “A bird in the hand is worth two in the bush.” Use terms that will mean the same thing to everyone in the audience.
- Credibility: Ideas have to carry their own credentials. Instead of simply presenting hard numbers, make data accessible and understandable.
- Emotions: We are wired to feel things for people, not abstractions. Make people care about the idea by appealing to their emotions.
- Stories: People respond to narrative tales. Putting an idea into the context of a story will draw in the listener and help him remember the idea.
Another key point that Gil made was that the mission statement “sets the destination without saying how to get there” – this means the Product Owner describes what he needs but the Scrum team is left to work out the details of how to get there.
I enjoyed Gil’s presentation and his use of some real world examples; I definitely plan to use at least a mission statement and probably project objectives on my next project, although whether they will truly be SMART and SUCCESSful only time will tell 🙂
[*Gil sent me an email confirming that SUCCESS came from “Made To Stick” – he did mention it but it wasn’t on his slides, which I was using to refresh my memory. He also wrote a review of the book on the Amazon site.]
I picked Thanou Thirakul‘s Experience Report “Large Scale Testing In Agile Time” for my second morning session as testing (especially with a large legacy code base and no automated testing) has been a problem on some of my projects.
Thanou works for Intelliware on a prescription project using Eclipse SWT and CruiseControl. He described how the team lost faith in their build process as a result of too many build failures, erroneous failure reports, and long gaps between clean builds (in the order of 8 days). At the root of the problem were big, complex Ant build scripts, which resulted in a fear of changing them, as well as a feeling that they (the build scripts) weren’t worth spending time on. Their solution came in a few steps:
- they broke the old build system into its component parts (build and test) and used Maven and Hudson to manage dependencies and testing.
- in order to address the false failure reports, they used VMs on the client boxes and, before each test, reset the environment to a known start point.
- to speed up integration tests, the team built their own tool to manage and distribute the execution of around 1200 test cases. Each client (approx. 3 VM client environments per physical box) notifies the test server that it is ready to run a test, and the server passes it the test case information. It struck me as a very neat way of implementing a self-regulating load balancing system.
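Thanou didn’t show the tool itself, but the pull-based dispatch he described – clients ask for work when they’re ready, rather than the server pushing it out – can be sketched in a few lines of Python. This is just my own illustration of the idea, not Intelliware’s implementation; the names (`TestServer`, `client_loop`) are invented:

```python
import queue
import threading

class TestServer:
    """Holds the pending test cases; clients pull work when ready."""
    def __init__(self, test_cases):
        self._pending = queue.Queue()
        for tc in test_cases:
            self._pending.put(tc)

    def next_test(self):
        """A client calls this to say 'I'm ready'; returns a test or None."""
        try:
            return self._pending.get_nowait()
        except queue.Empty:
            return None  # all tests have been handed out

def client_loop(server, results, run_test):
    # Each VM client keeps asking for work until the queue is drained,
    # so faster clients naturally pick up more tests (self-balancing).
    while (test := server.next_test()) is not None:
        results.append((test, run_test(test)))

# Demo: 8 simulated test cases spread across 3 clients.
server = TestServer([f"test_{i}" for i in range(8)])
results = []
threads = [threading.Thread(target=client_loop,
                            args=(server, results, lambda t: "pass"))
           for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results))  # every test executed exactly once
```

The nice property, as Thanou described it, is that no scheduler needs to know how fast each client is – the queue balances itself.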
The team’s findings were (1) that the maintenance of integration testing is expensive, and therefore there needs to be a balance of cost vs value; (2) teams shouldn’t be afraid to create a team to focus on specific problems or build tools; and (3) tests should start from a known good state.
Some of these things were not news to me (e.g. breaking up an overly complex build system, and having a clean start point for each test) but it was interesting to hear how Thanou’s team identified and tackled these problems. I was most intrigued by the client-server system they developed to handle automated testing; I’m sure this is a concept I can use in the future.
[I’ve realised this has become quite a long post, so I’ll publish this now and then write a separate post for the afternoon sessions.]
[Update: the second post, covering the afternoon sessions, is now online!]