The Average Time To Green Game

Something special happened last week. I was in Bangalore doing some training at the request of my good friend Olve Maudal of Tandberg (now part of Cisco). A day on Test-Driven Development was scheduled for Monday, and I fell asleep Saturday night thinking about how to really get across the idea and nature of TDD to a group of developers. I woke up at 2am Sunday morning with The Average Time To Green Game pre-formed and named, ready in my head!


Game setup

  1. Each computer is given a label (e.g. Alligators, Bears, Cheetahs, etc.).
  2. Minimum of two people per computer.
  3. Each computer must have a TDD framework installed (better still, use CyberDojo).

Game play

  1. Every 10-15 minutes ring the Average Time To Green Bell (we found a small brass bell in a local shop).
  2. When you ring the bell you also start a timer and project it so everyone can see it. This timer starts at zero and increments second by second.
  3. The aim at each computer is then to get to green (all tests passing).
  4. When a computer has got to green, two things have to be recorded for that computer: the iteration number and the time it took to get to green since the bell (simply read the projected timer).
  5. When your computer is green you have to cover your laptop (we provided sheets of green paper) and wait till all the computers have got to green.
  6. The data for all computers is recorded in a spreadsheet.
  7. When all computers are at green everyone briefly looks at the graph made from the spreadsheet.
  8. Then you have to swap partners and computers and a new iteration starts.

Game Goal

The goal of the game, which was clearly and explicitly printed on the instruction sheets, was simply to control the average time to green across the whole group. The group naturally didn't understand that at first - they focused instead on the problem. The problem was completely trivial - Olve and I picked stripping backslash-newline pairs off a character buffer, as in C/C++ preprocessor logical lines. It was utterly fascinating to watch how things progressed, and we feel it worked really well (and, more to the point, I think the participants did too), both in the TDD sense and in the team-building sense.
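For anyone who wants to run the game themselves, here is a minimal sketch of the kind of code a pair might end up with for that problem. The function name strip_line_splices and the single test are my own illustration, not what was written on the day, but making a test like this pass is exactly what "getting to green" means at a computer:

    #include <assert.h>
    #include <string.h>

    /* Strip backslash-newline pairs from src into dst, as the C/C++
       preprocessor does when forming logical lines. dst must be at
       least as large as src. Illustrative sketch only. */
    static void strip_line_splices(const char *src, char *dst)
    {
        while (*src != '\0') {
            if (src[0] == '\\' && src[1] == '\n')
                src += 2;            /* drop the backslash-newline pair */
            else
                *dst++ = *src++;     /* copy everything else verbatim */
        }
        *dst = '\0';
    }

    /* One tiny test: when it passes, this computer is green. */
    int main(void)
    {
        char out[64];
        strip_line_splices("int x;\\\n int y;", out);
        assert(strcmp(out, "int x; int y;") == 0);
        return 0;
    }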

Game photos

  1. Graph of the average time to green over several iterations.
  2. Helping out at one computer that was holding the team up.
  3. Total number of tests passing split by computer.
  4. Relaxing at the end.

Game retrospective

  1. The group were all well above average in ability, so we could perhaps have run fewer iterations, or used a less trivial problem.
  2. We could have used staged goals. First measure the average time to green, then lower it, then control it.
  3. Once the group felt they had control of the average time to green we could have let them choose their own goals.
  4. We could have encouraged participants to write down their choice of strategies and their experience of pair programming.


Here's a follow-up blog entry.


5 comments:

  1. It would be interesting to see the results and how the participants improved as they figured out what was expected.

  2. Hi Andy. The initial result was essentially that all pairs failed (in varying degrees) to realize the aim of the game. The first time to green was quite large (especially for such a simple exercise) - about 6 minutes, if I recall correctly. A long time when you're sitting doing nothing. But we expected that. The second time to green was marginally quicker. People were still clinging to their existing habits. By the third iteration people had started to get it and were forming strategies. And, crucially, because we insisted on a change of partner and machine each iteration, the knowledge was making its way through the whole team. The exercise was deliberately trivial to try and leave as much spare capacity as possible for the main point of the game, which was to learn about TDD.

  3. This comment has been removed by a blog administrator.

  4. I don't really understand what they're supposed to learn from this game. Is my understanding correct that they complete the same task every iteration and each computer has the same task? It seems like it's more of a teamwork exercise, to help the others out, since it's about reducing the average time, rather than learning anything about TDD... Obviously the time will go down each iteration, as they redo the same task... Am I misunderstanding something here?

  5. Hi Jordan. Yes, they work on the same task each iteration. But the iterations are short - you need numerous iterations to get a good solution. At first, when the bell goes, the pairs are typically thinking only about finishing their current tweak. They are not thinking about getting to green. It is partly about reducing the average time to green, but mostly it is about controlling the average time to green: getting a sense of how far from green you are at any point in time. The aspects of TDD I hoped to convey using this exercise were not really the T in TDD but more the DD - the ongoing feedback nature of how tests can affect development. Hope this helps.
