
Here’s the Secret: Approach Improvements like Experiments

Some changes, especially those coming down from management, are enacted as rules. For instance, some companies put in place the rule “X new unit tests per programmer per iteration” before starting a more formal Agile transition. Another popular rule is “A story’s not done until its owner has applied the code reviewer’s comments.” The team or management establishes these rules for a good reason, such as improving quality or reducing costs.

This approach usually has two shortcomings, however:

  • The rules are assumed in advance to be the right remedy. No attempt is made to assess their effect on performance, and sometimes they cause an unnoticed setback elsewhere.
  • The rules are an imitation of others’ success. When you hear someone refer to a tactic or rule as a “best practice,” you know that the context for its success has been lost. No practice is universally best; all practices have contexts in which they are costlier or less effective than others.

A better approach is to consider each improvement an experiment, with all the implications of the scientific method. Formulate a hypothesis; test it out, and collect data; then analyze your data and draw a conclusion. Since a work environment is a complex adaptive system, other techniques such as variable isolation and control groups might well be a stretch. In addition, objective measures are not always possible, but having a hypothesis to prove or disprove, and data to analyze, will get you far. Just make sure the experimentation period allows the team sufficient time in the integration phase of the change curve.

Several teams that I’ve coached were theoretically interested in pairing but resistant to it in practice. They became considerably more receptive once I suggested experimenting with pairing. (I used the magic phrase, “Let’s give it a try.”) We usually limited the experiment to two or three iterations. I helped the teams establish clear rules (what to pair on, how often to switch, and the like), collect objective measurements such as defect data and completed story points, and solicit subjective measures such as satisfaction and knowledge acquisition. Time-boxing the experiment assured the participants of having a way out if they didn’t like it.
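
To make the data-collection side of such an experiment concrete, here is a minimal sketch in Python of how a team might record per-iteration metrics and compare the time-boxed pairing iterations against the preceding baseline. Every name and number in it is hypothetical, invented purely for illustration; it shows the shape of the hypothesis/data/conclusion loop, not any particular team’s tooling or results.

    from dataclasses import dataclass
    from statistics import mean

    @dataclass
    class IterationMetrics:
        iteration: int
        defects_found: int        # objective: defects logged during the iteration
        story_points_done: int    # objective: completed story points
        satisfaction: float       # subjective: average of a 1-5 team survey

    # Hypothetical numbers, purely for illustration.
    baseline = [                       # iterations before the pairing experiment
        IterationMetrics(1, 9, 21, 3.2),
        IterationMetrics(2, 7, 24, 3.4),
        IterationMetrics(3, 8, 22, 3.3),
    ]
    experiment = [                     # the time-boxed pairing iterations
        IterationMetrics(4, 5, 20, 3.9),
        IterationMetrics(5, 4, 23, 4.1),
    ]

    def summarize(label, iterations):
        print(f"{label}: "
              f"avg defects = {mean(i.defects_found for i in iterations):.1f}, "
              f"avg points = {mean(i.story_points_done for i in iterations):.1f}, "
              f"avg satisfaction = {mean(i.satisfaction for i in iterations):.1f}")

    # Hypothesis to check: pairing reduces defects without hurting throughput.
    summarize("Baseline  ", baseline)
    summarize("Experiment", experiment)

The tooling itself hardly matters; a spreadsheet works just as well. What matters is agreeing up front which measures will confirm or refute the hypothesis, and then actually looking at them when the time box ends.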

Reframing a change as an experiment — and carrying it out as a bona fide experiment — is a great way to get buy-in. For stronger buy-in and participation in an experiment, it’s better to have the team agree to run it. If the team can reach consensus on an experiment, they’ll feel more curious about the hypothesis and more enthusiastic about participating than if they’re told, “You are going to pair up for the next two iterations, and after we process the results, we’ll tell you whether to keep pairing or not.”

It’s important to realize that you’re not just reframing a change as an experiment. Agile teams and organizations are such complex systems that you don’t truly know whether the change will result in a net improvement. For instance, many people consider code reviews a “best practice” for enforcing standards and finding omissions. In some teams, code reviews do achieve those goals, but they also cause considerable context switching, delays in story completion, and the costly overhead of managing review findings.

Beware the Law of Unintended Consequences. Whatever action you take, you might receive an unexpected benefit, cause an undesirable effect, or make the underlying problem worse. As you formulate your hypothesis and the experiment, consider what other consequences might occur. For instance, one Agility assessment I conducted revealed that senior developers used the code review mechanism to sneak in many changes and gold-plating tasks, thereby messing up estimates and bypassing the backlog prioritization mechanism. For a systematic exploration of possible unintended consequences, consider their five possible causes as described by Robert Merton: ignorance, error, immediate interest, basic values, and self-defeating prophecy.

Finally, each experiment has value on several dimensions. The obvious one is the added learning. Another one is team building: the team committed to a shared activity, performed it, and followed up on it. And there is the value of humility when an experiment’s results are nothing like the hypothesis.

Author Bio:

Gil Broza’s mission is to make software development more effective, humane, and responsible. He helps people and organizations pick up where Scrum left off, especially on the technical and human sides of Agile. He is the author of The Human Side of Agile: How to Help Your Team Deliver, the definitive practical book on leading Agile teams to greatness. On any given day, you can find Gil coaching, consulting, training, speaking, facilitating, and writing. For quick, free advice to help you break the cycle of Agile mediocrity and move toward the promised benefits of Agile, get Gil’s popular 20-session mini-program, “Something Happened on the Way to Agile.”

