Old analogue clock hanging on a wall papered with old newspaper pages

Drawing on my recent experiences with different delivery teams, on Tuesday morning I posted a thread on Mastodon and Twitter about two different responses to team interdependencies in software products. I like one much more than I like the other.

On initiative A, the team learned that a “strategic” solution to their needs would not be ready in the required timescales. So they asked developers to build a manual workaround to see them through the gap. The architecture isn’t pretty. It’s not what anyone would have wanted. But it works. Users are getting a service.

What’s more, the team are learning loads from the experience of running the manual service, and are able to feed the insights from that into delivery of the strategic solution. When it does arrive, the strategic service will be based less on guesswork, and more on learning that can only be gained by having a live service in contact with real users and real data at scale.

Meanwhile on initiative B, the team is being asked to wait for another team that has a similar “strategic” solution on their future roadmap. While they wait, the team will be disempowered. Their users are getting no value, and worse, the team are not learning anything about the real user needs.

This seems so wrong. It rests on a misunderstanding of scarcity and costs in digital service delivery. Here’s the thing: code is cheap; ignorance is costly.

Agile teams don’t accept and manage dependencies; they work actively to eliminate them. In particular, they should never accept a dependency on something that doesn’t exist yet. Any “target future state” is merely that, a target, until it is performing in the world as – in the words of the Internet Engineering Task Force – rough consensus and running code.

The agile way is not to wait. Build alphas and betas. Deliver value to users sooner. When the time comes to throw away that code in favour of something better, value all the learning we gained in the time it was live.

“The bad news is that a strong culture of ‘managing dependencies’ will hinder the implementation of the fundamental solutions.”

Illia Pavlichenko, ‘Eliminate dependencies, don’t manage them’

I should clarify that when I say “code is cheap” I’m not arguing for low quality software. The team that releases early and often, learning iteratively through delivering service to users, can do so with more rigour than the one that has to take a risky big bang approach.

After I posted this thread, several people whose opinions I trust raised the same point: what if you get stuck with a temporary workaround and the better, strategic thing never gets delivered? This problem is real, but let’s consider three reasons why it might happen – the good, the bad, and the ugly:

  • Good reason: Your MVP has generated new learning: that the core value in this product is smaller and more concentrated than you first thought. Congratulations, you just avoided wasted work on features that turn out not to be needed.
  • Bad reason: There was never really a commitment to build the strategic solution. Congratulations, you’ve just dodged the pain of being dependent on vapourware, and can now move forward by making your MVP more sustainable.
  • Ugly reason: The organisation is a feature factory and lacks commitment to continuous improvement. You’re not alone; this stuff is hard. But at least you now have a working product delivering some value and producing daily evidence of its own deficiencies.

In all three scenarios, I reckon we’re better off for having put a product into the world. But what am I missing?

Original source – Matt Edgar writes here
