Managed Chaos
Naresh Jain's Random Thoughts on Software Development and Adventure Sports
     
Action Precedes Clarity

Remember the dot-com days of Webvan and Pets.com? We took traditional businesses and gave them an online presence. Rapidly acquiring a large customer base was the sole goal of many dot-coms. “If we can get enough users, we can easily figure out how to monetize it.” And all of this made perfect sense expressed in dollars and cents. I know people who melted down Yahoo Finance’s servers by checking their favourite stocks’ prices throughout the day, calculating their (paper) net worth in real time. If you were not part of this madness, you were certainly considered stupid.

But then on March 10, 2000, the perspective changed. Suddenly it became clear that this really was a bubble. Without real profits (or even revenue or cash flow), it was just a house of cards. In hindsight, the entire dot-com bust makes perfect sense. But why wasn’t this obvious to everyone (including me) to start with?

In complex adaptive systems, causality is retrospectively coherent, i.e. hindsight does not lead to foresight. When we look back at events, we can (relatively) easily construct a theory to explain why they occurred. In fact, when we look back, the reasons are so obvious that one can easily be fooled into believing that “if only we spent more time carefully analysing and thinking through the situation at hand, we could completely avoid unwanted events in the future.” Yet, time and again, we are caught by surprise, and it appears almost impossible to predict such events ahead of time. Call it the Black Swan effect or whatever name you fancy.

This effect gives rise to a classic management dilemma – the Predictability Paradox (pdf). In the zeal to improve the effectiveness and reliability of software development, managers institutionalise practices that unfortunately decrease, rather than increase, the predictability of the product’s success. Most companies spend an awful lot of effort and money analysing the past, deriving patterns and best practices, setting targets and creating processes to prevent past failures and produce ideal future outcomes. If software development were highly structured, if we had a stable environment, and if we had good data points from a million other projects, this approach might work. But for software development – a creative problem-solving domain with high levels of uncertainty, where each project has a unique context – these techniques (best practices) are rather dangerous.

In our domain,

  • We need to break the vague problem down into small safe-fail experiments.
  • Then execute each experiment in short iterative and incremental cycles.
  • We need to focus on tight feedback loops, which will help us adapt & co-evolve the system. (We cannot be stuck with analysis paralysis.)
  • We need to probe the system with experiments and find evolutionary practices.
  • And then apply these practices in a given context, for a short duration.
  • Speed and Sustainability are extremely important factors.

This is what I mean when I say “Action Precedes Clarity”.
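The experiment-driven loop described above can be sketched in code. This is purely illustrative (the probe names, the `adapt` rule, and the dot-com numbers are all invented for the example): run one small, cheap probe per cycle, feed its result back, and let that feedback amplify or kill further investment instead of scaling a big up-front bet.

```python
def safe_fail_loop(probes, adapt, max_cycles=10):
    """Run small safe-fail experiments in short cycles,
    using each result to adapt the remaining plan (tight feedback loop)."""
    queue = list(probes)
    learnings = []
    cycles = 0
    while queue and cycles < max_cycles:
        probe = queue.pop(0)          # one small, cheap experiment per cycle
        result = probe()
        learnings.append(result)
        queue = adapt(queue, result)  # feedback: amplify or damp before the next cycle
        cycles += 1
    return learnings

# Hypothetical probes for the dot-com hypothesis
# ("if we can get enough users, we can monetise it"):
def free_signup_probe():
    return {"probe": "signups", "users": 100, "paying": 0}

def paid_feature_probe():
    return {"probe": "paid_feature", "users": 100, "paying": 3}

def adapt(remaining, result):
    # If a probe shows no path to revenue, stop that line of investment
    # rather than scaling the bet.
    if result["paying"] == 0:
        return []  # hypothesis falsified cheaply, at small scale
    return remaining
```

Running `safe_fail_loop([free_signup_probe, paid_feature_probe], adapt)` stops after the first probe, because a free-signup result with zero paying users falsifies the monetisation hypothesis before any larger bet is placed; that early, cheap failure is the whole point of safe-fail.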

  • http://www.johannesbrodwall.com/ Johannes Brodwall

    I like this perspective a lot. The link between the predictability paradox and analysis paralysis was a new idea for me.

    When it comes to the dot-boom, I understand you to say that further analysis would not have prevented the individuals from ending up in the same situation. Is this a fatalistic observation, or could you outline a scenario where safe-fail experiments and tighter feedback loops could have helped the world (or individuals) avoid this particular crisis?

    • http://blogs.agilefaqs.com Naresh Jain

      Thanks Johannes. If we take the dot-boom example, first of all, being open to “critically” questioning what is going on is important. Unfortunately a lot of us end up with a herd mentality and fail to question the drivers. Putting together some hypotheses behind the dot-boom and then running safe-fail experiments with tight feedback loops could certainly help. For example, the main hypothesis of the dot-boom was – “If we can get enough users, we can figure out a way to monetise it.” If we could validate this model at a small scale, our confidence in it would increase. If we couldn’t, then we would spend creative time and energy validating the hypothesis first, thus avoiding banking on a flawed hypothesis. Having said this, I do acknowledge that identifying the hypothesis is hard. We do run into the “Invisible Gorilla effect.”

      • http://www.johannesbrodwall.com/ Johannes Brodwall

        It’s always difficult to get evidence on what might have been, but I agree with your speculation that small-scale validation as a cultural norm, instead of analysis as a cultural norm, sounds like a better strategy to avoid these sorts of mistakes.


Licensed under Creative Commons License