In previous posts we saw that the generation of business value via IT projects essentially follows a power law distribution. By examining the nature of power law systems, we went on to conclude that adaptive strategies offer the most effective way of managing risk in such environments. We will now begin to explore what a fully adaptive risk management strategy might look like, using as our starting point an overview of the key principles underlying nature’s great adaptive risk management engine: Natural Selection.

Evolutionary ideas have recently been gaining prominence in studies of organisational behaviour and efficiency from two directions:

  • Evolutionary Micro-economics (top down), in response to the limitations of traditional rationalist supply/demand models based on Game Theory.
  • Adaptive Project Methodologies (bottom up), focussing on evolutionary design and iterative delivery to mitigate the inherent unpredictability of requirements and market conditions.

The most fundamental principle on which these ideas are based is the notion of a replicator. A replicator can be defined as any entity of which copies are made, where that entity has some causal influence on its own probability of being propagated. The classic biological example is a gene, which is copied during cell division and which influences its probability of being propagated via the environmental effects of the proteins it encodes (and in turn, the effects of the composite structures out of which those proteins are built). The specific DNA sequence of a gene is known as its genotype, and the corresponding expression of that genotype is its phenotype.

In The Extended Phenotype, Richard Dawkins switched the primary focus of evolutionary studies away from the organism. He showed that “organism” is ultimately just an arbitrary point along the scale of phenotypes: from specific proteins at one end, up through more complex protein structures to organs, organisms and social groups at the other. The fundamental unit driving natural selection forwards across the generations is the replicator or Selfish Gene – everything else, from protein to social group, is just an artefactual by-product (one that impacts the probability of further replicator propagation).

Other instances of replicators include memes. A meme is “any unit of cultural information, such as a practice or idea, that gets transmitted verbally or by repeated action from one mind to another. Examples include thoughts, ideas, theories, practices…” When we consider the field of IT project delivery within this context, we can spot obvious correlations. Business cases are memes which, when ratified, result in the generation of a suite of phenotypic artefacts, ranging from marketing strategies and IT delivery teams, to unit tests, SCM repositories and deployed production systems, to new revenue streams. These artefacts end up shaping their business division, organisation and industry sector, and in doing so determine the probability of the business case propagating and spawning further system releases, new marketing campaigns, etc.

There is a key lesson for us as IT practitioners to take from this, one that evolutionary biologists have already learnt. It is that artefacts (be they organisms, social groups, IT projects or marketing campaigns) don’t ultimately matter. The thing that matters is the replicator: the business case or gene. We need to follow evolutionary biology’s re-orientation towards the gene, shift our focus away from IT projects, and create practices centred solely on the business case. I now believe that “projects” can actually be an impediment to the efficient generation of real business value from IT. They act as an inflexible body of emotional and financial investment that creates resistance both to change and to termination when such change renders the business case no longer viable in real terms (which is when real damage is inflicted). We will discuss this topic further in subsequent posts. Before that, however, we need to examine the nature of selective environments – which will be the subject of the next post. In doing so we will hopefully shed some light on the factors that have led to our current project-orientated IT world view.

 


In the previous post, we explored the behavioural differences of simple and complex systems. We saw that complex systems display power law distributions, the key characteristics of which are increased unpredictability and an increased likelihood of extreme events when compared to simple Gaussian systems. Additionally, the existence of positive and negative feedback loops makes them more resistant to causal analysis: the potential for repeated amplification of trivial trigger events can make it very difficult to understand what is going on (see the 1987 stock market crash as an example). We will now examine the implications of those differences for risk management, focussing in particular on IT project delivery.

Conventional business management practices are based on implicit assumptions we have inherited from our cultural past, assumptions that ultimately have their roots in the scientific tradition: we use specific instances or case studies to infer a generalised understanding of a domain; that understanding then allows us to predict it, and once we can predict it we can define an effective strategy for managing it. On the other hand, in our everyday lives and throughout the natural world, reactive risk management is the norm. For example, to avoid being run over by a car when crossing the road we do not need to understand how a car works but only what it looks like (i.e. fast moving metallic thing on wheels). Similarly, to avoid being eaten by a lion, a deer does not need to understand big cat physiology but only what one looks like (i.e. fast moving furry thing on legs).

From this, we can see that risk management strategies can be grouped at the most basic level into one of two categories:

  1. Cause based:
    • Standard business practice
    • Analyse cause, then define strategy
    • Predictive/Pro-active
  2. Observation based:
    • Normal practice in daily life and natural world
    • Observe the threat, then react (no causal analysis required)
    • Adaptive/Reactive

In situations where they both work, the latter is obviously inferior as it affords no potential for proactivity and forward planning. However the former is critically dependent on the predictability of the thing being managed.

We have previously seen that IT project success in real terms appears to display power law behaviour. Possible explanations for this might include:

  1. There is a simple causal relationship with an underlying pseudo-power law phenomenon. It might just be that the size of investment in IT projects follows a roughly power law distribution and that the returns generated are directly proportional to that investment: most projects receive small to moderate investment whilst a few get massive investment, and that is what results in the correlated power law distribution of generated business value (see the simulation sketch after this list).
  2. The world of IT project delivery is a complex but deterministic system, hence it displays power law behaviour.
  3. IT project delivery has dependencies on truly random phenomena, hence the generation of delivered business value displays power law behaviour.
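As a rough illustration of the first explanation, here is a minimal simulation sketch (assuming a Python environment with NumPy; all figures are illustrative, not real project data): if investment sizes follow a power law and returns are roughly proportional to investment, the generated value simply inherits the heavy tail.

```python
import numpy as np

# Sketch of explanation 1: investment sizes drawn from a power law,
# with business value roughly proportional to investment. The resulting
# value distribution inherits the heavy tail of the investment sizes.
# All numbers are illustrative assumptions, not real project data.
rng = np.random.default_rng(42)

investments = (rng.pareto(2.0, size=100_000) + 1) * 100_000              # power-law-ish sizes
value = investments * rng.lognormal(0.0, 0.5, size=investments.size)     # proportional returns plus noise

print(f"median value generated: {np.median(value):,.0f}")
print(f"mean value generated:   {value.mean():,.0f}")                    # mean dragged up by the tail
top_1pct = np.sort(value)[-value.size // 100:]
print(f"share of total value from the top 1% of projects: {top_1pct.sum() / value.sum():.0%}")
```

If the mean sits well above the median and a tiny fraction of projects accounts for a large share of total value, the output distribution is behaving like a power law even though the underlying mechanism is a simple proportional one.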

Which of these is most accurate is a matter of conjecture: some people might argue for the first explanation, whilst others might stand by the second. We are going to stand back from that debate. Instead we will assume only this: that to the best of our knowledge, all of the explanations sound to some extent reasonable and one of them actually happens to be true. As discussed in the first post of this series, this then allows us to assess each strategy against each possible explanation/scenario as follows:

 

This demonstrates that in the absence of certain knowledge, adaptive methodologies clearly represent the lowest risk approach to IT project delivery as they are effective for every explanation. More generally, we can summarise this by stating:

  • Simple, independent processes that are described by normal distributions are best managed by predictive strategies
  • Complex, interdependent processes that are described by power law distributions are best managed by adaptive strategies.

In the next post we will start exploring what a fully adaptive IT risk management strategy might look like, within the context of lessons we can learn from other areas including evolutionary biology.

The Power Law

May 5, 2008

In the previous post we argued that the starting point for managing risk in IT project delivery should be a description of the distribution and frequency of project success: you can’t manage something if you don’t know what it looks like. However, we saw that project success in real terms – i.e. maintaining or increasing the long-term viability of the organisation – is not obviously measurable. We therefore proposed a triangulation approach to infer its distribution from a number of key indicators. These indicators all display power law behaviour. We will now examine what this means.

First however, some historical context. The history of ideas within our culture has its roots in the Renaissance and, before that, Persia and Ancient Greece. And as we should expect of any people starting to explore the unknown workings of the world they inhabit, the first relationships they discovered were the simplest. Mathematical descriptions of simple, independent observable events were formulated in the natural philosophy of Newton and Descartes, out of which evolved the classical physical sciences. The apparently objective, predictive and repeatable nature of these relationships was hailed as a sign of their exactitude (as opposed to their simplicity), and as a result the physical sciences became the benchmark by which the validity of other areas of inquiry was judged. At the same time, their core tenets of predictability and causal interaction were used as the foundations on which fields ranging from financial mathematics to the social sciences and management theory have been built.

This world of classical physics is one of Bell Curves (also known as the Normal or Gaussian distribution), stable averages and meaningful standard deviations. It is easily demonstrated by the example of a coin toss: if I repeatedly toss 10 unbiased coins then the distribution of heads will tend towards a bell curve with an average/peak at 5 heads.

Fig 1. example bell curves (courtesy of Wikipedia):

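For anyone who wants to reproduce the coin toss experiment, a minimal sketch follows (assuming a Python environment with NumPy; the trial count is arbitrary):

```python
import numpy as np

# Toss 10 unbiased coins per trial and count the heads. Over many trials the
# histogram of head counts tends towards a bell curve centred on 5.
rng = np.random.default_rng(0)
heads_per_trial = rng.binomial(n=10, p=0.5, size=100_000)

counts = np.bincount(heads_per_trial, minlength=11)
for heads, count in enumerate(counts):
    bar = "#" * int(60 * count / counts.max())
    print(f"{heads:2d} heads | {bar}")
```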

The first challenge to this world view came from quantum mechanics at the turn of the last century, where discrete causal interaction was replaced by the fuzziness of probability distribution functions and the uncertainty principle. More recently it was challenged at the macro level by the study of the chaotic behaviour of complex systems. These systems are characterised by interdependence between events, which can result in both positive and negative feedback loops. On the one hand, seemingly large causal triggers can be absorbed without apparent impact; on the other, large effects can be spun up from trivial and essentially untraceable root causes. The result is pseudo-random behaviour that follows the same mathematical description the economist Pareto had discovered eighty years earlier in his studies of income distribution (succinctly summarised as the 80:20 rule) and that Bradford had discovered thirty years earlier in textual index analysis: namely the power law. Since then examples have been found everywhere from epidemiology, stock price variations, fractals and premature birth frequencies through to coastline structure, word usage in language, movie profits and job vacancies.

Fig 2. example Power Law Curves (courtesy of Wikipedia):


The power law derives its name from the dependence or inverse dependence of one variable on the squared, cubed, etc. power of the other. (Plot the log of one against the log of the other, and the gradient of the resulting straight line gives you the exponent – i.e. whether it is a square or cube relationship.) For example, Pareto discovered that income distributions across populations often follow a roughly inverse square law: for a given income band, roughly one quarter as many people receive double that income and one ninth as many receive triple it. The fact that this holds true whether you are looking at the lowest or highest income brackets denotes a signature characteristic of power law phenomena. It is known as scale-invariance or self-similarity, and is most widely recognised in another power law field: fractals.
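Written out explicitly (this is the conventional textbook form of a power law, rather than anything quoted from Pareto), the relationship and the income example look like this:

```latex
% A power law: frequency falls off as a fixed power of size
p(x) \propto x^{-\alpha}
% so on a log-log plot the gradient gives the exponent:
\log p(x) = -\alpha \log x + \text{const}

% Pareto's income observation corresponds roughly to an inverse square law (alpha = 2):
\frac{p(2x)}{p(x)} = 2^{-2} = \tfrac{1}{4},
\qquad
\frac{p(3x)}{p(x)} = 3^{-2} = \tfrac{1}{9}
```

Note that the ratios depend only on the multiple (double, triple), not on the income band x itself – which is exactly the scale-invariance described above.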

Other key characteristics of power laws are an unstable mean and variance (i.e. they are statistically irregular, hence unpredictable) and a fat/long tail in comparison to bell curves (i.e. extreme events are far more frequent):

“The dream of social science [JE: project methodologies??], of building robust frameworks that allow prediction, is shattered by the absence of statistical regularity in phenomena dominated by persistent interconnectivity.” (Sornette, 2003)

“Paretian tails decay more slowly than those of normal distributions. These fat tails affect system behaviour in significant ways. Extreme events, that in a Gaussian world could be safely ignored, are not only more common than expected but also of vastly larger magnitude and consequence. For instance, standard theory suggests that over that time [JE: 1916 – 2003] there should be 58 days when the Dow moved more than 3.4 percent; in fact there were 1001” (Mandelbrot and Hudson, 2004)
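To get a feel for how much a fat tail matters, the sketch below counts “extreme” days under a thin-tailed normal model and under a heavy-tailed alternative (a Student-t distribution with 3 degrees of freedom – an illustrative stand-in, not Mandelbrot’s actual model or data):

```python
import numpy as np

# Count "extreme" daily moves (beyond 3 standard deviations) under a normal
# model versus a heavy-tailed Student-t model. Both models and the threshold
# are illustrative assumptions, not fitted to market data.
rng = np.random.default_rng(1)
days = 1_000_000
threshold = 3.0

normal_moves = rng.standard_normal(days)
heavy_moves = rng.standard_t(df=3, size=days)
heavy_moves /= heavy_moves.std()        # rescale so both series have unit variance

print(f"extreme days, normal model:       {int((np.abs(normal_moves) > threshold).sum())}")
print(f"extreme days, heavy-tailed model: {int((np.abs(heavy_moves) > threshold).sum())}")
```

The heavy-tailed count comes out several times larger, which is the same qualitative gap Mandelbrot and Hudson describe for the Dow.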

The fundamental message here can be read as follows. The apparently objective world of simple, independent events, normal distributions and classical physical/economic sciences is not actually the norm. Being the domain of the simplest events, it is just the part we discovered first. In fact it is the limiting edge case along a sliding scale that runs through the much more commonly occurring complex and/or chaotic systems to truly random or stochastic processes, all of which exhibit intrinsically unpredictable and more extreme power law behaviour. And the critically important point as it affects us in the delivery of IT projects? That we need a risk management model tailored to the complex world of generating business value rather than the vastly over-simplistic world of basic mechanics. The most spectacular/shocking example of what happens when someone attempts to model such power law systems using the normal distributions of classical methodologies is the collapse of the Long Term Capital Management hedge fund. As for the implications for risk management of IT project deliveries, that will be the subject of the next post.