An interesting article in the Wall Street Journal by the father of modern portfolio theory, Harry Markowitz, discussing the current financial crisis. He emphasises the need for diversification across uncorrelated risk as the key to minimising balanced portfolio risk, and highlights its absence as a major factor behind our present predicament. However, to my mind this still raises the question of whether any risk can be truly uncorrelated in complex systems. Power law environments such as financial markets are defined by their interconnectedness, and by the presence of positive and negative feedback loops which undermine their predictability. That interconnectedness makes identifying uncorrelated risk exceptionally problematic, especially when such correlations have been hidden inside repackaged derivatives and insurance products.

In systems that are intrinsically unpredictable, no risk management framework can ever tell us which decisions to make: that is essentially unknowable. Instead, good risk strategies should direct us towards minimising our liability, in order to minimise the cost of failure. If we consider “current liability” within the field of software product development as our investment in features that have yet to ship and prove their business value generation capabilities, then this gives us a clear, objective steer towards frequent iterative releases of minimum marketable featuresets, and towards incurring as much cost as possible at the point of value generation (i.e. through Cloud platforms, open source software, etc). I think that is the real reason why agile methodologies have been so much more successful than traditional upfront-planning approaches: they allow organisations to be much more efficient at limiting their technology investment liability.


In the last post we argued for a more rigorous, quantitative approach to featureset valuation over the conventional, implicit and overly blunt mechanisms of product backlog prioritisation. We borrowed a simple valuation equation from decision tree analysis to give us a more powerful tool for both managing risk and determining the optimal exercise point for any MMF:

Value = (Estimated Generated Value * Estimated Risk) - Estimated Cost

where 

Estimated Risk = Estimated Project Risk * Estimated Market Risk
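
As a minimal sketch of this equation in code (the figures are invented, and I am reading the risk multipliers as probabilities of success, which the equation itself does not mandate):

def estimated_risk(project_risk, market_risk):
    # Both factors are read as probabilities of success (0.0 to 1.0) -
    # an assumption for illustration rather than anything prescribed above.
    return project_risk * market_risk

def mmf_value(estimated_generated_value, project_risk, market_risk, estimated_cost):
    # Value = (Estimated Generated Value * Estimated Risk) - Estimated Cost
    return estimated_generated_value * estimated_risk(project_risk, market_risk) - estimated_cost

# Example: a featureset forecast to generate 500k over the agreed amortisation
# period, with an 80% chance of successful delivery and a 70% chance the market
# pays off, costing 150k to build.
print(mmf_value(500_000, 0.8, 0.7, 150_000))  # 130000.0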

A few comments are worth noting about this equation:
1.) It contains no time-dependent variable. The equation simply assumes a standard amortisation period to be agreed with stakeholders (typically 12 to 24 months). Market payback functions and similar are ignored as they introduce complexity and hence risk (as described in more detail below). We are not seeking accuracy per se, but simply enough accuracy to enable us to make the correct implementation decisions.
2.) It is very simplistic. Risk management must be reflexive: at the most basic level, project risk can be divided into two fundamental groupings:

  • Model-independent risks
  • Model-related risks

The former includes typical factors such as new technologies, staff quality and training, external project dependencies, etc. The latter includes two components: the inaccuracy of the risk model and the incomprehensibility of the risk model. We start incurring inaccuracy risk as soon as the simplicity of our model is so great that it leads us to make bad decisions or else provides no guidance. MoSCoW prioritisation is a good example of this. On the other hand we start incurring incomprehensibility risk as soon as the risk model is so complex that it is no longer comprehensible by everyone in the delivery team (which will clearly be relative across different teams). The current financial crisis is a large-scale example of a collapse in incomprehensibility risk management. If financial risk models had been reflexive and taken their own complexity into account as a risk factor, then there is no way we would have ended up with situations where cumulative liabilities were only vaguely understood by financial maths PhDs: if we take a team of twenty people, it is clear that a sophisticated and accurate model that is only understood by one person entails vastly more risk than a simplistic, less accurate model that everyone can follow. We can generalise this in our estimation process as follows:

Total Risk = Project Risk * Market Risk * Model Incomprehensibility Risk * Model Inaccuracy Risk

or as functions:

Total Risk = Risk(Project) * Risk(Market) * Risk(Incomprehensibility(Model)) * Risk(Inaccuracy(Model))

Furthermore, given our general human tendency towards overcomplexity in most situations, the dominant model risk in practice is incomprehensibility rather than inaccuracy, so this can be approximated to

Total Risk = Risk(Project) * Risk(Market) * Risk(Incomprehensibility(Model))
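
Again as a rough sketch with invented numbers, treating each risk factor as a probability of success:

def total_risk(project, market, model_incomprehensibility, model_inaccuracy=1.0):
    # Total Risk = Risk(Project) * Risk(Market)
    #              * Risk(Incomprehensibility(Model)) * Risk(Inaccuracy(Model))
    # Inaccuracy defaults to 1.0, reflecting the approximation above that drops it.
    return project * market * model_incomprehensibility * model_inaccuracy

# A sophisticated model understood by only one person in a twenty-strong team
# carries a heavy incomprehensibility penalty compared with a simple shared model.
print(total_risk(0.8, 0.7, 0.3))   # ~0.17: accurate but opaque model
print(total_risk(0.8, 0.7, 0.95))  # ~0.53: simple model everyone can follow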

3.) All risk is assigned as a multiplier against Generated Value, rather than treating delivery risk as an inverse multiplier of Cost. I have had very interesting conversations about this recently with both Chris Matts and some of the product managers I am working with. They have suggested a more accurate valuation might be some variation of:

Value = (Estimated Generated Value * Estimated Realisation Risk) - (Estimated Cost / Estimated Delivery Risk)

In other words, risks affecting technical delivery should result in a greater risk-adjusted cost rather than a lesser risk-adjusted revenue. This is probably more accurate. However, is that level of accuracy necessary? In my opinion at least, no. Firstly it creates a degree of confusion as regards how to differentiate revenue realisation risk from delivery risk: is your marketing campaign launch really manifestly different in risk terms from your software release? If either fails it is going to blow the return on investment model, so I would say fundamentally no. Secondly, I might be wrong but I got the feeling that part of the reluctance of our product management to accept the simpler equation was a preference against their revenue forecasts being infected by something over which they had no control: namely delivery risk (perhaps a reflection of our general psychological tendency to perceive greater risk in situations where we have no control). However that is a major added benefit in my opinion: it helps break down the traditional divides between “the business” and “IT”. As the technology staff of Lehman Brothers will now no doubt attest, the only people who aren’t part of “the business” are the people who work for someone else.
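
For what it is worth, here is a quick numerical comparison of the two candidate forms with invented figures, purely to show how the two formulations penalise risk in different places:

gv, cost = 500_000, 150_000                  # forecast generated value and cost
realisation_risk, delivery_risk = 0.7, 0.8   # read as probabilities of success

# Simpler form: all risk applied against generated value.
simple = gv * (realisation_risk * delivery_risk) - cost

# Alternative form: delivery risk inflates the cost side instead.
alternative = gv * realisation_risk - cost / delivery_risk

print(round(simple))       # 130000
print(round(alternative))  # 162500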

For me, this approach creates the missing link between high-level project business cases and the MMF backlog. We start with a high-level return on investment model in the business case, which then gets factored down into componentised return on investment models as part of the MMF valuation process. These ROI components effectively comprise the business level acceptance tests for the business case. The componentised ROI models then drive out the MMF acceptance tests, from which we define our unit tests and then start development. In this way, we complete the chain of red-green-refactor cycles from the highest level of commercial strategy down to unit testing a few lines of code. The scale invariance of this approach I find particularly aesthetically pleasing: it is red-green-refactor for complex systems…

Fig. fractal red-green-refactor
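
As a purely hypothetical sketch of what one of those business-level acceptance tests might look like for a single componentised ROI model (the MMF name, figures and revenue lookup are all invented):

# Hypothetical business-level acceptance test for one componentised ROI model.
CHECKOUT_MMF_INVESTMENT = 150_000
CHECKOUT_MMF_ROI_TARGET = 1.5    # agreed return multiple over the amortisation period

def revenue_generated_to_date(mmf_name):
    # Stub standing in for a query against finance/analytics systems.
    return 260_000

def test_checkout_mmf_meets_componentised_roi():
    generated = revenue_generated_to_date("one-click-checkout")
    assert generated / CHECKOUT_MMF_INVESTMENT >= CHECKOUT_MMF_ROI_TARGET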

In the previous post, we explored the behavioural differences of simple and complex systems. We saw that complex systems display power law distributions, the key characteristics of which are increased unpredictability and an increased likelihood of extreme events when compared to simple Gaussian systems. Additionally, the existence of positive and negative feedback loops makes them more resistant to causal analysis: the potential for repeated amplification of trivial trigger events can make it very difficult to understand what is going on (see the 1987 stock market crash as an example). We will now examine the implications of those differences for risk management, focussing in particular on IT project delivery.

Conventional business management practices are based on the implicit assumptions we have inherited from our cultural past, which ultimately have their roots in the scientific tradition: we use specific instances or case studies to infer a generalised understanding of a domain; that understanding then allows us to predict it, and once we can predict it we can then define an effective strategy for managing it. On the other hand, in our everyday lives and throughout the natural world, reactive risk management is the norm. For example, to avoid being run over by a car when crossing the road we do not need to understand how a car works but only what it looks like (i.e. fast moving metallic thing on wheels). Similarly, to avoid being eaten by a lion, a deer does not need to understand big cat physiology but only what one looks like (i.e. fast moving furry thing on legs).

From this, we can see that risk management strategies can be grouped at the most basic level into one of two categories:

  1. Cause based:
    • Standard business practice
    • Analyse cause, then define strategy
    • Predictive/Pro-active
  2. Observation based:
    • Normal practice in daily life and natural world
    • Observe behaviour, then respond
    • Adaptive/Reactive

In situations where they both work, the latter is obviously inferior as it affords no potential for proactivity and forward planning. However the former is critically dependent on the predictability of the thing being managed.

Now previously we have seen that IT project success in real terms appears to display power law behaviour. Possible explanations for this might include:

  1. There is a simple causal relationship with an underlying pseudo-power law phenomenon. It might just be that the size of investment in IT projects follows a roughly power law distribution and that the returns generated are directly proportionate to that investment. Most projects receive small to moderate investment whilst a few get massive investment and that is what results in the correlated power law distribution of generated business value.
  2. The world of IT project delivery is a complex but deterministic system, hence it displays power law behaviour.
  3. IT project delivery has dependencies on truly random phenomena, hence the generation of delivered business value displays power law behaviour.

Which of these is most accurate is a matter of conjecture: some people might argue for the first explanation, whilst others might stand by the second. We are going to stand back from that debate. Instead we will only assume this: that to the best of our knowledge, all of the explanations sound to some extent reasonable and one of them actually happens to be true. As discussed in the first post of this series, this then allows us to assess each strategy against each possible explanation/scenario as follows:

[Table: risk assessment of the cause-based (predictive) and observation-based (adaptive) strategies against each of the three explanations]

This demonstrates that in the absence of certain knowledge, adaptive methodologies clearly represent the lowest risk approach to IT project delivery as they are effective for every explanation. More generally, we can summarise this by stating:

  • Simple, independent processes that are described by normal distributions are best managed by predictive strategies
  • Complex, interdependent processes that are described by power law distributions are best managed by adaptive strategies.
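
To make that assessment concrete, the comparison the table above performs can be sketched as follows; the risk scores are invented purely for illustration:

# Illustrative only: score each strategy against each equally likely
# explanation (1 = low risk, 3 = high risk). The scores are invented.
explanations = ["investment-driven", "complex deterministic", "truly random"]

risk_scores = {
    "predictive": {"investment-driven": 1, "complex deterministic": 3, "truly random": 3},
    "adaptive":   {"investment-driven": 2, "complex deterministic": 1, "truly random": 1},
}

for strategy, scores in risk_scores.items():
    # Equal weighting, since each explanation is treated as equally plausible.
    expected = sum(scores[e] for e in explanations) / len(explanations)
    print(strategy, round(expected, 2))   # adaptive comes out with the lower cumulative risk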

In the next post we will start exploring what a fully adaptive IT risk management strategy might look like, within the context of lessons we can learn from other areas including evolutionary biology.

The Power Law

May 5, 2008

In the previous post we argued that the starting point for managing risk in IT project delivery should be a description of the distribution and frequency of project success: you can’t manage something if you don’t know what it looks like. However, we saw that project success in real terms – i.e. maintaining or increasing the long-term viability of the organisation – is not obviously measurable. We therefore proposed a triangulation approach to infer its distribution from a number of key indicators. These indicators all display power law behaviour. We will now examine what this means.

First however, some historical context. The history of ideas within our culture has its roots in the Renaissance, and before that in Persia and Ancient Greece. And as we should expect of any people starting to explore the unknown workings of the world they inhabit, the first relationships they discovered were the simplest. Mathematical descriptions of simple, independent observable events were formulated in the natural philosophy of Newton and Descartes, out of which evolved the classical physical sciences. The apparently objective, predictive and repeatable nature of these relationships was hailed as a sign of their exactitude (as opposed to their simplicity) and as a result the physical sciences became the benchmark by which the validity of other areas of inquiry was judged. At the same time, their core tenets of predictability and causal interaction were used as the foundations on which fields ranging from financial mathematics to the social sciences and management theory have been built.

This world of classical physics is one of Bell Curves (also known as the Normal or Gaussian distribution), stable averages and meaningful standard deviations. It is easily demonstrated by the example of a coin toss: if I repeatedly toss 10 unbiased coins then the distribution of head counts will tend towards a bell curve with an average/peak at 5 heads.
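
A quick simulation, sketched using only the Python standard library, illustrates the effect:

# Toss 10 unbiased coins many times and count heads per trial: the
# distribution of counts tends towards a bell curve centred on 5.
import random
from collections import Counter

trials = 100_000
counts = Counter(sum(random.randint(0, 1) for _ in range(10)) for _ in range(trials))

for heads in range(11):
    bar = "#" * (counts[heads] * 200 // trials)   # crude text histogram
    print(f"{heads:2d} heads: {bar}")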

Fig 1. example bell curves (courtesy of Wikipedia):


The first challenge to this world view came from quantum mechanics at the turn of the last century, where discrete causal interaction was replaced by the fuzziness of probability distribution functions and the uncertainty principle. More recently it was challenged at the macro level by the study of the chaotic behaviour of complex systems. These systems are characterised by interdependence between events, which can result in both positive and negative feedback loops. On the one hand seemingly large causal triggers can be absorbed without apparent impact whilst on the other, large effects can be spun up from trivial and essentially untraceable root causes. The result is pseudo-random behaviour, and something that follows the same mathematical description the economist Pareto had discovered eighty years earlier in his studies of income distribution (succinctly summarised as the 80:20 rule) and that Bradford had discovered thirty years earlier in textual index analysis: namely the power law. Since then examples have been found everywhere from epidemiology, stock price variations, fractals and premature birth frequencies through to coastline structure, word usage in language, movie profits and job vacancies.

Fig 2. example Power Law Curves (courtesy of Wikipedia):


The power law derives its name from the dependence or inverse dependence of one variable on the squared, cubed, etc. power of the other. (Plot the log of one variable against the log of the other, and the gradient of the resulting straight line will give you the exponent – i.e. whether it is a square or cube relationship.) For example, Pareto discovered that income distributions across populations often followed a roughly inverse square law: for a given income band, roughly one quarter as many people will receive double that income and one ninth as many will receive triple it. The fact that this holds true whether you are looking at the lowest or highest income brackets denotes a signature characteristic of power law phenomena. It is known as scale-invariance or self-similarity, and is most widely recognised in another power law field: fractals.
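
That log-log fitting step can be sketched as follows, using synthetic data generated from a known inverse square relationship purely to show the exponent being recovered from the gradient:

# Fit a power law exponent by regressing log(y) on log(x).
# The data are synthetic: y = x**-2 plus a little noise, for illustration only.
import math
import random

xs = list(range(1, 200))
ys = [x ** -2.0 * random.uniform(0.9, 1.1) for x in xs]

log_x = [math.log(x) for x in xs]
log_y = [math.log(y) for y in ys]

# Ordinary least squares gradient of log(y) against log(x).
n = len(xs)
mean_x, mean_y = sum(log_x) / n, sum(log_y) / n
gradient = sum((lx - mean_x) * (ly - mean_y) for lx, ly in zip(log_x, log_y)) \
           / sum((lx - mean_x) ** 2 for lx in log_x)

print(round(gradient, 2))   # roughly -2.0, i.e. an inverse square law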

Other key characteristics of power laws are an unstable mean and variance (i.e. they are statistically irregular, hence unpredictable) and a fat/long tail in comparison to bell curves (i.e. extreme events are a lot more frequent):

“The dream of social science [JE: project methodologies??], of building robust frameworks that allow prediction, is shattered by the absence of statistical regularity in phenomena dominated by persistent interconnectivity.” (Sornette, 2003)

“Paretian tails decay more slowly than those of normal distributions. These fat tails affect system behaviour in significant ways. Extreme events, that in a Gaussian world could be safely ignored, are not only more common than expected but also of vastly larger magnitude and consequence. For instance, standard theory suggests that over that time [JE: 1916 – 2003] there should be 58 days when the Dow moved more than 3.4 percent; in fact there were 1001.” (Mandelbrot and Hudson, 2004)
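
The scale of that difference can be reproduced qualitatively with a quick simulation; the data here are synthetic (a Gaussian versus a Pareto-tailed distribution of hypothetical daily moves), not real market returns:

# Illustrative only: how often "extreme" moves occur under a thin-tailed
# Gaussian versus a fat-tailed power law distribution of similar typical scale.
import random

random.seed(0)
n = 100_000

# Gaussian daily moves with a 1% standard deviation.
gaussian = [random.gauss(0, 0.01) for _ in range(n)]

# Fat-tailed moves: power-law-distributed magnitudes (Pareto, alpha = 2)
# with a random sign, scaled so a typical move is also of the order of 1%.
fat_tailed = [random.choice((-1, 1)) * 0.01 * random.paretovariate(2.0) for _ in range(n)]

threshold = 0.034   # a move of more than 3.4%
print(sum(abs(m) > threshold for m in gaussian))    # well under a hundred extreme days
print(sum(abs(m) > threshold for m in fat_tailed))  # thousands: orders of magnitude more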

The fundamental message here can be read as follows. The apparently objective world of simple, independent events, normal distributions and classical physical/economic sciences is not actually the norm. Being the domain of the simplest events, it is just that we discovered it earlier than everything else. In fact it is the limiting edge case along a sliding scale of much more commonly occurring complex and/or chaotic systems through to truly random or stochastic processes, all of which exhibit intrinsically unpredictable and more extreme power law behaviour. And the critically important point as it affects us in the delivery of IT projects? – that we need a risk management model tailored to the complex world of generating business value rather than the vastly over-simplistic world of basic mechanics. The most spectacular/shocking example of what happens when someone attempts to model such power law systems using the normal distributions of classical methodologies is given by the collapse of the Long Term Capital Management hedge fund. As regards the implications for us within the realms of risk management of IT project deliveries, that will be the subject of the next post.

IT Project Success

April 29, 2008

An interesting article by Scott Ambler has been the recent subject of discussion within the development community of my current employer. In the article, Ambler suggests that the IT project failure rates frequently bemoaned by the likes of the Standish Chaos report in fact paint a distorted picture. Many so-called failures go on to deliver additional value to their organisations that far outweighs their total cost despite the fact they originally shipped over budget or schedule. In doing so, they render the traditional success criteria of “on-time, on-budget” pretty meaningless. As a result, he reasonably argues, project success is actually more frequent than the commonly held view suggests.

His article clearly highlights the elusive nature of project success as a directly observable (and therefore measurable) phenomenon. It is apparent that “on-time” and “on-budget” are neither necessary nor sufficient as benchmarks of success: many of us have worked on both a.) projects that shipped on-time/budget but delivered no long term value due to a flawed business case or unforeseen changes in market conditions, and b.) projects that shipped late or over budget but that have transformed the profitability of the organisation that delivered them. Long-term revenue generation might seem a more reliable (but also less measurable) indicator, but even then many projects – e.g. regulatory compliance systems – have no direct bearing on revenue generation.

This has some significant implications for software project management. If on-time and on-budget are misleading benchmarks of project success and failure, then they must equally be unreliable indicators for risk management (as the risk we are managing is the risk of success/failure), which means that risk mitigation strategies based on those indicators should similarly be unreliable.

Which poses the question, if we can’t directly measure project success then how can we effectively manage it?

As a start to answering this, it seems reasonable to suggest that whilst we can’t measure success per se, we might still be able to triangulate to at least a general understanding of its distribution using other markers that are directly measurable. By positing a few plausible explanations that, to the best of our abilities, we believe to be roughly equally likely, we can then perform a risk analysis by assessing the cumulative total risk for different management strategies across those scenarios.

So, what might we consider valid triangulation markers?

  • Distribution of internet site traffic: business-to-consumer web sites are a subset of IT project deliveries that most commonly generate revenue by either CPM or CPC advertising models or else some form of e-commerce. In both instances, sites with the most page impressions will generate the greatest ad revenue or sales, and in short will be more successful (strictly speaking this is not entirely true as sites with better user segmentation data will be able to charge higher CPMs, but CPM % variations are negligible in comparison to variations in site traffic volumes so it remains a valid approximation).
  • Technology stock price variations: technology companies have a business critical dependency on IT project delivery (something that most other sectors are also tending towards if not there already). Therefore we might expect some kind of correlation between technology company success, i.e. stock price, and the success of the underlying IT projects on which those organisations depend.
  • Technology firm termination stats: not such an obvious choice, but firm termination stats can still tell us something indirectly about the nature of project success: a near-constant annual rate of firm terminations would imply some degree of predictability, whereas wide variations would indicate more chaotic/complex behaviour.
  • Key performance indicators: the initially obvious choice. Almost equally obvious however is the question as to why projects currently remain judged against criteria of on-time/on-budget rather than KPIs. Answers might include the suggestion that we often use on-time/on-budget explicitly as our KPI, or else the more worrying possibility that on-time/on-budget is used as the standard default KPI in the frequent absence of better considered indicators and clearly defined business case acceptance tests. Add to this the facts that a.) where they exist, KPIs are normally used as an overly simplistic binary success/failure threshold thereby masking variations in the degree of success; b.) most firms do not (alas!) publish stats detailing their breakdown of IT project investments and KPI ratios where they exist; c.) as in the case of on-time/budget, the chosen KPIs might actually be bad indicators of real performance anyway, and we can conclude that they might not be so useful after all.

As it happens, all the above indicators exhibit power law distributions (for more info see Andriani and McKelvey – “Beyond Gaussian Averages: Redirecting Management Research Toward Extreme Events and Power Laws”; Barabasi and Frangos – “The New Science of Networks”; Paul Ormerod – “Why Most Things Fail”). What this means, and how power law behaviour differs from the classical mathematical/physical/economic world of normal distributions, will be the subject of the next post. In essence however, it suggests that while Ambler is correct in proposing that some projects over schedule or over budget may in fact be successes when measured against more meaningful criteria, overall there actually appears to be a higher rather than lower incidence of project “failure” in real terms.

As a concluding note it is worth highlighting that at this stage we are not making any claims about cause but only about behaviour (for example, the power law distribution of internet site traffic could simply be a direct result of a 1:1 causal relationship with a power law distribution in web site investment/costs). All we are doing is a slight twist on the standard approach of starting from observable behaviour and then inferring the generality/causal explanation: because the behaviour (i.e. the distribution of IT project success) is hidden in this instance, we must add another step first to triangulate it from related observables. Only then will we be in any position to consider underlying causes.