Optimal Exercise Point

April 13, 2010

Although I have been making use of them over the last 18 months in various presentations and the real options tutorial, I recently realised I’d omitted to publish graphs illustrating the optimal exercise point for real options on this blog. As a result, here they are:

The dotted blue line represents risk entailed by the agilist’s Cone of Uncertainty, and covers technical, user experience, operational, market and project risk – in short anything that could jeopardise your return on investment. The way this curve is negotiated is detailed below, by driving out mitigation spikes to address each risk.

The dotted brown line represents risk from late delivery. At a certain point this will start rising at a rate greater than your mitigation spikes are reducing risk, creating a minimum in the Cumulative Risk curve, denoted in red. Remembering that

Feature Value = ((1 - Risk) x Generated Value) - Cost

this minimum identifies the Optimal Exercise Point.
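To make this concrete, here is a minimal sketch in Python (my own illustration, using made-up curves and figures rather than data from any real project): the Cone-of-Uncertainty risk and the late-delivery risk are summed into a cumulative risk curve, and its minimum marks the optimal exercise point.

```python
# A minimal sketch: hypothetical risk curves over time, combined to locate the
# minimum of the cumulative risk curve and hence the optimal exercise point.
import numpy as np

weeks = np.arange(0, 53)

# Cone-of-uncertainty risk: assumed to fall as mitigation spikes retire
# technical, UX, operational, market and project risk.
uncertainty_risk = 0.9 * np.exp(-weeks / 10.0)

# Late-delivery risk: assumed to rise the longer we wait to exercise the option.
delay_risk = 0.02 * np.exp(weeks / 15.0)

cumulative_risk = np.clip(uncertainty_risk + delay_risk, 0.0, 1.0)

generated_value = 500_000   # illustrative figures only
cost = 150_000

# Feature Value = ((1 - Risk) x Generated Value) - Cost
feature_value = (1 - cumulative_risk) * generated_value - cost

optimal = int(np.argmin(cumulative_risk))
print(f"Optimal exercise point: week {weeks[optimal]}, "
      f"feature value = {feature_value[optimal]:,.0f}")
```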

One point worth exploring further is why Delayed Delivery is represented as risk rather than cost. The reason is that Cost of Delay is harder to model. For example, let’s say we are developing a new market-differentiating feature for a product. Let’s also say that there are X potential new customers for that feature in our target market, and that it would take Y weeks of marketing to convert 80% of those sales. Provided that, whenever we do launch, there remain Y weeks until a competitor launches a similar feature, then the cost of delay may be marginal. On the other hand, if we delay too long and a competitor launches their feature before us then there will be a massive spike in cost due to the loss of first mover advantage. However the timing of that cost spike will be unknown (unless we are spying on the competition), and therefore very hard to capture. What we can model though is the increasing risk of that spike biting us the longer we delay.

I have found it very helpful to think about this using the insightful analysis David Anderson presented during his excellent risk management presentation at Agile 2009 last year. He divides features into four categories:

  • Differentiator: drive customer choice and profits
  • Spoiler: spoil a competitor’s differentiators
  • Cost Reducer: reduce cost to produce, maintain or service and increase margin
  • Table Stakes: “must have” commodities

Differentiators (whether on feature or cost) are what drive revenue generation. Spoilers are the features we need to implement to prevent loss of existing customers to a competitor, and are therefore more about revenue protection. And Table Stakes are the commodities we need to have an acceptable product at all. We can see how this maps clearly onto the example above. The cost of delay spike is incurred at the point when the feature we intended to be a differentiator becomes in fact only a spoiler.

This also has a nice symmetry with the meme lifecycle in product S-curve terms.

We can see how features start out as differentiators, become spoilers, then table stakes and are finally irrelevant – which ties closely to the Diversity, Dominance and Zombie phases. There are consequences here in terms of market maturity. If you are launching a product into a new market then everything you do is differentiating (as no-one else is doing anything). Over time, competitors join you, your differentiators become their spoilers (as they play catch-up), and then finally they end up as the table stakes features for anyone wishing to launch a rival product. In other words, the value of table stakes features in mature markets is that they represent a barrier to entry.

More recently however, I have come to realise that these categories are as much about your customers as your product. They are a function of how a particular segment of your target market views a given feature. Google Docs is just one example that demonstrates how one person’s table stakes can be another person’s bloatware. Similarly, despite the vast profusion of text editors these days, I still used PFE until fairly recently because it did all the things I needed and did them really well. Its differentiators were the way it implemented my functional table stakes. The same is true of any product that excels primarily in terms of its user experience, most obviously the iPod or iPhone. Marketing clearly also plays a large part in convincing/coercing a market into believing what constitutes the table stakes for any new product, as witnessed in the era of Office software prior to Google Docs. The variation in the 20% of features used by 80% of users actually turned out not to be so great after all.

So what does this mean for anyone looking to launch a product into a mature market? Firstly, segment your target audience as accurately as possible. Then select the segment which requires the smallest number of table stakes features. Add the differentiator they most value, get the thing out the door as quickly as possible, and bootstrap from there.

The phrase “Agile in the Large” is one I’ve heard used a number of times over the last year in discussions about scaling up agile delivery. I have to say that I’m not a fan, primarily because it entails some pretty significant ambiguity. That ambiguity arises from the implied question: Agile What in the Large? So far I have encountered two flavours of answer:

1.) Agile Practices in the Large
This is the common flavour. It involves the deployment of some kind of overarching programme container, e.g. RUP, which is basically used to facilitate the concurrent rollout of standard (or more often advanced) agile development practices.

2.) Agile Principles in the Large
This is the less common, but I believe much more valuable, flavour. It involves taking the principles for managing complexity that have been proven over the last ten years within the domain of software delivery and re-applying them to manage complexity in wider domains, in particular the generation of return from technology investment. That means:

  • No more Big Upfront Design: putting an end to fixed five year plans and big-spend technology programmes, and instead adopting an incremental approach to both budgeting and investment (or even better, inspirationally recognising that budgeting is a form of waste and doing without it altogether – thanks to Dan North for the pointer)
  • Incremental Delivery: in order to ensure investment liability (i.e. code that has yet to ship) is continually minimised
  • Frequent, Rapid Feedback: treating analytics integration, A/B testing capabilities, instrumentation and alerting as a first order design concern
  • Retrospectives and Adaptation: a test-and-learn product management culture aligned with an iterative, evolutionary approach to commercial and technical strategy

When it comes down to it, it seems to me that deploying cutting-edge agile development practices without addressing the associated complexities of the wider business context is really just showing off. It makes me think back to being ten years old and that kid at the swimming pool who was always getting told off by his parents “Yes Johnny I know you can do a double piked backflip, but forget all that for now – all I need you to do is enter the water without belly-flopping and emptying half the pool”.

An interesting article in the Wall Street Journal by the father of modern portfolio theory, Harry Markowitz, discussing the current financial crisis. He emphasises the need for diversification across uncorrelated risk as the key to minimising balanced portfolio risk, and highlights its lack as a major factor behind our present predicament. However to my mind this still begs a question of whether any risk can be truly uncorrelated in complex systems. Power law environments such as financial markets are defined by their interconnectedness, and the presence of positive and negative feedback loops which undermine their predictability. That interconnectedness makes identifying uncorrelated risk exceptionally problematic, especially when such correlations have been hidden inside repackaged derivatives and insurance products.
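As a rough illustration of Markowitz’s point (a minimal sketch with figures I have made up, not anything taken from the article): for n equally weighted assets of identical volatility, portfolio volatility only diversifies away when pairwise correlation is genuinely low.

```python
# A minimal sketch of why diversification depends on correlation, not just on
# holding many assets: portfolio volatility for n equally weighted assets with
# identical volatility sigma and pairwise correlation rho.
import math

def portfolio_volatility(n: int, sigma: float, rho: float) -> float:
    # Var = (sigma^2 / n) + ((n - 1) / n) * rho * sigma^2
    variance = (sigma ** 2) / n + ((n - 1) / n) * rho * sigma ** 2
    return math.sqrt(variance)

sigma = 0.20  # 20% volatility per asset (illustrative)
for rho in (0.0, 0.3, 0.9):
    print(f"rho={rho:.1f}: "
          f"10 assets -> {portfolio_volatility(10, sigma, rho):.3f}, "
          f"100 assets -> {portfolio_volatility(100, sigma, rho):.3f}")

# With rho = 0 volatility shrinks towards zero as n grows; with rho = 0.9 it
# barely moves - hidden correlation leaves the "diversified" portfolio exposed.
```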

In systems that are intrinsically unpredictable, no risk management framework can ever tell us which decisions to make: that is essentially unknowable. Instead, good risk strategies should direct us towards minimising our liability, in order to minimise the cost of failure. If we consider “current liability” within the field of software product development as our investment in features that have yet to ship and prove their business value generation capabilities, then this gives us a clear, objective steer towards frequent iterative releases of minimum marketable featuresets and trying to incur as much cost as possible at the point of value generation (i.e. through Cloud platforms, open source software, etc). I think that is the real reason why agile methodologies have been so much more successful than traditional upfront-planning approaches: they allow organisations to be much more efficient at limiting their technology investment liability.

A guide to putting into practice a number of ideas that have been discussed on this blog over the last 18 months, especially with regards to MMF valuation (already incorporating a number of feedback iterations from different projects).

Download: Options Analysis User Guide 1.2

If you’d like a copy of the spreadsheet then drop me a line or else join the Real Options discussion group and download from there.

[update: uploaded latest version of PDF]

In the last post we argued for a more rigorous, quantitative approach to featureset valuation over the conventional, implicit and overly blunt mechanisms of product backlog prioritisation. We borrowed a simple valuation equation from decision tree analysis to give us a more powerful tool for both managing risk and determining the optimal exercise point for any MMF:

Value = (Estimated Generated Value * Estimated Risk) - Estimated Cost

where 

Estimated Risk = Estimated Project Risk * Estimated Market Risk

A few comments are worth noting about this equation:
1.) It contains no time-dependent variable. The equation simply assumes a standard amortisation period to be agreed with stakeholders (typically 12 to 24 months). Market payback functions and similar are ignored as they introduce complexity and hence risk (as described in more detail below). We are not seeking accuracy per se, but simply enough accuracy to enable us to make the correct implementation decisions.
2.) It is very simplistic. Risk management must be reflexive: at the most basic level, project risk can be divided into two fundamental groupings:

  • Model-independent risks
  • Model-related risks

The former includes typical factors such as new technologies, staff quality and training, external project dependencies, etc. The latter includes two components: the inaccuracy of the risk model and the incomprehensibility of the risk model. We start incurring inaccuracy risk as soon as the simplicity of our model is so great that it leads us to make bad decisions or else provides no guidance. MoSCoW prioritisation is a good example of this. On the other hand we start incurring incomprehensibility risk as soon as the risk model is so complex that it is no longer comprehensible by everyone in the delivery team (which will clearly be relative across different teams). The current financial crisis is a large-scale example of a collapse in incomprehensibility risk management. If financial risk models had been reflexive and taken their own complexity into account as a risk factor, then there is no way we would have ended up with situations where cumulative liabilities were only even vaguely understood by financial maths PhDs: if we take a team of twenty people, it is clear that a sophisticated and accurate model that is only understood by one person entails vastly more risk than a simplistic, less accurate model that everyone can follow. We can generalise this in our estimation process as follows:

Total Risk = Project Risk * Market Risk * Model Incomprehensibility Risk * Model Inaccuracy Risk

or as functions:

Total Risk = Risk(Project) * Risk(Market) * Risk(Incomprehensibility(Model)) * Risk(Inaccuracy(Model))

Furthermore, given our general human tendency towards overcomplexity for most situations this can be approximated to

Total Risk = Risk(Project) * Risk(Market) * Risk(Incomprehensibility(Model))
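A minimal sketch of how that reflexive product might be applied in practice. I am reading each “risk” factor in the decision-tree sense used throughout these posts, i.e. as a multiplier in (0, 1] representing the chance that the risk does not destroy the return; the figures are purely illustrative.

```python
# A minimal sketch (illustrative numbers, my own reading of the post) of the
# reflexive risk product. Each factor is treated as a decision-tree style
# multiplier in (0, 1]: the estimated probability that this class of risk does
# NOT wipe out the return.
def total_risk(project: float, market: float,
               model_incomprehensibility: float,
               model_inaccuracy: float = 1.0) -> float:
    # Total Risk = Risk(Project) * Risk(Market)
    #            * Risk(Incomprehensibility(Model)) * Risk(Inaccuracy(Model))
    return project * market * model_incomprehensibility * model_inaccuracy

# A sophisticated model only one person on a team of twenty understands...
sophisticated = total_risk(project=0.9, market=0.8, model_incomprehensibility=0.5)
# ...versus a cruder model that everyone can follow.
simple = total_risk(project=0.85, market=0.8, model_incomprehensibility=0.95)

print(f"sophisticated-but-opaque: {sophisticated:.2f}")  # 0.36
print(f"simple-but-shared:        {simple:.2f}")         # 0.65
```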

3.) All risk is assigned as a multiplier against Generated Value, rather than treating delivery risk as an inverse multiplier of Cost. I have had very interesting conversations about this recently with both Chris Matts and some of the product managers I am working with. They have suggested a more accurate valuation might be some variation of:

Value = (Estimated Generated Value * Estimated Realisation Risk) - (Estimated Cost / Estimated Delivery Risk)

In other words, risks affecting technical delivery should result in a greater risk-adjusted cost rather than a lesser risk-adjusted revenue. This is probably more accurate. However, is that level of accuracy necessary? In my opinion at least, no. Firstly it creates a degree of confusion as regards how to differentiate revenue realisation risk and delivery risk: is your marketing campaign launch really manifestly different in risk terms from your software release? If either fails it is going to blow the return on investment model, so I would say fundamentally no. Secondly, I might be wrong but I got the feeling that part of the reluctance to accept the simpler equation from our product management was a resistance to having their revenue forecasts infected by something over which they had no control: namely delivery risk (perhaps a reflection of our general psychological tendency to perceive greater risk in situations where we have no control). However, that is a major added benefit in my opinion: it helps break down the traditional divides between “the business” and “IT”. As the technology staff of Lehman Brothers will now no doubt attest, the only people who aren’t part of “the business” are the people who work for someone else.
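For anyone wanting to see the difference in numbers, here is a minimal sketch comparing the two formulations with illustrative figures (again treating each risk factor as a decision-tree multiplier in (0, 1]):

```python
# A minimal sketch comparing the two formulations discussed above.
generated_value = 400_000
cost = 100_000
realisation_risk = 0.7   # marketing / market uptake
delivery_risk = 0.8      # technical delivery

# Simpler form: all risk discounts the revenue side.
value_simple = generated_value * (realisation_risk * delivery_risk) - cost

# Alternative suggested by product management: delivery risk inflates the cost side.
value_alternative = generated_value * realisation_risk - (cost / delivery_risk)

print(f"simple form:      {value_simple:,.0f}")       # 124,000
print(f"alternative form: {value_alternative:,.0f}")  # 155,000
```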

For me, this approach creates the missing link between high-level project business cases and the MMF backlog. We start with a high-level return on investment model in the business case, which then gets factored down into componentised return on investment models as part of the MMF valuation process. These ROI components effectively comprise the business level acceptance tests for the business case. The componentised ROI models then drive out the MMF acceptance tests, from which we define our unit tests and then start development. In this way, we complete the chain of red-green-refactor cycles from the highest level of commercial strategy down to unit testing a few lines of code. The scale invariance of this approach I find particularly aesthetically pleasing: it is red-green-refactor for complex systems…

[figure: fractal red-green-refactor]

I’m a big fan of Domain Driven Design, but one of the subtler points I have found when implementing it concerns the notion of logical domain models. Eric Evans advocates the use of a single domain model – the one enshrined in the codebase – as a defence against the bad old days of ivory tower architecture where unimplementable logical domain models were produced by people who hadn’t coded in years, to be thrown over the wall into development teams who were then miserably handicapped by them until they inevitably decided to chuck them out and start making some progress instead. However, I believe this to be a failure in application rather than an intrinsic flaw. Logical domain models can actually be exceptionally useful when correctly applied. They get developers thinking in abstract terms and stop us making a whole bunch of assumptions that typically result from jumping immediately to implementation (“I’m programming in language-of-choice so of course I’m going to code it like blah”). Far preferable is that all such decisions are made explicitly, as conscious informed choices about the different trade-offs to be balanced in building a system. The subtlety with Domain Driven Design is that to get the best of both worlds – abstract modelling and a single model – the logical domain model should be evolved into the implementation domain model (I think of it as being a bit like a heatmap, with certain areas expressed logically and other, increasingly larger, areas expressed in implementation terms as your team iterates through sprint cycles). To be clear here, this has nothing to do with Big Design Up Front but is simply the complementary practice to BDD/TDD of whiteboarding sessions – but where we start by considering the logical model rather than immediately addressing implementation details.

What I have recently come to realise is that another model precedes the logical domain model, namely the business value throughput model. This is simply the exercise of domain modelling in terms of value rather than state/behaviour, and will be familiar to most people involved in business process engineering and organisational efficiency improvement. However, rarely does it get carried down to the level of software delivery. Instead, we base our decision making on a nebulous exercise called “project prioritisation”. In the worst case, this means MoSCoW (it is beyond me why we persist in this – why not just assign everything a “Must” and save all involved a deeply predictable day of pain!), or if we’re lucky we might end up with a slightly more helpful score out of ten or similar. The problem with this is as follows: if our aim is to generate business value then prioritisation is actually an implicit form of valuation – true, otherwise what’s the point? However it is an insidious, surreptitious form that slips under the radar without any kind of rigour, transparency or quantification. Is that story a “Must” because it will generate the most business value, or because someone bought an expensive new middleware platform and now they are looking for a problem to solve? Is that a “Must” because it will generate the most business value, or because you are fine-tuning your CV?! Is that a “Must” because it will generate the most business value, or because you know it will benefit the senior management’s current pet project?

So, what might a more explicit version of valuation look like? Well, we have recently been trying out a very simple MMF model based on decision tree analysis:

Value = (Generated Business Value * Cumulative Risk) - Estimated Cost

where

Cumulative Risk = Project Risk * Market Risk
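As a worked example of how this replaces prioritisation, here is a minimal sketch with hypothetical MMFs and purely illustrative figures: estimate each MMF, then simply build whatever currently has the highest estimated value.

```python
# A minimal sketch of decision-tree valuation replacing MoSCoW prioritisation.
from dataclasses import dataclass

@dataclass
class MMF:
    name: str
    generated_value: float   # estimated business value over the amortisation period
    project_risk: float      # decision-tree multipliers in (0, 1]
    market_risk: float
    cost: float

    @property
    def value(self) -> float:
        # Value = (Generated Business Value * Cumulative Risk) - Estimated Cost
        return self.generated_value * (self.project_risk * self.market_risk) - self.cost

backlog = [
    MMF("self-service reporting", 300_000, 0.9, 0.7, 80_000),
    MMF("legacy platform rewrite", 500_000, 0.4, 0.6, 250_000),
    MMF("partner API",             200_000, 0.8, 0.9, 60_000),
]

# Do whatever has the highest estimated value at any point in time.
for mmf in sorted(backlog, key=lambda m: m.value, reverse=True):
    print(f"{mmf.name:<25} estimated value: {mmf.value:>10,.0f}")
```

Note how the rewrite drops straight to the bottom of the list once its risks are priced in, which is exactly the quantitative justification against system rewrites mentioned below.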

When I have presented this to various audiences, a wide range of initial objections have been raised – which normally fall under one of the following headings:

1.) Uncertainty (pt 1.): The business value delivered by a project is notoriously hard to measure and normally not even vaguely clear until a couple of years after delivery. How on earth can you expect me to work it out in advance?

Answer: This is an estimate, not a commitment. We fully expect it to be “wrong” – that is why we are limiting our liability by implementing the minimal marketable featureset, so we can drive out the real answer in production. We are just trying to come up with a rough figure on the basis of the information available right now, so we can move beyond the world of Must-Should-Could-Would towards something that better resembles sanity. No-one is going to hold you to it. Please go and pair with an architect or someone in marketing or finance if you are struggling.

2.) Uncertainty (pt 2.): I don’t know enough yet to estimate the benefit or costs

Answer: Then that fact needs to be priced into your valuation as risk (similarly, inexperience at performing valuations should also be priced as risk). As that risk will almost certainly now be crippling your MMF valuation, why don’t you defer that piece of work and concentrate on something you are more certain about?

3.) Non-fiscal benefits: But the benefit of this will be in brand value/market awareness/etc

Answer: Fine, but without a subsequent programme of work to monetise that brand value it will be a pointless undertaking. You need to cross-attribute a percentage of the value delivered by the latter workstream back onto what we are considering now.

4.) Indirect benefits: But my web service doesn’t generate revenue. It’s the client systems that consume it which generate the business value?

Answer: Firstly, you do currently have more than one consumer don’t you? No, then why is it a web service and not an in-process component? Because you think other systems might need it in future? Then why not defer support for other message protocols until you know for certain which ones you’re going to need? Ah sorry, I misheard you – you do have multiple clients. In that case, you should simply cross-attribute a certain (usage-based?) percentage of the clients’ generated value back onto your service, as it is a value-throughput modelling dependency. 
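A minimal sketch of that usage-based cross-attribution, with hypothetical clients and percentages of my own invention:

```python
# A minimal sketch: a shared web service claims a slice of each consuming
# client's generated value, in proportion to that client's usage of the service.
client_value = {"web checkout": 250_000, "mobile app": 120_000}
attributed_share = {"web checkout": 0.15, "mobile app": 0.25}  # usage-based %

service_value = sum(client_value[c] * attributed_share[c] for c in client_value)
print(f"value cross-attributed to the service: {service_value:,.0f}")  # 67,500
```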

With this approach, we simply do whatever has the highest estimated value at any point in time. In doing so, prioritisation takes care of itself and we are finally, mercifully released from MoSCoW hell. Furthermore, a number of additional side-benefits become apparent as the process starts being applied in practice:

  • MMF refactoring: once numbers start being assigned against MMFs, business stakeholders begin requesting whiteboarding sessions with architects to drive out other alternatives that entail less risk. This leads to a detailed technology-engaged breakdown of the business case and multiple iterations of MMF refactoring, with product managers and architects jointly discussing and creating business solutions around the table. Welcome to the world of agile technical strategy!
  • Incremental release of small chunks of functionality. The estimated value of anything with more than a couple of non-trivial risks drops through the floor, creating a strong driver towards a “perpetual beta” style release model.
  • A quantitative justification against system rewrites.
  • A strong driver to improve IT capability maturity. If your IT department can’t be predictable about the costs of building even small chunks of functionality then that source of risk will impact the valuation of everything they do, and needs to be mitigated as soon as possible.
  • Optimal implementation point: this is simply the maximum point on the valuation curve, where if you travel any further down the Cone of Uncertainty before starting to build then the reduced risk from greater clarity is outweighed by the risk from failing to deliver in optimal market conditions.
  • Target delivery date: see the above. At last we have a quantitative means of separating meaningful delivery dates from arbitrary political statements.
  • Should I do the highest risk stuff first or last? Are you talking about MMFs or components within an MMF? If the former, just do the highest value work regardless. If the latter, then always seek to maximise the estimated value of the MMF. That means take on the highest risk work first. Doing so will remove that risk and up the feature’s valuation should you recalculate at the end of your day’s work.

More about this next time. For now one final point. If you outsource a programme of work and wish to use this approach, then don’t forget to ensure you engage with two consultancies to avoid the obvious conflict of interests: one to do the valuation, and another to do the build. (Yes I know they are lovely people, but you can’t really expect them to toil ceaselessly to find you the build option that will earn them the least money can you?!…)

On Failure

April 23, 2009

(Firstly, a quick apology for the radio silence of late – regular updates here will now be resuming.)

So, in the last post we examined incentivisation in its broadest sense with the aim of shedding some light on the way selective pressures are created within organisations. We saw that the alignment of intra-organisational pressures with those of the external market is a fundamentally important factor in the health and efficiency of a business: put simply, it is critical to ensure that a.) employees really act in the long-term interests of the business and that b.) the business really acts in the long-term interests of the employees. We also highlighted the subtle and far-reaching consequences of how behaviour is rewarded (something the investment banks are now coming to terms with), in particular with respect to the conventional benchmarks of “on time, on budget”. We noted that this has driven the maturation of the IT project-centric world view, not because projects are fundamentally the best way of delivering business value but because they are the best vehicle for demonstrating benchmarks achieved and charging clients. More insidiously, such benchmarks can actually create selective pressures against innovation and new features (i.e. the very drivers of competitive advantage in the wider marketplace): anything innovative may be perceived as the risky option for a project team, whereas the safest way to deliver on time and on budget is to ensure they only ever undertake projects they have essentially delivered before. Indeed I have seen this happen: project after project shipped on time and budget whilst senior management sat around scratching their heads at the fact that they kept losing market share. As a wise man once said, “be careful what you ask for”.

The fundamental flaw with such approaches is obviously that they elevate the removal of uncertainty (i.e. perceived risk to timely delivery) to the primary objective, whereas the generation of new business value and innovation is normally all about working _with_ uncertainty. I think it’s enlightening to flip things round for a moment, and take a look at project timelines in reverse. So starting at the end point, we have the project being decommissioned at some point hopefully a number of years into the future. Roll back a bit and we might have major release upgrades interspersed with patch releases, users making use of x% of the available functionality, ongoing user training, the initial roll-out, iterations of analysis/test/build, etc. I find this “termination-oriented” perspective highly instructive. I haven’t been able to find stats concerning the lifespan of “successful” IT projects, but I’d be surprised if the average was more than five years. That implies nothing negative about the business value they generate. It simply reflects the fact that as long as a business case implementation is generating more value than it costs then it should be continued, and once that is no longer the case then it should be terminated.

Where the damage occurs is when projects/business cases are perpetuated despite no longer being cost effective or financially viable. In other words it is not project failure that causes the worst problems but business cases that are perpetuated beyond their useful lifespan. These become the living-dead “zombie” memes that drain the financial, staffing and morale lifeblood of an organisation. Interestingly, a former Clinton aide has just published The Tyranny of Dead Ideas about exactly this subject. What matters most is identifying such instances as early as possible and terminating them. Unlike their gamekeeping equivalent however, such project culls should also have a life-giving complement, whereby dormant “before their time” business cases are activated in response to changing favourable market conditions. In this way, both ends of the useful lifespan of a business case are managed effectively. This is the essence of working in acceptance of uncertainty.

There is an obvious parallel here in software design. Unchecked failures/exceptions escalate. Good architecture treats error handling as an integral part of the design, and promotes defensive programming practices in order to create failure isolation boundaries that isolate and contain the consequences of the intrinsic instabilities of complex production environments. Similarly good program management should treat project failure handling as an integral part of the methodology, and promote defensive program management practices that create failure isolation boundaries that isolate and contain the consequences of the intrinsic instabilities of complex market environments. If a project failure escalates and causes significant problems to an organisation then that is not a project problem – it is a fundamental failing of program management methodology. This is why 37signals are correct in stating that organisation failure amongst start-ups is overrated. Any company that allows failure to escalate unchecked to the point where higher-level market isolation boundaries are invoked and it collapses, would signify to me a lack of management understanding concerning the essential nature of business value generation in unpredictable market conditions.

As a final thought, this idea also creates an additional viewpoint in the debate about regulation and state intervention in free-market economies. An evolutionary virtue of the free-market economy is that it entails a failure isolation boundary/ceiling between organisation and economic sector such that organisational failure is always culled before it escalates higher (the collapse of Soviet communism arguably being an instance of unchecked failure that bubbled up through economic spheres and into the political). However this begs the question of whether such culling could possibly be enforced by some other means (e.g. corporate taxation structures?). If so then regulatory interventionist policies may not intrinsically be a bad thing…

In the previous post we started an examination of software delivery from the perspective of evolutionary biology. In that context we saw that business cases can be viewed as memes and software projects as a certain class of phenotype (or way in which that business case gets expressed). Following on from that, an organisation’s slate of new commercial development proposals can be seen as its meme-equivalent DNA, where at any given moment a subset of those replicators will be activated/ratified and then express an extended range of intra- and extra-organisational phenotypes including marketing campaigns, IT projects, industry bodies, etc. The success of these phenotypes will in turn determine the degree to which those memes/business cases are then perpetuated via further iterations of investment and development. To understand more about how this happens, we now need to look at the nature of selective environments.

An examination of most companies today will reveal multiple concurrent levels of collaboration and competition: individuals compete and collaborate within the environment of their team, with their peers in separate teams and business divisions, and very often with other people in the industry within which they work (IT news groups being an obvious collaborative example). Teams compete and collaborate within the environment of their business division, across business divisions, and quite often across company boundaries with similar teams in competitor organisations. Similarly organisations compete and collaborate within industry sectors, and again sectors quite often compete within the wider economy (e.g. online music sharing services competing with the traditional record industry).

The first point of great interest about this is its symmetry with the scale-invariance of power law systems. Whether we are looking at the level of individual team members or the global economy, we can see the same thing happening: namely different environmental factors applying selective pressure in favour of certain key characteristics. Secondly, when we more closely examine those environmental factors within a business context we can see they are nothing other than what micro-economists refer to as incentives. Incentives are the features of economic environments that determine adaptive advantage: they create the selective pressure. (It is worth highlighting at this point that we are not making any claims about human nature: incentives can promote altruistic, enlightened behaviour as much as greed/self-interest). Along the scale described above from individuals to the global economy, different incentives will create different selective pressures. Those pressures may act in the same direction or else they may act in conflict. For example, the impending credit crunch clearly suggests that recent city trader incentive/bonus structures were in conflict with the interests of the wider economy. 

Incentives can be specified either explicitly or implicitly. Explicit incentivisation takes the form of sales targets, call centre response times, unit test coverage targets or any other published quality metrics. Implicit incentivisation fills in the remaining gaps, and is normally adopted as a result of unreflective organisational behaviour (for example, inexperienced IT management rewarding anti-collaborative “rock star coder” behaviour with more kudos or the most interesting project work). It is frequently the underlying cause of unexpected or undesirable behaviour, and the first step towards effectively addressing such situations is normally the identification of those rogue incentives so they can be removed or else explicitly overridden.

In this way, we can see that the health of a business environment or any other complex system depends on the alignment of its incentives (i.e. success criteria) across the different tiers of selective pressure (something Jim Shore has recently alluded to in slightly different terms as the multiple aspects of project success). This in turn reflects the interdependencies characteristic of such power law systems. Where incentives get out of alignment, those interdependencies are no longer accommodated and malignancy is the result (quite literally in the biological world: cancerous cells compete and replicate very successfully at the cellular level, but at the overall expense of other levels, i.e. the organism).

When we consider the project-centric world view currently prevalent across the IT industry from this perspective, a few things come to light. We begin to understand that a programme management culture of on-time/on-budget project incentivisation has created selective pressure in favour of IT projects simply because they are an easy vehicle for meeting that target. Part of this is related to the misguided insistence by so many IT divisions today on referring to the other parts of their organisation as “the business” (frequently this is in turn symptomatic of an over-the-wall software release mentality and ultimately a basic lack of care about the real value of what is being delivered: “the project shipped on time and on budget, beyond that it’s not my problem”). A project does not just deliver within the IT division environment: we are part of “the business” too and we need continual reminding of that fact. As we’ve seen previously, on-time/on-budget has no direct correlation with organisation-level pressures to deliver added value. When we align selective pressures across the delivery environment and incentivise software delivery more meaningfully in terms of generating business value, IT projects are demoted to their rightful position as incidental artefacts – artefacts that frequently just get in the way.

A final key point to note about the scale-invariance of selective pressure is that it also emphasises the holistic nature of organisational health. It’s not just about the organisation: unless the needs of every interdependent adaptive tier are being met – from job satisfaction of team members up to healthy competition across your industry sector – then your organisation is ultimately going to end up in trouble.

In previous posts we saw that the generation of business value via IT projects essentially follows a power law distribution. By examining the nature of power law systems, we went on to conclude that adaptive strategies offer the most effective way of managing risk in such environments. We will now begin to explore what a fully adaptive risk management strategy might look like, using as our starting point an overview of the key principles underlying nature’s great adaptive risk management engine: Natural Selection.

Evolutionary ideas have recently been gaining prominence in studies of organisational behaviour and efficiency from two directions:

  • Evolutionary Micro-economics (top down), in response to the limitations of traditional rationalist supply/demand models based on Game Theory.
  • Adaptive Project Methodologies (bottom up), focussing on evolutionary design and iterative delivery to mitigate the inherent unpredictability of requirements and market conditions.

The most fundamental principle on which these ideas are based is the notion of a replicator. A replicator can be defined as any entity of which copies are made, where that entity has some causal influence on its own probability of being propagated. The classic biological example is a gene, which is copied during cell division and which influences its probability of being propagated via the environmental effects of the proteins it encodes (and in turn, the effects of the composite structures out of which those proteins are built). The specific DNA sequence of a gene is known as its genotype, and the corresponding expression of that genotype is its phenotype.

In the Extended Phenotype, Richard Dawkins switched the primary focus of evolutionary studies away from the organism. He showed that “organism” is ultimately just an arbitrary point along the scale of phenotypes: from specific proteins at one end, up through more complex protein structures to organs, organisms and social groups at the other. The fundamental unit driving natural selection forwards across the generations is the replicator or Selfish Gene – everything else from protein to social group is just artefactual byproduct (that impacts the probability of further replicator propagation).

Other instances of replicators include memes. A meme is “any unit of cultural information, such as a practice or idea, that gets transmitted verbally or by repeated action from one mind to another. Examples include thoughts, ideas, theories, practices..” When we consider the field of IT project delivery within this context, we can spot obvious correlations. Business cases are memes which, when ratified, result in the generation of a suite of phenotypic artefacts ranging from marketing strategies to IT delivery teams to unit tests, SCM repositories and deployed production systems to new revenue streams. These artefacts end up shaping their business division, organisation and industry sector, and in doing so determine the probability of the business case propagating and spawning further system releases, new marketing campaigns, etc.

There is a key lesson for us as IT practitioners to take from this, one that evolutionary biologists have already learnt. It is that artefacts (be they organisms, social groups, IT projects or marketing campaigns) don’t ultimately matter. The thing that matters is the replicator: the business case or gene. We need to follow evolutionary biology’s re-orientation towards the gene, and shift our focus away from IT projects and create practices centred solely on the business case. I now believe that “projects” can actually be an impediment to the efficient generation of real business value from IT. They act as an inflexible body of emotional and financial investment that creates resistance to both a.) change and b.) termination where such change makes the business case no longer viable in real terms (which is when real damage is then inflicted). We will discuss more on this topic in subsequent posts. Before that however, we need to examine the nature of selective environments – which will be the subject of the next post. In doing so we will hopefully shed some light on the factors that have led to our current project-orientated IT world view.

 

In the previous post, we explored the behavioural differences of simple and complex systems. We saw that complex systems display power law distributions, the key characteristics of which are increased unpredictability and an increased likelihood of extreme events when compared to simple Gaussian systems. Additionally, the existence of positive and negative feedback loops makes them more resistant to causal analysis: the potential for repeated amplification of trivial trigger events can make it very difficult to understand what is going on (see the 1987 stock market crash as an example). We will now examine the implications of those differences for risk management, focussing in particular on IT project delivery.

Conventional business management practices are based on the implicit assumptions we have inherited from our cultural past, that ultimately have their roots in the scientific tradition: we use specific instances or case studies to infer a generalised understanding of a domain; that understanding then allows us to predict it, and once we can predict it we can then define an effective strategy for managing it. On the other hand, in our everyday lives and throughout the natural world reactive risk management is the norm. For example, to avoid being run over by a car when crossing the road we do not need to understand how a car works but only what it looks like (i.e. fast moving metallic thing on wheels). Similarly to avoid being eaten by a lion, a deer does not need to understand big cat physiology but only what one looks like (i.e. fast moving furry thing on legs).

From this, we can see that risk management strategies can be grouped at the most basic level into one of two categories:

  1. Cause based:
    • Standard business practice
    • Analyse cause, then define strategy
    • Predictive/Pro-active
  2. Observation based:
    • Normal practice in daily life and natural world
    • Observe effects, then adapt strategy
    • Adaptive/Reactive

In situations where they both work, the latter is obviously inferior as it affords no potential for proactivity and forward planning. However the former is critically dependent on the predictability of the thing being managed.

Now previously we have seen that IT project success in real terms appears to display power law behaviour. Possible explanations for this might include:

  1. There is a simple causal relationship with an underlying pseudo-power law phenomenon. It might just be that the size of investment in IT projects follows a roughly power law distribution and that the returns generated are directly proportionate to that investment. Most projects receive small to moderate investment whilst a few get massive investment and that is what results in the correlated power law distribution of generated business value.
  2. The world of IT project delivery is a complex but deterministic system, hence it displays power law behaviour.
  3. IT project delivery has dependencies on truly random phenomena, hence the generation of delivered business value displays power law behaviour.

Which of these is most accurate is a matter of conjecture: some people might argue for the first explanation, whilst others might stand by the second. We are going to stand back from that debate. Instead we will only assume this: that to the best of our knowledge, all of the explanations sound to some extent reasonable and one of them actually happens to be true. As discussed in the first post of this series, this then allows us to assess each strategy against each possible explanation/scenario as follows:

[table: each risk management strategy assessed against each possible explanation/scenario]

This demonstrates that in the absence of certain knowledge, adaptive methodologies clearly represent the lowest risk approach to IT project delivery as they are effective for every explanation. More generally, we can summarise this by stating:

  • Simple, independent processes that are described by normal distributions are best managed by predictive strategies
  • Complex, interdependent processes that are described by power law distributions are best managed by adaptive strategies.
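To see why the distinction matters, here is a minimal sketch with synthetic data (not drawn from real project figures): it contrasts outcomes sampled from a normal distribution with outcomes sampled from a heavy-tailed power law distribution of the same mean, and shows how much more weight the latter puts on extreme events – which is precisely what undermines predictive planning.

```python
# A minimal sketch contrasting the two regimes: outcomes drawn from a normal
# distribution versus a heavy-tailed (Pareto / power law) distribution with the
# same mean. The point is the relative weight of extreme events.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

normal_outcomes = rng.normal(loc=3.0, scale=1.0, size=n)
pareto_outcomes = rng.pareto(a=1.5, size=n) + 1.0   # classical Pareto, mean = a/(a-1) = 3

for name, outcomes in (("normal", normal_outcomes), ("power law", pareto_outcomes)):
    extreme = np.mean(outcomes > 10 * np.median(outcomes))
    print(f"{name:>9}: median={np.median(outcomes):.2f}, "
          f"max={outcomes.max():.1f}, P(outcome > 10x median)={extreme:.4f}")
```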

In the next post we will start exploring what a fully adaptive IT risk management strategy might look like, within the context of lessons we can learn from other areas including evolutionary biology.