A few times recently I have come across discussions where complex systems have been described as self-organising. Whilst such descriptions are strictly true within the limits of logical possibility, I believe them to be significantly misleading. I hope you’ll forgive the frivolity, but I find it useful to think about this in relation to the infinite monkey theorem (give a bunch of monkeys typewriters and wait long enough, and eventually one of them will produce the works of Shakespeare). I think describing complex systems as self-organising is a bit like describing monkeys as great authors of English literature. Perhaps the underlying issue here is that evolutionary timeframes are of a scale that is essentially meaningless relative to human concepts of time. Natural selection’s great conjuring trick has been to hide the countless millennia and uncountably large numbers of now-dead failures it has taken to stumble upon every evolutionary step forwards. In monkey terms, it is a bit like a mean man with a shotgun hiding out in the zoo and shooting every primate without a freshly typed copy of Twelfth Night under its arm, and then us visiting the zoo millions of years later and thinking that a core attribute of monkeys is that they write great plays!

So, are complex systems self-organising? For the one-in-trillions freak exceptions that have managed to survive until today, the answer is yes, but in a deeply degenerative and entropy-saturated way. For everything else – which is basically everything – the answer is no. An example of this is being played out around us right now in the financial crisis. The laissez-faire economic ideologies of recent decades were primarily an expression of faith in the ability of complex financial markets to self-organise, as enshrined in the efficient market hypothesis championed by Milton Friedman and the Chicago School. The recent return to more Keynesian ideas of external government intervention, via lower taxation and increased public spending, together with the obvious need for stronger regulation, is indicative of a clear failure in self-organisational capability.

When people in technology talk about self-organisation, what they are normally referring to are the highly desirable personal qualities of professional pride, initiative, courage and a refusal to compromise against one’s better judgement. A couple of years back Jim Highsmith argued that “self-organisation” has outlived its usefulness. I would agree, and for the sake of clarity perhaps we might do better now to talk about such attributes in specific terms?

A guide to putting into practice a number of ideas that have been discussed on this blog over the last 18 months, especially with regard to MMF valuation (already incorporating a number of feedback iterations from different projects).

Download: Options Analysis User Guide 1.2

If you’d like a copy of the spreadsheet then drop me a line or else join the Real Options discussion group and download from there.

[update: uploaded latest version of PDF]

In the last post we argued for a more rigorous, quantitative approach to featureset valuation over the conventional, implicit and overly blunt mechanisms of product backlog prioritisation. We borrowed a simple valuation equation from decision tree analysis to give us a more powerful tool for both managing risk and determining the optimal exercise point for any MMF:

Value = (Estimated Generated Value * Estimated Risk) - Estimated Cost

where 

Estimated Risk = Estimated Project Risk * Estimated Market Risk

A few comments are worth noting about this equation:
1.) It contains no time-dependent variable. The equation simply assumes a standard amortisation period to be agreed with stakeholders (typically 12 to 24 months). Market payback functions and similar are ignored as they introduce complexity and hence risk (as described in more detail below). We are not seeking accuracy per se, but simply enough accuracy to enable us to make the correct implementation decisions.
2.) It is very simplistic, and deliberately so, because risk management must be reflexive: at the most basic level, project risk can be divided into two fundamental groupings:

  • Model-independent risks
  • Model-related risks

The former includes typical factors such as new technologies, staff quality and training, external project dependencies, etc. The latter includes two components: the inaccuracy of the risk model and the incomprehensibility of the risk model. We start incurring inaccuracy risk as soon as the simplicity of our model is so great that it leads us to make bad decisions or else provides no guidance. MoSCoW prioritisation is a good example of this. On the other hand, we start incurring incomprehensibility risk as soon as the risk model is so complex that it is no longer comprehensible to everyone in the delivery team (which will clearly be relative across different teams). The current financial crisis is a large-scale example of a collapse in incomprehensibility risk management. If financial risk models had been reflexive and taken their own complexity into account as a risk factor, then there is no way we would have ended up with situations where cumulative liabilities were only vaguely understood even by financial maths PhDs: if we take a team of twenty people, it is clear that a sophisticated and accurate model that is only understood by one person entails vastly more risk than a simplistic, less accurate model that everyone can follow. We can generalise this in our estimation process as follows:

Total Risk = Project Risk * Market Risk * Model Incomprehensibility Risk * Model Inaccuracy Risk

or as functions:

Total Risk = Risk(Project) * Risk(Market) * Risk(Incomprehensibility(Model)) * Risk(Inaccuracy(Model))

Furthermore, given our general human tendency towards overcomplexity, for most situations this can be approximated to:

Total Risk = Risk(Project) * Risk(Market) * Risk(Incomprehensibility(Model))
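To make the arithmetic concrete, here is a minimal Python sketch of the valuation with the reflexive risk term included. The figures are purely illustrative, and I am treating each risk term as a probability of success between 0 and 1 (so 1.0 means risk-free), which is one way of making the multiplication meaningful:

```python
# A minimal sketch of the valuation above, with the reflexive risk term
# included. Each risk factor is a probability of success between 0 and 1
# (1.0 = risk-free); all figures are illustrative only.

def total_risk(project_risk, market_risk,
               model_incomprehensibility_risk, model_inaccuracy_risk=1.0):
    """Total Risk = Project * Market * Incomprehensibility * Inaccuracy.

    Inaccuracy defaults to 1.0, reflecting the approximation above in which
    incomprehensibility is assumed to be the dominant model-related risk.
    """
    return (project_risk * market_risk *
            model_incomprehensibility_risk * model_inaccuracy_risk)

def mmf_value(estimated_generated_value, estimated_risk, estimated_cost):
    """Value = (Estimated Generated Value * Estimated Risk) - Estimated Cost."""
    return estimated_generated_value * estimated_risk - estimated_cost

# One MMF, valued over an agreed amortisation period.
risk = total_risk(project_risk=0.8, market_risk=0.9,
                  model_incomprehensibility_risk=0.95)
print(mmf_value(500_000, risk, 150_000))  # roughly 192,000 with these figures
```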

3.) All risk is assigned as a multiplier against Generated Value, rather than treating delivery risk as an inverse multiplier of Cost. I have had very interesting conversations about this recently with both Chris Matts and some of the product managers I am working with. They have suggested a more accurate valuation might be some variation of:

Value = (Estimated Generated Value * Estimated Realisation Risk) - (Estimated Cost / Estimated Delivery Risk)

In other words, risks affecting technical delivery should result in a greater risk-adjusted cost rather than a lesser risk-adjusted revenue. This is probably more accurate. However, is that level of accuracy necessary? In my opinion at least, no. Firstly, it creates a degree of confusion over how to differentiate revenue realisation risk from delivery risk: is your marketing campaign launch really manifestly different in risk terms from your software release? If either fails it is going to blow the return on investment model, so I would say fundamentally no. Secondly, I might be wrong, but I got the feeling that part of our product management’s reluctance to accept the simpler equation was a dislike of their revenue forecasts being infected by something over which they had no control: namely delivery risk (perhaps a reflection of our general psychological tendency to perceive greater risk in situations where we have no control). However, that is a major added benefit in my opinion: it helps break down the traditional divides between “the business” and “IT”. As the technology staff of Lehman Brothers will now no doubt attest, the only people who aren’t part of “the business” are the people who work for someone else.
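For what it’s worth, here is a small sketch of the two variants side by side, again with invented figures and with risks expressed as probabilities of success, so that dividing the cost by the delivery risk inflates the risk-adjusted cost:

```python
# Two variants of the valuation, using hypothetical figures. Risk terms
# are probabilities of success between 0 and 1.

def value_simple(generated_value, realisation_risk, delivery_risk, cost):
    """All risk discounts the revenue side:
    Value = (Generated Value * Realisation * Delivery) - Cost"""
    return generated_value * realisation_risk * delivery_risk - cost

def value_split(generated_value, realisation_risk, delivery_risk, cost):
    """Delivery risk inflates the cost side instead:
    Value = (Generated Value * Realisation) - (Cost / Delivery)"""
    return generated_value * realisation_risk - cost / delivery_risk

args = dict(generated_value=500_000, realisation_risk=0.9,
            delivery_risk=0.8, cost=150_000)
print(value_simple(**args))  # roughly 210,000
print(value_split(**args))   # roughly 262,500
```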

For me, this approach creates the missing link between high-level project business cases and the MMF backlog. We start with a high-level return on investment model in the business case, which then gets factored down into componentised return on investment models as part of the MMF valuation process. These ROI components effectively comprise the business-level acceptance tests for the business case. The componentised ROI models then drive out the MMF acceptance tests, from which we define our unit tests and then start development. In this way, we complete the chain of red-green-refactor cycles from the highest level of commercial strategy down to unit testing a few lines of code. I find the scale invariance of this approach particularly aesthetically pleasing: it is red-green-refactor for complex systems…

[Figure: fractal red-green-refactor]

I’m a big fan of Domain Driven Design, but one of the subtler points I have found when implementing it concerns the notion of logical domain models. Eric Evans advocates the use of a single domain model – the one enshrined in the codebase – as a defence against the bad old days of ivory tower architecture, where unimplementable logical domain models were produced by people who hadn’t coded in years, to be thrown over the wall into development teams who were then miserably handicapped by them until they inevitably decided to chuck them out and start making some progress instead. However, I believe this to be a failure in application rather than an intrinsic flaw. Logical domain models can actually be exceptionally useful when correctly applied. They get developers thinking in abstract terms and stop us making a whole bunch of assumptions that typically result from jumping immediately to implementation (“I’m programming in language-of-choice so of course I’m going to code it like blah”). Far preferable is that all such decisions are made explicitly, as conscious informed choices about the different trade-offs to be balanced in building a system. The subtlety with Domain Driven Design is that to get the best of both worlds – abstract modelling and a single model – the logical domain model should be evolved into the implementation domain model (I think of it as being a bit like a heatmap, with certain areas expressed logically and other, increasingly larger, areas expressed in implementation terms as your team iterates through sprint cycles). To be clear, this has nothing to do with Big Design Up Front; it is simply the complementary practice to BDD/TDD of whiteboarding sessions – but one where we start by considering the logical model rather than immediately addressing implementation details.

What I have recently come to realise is that another model precedes the logical domain model, namely the business value throughput model. This is simply the exercise of domain modelling in terms of value rather than state/behaviour, and will be familiar to most people involved in business process engineering and organisational efficiency improvement. However, rarely does it get carried down to the level of software delivery. Instead, we base our decision making on a nebulous exercise called “project prioritisation”. In the worst case, this means MoSCoW (it is beyond me why we persist in this – why not just assign everything a “Must” and save all involved a deeply predictable day of pain!), or if we’re lucky we might end up with a slightly more helpful score out of ten or similar. The problem with this is as follows: if our aim is to generate business value, then prioritisation is actually an implicit form of valuation – true, otherwise what’s the point? However, it is an insidious, surreptitious form that slips under the radar without any kind of rigour, transparency or quantification. Is that story a “Must” because it will generate the most business value, or because someone bought an expensive new middleware platform and now they are looking for a problem to solve? Is that a “Must” because it will generate the most business value, or because you are fine-tuning your CV?! Is that a “Must” because it will generate the most business value, or because you know it will benefit the senior management’s current pet project?

So, what might a more explicit version of valuation look like? Well, we have recently been trying out a very simple MMF model based on decision tree analysis:

Value = (Generated Business Value * Cumulative Risk) - Estimated Cost

where

Cumulative Risk = Project Risk * Market Risk
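To illustrate how this makes prioritisation fall out of the numbers, here is a small sketch that values a made-up backlog of MMFs (all names and figures are invented) and simply sorts by estimated value:

```python
# A hypothetical backlog of MMFs valued with the equation above; instead of
# MoSCoW labels we sort by estimated value and work on the top item.

def mmf_value(generated_value, project_risk, market_risk, cost):
    cumulative_risk = project_risk * market_risk   # risks as probabilities of success
    return generated_value * cumulative_risk - cost

backlog = [
    ("Checkout redesign",           mmf_value(250_000, 0.8, 0.9, 60_000)),
    ("Self-service password reset", mmf_value(120_000, 0.9, 0.95, 40_000)),
    ("Partner API integration",     mmf_value(400_000, 0.5, 0.7, 90_000)),
]

for name, value in sorted(backlog, key=lambda item: item[1], reverse=True):
    print(f"{name}: {value:,.0f}")
# Checkout redesign: 120,000
# Self-service password reset: 62,600
# Partner API integration: 50,000
```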

When I have presented this to various audiences, a wide range of initial objections have been raised – which normally fall under one of the following headings:

1.) Uncertainty (pt 1.): The business value delivered by a project is notoriously hard to measure and normally not even vaguely clear until a couple of years after delivery. How on earth can you expect me to work it out in advance?

Answer: This is an estimate, not a commitment. We fully expect it to be “wrong” – that is why we are limiting our liability by implementing the minimal marketable featureset, so we can drive out the real answer in production. We are just trying to come up with a rough figure on the basis of the information available right now, so we can move beyond the world of Must-Should-Could-Would towards anything that better resembles sanity. No-one is going to hold you to it. Please go and pair with an architect or someone in marketing or finance if you are struggling.

2.) Uncertainty (pt 2.): I don’t know enough yet to estimate the benefits or costs

Answer: Then that fact needs to be priced into your valuation as risk (similarly, inexperience at performing valuations should also be priced as risk). As that risk will almost certainly now be crippling your MMF valuation, why don’t you defer that piece of work and concentrate on something you are more certain about?
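A quick worked example of why that deferral usually makes sense (hypothetical figures throughout): as the probability of success falls, the estimated value collapses and soon goes negative.

```python
# Pricing "we don't know enough yet" into the valuation as risk
# (expressed as a probability of success), with invented figures:
print(200_000 * 0.9 - 50_000)   # well understood:   roughly  130,000
print(200_000 * 0.3 - 50_000)   # poorly understood: roughly   10,000
print(200_000 * 0.2 - 50_000)   # barely understood: roughly  -10,000
```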

3.) Non-fiscal benefits: But the benefit of this will be in brand value/market awareness/etc

Answer: Fine, but without a subsequent programme of work to monetise that brand value it will be a pointless undertaking. You need to cross-attribute a percentage of the value delivered by the latter workstream back onto what we are considering now.

4.) Indirect benefits: But my web service doesn’t generate revenue. It’s the client systems that consume it which generate the business value?

Answer: Firstly, you do currently have more than one consumer, don’t you? No? Then why is it a web service and not an in-process component? Because you think other systems might need it in future? Then why not defer support for other message protocols until you know for certain which ones you’re going to need? Ah sorry, I misheard you – you do have multiple clients. In that case, you should simply cross-attribute a certain (usage-based?) percentage of the clients’ generated value back onto your service, as it is a value-throughput modelling dependency.
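A minimal sketch of that cross-attribution, with made-up clients and percentages; the result then feeds into the service’s own valuation as its estimated generated value:

```python
# Cross-attributing a usage-based share of each client system's generated
# value back onto the shared web service they consume (figures invented).

clients = {
    # client system: (estimated generated value, fraction attributed to the service)
    "web storefront": (300_000, 0.20),
    "mobile app":     (150_000, 0.15),
    "partner portal": ( 80_000, 0.30),
}

service_generated_value = sum(value * fraction
                              for value, fraction in clients.values())
print(service_generated_value)  # roughly 106,500 with these figures
```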

With this approach, we simply do whatever has the highest estimated value at any point in time. In doing so, prioritisation takes care of itself and we are finally, mercifully released from MoSCoW hell. Furthermore, a number of additional side-benefits become apparent as the process starts being applied in practice:

  • MMF refactoring: once numbers start being assigned against MMFs, business stakeholders begin requesting whiteboarding sessions with architects to drive out other alternatives that entail less risk. This leads to a detailed technology-engaged breakdown of the business case and multiple iterations of MMF refactoring, with product managers and architects jointly discussing and creating business solutions around the table. Welcome to the world of agile technical strategy!
  • Incremental release of small chunks of functionality. The estimated value of anything with more than a couple of non-trivial risks drops through the floor, creating a strong driver towards a “perpetual beta” style release model.
  • A quantitative justification against system rewrites.
  • A strong driver to improve IT capability maturity. If your IT department can’t be predictable about the costs of building even small chunks of functionality then that source of risk will impact the valuation of everything they do, and needs to be mitigated as soon as possible.
  • Optimal implementation point: this is simply the maximum point on the valuation curve, where if you travel any further down the Cone of Uncertainty before starting to build then the reduced risk from greater clarity is outweighed by the risk from failing to deliver in optimal market conditions (see the toy sketch after this list).
  • Target delivery date: see the above. At last we have a quantitative means of separating meaningful delivery dates from arbitrary political statements.
  • Should I do the highest risk stuff first or last? Are you talking about MMFs or components within an MMF? If the former, just do the highest value work regardless. If the latter, then always seek to maximise the estimated value of the MMF. That means take on the highest risk work first. Doing so will remove that risk and up the feature’s valuation should you recalculate at the end of your day’s work.
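Here is the promised toy sketch of the optimal implementation point: waiting to build improves clarity (project risk, expressed as a probability of success, rises) but erodes market conditions (market risk falls), and the optimal start point is simply the month where the resulting valuation curve peaks. All of the curves and figures are invented.

```python
# A toy valuation curve over candidate build-start months. Waiting improves
# clarity but erodes market conditions; risks are probabilities of success.

GENERATED_VALUE = 500_000
COST = 150_000

def project_risk(month):          # clarity improves the longer we wait
    return min(0.95, 0.5 + 0.1 * month)

def market_risk(month):           # the market window closes the longer we wait
    return max(0.1, 0.95 - 0.05 * month)

def value(month):
    return GENERATED_VALUE * project_risk(month) * market_risk(month) - COST

optimal_month = max(range(13), key=value)
print(optimal_month, value(optimal_month))  # month 4, roughly 187,500 here
```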

More about this next time. For now, one final point. If you outsource a programme of work and wish to use this approach, then make sure you engage two separate consultancies to avoid the obvious conflict of interest: one to do the valuation, and another to do the build. (Yes, I know they are lovely people, but you can’t really expect them to toil ceaselessly to find you the build option that will earn them the least money, can you?!…)