In the previous post we discussed control in complex adaptive systems. We examined the different categories of constraints that shape emergent behaviour, and highlighted the dangers created by failing to recognise the limits of human control. In this post I would like to examine the extent to which it’s meaningful to think of lean product development as ‘natural science’. In doing so, I intend to avoid getting drawn into philosophical debate about the scientific method. Rather, taking my cue once more from David Snowden’s LSSC11 keynote, I’d like to start by examining the narrative context of appeals to natural science (i.e. claims that field F ‘is science’). Historically, such appeals have commonly been made for one of two reasons:

  1. Legitimacy
  2. Power

Legitimacy

A long tradition exists of appealing to science in order to assert the reliability and validity of new or poorly established fields of intellectual inquiry. That tradition is rooted in academic circles, and can be traced back through management and education science; political science, economics and anthropology; through to linguistics and, famously, even history.

I would question whether this is something lean product development needs to concern itself with. Ours is a discipline based on practical application rather than theoretical speculation, so we can rely on natural selection to take care of that validation for us: if the methods we use aren’t effective, we will simply go out of business. Economic recession makes this process all the more reliable. Earlier this year I came across a really great 1997 risk management debate between Philippe Jorion and Nassim Taleb, in which Taleb makes the following point:

We are trained to look into reality’s garbage can, not into the elegant world of models. [We] are rational researchers who deal with the unobstructed Truth for a living and get (but only in the long term) [our] paycheck from the Truth without the judgment or agency of the more human and fallible scientific committee.

For me this summarises lean practice. Yes, we are performing validated learning – but simply because doing so is rational behaviour, and more effective than taking a blind punt. Beyond that, whether the techniques employed constitute science seems to me an unnecessary diversion.

That would arguably be of little consequence were it not for one issue: risk. Remember that risk is a function of value rather than an entity in its own right. The value that science strives for is truth (however that might be defined); the value that complex systems risk management strives for is survival. This difference creates diametrically opposed attitudes to outlier data points. Risk management in science is concerned with protecting the truth, so outlier data points are excluded by default and admitted only once they are repeatable and independently verified. Risk management in complex systems, on the other hand, is all about managing the cost of failure. It doesn’t matter if you are wrong most of the time, as long as in all those cases the cost of failure is marginal. What counts is being effective when the cost of failure is highest, because that creates the greatest chance of survival. As a result, outlier data points are included by default and only ever discounted once we are highly confident they are not risk indicators.

The financial crisis has demonstrated the consequences of confusing these two different perspectives: of risk management techniques which are right most of the time but least effective when the cost of failure is greatest.
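
To make the difference concrete, here is a minimal sketch in Python. The data, the outlier test and the predicate names are invented for illustration; the only point is the default policy each filter applies to an outlying observation.

```python
def filter_for_truth(observations, is_outlier, is_independently_verified):
    """Risk management in science: protect the truth. Outliers are excluded
    by default and admitted only once they are repeatable and independently
    verified."""
    return [x for x in observations
            if not is_outlier(x) or is_independently_verified(x)]


def filter_for_survival(observations, is_outlier, confidently_not_a_risk):
    """Risk management in complex systems: protect survival. Outliers are
    included by default and discounted only once we are highly confident
    they are not risk indicators."""
    return [x for x in observations
            if not is_outlier(x) or not confidently_not_a_risk(x)]


if __name__ == "__main__":
    daily_returns = [0.01, -0.02, 0.015, -0.45, 0.005]  # -0.45 is the outlier
    is_outlier = lambda x: abs(x) > 0.10
    never = lambda x: False  # not yet verified / not yet confident either way

    # The truth-seeking filter drops the crash until it is independently verified...
    print(filter_for_truth(daily_returns, is_outlier, never))     # [0.01, -0.02, 0.015, 0.005]
    # ...while the survival filter keeps it precisely because it might be a risk signal.
    print(filter_for_survival(daily_returns, is_outlier, never))  # [0.01, -0.02, 0.015, -0.45, 0.005]
```

Both filters agree on the unremarkable observations; they differ only on the crash-sized one, and that is exactly the difference that matters for survival.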

Power

The other common motive for appeals to science has been power. Going back to Descartes and the birth of western science, its narrative has always been one of mastery over and possession of nature – in short, the language of dominion and control. This takes us back to the themes of the previous post.

Such a perspective has become so deeply embedded in our cultural consciousness that it now influences our comprehension of the world in all sorts of ways. A recent example has been the debate in software circles about avoiding local optimisation. As a technique for improving flow through systems, the principle is sound and highly valuable. However, it is entirely dependent on how completely the system in question can be described. Some systems, such as manufacturing or software delivery pipelines, are amenable to such complete mapping. Many others, such as economic markets, are not. A technique commonly used in evolutionary biology to describe such systems is the fitness landscape:

The sectional view through such an environment might be as follows, leading the unwary to highlight the importance of avoiding local optimisations at points A and C and always evolving towards point B.

The problem here is that for landscapes such as economic markets, the above diagram represents the omniscient, God’s-eye view. We mere mortals only have knowledge of where we have been, so to us the same landscape simply looks like this:

Whilst it is a valuable insight in its own right simply to understand that our product is a replicator competing in a fitness landscape, avoiding local optimisations is impossible, however much we might like to, because we never know where they are: even when we reach a maximum, we can never tell whether it is local or global.
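
To make that concrete, here is a minimal hill-climbing sketch in Python. The landscape function and its numbers are invented for illustration; what matters is that the climber’s only knowledge of the landscape is the list of points it has actually visited, and nothing in that list can tell it whether the peak it settles on is local or global.

```python
from math import exp


def hidden_fitness(x):
    """The omniscient view of the landscape, unknowable to the climber:
    a local peak near x = 2 and a higher, global peak near x = 7."""
    return exp(-(x - 2) ** 2) + 2 * exp(-(x - 7) ** 2)


def hill_climb(start, step=0.1, max_steps=1000):
    """Greedy local search: move to a neighbour only if it is fitter.
    Returns the path of points visited, which is the climber's entire
    knowledge of the landscape."""
    x = start
    visited = [(x, hidden_fitness(x))]
    for _ in range(max_steps):
        best = max((x - step, x + step), key=hidden_fitness)
        if hidden_fitness(best) <= hidden_fitness(x):
            break  # a maximum -- but the visited points cannot say whether it is local or global
        x = best
        visited.append((x, hidden_fitness(x)))
    return visited


if __name__ == "__main__":
    path = hill_climb(start=1.0)
    peak_x, peak_fitness = path[-1]
    print(f"Settled at x = {peak_x:.2f} with fitness {peak_fitness:.3f}")
    # Nothing in `path` reveals the higher peak near x = 7; only the
    # omniscient hidden_fitness function knows it is there.
```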

It is for these reasons that I think it is unhelpful to think of lean as science. The narrative context of lean should be one not of arrogance but of humility. Rather than succumbing to the illusions of mastery and control in order to appease our desire for certainty, we are performing validated learning simply because it is the most effective course of action once we recognise the actual limits of both our control and our understanding.