Read: July 2012
Rating: 3/5 (Good)
Harford chronicles, and does a good job of explaining in an accessible way, one of the meta-trends facing our generation: namely, that “success” is increasingly based on our ability to operate effectively in a complex ecosystem where inputs are not clearly connected with outputs.
Success is the result of random tinkering and making decisions based on the feedback. Evolution is robust, which is why it has proved so effective for so long. Harford offers many ways for individuals and organizations to apply the concepts Nassim Taleb discusses as they relate to randomness and modernity.
The modern world is mind-bogglingly complicated.
Experts do outperform non-experts. These intelligent, educated and experienced professionals have insights to contribute – it’s just that those insights go only so far. The problem is not the experts; it is the world they inhabit – the world we all inhabit – which is simply too complicated for anyone to analyse with much success.
Today, 10 per cent of American companies disappear every year. What is striking about the market system is not how few failures there are, but how ubiquitous failure is even in the most vibrant growth industries.
The difference between market-based economies and centrally planned disasters, such as Mao Zedong’s Great Leap Forward, is not that markets avoid failure. It’s that large-scale failures do not seem to have the same dire consequences for the market as they do for planned economies. (The most obvious exception to this claim is also the most interesting: the financial crisis that began in 2007. We’ll find out why it was such a catastrophic anomaly in chapter six.) Failure in market economies, while endemic, seems to go hand in hand with rapid progress.
Astounding complexity emerges in response to a simple process: try out a few variants on what you already have, weed out the failures, copy the successes – and repeat for ever. Variation, and selection, again and again.
Imagine a vast, flat landscape, divided into a grid of billions of squares. On each square is a document: a recipe describing a particular strategy. Evolutionary theorists call this a ‘fitness landscape’.
The evolutionary approach is not just another way of solving complex problems. Given the likely shape of these ever-shifting landscapes, the evolutionary mix of small steps and occasional wild gambles is the best possible way to search for solutions. Evolution is effective because, rather than engaging in an exhaustive, time-consuming search for the highest peak – a peak that may not even be there tomorrow – it produces ongoing, ‘works for now’ solutions to a complex and ever-changing set of problems.
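The search process Harford describes – small variants on what you already have, occasional wild gambles, keep the successes, discard the failures – can be sketched as a toy program. The code below is my own illustrative sketch, not anything from the book: it searches a one-dimensional ‘fitness landscape’ with a lower local peak and a higher global peak, and all names (`landscape`, `adapt`, `gamble_rate`) are invented for the example.

```python
import math
import random

def landscape(x):
    """A toy one-dimensional 'fitness landscape' with two peaks:
    a lower local peak near x = -3 and a higher global peak near x = 2."""
    return math.exp(-(x + 3) ** 2) + 2 * math.exp(-(x - 2) ** 2)

def adapt(fitness, start, steps=5000, gamble_rate=0.1, seed=42):
    """Variation and selection: generate a variant, keep it only if it is fitter.

    Most variants are small steps from the current best strategy; occasionally
    the search takes a 'wild gamble' and jumps somewhere else entirely.
    """
    rng = random.Random(seed)
    best, best_fit = start, fitness(start)
    for _ in range(steps):
        if rng.random() < gamble_rate:
            candidate = rng.uniform(-10.0, 10.0)    # occasional wild gamble
        else:
            candidate = best + rng.gauss(0.0, 0.1)  # small variant on what works
        fit = fitness(candidate)
        if fit > best_fit:                          # selection: copy the success
            best, best_fit = candidate, fit
    return best, best_fit

# Start on the lower local peak; the mix of small steps and long shots
# lets the search escape it and find the higher peak near x = 2.
peak, height = adapt(landscape, start=-3.0)
```

The gambles are what matter here: with `gamble_rate` set to zero, accept-only-improvements hill-climbing stays stranded on the local peak near −3, because every route to the higher peak passes through lower ground – the ‘exhaustive search for the highest peak’ is never needed, only ongoing ‘works for now’ improvement.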
The evidence suggests that in a competitive environment, many corporate decisions are not successful, and corporations constantly have to cull bad ideas and search for something better. The same conclusion is suggested by Tetlock’s studies of expert judgement and by the history of ‘excellent’ companies that so often lose their way: we are blinder than we think. In a complex, changeable world, the process of trial and error is essential. That is true whether we harness it consciously or simply allow ourselves to be tossed around by the results.
But whether we like it or not, trial and error is a tremendously powerful process for solving problems in a complex world, while expert leadership is not. Markets harness this process of trial and error, but that does not mean that we should leave everything to the market. It does mean – in the face of seemingly intractable problems, such as civil war, climate change and financial instability – that we must find a way to use the secret of trial and error beyond the familiar context of the market.
The Soviet failure revealed itself much more gradually: it was a pathological inability to experiment. The building blocks of an evolutionary process, remember, are repeated variation and selection. The Soviets failed at both: they found it impossible to tolerate a real variety of approaches to any problem; and they found it hard to decide what was working and what was not. The more the Soviet economy developed, the less of a reference point the planners had. The whole system was unable to adapt.
‘Palchinsky principles’: first, seek out new ideas and try new things; second, when trying something new, do it on a scale where failure is survivable; third, seek out feedback and learn from your mistakes as you go along.
Above all, feedback is essential for determining which experiments have succeeded and which have failed. And in the Soviet Union, feedback was ruthlessly suppressed.
We want all of our public services to be like Coca-Cola: all identical, all good. And they can’t be. If we are to take the ‘variation’ part of ‘variation and selection’ seriously, uniformly high standards are not only impossible but undesirable. When a problem is unsolved or continually changing, the best way to tackle it is to experiment with many different approaches.
Traditional organisations are badly equipped to benefit from a decentralised process of trial and error. Static, solved problems are ideal for such organisations; as are tasks where generalised expertise counts for much more than local knowledge. But such ‘Coca-Cola problems’ are increasingly rare in a rapidly changing world, which is why – as we shall see – many businesses are beginning to decentralise and strip authority away from managers.
‘A person who has not made peace with his losses is likely to accept gambles that would be unacceptable to him otherwise’. Even those of us who aren’t professional poker players know how it feels to chase a loss.
Faced with a mistake or a loss, the right response is to acknowledge the setback and change direction. Yet our instinctive reaction is denial. That is why ‘learn from your mistakes’ is wise advice that is painfully hard to take.
The more complex and elusive our problems are, the more effective trial and error becomes, relative to the alternatives.
The three essential steps are: to try new things, in the expectation that some will fail; to make failure survivable, because it will be common; and to make sure that you know when you’ve failed.
A group of the very smartest agents isn’t as successful as a more diverse group of dumber agents. Even though ‘different’ often means ‘wrong’, trying something different has a value all of its own – a lesson Peter Palchinsky learned as he travelled the industrial hubs of Europe. Both because of the conformity effect Asch discovered, and because of the basic usefulness of hearing more ideas, better decisions emerge from a diverse group.
It is not enough to tolerate dissent: sometimes you have to demand it.
Trial and error will always be a part of how any organisation solves a complex, ever-shifting problem.
A complex world is full of knowledge that is localised and fleeting. Crucially, the local information is often something that local agents would prefer to use for their own purposes.
Any large organisation faces a basic dilemma between centralisation and decentralisation. Hayek, back in 1945, argued that the dilemma should be resolved by thinking about information. Decisions taken at the centre can be more coordinated, limit wasteful duplication, and may be able to lower average costs because they can spread fixed resources (anything from a marketing department to an aircraft carrier) across a bigger base. But decisions taken at the fringes of an organisation are quick and the local information will probably be much better, even if the big picture is not clear. Hayek believed that most people overestimated the value of centralised knowledge, and tended to overlook ‘knowledge of the particular circumstances of time and place’.
The evidence suggests that more technologically advanced firms are also more decentralised. Typically, new equipment (anything from software to a large machine tool) is superior not because it does the same things faster, but because it is more flexible. To get the most out of that flexibility requires well-trained, adaptable workers with authority to make their own decisions, which is precisely the kind of workforce successful firms seek out or train when they upgrade their machinery or their software. In the organisation of the future, the decisions that matter won’t be taken in some high-tech war-room, but on the front line.
It is easy to say with hindsight that official doctrine was completely wrong. But it would also be easy to draw the wrong lesson from that. Could ministers and air marshals really have predicted the evolution of aerial combat? Surely not. The lesson of the Spitfire is not that the Air Ministry nearly lost the war with their misconceived strategy. It is that, given that misconceptions in their strategy were all but inevitable, they somehow managed to commission the Spitfire anyway. The lesson is variation, achieved through a pluralistic approach to encouraging new innovations. Instead of putting all their eggs in what looked like the most promising basket – the long-range bomber – the Air Ministry had enough leeway in its procedures that individuals like Air Commodore Cave-Brown-Cave could fund safe havens for ‘most interesting’ approaches that seemed less promising, just in case – even approaches, like the Spitfire, that were often regarded with derision or despair.
In an uncertain world, we need more than just Plan A; and that means finding safe havens for Plans B, C, D and beyond.
It is easy to talk about ‘skunk works’, or creating safe havens for fledgling technologies, but when tens of billions of dollars are required, highly speculative concepts look less appealing. We have not thought seriously enough about how to combine the funding of costly, complex projects with the pluralism that has served us so well with the simpler, cheaper start-ups of Silicon Valley.
Humans are risk averse.
Two vital principles for promoting new technology. First, create as many separate experiments as possible, even if they appear to embody contradictory views about what might work, on the principle that most will fail. Second, encourage some long-shot experiments, even though failure is likely, because the rewards to success are so great. The great weakness of most government-funded research is that both these goals are the antithesis of government planning.
The two elements essential to encourage significant innovation in a complex world: a true openness to risky new ideas, and a willingness to put millions or even billions of dollars at risk. These two elements are fundamental to twenty-first-century innovation, yet they seem mutually incompatible. They are not. In fact the way to combine them has been around, if often forgotten, for more than three centuries.
Grants are a powerful tool of patronage. Prizes, in contrast, are open to anyone who produces results. That makes them intrinsically threatening to the establishment.
The lesson is that pluralism encourages pluralism. If you want to stimulate many innovations, combine many strategies. Prizes could, in theory, replace the patent system – governments could scrap patent protection but offer prizes for desirable inventions. But to explain that idea is to see its limitations. How could the government know enough about the costs, benefits and even the very possibility of an innovation to write the rules and set the prize money for a competition? We know we need an HIV vaccine, but nobody knew we needed the internet until we had it. We couldn’t have established a prize for inventing the World Wide Web.
At its most basic, adapting requires variation and selection.
It can be surprisingly difficult to distinguish between what is working and what is not.
We’ve already seen that bottom-up often beats top-down, and we’ll see even more powerful examples of that tendency later. But this is the point: the world is complicated. What works in the US Army may not work in a rural Javanese village. The lesson is to keep experimenting and adapting, because a single success may or may not replicate in other contexts.
‘We should not try to design a better world. We should make better feedback loops.’
Governments should not be picking and choosing, in our complex economies, specific ways to save the planet. They should be tilting the playing field to encourage us to make all our decisions with the planet in mind.
All this harks back to Peter Palchinsky’s second principle: make failures survivable. Normally, carrying out lots of small experiments – variation and selection – means that survivability is all part of the package. But in tightly coupled systems, failure in one experiment can endanger all. That is the importance of successfully decoupling.
‘One doesn’t have to be a Marxist to be awed by the scale and success of early-20th-century efforts to transform strong-willed human beings into docile employees.’
‘Your first try will be wrong. Budget and design for it.’ – Aza Raskin, designer at Firefox
In business, if you’re in the right place at the right time and happen to hit the right strategy, you’ll thrive without much need for adapting.
There are three essential steps to using the principles of adapting in business and everyday life, and they are in essence the Palchinsky principles. First, try new things, expecting that some will fail. Second, make failure survivable: create safe spaces for failure or move forward in small steps. As we saw with banks and cities, the trick here is finding the right scale in which to experiment: significant enough to make a difference, but not such a gamble that you’re ruined if it fails. And third, make sure you know when you’ve failed, or you will never learn.
‘If I find 10,000 ways something won’t work, I haven’t failed. I am not discouraged, because every wrong attempt discarded is just one more step forward.’
When a problem reaches a certain level of complexity, formal theory won’t get you nearly as far as an incredibly rapid, systematic process of trial and error.
Ariely has since become one of the most celebrated behavioural economists after the success of his book Predictably Irrational.
It isn’t cutting-edge technology that tends to undo the market leaders. It is the totally new approach, often with quite primitive technology and invariably of little value to the best customers of the leading industry players.
Disruptive innovations are disruptive precisely because the new technology doesn’t appeal to the traditional customers: it is different and for their purposes, it’s inferior. But for a small niche of new customers the new disruptive product is exactly what is needed. They want smaller, cheaper hard drives, or cameras that produce digital files, or email that you can access on any computer – and they are willing to tolerate the fact that the new product is inferior to the old one along all the traditional dimensions. That foothold in the niche market gives the new technology an opportunity to develop into a true threat to the old way of doing things.
Creating a space to experiment in which failures can be instructive and recoverable.
‘The best failures are the private ones you commit in the confines of your room, alone, with no strangers watching. Private failures are great.’ Quite so: you can learn from them without embarrassing yourself. But the next-best kind is in front of a limited audience. If your new show is going to fail, better that it does so away from Broadway, giving you a shot at recovering before it hits the big stage.
Being willing to fail is the essential first step to applying the ideas of Adapt in everyday life.
The next step is finding, whenever possible, relatively safe spaces in which to fail:
Peter Palchinsky’s principles: First, try new things; second, try them in a context where failure is survivable. But the third and final essential step is how to react to failure, and Tharp avoided several oddities of the human brain that often prevent us from learning from our failures and becoming more successful. The first of those quirks leads to denial.
It seems to be the hardest thing in the world to admit that we have made a mistake and to try to put it right. Twyla Tharp herself has the perfect explanation of why: because ‘it requires you to challenge a status quo of your own making.’
Cognitive dissonance describes the mind’s difficulty in holding two apparently contradictory thoughts simultaneously: in Tharp’s case, ‘I am a capable, experienced and respected choreographer’ and ‘My latest creation is stupefyingly clichéd.’
The second trap our minds set for us is that we chase our losses in an attempt to make them go away.
‘Hedonic editing’, borrowing a term coined by Richard Thaler, the behavioural economist behind the book Nudge. While denial is the process of refusing to acknowledge a mistake, and loss-chasing is the process of causing more damage while trying to hastily erase the mistake, hedonic editing is a subtler process of convincing ourselves that the mistake doesn’t matter.
A different psychological process, but with a similar effect on our ability to learn from our mistakes, is simply to reinterpret our failures as successes. We persuade ourselves that what we did was not that bad; in fact, everything worked out for the best. Twyla Tharp could have decided that what she’d actually set out to achieve was something artistically radical rather than commercially mass-market, so the incomprehension of the critics was, in a way, validation; she could have found a few audience members who liked it, and convinced herself that the views of this discerning clientele should be given greater weight.
This is happiness being synthesised: ‘The one I got is really better than I thought! That other one I didn’t get, suuuucks!’ We systematically reinterpret our past decisions as being better than they really were.
These, then, are the three obstacles to heeding that old advice, ‘learn from your mistakes’: denial, because we cannot separate our error from our sense of self-worth; self-destructive behaviour, because like the game-show contestant Frank, or Twyla Tharp when marrying Bob Huot, we compound our losses by trying to compensate for them; and the rose-tinted processes outlined by Daniel Gilbert and Richard Thaler, whereby we remember past mistakes as though they were triumphs, or mash together our failures with our successes. How can we overcome them?
I am not a failure – but I have made a mistake.
We need what Twyla Tharp calls ‘a validation squad’: friends and acquaintances who will back you but also tell it like it is.
John Kay, whose book The Truth about Markets was a profound influence on this one, uses the term ‘disciplined pluralism’ to describe how markets work: exploring many new ideas but ruthlessly cutting down the ones that fail, whether they are brand-new or hundreds of years old. But although Kay does not make this claim, ‘disciplined pluralism’ could also be a credo for a successful and fulfilling life.
Pluralism matters because life is not worth living without new experiences – new people, new places, new challenges. But discipline matters too: we cannot simply treat life as a psychedelic trip through a random series of novel sensations. We must sometimes commit to what is working: to decide that the hobby we are pursuing is worth mastering; that it’s time to write that novel, or strive for that night-school degree; or maybe to get married. Equally important: sometimes we need to make the opposite kind of commitment, and decide that the toxic job and the toxic boyfriend are simply not worth the amount of life they cost.
The excitement that so many students feel as they arrive at university – a world of possibilities, of safe experiments – is one we tend to lose. But we need not: the new possibilities are always out there. It’s one thing to be committed; it’s another to trap ourselves unnecessarily. Perhaps we become more shy of experimenting as we get older because we become more aware of the truth that has defined this book: that in a complex world, we’re unlikely to get it right first time. To embrace the idea of adapting in everyday life seems to be to accept blundering into a process of unremitting failure. So it’s worth remembering once again why it is worth experimenting, even though so many experiments will, indeed, end in failure. It’s because the process of correcting the mistakes can be more liberating than the mistakes themselves are crushing, even though at the time we so often feel that the reverse is true. It’s because a single experiment that succeeds can become Reginald Mitchell’s Spitfire or H.R. McMaster’s counterinsurgency strategy for Iraq. A single experiment that succeeds can transform our lives for the better in a way that a failed experiment will not transform them for the worse – as long as we don’t engage in denial or chase our losses.
The ability to adapt requires this sense of security, an inner confidence that the cost of failure is a cost we will be able to bear. Sometimes that takes real courage; at other times all that is needed is the happy self-delusion of a lost three-year-old. Whatever its source, we need that willingness to risk failure. Without it, we will never truly succeed.
Last Updated on April 18, 2019 by RipplePop