Editorial Note: The majority of this post is copied and pasted directly from The Origin of Wealth. It got cumbersome and clunky to delineate exactly what was quoted and where I was making edits to clarify or explain quotes that lacked context. So, for attribution purposes, just assume everything intelligent and helpful is from the book and all errors and omissions are mine.
Traditional neoclassical economics tends to use tools that require it to look at the economy as a static, equilibrium-seeking thing – something akin to a factory or machine. Complexity economics, an outgrowth of complexity science, instead tends to view the economy more like a biological system.
In The Origin of Wealth, Eric Beinhocker introduces complexity economics and argues that “wealth creation is the product of a simple, but profoundly powerful, three-step formula—differentiate, select, and amplify—the formula of evolution” and “the same process that led to an explosion of species diversity in the Cambrian period led to an explosion in SKU diversity during the Industrial Revolution.”
Evolution is an algorithm for innovation that searches the “fitness landscape” of a given system. That system could be an ecosystem, as in biological evolution, or the economy. The environment defines a design space, and selection (natural or otherwise) tests configurations from that design space over time, with the landscape itself shifting as the system evolves.
Some plants thrive in a rainy climate with lots of shade, like moss. Others thrive in dry climates with lots of sun, like a cactus. Both are the result of the same basic evolutionary process, but that process evolved specific structures that made each of them better adapted to their specific environment.
To take an economic example, consider how new clothing items are designed. Clothes designers come up with various sketches. The marketing team then selects a subset of the designs that it thinks consumers will like and makes a limited number of samples.
They then show those samples to the management of a clothing company, which selects a subset of the designs that it thinks consumers would like and arranges for their manufacture. The products are launched and the sales of each item provide feedback as to what people want and inform the next round of designs.
The fitness landscape here is defined by a constantly shifting set of consumer preferences and technological capabilities, among other factors, which ultimately determine how successful a product may be.
The Three Forces of Economic Evolution
Three forces co-evolve to create economic progress:
- Physical technologies
- Social technologies
- Business models
During the Industrial Revolution, for example, Richard Arkwright’s invention of the spinning frame (a Physical Technology) in the eighteenth century made it economical to organize cloth-making in large factories (a Social Technology), which in turn helped spur numerous innovations in the application of water power, steam, and electricity to manufacturing (back to Physical Technologies).
Business models evolve to adapt to the changing fitness landscape created by the interplay of physical and social technologies. The spinning frame kicked off the possibility of large manufacturing businesses that could use economies of scale.
I think of entrepreneurship largely as a skill for looking at how physical technologies and social technologies are shifting and trying to construct (and execute on) a business model that is optimized for the future fitness landscape created by those.
Business models, in turn, shape social and physical technologies. The development of the internal combustion engine and affordable cars (physical technologies) ultimately led to what we today consider the American Middle Class and the Organization Man.
Famously, Henry Ford offered a $5/day wage to any of his employees so they too could afford a car. This is remembered as a generous and benevolent act, but it turned out that there were some footnotes:
- The wage was available to married men, and to men under 22 or women only if they were supporting dependents.
- Workers were strongly advised not to buy consumer products on installment plans, other than horses and cars.
- Workers’ families would be interviewed to ask about their savings and drinking habits.
Henry Ford didn’t just want to pay people well, he wanted to mold the culture in a way that fit the emergence of the automobile industry.
An important part of middle-class existence is showing up at work, getting paid a steady salary, and paying bills on time – precisely what you needed someone to do to be able to buy a car.
No technology is developed in isolation. All technologies depend on a web of relationships with other technologies; the invention of the mobile phone, for example, drew not only on radio technology but on many other areas, such as computer technology and coding technology.
These interrelationships are not just technological, but economic. The economic web that has grown around the automobile, for example, includes industries ranging from steelmaking to oil, hotels, and fast food.
Complexity Economics vs. “Traditional” Economics
There are a bunch of examples in this book of how complexity causes you to look at economics in a different way, but the biggest one is this notion of evolution as the organizing metaphor.
Traditional economics has historically been concerned with two great questions:
- how wealth is created
- how wealth is allocated.
The models most economists use are focused on the second question. They begin with the assumption that an economy already exists, producers have resources, and consumers own various commodities. The models view the problem as how to allocate the existing finite wealth of the economy in a way that provides the maximum benefit for everyone.
For this focus on the allocation of finite resources, the mathematical equations of equilibrium imported from physics were extremely useful. However, they were not so useful for the question of how the economy grows, and so that question has been largely understudied by economists (though there are notable exceptions, such as Joseph Schumpeter).
As Poincaré put it, “[traditional economics] regards men as infinitely selfish and infinitely farsighted. The first hypothesis may perhaps be admitted in a first approximation, the second may call for some reservations.”
What is needed is a more realistic and dynamic model and this is what complexity economics tries to supply.
In particular, the concept of time and path dependency is crucial.
Most Traditional Economic models don’t actually consider time; instead, they simply assume that the economy clicks along instantly from one equilibrium to another and that the transient conditions between equilibrium states do not matter.
In the Traditional Economics view, when a $20 bill hits the street, the world is suddenly out of equilibrium. As rational, self-interested people have an incentive to pick up $20 bills, someone will come along, pick up the bill, and move the world back to equilibrium.
What matters from the traditional economics view is that we know what the equilibrium state is—one with no $20 bills lying on the streets.
In the real world, of course, there is a time delay between a $20 bill’s landing on the sidewalk and someone’s seeing it and picking it up. It then stands to reason that at any point in time, there are at least some undiscovered $20 bills lying on sidewalks somewhere.
Markets may be efficient in the long run, but there are time-related arbitrages. If there weren’t, no startups would ever succeed. We call the skill of capitalizing on this arbitrage entrepreneurship.
It’s true that someone was going to start a payment processing company on the internet, but there was an arbitrage period where demand and supply were out of equilibrium. PayPal succeeded in the early 2000s by arbitraging the demand for an online payment service and the lack of good options.
These notions are handled in traditional economics as “exogenous variables,” which mostly means ignoring them because it’s not clear how to handle them.
The problem with this approach is that it gives economists an escape hatch and allows them to put the most difficult and often most interesting questions outside the bounds of economics. For example, if technological change is treated as a random, outside force (like the weather), then one doesn’t need a fundamental theory of the interaction between technological change and changes in the economy.
One can attribute the waves of the business cycle to mysterious outside forces such as changes in consumer confidence, or attribute crashes in the stock market to news. However, that’s not very useful and is likely not even true.
There is a parallel to this approach in biology. For years, evolutionary theorists pondered the puzzle of mass-extinction events. Our natural instinct is to look for a proximate and proportionate cause. A big event must have had a big cause, right?
In the 1980s the geologist Walter Alvarez and his father, Nobel Prize laureate physicist Luis Alvarez, proposed a theory that the dinosaurs were wiped out by a massive asteroid colliding with the earth at the end of the Cretaceous period. Indeed, some evidence supports this hypothesis.
Yet when other researchers stepped back and examined the long-term fossil record, they found that while the asteroid theory might explain the particular mass extinction event of the Late Cretaceous, it did not account for the ten other major extinction spasms (some much bigger) evident in the fossil record.
More recent work has shown that extinction spasms are probably caused by the internal dynamics of evolution itself, without a major external event. In complex adaptive systems, small, innocuous events can occasionally set off avalanches of change, a phenomenon known as sensitivity to initial conditions or, more memorably from Jurassic Park, the butterfly effect.
A lot of economic thinking suffers from the spherical cow problem – it ignores a lot of stuff that seems to matter like how the economy grows over time and makes so many simplifying assumptions as to be useless in practice.
Sugarscape And Emergence
To illustrate the idea of how an economy changes over time, a group of complexity scientists built a simple computer simulation called Sugarscape.
Imagine a group of people shipwrecked on a desert island, except that both the island and the castaways are simulations inside a computer.
The computer island is a perfect square with a fifty-by-fifty grid overlaid on top of it, like a giant chessboard. The virtual island has only one resource—sugar—and each square in the grid has different amounts of sugar piled on it. The heights of the sugar piles range from four sugar units high (the maximum) to zero (no sugar). The sugar piles are arranged such that there are two sugar mountains, one mountain at the northeast corner and one at the southwest corner, each with sugar piled three and four units high. Between the two mountains is a “badlands” area with little or no sugar.
The Sugarscape landscape is, of course, a vast simplification of an actual island, but it highlights three essential features of real-world environments:
- There is a notion of physical space. That is, you can move north, south, east, and west on it.
- There is a source of energy, namely, the sugar.
- The terrain is differentiated; it has mountains, valleys, fertile areas, and desert areas.
Likewise, the virtual castaways on Sugarscape are vast simplifications but share some key characteristics with real people. Each virtual person, or “agent,” is an independent computer program that takes in information from the Sugarscape environment, crunches that information through its code, and then makes decisions and takes actions.
At first, when the game begins, things are a bit chaotic as the agents rush around looking for sugar, and many of the agents who start off in the badlands die of starvation. Pretty quickly, however, order begins to emerge.
As order starts to emerge, you see economic differences between agents start to emerge. At the beginning of the simulation, Sugarscape is a fairly egalitarian society and the distribution of wealth is a smooth, bell-shaped curve with only a few very rich agents, a few very poor, and a broad middle class.
As time passes, however, this distribution changes dramatically. Average wealth rises as the agents converge on the two sugar mountains, but the distribution of wealth becomes very skewed, with a few emerging superrich agents, a long tail of upper-class yuppie agents, a shrinking middle class, and then a big, growing underclass of poorer agents.
How, from these random initial conditions, do we get a skewed wealth distribution?
The skewed distribution is an emergent property of the system. It is a macro behavior that emerges out of the collective micro behavior of the population of agents. The combination of the shape of the physical landscape, the genetic endowments of the agents, where they were born, the rules that they follow, the dynamics of their interactions with each other and with their environment, and, above all, luck all conspire to give the emergent result of a skewed wealth distribution.
To see how this works, imagine Agent 1 and Agent 2, both born in the middle of the range of genetic endowments, and both born in middle-class sugar neighborhoods. At birth, their chances of success or failure are equal. But Agent 1, on his first move in life, looks in each direction and sees other agents already occupying the landscape to the east, west, and south. So he heads north, eating sugar along the way.
By chance, north takes him into the heartland of one of the sugar mountains and to an area that, also by chance, is only sparsely occupied. He then spends the next several turns feasting on the maximum levels of sugar, racking up large savings, until other agents discover the area and start to move in.
This initial golden period, however, has boosted Agent 1’s wealth far above the average, and as one of the first onto the sugar mountain, he is able to stay there and live out the rest of his life in comfort. Meanwhile, Agent 2 isn’t so lucky. On his first move, he heads south, toward the badlands. By the time he realizes his error, other agents have filled in the rich northern area, preventing him from heading back to the fertile region for several turns. During those turns, his savings dwindle and he falls further and further behind the average.
Even when the agents had the same initial endowments, small, chance events early on were magnified by the dynamics of the game, leading to very different outcomes for the two agents. This is an example of path dependence – the initial path you set out on matters a lot in the long run.1
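If you want to see this for yourself, a stripped-down, Sugarscape-style toy is easy to write. The sketch below is my own and not Epstein and Axtell's actual model: the peak locations, vision, metabolism, and regrowth numbers are made up, and it ignores the one-agent-per-square rule, but it is enough to watch a skewed wealth distribution emerge from agents who all play by the same rules.

```python
import random

SIZE = 50        # 50x50 grid, as in the book's description
N_AGENTS = 250   # arbitrary population size
STEPS = 200

# Two "sugar mountains": capacity falls off with distance from two peaks
# (peak coordinates and falloff are illustrative, not the book's values).
def capacity(x, y):
    peaks = [(12, 37), (37, 12)]
    best = 0.0
    for px, py in peaks:
        dist = ((x - px) ** 2 + (y - py) ** 2) ** 0.5
        best = max(best, 4 - dist / 6)   # 4 units at a peak, 0 in the badlands
    return max(0, round(best))

sugar = [[capacity(x, y) for y in range(SIZE)] for x in range(SIZE)]

# Each agent's "genetic endowment": vision (how far it sees) and
# metabolism (sugar burned per step), plus a random starting spot.
agents = [{
    "x": random.randrange(SIZE), "y": random.randrange(SIZE),
    "vision": random.randint(1, 6), "metabolism": random.randint(1, 4),
    "wealth": random.randint(5, 25),
} for _ in range(N_AGENTS)]

for _ in range(STEPS):
    random.shuffle(agents)
    for a in agents:
        # Look north/south/east/west up to `vision` squares and move to the
        # richest square in view (this toy ignores occupancy for brevity).
        options = [(sugar[a["x"]][a["y"]], a["x"], a["y"])]
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            for d in range(1, a["vision"] + 1):
                nx, ny = (a["x"] + dx * d) % SIZE, (a["y"] + dy * d) % SIZE
                options.append((sugar[nx][ny], nx, ny))
        _, a["x"], a["y"] = max(options)
        # Harvest the sugar and pay the metabolic cost.
        a["wealth"] += sugar[a["x"]][a["y"]] - a["metabolism"]
        sugar[a["x"]][a["y"]] = 0
    # Sugar regrows toward each square's capacity; starved agents die.
    for x in range(SIZE):
        for y in range(SIZE):
            sugar[x][y] = min(capacity(x, y), sugar[x][y] + 1)
    agents = [a for a in agents if a["wealth"] > 0]

wealth = sorted(a["wealth"] for a in agents)
print(f"{len(agents)} survivors; poorest 5: {wealth[:5]}; richest 5: {wealth[-5:]}")
```

The exact numbers vary from run to run, but the gap between the richest and poorest survivors keeps widening the longer you let it run, for the same reasons Agent 1 and Agent 2 diverged above.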
Sugarscape is far too simple to draw specific conclusions about real-world poverty and inequality. But, the model does show that one-dimensional views, whether on the left (e.g., poverty is caused by the rich exploiting the poor) or on the right (e.g., if you are poor, you must be dumb or lazy) are likely to be wrong.
It also helps to provide a better intuitive model for how the economy and business actually work. There is a large element of path dependence where early choices constrain future choices. It shows the economy as something that is not a solvable problem in perfect equilibrium, but a shifting and dynamic fitness landscape with constantly shifting “piles of sugar.”
Starting from this more evolutionary example, Beinhocker provides a number of mental models for how complexity economics views the economy differently than traditional approaches.
- Stocks and Flows
- Behavioral Irregularities and Bounded Rationality
- System Structure and the Bullwhip Effect
- Fitness Landscapes
I’ll outline those here and then talk about some of the implications for business people and investors at the end.
Stocks and Flows
A convenient way to think and talk about a dynamic system such as the economy is in terms of stocks and flows. A stock is an accumulation of something, such as the balance in a bank account or water in a bathtub. The rate at which a stock changes over time is known as a flow. For example, the rate of money flowing into or out of a bank account or water flowing into and out of a bathtub would both be flows.
When one starts thinking of the economy as a collection of stocks and their related flows, it quickly becomes apparent that the various stocks and flows are connected to each other in complex ways.
For example, if the stock of employment fell to a low level, a policy maker might decide to cut interest rates in order to encourage borrowing, which would expand the stock of money available for investment.
This would be used by businesses to invest in new productive capacity, creating more demand for employees, thus raising the stock of employment, which finally would feed back to affect future interest rate policy. Such chains of relationships between stocks and flows in a dynamic system are known as feedback loops.
A positive feedback loop is one that is self-reinforcing and drives the system further and further from equilibrium. For example, a drop in consumer confidence can lead to decreased spending, which leads to decreased production, which leads to unemployment, which leads to even lower consumer confidence and thus a further drop in spending, spiraling right down into a recession. This was the dynamic loop that John Maynard Keynes famously identified in his 1936 General Theory of Employment, Interest and Money.
It is an example of positive feedback, even though it is not very positive for the people in it. The key thing to remember is that positive feedback reinforces, accelerates, or amplifies whatever is happening, whether it is a virtuous cycle or downward spiral. Systems with positive feedback can thus exhibit exponential growth, exponential collapse, or oscillations with increasing amplitude.
Systems with negative feedback tend to get pushed back to some set point, an equilibrium, or oscillate with decreasing amplitude and peter out over time. A thermostat would be a good example. If you set the temperature to 72 degrees then anytime it goes above or below that, the A/C system will turn on to push it back in line.
Dynamic systems of stocks and flows and feedback loops have a third ingredient—time delays.
You have probably had the experience of taking a shower in an unfamiliar place such as a hotel room, turning on the hot water, noticing it isn’t hot enough, turning it up some more, and then it turns scalding, so you turn it down, it is still too hot, so you turn it down some more, then it is freezing, and so on.
The problem is that there is a time delay between your actions on the water knobs and the feedback from the shower temperature. The delay causes you to overshoot and oscillate around the desired temperature.
This happens in the economy all the time and can result in a bullwhip effect (on which, more below). If there is more demand for avocados, you can’t instantly just make more avocados no matter the price. You have to plant them and wait for them to grow to a harvestable size, which takes time.
Dynamic systems can quickly become quite complex if one has multiple stocks and flows interacting via both positive and negative feedback loops. The positive feedback loops drive the system at the same time as the negative feedbacks are fighting back to dampen and control it. When time delays are thrown in, the driving and damping can get out of balance, and out of sync, causing the system to oscillate in highly elaborate ways.
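Here is a toy stock-and-flow version of the shower problem (my own sketch, not from the book; the target, gain, and delays are arbitrary numbers). The stock is the temperature, the flow is your adjustment at the knob, and the feedback is negative: you always push toward the target. The only thing that changes between runs is how stale your reading of the stock is.

```python
# Toy stock-and-flow model: a controller nudges a stock toward a target,
# but it only sees the stock's value with a delay.
def simulate(delay_steps, target=72.0, gain=0.4, steps=40):
    history = [50.0]                      # the stock (e.g., water temperature)
    for _ in range(steps):
        # React to an old reading of the stock...
        observed = history[max(0, len(history) - 1 - delay_steps)]
        flow = gain * (target - observed) # the adjustment (the flow)
        history.append(history[-1] + flow)
    return history

for delay in (0, 2, 6):
    path = simulate(delay)
    print(f"delay={delay}: " + " ".join(f"{v:6.1f}" for v in path[::5]))
```

With no delay the stock settles smoothly at the target; with a modest delay it overshoots and rings; with a long enough delay the oscillations grow instead of dying out, which is exactly the driving-versus-damping imbalance described above.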
The economy, then, is an n-body problem on a massive scale. Each individual person in the economy has his or her own set of stocks (savings, debt, skills, etc.) and flows (income, expenses, learning, etc.) just as each individual agent in the highly simplified Sugarscape economy had its own stock of sugar and flow of sugar consumption and digestion. And, just as the dynamics of Sugarscape were a product of all the interactions between the agents in that imaginary economy, the dynamics of the real economy are a product of the nonlinear interactions of billions of people.
You can begin to appreciate why economic forecasters have such a tough job and an even lower reputation than weather forecasters. The combination of sensitivity to initial conditions, path dependence, and immense dynamic complexity makes the economy, like the weather, unforecastable in all but the very short term.
When we talk about “flows over fundamentals,” this is what we are talking about. The fundamentals of the economy might matter the most in the long run, but, in the short term, the stocks and flows tend to dominate.
Most economic decisions are dynamic, self-referential, and ill-defined. Company A thinking about cutting prices will worry not just about its own situation, but what companies B and C will do; they in turn will partly base their strategies on their perception of what company A will do.
Adopting a new technology standard, positioning a new product in the market, or assessing the value of a stock all have self-referential expectations built into them. Such a world of evolving, interacting, muddling agents will never reach equilibrium.
Behavioral Irregularities and Bounded Rationality
Real human beings have real behavioral irregularities. There is an extensive literature on cognitive biases2 that includes items like:
- Framing Biases – Exactly how an issue is framed can affect how we think about it. Compare, for example, the two questions “Should Britain adopt the euro?” and “Should Britain abolish the pound?” Under perfect rationality, this framing should not matter.
- Representativeness – People have a bad habit of drawing big conclusions from very small and biased samples. For example, we might talk to three friends in the office, each of whom has coincidentally had a bad day, and conclude that the company is falling apart.
- Availability Biases – People tend to make decisions based on data that is easily available as opposed to finding the data that is really needed to make a good decision. This is, in effect, “looking for your lost keys under the lamp post” because that is where the light is best.
Humans have bounded rationality; they don’t know what they don’t know. As an example, consider the El Farol Bar Problem. El Farol is a popular bar in Santa Fe that is the favorite spot of 100 people for a Thursday night happy hour. But they don’t like it to be too crowded when they go. If they believe the bar will be too crowded (more than 60 people, say), they will not go. If they believe fewer than 60 will show up, they go.
How will they act? The bar (and the collection of forecasts being acted on) self-organizes into an equilibrium pattern that hovers around the comfortable 60 level.
The reason is that, if fewer than 60 came in the long term, low forecasts would be valid, so many would come, negating those low forecasts. If more came in the long run, fewer would show up.
So an “attraction” to this level emerges. But although the population of forecasts on average supports this comfortable level, the actual forecasts in use keep changing. The outcome is a bit like a forest, the shape of which does not change, but the individual trees of which do. Notice that equilibrium in this problem is not assumed; it emerges and self-organizes because it is a natural attractor.
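Brian Arthur originally studied this with a computer simulation. The sketch below is a much-simplified version in that spirit (my own; the five forecasting rules and the scoring scheme are made up, not Arthur's actual predictor set). Each agent holds a few rules, uses whichever has been most accurate so far, and goes only if its forecast is under 60.

```python
import random

N, THRESHOLD, WEEKS = 100, 60, 52

# A small pool of hypothetical forecasting rules; each maps the
# attendance history to a prediction for this week.
PREDICTORS = [
    lambda h: h[-1],                      # same as last week
    lambda h: sum(h[-4:]) / 4,            # average of the last 4 weeks
    lambda h: 2 * h[-1] - h[-2],          # follow the trend
    lambda h: h[-3],                      # same as 3 weeks ago
    lambda h: 100 - h[-1],                # contrarian "mirror" rule
]

# Each agent holds 3 random rules and a running error score per rule.
agents = [{"rules": random.sample(range(len(PREDICTORS)), 3), "scores": {}}
          for _ in range(N)]

history = [random.randint(0, 100) for _ in range(4)]  # seed weeks

for week in range(WEEKS):
    going = 0
    for a in agents:
        # Use the rule with the lowest cumulative error so far.
        best = min(a["rules"], key=lambda r: a["scores"].get(r, 0))
        if PREDICTORS[best](history) < THRESHOLD:
            going += 1
    # Score every rule against what actually happened this week.
    for a in agents:
        for r in a["rules"]:
            a["scores"][r] = a["scores"].get(r, 0) + abs(PREDICTORS[r](history) - going)
    history.append(going)

print("attendance over the last 10 weeks:", history[-10:])
```

Nothing in the code says “attendance should be about 60,” yet attendance oscillates around the threshold rather than running off to 0 or 100, because any forecast that gets too popular undermines itself.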
This intuitively makes sense. If many people believe that a stock is very undervalued and many people buy it, then the price is driven up.
It also gives a way of thinking about why markets have bubbles or crashes – periods of high volatility – as well as relatively stable periods.
In this view, random periods of high and low volatility can emerge because, if some investors occasionally discover new profitable forecasting methods, they then invest more and this changes the market slightly, causing other investors to also change their forecasting methods and their bids and offers.
Changes in forecasting beliefs thus ripple through the market in avalanches of all sizes, causing periods of high and low volatility.3
In this view, such phenomena as random volatility, technical trading or bubbles, and crashes are not ‘departures from rationality’. These phenomena are the result of economic agents discovering behavior that works temporarily in situations caused by other agents discovering behavior that works temporarily. This is neither rational nor irrational; it merely emerges.
System Structure: The Bullwhip Effect
The system’s structure also makes a big difference. The structure of the supply chain between a manufacturer and a retailer creates dynamics that can lead to oscillations. This phenomenon is known as the bullwhip effect (which many people learned about following the 2020 Covid outbreak).
From my post on The Bullwhip Effect and Supply Chain Inflation:
Inventory-based businesses don’t want to have too much in stock because inventory costs money to store and ties up cash flow (and, in some cases, can spoil). But they also don’t want to have too little in stock because then they are losing sales.
The theoretical ideal then is the “just-in-time” supply chain where each piece of the supply chain has exactly the right amount of inventory at the right time.
This seems like a simple problem, but there is an issue: time. You can’t instantly place an order and get what you want. Let’s say you are a wholesaler selling beer to retail stores, and there is a two-week delay for orders to travel upstream and a two-week delay for product to ship back downstream.
As the wholesaler, you want to buy exactly enough inventory from the distributor so that they can deliver the exact amount needed to the retailer.
The wholesaler can’t communicate with the retailer except by seeing the incoming orders. So, in any given week, the wholesaler sees how many cases the retailer ordered the previous week. They will need to deliver that quantity the following week.
It’s not so easy to just place an order for that amount with the distributor because of the delay.
Let’s say the retailer wants 10 cases of beer shipped to them next week.
If, that same week, you place an order for ten cases of beer with the distributor, the order won’t reach the distributor until next week, the beer won’t arrive back to you until the week after, and you can’t deliver it to the retailer until the week after that.
If the end demand is fairly steady, then it’s pretty easy to know how much to order. Let’s say the retailer is ordering between 8-12 cases of beer every week from the wholesaler. As the wholesaler, it’s pretty easy to forecast your demand.
You want to make sure you have about 12 cases in stock at any time so that you can fill the order without holding too much excess inventory. (Even this is trickier than it sounds, but still somewhat solvable).
Where things start to break down is when you get big swings in end-user demand. Let’s say in week 1 the retailer orders ten cases. As the wholesaler, you assume that is about how much beer the retailer normally orders, so you then order ten cases from the distributor.
The next week though, the retailer orders forty cases. The wholesaler only has ten cases in stock and ten more coming so they can’t fulfill the order and have a backlog of twenty cases they need to deliver. They’re losing out on sales!
Worrying that demand has now increased dramatically, the wholesaler then orders sixty cases from the distributor – twenty cases will be used to fill the backlog and the wholesaler is expecting another order for forty cases and wants to make sure they are in stock.
Well, the next week the retailer only orders five cases. Now the wholesaler is going to have way too much inventory which is tying up their cash flow and costing them money to store.
The wholesaler then doesn’t place an order at all with the distributor since they already have a ton of inventory. The next week the retailer comes in and orders sixty cases and the wholesaler doesn’t have enough.
As the wholesaler, your week-to-week orders and inventory end up swinging wildly between stockouts and gluts.
So there’s a bullwhip effect at each individual player’s level, but more perniciously, it whips up and down the chain.
The key learning from the beer game is that small changes in end demand can create huge imbalances as they move up and down the supply chain.
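Here's a toy, beer-game-flavored version of that chain in code (my own sketch; the lead times, target inventory, and ordering rule are arbitrary, and I let every order arrive in full after the lead time, which real supply chains don't). Each tier ships what it can, then orders upstream based on what it was just asked for plus a correction toward its target inventory.

```python
# Toy supply chain: customer -> retailer -> wholesaler -> distributor.
LEAD, TARGET, WEEKS = 2, 20, 30

tiers = [{"inventory": 20, "backlog": 0, "pipeline": [10] * LEAD, "orders": []}
         for _ in range(3)]            # 0=retailer, 1=wholesaler, 2=distributor

# End-customer demand: steady at 10 cases, with a one-week bump in week 10.
demand = [10] * 10 + [25] + [10] * (WEEKS - 11)

for week in range(WEEKS):
    incoming = demand[week]
    for tier in tiers:
        # Receive the shipment whose lead time finishes this week.
        tier["inventory"] += tier["pipeline"].pop(0)
        # Ship what is being asked for (plus any backlog), if possible.
        owed = incoming + tier["backlog"]
        shipped = min(owed, tier["inventory"])
        tier["inventory"] -= shipped
        tier["backlog"] = owed - shipped
        # Naive ordering rule: replace what was demanded and close half the
        # gap to the target inventory.
        order = max(0, incoming + (TARGET - tier["inventory"] + tier["backlog"]) // 2)
        tier["orders"].append(order)
        tier["pipeline"].append(order)  # assumed to arrive in full after LEAD weeks
        incoming = order                # this tier's order is the next tier's demand

for name, tier in zip(["retailer", "wholesaler", "distributor"], tiers):
    print(f"{name:12s} peak order: {max(tier['orders']):3d}   weeks 9-15: {tier['orders'][9:16]}")
```

A single 15-case bump in end-customer demand in week 10 produces progressively wilder order swings the further up the chain you look, even though every tier is following a sensible-looking local rule.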
In the case of the stock market, you can get similar behavior. The structure of the limit order system and the dynamics of leverage and liquidity, combined with trader behavior, led to power-law volatility in Q1 2020.
The bullwhip effect shows how feedback loops and system structure can create emergent behavior that no individual actor wants. Everyone wants to have the right amount in stock at the right time but trying to do that can cause problems.
The Three Root Causes of Complex Behavior
Within the frame of seeing the economy as a fitness landscape like Sugarscape full of stocks, flows and time delays, complex emergent phenomena such as business cycles and stock price movements seem likely to have three root causes that grow out of these dynamics:
- Behavioral Irregularities and Bounded Rationality
- System Structure like what you see in The Bullwhip Effect
- Exogenous Inputs
Exogenous inputs would be the large and unexpected jump in customer orders in the case of the Bullwhip Effect. In the case of the stock market, it is new information or news such as earnings reports.
The first and third causes (behavioral irregularities and exogenous inputs) are ones over which no one really has control. So focusing on the system structure is probably where improvements can be made.
The final mental model Beinhocker offers for thinking about complexity economics is the idea of fitness landscapes.
Fitness Landscapes
A fitness landscape shows us visually (and to researchers mathematically) where the good designs in a design space are located. We can think of good designs as high fitness peaks, and our problem of finding good designs in the near infinity of design space can then be reconceived as finding high peaks in the fitness landscape.
Some attributes seem to have very high fitness. In biology, the eye is incredibly complex, yet it is believed to have evolved independently more than once. Eyes are very useful, and so evolution probably tended toward creating eyes even though the path to getting there was incredibly complex.
From an economics lens, the highest points on the landscape are the most fit (read: most enduringly profitable). A good business model would occupy a high fitness spot while a bad business model would occupy a low fitness spot.
Unlike a normal mountain landscape, the fitness landscape is pocketed with flat spots. The vast majority of small changes in an organism’s DNA code do nothing, either positive or negative, for fitness. Thus, there will be a large, flat spot in the landscape around the 36 billion or so one-letter variants of your DNA, and the trillions and trillions of few-letter variants, and so on.
However, within these flat plateaus are deep holes and tall peaks.
The deep holes pockmark the landscape like Swiss cheese. A step into one of these holes results in a dramatic drop in fitness. This is because even though the vast majority of small mutations do little, some are quite dangerous. If the instruction for making a key protein in brain function is deleted from your DNA, you’re in big trouble.
The flip side is that while there are some highly negative small mutations, there are some highly positive ones as well: the tall peaks.
Some small mutations can have a dramatic improvement in fitness or, sometimes even more importantly, can open the way for still further fitness-enhancing mutations.
The flat spots, Swiss cheese holes, and portal routes characteristic of the landscape also contribute to punctuated equilibrium by creating a nonlinearity in the impact of genetic changes. Most changes have little or no effect, but some changes have a big impact on fitness (for good or ill) and thus may have a disproportionate effect on everything else.
When you have a new physical technology, social technology, or business model, it causes the fitness landscape to shift.
In my book The End of Jobs, I use the example of the Long Tail which explains how the internet changed the fitness landscape. By reducing the cost of carrying an additional product (there is no concept of shelf space on the internet), it allowed for a long tail of products to be sold that didn’t make sense in a retail-based world.
One of the first people to capitalize on this was Derek Sivers. In 1998, Derek Sivers was a musician living in New York. In his words, he’d “already made it.” He had bought a house with the money from his music and was living his childhood dream of being a musician. He thought it might be cool if he could take his music and sell it to people over this thing called “the internet.” He spent three months building a website, getting a payment processor and shopping cart set up, and eventually managed to actually sell some of his CDs.
When his friends heard he could sell his CDs online, they asked him if he would sell their CDs online. Derek’s a pretty nice guy, so he said: “Why not?” Then their friends asked. So Derek started charging musicians to post their CDs and turned his hobby into CDBaby.com.
CD Baby was eventually acquired by Disc Makers for $22 million. CD Baby succeeded because it was one of the first companies built around the concept of the Long Tail. It leveraged the internet to make it possible to sell CDs for independent musicians who, in the past, had never been able to get their records into record stores.
This was revolutionary for the music industry. Derek described it as though he were in the 60’s at Woodstock: “Woah man, the shackles are off! Those record labels can’t hold us back anymore.”
In a world where distribution is controlled by record stores and the costs of holding inventory are non-trivial, there’s a cutoff point where it stops making sense for the record stores to stock shelves with your CD. A record store has to pay for more shelf space, so if a record doesn’t sell a certain number of copies, then they can’t afford to stock your CD.
Because CD Baby was selling online, the cost of holding more inventory, of listing more CDs for sale, rapidly approached zero. Once they’d built the website, the cost of adding another product page was negligible and decreased with each product.
If you wanted to get your CD sold by a traditional distributor, you had to pay a few thousand dollars to get it set up. It took nine months or more to get paid because the artist wasn’t paid until the stores had returned any albums that didn’t sell. You didn’t know who your customers were, so you couldn’t market or sell future albums to them.
To sell with CD Baby, it cost $35 to set your CD up on the site. You were paid every week and you were given a list of your customers, including their email addresses, in order to sell to them.
These aren’t marginal changes; they’re order-of-magnitude changes.
- Cost: ~$3,000 → $35 (roughly 100 times cheaper)
- Time to get paid: 9 months → 1 week (roughly 36 times faster)
- Customer communication: none → you own every customer’s contact info
CD Baby revolutionized the music industry by escaping the limitations of the short head and “revealing” the Long Tail.
A new physical technology (the internet) shifted the fitness landscape to make the marginal cost of carrying a product much lower than it had been in a retail-based world. This made a new business model (CD Baby) viable. This, in turn, made independent musicians more viable as they could sell their CDs online. Of course, we now know that as internet bandwidth improved, music transitioned to streaming, changing the landscape yet again. This is the nature of it. The fitness landscape is not static, but undulating with the interplay of physical technologies, social technologies, and business models.4
How to Get to the Top of the Mountain
If you want to get to the top of the mountain, you have to keep searching all these paths. The heights of the peaks in the landscape constantly change, so by the time we found the single highest peak, it would undoubtedly have shrunk, or another would have grown higher.
In order to search the landscape, we might first pick a random starting point and then use the following simple rule: take a step in a random direction; if the step led you up, stay there and take another random step. If not, return to where you were before and try again. You can imagine that if your starting point were down in a valley and you were following this rule, you would initially wander the valley floor in random directions. But eventually, you would find a path up and pretty quickly scale the nearest peak. This rule is called an adaptive walk.
While the adaptive walk is efficient at climbing individual peaks, it has an important limitation; once you reach the top of a peak, you stop and are stuck on a local maximum. There might be a much higher peak just a short way over the valley, but you will never find it because you would have to go down first to get to it. The adaptive walk could even get stuck on a molehill right in the middle of a field of Everests.
Another strategy would, again, start with a random point. This time, however, instead of walking, imagine you have a very powerful pogo stick. When you push a button on the pogo stick, it launches you in a random direction over a random distance. Thus, you keep hitting the button hoping to land on a high spot. This strategy is called a random jump.
The random jump has the advantage over the adaptive walk of not getting stuck on local maxima: you might hop clear over the intervening valley from a low peak to a higher peak. However, it has the disadvantage that you might also find yourself down in a death valley. Thus, the random jump is a riskier strategy than the adaptive walk, because the adaptive walk at least keeps you out of the lowest lowlands and away from the poisonous fog (in Beinhocker’s metaphor, the lowest regions of the landscape are shrouded in a poisonous fog; designs that end up there don’t survive).
Let’s try an algorithm that mixes the two choices, an algorithm that does an adaptive walk to keep us climbing higher and higher in the landscape, but also gives us a few random jumps to keep us from getting stuck on local peaks. We’ll also weight our random jumps toward smaller jumps (the longer the jump, the lower its probability of occurring). This will still help keep us from getting stuck on local peaks, but reduce the odds of ending up in a really low valley.
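Here is a small sketch of that search in code (my own toy landscape with three peaks of height 5, 7, and 9; none of the numbers come from the book). The search takes single uphill steps most of the time, but with some probability it leaps to a random point, keeping the new spot only if it is higher than where it was.

```python
import math, random

# A made-up fitness landscape on a 100x100 grid: three peaks of
# height 5, 7, and 9 (illustrative only -- real landscapes are unknown).
PEAKS = [((20, 30), 5.0), ((80, 15), 7.0), ((70, 75), 9.0)]

def fitness(x, y):
    return max(h * math.exp(-((x - px) ** 2 + (y - py) ** 2) / 200.0)
               for (px, py), h in PEAKS)

def search(jump_prob, steps=2000):
    x, y = random.randint(0, 99), random.randint(0, 99)
    current = fitness(x, y)
    for _ in range(steps):
        if random.random() < jump_prob:
            # Occasional long random jump -- a chance to escape a local peak.
            nx, ny = random.randint(0, 99), random.randint(0, 99)
        else:
            # Adaptive walk -- one step in a random direction.
            nx = min(99, max(0, x + random.choice([-1, 0, 1])))
            ny = min(99, max(0, y + random.choice([-1, 0, 1])))
        # Keep the move only if it took us uphill; otherwise go back.
        if fitness(nx, ny) > current:
            x, y, current = nx, ny, fitness(nx, ny)
    return current

random.seed(1)
for p in (0.0, 0.05):
    runs = [search(p) for _ in range(25)]
    print(f"jump probability {p:.2f}: average height reached {sum(runs)/len(runs):.2f}")
```

The pure walk (jump probability 0) tends to end up stuck on whichever local peak it happens to start near; mixing in even a small share of long jumps usually finds its way to the 9-unit peak.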
This implies that mostly you want to work on incremental improvements but every now and then, you want to make a jump.
Let’s add one more twist. Instead of just one hiker searching, let’s imagine we have a whole army of hikers to help us explore the landscape. How would this search method do?
This gets the best of both worlds. The bulk of our resources are applied to the relatively low-risk adaptive walk, marching away through the landscape. But we have a dispersal of our bets, with some hikers fanning out farther from the center, and a few scouts taking real fliers and searching quite far from the main group. Such a strategy will inevitably lose some hikers, but will also have a greater likelihood of finding high-fitness regions of the landscape without getting stuck on local plateaus.
Hungarian physicist Leo Szilard recommended, “Do your work for six years; but in the seventh, go into solitude or among strangers so that the memory of your friends does not hinder you from being what you have become.” There’s probably nothing special about those particular timelines, but that’s effectively saying “make incremental improvements most of the time but occasionally you want to make a big jump.”
Similarly, a business should probably spend most of its resources on doing the main thing it does better, but it should spend some portion on “moonshots” that could have huge payoffs. This is often referred to as the barbell approach: spend 80% of resources on high likelihood but incremental improvements and 20% on the moonshot low probability high impact projects.
This dispersal of bets created by differentiation is critical not only for discovering new ways up in fitness but also for increasing the odds that some hikers will survive if the landscape changes. It enhances fitness both by optimizing but also not dying. The bulk of our hiker population might be buzzing around enjoying life on a nice high-fitness plateau when suddenly the landscape changes and their plateau collapses beneath the fog. The only ones left would be the scouts who were further afield and who then must rebuild the population. Likewise, some of the scouts who were at low-fitness spots on the landscape before suddenly might find themselves at a new, higher altitude as the landscape shifts.
This is the important role that genetic diversity plays in a biological system. Without spreading your bets across the landscape, you risk losing it all when the environment suddenly changes.
In the context of business and investing, this type of diversification both improves your returns and reduces your risk; it makes the path more ergodic.
Fitness landscapes are a helpful way to think about path dependence. In evolutionary systems, history matters; where you can go in the future depends on where you have been in the past.
Imagine a peak occupied by the design for a certain type of fish. However, the fish’s environment is changing; its niche is disappearing and its peak is sinking into the poison fog. From our god’s-eye vantage point, we can see that nearby is a peak for another type of fish design that doesn’t exist yet—it is high above the poison and appears stable.
Between the two designs in the landscape, however, there is no path, no land bridge that stays above the poison fog. The designs are too far apart to be bridged by random jumps. Because there are no sustainable, intermediate niches along the way between the first fish design and the second, the fish is a prisoner of its history. Its particular path led to the cul-de-sac on its particular peak, and its options for the future are limited by its past.
Path dependence means that sometimes there is no way to get to a higher peak because every path to it kills you first. Companies that wait too long to innovate get stuck in this position.
As a business example, Blockbuster earned a significant share of its revenue from late fees. When Netflix showed up and didn’t charge late fees, Blockbuster was too scared to imitate them because it would have tanked their revenue.
However, it’s likely the only thing that could have made them survive. They ultimately did get rid of late fees, but it was too little and too late. Like the fish, they were stuck at a dead-end in the fitness landscape.
Business Implications
Having covered these different mental models for thinking about economics and business, let’s now turn to what some of the more practical implications might be.
Use the Right Amount of Hierarchy
If a network has, on average, more than one connection per node, then as the number of nodes grows, the number of connections grows much faster than the number of nodes (in a densely connected network, roughly with the square of the number of nodes). This means that the number of interdependencies in the network grows faster than the network itself.
This is where problems in many organizations start to happen.
As the number of interdependencies grows, changes in one part of the network are more likely to have ripple effects on other parts of the network. As the potential for these knock-on effects grows, the probability that a positive change in one part of the network will have a negative effect somewhere else also increases.
To illustrate, let’s imagine you are the cofounder of a small start-up company with only two departments: product development and marketing. You run product development and have an idea for a new product.
So you have a meeting to discuss your plan, the marketing department agrees to it, and you are ready to go—simple enough. Your new product is a success. Your company begins to grow, and you decide you need to create a finance department and a customer service department.
However, like all start-ups, yours is a bit disorganized. None of the new groups talk to each other, but because you are one of the founders, they all talk to you. You now have another new product idea, so you have a meeting with marketing, a meeting with finance, and a meeting with customer service to ensure they all support the new offering. More complex than before, but not too bad. The total number of meetings only grew by the number of departments, which is to say, from one with your first product to three with the second.
But, you are getting tired of being the communications hub, and so you tell each of the department heads that he or she should be regularly talking directly with the other heads, sharing information, and coordinating.
Soon the emails are flying and the conference rooms are full of meetings—your initiative to improve communications is a success. You now have an idea for your third-generation product, but something bizarre has happened. You have your usual meeting with marketing, but now before you get the department’s OK, the marketing managers say they have to check the impact on their budget, which was approved by finance.
The finance folks say they can’t approve your project until they get an estimate from customer service on the cost of the additional support needed. And customer service has to check with marketing to make sure its plans are consistent with the company’s brand and pricing strategy.
All of a sudden, you have gone from three meetings to ten (if all the permutations occur) even though the size of the company is the same.
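The arithmetic behind that jump is simple and worth making explicit. With hub-and-spoke communication there is one line per department; once everyone talks to everyone, the number of pairwise relationships grows with the square of the number of groups. A quick back-of-envelope check:

```python
# Pairwise coordination links among n fully connected groups: n*(n-1)/2.
for n in range(2, 9):
    print(f"{n} groups -> {n * (n - 1) // 2} possible pairwise relationships")
```

Five fully connected parties (the founder plus the four departments) give ten pairwise relationships, which is where the ten meetings come from.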
Big is both beautiful and bad: as an organization grows, its degrees of possibility increase exponentially while its degrees of freedom collapse exponentially.
Large organizations inherently have more attractive opportunities before them than small organizations do (the large can theoretically do everything the small can do, plus more). But reaching those future opportunities involves trade-offs, and the more densely connected the organizational network, the more painful those trade-offs will be.
The politics of organizations are such that local pain in particular groups or departments is often sufficient to prevent the organization from moving to a new state, even if that state is more globally fit on the fitness landscape.
There are two ways of dealing with this:
- Reduce the density of connections
- Increase the predictability of decision making
Reducing the density of connections works for the fairly obvious reasons given above: there are fewer meetings and communication lines.
One way to reduce the density of connections is to hire slowly and to look for people with diverse skill sets. Fewer people means fewer nodes are needed to complete a task. Going from one software engineer to two usually doesn’t double productivity; it might increase it by 50%. Why? Because now both engineers have to spend 25% of their time communicating with each other in addition to actually writing code.
Another approach to reducing the density of connections is to give the units within a hierarchical structure more autonomy.
This was one of Alfred P. Sloan’s great insights when he invented the concept of autonomous divisions, enabling General Motors to grow to become, at the time, the largest company in the world. Sloan essentially created five car companies within a car company, each with its own brand and a high degree of independence. The move by many companies in the 1980s and 1990s to more autonomous business units with their own profit-and-loss accountability was to a large degree a response to the complexity that came with organizational growth.
A final approach to reducing the density of connections is the Amazon model of autonomous two-pizza teams. In that approach, you keep the number of nodes doing high-bandwidth coordination low, so things don’t get too complex within a team, and the teams interface with each other via APIs.
Increasing the predictability of decision-making works because the more regularity there is in the behavior of the nodes, the more density in connections the network can tolerate.
To give a macro example, the rule of law, the existence of property rights, a well-organized banking system, economic transparency, a lack of corruption, and other social and institutional factors seem to play a dominant role in determining national economic success.
When an entrepreneur can count on all these things, they are much more likely to start a business, and foreign corporations are much more likely to invest in that country or region. How the government will behave is predictable, and that predictability is what you can build on. (This is, in my view, the best argument for crypto.)
If there is predictability in the decision-making of an organization, then the organization can function effectively with a more densely connected network.
If decision-making is less predictable then less-dense connections, more hierarchy, and smaller spans of control are needed. Thus, for example, in an army, where regular, predictable behavior of troops is highly valued, it might be possible to get away with larger unit sizes than, say, in a creative advertising agency.
It also means that factors that make behavior less predictable, such as office politics and emotions, can limit the size an organization can grow to before being overwhelmed by complexity. One can see a recipe for creating a dysfunctional organization: just mix unpredictable behavior, a flat hierarchy, and lots of dense interconnections—the chances of getting anything done would be roughly zero.
This is an argument for why you need SOPs and cultural principles at different levels in an organization. By having shared standards and working procedures for the parts of the business that are not rapidly changing, they allow for greater creativity and complexity in other parts of the business.
Treat Strategy as a Portfolio of Experiments
Rather than thinking of strategy as a single plan built on predictions of the future, we should think of strategy as a portfolio of experiments, a population of competing Business Plans that evolves over time.
Consider the Microsoft story. Imagine it is now the year 1987, six years after Gates signed his big contract with IBM. The still-nascent PC industry has just gone through a period of explosive growth.
No one has ridden that growth harder than Microsoft. But MS-DOS is now coming to the end of its natural life cycle. Customers are beginning to look for a replacement operating system that will take better advantage of the graphics and greater power of the new generation of machines.
A change is coming, and the industry is far from certain how things will work out. Despite its success, Microsoft was still a $346 million minnow in 1987 compared to the multibillion-dollar giants hungrily eyeing its lucrative position.
IBM was developing its own powerful multitasking OS/2 system; AT&T was leading a consortium of other companies, including Sun Microsystems and Xerox, to create a user-friendly version of the widely admired Unix operating system; and Hewlett-Packard and Digital Equipment Corporation were pushing their own version of Unix.
Apple was also still a threat, consistently out-innovating the rest of the industry, and its highly graphical Macintosh was selling well.
The conventional wisdom is that Gates made an enormous “bet the company” gamble by investing in building a new operating system called Windows and migrating his base of DOS users to the new standard. This enabled Microsoft to continue its dominance of desktop operating systems and spend the next decade fighting antitrust regulators.
But, that is not actually what happened. What Gates and his team did was much more interesting—they simultaneously pursued six strategic experiments.
Rather than try to predict the future, Gates created a population of competing Business Plans within Microsoft that mirrored the evolutionary competition going on outside in the marketplace. Microsoft thus was able to evolve its way into the future. Eventually, each of the other initiatives was killed off or scaled down, and Windows was amplified to become the focus of the company’s operating-system efforts as its success became clear.
At the time, Gates was heavily criticized for this portfolio approach. Journalists cried that Microsoft had no strategy and was confused and adrift; they wondered when Gates was going to make up his mind. Likewise, it was difficult for those working inside the company to find themselves competing directly with their colleagues down the hall. There is no evidence that Bill Gates looked to evolutionary theory or was thinking about fitness landscapes when designing this strategy. Yet, regardless of how the approach was specifically developed, the effect was to create an adaptive strategy that was robust against the twists and turns of potential history.
For another example, in the late 1990s and through the 2000s Amazon was heavily criticized for “lacking focus” compared to its competitors. A BusinessWeek magazine cover story in 1999 comparing eBay and Amazon penalized Amazon for a lack of market focus. Whoops!
Use Planning to Prepare and Generate Optionality, not Predict
Planning is not about predicting the future; that’s impossible. It’s about establishing a common frame and making sure that there is a cohesive and shared mental model across all stakeholders, so that decisions based on new information will be in alignment.
Typical strategic planning processes focus on chopping down the branches of the strategy decision tree, eliminating options, and making choices and commitments. In contrast, an evolutionary approach to strategy emphasizes creating choices, keeping options open, and making the tree of possibilities as bushy as possible at any point in time.
As the Microsoft example shows, options have value. An evolving portfolio of strategic experiments gives the management team more choices, which means better odds that some of the choices will be right, or, as Beinhocker’s McKinsey colleague Lowell Bryan calls it, “loading the dice.”
The objective is to be able to make lots of small bets, and only make big bets as a part of amplifying successful experiments when uncertainties are much lower. Being forced to make all-or-nothing bets under uncertainty means that a company is boxed in—the opposite of a good strategic position.
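A crude way to see the “loading the dice” point is to compare one all-or-nothing bet with a handful of parallel experiments. The numbers below are made up for illustration (assume each experiment independently has a 20% chance of working):

```python
# Chance that at least one of k independent experiments succeeds,
# assuming each has probability p of success (illustrative numbers only).
p = 0.20
for k in (1, 3, 6, 10):
    print(f"{k:2d} experiments -> {1 - (1 - p) ** k:.0%} chance at least one works")
```

Real experiments are not independent and their payoffs differ, but the direction of the effect is the point: a population of bets dramatically raises the odds that something in the portfolio works.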
One of the reasons evolution is so effective at exploring fitness landscapes is that it mixes the length of its jumps. In biology, a combination of mutations and sexual recombination ensures a mix of short and long jumps.
When thinking of “jump distance” in the Business Plan landscape, we should consider three dimensions: risk, relatedness, and time horizon.
- Risk refers to all the uncertainties that can affect the outcome of a strategic experiment and the degree of irreversibility of the commitment.
- Relatedness refers to how close or how far the experiment is from the experience, skills, and assets the business already has.
- Time horizon refers to the expected time to payoff from the experiments.
You want to build a portfolio of bets with different risks, relatedness, and time horizons. For example, in the Microsoft case, the company’s Unix initiatives were relatively high risk, long-term, and less related to the company’s existing offering than were the other experiments. In contrast, Windows itself was probably medium on these dimensions, and arguably OS/2 was the lowest-risk experiment, as it was technically less ambitious than Windows and co-opted Microsoft’s most dangerous competitor. Thus, Microsoft had a fairly good spread of bets across the landscape.
Tinker: Unexpected Successes are the Best Source of Strategy
There are numerous well-known stories of people stumbling onto useful inventions. For example, in 1968 the 3M chemist Spencer Silver was trying to make stronger adhesives and accidentally made one that was weaker. His colleague Art Fry later used the weak adhesive to make a sticky bookmark, which eventually morphed into the now-ubiquitous Post-it Note.
Another way of saying this: unexpected success is the best source of strategy. You want to identify which elements of a business seem to be most adaptive. You don’t necessarily need to invent something new; just copy what is working in various fields and combine it in a new way, a process the strategist John Boyd called “snowmobiling.”
Embrace Redundancy and Differentiation
Most people hate redundancy—managers are always looking to make things more efficient by squeezing out slack capacity—yet by definition, diversity requires redundancy, overlap, and excess capacity.
Thus, the drive for operational efficiency, while a necessary and worthy goal, often has the unintended side effect of lowering the diversity of strategic experiments.
The Robustness-Efficiency Trade-Off (RETO) states that systems (“systems” used here in the very general sense: a project, company, country, society, etc.) must trade off between being robust and being efficient.
To start with a simple example, let’s say you have 10 hours to edit some articles and it takes you two hours to edit an article in full. You have to make a decision between:
- Editing one article five times and making sure your edits are extremely robust and precise
- Editing five articles one time and being more efficient but at the cost of some robustness and confidence in the piece
- Something in between those two points.
When resources are scarce (and time is always scarce – the universe isn’t making any more of it…), there is a tradeoff between being more efficient and being more robust.
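Here is a toy sketch of that editing trade-off. It assumes, purely for illustration, that each editing pass catches 60% of the errors remaining in an article; the point is only to show the shape of the trade-off, not to model real editing.

```python
# Toy model: 10 hours of budget, 2 hours per editing pass, and a made-up assumption
# that each pass catches 60% of the errors remaining in an article.
HOURS = 10
HOURS_PER_PASS = 2
RESIDUAL_ERROR_PER_PASS = 0.4   # fraction of remaining errors a pass fails to catch

total_passes = HOURS // HOURS_PER_PASS   # 5 passes to allocate

for articles in (1, 2, 5):
    passes_each = total_passes // articles
    p_clean = 1 - RESIDUAL_ERROR_PER_PASS ** passes_each   # chance one article ends up error-free
    print(f"{articles} article(s) x {passes_each} pass(es): "
          f"P(clean) per article = {p_clean:.2f}, expected clean articles = {articles * p_clean:.2f}")
```

Editing one article five times gets you near-certainty on a single piece; editing five articles once gets you more output, but each piece carries more residual risk.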
The tendency for many people is to optimize for efficiency in the short term, but this often harms efficiency in the long run.
My favorite example of how this goes wrong comes from German forestry in the 1700s.
In the late 18th century, German states began growing “scientific forests” so they could more easily track and harvest timber.
The goal was to forecast and plan how much timber needed to be harvested each year to provide enough firewood for citizens and enough timber for shipbuilding.
The underbrush was cleared since it was hard to quantify and it did not produce usable timber.
The number of species in the forest was reduced, often to one, because it was easier to track.
The result was mass plantings of a single species of tree, done in straight rows and grids on large tracts of land. It looked more like a tree farm than a forest. They were making it more efficient, right?
The first plantings did well because they could draw on the nutrients that had accumulated in the soil over centuries. This created an initial surge in the amount of timber available to German industry.
The surge increased the central planners’ confidence in the plan. Since early results were positive, it seemed to be working, and so they built more scientific forests.
Narrator: It wasn’t actually working…
The clearing of the underbrush reduced the diversity of the insect, mammal, and bird populations that were essential to the soil-building process. And since there was only one species of tree in each scientific forest, pests and diseases could move easily from tree to tree and infect the entire stand. These issues came together to cause massive forest die-offs across the country over a short period of time, setting back the German timber industry by decades, a devastating blow.
The long-term result was less total timber available to Germany, the exact opposite of what the forestry central planners had intended. And yet what they did seems, on the surface, entirely reasonable, doesn’t it?
They were trying to reduce the volatility and variance in timber production so that the German industry could rely on a consistent and predictable amount of timber each year. However, they failed to realize how costly and hard to reverse failure would be.
The important takeaway is that attempts to increase efficiency in the short run often lead to less efficiency in the long run. It’s often better to run a little fat rather than too lean.
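A toy simulation makes the trade-off concrete. This is my own illustration with made-up parameters, not data from the historical episode: the monoculture harvests more per year, but a pest outbreak kills the whole stand and costs decades of regrowth, while the diverse forest loses only the susceptible species and recovers quickly.

```python
import random

random.seed(1)
YEARS, P_OUTBREAK, RUNS = 200, 0.02, 2000

def avg_timber(annual_yield, outbreak_downtime):
    """Average cumulative harvest across many simulated 200-year runs."""
    grand_total = 0.0
    for _ in range(RUNS):
        total, downtime = 0.0, 0
        for _ in range(YEARS):
            if downtime > 0:          # forest still regrowing after an outbreak
                downtime -= 1
                continue
            total += annual_yield
            if random.random() < P_OUTBREAK:
                downtime = outbreak_downtime
        grand_total += total
    return grand_total / RUNS

# Monoculture: 20% higher yield each year, but an outbreak kills the whole stand
# and costs 40 years of regrowth.
print("monoculture:", round(avg_timber(annual_yield=1.2, outbreak_downtime=40)))
# Diverse forest: lower yield, but an outbreak hits only one species (8 lost years).
print("diverse:    ", round(avg_timber(annual_yield=1.0, outbreak_downtime=8)))
```

Over a long enough horizon, the “fatter” diverse forest out-produces the efficient monoculture, even though the monoculture looks better in any individual good year.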
Create Selection Pressure
Jack Welch’s aspiration for GE in the 1980s was: “Become No. 1 or No. 2 in every market we serve and revolutionize this company to have the speed and agility of a small enterprise.”
Welch’s vision is clear, challenging, emotive, and memorable. But words alone do not ensure that an aspiration is carried around in people’s heads. Welch ensured that the idea of No. 1 or No. 2 was wired into compensation plans, performance evaluations, and business unit reviews, and he talked about it obsessively.
In short, he built a selection environment around the aspiration, thus shaping the thinking of thousands of people and the hundreds of thousands of decisions that determined which Business Plans would live and which would die. In the 1980s, a Plan for a new GE product that would boost a No. 3 business past its competitors into the No. 2 slot would get a lot of attention and resources. Likewise, a Plan to invest in a business stuck at No. 5 would struggle to get resources, even if it was very profitable.
One must strike an important balance in formulating an aspiration. It needs to be specific enough to provide a selection pressure, but not so specific as to require the ability to predict the future. The aspiration to be “a company that cares about its people,” for example, is too bland and generic; no one really knows what to do with statements like that. But the aspiration to be “the leading whalebone corset manufacturer in northern England” makes, perhaps, unwarranted assumptions about the future opportunities for whalebone corsets in northern England. Welch’s “No. 1 or No. 2” aspiration strikes the proper balance, as does Henry Ford’s “to build a car for the multitudes.” Ford’s vision was explicit enough to provide selection pressures.
See also: Company Culture & How It Can Be Worth $150 Million
Finance and Investing Implications
In 1963, Benoit Mandelbrot published a paper titled “The Variation of Certain Speculative Prices.” In four strokes, he demolished the random-walk hypothesis:
- The distribution of the data had much fatter tails than a bell curve; in other words, there were more extreme price swings than a random walk would predict;
- Those extreme events were in fact quite extreme; a large proportion of the total variance was explained by just a few violent price movements;
- There appeared to be some clustering of price movements in time, in other words, a pattern of punctuated equilibrium; and
- The statistics describing the data were not stationary as the random walk predicted, but changed over time.
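Mandelbrot’s first point is easy to see with simulated data. The sketch below is my own illustration, using a crude two-regime mixture as a stand-in for fat-tailed returns; it counts how often moves larger than four standard deviations show up under a bell curve versus a fat-tailed distribution.

```python
import random
import statistics

random.seed(0)
N = 100_000

# Gaussian returns vs. a crude fat-tailed stand-in: usually small moves, but 2% of
# the time a draw from a distribution with 8x the standard deviation.
gaussian = [random.gauss(0, 1) for _ in range(N)]
fat_tailed = [random.gauss(0, 1) if random.random() < 0.98 else random.gauss(0, 8)
              for _ in range(N)]

def share_beyond(xs, k=4):
    """Fraction of observations more than k standard deviations from zero."""
    sd = statistics.pstdev(xs)
    return sum(abs(x) > k * sd for x in xs) / len(xs)

print(f"gaussian:   {share_beyond(gaussian):.5f}")    # roughly 0.00006
print(f"fat-tailed: {share_beyond(fat_tailed):.5f}")  # orders of magnitude more
```

Under the bell curve, four-sigma moves are once-in-a-career events; under even a mildly fat-tailed distribution, they are routine, and those few violent moves account for most of the total variance.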
This work was built on by Doyne Farmer and his colleagues at the Prediction Company who found that they could detect statistically significant signals in market data. These “signals” consisted of complex patterns and relationships between various factors that predicted future share prices (e.g., interest rates, trading volumes—as the signals were proprietary, they did not disclose exactly what they consisted of).
Traditional theory says that any such signals, once discovered, should be arbitraged away immediately. But Farmer and his team found that the signals would persist, often for days or months, and sometimes for as long as a decade. Often they would see the signals weaken over time as traders (including themselves) discovered and exploited them. But they also found that the complex, nonlinear dynamics of the markets meant that new signals were constantly being created even as old signals faded due to arbitrage.
Their alternative to the perfectly rational idealization of Traditional finance was that investors are “inductively rational.” In Farmer’s model, agents engaged in an evolutionary search for profitable strategies. Which strategies were successful at a particular time depended on the strategies the other agents in the market were using.
A dynamic was created whereby particular combinations of strategies would create patterns or structures in the market, which in turn would change the behavior of other agents as they sought to exploit those patterns. That created further patterns, causing still other agents to react, and so on. The results of the model did a good job of replicating the key statistical characteristics of real-world markets, such as clustered volatility (i.e., punctuated equilibrium).
Returning to the complexity idea, markets are ecosystems, evolving based on constantly shifting pressures.
Initially, Farmer created a market with just one fundamental agent and the market maker. Then he made it simpler still by assuming that the fundamental agent knew exactly what the true, perfectly rational value of the stock was at all times. In a bow to Traditional finance, Farmer also assumed that the true value followed a random walk. The fundamental agent was then given a simple trading rule: if the stock’s price is less than fundamental value, buy; if it is higher, sell.
The expectation was that this simple setup, with a perfectly rational agent and perfect information, would replicate the equilibrium conditions of Traditional finance, and as such, the market price would track the true value and follow a random walk.
It didn’t.
Price and value did roughly track each other, but not perfectly. Recall the example of a shower in which there is a delay between turning the knob and the response in water temperature—at first, you oscillate around the desired temperature, but then you eventually figure it out and the oscillations get smaller and smaller until you hit the target temperature.
Now imagine that instead of there being a constant desired temperature, the desired temperature fluctuated randomly.
In this case, the oscillations would never quite die down. Moreover, when the desired temperature took a big jump up, or a big dive down, you would be lagging even further behind and chasing it. Eventually, you would catch up, but you would overshoot it slightly. Thus, the actual temperature of your shower would roughly track the random movements of the desired temperature over time, but not match it exactly.
In traditional models, time and ergodicity are not fully factored in: in the real world, feedback loops have delays, and adjustment does not happen instantaneously.
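A heavily simplified sketch of the setup described above (my own toy version, not Farmer’s actual model) shows where the lag comes from: the true value takes random-walk steps of arbitrary size, while the market maker only moves the price a fixed amount per period in the direction of net demand, so the price is perpetually chasing value.

```python
import random

random.seed(0)
value, price = 100.0, 100.0
IMPACT = 0.3   # how far the market maker moves the price per unit of net demand

for t in range(10):
    value += random.gauss(0, 1)                  # true value takes a random-walk step
    demand = 1.0 if price < value else -1.0      # fundamental trader: buy below value, sell above
    price += IMPACT * demand                     # market maker nudges the price toward demand
    print(f"t={t:2d}  value={value:7.2f}  price={price:7.2f}  gap={value - price:+.2f}")
```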
Farmer then created a group of agents, seasonal traders, who simply bought and sold the stock in an alternating pattern. Their actions created a simple, regular, oscillating pattern in the price of the stock.
Farmer next created a group of technical traders. He gave each technical trader a randomly generated strategy based on whether the price had gone up or down in past periods (rules of the form IF [some recent price pattern] THEN [buy or sell]). Each agent was given a bit of money to start with, and successful trading agents kept their profits and reinvested them.
At first, things went according to Traditional theory. Initially, the technical traders didn’t have much money and so they didn’t affect the price very much. But they quickly picked up on the oscillating pattern, started to arbitrage it, and, as they did, started to make a lot of money. With success, they started making bigger trades, which in turn began to affect the price. After some time, Farmer began to see the oscillations dampen as the traders arbitraged the inefficient pattern out of the market and brought it closer to the fundamental value.
After five thousand periods had passed, the oscillations were virtually gone, and the market looked as if it were rapidly approaching perfect efficiency. But then, volatility suddenly exploded, and prices began to move chaotically.
What had happened was this: as the technical traders became richer, their trades became larger, and the large trades started introducing their own movements into the price. These movements created opportunities for other technical traders to try to arbitrage the patterns created by their fellow technical traders—when the technical traders had finished lunching on the seasonal traders, they began feeding off each other!
As a strategy grows, its own trades start to create new opportunities for others. Strategies and asset classes that did well over the past decade tend to underperform in the next. The market is constantly evolving.
Farmer’s model produced just the kind of behavior he witnessed in his real-world trading experience. As the complex dynamics of the market unfolded, patterns came and went over time, and the market moved in periods of quiet and storm.
Traditional theory assumes that infinite amounts of capital are instantaneously available for abnormally profitable strategies. But in the real world, successful traders need to accumulate capital over time. For example, if a trader started with $1 million and earned a 25 percent return (which would be very good) he or she would take thirty years to increase his or her funds under management to $1 billion.
If, in addition, the trader doubled his or her capital every year from outside sources, it would still take ten years. It is possible for successful traders to raise large funds from outside investors much more quickly, but Farmer’s point is that the timescale to establish a track record and then to raise the funds is measured in years. But markets do not stand still during periods of years, and profitable trading strategies can thus come and go on timescales faster than the market can arbitrage them to efficiency.
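The arithmetic behind those timescales (growing $1 million into $1 billion is a 1,000x increase):

```python
from math import log

# 1,000x growth at different compounding rates.
target_multiple = 1_000
print(f"at 25% per year:   {log(target_multiple) / log(1.25):.0f} years")  # ~31 years (about thirty)
print(f"doubling per year: {log(target_multiple) / log(2.00):.0f} years")  # ~10 years
```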
This means that some inefficiencies persist for years or decades because of the time the feedback loop takes to exploit them.
This is a fairly strong argument for the importance of diversification. In another interview, Farmer explained it this way.
We found several interesting things about our model ecology that we studied. One is that when we reached the equilibrium where the returns of all the strategies were the same, we actually saw that we had mutualist interactions between all the strategies. That is, if their own wealth went up, the returns would go down. But if anybody else’s wealth went up, the returns would go up. That kind of surprised us.
But then we realized, well, it’s actually maybe what you should expect: an equilibrium is an efficient place, where the markets are efficient in some sense, and at the efficient place, all the returns of the strategies are the same. And if you deviate from that, then one of the strategies starts to have an advantage again.
This is effectively an argument for diversification and the rebalancing premium. If the wealth of a strategy goes up then returns go down so you rebalance into something else, a principle well illustrated in the Farmer’s Fable.
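Here is a toy version of the rebalancing premium in the spirit of the Farmer’s Fable, with made-up numbers: two perfectly anti-correlated assets that each shrink on their own, but a 50/50 portfolio rebalanced every period grows steadily.

```python
# Two perfectly anti-correlated assets: a good year multiplies wealth by 1.5, a bad
# year by 0.6. Each asset alone shrinks (geometric mean ~0.95 per year), but a 50/50
# portfolio rebalanced every year returns 0.5*1.5 + 0.5*0.6 = 1.05, i.e. +5% a year.
GOOD, BAD = 1.5, 0.6
YEARS = 50

asset_a = asset_b = rebalanced = 1.0
for year in range(YEARS):
    a_ret, b_ret = (GOOD, BAD) if year % 2 == 0 else (BAD, GOOD)
    asset_a *= a_ret
    asset_b *= b_ret
    rebalanced *= 0.5 * a_ret + 0.5 * b_ret   # rebalance back to 50/50 each year

print(f"asset A alone:    {asset_a:.3f}")
print(f"asset B alone:    {asset_b:.3f}")
print(f"rebalanced 50/50: {rebalanced:.3f}")
```

Neither asset grows on its own, but the rebalanced portfolio compounds because rebalancing systematically sells whichever strategy just did well and buys the one that just did poorly.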
Conclusion
Complexity Economics does not have all the answers to the puzzle of economic patterns, but it provides us with new tools and sensibilities to begin to understand the behaviors we observe in the real world.
Footnotes
- See also ergodicity and sequencing risk.
- I tend to agree with Gerd Gigerenzer’s critique that a lot of psychological biases are merely a failure of theory to consider ergodicity whereas humans intuitively understand it, but there are certainly still some that exist for which this explanation does not account.
- Per Bak’s Sandpile Model is a helpful tool for thinking about this sort of emergent behavior.
- Before I knew what a landscape was, this made a certain intuitive sense to me and was a major part of my book The End of Jobs.