Are your friends or colleagues pestering you to read Antifragile?
Did you just finish and aren’t quite sure if you wrapped your head around it?
As part of my work at Mutiny Funds, a deep area of interest for me is how to make systems more antifragile, starting with investors’ portfolios and going from there.
To that end, these are the notes that I put together for myself after reading Antifragile. The first section below is a (relatively) quick reference for the key concepts Taleb elaborates in the book (a best-of, if you will).
The second section contains other anecdotes and points I found particularly interesting and thought-provoking.
Most of these notes are pulled directly from the book. I’ve added a few explanations and clarifications in italics and also added some ways that I’m trying to implement the concepts Taleb presents at the bottom of this write-up.
If you’d prefer these notes in a PDF that you can read later or send to friends or colleagues, enter your email below and I’ll send you a PDF.
Key Concepts
The Triad – Fragile, Robust, Antifragile –
This is Taleb’s central concept in both Antifragile and The Black Swan. All systems can be categorized as fragile, robust, or antifragile. Antifragile systems are the ones he advocates we move toward: systems that improve or get stronger when unexpected, volatile events happen (like the airline industry, see below).
Fragile things are exposed to volatility, robust things resist it, antifragile things benefit from it.
The fragile is the package that would be at best unharmed, the robust would be at best and at worst unharmed. And the opposite of fragile is therefore what is at worst unharmed.
Fragility implies more to lose than to gain, equals more downside than upside, equals (unfavorable) asymmetry. Antifragility implies more to gain than to lose, equals more upside than downside, equals (favorable) asymmetry. You are antifragile for a source of volatility if potential gains exceed potential losses (and vice versa). Further, if you have more upside than downside, then you may be harmed by a lack of volatility and stressors.
Good systems such as airlines are set up to have small errors, independent from each other— or, in effect, negatively correlated to each other, since mistakes lower the odds of future mistakes. This is one way to see how one environment can be antifragile (aviation) and the other fragile (modern economic life with “earth is flat” style interconnectedness). If every plane crash makes the next one less likely, every bank crash makes the next one more likely. We need to eliminate the second type of error— the one that produces contagion— in our construction of an ideal socioeconomic system.
The first step toward antifragility consists in first decreasing downside, rather than increasing upside; that is, by lowering exposure to negative Black Swans and letting natural antifragility work by itself.
The difference between a thousand pebbles and a large stone of equivalent weight is a potent illustration of how fragility stems from nonlinear effects. Nonlinear? Once again, “nonlinear” means that the response is not straightforward and not a straight line, so if you double, say, the dose, you get a lot more or a lot less than double the effect— if I throw at someone’s head a ten-pound stone, it will cause more than twice the harm of a five-pound stone, more than five times the harm of a two-pound stone, etc. It is simple: if you draw a line on a graph, with harm on the vertical axis and the size of the stone on the horizontal axis, it will be curved, not a straight line. That is a refinement of asymmetry. Now the very simple point, in fact, that allows for a detection of fragility: For the fragile, shocks bring higher harm as their intensity increases (up to a certain level).
For the fragile, the cumulative effect of small shocks is smaller than the single effect of an equivalent single large shock. This leaves me with the principle that the fragile is what is hurt a lot more by extreme events than by a succession of intermediate ones. Finito— and there is no other way to be fragile. Now let us flip the argument and consider the antifragile. Antifragility, too, is grounded in nonlinearities, nonlinear responses. For the antifragile, shocks bring more benefits (equivalently, less harm) as their intensity increases (up to a point).
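This asymmetry is easy to make concrete with a toy model (my own sketch, not from the book). Assume a hypothetical convex harm function, harm(x) = x², and compare a thousand small shocks against one large shock of the same total weight:

```python
def harm(shock_size):
    """Hypothetical convex harm function: damage grows faster than size."""
    return shock_size ** 2

# A thousand one-pound pebbles vs. a single thousand-pound stone.
many_small_shocks = sum(harm(1) for _ in range(1000))  # 1,000 units of harm
one_large_shock = harm(1000)                           # 1,000,000 units

print(one_large_shock // many_small_shocks)  # the big stone is 1000x worse
```

The quadratic is arbitrary; any convex response produces the same ordering, which is exactly the fragility signature described above.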
Another Example – Weightlifting – Lifting a heavier weight forces your body to compensate so that it can lift even heavier the next time.
The Teleological Fallacy
Our minds are in the business of turning history into something smooth and linear, which makes us underestimate randomness. But when we see it, we fear it and overreact. Because of this fear and thirst for order, some human systems, by disrupting the invisible or not so visible logic of things, tend to be exposed to harm from Black Swans and almost never get any benefit. You get pseudo-order when you seek order; you only get a measure of order and control when you embrace randomness.
Experience is devoid of the cherry-picking that we find in studies, particularly those called “observational,” ones in which the researcher finds past patterns, and, thanks to the sheer amount of data, can therefore fall into the trap of an invented narrative.
Antifragility loves randomness and uncertainty. It’s better to create an antifragile structure and learn from trial and error than to try to be right all the time in a fragile ecosystem. Prediction is impossible.
Mediocristan vs. Extremistan – Knives vs. Atomic Bombs
Taleb explains this more in The Black Swan. Mediocristan is the world we evolved in, where volatility was much lower than in the modern world of Extremistan.
Fragilistas – People who encourage you to engage in policies and actions, all artificial, in which the benefits are small and visible, and the side effects potentially severe and invisible.
Naive Intervention and Iatrogenics – Iatrogenics is Greek for “caused by the healer.” We have a predisposition to do something instead of nothing, even when nothing may be the better option. We create fragile systems in our attempts to reduce volatility in the short term.
Example – Treating patients whose blood pressure is only slightly outside the norm with medication.
We should do nothing to those experiencing mild volatility but be wildly experimental with those experiencing extreme volatility.
It’s much easier to sell “Look what I did for you” than “Look what I avoided for you.”
The first principle of iatrogenics is as follows: we do not need evidence of harm to claim that a drug or an unnatural via positiva procedure is dangerous.
Iatrogenics, being a cost-benefit situation, usually results from the treacherous condition in which the benefits are small, and visible— and the costs very large, delayed, and hidden. And of course, the potential costs are much worse than the cumulative gains.
Another principle of iatrogenics: it is not linear. We should not take risks with near-healthy people; but we should take a lot, a lot more risks with those deemed in danger.
The Barbell Strategy
A dual attitude of playing it safe in some areas (robust to negative Black Swans) and taking a lot of small risks in others (open to positive Black Swans), hence achieving antifragility. That is extreme risk aversion on one side and extreme risk loving on the other, rather than just the “medium” or the beastly “moderate” risk attitude that in fact is a sucker game.
Antifragility is the combination aggressiveness plus paranoia— clip your downside, protect yourself from extreme harm, and let the upside, the positive Black Swans, take care of itself. We saw Seneca’s asymmetry: more upside than downside can come simply from the reduction of extreme downside (emotional harm) rather than improving things in the middle.
An example is Mark Cuban’s investment strategy. He keeps most of his assets in cash (robust, not going to crash with the market), which allows him to move quickly when he sees large opportunities (antifragile).
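The barbell’s asymmetry can be simulated with a toy portfolio. A minimal sketch (the 90/10 split and the 20x payoff are made-up illustrative numbers, not from the book or from Cuban’s actual allocation):

```python
import random

def barbell_outcome(wealth, rng, risky_fraction=0.10):
    """One period of a toy barbell: the cash side is untouched, while the
    small risky side can at worst go to zero and at best pay 20x."""
    safe = wealth * (1 - risky_fraction)
    risky = wealth * risky_fraction
    multiplier = 20.0 if rng.random() < 0.10 else 0.0  # rare, large win
    return safe + risky * multiplier

rng = random.Random(42)  # fixed seed for reproducibility
outcomes = [barbell_outcome(100.0, rng) for _ in range(10_000)]

# The downside is capped by construction: you can never lose more than
# the risky sliver, so no outcome falls below 90.
print(min(outcomes), max(outcomes))
```

The exact numbers don’t matter; the point is the shape of the payoff: a known, bounded worst case and an open upside, which is the robust/antifragile combination described above.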
Options, any options, by allowing you more upside than downside, are vectors of antifragility.
If you “have optionality,” you don’t have much need for what is commonly called intelligence, knowledge, insight, skills, and these complicated things that take place in our brain cells. For you don’t have to be right that often. All you need is the wisdom to not do unintelligent things to hurt yourself (some acts of omission) and recognize favorable outcomes when they occur. (The key is that your assessment doesn’t need to be made beforehand, only after the outcome.)
Option = asymmetry + rationality
The mechanism of optionlike trial and error (the fail-fast model), a.k.a. convex tinkering. Low-cost mistakes, with known maximum losses, and large potential payoff (unbounded). A central feature of positive Black Swans.
Central to optionality is Taleb’s assertion that prediction in the modern world is impossible. Instead of trying to predict what is going to happen, position yourself in such a way that you have optionality. That way whatever happens, all you have to do is evaluate it once you have all the information and make a rational decision.
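This “evaluate after the fact” logic fits in one line of code. A minimal sketch of my own: pay a small known cost up front, then decide only after seeing the outcome.

```python
def option_payoff(outcome_value, cost):
    """Optionality sketch: exercise when favorable, walk away when not.
    Downside is capped at the cost; upside is open-ended."""
    return max(outcome_value, 0) - cost

print(option_payoff(-50, 1))   # -1: bad outcome, you lose only the cost
print(option_payoff(100, 1))   # 99: good outcome, you keep the upside
```

Notice the decision (`max`) happens after `outcome_value` is known, which is why no prediction is required, only the ability to recognize a favorable outcome when it occurs.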
Touristification – an aspect of modern life that treats humans as washing machines, with simplified mechanical responses— and a detailed user’s manual. It is the systematic removal of uncertainty and randomness from things, trying to make matters highly predictable in their smallest details. All that for the sake of comfort, convenience, and efficiency. The opposite of the flâneur.
Ex. The Soccer Mom Problem. She attempts to remove all randomness and uncertainty from her kids’ lives and protect them. In doing so, she prevents them from developing the ability to bounce back and adapt to future difficulties.
The Rational Flaneur
The rational flâneur is someone who, unlike a tourist, makes a decision at every step to revise his schedule, so he can imbibe things based on new information, what Nero was trying to practice in his travels, often guided by his sense of smell. The flâneur is not a prisoner of a plan. Tourism, actual or figurative, is imbued with the teleological illusion; it assumes completeness of vision and gets one locked into a hard-to-revise program, while the flâneur continuously— and, what is crucial, rationally— modifies his targets as he acquires information.
The opportunism of the flâneur is great in life and business— but not in personal life and matters that involve others. The opposite of opportunism in human relations is loyalty, a noble sentiment— but one that needs to be invested in the right places, that is, in human relations and moral commitments. The error of thinking you know exactly where you are going and assuming that you know today what your preferences will be tomorrow has an associated one. It is the illusion of thinking that others, too, know where they are going, and that they would tell you what they want if you just asked them. Never ask people what they want, or where they want to go, or where they think they should go, or, worse, what they think they will desire tomorrow. The strength of the computer entrepreneur Steve Jobs was precisely in distrusting market research and focus groups— those based on asking people what they want— and following his own imagination. His modus was that people don’t know what they want until you provide them with it. This ability to switch from a course of action is an option to change. Options— and optionality, the character of the option— are the topic of Book IV. Optionality will take us many places, but at the core, an option is what makes you antifragile and allows you to benefit from the positive side of uncertainty, without a corresponding serious harm from the negative side.
The Soviet-Harvard illusion –
Real knowledge comes from a repeating cycle: Random Tinkering (antifragile) → Heuristics (technology) → Practice and Apprenticeship → back to Random Tinkering.
The Soviet-Harvard illusion is the belief that academic knowledge is superior and that we must understand the mechanism behind something in order to trust its effectiveness or phenomenology.
An example would be a lot of the benefits associated with traditional Eastern practices like meditation or Yoga. We don’t understand the mechanism by which they benefit us, but it’s clear that they do.
The Green Lumber Fallacy – the situation in which one mistakes a source of necessary knowledge— the greenness of lumber— for another, less visible from the outside, less tractable, less narratable.
People with too much smoke and complicated tricks and methods in their brains start missing elementary, very elementary things. Persons in the real world can’t afford to miss these things; otherwise they crash the plane. Unlike researchers, they were selected for survival, not complications. So I saw the less is more in action: the more studies, the less obvious elementary but fundamental things become; activity, on the other hand, strips things to their simplest possible model.
Example – The guy trading green lumber most successfully at a firm in London thought it was lumber painted green. Soviet-Harvard knowledge doesn’t translate to success in business and life. He learned how to trade successfully through the process of Random Tinkering (antifragile) → Heuristics (technology) → Practice and Apprenticeship, without ever actually understanding what green lumber was.
Convexity – if you have favorable asymmetries, or positive convexity, options being a special case, then in the long run you will do reasonably well, outperforming the average in the presence of uncertainty. The more uncertainty, the more role for optionality to kick in, and the more you will outperform. This property is very central to life.
Concavity – the opposite of convexity. These are negative asymmetries that expose you to exponentially more harm as randomness increases.
The technique, a simple heuristic called the fragility (and antifragility) detection heuristic, works as follows. Let’s say you want to check whether a town is overoptimized. Say you measure that when traffic increases by ten thousand cars, travel time grows by ten minutes. But if traffic increases by ten thousand more cars, travel time now extends by an extra thirty minutes. Such acceleration of travel time shows that traffic is fragile and you have too many cars, and need to reduce traffic until the acceleration becomes mild (acceleration, I repeat, is acute concavity, or negative convexity effect). Likewise, government deficits are particularly concave to changes in economic conditions. Every additional deviation in, say, the unemployment rate— particularly when the government has debt— makes deficits incrementally worse. And financial leverage for a company has the same effect: you need to borrow more and more to get the same effect. Just as in a Ponzi scheme. The same with operational leverage on the part of a fragile company. Should sales increase 10 percent, profits would increase less than they would decrease should sales drop 10 percent.
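The heuristic boils down to checking whether equal increments of stress produce accelerating harm. A minimal sketch (my own illustration; the travel-time numbers are hypothetical, chosen to match the traffic example above):

```python
def is_fragile(response_at, load, step):
    """Fragility detection heuristic: if the second equal increment of
    stress hurts more than the first (accelerating harm), the response
    is concave to that stressor, i.e. fragile."""
    first = response_at(load + step) - response_at(load)
    second = response_at(load + 2 * step) - response_at(load + step)
    return second > first

# Travel time in minutes at each traffic level (hypothetical numbers):
# +10,000 cars costs 10 extra minutes; the next +10,000 costs 30.
travel_time = {20_000: 30, 30_000: 40, 40_000: 70}

print(is_fragile(travel_time.__getitem__, 20_000, 10_000))  # True
```

The same three-point probe works on any measurable response, deficits to unemployment, profits to sales, without needing a model of the underlying system.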
Jensen’s inequality
For a convex function f, Jensen’s inequality says that the expected value of f(X) is at least f of the expected value of X. In plain terms: with a convex (favorably asymmetric) payoff, the average of the outcomes beats the outcome of the average, so volatility works in your favor; the more uncertainty, the more the optionality kicks in and the more you outperform.
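The inequality is easy to verify numerically. A sketch of my own, using the convex payoff f(x) = x² and simulated zero-mean noise:

```python
import random

rng = random.Random(0)
samples = [rng.gauss(0, 1) for _ in range(100_000)]  # zero-mean noise

def f(x):
    return x ** 2  # a convex payoff

mean_of_f = sum(f(x) for x in samples) / len(samples)  # E[f(X)], near 1.0
f_of_mean = f(sum(samples) / len(samples))             # f(E[X]), near 0.0

# Jensen's inequality: E[f(X)] >= f(E[X]) for convex f, so the volatile
# path beats the average path.
print(mean_of_f > f_of_mean)  # True
```

Flip the sign of the payoff (make it concave) and the inequality reverses, which is exactly the fragile case: volatility then costs you on average.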
Neomania
An obsession with the new and a discounting of the old, when it is the old that is more robust. Time kills all things equally, so a technology that has survived a long time is likely to survive longer still.
With so many technologically driven and modernistic items— skis, cars, computers, computer programs— it seems that we notice differences between versions rather than commonalities. We even rapidly tire of what we have, continuously searching for versions 2.0 and similar iterations. And after that, another “improved” reincarnation. These impulses to buy new things that will eventually lose their novelty, particularly when compared to newer things, are called treadmill effects. As the reader can see, they arise from the same generator of biases as the one about the salience of variations mentioned in the section before: we notice differences and become dissatisfied with some items and some classes of goods. This treadmill effect has been investigated by Danny Kahneman and his peers when they studied the psychology of what they call hedonic states. People acquire a new item, feel more satisfied after an initial boost, then rapidly revert to their baseline of well-being. So, when you “upgrade,” you feel a boost of satisfaction with changes in technology. But then you get used to it and start hunting for the new new thing.
We are obsessed with the newest things when what provides the most utility to us is often what is older. Taleb gives the example of cooking pots and pans discovered in Pompeii kitchens being nearly identical to the ones we use today.
Wonderfully simple heuristic: charlatans are recognizable in that they will give you positive advice, and only positive advice, exploiting our gullibility and sucker-proneness for recipes that hit you in a flash as just obvious, then evaporate later as you forget them. Just look at the “how to” books with, in their title, “Ten Steps for—” (fill in: enrichment, weight loss, making friends, innovation, getting elected, building muscles, finding a husband, running an orphanage, etc.). Yet in practice it is the negative that’s used by the pros, those selected by evolution: chess grandmasters usually win by not losing; people become rich by not going bust (particularly when others do); religions are mostly about interdicts; the learning of life is about what to avoid. You reduce most of your personal risks of accident thanks to a small number of measures.
The greatest— and most robust— contribution to knowledge consists in removing what we think is wrong— subtractive epistemology. In life, antifragility is reached by not being a sucker.
We know a lot more of what is wrong than what is right, or, phrased according to the fragile/ robust classification, negative knowledge (what is wrong, what does not work) is more robust to error than positive knowledge (what is right, what works). So knowledge grows by subtraction much more than by addition— given that what we know today might turn out to be wrong but what we know to be wrong cannot turn out to be right, at least not easily. If I spot a black swan (not capitalized), I can be quite certain that the statement “all swans are white” is wrong. But even if I have never seen a black swan, I can never hold such a statement to be true. Rephrasing it again: since one small observation can disprove a statement, while millions can hardly confirm it, disconfirmation is more rigorous than confirmation.
Another application of via negativa: spend less, live longer is a subtractive strategy. We saw that iatrogenics comes from the intervention bias, via positiva, the propensity to want to do something, causing all the problems we’ve discussed. But let’s do some via negativa here: removing things can be quite a potent (and, empirically, a more rigorous) action.
If true wealth consists in worriless sleeping, clear conscience, reciprocal gratitude, absence of envy, good appetite, muscle strength, physical energy, frequent laughs, no meals alone, no gym class, some physical labor (or hobby), good bowel movements, no meeting rooms, and periodic surprises, then it is largely subtractive (elimination of iatrogenics).
When you see a young and an old human, you can be confident that the younger will survive the elder. With something nonperishable, say a technology, that is not the case. We have two possibilities: either both are expected to have the same additional life expectancy (the case in which the probability distribution is called exponential), or the old is expected to have a longer expectancy than the young, in proportion to their relative age. In that situation, if the old is eighty and the young is ten, the elder is expected to live eight times as long as the younger one.
For the perishable, every additional day in its life translates into a shorter additional life expectancy. For the nonperishable, every additional day may imply a longer life expectancy. So the longer a technology lives, the longer it can be expected to live.
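The perishable/nonperishable distinction can be written as two simple expectancy rules. A sketch of my own (the proportionality constant of 1 in the Lindy rule follows the 80-versus-10 example above):

```python
def remaining_life_perishable(age, max_lifespan=100):
    """Perishables (humans): every year lived shortens what remains."""
    return max(max_lifespan - age, 0)

def remaining_life_lindy(age):
    """Nonperishables (technologies, books): expected remaining life is
    proportional to current age, so the old outlast the young pro rata."""
    return age

# The 80-year-old technology is expected to last 8x the 10-year-old one,
# whereas an 80-year-old human has far less runway than a 10-year-old.
print(remaining_life_lindy(80) / remaining_life_lindy(10))  # 8.0
print(remaining_life_perishable(80) < remaining_life_perishable(10))  # True
```

These are expectations, not guarantees for any individual technology; the rule describes the statistical behavior of the class.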
People have difficulties grasping probabilistic notions, particularly when they have spent too much time on the Internet (not that they need the Internet to be confused; we are naturally probability-challenged). The first mistake is usually in the form of the presentation of the counterexample of a technology that we currently see as inefficient and dying, like, say, telephone land lines, print newspapers, and cabinets containing paper receipts for tax purposes. These arguments come with anger, as many neomaniacs get offended by my point. But my argument is not about any particular technology; it is about the statistical life expectancy of technologies as a class.
The second mistake is to believe that one would be acting “young” by adopting a “young” technology, revealing both a logical error and mental bias. It leads to the inversion of the power of generational contributions, producing the illusion of the contribution of the new generations over the old— statistically, the “young” do almost nothing. This mistake has been made by many people, but most recently I saw an angry “futuristic” consultant who accuses people who don’t jump into technology of “thinking old” (he is actually older than I am and, like most technomaniacs I know, looks sickly and pear-shaped and has an undefined transition between his jaw and his neck). I didn’t understand why one would be acting particularly “old” by loving things historical. So by loving the classics (“older”) I would be acting “older” than if I were interested in the “younger” medieval themes.
Example for Choosing Books: The best filtering heuristic, therefore, consists in taking into account the age of books and scientific papers. Books that are one year old are usually not worth reading (a very low probability of having the qualities for “surviving”), no matter the hype and how “earth-shattering” they may seem to be. So I follow the Lindy effect as a guide in selecting what to read: books that have been around for ten years will be around for ten more; books that have been around for two millennia should be around for quite a bit of time, and so forth.
Empedocles’ Tile
Empedocles, the pre-Socratic philosopher, who was asked why a dog prefers to always sleep on the same tile. His answer was that there had to be some likeness between the dog and that tile. (Actually the story might be even twice as apocryphal since we don’t know if Magna Moralia was actually written by Aristotle himself.) Consider the match between the dog and the tile. A natural, biological, explainable or nonexplainable match, confirmed by long series of recurrent frequentation— in place of rationalism, just consider the history of it. Which brings me to the conclusion of our exercise in prophecy. I surmise that those human technologies such as writing and reading that have survived are like the tile to the dog, a match between natural friends, because they correspond to something deep in our nature.
We can’t explain these things, but they are demonstrably true phenomenologically and so there is some fundamental truth present there despite our inability to understand it.
If something that makes no sense to you (say, religion— if you are an atheist— or some age-old habit or practice called irrational); if that something has been around for a very, very long time, then, irrational or not, you can expect it to stick around much longer, and outlive those who call for its demise.
If there is something in nature you don’t understand, odds are it makes sense in a deeper way that is beyond your understanding. So there is a logic to natural things that is much superior to our own. Just as there is a dichotomy in law: innocent until proven guilty as opposed to guilty until proven innocent, let me express my rule as follows: what Mother Nature does is rigorous until proven otherwise; what humans and science do is flawed until proven otherwise.
The Agency Problem
We live in a world where agency is divorced from consequences. People don’t have skin in the game.
Skin in the Game
Skin in the game is the only true mitigator of fragility. Hammurabi’s code provided a simple solution— close to thirty-seven hundred years ago. This solution has been increasingly abandoned in modern times, as we have developed a fondness for neomanic complication over archaic simplicity. We need to understand the everlasting solidity of such a solution.
A half-man (or, rather, half-person) is not someone who does not have an opinion, just someone who does not take risks for it.
Dignity is worth nothing unless you earn it, unless you are willing to pay a price for it.
Fat Tony has two heuristics. First, never get on a plane if the pilot is not on board. Second, make sure there is also a copilot. The first heuristic addresses the asymmetry in rewards and punishment, or transfer of fragility between individuals. Ralph Nader has a simple rule: people voting for war need to have at least one descendant (child or grandchild) exposed to combat.
For the Romans, engineers needed to spend some time under the bridge they built— something that should be required of financial engineers today. The English went further and had the families of the engineers spend time with them under the bridge after it was built. To me, every opinion maker needs to have “skin in the game”
The second heuristic is that we need to build redundancy, a margin of safety, avoiding optimization, mitigating (even removing) asymmetries in our sensitivity to risk.
The Robert Rubin Problem
An Example of no skin in the game
Corporate managers have incentives without disincentives— something the general public doesn’t quite get, as they have the illusion that managers are properly “incentivized.” Somehow these managers have been given free options by innocent savers and investors. I am concerned here with managers of businesses that are not owner-operated.
Robert Rubin, former treasury secretary, earned $120 million from Citibank in bonuses over about a decade. The risks taken by the institution were hidden but the numbers looked good … until they didn’t look good (upon the turkey’s surprise). Citibank collapsed, but he kept his money— we taxpayers had to compensate him retrospectively since the government took over the banks’ losses and helped them stand on their feet. This type of payoff is very common; thousands of other executives had it.
The Joseph Stiglitz problem
Stiglitz Syndrome = fragilista (with good intentions) + ex post cherry-picking
Never ask anyone for their opinion, forecast, or recommendation. Just ask them what they have— or don’t have— in their portfolio.
The Alan Blinder Problem – complex environments with nonlinearities are easier to game than linear ones with a small number of variables. The same applies to the gap between the legal and the ethical.
The story is as follows. At Davos, during a private coffee conversation that I thought aimed at saving the world from, among other things, moral hazard and agency problems, I was interrupted by Alan Blinder, a former vice chairman of the Federal Reserve Bank of the United States, who tried to sell me a peculiar investment product that aims at legally hoodwinking taxpayers. It allowed the high net worth investor to get around the regulations limiting deposit insurance (at the time, $100,000) and benefit from coverage for near-unlimited amounts. The investor would deposit funds in any amount and Prof. Blinder’s company would break it up into smaller accounts and invest in banks, thus escaping the limit; it would look like a single account but would be insured in full. In other words, it would allow the super-rich to scam taxpayers by getting free government-sponsored insurance. Yes, scam taxpayers. Legally. With the help of former civil servants who have an insider edge. I blurted out: “Isn’t this unethical?” I was then told in response “It is perfectly legal,” adding the even more incriminating “we have plenty of former regulators on the staff,” (a) implying that what was legal was ethical and (b) asserting that former regulators have an edge over citizens. It took a long time, a couple of years, before I reacted to the event and did my public J’accuse. Alan Blinder is certainly not the worst violator of my sense of ethics; he probably irritated me because of the prominence of his previous public position, while the Davos conversation was meant to save the world from evil (I was presenting to him my idea of how bankers take risks at the expense of taxpayers). But what we have here is a model of how people use public office to, at some point, legally profit from the public.
Tell me if you understand the problem in its full simplicity: former regulators and public officials who were employed by the citizens to represent their best interests can use the expertise and contacts acquired on the job to benefit from glitches in the system upon joining private employment— law firms, etc. Think about it a bit further: the more complex the regulation, the more bureaucratic the network, the more a regulator who knows the loops and glitches would benefit from it later, as his regulator edge would be a convex function of his differential knowledge. This is a franchise, an asymmetry one has at the expense of others. (Note that this franchise is spread across the economy; the car company Toyota hired former U.S. regulators and used their “expertise” to handle investigations of its car defects.) Now stage two— things get worse. Blinder and the dean of Columbia University Business School wrote an op-ed opposing the government’s raising the insurance limit on individuals. The article argued that the public should not have the unlimited insurance that Blinder’s clients benefit from.
Cherry-picking
Cherry-picking has optionality: the one telling the story (and publishing it) has the advantage of being able to show the confirmatory examples and completely ignore the rest— and the more volatility and dispersion, the rosier the best story will be (and the darker the worst story). Someone with optionality— the right to pick and choose his story— is only reporting on what suits his purpose. You take the upside of your story and hide the downside, so only the sensational seems to count.
The asymmetry (antifragility of postdictors): postdictors can cherry-pick and produce instances in which their opinions played out and discard mispredictions into the bowels of history. It is like a free option— to them; we pay for it.
My Other Highlights
At no point in history have so many non-risk-takers, that is, those with no personal exposure, exerted so much control. The chief ethical rule is the following: Thou shalt not have antifragility at the expense of the fragility of others.
The process of discovery (or innovation, or technological progress) itself depends on antifragile tinkering, aggressive risk bearing rather than formal education.
Our minds are in the business of turning history into something smooth and linear, which makes us underestimate randomness. But when we see it, we fear it and overreact. Because of this fear and thirst for order, some human systems, by disrupting the invisible or not so visible logic of things, tend to be exposed to harm from Black Swans and almost never get any benefit. You get pseudo-order when you seek order; you only get a measure of order and control when you embrace randomness. – We tell stories to ourselves to make sense of the past. – The Teleological Fallacy
The fragilista (medical, economic, social planning) is one who makes you engage in policies and actions, all artificial, in which the benefits are small and visible, and the side effects potentially severe and invisible.
Seek Simplicity – simplicity has been difficult to implement in modern life because it is against the spirit of a certain brand of people who seek sophistication so they can justify their profession. Less is more and usually more effective.
“You have to work hard to get your thinking clean to make it simple.” The Arabs have an expression for trenchant prose: no skill to understand it, mastery to write it.
Apophatic (what cannot be explicitly said, or directly described, in our current vocabulary)
Hormesis is Essential – Difficulty is what wakes up the genius (ingenium mala saepe movent), which translates in Brooklyn English into “When life gives you a lemon …” The excess energy released from overreaction to setbacks is what innovates!
Many, like the great Roman statesman Cato the Censor, looked at comfort, almost any form of comfort, as a road to waste. He did not like it when we had it too easy, as he worried about the weakening of the will. And the softening he feared was not just at the personal level: an entire society can fall ill.
It is all about redundancy. Nature likes to overinsure itself. Layers of redundancy are the central risk management property of natural systems. We humans have two kidneys (this may even include accountants), extra spare parts, and extra capacity in many, many things (say, lungs, neural system, arterial apparatus), while human design tends to be spare and inversely redundant, so to speak— we have a historical track record of engaging in debt, which is the opposite of redundancy (fifty thousand in extra cash in the bank or, better, under the mattress, is redundancy; owing the bank an equivalent amount, that is, debt, is the opposite of redundancy). Redundancy is ambiguous because it seems like a waste if nothing unusual happens. Except that something unusual happens— usually.
Causal Opacity: it is hard to see the arrow from cause to consequence, making much of conventional methods of analysis, in addition to standard logic, inapplicable. As I said, the predictability of specific events is low, and it is such opacity that makes it low. Not only that, but because of nonlinearities, one needs higher visibility than with regular systems— instead what we have is opacity.
In the complex world, the notion of “cause” itself is suspect; it is either nearly impossible to detect or not really defined— another reason to ignore newspapers, with their constant supply of causes for things.
Humans tend to do better with acute than with chronic stressors, particularly when the former are followed by ample time for recovery, which allows the stressors to do their jobs as messengers. – Think weight lifting
Some parts on the inside of a system may be required to be fragile in order to make the system antifragile as a result. Or the organism itself might be fragile, but the information encoded in the genes reproducing it will be antifragile. The point is not trivial, as it is behind the logic of evolution. This applies equally to entrepreneurs and individual scientific researchers. – Entrepreneurship is systematically antifragile, but individual efforts are fragile.
If you view things in terms of populations, you must transcend the terms “hormesis” and “Mithridatization” as a characterization of antifragility. Why? To rephrase the argument made earlier, hormesis is a metaphor for direct antifragility, when an organism directly benefits from harm; with evolution, something hierarchically superior to that organism benefits from the damage. From the outside, it looks like there is hormesis, but from the inside, there are winners and losers.
He who has never sinned is less reliable than he who has only sinned once. And someone who has made plenty of errors— though never the same error more than once— is more reliable than someone who has never made any.
My characterization of a loser is someone who, after making a mistake, doesn’t introspect, doesn’t exploit it, feels embarrassed and defensive rather than enriched with a new piece of information, and tries to explain why he made the mistake rather than moving on. These types often consider themselves the “victims” of some large plot, a bad boss, or bad weather.
By disrupting the model, as we will see, with bailouts, governments typically favor a certain class of firms that are large enough to require being saved in order to avoid contagion to other businesses. This is the opposite of healthy risk-taking; it is transferring fragility from the collective to the unfit. People have difficulty realizing that the solution is building a system in which nobody’s fall can drag others down— for continuous failures work to preserve the system. Paradoxically, many government interventions and social policies end up hurting the weak and consolidating the established.
This is the central illusion in life: that randomness is risky, that it is a bad thing – and that eliminating randomness is done by eliminating randomness
There is another issue with the abstract state, a psychological one. We humans scorn what is not concrete. We are more easily swayed by a crying baby than by thousands of people dying elsewhere that do not make it to our living room through the TV set. The one case is a tragedy, the other a statistic. Our emotional energy is blind to probability. The media make things worse as they play on our infatuation with anecdotes, our thirst for the sensational, and they cause a great deal of unfairness that way. At the present time, one person is dying of diabetes every seven seconds, but the news can only talk about victims of hurricanes with houses flying in the air.
The Great Turkey Problem – A turkey is fed for a thousand days by a butcher; every day confirms to its staff of analysts that butchers love turkeys “with increased statistical confidence.” The butcher will keep feeding the turkey until a few days before Thanksgiving. Then comes that day when it is really not a very good idea to be a turkey. So with the butcher surprising it, the turkey will have a revision of belief— right when its confidence in the statement that the butcher loves turkeys is maximal and “it is very quiet” and soothingly predictable in the life of the turkey.
Absence of fluctuations in the market causes hidden risks to accumulate with impunity. The longer one goes without a market trauma, the worse the damage when commotion occurs.
The ancients perfected the method of random draw in more or less difficult situations— and integrated it into divinations. These draws were really meant to pick a random exit without having to make a decision, so one would not have to live with the burden of the consequences later. You went with what the gods told you to do, so you would not have to second-guess yourself later. One of the methods, called sortes virgilianae (fate as decided by the epic poet Virgil), involved opening Virgil’s Aeneid at random and interpreting the line that presented itself as direction for the course of action. You should use such a method for every sticky business decision. I will repeat until I get hoarse: the ancients evolved hidden and sophisticated ways and tricks to exploit randomness. For instance, I actually practice such a randomizing heuristic in restaurants. Given the lengthening and complication of menus, subjecting me to what psychologists call the tyranny of choice, with the stinging feeling after my decision that I should have ordered something else, I blindly and systematically duplicate the selection by the most overweight male at the table; and when no such person is present, I randomly pick from the menu without reading the name of the item, under the peace of mind that Baal made the choice for me.
The problem with artificially suppressed volatility is not just that the system tends to become extremely fragile; it is that, at the same time, it exhibits no visible risks. Also remember that volatility is information. In fact, these systems tend to be too calm and exhibit minimal variability as silent risks accumulate beneath the surface. Although the stated intention of political leaders and economic policy makers is to stabilize the system by inhibiting fluctuations, the result tends to be the opposite. These artificially constrained systems become prone to Black Swans.
Think second- and third-order consequences – Re: Dalio, Principles
The separation of “work” and “leisure” (though the two would look identical to someone from a wiser era)
A theory is a very dangerous thing to have. And of course one can rigorously do science without it. What scientists call phenomenology is the observation of an empirical regularity without a visible theory for it. In the Triad, I put theories in the fragile category, phenomenology in the robust one. Theories are superfragile; they come and go, then come and go, then come and go again; phenomenologies stay, and I can’t believe people don’t realize that phenomenology is “robust” and usable, and theories, while overhyped, are unreliable for decision making— outside physics.
Over-intervention comes with under-intervention. Indeed, as in medicine, we tend to over-intervene in areas with minimal benefits (and large risks) while under-intervening in areas in which intervention is necessary, like emergencies. So the message here is in favor of staunch intervention in some areas, such as ecology or to limit the economic distortions and moral hazard caused by large corporations. What should we control? As a rule, intervening to limit size (of companies, airports, or sources of pollution), concentration, and speed are beneficial in reducing Black Swan risks. These actions may be devoid of iatrogenics— but it is hard to get governments to limit the size of government.
Since procrastination is a message from our natural willpower via low motivation, the cure is changing the environment, or one’s profession, by selecting one in which one does not have to fight one’s impulses. Few can grasp the logical consequence that, instead, one should lead a life in which procrastination is good, as a naturalistic-risk-based form of decision making.
The supply of information to which we are exposed thanks to modernity is transforming humans from the equable second fellow into the neurotic first one. For the purpose of our discussion, the second fellow only reacts to real information, the first largely to noise. The difference between the two fellows will show us the difference between noise and signal. Noise is what you are supposed to ignore, signal what you need to heed.
Noise is a generalization beyond the actual sound to describe random information that is totally useless for any purpose, and that you need to clean up to make sense of what you are listening to.
Just as we are not likely to mistake a bear for a stone (but likely to mistake a stone for a bear), it is almost impossible for someone rational, with a clear, uninfected mind, someone who is not drowning in data, to mistake a vital signal, one that matters for his survival, for noise— unless he is overanxious, oversensitive, and neurotic, hence distracted and confused by other messages. Significant signals have a way to reach you.
Curiosity is antifragile, like an addiction, and is magnified by attempts to satisfy it— books have a secret mission and ability to multiply, as everyone who has wall-to-wall bookshelves knows well.
Excess wealth, if you don’t need it, is a heavy burden. Nothing was more hideous in his eyes than excessive refinement— in clothes, food, lifestyle, manners— and wealth was nonlinear. Beyond some level it forces people into endless complications of their lives, creating worries about whether the housekeeper in one of the country houses is scamming them while doing a poor job and similar headaches that multiply with money.
A man is honorable in proportion to the personal risks he takes for his opinion— in other words, the amount of downside he is exposed to. To sum him up, Nero believed in erudition, aesthetics, and risk taking— little else.
Stoicism makes you desire the challenge of a calamity. And Stoics look down on luxury: about a fellow who led a lavish life, Seneca wrote: “He is in debt, whether he borrowed from another person or from fortune.”
Stoicism, seen this way, becomes pure robustness— for the attainment of a state of immunity from one’s external circumstances, good or bad, and an absence of fragility to decisions made by fate, is robustness. Random events won’t affect us either way (we are too strong to lose, and not greedy to enjoy the upside), so we stay in the middle column of the Triad.
I would go through the mental exercise of assuming every morning that the worst possible thing had actually happened— the rest of the day would be a bonus. Actually the method of mentally adjusting “to the worst” had advantages way beyond the therapeutic, as it made me take a certain class of risks for which the worst case is clear and unambiguous, with limited and known downside. It is hard to stick to a good discipline of mental write-off when things are going well, yet that’s when one needs the discipline the most. Moreover, once in a while, I travel, Seneca-style, in uncomfortable circumstances (though unlike him I am not accompanied by “one or two” slaves). An intelligent life is all about such emotional positioning to eliminate the sting of harm, which as we saw is done by mentally writing off belongings so one does not feel any pain from losses. The volatility of the world no longer affects you negatively.
Invest in good actions. Things can be taken away from us— not good deeds and acts of virtue.
The barbell businessman-scholar situation was ideal; after three or four in the afternoon, when I left the office, my day job ceased to exist until the next day and I was completely free to pursue what I found most valuable and interesting. When I tried to become an academic I felt like a prisoner, forced to follow others’ less rigorous, self-promotional programs.
Professions can be serial: something very safe, then something speculative. A friend of mine built himself a very secure profession as a book editor, in which he was known to be very good. Then, after a decade or so, he left completely for something speculative and highly risky. This is a true barbell in every sense of the word: he can fall back on his previous profession should the speculation fail, or fail to bring the expected satisfaction. This is what Seneca elected to do: he initially had a very active, adventurous life, followed by a philosophical withdrawal to write and meditate, rather than a “middle” combination of both. Many of the “doers” turned “thinkers” like Montaigne have done a serial barbell: pure action, then pure reflection.
“f*** you money”— a sum large enough to get most, if not all, of the advantages of wealth (the most important one being independence and the ability to only occupy your mind with matters that interest you) but not its side effects, such as having to attend a black-tie charity event and being forced to listen to a polite exposition of the details of a marble-rich house renovation. The worst side effect of wealth is the social associations it forces on its victims, as people with big houses tend to end up socializing with other people with big houses. Beyond a certain level of opulence and independence, gents tend to be less and less personable and their conversation less and less interesting.
Authors, artists, and even philosophers are much better off having a very small number of fanatics behind them than a large number of people who appreciate their work. The number of persons who dislike the work don’t count— there is no such thing as the opposite of buying your book, or the equivalent of losing points in a soccer game, and this absence of negative domain for book sales provides the author with a measure of optionality. Further, it helps when supporters are both enthusiastic and influential. Wittgenstein, for instance, was largely considered a lunatic, a strange bird, or just a b*** t operator by those whose opinion didn’t count (he had almost no publications to his name). But he had a small number of cultlike followers, and some, such as Bertrand Russell and J. M. Keynes, were massively influential. Beyond books, consider this simple heuristic: your work and ideas, whether in politics, the arts, or other domains, are antifragile if, instead of having one hundred percent of the people finding your mission acceptable or mildly commendable, you are better off having a high percentage of people disliking you and your message (even intensely), combined with a low percentage of extremely loyal and enthusiastic supporters. Options like dispersion of outcomes and don’t care about the average too much. – Think Seth Godin and Tribes
Consider two types of knowledge. The first type is not exactly “knowledge”; its ambiguous character prevents us from associating it with the strict definitions of knowledge. It is a way of doing things that we cannot really express in clear and direct language— it is sometimes called apophatic— but that we do nevertheless, and do well. The second type is more like what we call “knowledge”; it is what you acquire in school, can get grades for, can codify, what is explainable, academizable, rationalizable, formalizable, theoretizable, codifiable, Sovietizable, bureaucratizable, Harvardifiable, provable, etc. The error of naive rationalism leads to overestimating the role and necessity of the second type, academic knowledge, in human affairs— and degrading the uncodifiable, more complex, intuitive, or experience-based type. There is no proof against the statement that the role such explainable knowledge plays in life is so minor that it is not even funny. We are very likely to believe that skills and ideas that we actually acquired by antifragile doing, or that came naturally to us (from our innate biological instinct), came from books, ideas, and reasoning. We get blinded by it; there may even be something in our brains that makes us suckers for the point
Random Tinkering (antifragile) → Heuristics (technology) → Practice and Apprenticeship → Random Tinkering (antifragile) → Heuristics (technology) → Practice and Apprenticeship …
In parallel to the above loop,
Practice → Academic Theories → Academic Theories → Academic Theories → Academic Theories … (with, of course, some exceptions, some accidental leaks, though these are indeed rare and overhyped and grossly generalized).
People with too much smoke and complicated tricks and methods in their brains start missing elementary, very elementary things. Persons in the real world can’t afford to miss these things; otherwise they crash the plane. Unlike researchers, they were selected for survival, not complications. So I saw the less is more in action: the more studies, the less obvious elementary but fundamental things become; activity, on the other hand, strips things to their simplest possible model.
Evolution is not a competition between ideas, but between humans and systems based on such ideas. An idea does not survive because it is better than the competition, but rather because the person who holds it has survived! Accordingly, wisdom you learn from your grandmother should be vastly superior (empirically, hence scientifically) to what you get from a class in business school (and, of course, considerably cheaper). My sadness is that we have been moving farther and farther away from grandmothers.
If you face n options, invest in all of them in equal amounts. Small amounts per trial, lots of trials, broader than you want. Why? Because in Extremistan, it is more important to be in something in a small amount than to miss it. As one venture capitalist told me: “The payoff can be so large that you can’t afford not to be in everything.”
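As an illustration only (every number here is hypothetical), the 1/n heuristic can be sketched as a toy calculation: in a fat-tailed world where a rare option pays off enormously, tiny equal stakes in everything capture the winners that a concentrated bet is likely to miss.

```python
# Toy illustration of the 1/n heuristic (all numbers hypothetical).
n_options = 500
budget = 500.0
winner_multiple = 1000.0  # a rare Extremistan payoff: 1000x the stake

# Suppose 1% of options are winners (here: every 100th one); the rest lose the stake.
payoffs = [winner_multiple if i % 100 == 0 else 0.0 for i in range(n_options)]

# 1/n heuristic: the same small stake in every option.
stake = budget / n_options                    # 1.0 per option
one_over_n = sum(stake * p for p in payoffs)  # captures all 5 winners

# Concentrated alternative: the whole budget split across 5 options picked without foresight.
picked = [1, 2, 3, 4, 5]                      # happens to miss every winner
concentrated = sum((budget / 5) * payoffs[i] for i in picked)

print(one_over_n)    # 5000.0
print(concentrated)  # 0.0
```

The point is not the specific numbers but the asymmetry: the 1/n portfolio caps each loss at a small stake while staying exposed to every large payoff, whereas the concentrated bet forgoes most of that exposure.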
The difference between humans and animals lies in the ability to collaborate, engage in business, let ideas, pardon the expression, copulate. Collaboration has explosive upside, what is mathematically called a superadditive function, i.e., one plus one equals more than two, and one plus one plus one equals much, much more than three. That is pure nonlinearity with explosive benefits— we will get into details on how it benefits from the philosopher’s stone. Crucially, this is an argument for unpredictability and Black Swan effects: since you cannot forecast collaborations and cannot direct them, you cannot see where the world is going. All you can do is create an environment that facilitates these collaborations, and lay the foundation for prosperity.
Corporations are in love with the idea of the strategic plan. They need to pay to figure out where they are going. Yet there is no evidence that strategic planning works— we even seem to have evidence against it. A management scholar, William Starbuck, has published a few papers debunking the effectiveness of planning— it makes the corporation option-blind, as it gets locked into a non-opportunistic course of action.
(i) Look for optionality; in fact, rank things according to optionality, (ii) preferably with open-ended, not closed-ended, payoffs; (iii) Do not invest in business plans but in people, so look for someone capable of changing six or seven times over his career, or more (an idea that is part of the modus operandi of the venture capitalist Marc Andreessen); one gets immunity from the backfit narratives of the business plan by investing in people. It is simply more robust to do so; (iv) Make sure you are barbelled, whatever that means in your business.
Only the autodidacts are free. And not just in school matters— those who decommoditize, detouristify their lives. Sports try to put randomness in a box like the ones sold in aisle six next to canned tuna— a form of alienation.
“much of what other people know isn’t worth knowing.” To this day I still have the instinct that the treasure, what one needs to know for a profession, is necessarily what lies outside the corpus, as far away from the center as possible. But there is something central in following one’s own direction in the selection of readings: what I was given to study in school I have forgotten; what I decided to read on my own, I still remember.
On the primacy of tradition and Naive Rationalism:
FAT TONY: “You are killing the things we can know but not express. And if I asked someone riding a bicycle just fine to give me the theory behind his bicycle riding, he would fall from it. By bullying and questioning people you confuse them and hurt them.”
FAT TONY: “My dear Socrates … you know why they are putting you to death? It is because you make people feel stupid for blindly following habits, instincts, and traditions. You may be occasionally right. But you may confuse them about things they’ve been doing just fine without getting in trouble. You are destroying people’s illusions about themselves. You are taking the joy of ignorance out of the things we don’t understand. And you have no answer; you have no answer to offer them.”
Things are too complicated to be expressed in words; by doing so, you kill humans. Or people— as with the green lumber— may be focusing on the right things but we are not good enough to figure it out intellectually.
The payoff, what happens to you (the benefits or harm from it), is always the most important thing, not the event itself. Philosophers talk about truth and falsehood. People in life talk about payoff, exposure, and consequences (risks and rewards), hence fragility and antifragility. And sometimes philosophers and thinkers and those who study conflate Truth with risks and rewards.
You decide principally based on fragility, not probability. Or to rephrase, you decide principally based on fragility, not so much on True/False.
If I tell you that some result is true with a 95 percent confidence level, you would be quite satisfied. But what if I told you that the plane was safe with a 95 percent confidence level? Even a 99 percent confidence level would not do, as a 1 percent probability of a crash would be quite alarming (today commercial planes operate with less than a one-in-several-hundred-thousand probability of crashing, and the ratio is improving, as we saw that every error leads to the improvement of overall safety). So, to repeat, the probability (hence True/False) does not work in the real world; it is the payoff that matters.
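The contrast between statistical confidence and payoff can be made concrete with a toy expected-value calculation (all figures hypothetical): the same 95 percent is fine when being wrong is cheap, and ruinous when being wrong means a crash.

```python
def expected_payoff(p_right: float, gain: float, loss: float) -> float:
    """Probability-weighted gain minus probability-weighted loss."""
    return p_right * gain - (1 - p_right) * loss

# A statistical claim: being wrong 5% of the time costs about as much as being right earns.
paper = expected_payoff(p_right=0.95, gain=1.0, loss=1.0)

# A flight with the same "95 percent confidence": being wrong is ruin.
flight = expected_payoff(p_right=0.95, gain=1.0, loss=1_000_000.0)

print(round(paper, 2))   # 0.9
print(round(flight, 2))  # -49999.05
```

Identical probability of being right, opposite verdicts — which is why Taleb says the decision rides on the payoff structure (fragility), not on True/False.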
In spite of what is studied in business schools concerning “economies of scale,” size hurts you at times of stress; it is not a good idea to be large during difficult times.
There are many things without words, matters that we know and can act on but cannot describe directly, cannot capture in human language or within the narrow human concepts that are available to us. Almost anything around us of significance is hard to grasp linguistically— and in fact the more powerful, the more incomplete our linguistic grasp.
A wonderfully simple heuristic: charlatans are recognizable in that they will give you positive advice, and only positive advice, exploiting our gullibility and sucker-proneness for recipes that hit you in a flash as just obvious, then evaporate later as you forget them. Just look at the “how to” books with, in their title, “Ten Steps for—” (fill in: enrichment, weight loss, making friends, innovation, getting elected, building muscles, finding a husband, running an orphanage, etc.). Yet in practice it is the negative that’s used by the pros, those selected by evolution: chess grandmasters usually win by not losing; people become rich by not going bust (particularly when others do); religions are mostly about interdicts; the learning of life is about what to avoid. You reduce most of your personal risks of accident thanks to a small number of measures.
We are moving into the far more uneven distribution of 99/1 across many things that used to be 80/20: 99 percent of Internet traffic is attributable to less than 1 percent of sites, 99 percent of book sales come from less than 1 percent of authors … and I need to stop because numbers are emotionally stirring. Almost everything contemporary has winner-take-all effects, which includes sources of harm and benefits. Accordingly, as I will show, a 1 percent modification of systems can lower fragility (or increase antifragility) by about 99 percent— and all it takes is a few steps, very few steps, often at low cost, to make things better and safer.
If someone has a long bio, I skip him— at a conference a friend invited me to have lunch with an overachieving hotshot whose résumé “can cover more than two or three lives”; I skipped to sit at a table with the trainees and stage engineers. Likewise when I am told that someone has three hundred academic papers and twenty-two honorary doctorates, but no other single compelling contribution or main idea behind it, I avoid him like the bubonic plague.
What survives must be good at serving some (mostly hidden) purpose that time can see but our eyes and logical faculties can’t capture. In this chapter we use the notion of fragility as a central driver of prediction. Recall the foundational asymmetry: the antifragile benefits from volatility and disorder, the fragile is harmed. Well, time is the same as disorder.
The prime error is as follows. When asked to imagine the future, we have the tendency to take the present as a baseline, then produce a speculative destiny by adding new technologies and products to it and what sort of makes sense, given an interpolation of past developments. We also represent society according to our utopia of the moment, largely driven by our wishes— except for a few people called doomsayers, the future will be largely inhabited by our desires. So we will tend to over-technologize it and underestimate the might of the equivalent of these small wheels on suitcases that will be staring at us for the next millennia.
I received an interesting letter from Paul Doolan from Zurich, who was wondering how we could teach children skills for the twenty-first century since we do not know which skills will be needed in the twenty-first century— he figured out an elegant application of the large problem that Karl Popper called the error of historicism. Effectively my answer would be to make them read the classics. The future is in the past. Actually there is an Arabic proverb to that effect: he who does not have a past has no future.
Another mental bias causing the overhyping of technology comes from the fact that we notice change, not statics. The classic example, discovered by the psychologists Daniel Kahneman and Amos Tversky, applies to wealth. (The pair developed the idea that our brains like minimal effort and get trapped that way, and they pioneered a tradition of cataloging and mapping human biases with respect to perception of random outcomes and decision making under uncertainty). If you announce to someone “you lost $10,000,” he will be much more upset than if you tell him “your portfolio value, which was $785,000, is now $775,000.” Our brains have a predilection for shortcuts, and the variation is easier to notice (and store) than the entire record. It requires less memory storage. This psychological heuristic (often operating without our awareness), the error of variation in place of total, is quite pervasive, even with matters that are visual.
A rule on what to read: “As little as feasible from the last twenty years, except history books that are not about the last fifty years.”
The problem with lack of recursion in learning— lack of second-order thinking— is as follows. If those delivering some messages deemed valuable for the long term have been persecuted in past history, one would expect that there would be a correcting mechanism, that intelligent people would end up learning from such historical experience so those delivering new messages would be greeted with the new understanding in mind. But nothing of the sort takes place. This lack of recursive thinking applies not just to prophecy, but to other human activities as well: if you believe that what will work and do well is going to be a new idea that others did not think of, what we commonly call “innovation,” then you would expect people to pick up on it and have a clearer eye for new ideas without too much reference to the perception of others. But they don’t: something deemed “original” tends to be modeled on something that was new at the time but is no longer new, so being an Einstein for many scientists means solving a similar problem to the one Einstein solved when at the time Einstein was not solving a standard problem at all. The very idea of being an Einstein in physics is no longer original. I’ve detected in the area of risk management the similar error, made by scientists trying to be new in a standard way. People in risk management only consider risky things that have hurt them in the past (given their focus on “evidence”), not realizing that, in the past, before these events took place, these occurrences that hurt them severely were completely without precedent, escaping standards. And my personal efforts to make them step outside their shoes to consider these second-order considerations have failed— as have my efforts to make them aware of the notion of fragility.
Only resort to medical techniques when the health payoff is very large (say, saving a life) and visibly exceeds its potential harm, such as incontrovertibly needed surgery or lifesaving medicine (penicillin). It is the same as with government intervention. This is squarely Thalesian, not Aristotelian (that is, decision making based on payoffs, not knowledge). For in these cases medicine has positive asymmetries— convexity effects— and the outcome will be less likely to produce fragility. Otherwise, in situations in which the benefits of a particular medicine, procedure, or nutritional or lifestyle modification appear small— say, those aiming for comfort— we have a large potential sucker problem (hence putting us on the wrong side of convexity effects).
What we call diseases of civilization result from the attempt by humans to make life comfortable for ourselves against our own interest, since the comfortable is what fragilizes.
Evolution proceeds by undirected, convex bricolage or tinkering, inherently robust, i.e., with the achievement of potential stochastic gains thanks to continuous, repetitive, small, localized mistakes. What men have done with top-down, command-and-control science has been exactly the reverse: interventions with negative convexity effects, i.e., the achievement of small certain gains through exposure to massive potential mistakes. Our record of understanding risks in complex systems (biology, economics, climate) has been pitiful, marred with retrospective distortions (we only understand the risks after the damage takes place, yet we keep making the mistake), and there is nothing to convince me that we have gotten better at risk management. In this particular case, because of the scalability of the errors, you are exposed to the wildest possible form of randomness. Simply, humans should not be given explosive toys (like atomic bombs, financial derivatives, or tools to create life).
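Taleb's contrast between convex tinkering and concave intervention can be made concrete with a toy simulation. This is my own sketch, not anything from the book, and the payoff functions and volatility levels are arbitrary illustrative assumptions: a convex exposure caps losses and leaves gains open-ended, a concave one does the reverse, and we watch the average payoff of each as volatility rises.

```python
import random

def mean_payoff(payoff, sigma, trials=100_000, seed=0):
    """Average payoff when the underlying shock x ~ Normal(0, sigma)."""
    rng = random.Random(seed)
    return sum(payoff(rng.gauss(0, sigma)) for _ in range(trials)) / trials

# Convex exposure: losses capped at -1 (one cheap failed experiment),
# gains unbounded -- the tinkerer's position.
convex = lambda x: max(x, -1)

# Concave exposure: gains capped at +1, losses unbounded --
# "small certain gains through exposure to massive potential mistakes."
concave = lambda x: min(x, 1)

for sigma in (0.5, 1.0, 2.0, 4.0):
    print(f"volatility {sigma}: convex {mean_payoff(convex, sigma):+.2f}, "
          f"concave {mean_payoff(concave, sigma):+.2f}")
```

As volatility grows, the convex payoff's average climbs while the concave one sinks: the tinkerer is long volatility, the top-down intervenor is short it.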
If there is something in nature you don’t understand, odds are it makes sense in a deeper way that is beyond your understanding. So there is a logic to natural things that is much superior to our own. Just as there is a dichotomy in law: innocent until proven guilty as opposed to guilty until proven innocent, let me express my rule as follows: what Mother Nature does is rigorous until proven otherwise; what humans and science do is flawed until proven otherwise.
So the modus operandi in every venture is to remain as robust as possible to changes in theories (let me repeat that my deference to Mother Nature is entirely statistical and risk-management-based, i.e., again, grounded in the notion of fragility).
If true wealth consists in worriless sleeping, clear conscience, reciprocal gratitude, absence of envy, good appetite, muscle strength, physical energy, frequent laughs, no meals alone, no gym class, some physical labor (or hobby), good bowel movements, no meeting rooms, and periodic surprises, then it is largely subtractive (elimination of iatrogenics).
Look at it again, the way we looked at entrepreneurs. They are usually wrong and make “mistakes”— plenty of mistakes. They are convex. So what counts is the payoff from success. – It’s ok to be wrong as long as you’re wrong on a small scale and learn from it. If you’re right BIG enough then you only need to be right once or a few times.
Playing on one’s inner agency problem can go beyond symmetry: give soldiers no options and see how antifragile they can get. On April 29, 711, the armies of the Arab commander Tarek crossed the Strait of Gibraltar from Morocco into Spain with a small army (the name Gibraltar is derived from the Arabic Jabal Tarek, meaning “mount of Tarek”). Upon landing, Tarek had his ships put to the fire. He then made a famous speech every schoolchild memorized during my school days that I translate loosely: “Behind you is the sea, before you, the enemy. You are vastly outnumbered. All you have is sword and courage.” And Tarek and his small army took control of Spain. The same heuristic seems to have played out throughout history, from Cortés in Mexico, eight hundred years later. – No options means you have to succeed.
Never listen to a leftist who does not give away his fortune or does not live the exact lifestyle he wants others to follow. What the French call “the caviar left,” la gauche caviar, or what Anglo-Saxons call champagne socialists, are people who advocate socialism, sometimes even communism, or some political system with sumptuary limitations, while overtly leading a lavish lifestyle, often financed by inheritance— not realizing the contradiction that they want others to avoid just such a lifestyle.
Let me make the point clearer: the version of “capitalism” or whatever economic system you need to have is with the minimum number of people in the left column of the Triad. Nobody realizes that the central problem of the Soviet system was that it put everyone in charge of economic life in that nasty fragilizing left column.
The problem of the commercial world is that it only works by addition (via positiva), not subtraction (via negativa): pharmaceutical companies don’t gain if you avoid sugar; the manufacturer of health club machines doesn’t benefit from your deciding to lift stones and walk on rocks (without a cell phone); your stockbroker doesn’t gain from your decision to limit your investments to what you see with your own eyes, say your cousin’s restaurant or an apartment building in your neighborhood; all these firms have to produce “growth in revenues” to satisfy the metric of some slow thinking or, at best, semi-slow thinking MBA analyst sitting in New York.
With the exception of, say, drug dealers, small companies and artisans tend to sell us healthy products, ones that seem naturally and spontaneously needed; larger ones— including pharmaceutical giants— are likely to be in the business of producing wholesale iatrogenics, taking our money, and then, to add insult to injury, hijacking the state thanks to their army of lobbyists. Further, anything that requires marketing appears to carry such side effects. You certainly need an advertising apparatus to convince people that Coke brings them “happiness”— and it works.
Anything one needs to market heavily is necessarily either an inferior product or an evil one. And it is highly unethical to portray something in a more favorable light than it actually is. One may make others aware of the existence of a product, say a new belly dancing belt, but I wonder why people don’t realize that, by definition, what is being marketed is necessarily inferior, otherwise it would not be advertised. – First key to marketing is having a good product so that all you have to do is make others aware.
The glass is dead; living things are long volatility. The best way to verify that you are alive is by checking if you like variations. Remember that food would not have a taste if it weren’t for hunger; results are meaningless without effort, joy without sadness, convictions without uncertainty, and an ethical life isn’t so when stripped of personal risks.
Things that I’m doing differently
Part of the trouble with Antifragile, in my mind, is that the implications are so large and so contrary to everything we’ve been conditioned to believe that it’s hard to put them into action.
I think a large part of what I took away from the book, and have used, is a somewhat different approach to nearly everything in my life, one that is not easy to boil down into actionable steps. However, a few things pop out as having changed:
I’m using the time heuristic for choosing what to consume. I’ve massively reduced the number of podcasts and blogs I read in exchange for more books and audiobooks, and I’m even biasing those towards older options, all other factors being equal. Admittedly, I still spend a lot of time on Twitter.
I think Twitter is an example of something that’s Antifragile. It facilitates collaboration that has the potential for large upside. About half the people at my wedding, and many professional contacts, came from Twitter.
I’m using the skin in the game heuristic for deciding what advice to listen to. In particular, as it relates to financial advice, I tend to ask people how they are investing their own portfolios.
Similarly, just phrasing questions differently to imply skin in the game can change how people respond. For instance, I tend to ask doctors not what they think I should do, but what they would recommend if I were their son or brother. This framing, which implies a relationship where they have more skin in the game, tends to change how they think about solutions.
Being more mindful of the Signal/Noise Ratio. I used to look at my site’s traffic in Google Analytics daily. Now I try to look once a month or so. There is very little to be learned by looking more frequently; I’m just looking at more noise. Similarly, I try to be very selective about which news sources or commentators I follow. The vast majority are just trying to stay relevant and get eyeballs, which tends to lead to a very noisy stream of output.
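The signal/noise point can be quantified with a back-of-the-envelope calculation in the spirit of an example Taleb gives in Fooled by Randomness. Assume, purely for illustration, a process with a 15% annual return and 10% annual volatility, and model it as Gaussian (a deliberate oversimplification). The probability that an observation window shows a gain collapses toward a coin flip as the window shrinks:

```python
import math

def p_positive(mu=0.15, sigma=0.10, dt=1.0):
    """Probability that a Gaussian process with annual drift mu and
    annual volatility sigma shows a gain over a window of dt years."""
    z = (mu * dt) / (sigma * math.sqrt(dt))        # window mean / window std
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF at z

for label, dt in [("year", 1.0), ("month", 1 / 12),
                  ("day", 1 / 252), ("hour", 1 / (252 * 8))]:
    print(f"checked once per {label}: "
          f"{p_positive(dt=dt):.0%} chance of seeing a gain")
```

Roughly 93% of yearly checks show a gain, but only about 54% of daily ones: the drift (signal) scales with dt while the volatility (noise) scales with √dt, so frequent checking mostly delivers noise.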
Facilitating Collaboration – I’m trying to actively spend more time around other people and collaborate with them. As Taleb says, collaboration gives us huge amounts of optionality, but of course we can’t see it until it’s already happened. Some real world examples of this are living with interesting/cool people, co-working, and cocktail parties (or equivalent, less pretentious, get-togethers). I actually ended up taking on a business partner, in part because of this influence.
I wrote a follow-up post called How to Get Lucky: Focus On The Fat Tails where I go into more examples of how I’ve tried to apply this thinking to my life.
Probably most notably, I started an investment firm whose philosophy is deeply intertwined with Taleb’s own thinking.
I’d love to hear from anyone else about things they’ve changed. As I said, I think the potential implications here are extraordinary and I’m looking for more ways to implement them in my life.
Last Updated on May 14, 2022 by Taylor Pearson
Piero says
Thank you for sharing.
Piero
Taylor Pearson says
Glad you found it helpful Piero. Would love to hear your thoughts as well if you feel like sharing.
Dan says
THANK YOU.
Taylor Pearson says
Thank you. It was mainly your prodding that got me to do it 🙂
Gabe Strauss says
Holy crap Taylor. Do you do this with every (or even most) book you read?
Taylor Pearson says
Just the ones I really want to try and ingrain/implement. I try and keep the notes as brief as possible while still capturing the main points. This particular book is so extensive in the concepts it advances and the implications that I couldn’t condense it down much more.
Jurgen Dhaese says
Damn, are you sure these are just notes? Taleb might consider this piracy.
Talk to you when I finish the book.
Taylor Pearson says
Uhh, keep it on the down low? I’m speculating that he’s more interested in advancing the concepts than anything else. Worst case scenario, I get contacted about it and try and leverage it for an introduction.
Paulo Ribeiro says
>”Worst case scenario, I get contacted about it and try and leverage it for an introduction.”
yes, always looking through the bright side. great work, sir, thanks!
Jurgen Dhaese says
Was joking. From his tweet, it looks like he likes it. 😉
Rav Gaujar says
This is an amazingly good summary! Have you considered the applications of antifragile thinking and tinkering, to corporate strategy specifically? I’ve just started on a MOOC at udemy.com on antifragility for organisations, which seems good although not as detailed as your summary here. Rav.
Taylor Pearson says
Thanks Rav. I’ve thought a LOT about it as it relates to corporate strategy, but it’s such a broad framework that the implications will, I suspect, take decades to flesh out.
A couple I’ve thought about:
Software > Hardware – You can’t scale hardware as well because of production cycles and inventory
Back up everything!
A lot more that I’ve thought about but aren’t coming to mind. I’m reading The Fifth Discipline right now, which definitely has some overlap with the concepts Taleb advances.
TaphaNgum says
This is truly outstanding work Taylor. Thanks for the write-up.
Aps says
great summary. Thanks for providing. Can you pls provide some more examples for practical implications ?
Taylor Pearson says
Spent the last two years of my life working on it. Still in progress. Stay tuned 🙂
kaitangsou says
can't-express-well stuff…that is the strong point here…Still in HCMC btw?
I may head that way…from my place here in Philippines…Klub Safari…btw…
Brian van der Spuy says
I loved Antifragile, and there is much in there that makes a lot of sense, though I found that rather to my frustration, I cannot really put the principles to any good use. I can’t work out whether my own life or the country in which I live, are fragile or not, or how to improve the situation. The book has much general advice, but I have never been able to work out how to apply it to my own specific situation.