The Death of The Middle

And how the internet creates barbell effects

The emergence of department stores opened up a vast array of choices for the everyday consumer. No longer did they have to go to the corner store and hope their store clerk was selling what they desired — these massive stores offered a larger selection of products at cheaper prices. Sears, Macy’s, JCPenney — these were all great businesses.

Until the internet and Amazon came along and reshaped the industry to fit a barbell distribution. We saw an explosion of higher quality products at lower prices (from better economies of scale) on one side, and the rise of the boutique, or specialist (Gucci, Apple, Lululemon, etc.), on the other. So department stores got obliterated. Malls got obliterated. Anything that was in the middle got obliterated.

In this piece we’ll discuss why the internet reshapes industries to fit a barbell distribution curve, and why this results in “the death of the middle.”

This barbell bifurcation is happening across many other sectors, including media, music, and even VC — and the bifurcation is only getting bigger. In media, as we discussed in a previous post, local newspapers got obliterated once the internet went mainstream. General interest magazines too. People today either want the mass market stuff, or the highly niche bloggers (as we see with the prominence of Substack).

We see a similar thing in music. While we all used to listen to the same ~100 bands, now global hits are bigger than ever, and the long tail is bigger too. The middle has been hollowed out.

So why’s this happening?

One reason is the classic innovator’s dilemma. Executives at companies “stuck in the middle” are making so much money they don’t want to take risks, which keeps them optimizing for local maxima at the expense of something truly disruptive. 

Another idea is that the internet is changing our preferences — we’re getting more interested in either exactly what we want, or whatever’s most frictionless. Aggregate or specialize. In other words, give people everything they want or the one thing they need. Everything in the middle gets slaughtered.

Consider McDonald’s as an example. Before McDonald’s there was tons of variance in hamburgers — you never knew what you were going to get, as there was no “one size fits all” burger. But then McDonald’s decided to standardize their product; you knew exactly what you’d get no matter which Mickey D’s you entered. The same thing happened with Starbucks, Gap, and most other big brands that promise consistency as their main value proposition.

Our main decision as internet consumers has become a flat-out no, unless we’re getting exactly what we want, in which case that no becomes a yes. We only want the very specific thing, or the good thing that’s most convenient and consistent.

So the choice is now: “I’m going to find this niche obscure thing, or I’ll get the mass market thing.” It’s either Paul Skallas or Malcolm Gladwell. Aesop Rock or ASAP Rocky. Boutique brand or Amazon. You get the idea.

And you’d think removing friction would create a more level playing field — if it’s equally easy to find and purchase anyone’s products, you’d expect demand to spread out more evenly. But that’s not what happens. Instead we see the familiar barbell, with global mega hits on one side and a long tail of niche creators on the other.

Alex Danco’s series on understanding abundance captures it best (Note: I quote him verbatim liberally below):

When you remove friction, more people default to either hyper-targeted “if” options, or default “else” options. When this happens, you get winner-take-all effects, or at least winner-take-most, in the “else” category. This is why on a street with three coffee shops, Starbucks will be crowded, and on a street with ten coffee shops, Starbucks will be even more crowded. 

That said, the other quirky, independent espresso shops are probably doing okay too. They may also have their own little dedicated customer base that chooses them preferentially.

The decision function looks something like this: 

If [I am on a street] and [find an espresso shop that’s exactly what I want]

then [Go there]

else [Go to Starbucks]
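For the programmatically inclined, that if/else can be written as an actual function. This is just a toy sketch; the shop names and the notion of an "exact preference" are hypothetical placeholders:

```python
# A toy sketch of the "if/else" consumer decision function.
# Shop names and the preference check are hypothetical placeholders.

def choose_shop(shops, exact_preference, default="Starbucks"):
    """Return the shop matching the consumer's exact preference,
    falling back to the frictionless default otherwise."""
    for shop in shops:
        if shop == exact_preference:  # the hyper-targeted "if" branch
            return shop
    return default                    # the winner-take-most "else" branch

# On a street with three coffee shops, the default still wins
# unless one of them is exactly what you want:
print(choose_shop(["Blue Bottle", "Joe's", "Starbucks"], exact_preference="Ritual"))
# Starbucks
```

Note that the default never needs to be anyone's favorite; it only needs to be good enough and always available, which is exactly how the "else" branch compounds into winner-take-most.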

Software exacerbates this in any sector it touches. Removing friction should, in theory, level the playing field — yet the field ends up uneven. Lower switching costs, masked complexity, and cheaper options remove the friction from consumer deliberation, which leads to single-variable consumption decisions, and thus to bifurcated, compounding power law outcomes.

Nassim Taleb talks about this in The Black Swan, using the examples of Mediocristan & Extremistan. Mediocristan represents thin-tailed events, or moderate outcomes — events in which small changes only affect the individual without affecting the broader collective much at all. This is the case in a world of scarcity.

Extremistan, on the other hand, represents fat-tailed events, or power law outcomes, in which small changes can have large effects on not just the individual, but the collective as well. Changes in Extremistan have systemic effects that don’t occur as much in Mediocristan, or, quoting Taleb, “In Extremistan, inequalities are such that one single observation can disproportionately impact the aggregate, or the total.”

The key lesson is that the magnitude of outliers in Extremistan is much larger than that of those in Mediocristan. So in a world wired for scarcity, no wonder the outcomes are so moderate — it’s the system running as designed! However, when we add technology & startups to the equation (along with other things dictated by the power law) we move to a world of abundance where the middle no longer satiates our demands. Companies and their products must be so differentiated that no one else can copy them (the boutique coffee shop), or they must be “full stack” and 100% exactly what we want (Starbucks), as discussed above.

Another way to think about this is using the Internet Rainforest analogy described by Ben Thompson and James Allworth on Exponent: in a rainforest, with its abundantly available water, sunlight, and nutrients, two types of plants thrive: the tiny, highly differentiated plants on the forest floor, and the giant trees that form the canopy. It’s hard to be in the middle. 

As a startup, if you’re in the middle, it’s possible you’ll find customers, but nothing will give you pricing power over the giants, nor the agility to outrun a thousand different little plants (competitors). Your ROIC won’t exceed your cost of capital, a predicament from which growth cannot save you.

So what’s one to do? There are two options:

  1. Go as differentiated as possible and serve the customer exactly what they want.

  2. Power law everything — don’t pick the winners; have the winners all pick you.

Build a “pointy business” that’s purely differentiated, or “no stack”. Or build a “utility business” that does all of the underlying work as a truly “full stack” company/product.

Don’t get stuck in the middle.

Read of the week: Interview with Ben Horowitz

Listen of the week: A16z Live with Marc & Ben — In response to a question, Marc makes a compelling theoretical argument as to why Harvard might outlive the United States.

Cosign of the week: Sotonye

Until next week,


Why Are Institutions Failing Us?

Perhaps our expectations of them have changed

Last week we looked at how the internet ate media & why we're never going back. This week we'll dig into how trust in institutions has evolved over time in a more general sense.

Broadly speaking, the macro trend is an overall collapse of faith in institutions.

Gallup has been running polls on trust across a wide variety of American institutions since the 1970s: federal and city governments, churches, non-profits, public schools, the Boy Scouts, the police, the military, and the press, for example. Institutional trust peaked in the 1960s, only to decline rapidly in the following decades — and on a national level, almost every major U.S. institution has followed this pattern over the last 50 years.

It's interesting to see how this has evolved. The military used to be at the bottom of the trust list. After Vietnam, soldiers were literally spat on when they came home. Now we've put the military at the top of the list, and what’s fallen is the president, the press, and Congress (and more recently, the CDC and WHO too).

It's worth asking ourselves: is this justified?

Well, the way our institutions have responded to COVID is a recent reminder of just how poor their performance was (and is) — nearly every major institution has failed us.

Another question is whether things have always been this way.

Last week we talked about this in terms of media:

Was pre-internet journalism better than what we see today?

One argument is that pre-internet, journalists had a more reliable source of revenue, enabling them to do more investigative work, and the business model shift to online advertising meant they'd now have to produce popular pieces more frequently.

To that I'd say the same social media platforms that changed the business of journalism also changed our perceptions of it. Was past journalism actually better? Or was it the same quality, but we called it great out of ignorance?

I think about how coverage of JFK, FDR, and others would have held up under social media. Walter Duranty, for example, won the Pulitzer in the 1930s for his reporting on the USSR, but we later learned he had covered for Stalin, denying his crimes outright. In the 90s, there were calls to take away his prize, but he kept it.

How many other false stories pre-social media were never overturned?

I'd posit that what we see here in media is just a microcosm of our broader society — a shift away from centralized truth & authority towards a more decentralized way of consuming & producing information. With that shift comes various changes.

In the old days, we experienced media, culture, and politics through the lens of a centralized authority — we were fed whatever narrative they provided, and no one really knew just how bad the "bad news" was, nor did we have context to even wrap our heads around the idea of questioning that authority.

Take public schools for example: are they better today than they were 50 years ago? Maybe. Or, are they just as bad as they were 50 years ago and we don't realize it due to lack of transparency & context back then?

Today we see objections to institutional narratives, partly because we have more information, but mostly because that information spreads faster and more widely than ever. Our perception of the media has evolved just like it has with other institutions.

As a result, we've seen that institutions get less competent and more bureaucratic as they get older. It's clear that the government of 2021 is not the same one that accomplished the Manhattan Project in the 1940s. The smartest people in the country today, on average, don’t go into government anymore, or at least not at the scale they used to.

We also realized legacy institutions weren't built for the information age, and they’re beginning to show their cracks. As Balaji puts it, “no institution that preceded the internet will survive the internet.”

But there’s another way to look at this — a lens that tells only part of the story, but offers a fresh angle: maybe our problem with current institutions isn’t only that they’re functioning poorly. Maybe it’s also that, for some institutions, we simply expect different things from them than we used to.

Indeed: Maybe what’s changed isn’t that we think institutions and elites are failing us, but the scorecard we use to evaluate them altogether.

We used to think institutions were a place for *formation*. Education certainly was: it was where people learned how to be citizens.

This expectation rested on certain assumptions about human nature, namely Christian or Hobbesian ones. We are born savages, the logic goes, and civilization makes us good. The goal of education is to mold us, to transform our caveman-like instincts into those of productive members of society. This also implied that the highest good for people is to contribute to something bigger than themselves (society, god, etc).

Today, we don’t think the role of institutions is formation. Instead, we think of institutions as places for individuals to *perform*. The role of institutions is explicitly not to form us, or at least not without consent — that would be oppression. 

This expectation rests on Rousseauian assumptions about human nature: we are born pure, the logic goes, and civilization corrupts us. Thus, the goal of institutions and elites is to get out of the way so we can maintain and perform our purity. So the goal in a Rousseauian society is not to serve society or god; it’s to serve the self first and foremost. It’s embodied by the quote: “Don’t ask what the world needs, ask what makes you feel alive. Because what the world needs, is more people who feel alive.”

You can imagine an alternative version of this quote that might have different expectations of our institutions: “Don’t ask what makes you feel alive, ask what the world needs. Because what makes you feel alive, is giving the world what it needs.”

Put differently, the shift from a Christian/Hobbesian mentality to a Rousseauian one can be looked at as a change from "honor culture" to "dignity culture" (which I explored in more depth in my piece on therapy culture).

"Honor" was thought of as fitting yourself into the role society expected of you — "knowing your place." Whether you had honor depended on whether you could step into the role that was expected of you. When Socrates said “Know thyself” he really meant "know your role in society.” 

Today we live in a “dignity culture” — one where we believe we should have a right to pursue any role we want. Whether you have dignity depends on whether society recognizes you for who you want to be seen as.

In addition to the focus on the self as the highest goal, another corresponding shift is a change in how we think about truth. Given that man is born perfect in a Rousseauian world, any knowledge that asks man to change is oppressive, whether it’s religion, science, or any other external truth. Truth is not “out there”, the logic goes, it’s within you. So the new spin on Descartes is basically “I feel it, and therefore it is true.” Which of course results in relativism — if there’s no way to adjudicate competing truths, then everyone’s truth is fair game. This is a recipe for conflict. 

Which is why stronger institutions — at least institutions that we think of as forming, like education and media — won’t fix this problem. The Rousseauists don’t want institutions to help them acclimate better to society, they want them to get out of the way so they can be themselves — or more precisely, reengineer society so they can be in reality who they are in their imaginations (totally equal to others). The Hobbesians want the opposite; they want formation and cohesion.

There are plenty of institutions that could be performing better in ways everyone agrees with, where we all have the same expectations of them and their failures are seen and agreed upon by all.

But there are other institutions where our expectations diverge — education being a prominent example — and if we want them to perform “better”, we should first align on exactly what “better” means and looks like.

Read of the week: Interview with Mike Solana

Watch of the week: Learning to Fight, by the founder of Replit. Excited to see follow up videos.

Listen of the week: Julia Galef and Jonathan Haidt

Cosign of the week: Geoff Schullenberger and his podcast, Outsider Theory. He’s coming on the Big Ideas Clubhouse show this Wednesday night at 7:30 PM PST.

Until next week,


How the Internet Ate Media

And why there's no going back

In Reality is Up for Grabs, I wrote about how people are opting into their own versions of reality, and the result is increasing fragmentation.

Many people long for a world pre-Twitter, where we all had a shared sense of reality, but that world isn’t coming back. We’ll discuss why in this piece.

How we got here

When the printing press was invented in 1440, it opened up a new era in information distribution. Thoughts and ideas could spread faster than word of mouth in the first form of broadcast media — a one-to-many dynamic.

Over time, the invention of newspapers, TV, and radio helped solidify this dynamic, acting as highly centralized media aggregators — technologies that held society together. Everyone read the same newspapers, watched the same TV channels, and listened to basically the same radio stations, and this single voice came to be thought of as a source of truth.

But big newspapers and media properties didn’t exist in print’s early days. It was all decentralized and all local — no Walter Cronkite, no Don Lemon, no Rachel Maddow, nothing like that. Instead we heard many voices, mostly pseudonymous, since people were worried about getting arrested or killed for saying the wrong thing. Ben Franklin himself had 15 different pseudonyms, each with different points of view, eviscerating each other to sell newspapers. Sounds like a higher stakes version of Twitter today.

We’re not used to this because we grew up in an era of peak centralization.

The book Infamous Scribblers chronicles the history of the news business and makes the case that the fragmentation we see in media today has been the norm throughout history, and the consolidation of the last 50 years was the aberration.

Everything used to be fractured and fragmented by default. Then came the telegraph, then the telephone, then mass manufacturing, public education, and more. We’re now returning to that earlier way of living, before Peak Centralization. Structurally, we have more in common with the 1800s than with the 1950s.

Explosion of Information

The internet, and more specifically social media, caused this explosion of available information by giving voices to many people who weren't otherwise heard. In turn, this increased the number of voices in the public sphere, which also increased the number of truths and versions of reality someone could believe.

As a result, we see more distrust in society because we can’t seem to reconcile which version of reality is more “true” than another.

To put this explosion of information in perspective: In 2002, the world produced double the amount of information that had been produced throughout all history before that. In 2003, it doubled again. And instead of increased knowledge leading to greater understanding, we saw social and political chaos. Why?

As Martin mentioned on our podcast: “Suddenly this flood of information comes and you realize you knew…nothing!”

Media institutions had a monopoly on truth — they had legitimacy because they had authority, and they had authority because they had a monopoly on information. 

This had been changing over time. The advent of talk radio, cable news, and CNN, along with the repeal of the fairness doctrine, brought more voices into the media.

But it was social media that blew the whole thing open.

Even a couple decades ago, you didn't hear from that many people. It was a pretty big deal for someone to be on TV, and there weren’t that many voices contending for your attention. Even when Facebook started, it was only people on your campus. But by 2010, Facebook had expanded well beyond colleges, and suddenly all these people could speak who could never speak to each other before. Now we knew what everyone was thinking all the time, and it was *problematic*.

It’s the multiplicity of voices that causes distrust. If there's only one voice and it's broadcast, then no one can argue with the voice. But if there’s a cacophony of voices all arguing with each other, each telling their side of the story, it’s hard to know who to believe.

Before social media, a wide swath of the population was just utterly ignored by traditional print, audio, and visual media — people never heard what each other had to say. This was the original filter bubble.

Social media flattened the relationship between the elites and everyone else — it gave the public a voice. When JFK gave a speech, he was uncontested. Now, with everyone transitioning from passive reader to veritable journalist, every statement is challenged. A journalist, instead of being the sole arbiter of truth, is now just another person on social media. The public, not a group of elites, now determines who is an *actual* journalist.

Indeed, as Noah Smith put it: "We all wanted to give a voice to the voiceless and the marginalized. And they did! But don't be surprised when the first thing the voiceless and marginalized say is 'Fuck you.'"

Twitter is the real world

Martin Gurri has cataloged the different revolutions in information, detailing each one's social impacts. From writing, to the alphabet, to the printing press, to mass media, to social media — just as people were originally concerned that the printing press would enable larger scale propaganda, we still see similar concerns today.

Some would call this the "post-truth" era, but that would be misleading — there's more truth than ever. So much so that we don't really know how to process it, since many truths contradict each other. Instead, it feels more like a "post-monopoly-on-truth" era.

Think about it this way: today, burning books is considered abhorrent. Why? One reason is that we ultimately figured out how to process and utilize all the information they contained — so there’s hope we’ll do the same with social media, and we’ll look back at those who want to get rid of it in a similar way.

Put simply, the explosion of truth exposed the superficiality of existing media institutions, created a "public" that could challenge the elites directly, and created platforms for individuals to effectively become legible institutions and develop one-to-many relationships.

Twitter is an interesting case study, because it’s basically eaten all of media. 

People sometimes say "Twitter isn't the real world", which is to say that activity on Twitter is just a small % of the population, and doesn’t manifest or influence how the broader population thinks. This might be technically true, but substantially false. Whoever you think influences the "real world" — journalists, media companies, educators — they're all strongly influenced by Twitter. Everything begins on Twitter and then branches out.

To understand this, a friend gave me the framework of the "OODA Loop": Observe, Orient, Decide, Act.

Observe - what's happening around me?

Orient - where am I relative to what's happening around me?

Decide - how should I respond given where I am and what's happening around me?

Act - respond.

Generally speaking, whoever runs the OODA loop the fastest wins — the idea being the faster you move through the loop, the harder it is for your opponent to understand reality, thus hurting their decision making ability. If you're running your loop faster than they are, you can sometimes act before they've been able to complete the loop — you'll be acting before they can orient, or deciding before they even finish observing.
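As a toy model (my own construction, not from the OODA literature), you can make the speed advantage concrete by counting how many full cycles each actor completes in the same window:

```python
def completed_loops(cycle_time, window):
    """Number of full observe-orient-decide-act cycles finished in a time window."""
    return window // cycle_time

# Hypothetical cycle times, in hours: Twitter reacts roughly hourly,
# a daily paper reacts once per day.
window = 24
twitter_loops = completed_loops(cycle_time=1, window=window)
newspaper_loops = completed_loops(cycle_time=24, window=window)

print(twitter_loops, newspaper_loops)  # 24 1: Twitter acts 24 times per print cycle
```

By the time the slower actor has oriented once, the faster one has already acted two dozen times, which is the whole disruption in miniature.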

So what does this have to do with media institutions?

Well, one of my theories is that the internet offers a hyper-accelerated OODA loop, and Twitter is the fastest loop on the entire internet. As a result, Twitter has the ability to disrupt all other forms of reality perception. Which means what takes off on Twitter will also take off, or at least influence, what you see elsewhere.

TV & newspapers are too slow. By the time their stories run, they're already old news. Twitter is always more up to date. Legacy institutions can’t keep up with the speed of the internet.

This means the most important news will always spread faster on Twitter, but the opposite is also true — the most toxic behavior on Twitter can (and probably will) be mimicked elsewhere.

This is exactly why the stakes on Twitter are so high — what gets resolved on Twitter will influence what you see in the media, the education system, and elsewhere in society.

A Golden Age?

So that begs the question: was pre-internet journalism better than what we see today?

One argument is that pre-internet, journalists had a more reliable source of revenue, enabling them to do more investigative work, and the business model shift to online advertising meant they'd now have to produce popular pieces more frequently.

To that I'd say the same social media platforms that changed the business of journalism also changed our perceptions of it. Was past journalism actually better? Or was it the same quality, but we called it great out of ignorance?

I think about how coverage of JFK, FDR, and others would have held up under social media. Walter Duranty, for example, won the Pulitzer in the 1930s for his reporting on the USSR, but we later learned he had covered for Stalin, denying his crimes outright. In the 90s, there were calls to take away his prize, but he kept it.

How many other false stories pre-social media were never overturned?


So maybe we shouldn’t lament the end of peak centralization, particularly in media. The idea of there being one telephone company, two superpowers, three television stations, and four internet companies: it’s too homogeneous. The fragmentation will lead to new cultures, new ways of thinking, new ideas, and ultimately more cultural innovation.

And for those who still miss peak centralization, the truth is that the internet ate media, and there’s no going back.

Writing of the week: ICYMI, Noah Smith interviewing Patrick Collison.

Watch of the week: Balaji on Tim Ferriss. As they say, this mf don’t miss.

Listen of the week: Geoff Shullenberger and Justin Murphy

Cosign of the week: Andrew Yu launched On Deck Product Management

Until next week,


On Cost Disease

A primer on what it is and how it works

There are two types of American industries: one that has had rapid innovation and falling prices, and one that has had the opposite.

In the first—e.g. software, manufacturing—consumers have grown accustomed to paying less for more. 

In the second—e.g. healthcare, education, construction—they pay more for the same (or less).

Industries like electronics, media, food, and clothing have had spectacular productivity growth: productivity is rising, prices are declining, and wages are climbing, mostly because workers in those sectors, thanks to technology, can produce a lot more with less input.

However, we can also see health, education, and construction costs rising. To put this in perspective: over the past fifty years, university and health insurance costs have gone up 10x, and housing costs have increased by about 50%.

You’d expect these industries to get cheaper because of technology and globalization, but instead they’re getting far more expensive (much faster than inflation) while the product doesn’t improve.

Why is this the case?

One explanation is Baumol’s cost disease — the idea that wages in stagnating industries rise in response to rising wages in industries with positive productivity growth. 

Basically, these industries are getting more expensive because wages are rising even though labor productivity isn’t, largely to offset gains in other industries.

Put differently: as technology increases productivity for workers in productive industries, unproductive industries have to pay their workers more to compete. That’s why salaries rise even though labor productivity doesn’t.

Since workers can migrate from sector to sector and wages get set across industries, you now see industries with neutral (and sometimes negative) productivity growth setting wages as if they had positive productivity growth. So prices for consumers in sectors like healthcare, education, and housing just explode, and the same stuff gets more expensive every year without getting any better. In some cases, those same things may even be getting worse. And there’s no reason why this would ever stop.
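Here is a stylized illustration of Baumol's mechanism. The numbers are invented, but the dynamic is the one described above: one sector doubles its output per worker each decade while the other stands still, yet wages equalize across both.

```python
# Invented numbers: a "productive" sector doubles output per worker each
# decade; a "stagnant" sector (think lectures or checkups) does not.
# Wages track the productive sector, so the stagnant sector's unit cost explodes.

def unit_cost(wage, output_per_worker):
    return wage / output_per_worker

wage = 1.0
productive_output = 1.0
stagnant_output = 1.0

for decade in range(3):
    productive_output *= 2  # technology doubles productivity
    wage *= 2               # wages rise to match the productive sector
    # stagnant_output stays at 1.0: same output per worker as before

print(unit_cost(wage, productive_output))  # 1.0: flat unit cost
print(unit_cost(wage, stagnant_output))    # 8.0: 8x more expensive, no quality gain
```

Same wage growth, opposite price outcomes: that's the disease.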

But this can’t be the entire story. 

Another explanation is poorly implemented government intervention — examples being occupational licensing in healthcare and education subsidies via loans. 

In these cases, the government uniquely distorts markets in ways that make them neither market driven nor government provided (as in other countries). This results in:

  1. Opaque markets with no transparent pricing

  2. Entrenched industries in which we can’t modify regulations

  3. Intermediated markets where the patient isn’t the customer

Yet another explanation, and a more positive one, is that we simply give more people access to these services. More patients & more students mean more staff — total cost goes up because we serve and employ more people.

Alongside these explanations, people try to justify cost disease in numerous ways.

One claim is that cost disease is endemic to service sectors — but we don’t see skyrocketing prices in other service industries where people pay out of pocket (e.g. restaurants or fast food places).

Others try to reframe it entirely by saying it’s less of a cost disease and more of a “wage bonus.” They say our teachers and doctors get to make more money, and we can thus attract great talent to those professions too (which is great!).

Zooming out: In the 1930s, Keynes predicted, based on the rate of GDP growth, that his grandchildren would only work 15-hour weeks, because that’s all that would be required in a richer society.

But looking at where we are today, we’ve gotten as rich as we expected, yet we’re nowhere near what Keynes predicted — partially because the cost of basic needs (healthcare, education) has gone up faster than wages have.
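The compounding behind Keynes's guess is easy to check. In "Economic Possibilities for our Grandchildren" he figured living standards would rise four to eight times over a century, which works out to roughly 1.4 to 2.1 percent annual growth (the rates below are my back-of-envelope fit, not his figures):

```python
def multiple_after(years, annual_growth):
    """How many times richer an economy gets compounding at a fixed annual rate."""
    return (1 + annual_growth) ** years

print(round(multiple_after(100, 0.014), 1))  # roughly 4x over a century
print(round(multiple_after(100, 0.021), 1))  # roughly 8x over a century
```

He was roughly right about the riches; it's the 15-hour week that didn't materialize.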

Think about how big of a deal cost disease is: If we didn’t have technological and productivity growth, would healthcare be 50x as expensive as it is now? 100x?

Our problem isn't too much technology or people being too excited about technology — it’s that we don't have nearly enough of it. These cartel-like legacy industries are way too hard to disrupt.

Indeed, because of the lack of disruption, a lot of what doctors are doing today is the equivalent of what farmers were doing 300 years ago: if you took a modern farmer with modern technology and told them they had to go back to how farming used to be, it would lead to much worse food produced at higher prices, with many more people going hungry.

Hopefully we’ll look back in horror at how expensive and ineffective our current healthcare system is relative to what it will be in the future.

More market-based solutions would likely lead to more technological growth, and we see this in LASIK eye surgery. It’s actually quite inexpensive, and the price has been dropping over time. This is no different than heart or brain surgery, and yet the quality of improvement and cost reduction curve are completely unlike any other surgical procedure. This is because it’s paid out of pocket and not through the insurance system. It’s not something that other people pay for, so there isn’t as much politics around it. 

And that’s just one example. We’re also seeing others in education and housing, which are absolutely critical. Software must eat the world, or else these unproductive industries will eat the economy, and we’ll have no option but to have the government redistribute wealth across the board to allow workers in healthcare, education, and elsewhere to afford the services they provide, which we obviously need.

Effectively, we need a call to arms that acknowledges cost disease and the need to see technological innovation bend the cost curves in these unproductive industries.

After all, technology is what allows us to feed billions of people, lift people out of poverty, and provide goods & services for 10x the number of people for 1/10th of the cost. Technology allows us to do more with less, which gets us out of our zero-sum competition and into a positive-sum world.

This should have bipartisan support. Liberals should love this, since they want cheaper education and cheaper healthcare, and conservatives should love this too, since they understand that without rapid innovation the government will rush to fill the gap.

It’s only a matter of time before we realize this, but I hope it’s soon lest we remain stuck with the cost disease.

Read of the week: Katherine Boyle on Seriousness. I also had her on my podcast this week.

Watch of the week: RIP, DMX. Classic freestyle with him and friends

Listen of the week: Alex Kaschuta’s podcast w/ Patrick Deneen.

Cosign of the week: Kyla Scanlon, who makes great finance content and also launched On Deck Investing this week

Until next week,


On Economic Growth

And why it matters so much

I was trying to find a short read on why economic growth is so important, but since it wasn’t readily available, I decided to collate one myself. To do so, I’ve borrowed liberally (and literally) from Our World in Data and Tyler Cowen’s Stubborn Attachments, which are much better (but also longer).

When you look at economic history, it’s a very simple story with only two parts:

The first part is the very long time in which the average person was very poor, life expectancy was low, and child mortality rate was high.

This lasted through most of history. What people used for shelter, food, clothing, and energy stayed basically the same for a very long time. If you lived during the 17th century, for example, your quality of life was pretty similar to those who lived a thousand years earlier.

The second part is much shorter — a time in which average incomes grew rapidly. This was the time when we first experienced economic growth.

Before this, population density determined living standards — the economy was zero sum. More people meant fewer resources per person. More births meant lower incomes; more deaths meant higher incomes.

The latter actually happened during the Black Death, which killed a huge share of Europe’s population but left survivors better off. Indeed: The economy was a zero-sum game where the death of others meant more resources for everyone else.

This zero sum dynamic is what Malthus feared would exist in perpetuity — that any increase in productivity would only increase the population size, leaving no change to living standards. Thus, poverty would always remain the condition of the masses.

This is why there was resistance to help the poor in pre-growth times, because the concern was they’d have more kids, and that would make everyone else worse off.

Then, from around 1685 onwards, first in England and soon everywhere else, population size and income per person started growing in tandem. We had officially escaped the Malthusian Trap.

As we now know, Malthus was right about describing our past (or the present, from his perspective), but he was wrong about the future. He failed to appreciate how productivity growth would mean that higher populations would drive higher incomes & better living standards for all.

Indeed: It was now possible for population and income to rise in lockstep. Before that, technological growth only produced more people, not richer people. Economic growth meant population growth and rising prosperity could go together.

It’s hard to overstate how much economic growth has improved our standard of living. The average person in the world is 4.4 times richer than they would have been in 1950, and over that same time period the world’s population has tripled. If output hadn’t grown along with the population, everyone in the world would be 3x poorer than they were in 1950.
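A quick back-of-the-envelope sketch of those figures (the 4.4x and 3x multiples are the essay’s numbers; everything else is just arithmetic):

```python
# Sketch of the arithmetic above: 4.4x per-capita income growth and a
# 3x population increase since 1950 are the figures from the text.
pop_multiple = 3.0       # world population vs. 1950
income_multiple = 4.4    # average income per person vs. 1950

# Total world output must have grown by both factors combined.
total_output_multiple = pop_multiple * income_multiple
print(f"Total output grew ~{total_output_multiple:.1f}x")  # ~13.2x

# If total output had stayed at its 1950 level while population tripled,
# each person would be left with a third of the 1950 average income.
stagnant_income = 1 / pop_multiple
print(f"Per-capita income with zero growth: {stagnant_income:.2f}x the 1950 level")
```

In other words, total output grew roughly 13x: enough to triple the number of people while making each of them over four times richer.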

And yet we still don’t intuitively understand this. 

Russ Roberts once asked journalists how much they thought living standards have improved since 1900, and they said about 50%. In reality, the answer is about 5 to 7 times.

Economic growth is positively correlated with almost every measure we care about. People with higher living standards live longer lives, suffer less pain, and recover better from trauma. The richer the population, the happier and healthier its people are. Just compare Cuba & Singapore to see.

Even small amounts of economic growth can have enormous impacts. If you take a time period of about 100 years, and you have the American economy grow one percentage point less each year over that time, we would today have the national wealth of Mexico, not the United States. 
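The mechanism behind that claim is compounding. Here’s a minimal sketch; the 3% and 2% rates are illustrative assumptions (the essay specifies only the one-percentage-point gap and the 100-year horizon):

```python
# Illustrative sketch: how one percentage point of annual growth
# compounds over a century (3% vs. 2% are assumed example rates).
def compound(rate: float, years: int) -> float:
    """Total multiple of the starting value after `years` of constant growth."""
    return (1 + rate) ** years

years = 100
high = compound(0.03, years)  # economy growing 3% per year
low = compound(0.02, years)   # same economy growing one point less

print(f"3% for {years} years: {high:.1f}x the starting wealth")
print(f"2% for {years} years: {low:.1f}x the starting wealth")
print(f"The faster economy ends up {high / low:.2f}x richer")  # ~2.65x
```

A single percentage point, held for a century, leaves the faster-growing economy roughly two and a half times richer, which is the scale of the US-vs-Mexico gap the essay is pointing at.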

Tyler Cowen expounds: 

“Extra wealth also serves as a cushion against very bad events, or at least against later declines in wealth. Ten or fifteen years ago, it was common to hear the claim that once a nation reaches the level of material wealth found in Greece, happiness more or less flatlines. Indeed, this was more or less where the flatlining point seemed to be. Yet since the Greek economic crisis, dating from 2009, no one uses the Greek example to make a point about the flatlining of the happiness-income relationship. The country lost almost a quarter of its economic output, unemployment has risen to over twenty percent, there have been riots in the streets, a neo-Nazi party was elected to the legislature, and at times basic medicines have been unavailable. Having some additional reserves of wealth prior to the crisis would have helped the country a good deal, and might have prevented the troubles altogether by easing debt repayment.”

Tyler’s book, more broadly, is an argument for why pursuing economic growth is a moral imperative for our society. He inverts Rawls’ argument: where Rawls says we should arrange the economy to make the worst-off person better off, Cowen says we should redistribute wealth only insofar as it supports economic growth (since growth is what expands the pie and creates the wealth to begin with).

In a world where we optimize for economic growth (with caveats for respecting human rights and environmental sustainability), some people are worse off in the short term, but everyone is better off in the long term, as evidenced by how people live today vs. 100 years ago.

Cowen’s contention is that if we sacrifice economic growth today, we’re stealing from future people — and we should obviously care about our future children.

Indeed: Rawls’ Veil of Ignorance takes into account where you were born, but not when. If it did, you’d prioritize anything that promoted economic growth — because if you didn’t, you’d be effectively stealing from future people.

Redistribution is a one-off transfer, while economic growth compounds, and yet people don’t fully appreciate the difference. When we redistribute wealth in a way that sacrifices economic growth, we’re also redistributing resources away from our future children. The key lesson? Good redistribution is about increasing growth, not the opposite.

This raises the question: if economic growth is so obviously good for us, why is it unpopular?

Well, first off, capitalism has no PR department. Our government takes credit for private gains, even though the private sector’s growth is what supports government transfers in the first place. 

But because these transfers come from the government itself, people fail to make the much-needed connection back to private enterprise. We say billionaires shouldn’t exist, but we don’t mind the government itself being a trillionaire because of the illusion of accountability: though 90% of the government isn’t elected, it still claims the moral high ground.

To fix this, CEOs and their corporations need to make their societal impact legible. Maybe we tie UBI to the S&P? This would align incentives such that when corporations get rich, people do too.

Or maybe a better way to redistribute wealth is to broaden equity ownership to everyone. If people use Facebook, for example, perhaps they could earn equity in the company for being daily active users of its product.

We underestimate the psychological effects of equity — it aligns everyone on the way up (unity) instead of letting them fight for scraps on the way down (divide).

So if we agree that equity ownership & economic growth are positive externalities for the world, then the demonization of billionaires is misguided. When people get rich in a sustainable way, we should celebrate them because, among other reasons, some of that money goes to the government, which (hopefully) means more education, healthcare, and other programs.

Without growth, however, we wouldn’t be able to pay for these services. Each public job is in effect “sponsored” by a private job via taxes.

Indeed, a lot of capitalism is just unintuitive. It’s tough to explain why raising the minimum wage might not help someone who’s making $8 an hour. It’s tough to explain compounding returns. We evolved for the zero-sum world that Malthus predicted would continue, and we still haven’t internalized that the economy is now positive-sum.

Capitalism hasn’t been portrayed inspiringly — a system with the pursuit of self-interest as its guiding light doesn’t exactly nourish the soul.

By contrast, communism & socialism are very inspiring, calling for the best of humans and resonating most with how we relate to our friends and family ("From each according to his ability; to each according to his needs"). 

Since communism is the approach we take with our families, we assume it should also work at scale. And since capitalism is the opposite — unnatural within families — the fact that it feeds billions of people is counterintuitive.

This is how you get global corporations promoting Marxism.

Indeed: during the Spanish Civil War, soldiers were said to have died with the word "Stalin" on their lips. We don’t see people dying with “free markets” on their lips...

This presents an opportunity: How do we make capitalism more popular?

In 2019, Patrick Collison and Tyler Cowen presented their case for why Progress Studies should be an academic field. The explicit idea was to help us study how institutions and cultures lead to more economic growth, while the implicit idea was to raise the status of economic growth in the process.

This has often been contrasted with transhumanism as an approach to popularizing technological growth. Both are responses to the failure of liberalism/libertarianism to win support for pro-growth policies on immigration, free trade, and lower government spending.

People need a mission, so perhaps the Progress movement needs to lean one of two ways to resonate with a wider audience: into futurism (space, longevity, etc.) or into moral authority (ending poverty). The options are “let’s end poverty for all” or “let’s end death for all.” Of course, we could do both; it’s just a matter of emphasis.

More broadly, I wonder if a cultural ethos/movement around “positive sum” would resonate widely. It’d recognize our increasingly intertwined fates, the importance of expanding and sharing the pie, and the need to refocus our attention towards abundance rather than scarcity, envy, and status games.

The movement would need to concede how unintuitive capitalism is and acknowledge that positive-sum thinking is a learned behavior. It would pair socialism at the level of local relationships and community (which provides priceless connection and reciprocal benefits, even if it’s inefficient, with no metrics or scale) with capitalism at the global level (a world of market-based prices, comparative advantage, optimization, scale, and wealth creation).

It would need a vigorous defense of technology, especially since society survives or dies based on its ability to sustainably grow the pie. Without a growing pie, there’s a loser for every winner, and we engage in zero-sum conflict. The bigger the pie, the more there is to redistribute as well. Technological innovation, if done well, helps do more with less—growing the pie without hurting the environment.

In the coming weeks we’ll dive deeper into some central questions around what causes and hinders economic growth.

Until next time...

Read of the week: Alex Kaschuta on Gatekeeping

Listen of the week: Jim Rutt and Samo Burjia

Watch of the week: Jesse Walden on Means of Creation

Cosign of the week: Bitclout is ingenious (the mechanic of stock-like investing in people, separate from their specific implementation and surrounding controversy). I just can’t believe I didn’t think of it first. We are doubly inspired to bring Cosign back; maybe it should have an economic component as well.

On Deck Updates:

  • We launched On Deck Miami w/ Keith Rabois, which will take place in May

  • I’m hiring a Chief of Staff for On Deck, and also an EA.

  • We’re hiring a Thinker in Residence who will help incubate this idea—a personal trainer for intellectual growth. Reach out if appropriate.

Until next week,

