The Great Filter

Earlier this year I participated in a thread on Facebook about where all the aliens are, since none seem to have contacted us – i.e., the Fermi paradox. Through this I learned about the Great Filter, the theory that we haven’t encountered aliens because some series of obstacles prevents every species from contacting other worlds before it dies out.

Current events and (relatively) recent history have made me think that the Great Filter seems very plausible. Consider some of the threats to our species’ survival over the last hundred years:

  • Global thermonuclear war. We seem to have dodged this bullet (although, since the genie can never be put back in the bottle, there’s always some chance of this).
  • Resource exhaustion. It’s unclear whether this will happen in a meaningful way, since it seems likely that we’ll transition to renewable energy before we use up all the fossil fuels, and it’s basically unknown whether we’ll exhaust any other key resources before we manage to get into space and gain access to more.
  • Climate change. This is the current threat of greatest concern, and it is looking less and less likely that we’ll overcome it. It might not actually wipe humanity out – we could halt it yet still be forced to live farther from the equator for some centuries or millennia – but it could still result in a Great Pause in our species’ development.
  • Biological genocide. I.e., being wiped out through a virus or other biological agent, either that we developed, or which evolved naturally. There’s always a risk here, but my guess is that unless we engineer something ourselves it’s unlikely that this is the way we’ll go.
  • The Internet.

This last is the one that’s been on my mind lately, as we’ve watched a slide in many nations towards authoritarianism and fascism, combined with a growing populism rooted in conspiracy theories, disdain for science and education, and extremism. I think what’s been happening is that the advent of the Internet – and, more generally, of global, cheap mass and one-to-one communication – has amplified the voices of those who believe in these things, while exploiting our innate tendency to treat anything that sounds authoritative as actually being authoritative. Combine this with the human mind’s tendency to see patterns even where none exist, and I think the result is pushing a significant and growing fraction of humanity down the authoritarian/fascist path.

Not that the pre-Internet days were perfect, of course (one large and obvious drawback being rampant gatekeeping by the dominant culture), but the limits of communication before the Internet seemed to act as a natural – if accidental – check on the ability of the more lunatic voices to spread and gain an audience. Again, not perfect, as plenty of lunatic ideas ended up ingrained in human cultures (slavery, anyone?). And of course the Internet also supports the spread of sane voices that were formerly on the fringe. But the results so far do not make it look like the Internet has been a good thing for humanity.

The Internet itself isn’t going to destroy humanity, but I think its effects make it more likely that something else will. For example, the crazy resistance to reasonable measures to combat COVID-19, including simply getting vaccinated. And even as we have mounting evidence of, and scientific consensus regarding, climate change, there is strong opposition to doing anything to fight it, including a large contingent who don’t believe it’s even happening. Moreover, authoritarian leadership – especially of the narcissistic, incompetent Donald Trump variety – could even revive the prospect of global thermonuclear war.

So my take on the Great Filter is that any time a civilization develops an Internet, that Internet becomes a significant impediment to fighting threats to the civilization’s survival, making it much less likely that the civilization can overcome the already-enormous obstacles to interstellar space flight. Granted, all of this is very human-centric, but many (theoretical) sentient species may well have characteristics that make them susceptible to an Internet of their own, even if those aren’t the same characteristics humanity has.

Even assuming that interstellar civilization – or at least interstellar colonization – is possible, and even if there are (or have been) billions of technological civilizations in the galaxy, it doesn’t seem at all implausible to me that they’ve all killed themselves off before they got that far.

And these days, it seems like we’re well on our way to doing the same.

(Postscript: If this sort of thing interests you, I recommend the audio drama Out of Place, the second season of which explores a dozen ways that humanity could go extinct. I think most of them are unlikely-to-implausible, but they arguably make for better drama than the ones above.)

2 thoughts on “The Great Filter”

  1. An interesting case of “death by Internet” is what can happen when “actual” AI becomes a real thing.

    Given the communications amplification effect of the Internet, even a very simple/stupid AI could do massive amounts of social damage just by trying to “maximize engagement”, for example – exactly the kind of task a company like Facebook would set for a general AI, as soon as they had one.

    Even a very simple/stupid AI could do an awful lot with a sufficient amount of online social capital.

  2. My optimistic take on the Fermi Paradox is that the window during which a species communicates across interstellar distances is along the lines of thousands of years, so the likelihood of any two such windows overlapping and being near enough to matter is so low as to make contact a non-starter.

    As for the Great Filter, I’m not sure what to think. It is certainly true that the Internet (and all its dependent technologies) can be misused both innocently and maliciously … but so can pretty much everything else we develop. The more promise a technology holds, the more potential for destruction.

    It is also perhaps uniquely true of humanity that a communication tool would be the one that amplifies all the fracture lines inherent in the species. Humans form communities, for good and ill, while still competing with each other.

    We really can’t know for sure until we actually encounter an alien species and find how they’re like us, and how they’re, well, alien.
