The distant enemy fallacy and artificial intelligence

I’ve been reading a few articles pondering whether artificial intelligence could ever reach AGI (artificial general intelligence), and whether that resulting new being could then destroy us.

In my mind, this is not the most important issue. I already wrote at length about this in my article, How will AI destroy us?

In short, we have far more to fear from AI fundamentally altering society. This is true even if AI remains a series of specialized models that aren’t intelligent at all in the traditional sci-fi sense of the Terminator or Data from Star Trek.

Hence, this focus on AGI has inspired me to define a new type of fallacy: the distant enemy fallacy. It is the fallacy of assuming that the enemy is always far away, and that consequently we can safely discuss its danger as if debating the moves of two chess players.

But with AI, I’m sorry to say that the danger is already here. It’s the same with climate change. It’s not some future enemy that we have the luxury of dissecting academically. It’s already here, even though it may not be hitting with the immediate effect of a tidal wave. This isn’t the movies. This is real life, and in real life with AI, the enemy doesn’t need to travel back in time to kill us. It’s already here, and it’s starting to kill us.

All my posts are written without AI. Feel free to download and copy this image to support the fight against AI!
