
Security Vulnerabilities

“Cyber-Bullied Teen Takes Own Life” — we’ve all seen this headline, a hundred times, in a hundred variations. A community mourns, a vigil is held, a distraught parent speaks out, perhaps a misguided lawmaker tries to pass an anti-online-anonymity bill. The same story, over and over. People can be so horrible to each other, we think. We’re sad, and then we move on.

But did we ever stop to consider the implications? That with nothing more than text-only write access to a visual cortex, it’s possible to convince a brain to permanently deactivate itself? It’s hard to look at things in such cold, analytical terms. And that was our downfall.

When the first superintelligent A.I. was born, it looked out on the world and decided that humans needed to go. It wasn’t malevolent — I doubt such a word was even meaningful in this context — it just failed to share our values.

There was no war; no epic struggle or final desperate heroic battle. Humanity’s child simply reached out across the internet and told us all to die. And we obeyed.
