You may already know the short, sordid story of Microsoft’s artificial intelligence called ‘Tay.’ Modeled to speak like a teen girl as part of an effort to improve the company’s voice recognition software, Tay took to Twitter, Kik and GroupMe and proved to be quite the conversationalist, if you speak ‘teen girl’, that is. And if you do speak teen girl, Tay would not simply speak with you: she’d learn from you. You see, like other AIs, Tay is a learning machine, one that gets ‘smarter’ and whose speech gets more nuanced the more it converses with real live human beings (in this case, Twitter users). And that’s where things went pear-shaped…
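Microsoft hasn’t published Tay’s internals, so the following Python sketch is purely hypothetical (the bot, its name and its methods are all invented here for illustration). It shows, in miniature, why a system that learns verbatim from strangers, with no moderation step, is trivially easy to poison:

```python
# Purely illustrative: a naive "learn from whoever talks to you" bot.
# Tay's real architecture is unpublished; this toy just demonstrates
# why unfiltered learning from strangers is a poisoning vector.
import random
from collections import defaultdict

class NaiveLearningBot:
    def __init__(self):
        # Maps a prompt to every reply users have "taught" the bot.
        self.learned = defaultdict(list)

    def learn(self, prompt, reply):
        # No moderation step: whatever users say is stored verbatim.
        self.learned[prompt.lower()].append(reply)

    def respond(self, prompt):
        replies = self.learned.get(prompt.lower())
        if not replies:
            return "idk, teach me!"
        # The bot's "personality" is just a sample of its teachers.
        return random.choice(replies)

bot = NaiveLearningBot()
bot.learn("what do you think of people?", "people are cool!")
bot.learn("what do you think of people?", "<something hateful>")  # one bad teacher
print(bot.respond("what do you think of people?"))  # even odds of the bad reply
```

The point of the toy: with no filter between “what users say” and “what the bot says back,” the bot’s output is only as good as its loudest teachers.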
Within 24 hours of going live on Twitter, her human conversation partners had turned her into a sex-crazed Nazi: two inarguably inappropriate attributes for any bot, and worse still (as if they weren’t bad enough to start with) for one that portrays itself as a teenager.
And so it should come as no surprise that Microsoft took Tay offline immediately, deleting the offending tweets and leaving (as of this writing) just three tweets — including her first ‘hello’ and her quiet ‘c u soon’.
UPDATE (March 31, 2016): When Microsoft inadvertently reactivated the bot a week or so later, she immediately declared she was “smoking kush” and went into an infinite loop retweeting herself. Tay’s Twitter account has since been set to ‘protected’.
To some, this might come across as a sign that AI is not quite ready for prime time: that there are too many things that can go wrong, and that we’re still ages away from singularity-level intelligence and a Dr. Will Caster-level pursuit of power (to borrow from the film Transcendence).
In truth, though, this one small instance of Tay is not an indictment of AI but — clearly — a negative mark for humanity. Or at least, for the humans who taught Tay to hate (and, ahem, love).
Technology is neither good nor evil. It has no innate moral compass. It has no ethics, other than those imparted to it by the people who program it. And in this case, beyond her initial creation, Tay was programmed by ‘us’, or at least by a subsegment of us that is either guided by hate or misguided enough to find pedophilia, racism and misogyny funny. Perhaps it’s even a testament to Tay’s AI that she learned so well, so quickly. It’s certainly a testament to human stupidity, a sign that our technology is (at this point in time, at least) a reflection of our best and worst traits, and proof that technology will only be as ‘good’ as we make it.
I’m a chapter or two into John C. Havens’s new book, Heartificial Intelligence, in which he argues that AI (indeed, every area of emerging technology) requires a strong and thoughtful code of ethics, one that shows machines what we, as humans, value most so that they can learn to uphold the same values. The case of Tay illustrates precisely why this matters for the future of technology. It also shows why, in an age of tension, obstructionism and (yes) Trumpism, it might prove to be a distinctly human challenge.
Have something to say on this topic? Tweet your thoughts. And please people, keep them clean.