Artificial intelligence has evolved so much in recent years that it now resembles...wait for it...un-artificial intelligence (a.k.a. human intelligence).
At least that's what Blake Lemoine, an ex-Google engineer, thinks.
On June 11th, the Washington Post reported that Google had suspended Lemoine after he made the eyebrow-raising claim that the search giant's unreleased AI system, Language Model for Dialogue Applications (LaMDA), had "come to life." Notably, Google took issue with him breaching confidentiality, not with his assertion.
Lemoine published a transcript of his conversations with the chatbot, which offers a peek into an algorithm so advanced it convinced an expert it was a real person with feelings and emotions. The story has revived the debate over whether AI-powered language models will eventually catapult us into a dystopian "I, Robot" reality.
Most experts are confident that's not a possibility (though Elon Musk famously thinks it is). Either way, the story demonstrates just how far AI-powered language models have come in blurring the lines between humans and machines.