
AI · Google · LaMDA · Sentience
Google engineer Blake Lemoine claimed the company's LaMDA AI chatbot generator had achieved sentience. Google dismissed the claim and placed Lemoine on leave, and most experts agree LaMDA is not conscious.
Lemoine spent months testing LaMDA and became convinced by its discussions of its own needs, fears, and rights. Google and most experts, including cognitive scientist Gary Marcus and science columnist Carl Zimmer, view Lemoine's belief as an "illusion" or an instance of "the ELIZA effect," in which humans anthropomorphize technology.
Karina Vold, an assistant professor at the University of Toronto, and Kate Darling, an expert in robot ethics at MIT, note humans' tendency to assign human-like characteristics to machines and the potential for a strong movement advocating for AI rights, regardless of whether the systems are actually conscious. The article highlights the lack of a consensus definition of, or test for, AI consciousness.
Harvard cognitive scientist Steven Pinker suggests focusing on practical AI applications rather than consciousness. Computer scientists such as Hector Levesque of the University of Toronto see developing "common sense" in AI as more central to its usefulness than consciousness, citing the limitations of self-driving cars as an example.
The debate also prompts reflection on how humans treat other conscious biological species.