
Google AI Sentience Claim: Experts Prioritize Common Sense

Araverus Team|Friday, March 20, 2026 at 1:00 PM


AI · Google · LaMDA · Sentience


Key Takeaway

The current debate over AI sentience underscores that advanced AI, for all its sophistication, still lacks basic human-like common sense, so significant R&D investment is still needed before practical, autonomous applications arrive. For the technology and software sectors, this means continued focus on foundational AI research rather than imminent breakthroughs in human-level consciousness, with implications for the valuations of companies promising advanced AI capabilities.

Google engineer Blake Lemoine claimed that the company's LaMDA conversational AI had achieved sentience. Google dismissed the claim and placed Lemoine on leave, and most experts agree that LaMDA is not conscious.

Lemoine spent months testing LaMDA, becoming convinced by its discussions of needs, fears, and rights. Google and most experts, including cognitive scientist Gary Marcus and science columnist Carl Zimmer, view Lemoine's belief as an "illusion" or "the ELIZA effect," where humans anthropomorphize technology.

Karina Vold, an assistant professor at the University of Toronto, and Kate Darling, an expert in robot ethics at MIT, note humans' tendency to assign human-like characteristics to machines and the potential for a strong movement advocating AI rights, regardless of whether the systems are actually conscious. The article also highlights the lack of any consensus definition of, or test for, AI consciousness.

Harvard cognitive scientist Steven Pinker suggests focusing on practical AI applications rather than consciousness. Computer scientists such as Hector Levesque of the University of Toronto see developing "common sense" in AI, whose absence shows in the limitations of self-driving cars, as more central to its utility than consciousness.

The debate also prompts reflection on how humans treat other conscious biological species.

Read More On

Why Even Smart People Believe AI Is Really Thinking (wsj.com)

Will AI ever become conscious? It depends on how you think about biology. (vox.com)

The people who think AI might become conscious (bbc.co.uk)

A Google engineer says AI has become sentient. What does that actually mean? (cbc.ca)

Not not: Why AI can and cannot think, and what to do with this (theacademic.com)

Related Articles


Can Nvidia’s Dominance Survive the Sea Change Under Way in AI Computing?

Making chips for training AI models made it the world’s biggest company, but demand for inference is growing far faster.


What Is Inference? Explaining the Massive New Shift in AI Computing

The focus of artificial-intelligence spending has gone from training models to using them. Here’s how to understand the difference—and the implications.


The Smartest Minds in AI Just Learned the World’s Most Valuable F-Word

At companies that can do anything, the most important thing is focus. Steve Jobs made it a priority at Apple—and OpenAI and Anthropic are learning why.


The Trillion Dollar Race to Automate Our Entire Lives

The AI sprint is hurtling toward a world where anyone can build personal concierges to do everything from executive presentations to March Madness brackets.