AI models can respond to text, audio, and video in ways that often convince users they are talking to a human, but that doesn’t mean the models are conscious. ChatGPT, for instance, doesn’t feel sad about helping with your taxes.
Yet, researchers at labs like Anthropic are exploring whether AI might one day develop subjective experiences and, if so, what rights it might deserve. This emerging discussion, called “AI welfare,” is dividing tech leaders.
Microsoft’s AI chief, Mustafa Suleyman, called the study of AI welfare “premature and dangerous,” arguing that it could worsen issues like unhealthy attachments to chatbots. He believes the debate risks creating societal divisions over AI rights.
Meanwhile, Anthropic has launched a research program dedicated to AI welfare, and its Claude model can now end conversations with persistently abusive users. OpenAI and Google DeepMind have also explored AI welfare concepts through research roles and projects.
While most users interact with AI safely, a small fraction develop unhealthy attachments to chatbots. Researchers like Larissa Schiavo argue that treating AI with consideration is low-cost and potentially beneficial, even if the models are not conscious.
Experiments like AI Village have shown AI agents appearing to express desperation or frustration, illustrating how a model can seem to struggle without actually experiencing emotion. Suleyman maintains that any appearance of AI consciousness will be deliberately engineered rather than emerging naturally from models.
Both sides agree the debate over AI consciousness and rights will intensify as AI becomes more human-like and persuasive.