Do AI models deserve rights? Big Tech can’t decide


Made with Midjourney

Can an AI model feel and suffer in the same way humans do? Claude maker Anthropic thinks it’s possible: The company launched a program earlier this year to study “model welfare” and explore questions such as whether, and when, “the welfare of AI systems deserves moral consideration.”

This isn’t just a theoretical debate. Anthropic’s Claude Opus 4 and 4.1 models can now end conversations with users who are being “persistently harmful or abusive,” says the company. When pushed toward harmful requests, the company claims Claude showed a “pattern of apparent distress” in its responses.

Other major players are asking the same questions. Researchers at AI labs such as OpenAI have also embraced the idea of studying AI welfare, and Google’s AI team recently posted a job listing for a researcher to study “cutting-edge societal questions around machine cognition, consciousness and multi-agent systems.”

Some notable figures think it’s a bad idea. Microsoft AI CEO Mustafa Suleyman called the idea of model welfare “both premature, and frankly dangerous” in a recent blog post. Suleyman argues that pursuing it will exacerbate delusions, prey on our psychological vulnerabilities, and distract us from more pressing societal problems.