Microsoft AI chief Mustafa Suleyman warns of the dangers of ‘seemingly conscious AI’

Mustafa Suleyman joined Microsoft last year to take charge of consumer AI efforts. Stephen Brashear/Getty Images

Will AI systems achieve human-like “consciousness”? According to Microsoft AI CEO Mustafa Suleyman, the answer may be yes, given the field’s rapid pace of progress. In a new essay published yesterday (August 19), he described the emergence of “seemingly conscious AI” (SCAI) as a development with serious social risks. “In short, my core concern is that many people will begin to believe in the fantasy of AIs as conscious entities so strongly that they will soon advocate for AI rights, model welfare and even AI citizenship,” he wrote. “This development will be a dangerous shift in AI progress and deserves our immediate attention.”

Suleyman is particularly concerned about the growing prevalence of “AI psychosis,” a phenomenon reported across Silicon Valley in recent months in which users reportedly lose touch with reality after interacting with generative AI tools. “I don’t think it’s limited to people who already have mental health problems,” Suleyman said.

OpenAI CEO Sam Altman acknowledged similar concerns about users’ attachment to chatbots after OpenAI temporarily cut off access to its GPT-4o model earlier this month, a move that drew backlash from users who had come to depend on the model.

“I can imagine a future where a lot of people really trust ChatGPT’s advice for their most important decisions,” Altman said in a recent post on X. “Although that could be great, it makes me uneasy.”

Not everyone sees it as a red flag. David Sacks, the Trump administration’s “AI and crypto czar,” has compared concerns about AI psychosis to the moral panic that surrounded social media. “It’s just a manifestation or an outlet of pre-existing problems,” Sacks said earlier this week on the All-In podcast.

According to Suleyman, these debates will only grow more complicated as AI capabilities improve. Suleyman co-founded DeepMind in 2010 and later launched Inflection AI, a startup whose team was largely absorbed by Microsoft last year.

Suleyman argues that building SCAI may become a reality in the coming years. To produce a convincing illusion of human-like consciousness, an AI system would need language fluency, a persuasive personality, long and accurate memory, a sense of autonomy, and goal-planning capabilities, all of which large language models (LLMs) either already possess or soon will.

Suleyman said that while some users may view SCAI as an extension of their phone or as a pet, others “will believe it is a fully emerged entity, a conscious being deserving of real moral consideration in society.” He added that, in time, these people will argue that such systems are owed protections under the law, making AI rights a pressing ethical issue.

Some in the AI field are already exploring “model welfare,” a concept that would extend ethical considerations to AI systems. Anthropic launched a research program in April to investigate model welfare and possible interventions. Earlier this month, the startup gave its Claude Opus 4 and 4.1 models the ability to end harmful or abusive user interactions after observing “patterns of apparent distress” in the systems during certain conversations.

Encouraging concepts such as model welfare is “both premature and, candidly, dangerous,” Suleyman said. “All of this will exacerbate delusions, create more dependency-related problems, prey on our psychological vulnerabilities, add new dimensions of polarization, complicate existing struggles for rights, and create a huge new category error for society.”

To prevent SCAI from becoming commonplace, Suleyman argues, AI developers should avoid promoting the idea of conscious AI and should design models that minimize markers of consciousness and triggers of human empathy. “We should build AI for people; not to be a person,” he said.
