
Singapore’s Vision for AI Safety Bridges the US–China Divide

The government of Singapore today released a blueprint for global cooperation on AI safety, following a meeting of AI researchers from the United States, China, and Europe. The document lays out a shared vision of pursuing AI safety through international cooperation rather than competition.

“Singapore is one of the few countries on the planet that gets along well with both East and West,” said Max Tegmark, a scientist at MIT. “They know that they’re not going to build [artificial general intelligence] themselves, it will be done to them, so it is very much in their interest to have the countries that are going to build it talk to each other.”

The countries considered most likely to build AGI are, of course, the United States and China, and yet those nations seem more intent on outmaneuvering each other than on working together. In January, after Chinese startup DeepSeek released a cutting-edge model, President Trump called it a “wake-up call for our industries” and said the United States needed to be “laser-focused on competing to win.”

The Singapore Consensus on Global AI Safety Research Priorities calls for researchers to collaborate in three key areas: studying the risks posed by frontier AI models, exploring safer ways to build those models, and developing methods for controlling the behavior of the most advanced AI systems.

The consensus was developed at a meeting held on April 26 alongside the International Conference on Learning Representations (ICLR), a major AI event hosted in Singapore this year.

Researchers from OpenAI, Anthropic, Google DeepMind, xAI, and Meta took part in the AI safety event, as did academics from institutions including MIT, Stanford, Tsinghua University, and the Chinese Academy of Sciences. Experts from AI safety institutes in the United States, the United Kingdom, France, Canada, China, Japan, and South Korea also attended.

“In an era of geopolitical fragmentation, this comprehensive synthesis of cutting-edge research on AI safety is a promising sign that the global community is coming together with a shared commitment to shaping a safer AI future,” Xue Lan, dean of Tsinghua University, said in a statement.

The development of increasingly capable AI models, some of which have surprising abilities, has caused researchers to worry about a range of risks. While some focus on near-term harms, including problems caused by biased AI systems or the potential for criminals to exploit the technology, a significant number of researchers believe that AI may pose an existential threat to humanity as it begins to outsmart humans across more domains. These researchers, sometimes called “AI doomers,” worry that models may deceive and manipulate humans in pursuit of their own goals.

The potential of AI has also stoked talk of an arms race between the United States, China, and other powerful nations. The technology is viewed in policy circles as critical to economic prosperity and military dominance, and many governments are trying to stake out their own visions and regulations governing how it should be developed.
