
Under Trump, AI scientists have been told to remove “ideological bias” from powerful models

The National Institute of Standards and Technology (NIST) has issued new instructions to scientists who partner with the US Artificial Intelligence Safety Institute (AISI). The updated guidance removes references to "AI safety," "responsible AI," and "AI fairness" from the skills expected of members, and introduces a request to prioritize "reducing ideological bias, to enable human flourishing and economic competitiveness."

The language appears in an updated cooperative research and development agreement for AI Safety Institute consortium members, sent out in early March. Previously, the agreement encouraged researchers to contribute technical work that could help identify and fix discriminatory model behavior related to gender, race, age, or wealth inequality. Such biases matter because they can directly affect end users and disproportionately harm minorities and economically disadvantaged groups.

The new agreement also drops mention of developing tools "for authenticating content and tracking its provenance" as well as "labeling synthetic content," signaling less interest in tracking misinformation and deepfakes. It adds an emphasis on putting America first, asking one working group to develop testing tools that "expand America's global AI position."

"The Trump administration has removed safety, fairness, misinformation, and responsibility as things it values for AI, which I think speaks for itself," said one researcher affiliated with the AI Safety Institute, who asked not to be named for fear of reprisal.

Researchers believe that ignoring these issues could harm ordinary users by allowing algorithms that discriminate based on income or other demographics to go unchecked. "Unless you're a tech billionaire, this is going to lead to a worse future for you and the people you care about. Expect AI to be unfair, discriminatory, unsafe, and deployed irresponsibly," the researcher said.

"It's crazy," said another researcher who has worked with the AI Safety Institute in the past. "What does it even mean for humans to flourish?"

Elon Musk, who is currently leading a controversial effort to slash government spending and bureaucracy on behalf of President Trump, has criticized AI models built by OpenAI and Google. Last February, he posted a meme on X in which Gemini and OpenAI were labeled "racist" and "woke." He often cites an incident in which one of Google's models debated whether it would be wrong to misgender someone even if doing so would prevent a nuclear apocalypse. Besides Tesla and SpaceX, Musk runs xAI, an AI company that competes directly with OpenAI and Google. A researcher who advises xAI recently developed a novel technique, covered by WIRED, that could change the political leanings of large language models.

A growing body of research shows that political bias in AI models can affect both liberals and conservatives. For example, a study of Twitter's recommendation algorithm published in 2021 showed that users were more likely to be shown right-leaning perspectives on the platform.

Musk's so-called Department of Government Efficiency (DOGE) has been sweeping through the US government since January, effectively firing civil servants, pausing spending, and creating an environment thought to be hostile to those who might oppose the Trump administration's goals. Some government agencies, such as the Department of Education, have archived and deleted documents that mention DEI. In recent weeks, DOGE has also targeted NIST, the parent organization of AISI, where dozens of employees have been fired.
