The National Institute of Standards and Technology (NIST) has issued new instructions to scientists who partner with the US Artificial Intelligence Safety Institute (AISI) that eliminate mention of ‘AI safety’, ‘responsible AI’, and ‘AI fairness’ from the skills it expects of members, and ask members to prioritize reducing ideological bias so as to enable human flourishing and economic competitiveness.
The information comes as part of an updated cooperative research and development agreement sent to AI Safety Institute Consortium members in early March. Previously, the agreement encouraged researchers to contribute technical work that could help identify and fix discriminatory model behavior related to gender, race, age, or wealth inequality. Such biases matter because they can directly affect end users and disproportionately harm minorities and economically disadvantaged groups.
The new agreement removes mention of developing tools “to verify content and detect its origin” as well as “labeling of synthetic content,” signaling less interest in tracking misinformation and deepfakes. It also adds an emphasis on putting America first, asking one working group to develop testing tools “to expand America’s global AI position.”
“The Trump administration has removed safety, fairness, misinformation, and responsibility as things it values for AI, which I think speaks for itself,” says one researcher at an organization working with the AI Safety Institute, who asked not to be named for fear of retaliation.
The researcher believes that ignoring these issues could harm everyday users by allowing algorithms that discriminate based on income or other demographics to go unchecked. “Unless you’re a tech billionaire, this is going to lead to a worse future for you and the people you care about. Expect AI to be unfair, discriminatory, unsafe, and deployed irresponsibly,” the researcher says.
“It’s wild,” says another researcher who has worked with the AI Safety Institute in the past. “What does it even mean for people to thrive?”
Elon Musk, who is currently leading a controversial effort to slash government spending and bureaucracy on behalf of President Trump, has criticized AI models built by OpenAI and Google. Last February, he posted a meme on X in which Gemini and OpenAI were labeled “racist” and “woke.” He often cites an incident in which one of Google’s models debated whether it would be wrong to misgender someone even if doing so would prevent a nuclear apocalypse, a highly unlikely scenario. Besides Tesla and SpaceX, Musk runs xAI, an AI company that competes directly with OpenAI and Google. A researcher who advises xAI recently developed a novel technique for altering the political leanings of large language models, as Wired has reported.
A growing body of research shows that political bias in AI models can affect both liberals and conservatives. For example, a study of Twitter’s recommendation algorithm published in 2021 showed that users were more likely to be shown right-leaning perspectives on the platform.
Since January, Musk’s so-called Department of Government Efficiency (DOGE) has torn through the US government, effectively firing civil servants, pausing spending, and creating an environment considered hostile to anyone who might oppose the Trump administration’s goals. Some government departments, such as the Department of Education, have archived and deleted documents that mention DEI. In recent weeks, DOGE has also targeted NIST, the parent organization of AISI. Dozens of employees have been fired.