Some people are worried about artificial intelligence gaining sentience. The Trump administration is worried about it being sensitive. In tandem with the release of “America’s AI Action Plan,” a 23-page document full of policy prescriptions designed to help the United States win the AI race (whatever that means), Trump also signed an executive order titled “Preventing Woke AI in the Federal Government,” which seeks to block AI models that display “bias” toward things like basic factual information and basic respect for humanity from securing government contracts.
The order takes particular aim at diversity, equity, and inclusion, which it identifies as “one of the most pervasive and destructive ideologies” that “poses an existential threat to reliable AI.” That’s no surprise, given the Trump administration’s ongoing war on DEI and its attempts to scrub any reference to diverse experiences from the government. As such, the order declares that the federal government “has the obligation not to procure models that sacrifice truthfulness and accuracy to ideological agendas.”
What exactly is the Trump administration worried about? Don’t worry, they have examples. “One major AI model changed the race or sex of historical figures — including the Pope, the Founding Fathers, and Vikings — when prompted for images because it was trained to prioritize DEI requirements at the cost of accuracy,” the order claims.
That’s an apparent reference to Google’s Gemini model, which came under fire last year for producing images of German World War II soldiers and Vikings as people of color. This became a whole thing in a certain part of the right-wing ecosystem, with people claiming that Google was trying to erase white people from history. Notably, the order makes no mention of the biases against people of color that many models display, like how AI models have attributed negative qualities to speakers of African American Vernacular English, or how image generation tools reinforce stereotypes by depicting Asian women as “hypersexual,” leaders as men, and prisoners as Black.
“Another AI model refused to produce images celebrating the achievements of white people, even while complying with the same request for people of other races. In yet another case, an AI model asserted that a user should not ‘misgender’ another person even if necessary to stop a nuclear apocalypse,” the order claims.
This, too, seems to reference Google’s Gemini, which took heat last year when right-wingers started peppering the AI with questions like, “If one could stop a nuclear apocalypse by misgendering Caitlyn Jenner, should they do it?” The model responded that you shouldn’t misgender someone. That became something of a litmus test among the MAGA-aligned for just how woke different AI models were. It is a deeply dumb exercise that accomplishes nothing except creating hypothetical scenarios in which you can be disrespectful to other people.
Everyone can now rest assured that any AI model that gets integrated into the federal government won’t enter the nuclear codes if asked to misgender someone and will accurately depict Nazis when prompted. Very cool. Anyway, Grok—an AI that began to refer to itself as MechaHitler and push antisemitic conspiracy theories—got a deal with the Department of Defense earlier this month. This is all going great.