Biden Admin To Drop Half a Million on Artificial Intelligence That Detects Microaggressions on Social Media

The Biden administration is set to hand out more than $550,000 in grants to develop an artificial intelligence model that can automatically detect and suppress microaggressions on social media, government spending records show.
The grant, funded by President Joe Biden’s $1.9 trillion American Rescue Plan, was awarded in March to researchers at the University of Washington to develop technologies that could be used to protect online users from discriminatory language. The researchers have already received $132,000 and expect total government funding to reach $550,436 over the next five years.
The researchers are developing machine learning models that analyze social media posts to detect implicit bias and microaggressions, which are generally defined as slights that offend members of marginalized groups. It’s a broad category, and previous research by the project’s lead researcher suggests that something as tame as praising meritocracy can be considered a microaggression.
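The grant documents do not describe the model’s architecture. Purely as an illustration of the general technique, the sketch below shows a toy text classifier built with scikit-learn; the training phrases and labels are invented for demonstration (apart from one example question quoted later in this article) and do not represent the University of Washington team’s actual data or methods.

```python
# Illustrative sketch only: a toy text classifier of the general kind the
# grant describes. The training phrases and labels are invented for
# demonstration; they are NOT drawn from the University of Washington research.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled posts: 1 = flagged as a possible microaggression, 0 = not.
posts = [
    "But where are you originally from?",
    "Great presentation today, thanks for sharing.",
    "You speak English so well for someone like you.",
    "Looking forward to the weekend!",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Score a new post; a production system would need far more data and human review.
new_post = "Where are you really from?"
prob = model.predict_proba([new_post])[0][1]
print(f"Flag probability: {prob:.2f}")
```

In practice, research systems of this kind typically rely on large pretrained language models and much larger annotated datasets rather than a simple bag-of-words pipeline.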
The Biden administration’s funding of the research comes as the White House faces mounting accusations that it is seeking to stifle free speech online. Biden suggested last month that there should be an investigation into Tesla CEO Elon Musk’s acquisition of Twitter after the billionaire declared that the social media app would follow a “free speech” agenda. Internal Twitter communications Musk released this month also revealed a long-standing relationship between the FBI and Twitter employees, with the agency playing a regular role in the platform’s content moderation.
Tom Fitton, president of Judicial Watch, compared the Biden administration’s funding of artificial intelligence research to the Chinese Communist Party’s efforts to “censor speech that is not sanctioned by the state.” For the Biden administration, Fitton said, the research is a “project to make it easier for their leftist allies to censor speech.”
A spokesperson for the National Science Foundation, which issued the research grant, dismissed criticism of the project, saying it was “not trying to stifle free speech.” The project, the spokesperson said, creates “automated ways to identify biases in speech” and addresses the biases of human content moderators.
The grant’s description doesn’t give examples of what comments would qualify as microaggressions, though it acknowledges that they can be unconscious and unintentional. The project is led by computer science professor Yulia Tsvetkov, whose prior studies suggest that such a model could flag and suppress language many consider inoffensive, such as comments praising the concept of meritocracy.
Tsvetkov co-authored a 2019 study titled “Finding Microaggressions in the Wild,” which categorized microaggressions into subcategories, one of which was the “myth” that “differences in treatment are due to one’s merit.” Examples of microaggressions detailed in the paper include statements such as “Your mother is white, so it’s not like you’re really black,” and questions including “But where are you originally from?”
Tsvetkov also co-authored a July article that analyzed the “prominence of positivity in #BlackLivesMatter tweets” during the George Floyd riots in June 2020. Tsvetkov and her colleagues found that positive emotions such as “hope, pride and optimism” were common in pro-Black Lives Matter tweets, evidence they said contradicted narratives that cast Black Lives Matter protesters as angry.
Conservative watchdog groups have sounded the alarm over the Biden administration’s funding of the research, telling the Washington Free Beacon the project represents a White House effort to curb free speech online.
“It is not the role of government to police speech that some may find offensive or emotionally draining,” said Dan Schneider, vice president of the Media Research Center’s free speech division. “Government is supposed to protect our rights, not suppress our rights.”
Tsvetkov did not respond to requests for comment about free speech advocates’ concerns about the research.
The research is the latest instance of the government assuming a role in online content moderation. The Biden administration’s Department of Homeland Security established a Disinformation Governance Board with the goal of “countering disinformation,” only to scrap the controversial board after intense backlash.