If you plan on firing up your console and chasing kill streaks in your favorite skins, be warned that artificial intelligence is now being used to monitor you. Activision, the studio behind the wildly popular “Call of Duty” franchise, has partnered with the AI company Modulate. The AI firm uses machine learning software called ToxMod to scan voice chat and identify “toxic behavior.”
Modulate has raised more than $30 million to build voice-chat moderation technology that enforces community guidelines for game companies. Its software automatically flags “hateful speech” and reports it to human moderators.
“This is a problem that everyone in the industry has desperately needed to solve,” said Mike Pappas, CEO of Modulate, when interviewed by GamesBeat. “This is such a large-scale market need, and we were waiting to prove that we’ve actually built the product to satisfy this.”
Reining in “toxic” players and moderating speech in the “Call of Duty” franchise has been a goal of Activision’s for years. This is the first time the company has turned to a machine-learning system to police gamers’ speech.
“The core business is proactive voice moderation,” Pappas said. “Rather than just relying on player reports, this is saying you can actually fulfill that duty of care and identify all of the bad behavior across your platform and really do something about it in a more comprehensive way.”
ToxMod is engineered to detect explicit toxicity, such as hate speech and adult language, and to identify subtler harms like child grooming, violent radicalization, and signs of self-harm. The system was trained on more than 10 million hours of audio.
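To make the idea of “proactive” moderation concrete, here is a minimal sketch of the general pattern such systems follow: score every voice-chat clip against a set of harm categories and queue anything over a threshold for human review. This is not ToxMod’s actual implementation, which Modulate has not published; the keyword heuristic below stands in for a trained audio model, and every name, category, and threshold is hypothetical.

```python
# Hypothetical sketch of a proactive voice-moderation pipeline.
# Nothing here reflects ToxMod's internals: the keyword heuristic is a
# stand-in for a trained audio model, and the categories, terms, and
# threshold are invented for illustration.

from dataclasses import dataclass

# Harm categories loosely mirroring those named in the article,
# each with a placeholder blocklist of terms.
BLOCKLISTS = {
    "adult_language": {"hell", "damn"},
    "hate_speech": {"slur_a", "slur_b"},
}
THRESHOLD = 1  # flag a clip once it contains at least one listed term


@dataclass
class Flag:
    clip_id: str
    category: str
    hits: int


def score_transcript(text: str) -> dict[str, int]:
    """Stand-in for an ML classifier: count blocklisted terms per category.
    A real system would score raw audio, tone, and context instead."""
    words = text.lower().split()
    return {cat: sum(w in terms for w in words) for cat, terms in BLOCKLISTS.items()}


def moderate(clips: dict[str, str]) -> list[Flag]:
    """Scan every clip proactively (no player report required) and queue
    anything over threshold for a human moderator, not an automatic ban."""
    flags = []
    for clip_id, transcript in clips.items():
        for category, hits in score_transcript(transcript).items():
            if hits >= THRESHOLD:
                flags.append(Flag(clip_id, category, hits))
    return flags


if __name__ == "__main__":
    queue = moderate({"match42_player7": "what the hell was that shot"})
    for f in queue:
        print(f"escalate {f.clip_id}: {f.category} ({f.hits} hit(s))")
```

The key design point, per Modulate’s own description, is the hand-off: the software flags, and humans decide what to do about it.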
Anyone who’s played “Call of Duty” will recognize the familiar experience of getting killed by a kid who sounds like he’s 12 and then being on the receiving end of a stream of invective that would make a biker blush. In some ways, that’s always been part of the fun of using voice chat while you play. Muting the voice feed has always been an option, too.
The vitriol online players spew at each other is often racist and insane, and it’s understandable that companies would want tools to block it. However, technology like this, and others undoubtedly in development, signals a change in online speech monitoring. In the future, will everything we say or type be fed into a super-AI, with some algorithm deciding whether what we said is “offensive” or “toxic”?
How long will it be before these AI technologies are rolled out in all the communication apps we use daily? We don’t have to concoct sci-fi fantasies here: the Chinese government has already deployed AI to censor its population. Our corporate overlords are undoubtedly delighted at the prospect of this level of speech control. In the future, your Amazon Echo might remind you to curse less and praise the government more to receive your free trial of Prime.