Meta’s Metaverse Is Already Populated With the Worst Examples of Internet Behavior: Report

Researchers who spent time in Meta's Horizon Worlds said that users' misogyny and racism are a serious problem for a company that has failed to moderate bad behavior.

Image caption: In Meta's Horizon Worlds, a player's avatar is force-fed a bottle of "vodka." Report writers noted that the virtual setting did not make the act feel any less invasive. (Screenshot: SumOfUs video)

It took less than an hour for one researcher studying Meta’s titular metaverse to be “virtually raped” after putting on the VR headset for the first time. After days of witnessing and experiencing rampant conspiracy theories, sexual harassment, racism, and homophobia, writers of a recent report said the tech giant is entirely ill-prepared to reach its dream of creating a shared online space for millions of users.

The nonprofit corporate watchdog group SumOfUs released its report last week describing Meta's (née Facebook's) transition from a social media-focused company to one trying to define what it means to be online in a pseudo-physical environment. The group also documented how, as Meta's VR platforms have grown to over 300,000 users, Meta's flagship metaverse product Horizon Worlds is already playing host to the internet's worst kinds of racist and misogynist behavior. SumOfUs even included a link to a video (note: trigger warning for sexual assault) of an incident in which users led a researcher's avatar into a room and proceeded to thrust their avatars' torsos back and forth in a kind of humping motion, all while another user tried to pass around a vodka bottle.


SumOfUs has not been shy about its stance against Meta, or about its advocacy campaigns against other major corporations, but the report lays out quite a lot of evidence for how little moderation is going on within Horizon's play space. The report's researchers were stalked through different worlds in the Meta-owned product. They found fake drugs laid out on tables and users constantly hurling racist and homophobic slurs at one another.


"Meta is pushing ahead with the Metaverse with no clear plan for how it will curb harmful content and behavior, disinformation, and hate speech," the report said. The company knows content moderation is a problem, yet it has no precise plan for how to fix it. Report writers cited a March internal memo, shared by the Financial Times and penned by Meta's vice president of VR Andrew Bosworth, in which the Meta VP said moderating users "at any scale is practically impossible," according to FT.


The explicit promise of the metaverse is that you occupy a digital realm and interact with people as if you were all really there. Online harassment is nothing new, but harassment of that kind becomes far more visceral once you put on goggles designed to make you feel like you exist in the space.

The watchdog group also pointed to several other instances in which users with female avatars reported sexual assault, including one in which a male user said he had recorded a female player's voice to "jerk off" to. Meta had already introduced a "personal boundary" feature in February that keeps other avatars from venturing too close to a player's body. Other virtual chatrooms like VR Chat already include similar features, but SumOfUs researchers' avatars were constantly asked to turn the personal boundary setting off. When another user tries to touch or interact with you, the VR controllers vibrate, "creating a very disorienting and even disturbing physical experience during a virtual assault," according to the report.


A Meta spokesperson told Insider the personal boundary setting is on by default and that the company does not recommend turning it off around strangers, adding, "We want everyone using our products to have a good experience and easily find the tools that can help in situations like these, so we can investigate and take action."

Though Horizon Worlds does offer parental controls as well as the ability to mute other users, the platform still poses a major risk for young users, especially as researchers saw other avatars encouraging people to turn off safety features. The app is technically 18+, but current users said the platform is already full of people who are underage. Unlike social media platforms, which can use automated systems to monitor written content or even videos, current virtual reality chat rooms rely on individual users to report bad behavior.


Current self-styled "metaverses" like the kid-centered Roblox have shown just how difficult it is to curb nasty player behavior; there have been past instances of avatars sexually assaulting players as young as 7. Horizon Worlds supposedly includes in-game moderators to enforce its guidelines, but players interviewed for the report said there aren't nearly enough of them. And it's not just a problem for Meta's platform: VR Chat has played host to disturbing content, and young children who know how to fake a birth date can easily access it.

The report's authors are not the only ones taking swings at Meta and its CEO Mark Zuckerberg, specifically over the kind of product they're trying to build. Amazon's head of devices recently noted that nobody can actually define what the "metaverse" really is. Snap CEO Evan Spiegel called the tech more "hypothetical" than anything. Ex-Nintendo of America exec Reggie Fils-Aimé said Meta is "not an innovative company," adding that the self-styled metaverse pioneer has not shown it knows how to lead innovation.


But more than just tech execs taking swipes at a competitor, the comments mark how much ambiguity still surrounds the idea of a shared digital space. Nick Clegg, Meta's head of global affairs, recently wrote that asking Meta to record player speech in order to moderate content would be like asking a bar manager to listen in on conversations and silence anything they don't like. Instead, the company wants to focus on AI-driven systems that help respond to user reports.

“We’re in the early stage of this journey,” Clegg wrote.

But the extremely Meta-critical writers at SumOfUs essentially said that if anyone is going to steer the metaverse ship, it had better not be Meta.


"Meta has repeatedly demonstrated that it is unable to adequately monitor and respond to harmful content on Facebook, Instagram, and WhatsApp—so it is no surprise that it is already failing on the Metaverse too," the SumOfUs writers said.
