Facebook Has No Clue How to Solve Its Image Problem, Leaked Doc Shows

Internal documents suggest that Facebook's proposed approach for weathering PR crises isn't much of an approach at all.

Photo: Greg Nash (Getty Images)

Despite what the company’s stock price might tell you, Facebook has an image problem. At best, critics warily regard Facebook as a company that ruthlessly strips users of their data so it can curbstomp competitors in increasingly hostile ways. At worst, people call the company a threat to democracy itself. And while the case against Facebook continues to grow, employees are left scrambling to figure out how the hell it can win back the public—and coming up pretty empty-handed.

At least, that’s what’s suggested by some internal research done in September of last year that attempted to measure Facebook’s “perceived legitimacy” in the eyes of the public and with stakeholders. The full document, which you can read here, quizzed a handful of reporters, regular users, and (weirdly enough) actors about their general perceptions of Facebook.

The results were pretty much what you’d expect: trust in the company was low, confusion about the company’s content moderation processes was high, and nobody believed Facebook was motivated by anything but fat stacks of cash. The researcher’s planned approach for fixing this PR crisis? “Build trust through product experiences,” get more people of color on staff, and, uh, not much else.

“Users don’t trust us to do the right thing because they believe we prioritize revenue and growth over safety and society,” explained an unnamed member of an internal Facebook “Legitimacy Team” whose stated mission is to, well, increase the company’s legitimacy in the eyes of the public.

While the research happened more than a year ago, we’ve heard that same narrative echoed over and over just this month by the source of these documents—Facebook whistleblower Frances Haugen. CEO Mark Zuckerberg, meanwhile, has balked at the idea, in spite of all available evidence suggesting it’s true and the company reportedly mulling a complete name change.

“Because users don’t trust FB due to past incidents, they don’t believe we have good intentions or motivations when it comes to integrity efforts,” the report reads. “Users don’t perceive our content regulation system as legitimate because they don’t trust our motivations.”

Ignoring the fact that Facebook is a company and companies generally exist to turn a profit, the report goes on to note that users “perceive [Facebook’s] systems to be ineffective and biased toward minority groups,” citing the experiences of Facebook users who are LGBTQ+, along with people of color and other marginalized groups. The report states that these users feel “FB is censoring or over-enforcing on minority groups,” and that they describe being banned from the site “for speaking out to their communities about their lived experiences.”

While Zuckerberg and his ilk have reportedly spent a long time ignoring the very apparent fact that the company’s hate-speech-sniffing systems tend to unfairly target marginalized groups, Facebook has since come around on the idea that, hey, maybe it should do something about the issue. Last December, the company started an internal effort to overhaul the moderation systems involved, but this report (rightfully!) acknowledges that even that might not be enough.

“Many participants acknowledged much of this enforcement is done by automation and algorithms,” the report reads. At the same time, they “believe that the people who have built the algorithms are at best naive and at worst racist.” (Spoiler: Both can be true!)

Facebook did not immediately respond to a request for comment about the internal report.

The artificial intelligence and algorithms that run big chunks of Facebook’s moderation efforts are frequently built by white guys, with white-guy biases. The report recommends bringing more members of “minority groups” to the table when building out those algorithms to mitigate baked-in bias, along with “[conducting] audits of actions” taken on content from people of color. Two very good ideas! Unfortunately, it all goes downhill from here.

Most of the other suggestions for restoring trust in the company involve few specifics. Recommendations like “continue to invest in restoring trust in the FB brand” and “build trust by ensuring what we ship shows care,” for example, are just hand-wavey nonsense. When the surveyed users said that a company of Facebook’s behemoth size and scale should put more of its resources toward moderation, the report brushed the whole idea of money aside, focusing instead on, um, how tough content moderation is.

“The narrative that content regulation is difficult and complex might not land well with users,” the report reads. “Instead, we should understand if focusing on highlighting what we are doing to address problems would be more effective.”

How? With the same weirdly aggressive PR tactics taken by Facebook’s public-facing staff? With the same buzzwordy blog posts and misleading company policies? I don’t know, and the report doesn’t say. But it sure sounds like Facebook’s plan to fight back against all of this bad press is to just... keep on doing what it’s always done.

This story is based on Frances Haugen’s disclosures to the Securities and Exchange Commission, which were also provided to Congress in redacted form by her legal team. The redacted versions received by Congress were obtained by a consortium of news organizations, including Gizmodo, the New York Times, Politico, the Atlantic, Wired, the Verge, CNN, and dozens of other outlets.
