Facebook’s content review policy: How it works, the teams & tech behind the reviews & the results so far

Last week, Facebook announced it had removed 32 Pages and accounts from its platform and Instagram for “coordinated inauthentic behavior” — Facebook’s term for networks of accounts that work together to mislead people about who they are and what they are doing. The removed network comprised eight Facebook Pages, 17 Facebook accounts and seven Instagram accounts.

“This kind of behavior is not allowed on Facebook because we don’t want people or organizations creating networks of accounts to mislead others about who they are, or what they’re doing,” wrote Facebook in its July 31 announcement that the accounts had been taken down.

One week later, Facebook took down four more Pages belonging to conspiracy theorist and Infowars founder Alex Jones for repeatedly posting content that violated the company’s Community Standards. (Spotify, Apple, YouTube and others have also restricted or removed Jones’ content on their platforms.)

Facebook’s decisions to take down content, and the accounts attached to it, are a direct result of the fallout after the company failed to identify a surge in misinformation campaigns plaguing the platform during the 2016 US election cycle. Since admitting it did not do enough to police malicious content and bad actors, Facebook has pledged to prioritize its content review process.

How do these efforts affect marketers? While Facebook’s actions are aimed at people and organizations with malicious intent, marketers looking to build and foster brands on Facebook need to be aware of Facebook’s rules around content — especially since the content review policies and systems apply to Facebook ad policies as well. We’ve put together a rundown on Facebook’s content review process, the teams involved and how it’s working so far.

Removing content vs. limiting distribution

In April, Facebook released its first-ever public Community Standards guidelines — a rule book outlining the company’s content policies, broken down into six categories: violence and criminal behavior, safety, objectionable content, integrity and authenticity, respecting intellectual property, and content-related requests. At the time, Facebook said it was using a combination of artificial intelligence and reports from people who flag posts for potential abuse. Posts reported for violating content policies are reviewed by an operations team made up of more than 7,500 content reviewers.

Regarding the review process, Facebook says its content review team members are assigned a queue of reported posts to evaluate one by one. Reviewers are not required to evaluate a set number of posts — there is no quota they must meet for the amount of content reviewed.

In a July 24 Q&A on Election Integrity, Facebook’s News Feed product manager, Tessa Lyons, said the company removes any content that violates its Community Standards, but only reduces the distribution of problematic content that may be false yet does not violate those standards. According to Lyons, Facebook ranks stories rated false by fact-checkers lower in the News Feed so that dramatically fewer people see them. (According to Facebook’s data, down-ranking a story in the News Feed cuts its future views by more than 80 percent.)

Lyons addressed criticism of Facebook’s policy of limiting the distribution of content identified as false rather than removing it, explaining that Facebook does not censor content that doesn’t violate its rules.

“Here’s how we think about this: if you are who you say you are and you’re not violating our Community Standards, we don’t believe we should stop you from posting on Facebook. This approach means that there will be information posted on Facebook that is false and that many people, myself included, find offensive,” said Lyons.

More recently, Facebook offered a deeper dive into why it would remove a Page.

“If a Page posts content that violates our Community Standards, the Page and the Page admin responsible for posting the content receive a strike. When a Page surpasses a certain threshold of strikes, the whole Page is unpublished.”

Facebook says the effects of a strike vary depending on the severity of the content violation, and that it doesn’t give specific numbers in terms of how many strikes a Page may receive before being removed.

“We don’t want people to game the system, so we do not share the specific number of strikes that leads to a temporary block or permanent suspension.” Facebook says multiple content violations will result in an account being temporarily blocked or a Page being unpublished. If no appeal is made to reinstate the Page, or if an appeal is made but denied, the Page is then removed.

Announced in April, the appeal process is a new addition to Facebook’s content review system.
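The strike system described above can be pictured as a severity-weighted counter with an undisclosed threshold, plus an appeal step before final removal. The sketch below is purely illustrative: the severity weights, the threshold value and all names are hypothetical, since Facebook deliberately does not publish these numbers.

```python
# Illustrative sketch of a strike-and-threshold moderation flow, loosely
# modeled on Facebook's description. All weights, thresholds and names
# here are made up -- the real values are not disclosed.
from dataclasses import dataclass

# Hypothetical severity weights: worse violations earn more strikes.
SEVERITY_WEIGHT = {"spam": 1, "graphic_violence": 2, "hate_speech": 3}
UNPUBLISH_THRESHOLD = 5  # hypothetical; the real threshold is secret

@dataclass
class Page:
    name: str
    strikes: int = 0
    unpublished: bool = False
    removed: bool = False

    def record_violation(self, category: str) -> None:
        """Add severity-weighted strikes; unpublish past the threshold."""
        self.strikes += SEVERITY_WEIGHT.get(category, 1)
        if self.strikes >= UNPUBLISH_THRESHOLD:
            self.unpublished = True

    def resolve_appeal(self, appealed: bool, upheld: bool) -> None:
        """An unpublished Page is removed unless an appeal succeeds."""
        if not self.unpublished:
            return
        if appealed and upheld:
            self.unpublished = False  # reinstated
        else:
            self.removed = True

page = Page("Example Page")
page.record_violation("hate_speech")       # 3 strikes
page.record_violation("graphic_violence")  # 5 strikes -> unpublished
page.resolve_appeal(appealed=True, upheld=False)
```

The key point the sketch captures is that unpublishing is not final: removal only follows once the appeal window closes or an appeal is denied.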

Facebook’s content review teams & technology

In recent months, Facebook has repeatedly said it will hire 20,000 safety and security employees this year. As of July 24, the company confirmed it had hired 15,000 of the 20,000 employees it plans to recruit.

The content review teams include a combination of full-time employees, contractors and partner companies located around the world, along with 27 third-party fact-checking partnerships in 17 countries. In addition to human review, Facebook uses AI and machine learning technology to identify harmful content.

“We’re also investing heavily in new technology to help deal with problematic content on Facebook more effectively. For example, we now use technology to assist in sending reports to reviewers with the right expertise, to cut out duplicate reports, and to help detect and remove terrorist propaganda and child sexual abuse images before they’ve even been reported,” wrote Facebook’s VP of global policy management, Monika Bickert, on July 17.

Facebook’s content review employees undergo pre-training, hands-on learning and ongoing coaching during their employment. The company says it also has four clinical psychologists on staff, spread across three regions, to design and evaluate resiliency programs for employees tasked with reviewing graphic and objectionable content.

What we know about recently removed content

Regarding the 32 Pages and accounts removed last week, Facebook said it could not identify the responsible group (or groups), but that more than 290,000 Facebook accounts had followed at least one of the Pages. In total, the removed Pages and accounts had published more than 9,500 organic posts on Facebook, posted one piece of content on Instagram, run approximately 150 ads (costing a total of $11,000) and created about 30 Events dating back to May 2017 — the largest of which had 4,700 people interested in attending and 1,400 users who said they would attend.

The Alex Jones Pages were taken down because they violated Facebook’s graphic violence and hate speech policies. Before taking down the Pages, Facebook had removed videos posted to them for violating hate speech and bullying policies, and the Page admin, Alex Jones, was placed on a 30-day block for posting the violating content. Within a week, Facebook decided to remove all the Pages after receiving more reports of content violations.

Looking beyond these two specific actions, Facebook says it is currently stopping more than a million accounts per day at the point of creation using machine learning technology. The company’s first transparency report, released in May, showed Facebook had taken action against 1.4 billion pieces of violating content, including 837 million counts of spam and 583 million fake accounts. Facebook says that in nearly every category — including spam, nudity and sexual activity, graphic violence and terrorist propaganda — more than 90 percent of the violating content was removed before anyone reported it; hate speech was the exception.

In the same Q&A on Election Integrity, Facebook said it took down tens of thousands of fake likes from the Pages of candidates during Mexico’s recent presidential election, along with fake Pages, groups and accounts that violated policies and impersonated politicians running for office. (In advance of the November US midterm elections, Facebook has launched a verification process for any person or group wanting to run political ads, as well as a searchable archive of political ad content, retained for seven years, that lists an ad’s creative, budget and the number of users who viewed it.)

But is it working?

While Facebook’s transparency report offered insight into just how many spam posts, fake accounts and other malicious content the company has identified since last October, there is still work left to do.

Last month, advertisers discovered that Facebook ads containing words like “Bush” and “Clinton” were being removed after being flagged as political ads from advertisers that had not completed verification. A barbecue restaurant ad listing the business’s location on “President Clinton Avenue” and a Walmart ad for “Bush” baked beans were both removed, most likely because Facebook’s automated systems incorrectly classified them as political ads.

More concerning, a report from the British broadcaster Channel 4’s news program “Dispatches” showed a Dublin-based content review company contracted by Facebook failed to act on numerous pieces of content that violated the platform’s Community Standards. The report also accused Facebook of practicing a “shielded review” process, allowing Pages that repeatedly posted violating content to remain up because of high follower counts.

Facebook responded to the charge by confirming it does perform “Cross Check” reviews (its term for shielded reviews), but said the practice is part of a process that gives certain Pages or Profiles a “second layer” of review to make sure policies were applied correctly.

“To be clear, Cross Checking something on Facebook does not protect the profile, Page or content from being removed. It is simply done to make sure our decision is correct,” wrote Bickert, in response to the Channel 4 report.

Ever since admitting Facebook was slow to identify Russian interference on the platform during the 2016 elections, CEO Mark Zuckerberg has repeatedly said that security is not a problem that can ever be fully solved. News Feed product manager Tessa Lyons spoke to the complicated intersection of security and censorship on the platform during the company’s Q&A on Election Integrity: “We believe we are working to strike a balance between expression and the safety of our community. And we think it’s a hard balance to strike, and it’s an area that we’re continuing to work on and get feedback on — and to increase our transparency around.”

From the Q1 transparency report to its latest actions removing malicious content, Facebook continues to show it is trying to rid its platform of bad actors. The real test of whether the company has made progress since 2016 could very well be this year’s midterm election in November. As Facebook puts more focus on content and its review process, marketers and advertisers need to understand how these systems may affect their visibility on the platform.

The post Facebook’s content review policy: How it works, the teams & tech behind the reviews & the results so far appeared first on Marketing Land.
