
March 29, 2021

Social Media Screening: How & Why It Works

Picture this. You interview a candidate. Later that day, you find their LinkedIn. Then their Facebook, their Twitter, their Instagram, where you see that they’re part of a protected class. After speaking with your team, you decide to proceed with a different candidate. You made the decision based on experience, but you can’t unsee what you saw on social media. By doing your own research, is it possible you discriminated against the other applicant?

It’s entirely possible, because you can’t prove your choice was purely business-driven. Ten years ago, the Federal Trade Commission (FTC) confirmed that social media screening is just another form of consumer report, which means it falls under the Fair Credit Reporting Act (FCRA). In other words, you can’t just scroll through Twitter on your lunch break looking for suspicious tweets from your latest applicant. That’s where we come in. Using a third party guarantees you certified results from trained analysts, with personal and protected information redacted: you’ll only receive flagged content belonging to the candidate. But what exactly are the four types of flagged content a screening report can show?

Potentially Unlawful Content

If a candidate is under 21, the last thing you’d want to hear is how they finished a whole bottle of Patron over spring break. And even if they’re legally old enough to drink, bragging about driving while intoxicated would definitely raise eyebrows. Flagged content in this category can include mentions of illegal drugs, alcohol use, and crime in posts, comments, or images.

Potentially Intolerant Content

In a world embracing diversity and inclusion, knowledge of prejudice in a candidate can absolutely be a make-or-break for your company. Whether they’re actively reposting bigoted messages or following a page that promotes sexism, it’s important to know exactly where their values lie. Social Intelligence reports that 68% of flagged content involves racism, which includes any derogatory language or behavior directed at a group of people because of their protected-class status.

Potentially Violent Content

Let’s be honest: nobody wants to have to worry about the safety of their employees. While a criminal background check can show FCRA-reportable crimes such as assault, a social media screening can surface material that displays threats or acts of violence. This could range from posting what they’d do to a celebrity with a baseball bat if they got the chance, to images that glorify guns or other potentially dangerous activity.

Sexually Explicit Material

From posting overly provocative photographs to intentionally sexualizing a piece of art in a museum, sexually explicit material is the perfect example of “you can’t unsee what you’ve already seen.” Deeming a photo or message explicit is up to the FCRA-certified analysts, who judge whether the subject would be considered inappropriate if it came up in a workplace conversation. Context is a huge part of this type of flagged content, which is why it’s so important to use a third party rather than searching Facebook yourself.

According to Social Intelligence, only 1% of the population has absolutely no social media footprint, leaving the other 99% open to interpretation when it comes to potentially problematic online activity. Searching across different sites for the material above can be expensive, time-consuming, and risky for your company when making hiring decisions. Our world is transforming, so we’re transforming our background screening methods along with it. Looking for some examples of flagged content? Our friends at Social Intel put together a collection of TV characters who would fail a social media background check, which you can view here. If you’re interested in more sample reports, contact our office today.