How Facebook Handles Security – Transparency & “Bad Guys”

Posted: May 10, 2013 in Cyber Security
Security teams generally have a bad reputation within most companies. They are seen as anti-innovation and bureaucratic, and other teams hesitate to contact them. Here at Facebook, we want our engineers to be able to "Move Fast and Break Things," not avoid our team at all costs. To do that, we are very open with the rest of the company about the threats we face every day. We're also very open with other companies and researchers about these threats. This strong belief in our open culture reduces Facebook's risks and prevents some of the downsides a security team can introduce into a growing company.

First, some notes on these downsides:

Some companies are averse to chasing bad guys through legal means, or even to acknowledging their incidents.
This may be because they think it will draw attention to a negative story. We believe it's what our users would want us to do, and it increases employee morale in the process. It also unifies efforts to improve security instead of creating a massive internal blame game.

Security teams must be allowed to confront their biggest threats.

Being restricted from confronting the bad guys instantly causes low morale. No one likes being pushed around, but being pushed around without any recourse is even worse.

To have a truly great Security Team, you need a great Legal Team that is comfortable sending C&Ds, assisting in lawsuits, and cooperating with law enforcement when necessary, and that is excited to disrupt very bad people in partnership with a Security team that deals with these threats.


We built the 'SCALP' wall to showcase our successful disruptions of "bad guy" endeavors. It is a scrapbook of court orders, settlement checks, apology letters, mugshots, and more from the worst bad guys who mess with Facebook users. It's a reminder of our impact.

Here's the latest iteration of the scalp wall, pictured with John Walsh of America's Most Wanted. (He said that if he had one of these walls, it would be huge… and I believe him!)

This wall keeps the "bad guy" at the forefront of the Security team's mission and is a constant reminder of what we're fighting.

Companies that fear fighting back can easily find themselves in a really bad place. If you don't see a company actively fighting those who abuse the internet in egregious ways (pedophiles, spammers, extortionists, fraudsters, etc.), then you've successfully found a company that isn't serious about its commitment to security.

I’m proud to work somewhere that is proactive and gives a shit.

Security teams are unreasonably risk averse if they can’t talk about incidents.

It is damn near impossible to convince someone that a security measure is a good idea without being able to cite a specific incident. Being in that position means you're left to cite best practices, metrics, laws, regulations, media cycles, and other fairly bland arguments for improving security instead of truly tangible incidents.

Much worse: it becomes easy to blame co-workers and teams for an incident, instead of the criminals, when employees are gagged from speaking about it. This is an incredibly poisonous spiral that destroys morale and cohesion within a culture.

This spiral continues when the security team has successfully inserted itself as an approval checkbox in high throughput business functions throughout a company. Once this happens, a security team is at risk of being the most hated organization in a company for stifling innovation.

Having observed this in practice in a few places, my conclusion is that transparency into incidents is an amazing tool to have. Transparency helps rally efforts and real discussions about threats and risks. When employees can empathize with the specific details of an incident that caused monetary damage or embarrassment, they're more willing to help than if they were strong-armed (and they won't hate you for it).

Security teams earn respect by fighting back.

One example of this was against the Koobface gang in 2012, when we disclosed their identities to the security community, which resulted in the botnet being dismantled by its own operators and the authors reportedly going into hiding.

Koobface Gang That Spread Worm on Facebook Operates in the Open

We helped take down the Butterfly botnet with the FBI:

Facebook Helps FBI Smash 11-Million-Machine ‘Butterfly’ Botnet

Similarly, we helped take down a spammer with the FBI:

Facebook’s Spam King Nabbed by FBI Once and For All

Similarly, we pursued other hackers from around the world who tried to attack Facebook:

Don’t Even Think About Trying to Hack Facebook

Even when we're not bragging about catching bad guys, we're still forthcoming about threats to Facebook:

Facebook computers compromised by zero-day Java exploit

Facebook employees get to hear about incidents and their outcomes, and they cheer when we succeed. This transparency makes it easy to rally the troops and keep Facebook secure. Compare this with a security team that doesn't have this level of transparency and must rally troops with arbitrary policies, imposed regulations, and legal threats to maintain a certain level of security. This is why so many security teams are anti-innovation and end up being excluded from the rest of the company.

View Post on Quora

