Stamos said on stage, during a keynote dubbed "Security at Facebook Scale," that the company has a responsibility to the world to build products that bring people closer together while keeping them safe from abuse. He added that one of the biggest challenges Facebook has faced recently is knowing that, even though its tools are designed to create positive connections between people, building them comes with risk. That's something the company has had to learn the hard way over the past couple of years, after what happened with the 2016 US election and, most recently, the Cambridge Analytica data misuse nightmare.
"Protecting people's data is extremely important," Stamos said. "But doing that is not enough. I personally had to adjust to understand that security is more than building systems, it's understanding how tech can be abused to cause harm." He said that while Facebook "can build technically perfect products, there are still bad things that could happen," and it needs to take that into consideration with anything it makes going forward. "When you think about AI you have to think about risks in how you're training it," Stamos said. "With VR, it's not just the great, fun things, but what is safety in a VR world? How are we gonna build that at scale when there are not any examples to go off?"
Stamos said that Facebook's goal isn't just to figure out the answers to these questions on its own, by investing heavily in new technologies and hiring more people to filter out bad content, but also to work together with academics and even other tech companies. "When we do this work, we also have to build relationships around the world. We have a lot of work to do to understand our responsibility," he said. "There's no magic solution to these problems, but we're also not going to allow these challenges to paralyze us. We're going to build the playbook for companies to follow us, build at scale and mitigate those risks."