Tuesday, October 17, 2017

Facebook Security Chief on Biased Algorithms

"I am seeing a ton of coverage of our recent issues driven by stereotypes of our employees and attacks against fantasy, strawman tech cos" wrote Facebook Chief Security Officer Alex Stamos last 8 October in a reeling tweetstorm.

He claims that journalists misunderstand the complexity of fighting fake news and deride Facebook for thinking its algorithms are neutral when the company knows they aren't, and he encourages reporters to talk to the engineers who actually deal with these problems and their consequences.

Some believe this argument minimizes many of Facebook's troubles. The issue isn't that Facebook doesn't know algorithms can be biased, or that people don't realize these are tough problems. The issue is that the company didn't anticipate abuses of its platform and work harder to build algorithms or human moderation processes that could block fake news and fraudulent ad buys before they impacted the 2016 U.S. presidential election. And Stamos' tweetstorm allegedly glosses over the fact that Facebook will fire employees who talk to the press without authorization.

The sprawling response to recent backlash comes right as Facebook starts making the changes it should have implemented before the election. Today, Axios reports that Facebook just emailed advertisers to inform them that ads targeted by "politics, religion, ethnicity or social issues" will have to be manually approved before they're sold and distributed.

And a few days ago, Facebook updated an October 2nd blog post about disclosing Russian-bought election interference ads to Congress, noting that "Of the more than 3,000 ads that we have shared with Congress, 5 percent appeared on Instagram. About US$6,700 was spent on these ads", implicating Facebook's photo-sharing acquisition in the scandal for the first time.

Stamos' tweetstorm was set off by Lawfare associate editor and Washington Post contributor Quinta Jurecic, who commented that Facebook's shift towards human editors implies that saying "the algorithm is bad now, we're going to have people do this" actually "just entrenches The Algorithm as a mythic entity beyond understanding rather than something that was designed poorly and irresponsibly and which could have been designed better."

Understanding the risks of algorithms is what's kept Facebook from over-aggressively implementing them in ways that could have led to censorship; that's responsible, but it doesn't solve the urgent problem of abuse at hand.
Now Facebook's CSO is dismissing journalists' demands for better algorithms as fake news, because such algorithms are hard to build without becoming a dragnet that ensnares innocent content too.
