Jean-Louis Gassée concludes that the Zuckerbot thinks human beings are suboptimally cognitive bio-machines, incapable of penetrating falsehoods perpetrated by more advanced forms of bio-machine processing:
Carefully reading and re-reading Zuckerberg’s words puts me ill at ease. Of course, simply complaining that Facebook’s CEO sounds well-rehearsed won’t do. He’s a pro at managing a major crisis. Persphinctery statements are part of the fare (from the NYT interview):
“Privacy issues have always been incredibly important to people. One of our biggest responsibilities is to protect data.”
But we quickly get to the misrepresentations.
“… someone’s data gets passed to someone who the rules of the system shouldn’t have allowed it to, that’s rightfully a big issue and deserves to be a big uproar.”
Here, Zuckerberg glosses over the pivotal fact that researcher Aleksandr Kogan accessed data in a manner that was fully compatible with Facebook’s own rules (see below). It appears that the rule-breaking started after he put his mitts on the data and made a deal with Cambridge Analytica.
Next, we’re treated to the resolute statements. Facebook now realizes what transpired and will make sure it won’t happen in the future:
“So the actions here that we’re going to do involve first, dramatically reducing the amount of data that developers have access to, so that apps and developers can’t do what Kogan did here. The most important actions there we actually took three or four years ago, in 2014. But when we examined the systems this week, there were certainly other things we felt we should lock down, too.”
Three rich sentences, here. And a problem with each one…
First, an admission that Facebook’s own rules allowed developers overly broad access to our personal data. Thanks to Ben Thompson, we have a picture of the bewildering breadth of user data developers had access to.
(Thompson’s Stratechery newsletter is a valuable source of insights, of useful agreements and disagreements.)
Of course, developers have to request the user’s permission to make use of their data — even for something as seemingly “innocent” as a game or psychological quiz — but this isn’t properly informed consent. Facebook users aren’t legal eagles trained in the parsing of deliberately obscure sentences and networks of references and footnotes.
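To make that breadth concrete, here is a minimal sketch, in Python, of what a quiz-style app could do under the pre-2014 Graph API v1.0 rules Gassée refers to. The OAuth dialog, the friends_* permissions, and the /me/friends and /{id}/likes endpoints are the documented v1.0 mechanics; the app ID, redirect URI, token, and harvest_friend_likes function are illustrative placeholders, not Kogan’s actual code.

```python
# A minimal sketch (not Kogan's actual code) of friend-data harvesting
# under Facebook's Graph API v1.0, the pre-2014 rules described above.
# The endpoints and friends_* permissions are the documented v1.0
# mechanics; APP_ID, REDIRECT_URI, and the token are placeholders.
import requests

GRAPH = "https://graph.facebook.com"

# Step 1 -- the app sends the user to Facebook's OAuth dialog, asking
# for friends_* permissions on top of the user's own data:
#
#   https://www.facebook.com/dialog/oauth
#       ?client_id=APP_ID
#       &redirect_uri=REDIRECT_URI
#       &scope=user_likes,friends_likes,friends_location
#
# One "OK" click by one user grants access to data about all of that
# user's friends (subject to the friends' own app settings).

def harvest_friend_likes(user_token: str) -> dict:
    """Pull the consenting user's friend list, then each friend's likes.
    The friends never saw the consent dialog, let alone clicked it."""
    friends = requests.get(
        f"{GRAPH}/me/friends",
        params={"access_token": user_token},
    ).json().get("data", [])

    profiles = {}
    for friend in friends:
        likes = requests.get(
            f"{GRAPH}/{friend['id']}/likes",
            params={"access_token": user_token},
        ).json().get("data", [])
        # Likes were precisely the raw material for psychographic profiling.
        profiles[friend["id"]] = likes
    return profiles
```

One user clicks “OK” once, and the app walks outward through the social graph to people who never saw the dialog. Graph API v2.0, introduced in April 2014, removed the friends_* permissions and added app review, which is the change Zuckerberg alludes to in “the most important actions there we actually took three or four years ago.”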
Second, Mark Zuckerberg claims that it wasn’t until 2014 that the company became aware of Cambridge Analytica’s abuse of Facebook’s Open Graph (introduced in 2010). This, to be polite, strains credulity. Facebook is a surveillance machine; its business is knowing what’s happening on its network, on its social graph. More damning is the evidence that Facebook was warned about app permissions abuses in 2011:
“… in August 2011 [European privacy campaigner and lawyer Max] Schrems filed a complaint with the Irish Data Protection Commission exactly flagging the app permissions data sinkhole (Ireland being the focal point for the complaint because that’s where Facebook’s European HQ is based).”
Finally, Zuckerberg tells us that upon closer examination, Facebook realizes that it still has problematic data leaks that need to be attended to (“So we’re going ahead and doing that,” he reassures us).
The message is clear: Zuckerberg thinks we’re idiots. How are we to believe that Facebook didn’t know about, and derive benefit from, the widespread abuse of user data by its developers? We just became aware of the Cambridge Analytica cockroach… how many more are under the sink? In more lawyerly terms: “What did you know, and when did you know it?”
Once more, sociosexual analysis provides useful insight. Remember, the Zuckerbot is not merely a Gamma; it is a Super King Gamma Emulation. And what do Gammas always believe? That their ludicrously transparent deceptions are impenetrable, of course.
Meanwhile, one of the Zuckerbot’s human assistants has let the sociopathic cat out of the bag:
On June 18, 2016, one of Facebook CEO Mark Zuckerberg’s most trusted lieutenants circulated an extraordinary memo weighing the costs of the company’s relentless quest for growth.
“We connect people. Period. That’s why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we do to bring more communication in. The work we will likely have to do in China some day. All of it,” VP Andrew “Boz” Bosworth wrote.
“So we connect more people,” he wrote in another section of the memo. “That can be bad if they make it negative. Maybe it costs someone a life by exposing someone to bullies.
“Maybe someone dies in a terrorist attack coordinated on our tools.”
The Zuckerbot doesn’t care at all about its “fellow humans”. And it’s simply grotesque parody when it tries to pretend it does.