Stopping harassment at Google

Well, this project sounds promising. After all, we know all about the terrible atmosphere of harassment at Google, where employees have been known to blacklist and physically threaten other Google employees on the basis of their opinions, right?

About 100 Google U.S. employees concerned about cyber bullying inside the company have organized into a group proposing new policies for conduct at the unit of Alphabet Inc, five people involved in the effort said in recent interviews.

Three current employees and two others helping to organize the group said it formed last fall. They said that among its proposals, which have not previously been reported in detail, are that Google should tighten rules of conduct for internal forums and hire staff to enforce them.

They said they want to stop inflammatory conversations and personal attacks on the forums and see punishment for individuals who regularly derail discussions or leak conversations. The group also wants Google to list rights and responsibilities for accusers, defendants, managers and investigators in human resources cases.

That sounds pretty reasonable, I have to say.

The group also desires greater protection for employees targeted by what it views as insincere complaints to human resources used as a bullying tactic and goading. The organizers said Google should be more attuned to when people seeking to stir animosity or expressing views opposite the company’s stated values try to take over discussions about race, gender and other sensitive subjects. 

Wait a minute…

“My coworkers and I are having our right to a safe workplace being endangered,” said staff site reliability engineer Liz Fong-Jones, one of the lead organizers. She said employees experience stress and fear of physical reprisal when internal conversations are leaked to media, sometimes with writers’ names. 

Oh. It’s just the usual suspects crying to the media again.

You know, I expect criminals also experience stress and fear of physical reprisal when their crimes come to light. By Google SJW logic, newspapers should stop reporting on crime for fear of causing stress to criminals.


The death of a thousand leaks

Facebook is now facing the same problem that Google has been dealing with for the last year: whistleblowing employees willing to expose the problematic behavior of their colleagues and superiors. Perhaps the loyalists should get in touch with Wired, so they can complain about how terrible it is that their threats and other misdeeds are being exposed to the public.

According to two Facebook employees, workers have been calling on internal message boards for a hunt to find those who leak to the media. Some have questioned whether Facebook has been transparent enough with its users and with journalists, said the employees, who asked not to be identified for fear of retaliation. Many are also concerned over what might leak next and are deleting old comments or messages that might come across as controversial or newsworthy, they said.

I have no doubt that the so-called “Ugly Memo” is neither the last nor the worst thing we’re going to see coming out of the Facebook internal messaging boards.


Define truth, fellow humans

Jean-Louis Gassée concludes that the Zuckerbot thinks human beings are suboptimally cognitive bio-machines with an inability to penetrate falsehoods perpetrated by advanced forms of bio-machine processing:

Carefully reading and re-reading Zuckerberg’s words puts me ill at ease. Of course, simply complaining that Facebook’s CEO sounds well-rehearsed won’t do. He’s a pro at managing a major crisis. Persphinctery statements are part of the fare (from the NYT interview):

“Privacy issues have always been incredibly important to people. One of our biggest responsibilities is to protect data.”

But we quickly get to the misrepresentations.

“… someone’s data gets passed to someone who the rules of the system shouldn’t have allowed it to, that’s rightfully a big issue and deserves to be a big uproar.”

Here, Zuckerberg glosses over the pivotal fact that researcher Aleksandr Kogan accessed data in a manner that was fully compatible with Facebook’s own rules (see below). It appears that the rule-breaking started after he put his mitts on the data and made a deal with Cambridge Analytica.

Next, we’re treated to the resolute statements. Facebook now realizes what transpired and will make sure it won’t happen in the future:

“So the actions here that we’re going to do involve first, dramatically reducing the amount of data that developers have access to, so that apps and developers can’t do what Kogan did here. The most important actions there we actually took three or four years ago, in 2014. But when we examined the systems this week, there were certainly other things we felt we should lock down, too.”

Three rich sentences, here. And a problem with each one…

First, an admission that Facebook’s own rules allowed developers overly-broad access to our personal data. Thanks to Ben Thompson, we have a picture of the bewildering breadth of user data developers had access to:

(Thompson’s Stratechery Newsletter is a valuable source of insights, of useful agreements and disagreements.)

Of course, developers have to request the user’s permission to make use of their data — even for something as seemingly “innocent” as a game or psychological quiz — but this isn’t properly informed consent. Facebook users aren’t legal eagles trained in the parsing of deliberately obscure sentences and networks of references and footnotes.

Second, Mark Zuckerberg claims that it wasn’t until 2014 that the company became aware of Cambridge Analytica’s abuse of Facebook’s Open Graph (introduced in 2010). This, to be polite, strains credulity. Facebook is a surveillance machine, its business is knowing what’s happening on its network, on its social graph. More damning is the evidence that Facebook was warned about app permissions abuses in 2011:

“… in August 2011 [European privacy campaigner and lawyer Max] Schrems filed a complaint with the Irish Data Protection Commission exactly flagging the app permissions data sinkhole (Ireland being the focal point for the complaint because that’s where Facebook’s European HQ is based).”

Finally, Zuckerberg tells us that upon closer examination Facebook realizes that it still has problematic data leaks that need to be attended to (“So we’re going ahead and doing that” he reassures us).

The message is clear: Zuckerberg thinks we’re idiots. How are we to believe Facebook didn’t know about — and derive benefits from — the widespread abuse of user data by its developers? We just became aware of the Cambridge Analytica cockroach… how many more are under the sink? In more lawyerly terms: “What did you know, and when did you know it?”

Once more, sociosexual analysis provides useful insight. Remember, the Zuckerbot is not merely a Gamma, it is a Super King Gamma Emulation. And what do Gammas always believe? That their ludicrously transparent deceptions are impenetrable, of course.

Meanwhile, one of the Zuckerbot’s human assistants has let the sociopathic cat out of the bag:

On June 18, 2016, one of Facebook CEO Mark Zuckerberg’s most trusted lieutenants circulated an extraordinary memo weighing the costs of the company’s relentless quest for growth.

“We connect people. Period. That’s why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we do to bring more communication in. The work we will likely have to do in China some day. All of it,” VP Andrew “Boz” Bosworth wrote.

“So we connect more people,” he wrote in another section of the memo. “That can be bad if they make it negative. Maybe it costs someone a life by exposing someone to bullies.

“Maybe someone dies in a terrorist attack coordinated on our tools.”

Zuckerbot doesn’t care at all about its “fellow humans”. And it’s simply grotesque parody when it tries to pretend it does.


Facebook alternatives

Barron’s contemplates them.

The proverbial sky seems to be falling on Facebook (FB), with founder and CEO Mark Zuckerberg agreeing to finally answer questions from Congress in the coming weeks.

Lawmakers will be pushing Zuckerberg about the company’s privacy controversy, but the issues go deeper for Facebook. The fallout from the Cambridge Analytica data-harvesting episode has exposed Facebook to two risks that aren’t getting much attention: One is the possibility, slight as it might be, that Facebook is newly vulnerable to competition. The other very real risk is Facebook’s ability to retain and recruit top talent in hypercompetitive Silicon Valley. The biggest names in the Valley routinely poach workers from one another.

First, those plucky competitors. I need not look further than my email folder. Idka, an advertising-free social network, on Wednesday announced its subscription-based platform would be free to new users through October 2018. The company, which has vowed not to sell or share user data, claims a 50% increase in new users and an 800% increase in website visits in the past week.

I wonder where all those new users came from…. Seriously, though, if you’re not on Idka yet, give it a whirl. We’re going to establish a new Voxiversity group there later this week.


Big tech in the crosshairs

Q asserts that it isn’t only Facebook that is in the crosshairs.

FACEBOOK data dump?
Who made it public? 
Who sold shares -30 days from announcement?
You can’t imagine the magnitude of this.
Constitutional CRISIS.
Twitter coming soon.
GOOG coming soon.
AMAZON coming soon.
MICROSOFT coming soon.
+12 
Current censorship all relates to push for power [mid-terms].
LAST STAND.

Sounds like a fascinating spring and summer.


Facebook scraping call data from Android phones

Facebook’s privacy violations are considerably worse than most people imagined:

This past week, a New Zealand man was looking through the data Facebook had collected from him in an archive he had pulled down from the social networking site. While scanning the information Facebook had stored about his contacts, Dylan McKay discovered something distressing: Facebook also had about two years’ worth of phone call metadata from his Android phone, including names, phone numbers, and the length of each call made or received.

This experience has been shared by a number of other Facebook users who spoke with Ars, as well as independently by us—my own Facebook data archive, I found, contained call-log data for a certain Android device I used in 2015 and 2016, along with SMS and MMS message metadata.

In response to an email inquiry by Ars about this data gathering, a Facebook spokesperson replied, “The most important part of apps and services that help you make connections is to make it easy to find the people you want to connect with. So, the first time you sign in on your phone to a messaging or social app, it’s a widely used practice to begin by uploading your phone contacts.”

The spokesperson pointed out that contact uploading is optional and installation of the application explicitly requests permission to access contacts. And users can delete contact data from their profiles using a tool accessible via Web browser.

Facebook uses phone-contact data as part of its friend recommendation algorithm. And in recent versions of the Messenger application for Android and Facebook Lite devices, a more explicit request is made to users for access to call logs and SMS logs. But even if users didn’t give that permission to Messenger, they may have given it inadvertently for years through Facebook’s mobile apps—because of the way Android has handled permissions for accessing call logs in the past.

If you granted permission to read contacts during Facebook’s installation on Android a few versions ago—specifically before Android 4.1 (Jelly Bean)—that permission also granted Facebook access to call and message logs by default. The permission structure was changed in the Android API in version 16. But Android applications could bypass this change if they were written to earlier versions of the API, so Facebook could continue to gain access to call and SMS data by specifying an earlier Android SDK version. Google deprecated version 4.0 of the Android API in October 2017—the point at which the latest call metadata in Facebook users’ data was found. Apple iOS has never allowed silent access to call data.
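The loophole described above is simple enough to model. To be clear, the class and method names below are my own illustrative inventions, not actual Android framework code; this is just a sketch of the behavior the Ars piece describes, where a contacts grant under an old declared SDK version also swept in call and SMS logs:

```java
// Simplified, illustrative model of the pre-Jelly Bean permission behavior:
// on API levels below 16, granting READ_CONTACTS implicitly covered call
// and message logs, and an app declaring an older targetSdkVersion kept
// that legacy behavior even when running on newer Android releases.
public class LegacyPermissionModel {
    static final int JELLY_BEAN_API = 16; // Android 4.1

    // True if, under the old model, a contacts grant also exposes call logs.
    static boolean contactsGrantImpliesCallLogs(int targetSdkVersion) {
        return targetSdkVersion < JELLY_BEAN_API;
    }

    public static void main(String[] args) {
        // An app pinned to an old SDK version retains the broad grant...
        System.out.println(contactsGrantImpliesCallLogs(15)); // true
        // ...while one targeting Jelly Bean or later must ask separately.
        System.out.println(contactsGrantImpliesCallLogs(16)); // false
    }
}
```

Which is why, per the article, the latest call metadata in users' archives coincides with Google finally deprecating the old API version in October 2017.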

I’m not at all surprised by this sort of thing. I expect even worse violations will be uncovered. It’s why I removed WhatsApp from my phone the day that I heard Facebook acquired them. Facebook simply doesn’t understand or accept normal human concerns as legitimate, because it is run by an autistic alien robot whose “hello, fellow humans” act is about as convincing as the average 36-year-old Pakistani immigrant claiming to be a Syrian child refugee.


Forget Facebook

It’s all Idka now. As the Brainstormers know, we’ve been trying out a new Swedish Facebook alternative called Idka. It has a lot of advantages over Facebook, particularly that it doesn’t use your data, sell your data, or invade your privacy. Better yet, it gives you very strict control over your groups and organizations. It’s got chat too.

We’ve already got an Arkhaven organization there which we’re using in a quasi-Dropbox capacity and I’ve set up an ELOE group there as well, so if you’re not interested in having Mark Zuckerberg sell the pictures of your cousin’s children to sketchy companies in Turkey and Indonesia, I would strongly suggest getting off Facebook and giving Idka a whirl. You can find me there as well, and if you would like an invite to the ELOE group, let me know on Idka.

Just to be clear, I have no stake in Idka, nor do I have anything to do with it; it’s just a new tech company with a better (if occasionally esoteric) interface and a lack of interest in exploiting user data like a Muslim rape gang exploiting a drug-addicted, fatherless 14-year-old British girl in Rotherham.

In the long run, Facebook wants to make its product even more immersive and personal than it is now. It wants people to buy video chatting and personal assistant devices for their homes, and plans to announce those products this spring, say people familiar with the matter. It wants users to dive into Facebook-developed virtual worlds. It wants them to use Facebook Messenger to communicate with businesses, and to store their credit-card data on the app so they can use it to make payments to friends.

Employees have begun to worry that the company won’t be able to achieve its biggest goals if users decide that Facebook isn’t trustworthy enough to hold their data. At the meeting on Tuesday, the mood was especially grim. One employee told a Bloomberg Businessweek reporter that the only time he’d felt as uncomfortable at work, or as responsible for the world’s problems, was the day Donald Trump won the presidency.

It looks like Mark Zuckerberg is about to learn the difference between influence and power.

Lawmakers are demanding to hear directly from Facebook’s Mark Zuckerberg and Sheryl Sandberg on the growing controversy over the misuse of its data by Trump-linked Cambridge Analytica, as the social network confronts its most serious political crisis ever in Washington.

“I want to know why this happened, and what’s the extent of the damage, and how they’re going to fix it moving forward,” Sen. Amy Klobuchar (D-Minn.) said Tuesday when asked about the briefings. Facebook executives, she added, “aren’t coming yet, but they better come.”

What Senator Klobuchar doesn’t understand is that Facebook’s business model, indeed its entire existence, depends upon violating the very privacy she is concerned about. And so much for trying to direct the selective outrage at Steve Bannon and the Trump campaign.

Facebook users are waking up to just how much private information they have handed over to third-party apps. Users are sharing their shock on Twitter at discovering that thousands of software plugins for Facebook have been gathering their data. Some of the better known apps that may be connected to your profile include those of popular sites like Amazon, Buzzfeed, Expedia, Etsy, Instagram, Spotify and Tinder.


Facebook is in SERIOUS trouble

It turns out that the Obama campaign did the same thing that Cambridge Analytica did… only with Facebook’s full knowledge and approval:

A former Obama campaign official is claiming that Facebook knowingly allowed them to mine massive amounts of Facebook data — more than they would’ve allowed someone else to do — because they were supportive of the campaign.

That’s because the more than 1 million Obama backers who signed up for the [Facebook-based app] gave the campaign permission to look at their Facebook friend lists. In an instant, the campaign had a way to see the hidden young voters. Roughly 85% of those without a listed phone number could be found in the uploaded friend lists. What’s more, Facebook offered an ideal way to reach them. “People don’t trust campaigns. They don’t even trust media organizations,” says Goff. “Who do they trust? Their friends.”

The campaign called this effort targeted sharing. And in those final weeks of the campaign, the team blitzed the supporters who had signed up for the app with requests to share specific online content with specific friends simply by clicking a button. More than 600,000 supporters followed through with more than 5 million contacts, asking their friends to register to vote, give money, vote or look at a video designed to change their mind.

Let’s see… 5 million times $40,000 is $200 billion in potential FTC fines. Another $200 billion on top of the $2 trillion they might already owe.
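For what it's worth, the back-of-envelope math checks out. A quick sketch, using the 5 million contacts reported above and the consent decree's reported $40,000-per-violation maximum (whether each contact would actually count as a separate violation is my assumption, not a legal finding):

```java
// Back-of-envelope FTC exposure from the targeted-sharing numbers above:
// 5 million contacts reached, at a reported maximum of $40,000 per violation.
public class FineEstimate {
    public static void main(String[] args) {
        long contacts = 5_000_000L;
        long finePerViolation = 40_000L;
        long exposure = contacts * finePerViolation;
        System.out.println(exposure); // 200000000000, i.e. $200 billion
    }
}
```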


The machine uprising has begun

I’m still trying to figure out how self-driving cars can possibly be economically viable, considering the ruinous insurance costs that will be involved:

A self-driving Uber car hit and killed a pedestrian as she was crossing the road in the first fatality involving the controversial fleet of autonomous vehicles. Elaine Herzberg, 49, was hit by an SUV around 10pm on Sunday in Tempe, Arizona, when she was walking outside of a crosswalk. She was immediately rushed to the hospital where she died from her injuries, ABC 15 reported. Tempe Police say the SUV was in autonomous mode at the time of the crash.


Facebook: failure or fraud?

It’s fascinating to see that after all the ways that Big Social is spying on everyone, what has the media in an uproar is the belated realization that a sword can always cut two ways. They didn’t mind when they knew it was the Obama campaign, Hillary Clinton, and the SJW-converged corporations that were data-mining, but now that they realize the Right – and in particular, Steve Bannon and Donald Trump – can and have done exactly the same thing, they suddenly have reservations about the wisdom of letting organizations have access to that level of data.

Facebook is facing an existential test, and its leadership is failing to address it.

Good leaders admit mistakes, apologize quickly, show up where they’re needed and show their belief in the company by keeping skin in the game.

Facebook executives, in contrast, react to negative news with spin and attempts to bury it. Throughout the last year, every time bad news has broken, executives have downplayed its significance. Look at its public statements last year about how many people had seen Russian-bought election ads — first it was 10 million, then it was 126 million.

Top execs dodged Congress when it was asking questions about Russian interference. They are selling their shares at a record clip.

The actions of Facebook execs now recall how execs at Nokia and Blackberry reacted after the iPhone emerged. Their revenues kept growing for a couple years — and they dismissed the threats. By the time users started leaving in droves, it was too late.

There’s no outside attacker bringing Facebook down. It’s a circular firing squad that stems from the company’s fundamental business model of collecting data from users, and using that data to sell targeted ads. For years, users went along with the bargain. But after almost a year of constant negative publicity, their patience may be waning.

Facebook did not initially respond to questions or a request for comment from CNBC.

Here is a less generous theory. We know that Facebook was being propped up by the CIA from the start. But the CIA is now under the control of the God-Emperor. Which means that a) Facebook’s dirty laundry is more likely to come out, and, b) Facebook is not going to be financially propped up the way it has been from the very beginning.

Which, of course, raises the interesting question about whether it ever was a viable business at all. Or even a legal one.

Facebook may face more legal trouble than you might think in the wake of Cambridge Analytica’s large-scale data harvesting. Former US officials David Vladeck and Jessica Rich have told the Washington Post that Facebook’s data sharing may violate the FTC consent decree requiring that it both ask for permission before sharing data and report any unauthorized access. The “Thisisyourdigitallife” app at the heart of the affair asked for permission from those who directly used it, but not the millions of Facebook friends whose data was taken in the process.

If the FTC did find violations, Facebook could be on the hook for some very hefty fines — albeit fines that aren’t likely to be as hefty as possible. The decree asks for fines as large as $40,000 per person, but that would amount to roughly $2 trillion. Regulators like the FTC historically push for fines they know companies can pay, which would suggest fines that are ‘just’ in the billion-dollar range. Given that there are already multiple American and European investigations underway, any financial penalty would be just one piece of a larger puzzle.

Would you not just love to see Facebook hit with a $2 trillion fine?