Facebook suspended the account of whistleblower who exposed Cambridge Analytica (yahoo.com)
683 points by rock57 on March 19, 2018 | 172 comments


The arrogance of Facebook's response to this breach, quibbling over what to call it and now this, is mind-blowing. Their "it wasn't a robbery because we left the front door open" excuse may finally bring about trans-Atlantic regulation of social media.


>The arrogance of Facebook's response to this breach

Is it even clear that there was a 'breach' of any kind that Facebook was responsible for? Correct me if I'm wrong here, but it seems like the chain of events is:

1.) Third party (Aleksandr Kogan) creates 'personality quiz' app, Facebook users opt-in to share information from their profile

2.) Aleksandr Kogan hands off data gathered by the app to Cambridge Analytica, violating Facebook TOS

3.) Whistleblower (Christopher Wylie) lets world know that (2) happened

4.) Media / public gets out pitchforks and blames incident on Facebook

It really seems like Aleksandr Kogan, not Facebook, is the problem here.


>"Is it even clear that there was a 'breach' of any kind that Facebook was responsible for?"

How about a breach of the basic responsibility to inform users that their data has been used inappropriately and transferred to a third party? FB knew about this as far back as 2015 [1]. Did they let users know at any point? No.

Further, FB's Chief Security Officer's tweets on Friday failed to show any concern for FB users who were used as pawns. His main concern was to point out that this wasn't actually a FB problem.

And let's not forget that Mark Zuckerberg dismissed the idea that fake news on Facebook influenced the US elections as "a pretty crazy idea."[2]

So the "pitchforks" are a culmination of a significantly longer time frame and not just a reaction to this single news story.

[1] https://www.theguardian.com/us-news/2015/dec/11/senator-ted-...

[2] https://www.theguardian.com/technology/2016/nov/10/facebook-...


"These guys unlawfully got data from a lot of our users. Surely they will delete it if we ask them to, right?

"Now they want to buy a lot of ads on our platform, great!

"Also their ads are getting a lot of engagement somehow, let's make it cheaper for them to buy more!"


According to an NYT article [0] the tweets had been requested by FB comms:

"Over the weekend, after news broke that Cambridge Analytica had harvested data on as many as 50 million Facebook users, Facebook’s communications team encouraged Mr. Stamos to tweet in defense of the company, but only after it asked to approve Mr. Stamos’s tweets, according to two people briefed on the incident.

After the tweets set off a furious response, Mr. Stamos deleted them."

[0]: https://nytimes.com/2018/03/19/technology/facebook-alex-stam...


> It really seems like Aleksandr Kogan, not Facebook, is the problem here

Kogan probably breached the terms he agreed to with Facebook. But fifty million people trusted Facebook with their data and, when asked in their privacy settings, said they didn't want it shared with third parties. That information was then shared with third parties.

If someone calls my bank and convinces customer service they are me, it would be reasonable to say the bank was breached. Not electronically. But breached nonetheless.


>If someone calls my bank and convinces customer service they are me, it would be reasonable to say the bank was breached. Not electronically. But breached nonetheless.

That's a bad analogy. A more appropriate one would be if you called your bank and told them to allow a 3rd party to have access to all of your accounts, then blamed your bank when the 3rd party drained all your accounts.

Facebook has no obligation to protect your data from yourself any more than your bank has the obligation to control what you spend your money on.


Agree with your general statement, but still a bad analogy. You tell your bank to release all your info to a third party, then a fourth party learns all your balances.

The problem seems to be that people do not realize what it means to agree to information access for a third party, and the question remains whether Facebook is to blame for it being released to a fourth party.

So really the problem is that people are not aware of the scope of the information aggregate of their actions on social media and how it is or may be used or abused.


>the question remains whether Facebook is to blame for it being released to a fourth party.

And I personally don't think Facebook should be to blame for it ending up in a fourth party's hands. They gave it to the 3rd party at the request of the user. What that 3rd party does with it should be of no concern to Facebook.

If Facebook has to police what users are doing with their own data on other platforms, then strict DRM is going to become a legal necessity. That is what I see when I read comments saying how Facebook should take responsibility here.

I agree that people don't seem to grasp what it means when they click the "I agree" button, and that Facebook (and others) should work much harder on getting the user to understand the full extent of what they are doing, as well as offering greater control over what data is shared and when. But that's not a problem that's easily solvable (or possibly even solvable at all). And calling for Facebook to simply not accept this information (which isn't a possibility; they are a social network, so social data has to exist on their systems), or to not allow users to have control over their own information, is the exact opposite of what I and many others have been fighting for for years.

If I want to export my information, or give it to a 3rd party, that is my right. And Facebook should have no ability to stop me from doing that. This will lead to people giving their personal information to parties that they do not intend to, but I feel that is a risk worth taking to keep your information yours.


Should you be able to share your data with whomever you wish? Yes, I agree to that.

OTOH you and I both know 99.99% of users don't understand what's going on with their data, so the question here is if adult users should be held responsible for something they don't understand and whether Facebook is actually responsible for not informing its users clearly of what was going on.

Do you think it would have made a difference if Facebook had used a red blinking message alerting users before sharing their personal data with a third party? I think so. So yeah, the responsibility is not 100% on Facebook's users.


I never said that the blame was 100% on users, or 100% anywhere.

Like most things in life, it's shared. Facebook should absolutely be held to a higher standard here. They should be explaining what a user is really doing when they try to do it, as well as giving more fine-grained controls over what they provide to these apps.

But I also don't think that a red blinking message would have changed anything, going by how Android permissions used to work (a big warning you had to agree to at install time, saying what the app gets). Apps will make excuses for why they need data, users will want the app and not care or not want to think about what they are giving up, and not a damn thing will change. (I can't count the number of times I've been told "there will be a warning saying that X needs to access Y, you'll need to say yes to use this software" in various programs, and I've never seen a user say no...)

Like I said, having this ability WILL cause people to give out information they didn't mean to. There's no hesitation there, it's going to happen, and probably pretty frequently, but I still feel that's a necessary evil to allow data liberation.

Also, I think stronger legal structures would help here as well. You aren't going to stop it, so make the consequences for getting caught much stricter, and heavily punish companies that are caught using data in ways that aren't okay.


> So really the problem is that people are not aware of the scope of the information aggregate of their actions on social media and how it is or may be used or abused.

Simple case: G-fucking-mail. A lot of people are on it, so even if you're not using it, most of your emails end up there, with a free link to info from other people's contact lists (names, phone numbers, etc.). Shadow profiles are what people will be complaining about 5 years from now, if they've just discovered what sharing things with 3rd parties means.


> That's a bad analogy. A more appropriate one would be if you called your bank and told them to allow a 3rd party to have access to all of your accounts, then blamed your bank when the 3rd party drained all your accounts.

...after the bank provided the infrastructure to give full control to a 3rd party, frequently encouraged me to do it, and gave me the impression that I delegated access only for a specific task and a limited time, even though such a limitation was actually impossible to implement by technical means, which the bank knew.


Users don't understand how data works and what can be done with it, so it isn't reasonable to expect that users will know how to protect themselves. The entire system is designed to hide what is being done to users from the users.


> A more appropriate one would be if you called your bank and told them to allow a 3rd party to have access to all of your accounts, then blamed your bank when the 3rd party drained all your accounts

Better analogy: you call your bank to allow a third party to have access to all your accounts. My account gets drained. Still a breach.


... But you previously gave me the ability to manage your accounts, then I gave it to the 3rd party that then drained your account.

Still not a breach of the bank, nor a breach of Facebook.

The app didn't have any "extra" access or anything other than what the person who installed it would have. The app used the information that the person gave it, and that included information on friends that the person had access to. If you were impacted, blame yourself, your friends that you gave information to who gave it to a 3rd party, and/or the company that made the app.

All Facebook did was build the metaphorical parking lot that you got robbed in.

While I agree that there is more they can do here to prevent this kind of thing (like specific and explicit controls on what data an app wants/needs at install time), acting like this is a breach of their information is wrong.


Just like in real life, if someone slips and falls in the lot, the liability is on the owner of the lot. Owners of parking lots are incentivized to make sure people don't slip and fall or they will get sued.


How about your parents make you a joint account holder on their bank accounts to help them with their finances as they age, and then you allow a fraud to drain their account because you're ethically compromised?


Everyone sets up their own account's privacy settings though, and as far as I know, this information didn't include anything that wasn't already publicly available.


That is not clear at all, based on another article. [1] Further, Facebook's lawyers told Wylie that the data was illicitly obtained, though now apparently they've changed their approach.

[1] 'Kogan was able to throw money at the hard problem of acquiring personal data: he advertised for people who were willing to be paid to take a personality quiz on Amazon’s Mechanical Turk and Qualtrics. At the end of which Kogan’s app, called thisismydigitallife, gave him permission to access their Facebook profiles. And not just theirs, but their friends’ too. On average, each “seeder” – the people who had taken the personality test, around 320,000 in total – unwittingly gave access to at least 160 other people’s profiles, none of whom would have known or had reason to suspect.' --from https://www.theguardian.com/news/2018/mar/17/data-war-whistl...


The whistleblower said they had access to private messages. Are those ever publicly available? I don't see any privacy settings for messages in my FB profile.


It depends on when this all happened. In the past a lot more information was available than now.


It was a breach, followed by business-motivated poor handling of the breach. Facebook learned about the TOS violation, requested that CA delete the data, and did not follow up with verification that it was indeed deleted. The failure to follow up is compounded by the fact that:

a) Facebook was notified that CA did not delete the data; and

b) The CA-held profiles were used to target politically motivated advertising on Facebook.

One can't avoid linking the two facts. Facebook has no incentive to aggressively protect its user data when it hurts ad spending on the platform.


The app was allowed to scrape friend data as well, meaning that many more people's information got exposed. That is on Facebook.


I keep seeing it listed just as "friend data", and no one clarifying that.

Does it mean "friends lists", or "the data of the app user's friends profiles"? And in that second case, "the data of the app user's friends visible to the app user", or "as much data as if the friends had installed the app themselves"?

Unless it's that final situation, it's exactly how I assumed Facebook scraping worked already.


It's something closer to the latter: presumably, it was scraping what friends posted publicly.


Using an "honor system" doesn't work for privacy, considering thousands of apps have access to the data. The problem is not Aleksandr Kogan.


People are going to call it a breach because it fits their PoV. When this happened with Craigslist, people were calling Newmark all kinds of names and quibbling about ownership of the data, etc. (that since it was user-generated, Craigslist didn't really own it, so all those *pad companies were not in breach, and so on).

That is all it is.


Clearly, Facebook lost hold of users' data to third-party players, and even now it has no way to verify whether the data has actually been deleted.

If Facebook has no means to prevent such malicious usage of users' data, it should be more careful about opening up access to it in the first place.


Facebook trusted user data to a third party who didn't turn out to be trustworthy. How are they not due some share of the blame? Do you think the TOS provides sufficient (or any) protection for this data in the real world?


And if you want to know what kind of company Cambridge Analytica is, check out this undercover reporting, just out this morning: https://www.nytimes.com/2018/03/19/us/cambridge-analytica-al...

SLIMY


That sounds like the use case for nearly every shady Facebook advertiser. Surely there are many other, privately gathered troves of data out there that are much larger?


With that logic anyone making a quiz should be able to get data on 50M users and that’s not a problem?


My understanding is that the people who answered the quiz were a fraction of the 50 million people whose data was "used." The quiz was a small sample so they could run their analytics on the larger data set.


The issue was that, back in the day, the FB APIs were not as tightly scope-controlled as they are now: you were not just giving access to your own data but also to that of your friends, even if they had not consented.
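To make the scoping point concrete, here is a hypothetical sketch (not Facebook's actual client code) of the OAuth dialog URL a quiz app from the pre-2014 Graph API v1.0 era could construct. Permission names of the `friends_*` form existed in that API version; the app ID and redirect URI below are placeholders.

```python
# Hypothetical sketch of a pre-2014 Graph API v1.0 OAuth request.
# "friends_*" permission names are from that API era; the app ID and
# redirect URI are placeholders, not real values.

FRIEND_SCOPES = [
    "friends_likes",     # friends' page likes
    "friends_location",  # friends' current city
    "friends_birthday",  # friends' birthdays
]

def build_oauth_url(app_id, redirect_uri, scopes):
    """Build the OAuth dialog URL a quiz app would send its users to."""
    scope_param = ",".join(["user_likes"] + scopes)
    return (
        "https://www.facebook.com/dialog/oauth"
        f"?client_id={app_id}"
        f"&redirect_uri={redirect_uri}"
        f"&scope={scope_param}"
    )

# One user's consent here grants the app data about friends
# who never saw the permission dialog at all.
url = build_oauth_url("APP_ID", "https://example.com/cb", FRIEND_SCOPES)
print(url)
```

The key design flaw this illustrates is that only the installing user saw the dialog; the friends whose data was covered by the `friends_*` scopes were never asked.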


The political momentum is building for America to import GDPR. Feels good.


America has already basically imported GDPR because all of the companies building GDPR tools are not going to restrict them to the EU for fear of accidentally blocking a genuine EU citizen.


Might as well formalize it by enacting it into law. Wouldn't want to have conflicting regulatory regimes decreasing business confidence.


Yeah, like those "accept cookies" pop-ups, which I despise.


The cookie pop-ups are not actually the fault of regulators. The intention of the "cookie law" was to prevent tracking of users without their consent (on a much more granular level), but they neglected to include GDPR-style rules disallowing degradation of service. So companies just made all cookies technically opt-in with a single button (but where opting out means you cannot use the website), which defeats the spirit and purpose of the law.

It is my understanding that GDPR will not allow for such simple workarounds by companies to just continue doing what they were doing previously.


Can someone please explain to me what the breach is here?

As far as I can tell this was just an app using the API as intended. Except they did some additional modeling on their backend to organize/profile users.

They abused the TOS for sure, but was there an actual security breach?


Probably worth rehashing the infamous messages:

> Zuck: People just submitted it.

> Zuck: I don't know why.

> Zuck: They "trust me"

> Zuck: Dumb fucks.

A cynic would interpret the current state of the world to suggest that Zuckerberg, and by extension Facebook, considers a large segment of the world population "Dumb fucks".

http://www.businessinsider.com/well-these-new-zuckerberg-ims...


That text was so long ago I don't think it's relevant anymore. Oprah Winfrey was also a crack cocaine smoker in a cycle of abusive relationships in her late twenties. If people choose to change their perspectives they can, and everything indicates that Mark has matured quite a bit since then.


What indicates that Mark has matured quite a bit since then?


He doesn’t just outright say that stuff anymore.

No evidence he doesn’t still think it, but he doesn’t say it.


It's also the kind of thing I'd say to some of my friends as a joke. Maybe he meant it, but in isolation it's not particularly damning.


No wonder. Wasn't he planning to run for president?


That was from a leak. So how do you know he doesn't say it anymore?


[flagged]



She did say:

> there are still generations of people, older people, who were born and bred and marinated in it, in that prejudice and racism, and they just have to die.

She didn't actually say white, but what else was she referring to?


Let's not be reactionary and engage with the context of what she's saying.

This is exactly the way social tides change. At a first approximation, people develop attitudes when young and retain them until they die (it takes quite a lot of effort to change the mind of an adult, very little to change the mind of a child). If you have some social issue where you can see a generational wave, part of what's going to change the "average" attitude is for the older generation to die off.

She's not saying "kill all the old racists" or even "I'd like these people to die". She's saying "our society's average attitude towards race relations is going to shift in [what she thinks is the right direction] as the generation of people who were raised in a time when open racism was the prevailing attitude die off".


> Eine neue wissenschaftliche Wahrheit pflegt sich nicht in der Weise durchzusetzen, daß ihre Gegner überzeugt werden und sich als belehrt erklären, sondern vielmehr dadurch, daß ihre Gegner allmählich aussterben und daß die heranwachsende Generation von vornherein mit der Wahrheit vertraut gemacht ist.

> "A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it."

- Max Planck, Nobel Prize winning physicist who helped invent quantum mechanics

The impact of generations on social change is not an uncommon or violent way to look at the world.


Especially true of the evolution in scientific thought as Thomas Kuhn famously argued: https://en.wikipedia.org/wiki/The_Structure_of_Scientific_Re...


If that is an accurate quote, I don't really see it calling for violence, which is what people seem to be damning her for. Rather, it seems like what she's suggesting is that a certain segment of the population will never be convinced not to be racist because they've just been racist for too long. The only way for racism to die out in America, then, is for this segment of the population to die out as well.

A lot of American GIs coming back from WWII, Korea, and Vietnam developed a real and lasting hatred of all Asians. Nothing would change their mind because of their wartime experiences. The only thing that will change that feeling in American culture is when those men all die and nobody else carries such experiences with them.


Exactly. I didn't hear it as a call for genocide but just a rational reflection on the fact that a certain generational segment will have to die before significant changes are possible.


This is an extremely common opinion among progressive Southerners of every color.


I see, so the proof that she didn't say so is a website with 3 embedded youtube videos where ... she says exactly that.

Hmm, okay.


Yes, she said it. From what I can tell you've used what she said out of context. The way you used the quote implies that Oprah Winfrey wants these people to die or is imploring other people to kill the subjects of her quote. If you did not intend to use the quote out of context I'd suggest, in the future, using the full quote. When given the full context of the quote your post makes no sense.


"Everything indicates"?

Facebook's vision for their product and the future world is appalling. Zuckerberg may be doing his best, but he continues to helm a product/service which is structurally bad for its users even before you consider the surveillance aspect. What could you possibly have in mind as evidence for Zuckerberg being any more than superficially competent for the social role he is playing?


Social role? Until further notice, America is a capitalist country and Zuckerberg’s score says it all. Who needs social roles?


The US is not purely capitalist and has not been for a long time (if ever). There are many laws and regulations to limit the influence of capitalism and to rein back its (in pure form) detrimental effects on society. You might not agree with that, but that is the system as it is.


Sure, if by matured, you mean that zuck has refined his ability to harvest and use personal data without any sort of qualms.

What Facebook has become is technically brilliant, but the "dumb fuck" attitude shows no signs of having lessened.


More importantly, those views were of the early days of the company. It's very likely that the early hires and executives also shared the same views. Unless everyone in that cohort had a change of heart or the entire executive slate was turned over, the company likely reflects the same views about the "dumb fucks"


More important, I think, is that it was the prevalent attitude when potentially (probably) core systems were being developed, systems which are undoubtedly still in place in some form today and which very likely carry forward those same biases. If you have little respect for your users outwardly, then I'm sure it reflects itself in the code. I mean, isn't this how we've ended up with GDPR in the first place?


He has had little or no blowback from it. So did he learn anything and change, or was his view that people are dumb simply confirmed?


2018: Dumb fucks at scale


>> A cynic would interpret the current state of the world to suggest that Zuckerberg, and by extension Facebook, considers a large segment of the world population "Dumb fucks"

I wish I could contradict you but I can't...a large majority of the world population is exactly like that.


Didn't Zuck take the idea for Facebook from someone else and then run with it?


It's a popular story, but out of all the gripes you could have against Zuck, it's perhaps the least relevant. Especially so on HN, where the daily mantra is "the idea is worthless but the execution is everything".

There are many valid criticisms against Zuck, and I don't think this one quite holds value as much as the others.


No. The "idea" was already quite old. He scammed the scammers: he stole the company from the "idea guys" who tried to hire a naive nerd to build a company for them and hand it over.


Maybe he should have built his own platform then instead without accepting payment from the “idea guys” to build it for them.


ConnectU wasn't even supposed to be a social network originally, but rather a dating site. You can still see the original mockups of it on archive.org, and it was honestly quite crappy.


They're now getting their doors kicked in by the Information Commissioner: https://twitter.com/traciemac_Bmore/status/97582507693856768...

(How much will be left after they pre-announced the raid on Twitter, I don't know.)


I'm still shaken with the magnitude and strength of the revelations that have been made in the last five hours on Cambridge Analytica (though some of the things sound more like SCL -- they share the same CEO).

At 7pm GMT we had the Channel 4 News investigation[0] which featured Alexander Nix, the CEO of Cambridge Analytica, in which he appeared to be bragging to a fictional Sri Lankan businessman (who was in fact an undercover reporter) about the things they can do to discredit his opponents involving (with a delicious dose of irony) hidden cameras. Such tactics sounded a lot like they may involve trafficking of Ukrainian sex workers. There were also things that sounded a lot like blackmail and spreading of things that may not be true.

Then, just after that undercover story broke, we had the Facebook raid, which to my untrained eye really looked a lot like heroic efforts to protect data of the more evidential variety from being unnecessarily disclosed to the authorities or the public.

At 10.30pm we got an interview [1], filmed before the undercover reporting broke, with Alexander Nix. Most memorable to me was that Mr Nix seemed to confidently assure us that Dr Aleksandr Kogan had merely shared with them the gradients with which to build additional models, and had never shared the data harvested from FB, as the whistleblower in this article had alleged. We were also either told or given the impression that this was a great big misunderstanding and all part of a spectacularly coordinated attack by journalists who were upset about Trump.

[0] https://www.channel4.com/news/cambridge-analytica-revealed-t...

[1] https://twitter.com/BBCNewsnight


Just get the f### off Facebook, already.

If you're the kind of person who votes based on targeted advertising, or the hyperbolic posts of people who vote based on targeted advertising, don't ask Facebook to change. Get the f### off Facebook.

If you don't like how Facebook is being used as an addictive propaganda tool by any and all political actors, including Facebook itself, then get the f### off Facebook.

Ask the Facebook "friends" you care about for an email address, phone number, or other messaging account and get the f### off Facebook.

You don't need up to the minute information on the playdate of your cousin's college room mate's toddler. Get the f### off Facebook.


Enough with the #. If you want to swear, swear. If you don't want to swear, don't.

For fuck's sake.


Hell yeah, as a New Yorker I wholeheartedly agree.


As a fecking Irishman, I also agree


Moi aussi, crotte. (Me too, dammit.)


There's some interesting analysis waiting to be done on the preference for words from a certain category to be used as expletives in different languages. Let's start with English, a language most here seem to be familiar with. English seems to prefer sexual acts (fuck), sexual organs (cunt, dick, bollocks), excrement (shit, piss), the combination of the two (fucking asshole/cunt/...), religion (damn, goddamn, etc) and more.

In Dutch, my native language, one is more likely to use diseases (kanker (=cancer), tering (=tuberculosis), typhus (=typhoid fever), etc.), hardly any sexual acts or organs (only 'lul' (=dick) really, although 'mierenneuker' (=ant fucker) is also a common mild swear word), excrement (schijt (=shit), zeiken/zeikert/zeikstraal (=to piss/someone who pisses/jet of piss)), and religion (verdomme (=damn), godverdomme (=goddammit)).

Swedish, my second language, is not a good language for swearing as it uses rather silly, powerless words of sometimes dubious origin, mostly related to religion (fan (=the devil), satan, (i) helvete (=(what the) hell)), sexual organs (kuk (=dick), fitta (=cunt)), excrement (skit (=shit)) and what seem to be random words (sjutton (=the number seventeen...?)), weird combinations (jävla skitstövel (=devilish shit boot)).

German is the most proficient swearing language I know, both due to its rich vocabulary as well as the satisfying vocalisation offered by the language, using words from just about all categories except for diseases (wherein it differs from the related Dutch language). I also know French but my knowledge of French swear words is lacking beyond the basics.


I remember a Louis C.K. routine about the phrase “the N word.” He summarized it as “I’m not going to say the word, but I’m still going to put it in your head.”



I'm feeling very outraged.


>If you're the kind of person who votes based on targeted advertising, or the hyperbolic posts of people who vote based on targeted advertising, don't ask Facebook to change. Get the f### off Facebook.

Yeah - see that's not how it works. CA and companies like them have developed far more clever ways to influence you than targeted advertising. The best of these techniques go undetected by you, and are coordinated by machine learning systems which know more about you than you do. Basically, if you are online, you are under their influence whether you know it or not. Don't fall for the illusion that you aren't susceptible. As emotional as you appear to be from your posting style, you are exactly the kind of person that they target.


Don't mistake my form of emphasis as overly-emotional behavior, or my statements as any indication that I see myself as significantly more rational than the average person.

There are steps you can take to make yourself less susceptible to propaganda, and it starts with disengaging from the Huxleyan, social media, dopamine machine. Until you do that it's near impossible to get a real sense of your own cognitive biases.


So the millions of voters who are by any definition technically illiterate and consumed by said dopamine machine... possibly your loved ones... do you not seek justice for their sake?


Justice? Because political actors used the available tools to their advantage?

If we're going to go down that line, what justice should be served to Jimmy Kimmel, who regularly pushes his political agenda in what's supposed to be a variety show?


My problem is that they compile information on people whether or not they have a FB profile (the shadow profile). There is something about this (and the various other privacy-violating aggregators for hire) that really bothers me. We should be having a discussion about a broader solution that applies to control of a person's data by all third parties, not just singling out Facebook.

This is why I am on the fence between deleting the account and just poisoning the data.


>If you're the kind of person who votes based on targeted advertising, or the hyperbolic posts of people who vote based on targeted advertising, don't ask Facebook to change. Get the f### off Facebook.

The people who vote based on targeted advertising may not be the ones who have a problem with it. This is like people complaining that the pool has too much pee in it, and you telling the complainers to stop peeing in it.


This is a very privileged perspective.

Facebook spent years getting billions of people on the platform and those users haven't left due to lock-in. They are connected to their families, their businesses, their friends.


Indeed.

I barely use Facebook anymore but because I live in a different country that's how I keep up with friends and family. If someone wanted to contact me they would do it through Facebook.


I have a personal story from inside Facebook to share. And when I shared this story on my Facebook, my personal account was suspended too.

5 years ago a Facebook recruiter reached out to me and invited me to the W Hotel in Chicago. I was very excited, not for the job, but for the opportunity to meet with senior Facebook managers and tell them about an evil thing Facebook does. Here is the background story:

I am Kurdish, from Iran. Iran has many provinces; one of them is called Kurdistan. In the Facebook profile section for Hometown you could pick any Iranian province except Kurdistan.

At first I thought it was a bug. For years and years we submitted bug reports and collected petitions, but Facebook never responded about why the Kurdistan province could not be picked while all the other provinces could.

Till one day an internal guidance document leaked out of Facebook. That explained it all! One of the pages was about Kurdistan, and it explained that any reference to Kurdistan is considered terrorism. That was at the request of the Turkish government.

In Turkey, the word Kurdistan is forbidden, and many people in Turkey have been imprisoned for speaking Kurdish. However, in Iran we officially have a province called "Kurdistan Province", and the Iranian government recognizes the name Kurdistan for my homeland. https://en.wikipedia.org/wiki/Provinces_of_Iran

But Facebook decided to enforce the Turkish government's racist rule on the other countries that have a Kurdistan (Iran, Iraq, Syria...).

Also in that leaked guidance memo, the Kurdistan flag was considered illegal, and hundreds of Kurdish pages and accounts got banned for displaying it.

While the Kurdish flag is illegal in Turkey, it is officially recognized in the Constitution of Iraq for the Kurdistan Regional Government.

So when they invited me to the W Hotel to recruit me, I thought: yes, finally I can meet these people in person. Because as a Kurd I have no importance and they will never respond to me, but as a software engineer I am pretty attractive on the market.

So I put the question to one of the managers and told him my story: how for years and years I sent them emails and nobody got back to me, and how we made petitions about this so-called bug.

He said these things are decided by higher management.

I asked him how often he voiced disagreement to higher managers, or with Mark Zuckerberg's policies, when he had a different opinion. He responded that if he disagreed with them he wouldn't work there.

I left the W Hotel in Chicago 5 years ago refusing to proceed with a job at Facebook. I knew Facebook was on the wrong path, and today I see that prediction coming true.

Even today, when Turkey committed a massacre in the Kurdish city of Afrin, Facebook blocked many voices inside the city who were showing the massacres by the Turkish government.

10 years ago FB came after the Kurds and you said "not my problem." Today they are coming after all of you.


I just find it disgusting that, as a company, they feel they have the right to act in such a way. At the end of the day I guess it's safe to assume that somebody inside or outside of Facebook has an agenda to proceed with actions like these and I'm sure there are many other cases of things like this around the world but the entire thing just leaves the worst taste in my mouth.


> I just find it disgusting that, as a company, they feel they have the right to act in such a way.

Facebook's culture has a pretension that its internal policies are something like the law itself:

https://talkingpointsmemo.com/edblog/facebooks-heading-towar...:

> Facebook is so accustomed to treating its ‘internal policies’ as though they were something like laws that they appear to have a sort of blind spot that prevents them from seeing how ridiculous their resistance sounds. To use the cliche, it feels like a real shark jumping moment. As someone recently observed, Facebook’s ‘internal policies’ are crafted to create the appearance of civic concerns for privacy, free speech, and other similar concerns. But they’re actually just a business model. Facebook’s ‘internal policies’ amount to a kind of Stepford Wives version of civic liberalism and speech and privacy rights, the outward form of the things preserved while the innards have been gutted and replaced by something entirely different, an aggressive and totalizing business model which in many ways turns these norms and values on their heads. More to the point, most people have the experience of Facebook’s ‘internal policies’ being meaningless in terms of protecting their speech or privacy or whatever as soon as they bump up against Facebook’s business model.


Yeah it's all a ruse. The act of "rolling out new privacy features" was literally them introducing the features they'd use to capture as much data as possible. Tech people were screaming to anybody who would listen while those out of the loop looked at us as, you guessed it, crazy people.


It's a culture of silencing disagreement, and of leadership without a spine; in reality this is less expensive for a company, with the external cost borne by society, of course.


My disgust lies with the regime that's suppressing that regional identity. Unless we have reason to believe that Facebook would still disallow the Kurdistan option even if governments didn't criminalize recognition of Kurdistan, then this blame rests on Turkey.

I can't find a reason to fault Facebook's response to this harmful government policy. Would it be better to allow people to select the "Kurdistan" option, knowing full well that this could cause people to be imprisoned, or killed? "Facebook disallows selecting of contested regional identities" is bad, but not nearly as bad as "Facebook helps oppressive governments hunt down disenfranchised people".


I can confirm the Iranian government not only has no problem with the word Kurdistan, Iran has an official, constitutionally recognized province called Kurdistan (same with Iraq). I am from Iran, living in the USA, and I should be allowed to enter my hometown's province just like every other Iranian province (Tehran, Isfahan, Shiraz, ...), but Facebook enforces Turkey's disgusting rules on Iran too.


Well, you know what? I'm not going to lie: I didn't think that far into it, but now that you mention it, it really should have been obvious to me. In one way FB is shielding people from harm. On the other hand, if enough people could get information through FB via groups created and populated by people marked with that Kurd option, it might help in some way, maybe a roundabout way; I'm unsure. FB can be a useful tool, but at the same time you are right: it's not their sole responsibility, nor should they be held responsible for just trying to minimize the damage they cause through their service / website / features.


It is surprising to me how easily Internet companies are bullied into censorship. Services should start carrying the "Banned in Turkey" flag as a badge of honor. "Banned in China" is required nowadays if you want to be taken seriously anyway.

For individuals, "Banned from Facebook" carries a similar note. For some time now, Kurdish interests have been persecuted by Facebook, and not just in Turkey.


I think the likes of WeChat and Alibaba have Western companies thinking twice before proudly wearing a "Banned in China" badge.


Turkey is no China; China is an economic power and contributes a lot to the world. Turkey is nothing like that. They do contribute to promoting a jihadi, extremist version of Islam, but that is not the contribution you want.


In all fairness, Turkey does have a sizable economy. It's not clear whether that will still be true in a year, though: not just people, but a lot of capital has been fleeing the seizures.


Turkey is an embarrassment to NATO members and their will should never have influenced Facebook. Facebook is evil if they do not recognize Kurdistan.


Thank you for telling this story :(

Situations where countries try to dictate this kind of thing outside their borders produce stupid bad results. A previous example: no maps are legal in both India and Pakistan. https://blogs.msdn.microsoft.com/oldnewthing/20030822-00/?p=...


Against the government we have the Bill of Rights, among many other protections, but against corporations like Facebook we have literally nothing. You can't even sue them in court unless you're a millionaire. And much of the public supports their operation and exploitation in the name of money.

If only there was a way to create a bill of rights to protect us against corporations... if only there was a force more powerful than them that could keep them in check ... Maybe we'd give this force a monopoly on violence so they could protect us from the fucking assholes at Facebook... If only /s


Thanks for sharing this


"And many people in Turkey been prisoned for speaking Kurdish"

The ban on languages other than Turkish came after the 1980 military coup, and as far as I know the law was abandoned in 1991. I don't think there are any cases where someone was imprisoned solely because they spoke Kurdish.


Thank you. Upvoting and replying so I can find this again.


-


Response to your bullets with their corresponding number.

1- The story I am talking about was 5 years ago; at that time you could not select Kurdistan as a hometown. It was the previous version of the Facebook profile (before they changed it), and it was like that for more than 10 years. We fought for 10 years, with hundreds of thousands of petitions, so your screenshot is irrelevant.

The story of Kurdish accounts being banned for having the Kurdish flag is still true, especially during the massacre that happened in the city of Afrin just a few days ago.

2- I did not work at FB. FB reached out to me to hire me, I went there, and I asked them these questions inside FB. I will NEVER ever work for FB. Crappy technologies, crappy company, not in line with my values.

4- They invited me to the W Hotel in Chicago; I might still have the conversation in my LinkedIn. What are you trying to say? Are you trying to imply I made this story up? I believe the recruiting team (including managers and developers) was visiting Chicago, and the job was not in Chicago.


And in the end you are questioning my credibility. You can check out my GitHub; I am a public person. I am actually presenting K8Guard for Kubernetes at the Linux Foundation in a few weeks. I can stand and testify to every claim I made.

The leaked Facebook document also verifies a lot of what I said. (The leaked document was very old, not sure how many years, maybe 6-7 years ago.) It clearly had examples of which flags and which words to ban, including innocent Kurdish flags.


Why are people calling this a data breach? As I understand it, CA just scraped the data from users who authorized their app to do so. Am I missing something here?


CA also scraped the data of friends of those users, meaning that many millions of people had their info exposed instead of a few hundred thousand.


I don't think that's really the problem. I mean, Obama's 2012 campaign scraped the data of Facebook friends of people who'd signed into the campaign website in order to figure out the most effective way to convince those friends to vote for him via microtargeting and other techniques, in particular working out which friends their volunteers could most easily convince to vote for him. We're not just talking about general profile information, but photos and feed activity. Apparently they were so aggressive about this it tripped Facebook's internal alerts. The New York Times spun all this as some clever, sophisticated breakthrough that marketers could take advantage of: http://www.nytimes.com/2013/06/23/magazine/the-obama-campaig... (While they were more honest about getting those users' permission than Cambridge Analytica, they didn't have any kind of permission from the friends they were hoping to profile and target, much like CA didn't. The viral tweet doing the rounds attacking CA for comparing the two is misleading.)

The main problem Facebook seem to have is that the wrong candidate won the 2016 presidential election and the press need someone else to blame.

Incidentally, based on the reporting coming out of the Trump campaign, it's not clear they were even doing much of anything in this area. Their campaign leaned heavily on exactly the kind of "political gut instinct" that article decries - used as an input to models rather than directly, and augmented with things like A/B testing, but still nothing like the data-guzzling microtargeting machine of Obama 2012. There doesn't seem to have been much sign of CA involvement aside from the initial check the campaign wrote them and their attempts to use Trump's victory in their marketing.


One significant difference is the authorized app. Obama's version of friend-graph collection was owned by OFA, and the API creds belonged to the campaign.

The data collection discussed today used API creds granted to an academic who then used the data he collected through the lens of academic survey for commercial gain, passing it to a 3rd party (CA) not listed in the FB app/api and this seems to be the crux of the violation.


What you say about the Trump campaign is clearly far from the reality:

https://youtu.be/zb6-xz-geH4

Starting around 11:00

The CA person clearly shows, from the screenshots, that they have person-level targeting to predict whether someone is neurotic or not, and which ads should be used to exploit that.


That was (and still is) CA's supposed selling point. What's doubtful is that the Trump campaign was convinced this worked, or that there's any reason anyone else should be. It just doesn't match up with the reporting I've seen based on campaign sources talking about how they designed ads and targeted people. It also seems absurd on its face - they're essentially claiming to be able to not only do an accurate psychological classification of someone based on what they post online, but actually use that to work out how to manipulate their minds. It's hard enough designing a repeatable and meaningful personality classification with the full cooperation of the test subjects.

Not only that, the one campaign we can confirm did use CA's tech - Ted Cruz's in the primaries - flopped, and it certainly didn't get rave reviews on its accuracy, its ability to convince, or its understanding of how they thought from those it was targeting.

Edit: Brad Parscale's actually on video saying that the campaign didn't use psychographics because they didn't think it actually worked: https://www.cbsnews.com/video/secret-weapon/ (6:35ish).


> It also seems absurd on its face - they're essentially claiming to be able to not only do a accurate psychological classification on someone based on what they post online, but actually use that to work out how to manipulate their minds.

This is not dissimilar to the idea that Russia spending a few million dollars on Facebook ads and automated Twitter bots somehow played a defining role in influencing an election. Even within the context of 1.5yrs+ of 24/7 mainstream TV/internet news coverage, billions of dollars in marketing spend by both parties, the personal influence of two of the most famous celebrities in American history (Clintons and Trumps), a multi-decade legacy of highly partisan politics, etc, etc.

It seems measuring the real-world impact of these tools and tactics is completely ignored in favour of believing we're living in some fantasy scifi world where bots and pseudosciencey psychological profiles can make anyone president.

Has anyone asked how much impact these tools have had on the sales of consumer products over the last decade? If they've hardly revolutionized online advertising of consumer products to make people buy products they didn't want (which AFAIK it hasn't), I highly highly doubt it played a huge role in the election of someone a portion of the voting populace didn't want.


It's easy to start wildfires; that's what paid agitators do, and likely did. The problem is - of course - that millions of Americans were (and still are) very susceptible to particular kinds of bullshit. (InfoWars, religion, "guns don't kill people", the "gay agenda", anti-vaxxers, other new-age fuckvoodoo, and so on.)

Yes, it's probably unknowable how effective the russian bots were exactly.


> Yes, it's probably unknowable how effective the russian bots were exactly.

Possibly, but I'd say it's a very safe bet that the amount of outrage and blame being put on this foreign supervillain boogeyman with his infinitely powerful technology far, far outweighs its real-world influence.

Oh well, I'm sure it will be a boon for the tech industry the more people believe in this magical nonsense.


That was their pitch. The campaign considered using them as a hedge against the RNC not providing its data, but ultimately did not use CA for the general election.

Source: CBS News https://www.cbsnews.com/news/trump-campaign-phased-out-use-o...


Here's a timestamp for anybody who needs it:

https://youtu.be/zb6-xz-geH4?t=10m33s (10:33)


Ah yes, that old classic, whataboutism.

You'd have us believe Cambridge Analytica was some vendor the Trump Campaign used sparingly, had very loose connections with, and saw limited success with.

Yet Cambridge Analytica is a Robert Mercer and Steve Bannon outfit. Steve Bannon, the campaign manager, and senior adviser to the president. Or at least it was during the time period in question. Steve Bannon was THE integral player who essentially managed the creation of the tool.

Here's a video of Brad Parscale gushing about the software tool they used to glean insights that led them to spend heavily in the states that mattered but were not part of the conventional wisdom.

https://youtu.be/_fFbVwuU8bM

While you're at it, here's the story from the software developer turned whistleblower himself:

https://www.theguardian.com/news/2018/mar/17/cambridge-analy...


According to CBS News, the Trump campaign considered using CA as a hedge bet in case the RNC wouldn't share its data. The campaign ultimately rejected their bid for the general election [0].

[0] https://www.cbsnews.com/news/trump-campaign-phased-out-use-o...

As for the parent, people love to cry about whataboutism, but it is useful to see how people respond when something is done by their favored politician versus an opponent.


He didn't claim it wasn't affiliated with Trump, he said they didn't rely on it that much. And the appeal to 'whataboutism' is a complete non-sequitur, as there is no whataboutism in the parent comment.


https://en.wikipedia.org/wiki/Whataboutism

Claiming that Obama's campaigns were the true microtargeters, and that if we are angry at these actions in general, we must then be angry at Obama because he did these very things: that is whataboutism, or the tu quoque fallacy.

In lieu of anything to back that claim up, I'll just stick with Hitchens's Razor and reply "Nuh uh."


Accusations of "whataboutism" are becoming worse than the sin they are trying to prevent. It shouldn't be used as a way of just flushing away the fact that "your side" may also do bad things. It's especially bad when it is used like "A did this bad thing." "Well, Not-A did this bad thing too." "That's just whataboutism... what's important is that A did this bad thing."

Well, it must not be that bad if it doesn't bother you when "your side" does it.

What if... and try to stick with me here, because this is a pretty radical thought here in 2018... what if both sides are doing a despicable thing, and rather than argue with each other about "whataboutism" we should resist both of them?

(In this particular case, I don't think that the campaigns did the exact same things... I think they've been doing all they can possibly get away with for a very long time. So it gets worse every campaign not necessarily because anybody is worse than ever before, but because especially in this century, every four years "all they can possibly get away with" has been growing like gangbusters.)


I guess I've become hardened by the general principle underlying this tactic of debate. There are plenty of people who employ it as their singular, or at least favored, tactic in nearly every debate, and they don't for one second convince me of their good faith. It is especially common with respect to Hillary Clinton. It is nearly universally used as a discussion ender: if the whatabouter can change the subject to their opponent's hypocrisy, nearly any reasonable question can be ignored wholesale by bringing up the false equivalency over and over.

So you can blame the countless people employing this tactic maliciously, including the House of Representatives Intelligence Committee, for any disproportionate skepticism I apply to its use.


Comments crop up in forums trying to exonerate one evil by pointing to the evils of another. If the tactic didn't work, it wouldn't be so common. It's a fair term for identifying it.


> Claiming that Obama's campaigns were the true microtargeters, and that if we are angry at the actions of generic people, we must then be angry at Obama, because he did these very things. That is the Whataboutism, or tu quoque fallacy.

Obama's campaigns were pretty well known to be engaged in extensive microtargeting. Here:

https://www.mediavillage.com/article/how-data-and-micro-targ...


Right, but that was a privilege that the users authorized them to grant, no? As in, the people who signed up for CA's app had access to those profiles, and they explicitly granted CA's app access to those profiles. And then CA scraped them and saved them. Am I understanding correctly?


> that was a privilege that the users authorized them to grant

Kogan was granted permission to ephemerally use, for academic purposes, the data of a quarter of a million people who authorized the access. Kogan got fifty million peoples' data (i.e. Facebook let him scrape the data of people who had not given authorization, some of whom had explicitly gone into their privacy settings to turn off the sharing of their data with third parties), kept it, and then forwarded a copy to CA.


This is hyperbole. He created an app, wrote in the description that it was for academic purposes, and then later amended the description / TOS to allow selling of data. Facebook is faulting him because he did not update them on the purpose of the app.

There were no systemic barriers in place that Kogan had to circumvent to get access to the data; the Facebook API worked as expected and gave it to him. Regardless of the original “purpose” of the app (a small textarea input you supply to FB), the API would provide the data. The “purpose” of an app does not affect what data is available to it in any meaningful way.

The idea that he should have notified Facebook of the changes is laughable; all he had to do is change a few text inputs to update the TOS. The FB platform does not treat an app differently based on its purpose. This is CYA language from FB trying to obfuscate the fact that any and every app has access to this same data, and FB has no control over what happens to the data once an app extracts it. Indeed, much of their business model depends on this premise.
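To make the "worked as expected" point concrete, here's a rough sketch of the kind of request any app could construct once a single user granted it a token under the old permission model. (This is from memory of the long-deprecated Graph API v1.0; the token and field names here are purely illustrative.) One call enumerated that user's friends and requested per-friend fields; the friends themselves were never prompted.

```python
from urllib.parse import urlencode

# Base URL for the deprecated v1.0 Graph API (shape recalled from memory).
GRAPH = "https://graph.facebook.com/v1.0"

def friends_request_url(access_token, fields=("id", "name", "likes", "location")):
    """Build the kind of v1.0 request an app could make with ONE user's token:
    list that user's friends and ask for fields on each of them. Only the app's
    user ever saw a consent dialog; the friends did not. The token and field
    names are hypothetical."""
    query = urlencode({"access_token": access_token,
                       "fields": ",".join(fields)})
    return f"{GRAPH}/me/friends?{query}"

url = friends_request_url("HYPOTHETICAL_TOKEN")
print(url)
```

The point the sketch illustrates: nothing about the app's stated "purpose" enters into the request at all. Whether the text box said "academic survey" or anything else, the same URL returned the same data.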

Further, the practice of changing terms / functionality of an app is a laughably commonplace way of circumventing the nearly non-existent FB platform review process.

I have personally seen much worse incidents of FB app abuse in the wild. For example I once reverse engineered a top 10 iOS social app and discovered they were injecting custom JS into the WebView provided by FB to get the ID of all your friends, rather than the top 50 you can see. The app’s FB “app” was classified as a game which gave it the requisite permissions for abuse and even allowed the app to secretly invite all your FB friends to it without you ever knowing.


> the Facebook API worked as expected and gave it to him

"Breach" doesn't have to involve a technical malfunction. An employee handing confidential information to an outsider is a breach. Facebook collected users' information. The information was accessed, stored, and distributed without Facebook's (nor their users') authorization. That's a breach.


By that logic does every FB app that stores collected data on non-FB servers represent a breach?

The crucial point here is that the users authorized the app to collect the data. Facebook has an extremely extensive authorization system for you to grant apps access to your data. Did Kogan use this system differently than every other FB app? What technical measures did he need to circumvent in order to get access to the data that you say constitutes a “breach?”

To me, it looks like the system worked exactly as designed and intended. The only system he really circumvented is the honor system, which is about the only limitation on what an app can do with the data FB gives to it.


> The crucial point here is that the users authorized the app to collect the data

A quarter of a million people authorized the app to collect their data. It then gained access to fifty million peoples' data. Those extra data were accessed without proper authorization. All of the data were then used in an unauthorized manner.

> What technical measures did he need to circumvent in order to get access to the data that you say constitutes a “breach?”

"Breach" isn't constrained to technical vulnerabilities. If an FSB agent walks out of Langley with a bunch of sensitive CIA documents, that constitutes a breach.


I get the point you’re trying to make, but I’m skeptical that a breach can be non-technical if there exists a technical framework whitelisting what apps can and cannot do. By implementing an explicit system for granting app access to user accounts, Facebook is effectively setting the boundaries of apps within that system. How can Facebook then arbitrarily pick an app utilizing the system that Facebook setup to protect user data, and say the app is breaching user data? If it’s a breach, the problem is the system by Facebook. If that’s true, then there must be a “bug” (technical or not) that Kogan exploited in the system. In that case I would expect Facebook to fix the “bug.” Yet the bug is the system itself. There is nothing to fix.


> If it’s a breach, the problem is the system by Facebook. If that’s true, then there must be a “bug” (technical or not) that Kogan exploited in the system

Kogan exploited Facebook's lack of verification around restricting third parties' data access to that which users had authorized to be accessed by third parties. He should have only been able to collect a quarter of a million users' data. He was given access to more than he was properly authorized to access.

Kogan also exploited Facebook's lack of verification around his use and retention of the former's users' data.

> Yet the bug is the system itself

Which is why we're talking about regulation.


> Which is why we're talking about regulation.

Regulation to say what? That people can't freely give away their own data? To tell Facebook not to share people's data with other apps, even if the users themselves authorize it?


GDPR would not have permitted the disclosure of users' "facebook friends" data to a third-party automatically without explicitly asking those users first. That is an example of a relevant regulation which would've prevented this.

I don't really care if you want to give a lot of your personal data in exchange for filling out a quiz that is unrelated to what your personal data will be used for, but the network effect (combined with how many things your "facebook friends" can see) of Facebook means that other people in your social graph should care.

(FWIW, I agree that "breach" is the wrong word. It's far too soft on Facebook. "Exploitation of the soon-to-be-criminal disrespect for users' privacy" is much more accurate IMO.)


You bring up a very interesting thought experiment.

We may very soon see a software developer expounding on all of these "clean code" principles before congress as their defense. How conventional wisdom, and industry wide best practices recommend that software be built in a manner that lends itself to all of these positive effects that allow large software projects and companies to proliferate in the first place. Separation of concerns being the main concept that comes to mind. Is this the 21st century's "just following orders"?


You're missing a key detail (the data was collected by a third party, which then handed it off to CA against Facebook's TOS), but I agree: this doesn't seem to be a 'breach' so much as abuse of personal data by a developer.


The third party handing the data over to Cambridge Analytica is a privacy breach.

Which is what is happening. The initial headline called it a breach and it isn't all that inaccurate to call it a breach, so it stuck.


Facebook did not authorize data retention. CA’s actions were therefore unauthorized use of data, which is the textbook definition of a data breach.


...you mean facebook said "don't keep this" and then they kept it? And people are calling that a breach?


Well, no, the breach happened when the data was acquired under false (bogus “academic use”) pretenses and then transferred to CA, thus taking personal data contrary to both the will of the subjects and the policy of the entity through which it was taken; the “dont keep this” was a (pitiful) post-breach mitigation effort by Facebook.


Yes, that's what Facebook said. And yes, people are calling it a breach because of that. Because it is.

If you pay a company for a service, and then use the service against the terms of use, putting millions of people into danger, you have created a massive data breach of unauthorized data access that has massive real-world physical consequences.

This was a data breach of the highest order.


So you're saying that every company that is screen scraped in violation of their TOS has experienced a data breach?


No. Facebook’s app Developer TOS are different than that of some random website, in that they require explicit consent. It’s a contract, not a one-sided passive statement and is therefore binding in a way a linked TOS is not.


That's not at all what a data breach is, much less a "textbook definition" of one.


> Facebook did not authorize data retention. CA’s actions were therefore unauthorized use of data, which is the textbook definition of a data breach.

But they did authorize data acquisition which means they weren't "breached" but fully consented to handing over the data.

It would be like HN saying I can read all the comments I want but can't copy them to my hard drive; if I did copy them and used them for some other purpose, would that also be a "textbook definition of a data breach"?


And the friends of those users, including their shares and likes, who were largely unaware of it. That is THE meat of the incident. There was no consent to begin with.


So apparently right now a company that is not contractually bound to Facebook has 50M full profiles from Facebook's API.

From Facebook's perspective, what happens if that CSV file gets released publicly? Or a government official or law-enforcement officer sees it? It apparently has >50M real names connected with political affiliation, gender, sexuality, home town, etc. It is one email attachment away from being released by a whistleblower...

So even if the data was scraped without unauthorized privileges, what happens when it becomes public? Has there ever (yet) been a public leak of millions of private Facebook-like or Gmail-like profiles? Sure, emails; sure, PII; but 50M people's admitted sexuality linked to their real names, leaked to The Pirate Bay?


I remember when I could make unauthenticated API calls against the public posts useds made, searching for their phone numbers…

I'm wondering if I should start seeding the ~300-500 million useds' info in JSON files (I was only interested in name, sex, and profile photo) that I was able to get from Facebook via the thousands of credentials people check into public repos, which anyone can still do today (and get more than name, sex, and profile photos). I've been hoarding it to start up in Indonesia the project I shut down in the US: seeding profiles used to crowd-source further information about people, like geo-IP location graphs of people interested in them, pseudo-anonymous messaging if you have their email (acting as an email forwarder), and other personality information, all publicly accessible by default and with an API. I'd be willing to give it away to anyone who asks, though, as long as they build something with it.

In the past my friend and I monetized with AdSense, but now we're going to mine Monero while people engage with the site and give up information about their friends/enemies/lovers.

We half joked about being an "open source" NSA/BND/[insert one's favorite SIGINT acronym], or what it would be like if everyone had access to that data to leverage for their own ends, instead of just the privileged few who do today.

Thank you "open graph", oauth and the weakest link, other developers and the apathy of most people.

Either way, this genie is not going back into the bottle unless you can convince billions of useds of platforms like facebook to exercise some discretion over things they, for the most part, don't care about nor choose to understand on a technical level.
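(For readers unfamiliar with what the commenter is describing: each leaked credential lets you make ordinary Graph API calls. A minimal sketch of one such request; the token, user id, and the v2.3 version string are hypothetical, and only the URL construction is shown:)

```python
# Sketch of a single Graph API profile fetch of the kind described above.
# The access token and user id are placeholders; field names follow
# Facebook Graph API v2-era conventions.
from urllib.parse import urlencode

GRAPH_ROOT = "https://graph.facebook.com/v2.3"

def profile_url(user_id, token, fields=("name", "gender", "picture")):
    """Build the request URL for the public fields the commenter mentions."""
    query = urlencode({"fields": ",".join(fields), "access_token": token})
    return f"{GRAPH_ROOT}/{user_id}?{query}"

# e.g. profile_url("4", "EAAB...hypothetical") yields a URL like
# https://graph.facebook.com/v2.3/4?fields=name%2Cgender%2Cpicture&access_token=...
```

Point being: nothing in that request is privileged; any credential checked into a public repo works.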


> users who authorized their app to do so.

Users did not authorize the app to do any of this. You are incorrect in that assumption.

This was a data breach. Facebook also knew about it and did not care. It is a data breach made worse by likely criminal negligence.

Edit: I have been (temporarily?) banned from replying or writing comments on HN. Maybe it is because of all the downvotes. Here is my response to the comment reply below (and with this, goodbye, I'm no longer allowed to post on HN I guess).

> In what sense did they 'not authorize' this?

Users did not authorize the data that was collected by these apps to be retained for more than a day or two, as described by Facebook's terms of service for CA using the data.


> Users did not authorize the app to do any of this. You are incorrect in that assumption.

Are you saying their accounts were hacked? In what sense did they 'not authorize' this?


This has bigger implications than just data breaches; it goes to campaign finance violations by the Trump campaign. Cambridge Analytica shared an address in Beverly Hills with Trump campaign manager Steve Bannon's political consulting company, Glittering Steel. The implication is that GS used Cambridge's data to target users on Facebook with political ads for Trump, all while being paid by a PAC called Make America 1 that was believed to be funded by Robert Mercer and his family.

This Twitter user has numerous posts about this. Not sure exactly who they are, but they have multiple sources of information about this story. https://twitter.com/emlas/status/975138624911151104


I was terrified by how close India came to letting Facebook Zero / Free Basics or whatever happen.

I feel extremely concerned also that new generations are growing up without knowing how the web was intended to be de-centralised and "free" and self-correcting.

Maybe that doesn't work at scale and things need regulation, but I feel like there was a chance to set culture and tone so that even when a large number of people would come on to the Internet, it would be more with a Wikipedia like attitude perhaps.

Now imagine if the first introduction to the Internet for a billion-ish people in India (current penetration is 460 million) had been through Facebook's internet.org. Imagine if a country as large as India had set that precedent for other countries with low internet penetration.

I used to scoff in university at a batchmate who told me over lunch that he doesn't use gmail because Google is too large and could become evil. I'm not scoffing anymore I guess.


Reporting that I read said they suspended him because he wouldn't sign what I inferred was an NDA to advise them on how to understand and mitigate the problem.


In about 20 minutes, an explosive documentary about this will be airing on Channel 4 BBC.


>Channel 4 BBC

Just a side-note, Channel 4 is an entirely separate wholly commercial public-service broadcaster, whereas the BBC is publicly funded via a license that's required to watch live TV


Yes, the fact that "BBC4" and "Channel 4" are completely different things is (understandably) missed by nearly everyone outside the UK.


Sorry, my mistake. You learn something new every day.


>Channel 4 is an entirely separate wholly commercial public-service broadcaster […]

For completeness, from [1]:

>Although largely commercially self-funded, it is ultimately publicly owned; originally a subsidiary of the Independent Broadcasting Authority (IBA), the station is now owned and operated by Channel Four Television Corporation, a public corporation of the Department for Culture, Media & Sport […]

[1] https://en.wikipedia.org/wiki/Channel_4


Yahoo just delivered a full-screen, "Your computer has been infected with digital ebola" page when I visited that link.


Guys, you suspended the wrong account.


Damage Control.


Watch how they say it was an error, if this blows up. Just watch.


Somewhere at Facebook is a poor Winston Smith, throwing scraps of paper down the memory hole and editing yesterday's headlines. (edit: And at google, and reddit, and youtube, and ...)


Given how the guy took data then refused to cooperate when asked, I think his account being suspended makes sense...

https://www.facebook.com/boz/posts/10104702799873151


This is arguably correct. While it may be more pragmatic to practice leniency with whistleblowers, there is no moral principle to shield them from all consequences of their actions.

This guy was not just an observer of unethical practices. He was the technical lead for this behavior.

Whistleblower protections usually shield you from retribution by your employer. What people argue for when they criticize Facebook over this is more akin to immunity.



