Google backs off on previously announced Allo privacy feature (theverge.com)
219 points by Aissen on Sept 21, 2016 | hide | past | favorite | 128 comments


>Allo messages will still be encrypted between the device and Google servers, and stored on servers using encryption that leaves the messages accessible to Google’s algorithms.

'using encryption that leaves the messages accessible to Google’s algorithms' So, not meaningfully encrypted at all then?


Probably stored under the same security infrastructure as Gmail and hangouts messages.

Which, IIRC, means no human is given direct access without the account holder's permission. Algorithms are allowed access, but only if they emit data that is similarly secured, or emit data in aggregate (where, I think, an aggregate was defined as 100k+ users per aggregate data point).

It's extremely hard for a Googler or product team to do something directly nefarious, but you do have to trust Google's privacy infrastructure.
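The aggregation rule recalled above can be sketched as a threshold gate: a statistic is released only when enough distinct users contribute to it. Note that both the 100k figure and the policy itself come from the commenter's recollection, not any documented Google rule; the function below is a hypothetical illustration.

```python
from collections import defaultdict

# Hypothetical threshold recalled in the comment above; not a
# documented policy.
K = 100_000

def safe_aggregate(events, k=K):
    """events: iterable of (user_id, item) pairs. Returns per-item
    distinct-user counts, but only for items with >= k contributors;
    anything below the threshold is withheld entirely."""
    contributors = defaultdict(set)
    for user_id, item in events:
        contributors[item].add(user_id)
    return {item: len(users)
            for item, users in contributors.items()
            if len(users) >= k}
```

With the full 100k threshold a small demo would withhold everything, so a lowered `k` shows the behavior: an item seen by only one user simply never appears in the output.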


> Which, IIRC, means no human is given direct access without the account holder's permission.

Or compulsion by any government to which Google is responsible (e.g. Russia, if I Googled correctly). Or a suitably clever Googler (as you note, it may be difficult, but 'difficult' ≠ 'cryptographically secure').

As when Facebook didn't think HTTPS made sense until someone sniffed Zuckerberg's password in a coffee shop, I don't think Google (or any other cloud firm) will take security seriously (as opposed to paying lip service to it for marketing purposes) until someone in the C-suite is materially embarrassed by its lack.


A "suitably clever Googler", an insider threat, is something that Google actively defends against, at multiple levels. The general assumption is that you can't blindly trust internal users, devices, etc. See the BeyondCorp paper for an example. The internal security infrastructure (monitoring, logging, auditing, analysis) probably consumes more CPU cycles than what's needed to run a large number of entire businesses out there, especially after the China incident -- remember who alerted the 30+ companies that got infiltrated.

The security whitepaper gives only a hint of what's done, such as checks on former employees' accounts and so on:

https://static.googleusercontent.com/media/1.9.22.221/en//en...


"Russia, if I Googled correctly." You clearly trust Google to some extent, since you search with it.

I don't think you can accuse Google of not taking security seriously. They are making a security tradeoff that allows governments and insiders access to private data. This is a legitimate tradeoff to make, and a legitimate thing to criticize them for. Apple makes a different choice, and it's important that we have a debate around which is the right security design.

But making a claim that Google is not serious about security does a disservice to the really good people there who move mountains to secure the service.


> I don't think you can accuse Google of not taking security seriously.

I don't think they take my security, and the security of my data, seriously. They seem to care very much about their security.

> They are making a security tradeoff that allows governments and insiders access to private data. This is a legitimate tradeoff to make

No, at this point I don't believe that it is legitimate, any more than it's legitimate to sell an oven which will explode if the temperature dial is set above 600° ('just don't set it that high!').

Yes, there are people at Google who work very hard to secure Google's data; there are people at Google (e.g. Adam Langley) who care a lot about users' data. There may even be people at Google who are working very hard to change its course on user privacy.

But Google, the company, does not take the security of user data against privacy threats seriously: if it did, it would use a better architecture (note that Apple doesn't take user-data security seriously, either, since they can MITM any time they want; nor does Mozilla, nor does Microsoft: no organization's hands are clean, so far as I can tell).


Additionally: The google of tomorrow != the google of today. Even if you assume that google currently has the best security/privacy setup in place, they could suddenly turn around tomorrow and hand that to whoever they wanted.


Exactly how does almost singlehandedly transitioning the global Internet to forward-secure curve-based TLS help their security and not yours? All I can see in that work is cost for them.


That's easy, TLS is great for Google as ISPs and wifi providers can't replace ads with their own in transit.


That is some Alex Jones level reasoning. There's no end-user security you can provide that can't be reframed that way.


Belatedly seen this. For the record I just want to note that I claimed that Google have sound business reasons to promote TLS, i.e. that their ads can't be MITMed. I have no clue why tptacek would go ballistic at that suggestion.


What he said isn't even entirely wrong... both adware and shitty ISPs do this, mostly as a vector for malware. Google has admitted as much, and migrating their ad platform to HTTPS and teaching Chrome and Firefox to refuse mixed content was part of not allowing their service to be abused for this.

Just because Google sells something we don't like doesn't mean they're evil, but it means they have a responsibility to limit the attack surface that outside attackers use. There is no way to eliminate all attack surface while still allowing third-party admins or closed source code (technically, it's impossible, period, but w/e).


Google's business is based on destroying what you and I consider to be privacy, so I don't see how you could expect them to change their minds about that.

Google and similar companies have a coherent worldview in which they collect user data, protect it from outsiders and inside threats, and do benign and wonderful things for users in return. Within that worldview, they do an excellent and commendable job. Calling their beliefs on data collection a security issue muddies the debate.

I'm curious about your comments on Apple. If I back up my phone to iCloud, how can Apple "MITM any time they want"?


> I'm curious about your comments on Apple. If I back up my phone to iCloud, how can Apple "MITM any time they want"?

I was referring to iMessage: my understanding is that Apple is the CA for all iMessage keys, and thus they can issue a certificate to anyone, if they wish to or are compelled to.


Theoretically, just like they could prepare a broken version of iOS that disables security and push it to targeted users. In reality, such software/infrastructure does not exist today, and we've been shown what happens when Apple is asked to write it.


I don't think that is enough. Here we are relying on Google being magnanimous enough to not use the data to increase their profitability. And even beyond that, your opinion is a little naive to hold in a post-snowden world.


How is it necessarily wrong if Google uses the data to increase their profitability?


Right - their policies may be perfect today but they may not even exist tomorrow.


That's fine as far as trusting Google keeping their employees from doing bad things.

But that's not the first concern one would have. Pervasive surveillance programs have penetrated service providers' data centers. Law enforcement can get warrants to access this information too easily, and many service providers turn over information on request, rather than requiring a warrant.


Well if you trust Google you don't need on-disk encryption in the first place. SSL already solved the transport issue.


My point is that you have no control over the data once you send it to Google. You don't even know if they encrypt it on their disks/servers anyway.


That's really just BS. What's the point of encryption if both the keys and the content are known to Google? They could just as well use SSL alone, and that wouldn't make it any less safe than this kind of "encryption".


The benefits of this sort of encryption & privacy infrastructure:

* Protects you (user) from eavesdroppers between you and Google.

* Protects your data from eavesdroppers inside Google's data centers.

* Protects your data at rest (on disk).

* Protects your data from malicious employees.

* Enforces a great deal of scrutiny over who can access your data, and what they are allowed to see.

But, yes, they process and store your data, so there's no 100% secure approach to be had here.

---

SSL only handles the first two bullet points.


The rest of the points are not addressed by this sort of encryption because the keys are on disk too. Google employees have access to the keys. It's like leaving the keys in the lock and claiming it's more secure because it has a lock. In practice the data is protected only by an ACL. Encryption is just a marketing keyword in this case.


On disk somewhere (encrypted, with a different key) is not the same as "leaving the keys in the lock" in a large distributed system designed to avoid single points of failure.

There are always weaknesses, but think about why a bank employee can't just decide to take your money. Internal controls are a thing.
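The "encrypted with a different key" pattern is usually called envelope encryption: each record is encrypted under its own data key (DEK), and the DEK is stored wrapped under a key-encryption key (KEK) held by a separate key-management system. A minimal sketch of the structure -- the XOR "cipher" is a placeholder so the shape is visible, real systems use AES-GCM, and nothing here is claimed to be Google's actual design:

```python
import os

def _xor(data: bytes, key: bytes) -> bytes:
    # Toy cipher: XOR with an equal-length key. NOT real cryptography;
    # it only makes the key-wrapping structure visible.
    return bytes(a ^ b for a, b in zip(data, key))

def encrypt_record(plaintext: bytes, kek: bytes):
    # Toy constraint: the KEK must be at least as long as the record.
    dek = os.urandom(len(plaintext))         # fresh per-record data key
    ciphertext = _xor(plaintext, dek)        # record encrypted under DEK
    wrapped_dek = _xor(dek, kek[:len(dek)])  # DEK encrypted under KEK
    # Only (ciphertext, wrapped_dek) touch disk; the KEK lives in a
    # separate key-management service and never does.
    return ciphertext, wrapped_dek

def decrypt_record(ciphertext: bytes, wrapped_dek: bytes, kek: bytes):
    dek = _xor(wrapped_dek, kek[:len(wrapped_dek)])
    return _xor(ciphertext, dek)
```

The point of the indirection is that reading a record from disk is useless without also getting the KEK out of a separately controlled system -- which is exactly where internal controls apply.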


>> Internal controls are a thing.

That's what ACL is...The encryption doesn't improve the security of the data if the keys are not stored securely. Simply put the data security is as good as the ACL of the keys is. For your peace of mind you may want to know the processes(i.e. audits, ISO standards/certifications) rather than a marketing keyword("encryption", "military grade" etc). Encryption may be part of that framework but I don't think it's relevant enough to be advertised as a stand alone 'product'. You can be sure that your account balance is not encrypted. It's the ACL process that protects you from a rough employee.

We already know that Google has a relatively good security process in place. However, this is not about security in the first place. It's about privacy. And here encryption brings no privacy gain, because Google has the keys, so as I previously said it's just marketing BS. Most people are not worried that Google gets hacked. They are worried that Google is selling their data to 3rd parties for profit and to governments for political reasons.


I think you're defining "security" and "privacy" to mean what you want them to mean. The absolutist definition that's popular in certain circles isn't the common definition.

Privacy is mostly not about keeping data from the government. Your bank account is private for most practical purposes though I'm sure with a warrant, law enforcement can find out about it. It's not going to show up in the news and your competitors aren't going to see your bank statements.

Also, Google isn't handing your private email over to advertisers - never has, probably never will.

Authentication and authorization (ACL's) are useful for both security and privacy. This is what makes cloud computing possible (along with encryption, too).

Yes, we can talk about weaknesses in our systems, but let's not completely dismiss the security and privacy features that are already there.


You and I have different views about privacy. I think you should have control over your data. Google indirectly provides your email information to advertisers. Example: you receive a flight notification. Google reads your email and targets you with ads related to your trip (i.e. hotels, things to do, etc., based on your age, gender, etc.) through various channels (i.e. the Gmail UI, Google Search, Google Now/AI, etc.). It wants to do the same with this Allo application. The encryption they claim is in place is worthless because it doesn't protect you from this. They try to claim your data is 'private' because it's encrypted.

The other case, which is uglier, is that Google gives your data to foreign governments or other agencies. This may have great consequences. Cloud computing is based on trust, and it's possible because the main players (i.e. the US government, cloud providers such as Amazon) are trusted not to spy on their clients, but I don't see many rushing to a Chinese cloud vendor to host critical data (i.e. email) on their servers.

The security and privacy features we have are currently based on the goodwill of the provider (i.e. Google). If we talk about encryption of personal data, I think it's worth asking for a bit more than a buzzword. One such thing is to not share the encryption keys.


> SSL only handles the first two bullet points.

Only the first point. If Google works like every other data center in the world, then SSL stops the moment your data hits their first load balancer.


Google does not work like every other data center in the world. All internal services communicate over SSL (or similarly encrypted links)


Though only because they discovered their internal unencrypted links were being systematically compromised on a truly massive scale for years.


False. There was already internal encrypted traffic and the revelations only sped up the migration for the rest.

https://www.washingtonpost.com/business/technology/google-en...


I think the load balancers do not communicate over SSL. At least not the ones in front of App Engine applications. Perhaps only inter-service communication is encrypted (i.e. the app service with the database service).


Most datacenters are lost once the internal network is hacked, so I think that's a different issue. Encryption wouldn't help either, because the attacker could get the keys too.


Encrypting links between processes inside a data center is still a good thing: it protects against malicious actors sniffing traffic (without compromising the machines in the data center).


So again... how is this better than end-to-end SSL?


I actually trust Google. And I believe they do their best to make it work this way. What I don't trust is the government. :(


Bad news then... Google is not what it seems. http://www.newsweek.com/assange-google-not-what-it-seems-279...


Extremely hard. I am so sick of hearing this. It all boils down to hand-waving and assumptions instead of verification and scrutiny.


This is the first time I'm reading about that sort of restrictions at Google. What are your sources? Or did/do you work there?

Still, I wouldn't trust that. I can think of enough cases in which algorithms fed with that information would do things not in my interest. (You could easily imagine some internal scoring or profiling algorithms taking advantage of the data, for example.)

Saying "it's only algorithms" doesn't make it secure if I don't know what the algorithms do. (Which of course I can't, as it is the core of their business.)


It doesn't matter how well it's designed. Once they get an NSL, they would need to make compromises, unless the messages are not in a readable form.

All that Google has accomplished is convincing me to steer clear of Allo.


> but you do have to trust Google's privacy infrastructure.

The NSA have already backdoored it.


In order to receive government protection and favors, they have to play the government's game. It's not like their advertisement-profiling algorithms aren't useful for intelligence gathering and profiling.


> extremely hard for a Googler or product team to do something directly nefarious

Unless the user is the only party that can access the keys (i.e. they are only stored locally on the user's device), I'm sure Google will find it very easy to access personal information when a government demands access.

Or did we forget that Google is part[1] of PRISM?

[1] https://en.wikipedia.org/wiki/File:Prism_slide_5.jpg


And that is exactly why open IM protocols and 3rd party clients are so important.

If both you and your correspondents do use 3rd party IM client ([1], [2], etc), then just run OTR2 or OMEMO on top of the protocol, and let google store whatever it pleases - it's not going to be much use for them.

[1] https://pidgin.im/

[2] https://conversations.im/
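The layering idea is simple: the client encrypts before the message ever reaches the IM provider, so the server only relays and stores ciphertext. A toy sketch of that structure -- the cipher (HMAC-SHA256 used as a counter-mode keystream) is a stand-in, and real OTR/OMEMO additionally handle key agreement, authentication, and forward secrecy, which this deliberately omits:

```python
import hashlib, hmac, os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudorandom keystream: HMAC-SHA256 over nonce||counter.
    out = b""
    ctr = 0
    while len(out) < length:
        out += hmac.new(key, nonce + ctr.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        ctr += 1
    return out[:length]

def seal(key: bytes, plaintext: bytes):
    # Runs on the sender's device, before the transport sees anything.
    nonce = os.urandom(16)
    ct = bytes(p ^ k for p, k in
               zip(plaintext, _keystream(key, nonce, len(plaintext))))
    return nonce, ct   # this pair is all the provider ever relays/stores

def unseal(key: bytes, nonce: bytes, ct: bytes) -> bytes:
    # Runs on the recipient's device, after retrieval from the provider.
    return bytes(c ^ k for c, k in
                 zip(ct, _keystream(key, nonce, len(ct))))
```

Since only `(nonce, ct)` transits the provider, Google (or any other operator) can store whatever it pleases without learning the message content -- which is exactly the parent's point.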


I've seen this sentiment in a couple of places now -- and the news media and non-technical folks seem to refer to "encryption" very loosely to mean "protected in the way I want it to be protected at the times I want it to be protected".

Can we please use technical terms with precision?

You can have data encrypted at rest and in transit that is still accessible to the provider and the fact that the provider can decrypt it for processing doesn't make it any less "encrypted". There is not a total ordering of encryption or security schemes.

If you would like to say that the data isn't end-to-end encrypted such that it is opaque to the service provider -- say that. Don't say it isn't meaningfully encrypted.


Perhaps it's some kind of homomorphic encryption scheme. Hey, it technically leaves the original message encrypted!


If meaningful homomorphic encryption were an option, then we'd be living in a quite different world.


Surprisingly, homomorphic encryption is now practical enough to run a Kaggle-like data science competition without disclosing the data. At least one such competition is known to run with a real money prize.

https://medium.com/@Numerai/encrypted-data-for-efficient-mar...


This reads like smoke and mirrors to trick investors/data providers/participants rather than anything cryptographically real.


Very true. I was just making a tongue-in-cheek conjecture about the justification/rationalization that might be employed.


As rad as it would be if Google started doing privacy-friendly-ish datamining on user data by homomorphically encrypting it, you and I both know that's not what they're doing. You forgot the `/s`, sadly.


Along those lines, there's RAPPOR:

http://research.google.com/pubs/pub42852.html


They could be using some form of homomorphic encryption here, in which case it would still be meaningfully encrypted.


Homomorphic encryption isn't currently practical, even for small-ish problems. It needs to use constant space, and every operation needs to flip on average half of the bits. There aren't obvious ways around these restrictions, though offhand I can only come up with a quick hand-wavy "proof" that this must always be so if less than one bit of the unencrypted state is to be lost with every state update.

So, in order to perform machine learning on terabytes of data, you need to flip terabits for every update of the homomorphic state machine.

That being said, I could imagine someone coming up with a homomorphic encryption algorithm that starts out with a few megabits of excess entropy and leaks entropy at a bounded rate while remaining more efficient at calculation, and where the initial state is set up cleverly such that after a bounded number of steps, the state machine starts making nonsense computations and stops leaking entropy. Though, this just feels very brittle to intentionally leak entropy, and I have no idea how anything remotely like this could be actually constructed.


That is only true for fully homomorphic encryption. There are simpler cryptosystems (e.g. ElGamal for a trivial example) that have more constrained homomorphisms that theoretically could be used here.
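For a concrete hint, ElGamal's multiplicative homomorphism is easy to demonstrate: multiplying two ciphertexts component-wise yields an encryption of the product of the plaintexts. A minimal sketch with toy parameters (a tiny prime, no message encoding or padding -- nowhere near production use):

```python
import random

# Toy ElGamal over a tiny modular group, purely to show the
# multiplicative homomorphism; real parameters are thousands of bits.
p = 467   # small prime (insecure, demo-sized)
g = 2

x = random.randrange(2, p - 1)   # private key
h = pow(g, x, p)                 # public key

def encrypt(m):
    r = random.randrange(2, p - 1)
    return (pow(g, r, p), (m * pow(h, r, p)) % p)

def decrypt(c):
    c1, c2 = c
    # m = c2 / c1^x; the inverse comes from Fermat's little theorem.
    return (c2 * pow(pow(c1, x, p), p - 2, p)) % p

# Homomorphism: the componentwise product of two ciphertexts decrypts
# to the product of the plaintexts (mod p).
m1, m2 = 5, 7
a = encrypt(m1)
b = encrypt(m2)
product_ct = ((a[0] * b[0]) % p, (a[1] * b[1]) % p)
assert decrypt(product_ct) == (m1 * m2) % p
```

This only gives you multiplications of hidden values, not arbitrary computation -- which is the gap between such "constrained" homomorphisms and the fully homomorphic schemes discussed above.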


The OP was talking about encrypting the chat data and performing machine learning on encrypted data without decrypting it. I really wouldn't know where to begin to construct such a thing using a simpler cryptosystem, such as ElGamal. Do you have any hints as to where to begin?


There is actually some research work on this front, because it is highly valuable to be able to analyze data without having access to it.

However, it's better just to encrypt everything and not be tempted by the advertising surveillance dollars.


  So, not meaningfully encrypted at all then?
One could also think of it as your private key being held by (1) you and (2) Google.

It's in safe hands ;-)


Sure, whatever safe hands mean. Safe in an NSA datacenter. Safe in the hands of the Chinese government. Safe with whatever sysadmin has access to it.

Instead of worrying about what "safe hands" means, just keep it accessible only to me, and to each respective conversation with the people I communicate with.


> just keep it accessible only to me

One could also argue that the service is free and the service provider needs a (or yet another) way to monetize it.

If the privacy settings are insecure by default, one shouldn't have high expectations anyways.


And that's a little scary: you sell a bit of yourself to get access to a service. It's not really the best-case scenario I would want.


> One could also argue that the service is free and the service provider needs a (or yet another) way to monetize it.

True, which is why I pay Apple money. No incentive to mine my data... in fact given that privacy is a big marketing angle Apple's incentives are to make it impossible for them to access your data even if subpoenaed or presented with a warrant.



This seems to be where the "backing off" claim is coming from:

http://www.theverge.com/2016/5/18/11699122/google-allo-messa...

> First, all conversations are encrypted "on the wire," which means that nobody on the internet can read them as you send your message. They are read by Google's servers, but Kay assures me that the data is stored "transiently," which is to say that Google doesn't keep your chat logs around to be subpoenaed. And Fulay adds that Google doesn't assign identity to the chat logs on those servers even then.

I think this is a misunderstanding -- either on the part of the authors, or on the part of the Google employees misunderstanding the question the authors asked.

Kay probably meant that in Incognito mode, messages are stored transiently. I don't believe that has changed, has it?

Did Google really say that non-Incognito messages would not be stored server-side? What happens if you lose your phone -- do you lose all your Allo chat history? That would be a really shitty user experience.


I lost all my WhatsApp chat history after a phone upgrade went wrong, and it didn't cause me any problems at all. I recognise that my use case is not the same as everyone's, but if I want to save something from WhatsApp, I put it somewhere else.


Yeah, and I normally don't bother to copy over my SMSes. I can, and I appreciate that, but they're really so ephemeral that it's not worth the bother.


That basically happens with WhatsApp if you don't back it up somewhere.


Who needs yet another messaging app? Aren't the 10 I have installed enough already?


Don't worry, being run by Google, it won't stay around long. /s


Google does.


Too right! +1


Was anybody actually planning to use Allo for encrypted communications? I was under the impression it was written off at its announcement.


Why was it written off? It's using Whisper Systems tech to do its end-to-end encryption[0]. Is there some way Google could inject itself into this, or some reason people shouldn't trust it?

[0] https://whispersystems.org/blog/allo/


There is always a way to inject malicious code in a codebase you control. The Allo apps are closed-source and their code is solely controlled by Google. Doesn't matter which protocols they claim to be using, when they could simply push an update which silently uploads your private keys to their server (or breaks the claim in any of the many different ways).

This is the same reason even WhatsApp's use of 'end-to-end encryption' cannot be considered secure from WhatsApp.


so, do you write your own compiler as well?


Repost


Not sure, TBH. I just remember a lot of negative discussion around it and the privacy picture when it was announced. Perhaps bad on me for not doing all the due diligence when it came out, but I'm generally biased against assuming security or privacy in online chat anyway, so I didn't bother going further with it.


Yep, I'm with you. At least I know I've written it off.


These privacy articles make a big deal about law enforcement being able to access messages. Surely a much bigger concern is Google being able to access the content?


Yeah, I found the tone of these articles distasteful as well. It makes it look like only criminals are worried about their privacy. I'm pro-privacy, but I'm not so much worried that the police get a warrant and wiretap my chat as I am concerned about Google and all kinds of 3rd-party companies circulating and selling my personal data around for who-knows-what purposes, data that stays around indefinitely.


One implies the other.


A fair percentage of people (myself included) are far more concerned about protecting our data from marketers, advertisers, and data brokers, than we are about going 100% 'tinfoil hat' mode and worrying whether the NSA is monitoring my messages for thought crimes. These are two very distinct issues and not everyone is concerned about both equally.


> A fair percentage of people (myself included) are far more concerned about protecting our data from marketers, advertisers, and data brokers, than we are about going 100% 'tinfoil hat' mode and worrying whether the NSA is monitoring my messages for thought crimes.

The main issue here is that people aren't worried about either of those problems :) otherwise Facebook, Google and other advertising companies would only have a handful of users.

> These are two very distinct issues and not everyone is concerned about both equally.

Divide and conquer... not. The issue is the same: we should fight together for more privacy, not disregard other people's reasons for requesting more privacy.

In any case, as you were replying to my previous comment, my point was that if the company has access to your data, law enforcement has got it too. Likewise, if law enforcement has access to your data, it doesn't mean it wasn't also available to the company. Hence why one implies the other. You might be worried about one, but you've got two :)


> we should fight together for more privacy

It needs to be established as a human right.

I'm personally very disappointed to see how the good intentions of the aware subset of the citizenry are consistently channeled towards technical solutions to a problem that is fundamentally political.

At a meta-level, we saw the same (useless) dissipation of energies in the Occupy-X movement. And it should also be pointed out that the subtext of such approaches is the de facto delegitimization of legal governance of societies and (speaking of "kings") the establishment of corporations as the arbiters of social norms.

Get Congress to pass a comprehensive privacy act, and "Alphabets" (of the corporate and governmental variety) will have to follow the law of the land.

[edit:spelling]


> It needs to be established as a human right.

It already is. Article 12 of the universal declaration of human rights:

No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.


Thanks, that's clearer to me now.

When marketers collect my personal info, it's far more likely to be distributed widely so I place a greater emphasis on protecting it from them. I don't like the fact that nation-states are spying on the citizenry, but I don't know what I can do about that. I DO know what to do about the surveillance capitalists.


Dr. King was monitored by the NSA for "thought crimes". Is that "100% tinfoil hat mode"?

https://en.wikipedia.org/wiki/Martin_Luther_King_Jr.#NSA_mon...


How is that relevant to the fact that many people are more worried about marketers than the NSA? Nobody's questioning that some people are in fact so monitored. "I am more concerned about A than B" is not a claim that B does not exist, nor is it a claim that "I" am unconcerned about B.


>the fact that many people are more worried about marketers than the NSA?

How do you know this is a fact, was a global survey done?

>I am more concerned about A than B" is not a claim that B does not exist, nor is it a claim that "I" am unconcerned about B.

Indeed; however, the phrase "100% tinfoil hat mode" does imply being unconcerned about B, and that anyone who is concerned is a lunatic.


You seem to have a problem with adjectives. "Many" is not most. I don't need a "survey" to establish the "many" when I can just read many people expressing their concerns on HN. That's "many" enough for me.

Now, to be clear, by "people" I mean "humans" not "abstract sentients including AIs that don't yet exist", and by "concerns" I mean "things that are at least slightly negative to the thinker" and not "things that keep the thinker up at night wetting the bed and driving them to fits of existential madness", etc. etc.


Ah, my mistake. I misread it as "many more people..."


And how many "Kings" do you know personally? /s What I (and presumably the OP) am trying to say is that an ordinary person should be much more worried about their private data being collected and used (virtually) unconstrained by shady companies than about three-letter agencies. This doesn't imply that mass surveillance isn't wrong and evil; these two things are orthogonal.


> Dr. King was monitored by the NSA for "thought crimes". Is that "100% tinfoil hat mode"?

Probably 'yes' for you, 'no' for him.

Sorry, but a random HN commenter is extremely unlikely to be targeted for the level of surveillance and treachery that Dr King was. If he feared it, he had good reason. If you are some random IT worker building the next smart pillow, you cannot expect them to prioritize spying on you; that's all I'm saying. Mass surveillance isn't the same as targeted surveillance.


Actually, I'd say a random HN commenter is extremely likely to be targeted for surveillance and exploitation compared to general population at least. Not because they personally are important, but because of their jobs. So many administrators, programmers, etc. with access to relevant data.


Yes, I almost mentioned the Belgacom sysadmins in one of my responses.

How many people here work for Google, Facebook, Apple, etc? What if you could compromise their workstations and get privileged access to the backend of social networks, email systems, etc? We are being actively hunted and there's evidence of that.


Some years ago FreeBSD had an intrusion via one of the committers' machines or a stolen SSH key -- I don't remember which anymore -- but I do remember that it took months for the package-building infrastructure to get fully operational again. I think they never got to the bottom of it (who did it or why). Linux had a very similar incident, if I'm not mistaken.

It's such a standard and effective method in human intelligence, that it's extremely naive to think an analogue wouldn't be used extensively in signals intelligence too.


Fair enough. Though my concern is not for myself, but rather for any potential great leader who could be silently neutralized/blackmailed/extorted by mass surveillance techniques.

Whether the culprits' organizational classification is public or private is of negligible importance.

>Mass surveillance isn't the same as targeted.

Mass surveillance is the first step in the discovery process, targeted surveillance is the second step after a target has been flagged by the dragnet.


Actually, a random HN commenter is precisely the person to target: they're more likely to have technical skills and access to servers. It isn't just about surveillance; it's about increased attack vectors when your information is distributed.

A random IT worker building the next smart pillow has no reason to target you...but the creepy pervy sales manager at the same company might have reason to.


One of those things can kill you, or imprison you. The other one can send you mildly annoying ads.


>The other one can send you mildly annoying ads.

That isn't quite accurate. Your information is extremely valuable to identity thieves, spammers, and people running other scams. I guess you've never had your identity stolen or been harassed.


Google can't arrest you though.


Unlike law enforcement however, Google has a much higher level of trust in the public eye.


Here's Moxie's related press release on doing E2E for Allo: https://whispersystems.org/blog/allo/

Curious if he'll comment on what happened:

https://twitter.com/moxie

https://news.ycombinator.com/threads?id=moxie

___

If you don't know about Moxie, highly suggest learning more about him:

https://thoughtcrime.org/

https://en.m.wikipedia.org/wiki/Thoughtcrime

https://en.m.wikipedia.org/wiki/Moxie_Marlinspike


Moxie, Facebook's best new friend?

His "trust us, we checked FB Messenger code and it's all good" pitch made for a very entertaining read.


Yes, moxie "knows it all" wants to control his apps so much he doesn't want them to be on f-droid https://f-droid.org/posts/security-notice-textsecure/


Do you feel f-droid's build security should be trusted, and if so, why?

>> "F-Droid has received criticism for distributing out-of-date versions of official applications and for its approach to application signing."

https://en.m.wikipedia.org/wiki/F-Droid


> F-Droid has received criticism for distributing out-of-date versions of official applications

That's not a bug, it's a feature.


Aware of that issue (asked moxie about it on HN) and other issues too, but it doesn't change the fact that there's zero reason to believe Moxie's intent is malicious, and it's a stretch to claim his contributions are anything but positive in sum.


I'm not sure why Moxie gets so much grief on some of these issues. To his credit, he's released just about everything under a free software license -- with reproducible builds and all.

Ultimately, it's his baby and he can do what he wants with it.


While this is speculation, Moxie's projects appear to have gotten (deservedly) a lot of attention, which brings a lot of feedback that's unfounded and potentially malicious; for example, the f-droid comment above, in my opinion, fails to reflect both sides of the issue, and build security is VERY IMPORTANT and often ignored.

I don't know Moxie, but we've exchanged messages before, and I never got the sense that he was discounting any feedback off the cuff. At a very high level we agree about what he's doing, though I get the sense that anonymity is something we don't exactly agree about, but I do understand a little of why he feels the way he does.


The first thing I thought when I saw the release of this app is that it will record everything I do, because it's Google.

Thanks but no thanks, google. Stay evil.


> Stay evil.

Wow. Google went a full 180 from being a company that promoted itself by saying "Don't be evil" to something evil. Couldn't Google have made its billions while still not being evil, without its privacy issues, without its obnoxious desire to track everything? Did they turn to this evil for the money, or just because they can (or because, if they didn't, someone else would)?


>Couldn't Google have made its billions still being not evil

No. Google is an advertising company, and advertising companies need a lot of user data for targeted ads.


How does it benefit by keeping data forever though? Of what use is aged data?


It doesn't. They need a constant stream of new data for their business model to work. Thus, they benefit from continuously collecting data on users.


So why do they store it indefinitely? If they openly stated a retention period, they'd get much less criticism over privacy.


I'd assume for machine learning development, specifically as training data and for back-testing. Build a new model with 3x the data you had before, or retrospectively see how a model would have performed over 3 years rather than 1.


Don't forget that they're also entirely complicit in NSA demands for live access to data, per the somewhat-recent leaks. That's another level of evil above regular profit motivations.


Even if they weren't (mainly) an advertising company, even if they charged for all the free things, they'd still need all the data they suck in to provide services they provide.


That's simply not true. They collect a ton of data which they wouldn't have to collect if they didn't run advertisements, or they could at least delete this data much earlier.

There's even crap like Android not allowing you to selectively turn off the Internet permission for apps, for which there is no good reason other than Google needing an internet connection to display its ads.


It's not true that machine learning and AI need a lot of data to do what they do with Google's services? I guess you know something their engineers don't, so I'm looking forward to Sylos' Dataless AI & Co. I bet a lot of us would even pay good money for such a thing.

Sigh, I'm not arguing that they don't collect for advertising, I'm not even defending them at all. It's just that the things they do are impossible to do without a huge amount of data, regardless of advertising. Again, if you're so sure that it is “simply not true”, here we are, YC is your oyster. Or any other tech VC fund for that matter.


Google is a company that uses algorithms to sell ad space to you. How is a company not evil when you are the product?


Was it really expected in the first place that the ordinary messages don't get saved on Google's servers? As someone with a passive interest in Allo but who hasn't been following it closely, I never assumed the "smart AI" messages were anything other than simple hangouts-type messages that get stored on Google's servers. I'd even expect (/hope) them to be someday accessible on the web or transferable to a new phone.

It's the incognito chats that are end-to-end encrypted, and those, I expect, are secure from Google's prying eyes. And I don't think anything has changed there.

This is a non-issue for me, anyway.


And just yesterday someone on HN didn't understand why the recent Google messaging apps weren't being more widely adopted.

It's clear the public wants some assurance around privacy, or at least transparency where it is lacking. Not to mention, this is the umpteenth product they are planning to deprecate, uh, I mean launch (Grand Central, Voice, Wave, Talk, Hangouts, etc.).


Wow, they could generate stylometric profiles and sell that for mega-bucks.

Kind of creepy I say.
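To make the "stylometric profile" idea concrete: even a crude profile built from function-word frequencies and average word length can help distinguish writers. This is only an illustrative sketch (the feature set and `distance` metric are my own simplifications, not anything Google is known to use):

```python
from collections import Counter
import re

# Common English function words; real stylometry uses hundreds of features.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "i", "it"]

def stylometric_profile(text: str) -> dict:
    """Build a tiny writing-style fingerprint from one author's text."""
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    counts = Counter(words)
    profile = {w: counts[w] / total for w in FUNCTION_WORDS}
    profile["avg_word_len"] = sum(map(len, words)) / total
    return profile

def distance(p: dict, q: dict) -> float:
    """L1 distance between two profiles; smaller = more similar style."""
    return sum(abs(p[k] - q[k]) for k in p)
```

With enough message history per user, comparing profiles like these across accounts is exactly the kind of cross-linking that makes long-retention chat logs valuable, and creepy.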


You either want encryption, security, and privacy, or you want other things like history. By choosing the latter, Allo makes itself useless for anyone who really cares about security or privacy.


There's no reason that the history couldn't be encrypted in a way that's only accessible to the client. There are even techniques for encrypted indexing and keyword search.
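As a rough illustration of the idea: the client can encrypt each message and also upload deterministic per-keyword tokens (e.g. an HMAC under a client-held key), so the server can match search queries without ever seeing plaintext. This is a minimal sketch, not a secure implementation; the `toy_encrypt` placeholder stands in for a real AEAD cipher like AES-GCM, and all names are hypothetical:

```python
import hashlib
import hmac
import os

def keyword_token(key: bytes, word: str) -> bytes:
    # Deterministic per-key token: the server can match equal tokens,
    # but cannot recover the keyword without the client's key.
    return hmac.new(key, word.lower().encode(), hashlib.sha256).digest()

class EncryptedIndex:
    """Server-side store: opaque ciphertext blobs plus opaque keyword tokens."""
    def __init__(self):
        self.blobs = {}     # msg_id -> ciphertext (unreadable to server)
        self.postings = {}  # token -> set of msg_ids containing that keyword

    def put(self, msg_id, ciphertext, tokens):
        self.blobs[msg_id] = ciphertext
        for t in tokens:
            self.postings.setdefault(t, set()).add(msg_id)

    def search(self, token):
        return self.postings.get(token, set())

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # Placeholder XOR keystream so the sketch stays stdlib-only;
    # a real client would use an authenticated cipher instead.
    stream = hashlib.sha256(key + b"stream").digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

# Client side: encrypt the message and derive its keyword tokens.
key = os.urandom(32)
server = EncryptedIndex()
msg = "meet me at the cafe"
server.put(1, toy_encrypt(key, msg.encode()),
           [keyword_token(key, w) for w in msg.split()])

# Later, the client searches by sending only the token for "cafe".
hits = server.search(keyword_token(key, "cafe"))
```

Deterministic tokens do leak which messages share keywords (a known trade-off in searchable encryption), but the server never sees message contents or the search terms themselves.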



