We didn't review the entire source code, only the cryptographic core. That said, the main issue we found was that the WhatsApp servers ultimately decide who is and isn't in a particular chat. Dan Goodin wrote about it here: https://arstechnica.com/security/2025/05/whatsapp-provides-n...
> We didn't review the entire source code
And, you don't see the issue with that? Facebook was bypassing security measures for mobile by sending data to itself on localhost using websockets and webrtc.
An audit of 'they can't read it cryptographically' is moot when the app itself can read it, and the app sends data in all directions. Push notifications can be used to read messages.
Push-to-Sync. We observed 8 apps employ a push-to-sync strategy to prevent privacy leakage to Google via FCM. In this mitigation strategy, apps send an empty (or almost empty) push notification to FCM. Some apps, such as Signal, send a push notification with no data (aside from the fields that Google sets; see Figure 4). Other apps may send an identifier (including, in some cases, a phone number). This push notification tells the app to query the app server for data, the data is retrieved securely by the app, and then a push notification is populated on the client side with the unencrypted data. In these cases, the only metadata that FCM receives is that the user received some message or messages, and when that push notification was issued. Achieving this requires sending an additional network request to the app server to fetch the data and keeping track of identifiers used to correlate the push notification received on the user device with the message on the app server.
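Roughly, the client-side flow the paper describes could look like this (a toy sketch; the endpoint, field names, and auth scheme are invented for illustration, not any real app's API):

    import json
    import urllib.request

    APP_SERVER = "https://example-messenger.invalid/v1/messages"  # hypothetical

    def on_push_received(push_payload: dict, auth_token: str) -> dict:
        """Handle a (nearly) empty FCM push: fetch the real message over
        the app's own TLS channel, then build the notification locally."""
        # The push carries at most an opaque correlation id -- no content.
        msg_id = push_payload.get("msg_id")
        req = urllib.request.Request(
            f"{APP_SERVER}?id={msg_id}",
            headers={"Authorization": f"Bearer {auth_token}"},
        )
        with urllib.request.urlopen(req) as resp:
            message = json.loads(resp.read())
        # Only now is a user-visible notification populated, on-device, so
        # FCM never sees the sender, the text, or anything beyond the timing.
        return {"title": message["sender"], "body": message["text"]}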
Maybe I’m mis-interpreting what you mean, but without a notification when a message is sent, what would you correlate a message-received notification with?
Nothing changed, but many people struggle to understand their own degree of relative ignorance and overvalue high-level details that are leaky abstractions, which make the consequentially dissimilar look superficially similar.
Why did you not mention that the WhatsApp APK, even on devices installed without Google Play, loads Google Tag Manager's scripts?
It is reproducibly loaded in each chat, and an MitM firewall can also confirm that. I don't know why the focus of audits like these is always on a specific part of the app or only on the cryptography parts, and not on the overall behavior of what is leaked and transferred over the wire, or on potential side channel or bypass attacks.
Transport encryption is useless if the client copies the plaintext of the messages afterwards to another server, or say an online service for translation, you know.
Things like this, combined with the countless ways to hide "feature flags" in a giant codebase, make me feel that nothing less than "the entire app was verified + there is literally no way to dynamically load code from remote (so not even an in-app browser) + we checked 5 years of old versions and plan to do this for the next 5 years of updates" is particularly meaningful.
Still very important, but my issue has never been with Zuck's inability to produce solid software, rather with his intentions; them being good engineers just makes them better at hiding bad stuff.
Back in the day people called Skype [1] spyware because it had lots of backdoors in it and lots of undocumented APIs that shouldn't have been in there.
The funny part was that Skype was probably the most obfuscated binary ever shipped as "legitimate" software, so there were regular reversing events to see "how far" you could get from scratch to zeroday within 48h hackathons. Those were fun times :D
That's protected cryptographically with key transparency. Anyone can check what the current published keys for a user are, and be sure they get the same value as any other user. Specifically, your WA client checks that these keys are the right key.
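For the unfamiliar, the check boils down to verifying a Merkle inclusion proof against a published tree root that every client and auditor should agree on. A minimal sketch; the leaf encoding and proof format here are invented, not WhatsApp's actual format:

    import hashlib

    def leaf_hash(user_id: str, pubkey: bytes) -> bytes:
        return hashlib.sha256(b"leaf" + user_id.encode() + pubkey).digest()

    def verify_inclusion(user_id: str, pubkey: bytes,
                         proof: list[tuple[str, bytes]], root: bytes) -> bool:
        """Walk the audit path from the (user, key) leaf up to the root;
        a match means this key really is in the directory everyone sees."""
        h = leaf_hash(user_id, pubkey)
        for side, sibling in proof:
            pair = sibling + h if side == "L" else h + sibling
            h = hashlib.sha256(pair).digest()
        return h == root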
Unfortunately a lot of investigations start out as speculation/vibes before they turn into an actual evaluation. And getting past speculation/vibes can take a lot of effort and political/social/professional capital before even starting.
I’ve been looking for this everywhere the past few days, but I couldn’t find any official information relating to the use of https://signal.org/docs/specifications/pqxdh/ in the Signal protocol version that WhatsApp is currently using.
Do you have any information if the protocol version they currently use provides post-quantum forward secrecy and SPQR or are the current e2ee chats vulnerable to harvest now, decrypt later attacks?
Signal protocol prevents replay attacks, as every message is encrypted with a new key: either the next hash-ratchet key, or the next key with new entropy mixed in via the next DH shared secret.
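The symmetric half of that is a simple KDF chain. A minimal sketch of one hash-ratchet step (the constants mirror the Signal spec's description; the DH ratchet that mixes in fresh entropy is omitted):

    import hmac
    import hashlib

    def chain_step(chain_key: bytes) -> tuple[bytes, bytes]:
        """Derive a one-time message key plus the next chain key. Old keys
        are deleted after use, so a replayed ciphertext can't be decrypted:
        the receiver no longer holds that message key."""
        message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
        next_chain_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
        return message_key, next_chain_key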
Private keys, probably not. WhatsApp is E2EE meaning your device generates the private key with OS's CSPRNG. (Like I also said above), exfiltration of signing keys might allow MITM but that's still possible to detect e.g. if you RE the client and spot the code that does it.
Wouldn't ratchet keys prevent MITM too? In other words if MITM has your keys and decrypts your message, then your keys are out of sync from now on. Or do I misunderstand that?
The ratchets would have different state, yes. The MITM would mix different entropy into the keys' states. It's only detectable if the MITM ever stops. But since the identity key exfiltration only needs to happen once per lifetime of an installation (longer if the key is backed up), the MITM could just continue forever, since it's just a few extra cycles to run the protocol on the server. You can then choose whether to read the messages or just ignore them.
One interesting way to detect this would be to observe the sender's outgoing and the recipient's incoming ciphertexts inside the client-to-server TLS, which can be MITM'd by the users themselves. Since the ratchet state differs, so do the keys, and thus, under the same plaintext, so do the ciphertexts. That would be a really easy way to detect a MITM.
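A sketch of that comparison, assuming both users can export the ciphertext frames from their own TLS-intercepted sessions (purely illustrative):

    import hashlib

    def possible_mitm(sent: list[bytes], received: list[bytes]) -> bool:
        """Compare what the sender's client uploaded with what the
        recipient's client downloaded. A re-encrypting MITM has different
        ratchet state, hence different keys, hence different bytes, even
        for identical plaintext."""
        fingerprints = lambda msgs: [hashlib.sha256(c).hexdigest() for c in msgs]
        return fingerprints(sent) != fingerprints(received)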
By that standard, it can never be verified because what is running and what is reviewed could be different. Reviewing relevant elements is as meaningful as reviewing all the source code.
Let’s be real: the standard is “Do we trust Meta?”
I don’t, and don’t see how it could possibly be construed to be logical to trust them.
I definitely trust a non-profit open source alternative a whole lot more. Perception can be different than reality but that’s what we’ve got to work with.
What you are saying is empirically false. Change in a single line of executed code (sometimes even a single character!) can be the difference between a secure and non-secure system.
This must mean that you have been paid not to understand these things. Or perhaps you would be punished at work if you internalized reality and spoke up. In either case, I don't think your personal emotional landscape should take precedence over things that have been proven and are trivial to demonstrate.
As long as the client side encryption has been audited, which to my understanding is the case, it doesn't matter. That is literally the point of encryption: communication across adversarial channels. Unless you think Facebook has broken the laws of mathematics, it's impossible for them to decrypt the content of messages without the users' private keys.
Well the thing is, the key exfiltration code would probably reside outside the TCB. It's not particularly hard to have some function grab the signing keys and send them to the server. Then you can impersonate the user in a MITM. That exfiltration is one-time and it's quite hard to recover from.
I'd much rather not have blind faith in WhatsApp doing the right thing, and instead just use Signal so I can verify myself that its key management is doing only what it should.
Speculating over the correctness of the E2EE implementation isn't productive; the metadata leak we know Meta takes full advantage of is reason enough to stick to proper platforms like Signal.
> That exfiltration is one-time and it's quite hard to recover from.
Not quite true with Signal's double ratchet though, right? Because keys are routinely getting rolled, you have to continuously exfiltrate the new keys.
No I said signing keys. If you're doing MITM all the time because there's no alternative path to route ciphertexts, you get to generate all those double-ratchet keys. And then you have a separate ratchet for the other peer in the opposite direction.
Last time I checked, WhatsApp features no fingerprint change warnings by default, so users will not even notice if you MITM them. The attack I described is for situations where the two users would enable the non-blocking key change warnings and try to compare the fingerprints.
Not saying this attack happens by any means. Just that this is theoretically possible, and leaves the smallest trail. Which is why it helps that you can verify on Signal it's not exfiltrating your identity keys.
Ah right, I didn't think about just outright MitMing from the get-go. If WhatsApp doesn't show the user anything about fingerprints, then yeah, that's a real hole.
Not that I trust Facebook or anything but wouldn’t a motivated investigator be able to find this key exfiltration “function” or code by now? Unless there is some remote code execution flow going on.
WhatsApp performs dynamic code loading from memory, GrapheneOS detects it when you open the app, and blocking this causes the app to crash during startup. So we know that static analysis of the APK is not giving us the whole picture of what actually executes.
This DCL could be fetching some forward_to_NSA() function from a server and registering it to be called on every outgoing message. It would be trivial to hide in tcpdumps. The best approach would be tracing with Frida and looking at syscalls to try to isolate what is actually being loaded, but it is also trivial for apps to detect they are being debugged and conditionally avoid loading the incriminating code. This code would only run in environments where the interested parties are sure there is no chance of detection, which covers enough of the endpoints that even if you personally can set off the anti-tracing conditions without falling foul of whatever attestation Meta likely has going on, everyone you text will be participating unknowingly in the dragnet anyway.
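For the native side of that tracing, something like this Frida sketch would log what the loader pulls in (assumes a rooted device running frida-server and the real package name com.whatsapp; it only covers dlopen-style native loads, not Java-level in-memory DEX loading, and per the caveat above an app that detects instrumentation can simply decline to load the interesting payload):

    import sys
    import frida  # pip install frida

    JS = """
    ['dlopen', 'android_dlopen_ext'].forEach(name => {
        const addr = Module.findExportByName(null, name);
        if (addr) Interceptor.attach(addr, {
            onEnter(args) { send(name + ': ' + args[0].readUtf8String()); }
        });
    });
    """

    session = frida.get_usb_device().attach("com.whatsapp")
    script = session.create_script(JS)
    script.on("message", lambda msg, data: print(msg.get("payload")))
    script.load()
    sys.stdin.read()  # keep the hooks alive while the app runs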
"Many forms of dynamic code loading, especially those that use remote sources, violate Google Play policies and may lead to a suspension of your app from Google Play."
I don’t know these OSes well enough. Can you MitM the dynamic code loads by adding a CA to the OS’s trusted list? I’ve done this in Python apps because there are only 2 or 3 places it might check to verify a TLS cert.
>Not that I trust Facebook or anything but wouldn’t a motivated investigator be able to find this key exfiltration “function” or code by now?
Yeah, I'd imagine it would have been found by now. Then again, who knows when they'd add it, and whether some future update removes it. Google isn't scanning every line of every version. I prefer to eliminate this kind of 5D-guesswork categorically and just use FOSS messaging apps.
The issue is what the client app does with the information after it is decrypted. As Snowden remarked after he released his trove, encryption works, and it's not like the NSA or anyone else has some super secret decoder ring. The problem is that endpoint security is borderline atrocious and an obvious Achilles' heel - the information has to be decoded in order to display it to the end user, so that's a much easier attack vector than trying to break the encryption itself.
So the point other commenters are making is that you can verify all you want that the encryption is robust and secure, but that doesn't mean the app can't just send a copy of the info to a server somewhere after it has been decoded.
No closed-source E2EE client can be truly secure because the ends of e2e are opaque.
Detecting backdoors is only truly feasible with open source software, and even then it can be difficult.
A backdoor can be a subtle remote code execution "vulnerability" that can only be exploited by the server. If used carefully, and if it exfiltrates data inside expected client-server communications, it can be all but impossible to detect. This approach also makes it likely that almost no insider will even be aware of it: it could be a small patch applied during the build process or to the binary itself (for example, a bounds check branch). This is also another reason why reproducible builds are a good idea for open source software.
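This is exactly why reproducible builds help: anyone can rebuild from the published source and byte-compare against what's shipped, so a build-time patch has nowhere to hide. The check itself is trivial (sketch):

    import hashlib
    import sys

    def sha256_file(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # usage: python check_repro.py shipped.apk rebuilt-from-source.apk
    shipped, rebuilt = sys.argv[1], sys.argv[2]
    print("OK: builds match" if sha256_file(shipped) == sha256_file(rebuilt)
          else "MISMATCH: shipped binary differs from the source build")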
>Detecting backdoors is only truly feasible with open source software, and even then it can be difficult.
This is absurd. Detecting backdoors is only truly feasible on binaries, there's no way you can understand compiler behavior well enough to be able to spot hidden backdoors in source code.
With all due respect to Stallman, you can actually study binaries.
The claim Stallman would make (after punishing you for an hour for saying Open Source instead of Free Software) is that Closed Software (Proprietary Software) is unjust. But in the context of security, the claim would be limited to Free Software being capable of being secure too.
You may be able to argue that Open Source reduces risk in threat models where the manufacturer is the attacker, but in any other threat model, security is an advantage of closed source. It's automatic obfuscation.
There's a lot of advantages to Free Software, you don't need to make up some.
This. Closed source doesn't stop people from finding exploits in the same way that open source doesn't magically make people find them. The Windows kernel is proprietary and closed source, but people constantly find exploits in it anyways. What matters is that there is a large audience that cares about auditing. OTOH if Microsoft really wanted to sneak in a super hard to detect spyware exploit, they probably could - but so could the Linux kernel devs. Some exploits have been openly sitting in the Linux kernel for more than a decade despite everyone being able to audit it in theory. Who's to say they weren't planted by some three letter agency who coerced a developer. Relying on either approach is pointless anyways. IT security is not a single means to all ends. It's a constant struggle between safety and usability at every single level from raw silicon all the way to user-land.
It's weird to me that it's 2026 and this is still a controversial argument. Deep, tricky memory corruption exploit development is done on closed-source targets, routinely, and the kind of backdoor/bugdoor people conjure in threads about E2EE are much simpler than those bugs.
It was a pretty much settled argument 10 years ago, even before the era of LLVM lifters, but post-LLM the standard of care practice is often full recompilation and execution.
> in any other threat model, security is an advantage of closed source
I think there's a lot of historical evidence that doesn't support this position. For instance, Internet Explorer was generally agreed by all to be a much weaker product from a security perspective than its open source competitors (Gecko, WebKit, etc).
Nobody was defending IE from a security perspective because it was closed source.
Obscurity is a delay tactic which raises the time cost associated with an attack. It is true that obscurity is not a security feature, but it is also true that increasing the time cost associated with attacking you is a form of deterrent against attempts. If you are not at the same time also secure in the conventional sense, then it is only buying you time until someone puts in the effort to figure out what you are doing and owns you. And you had better have a plan for when that time comes. But everyone needs time, because bugs happen, and you need that time to fix them before they are exploited.
The difference between obscurity and a secret (password, key, etc.) is the difference between less than a year to figure it out and a year or more to figure it out.
There is a surprising amount of software out there with obscurity preventing some kind of "abuse" and in my experience these features are not that strong, but it takes someone like me hours to reverse engineer these things, and in many cases I am the first person to do that after years of nobody else bothering.
This is a tired trope. Depending exclusively on obfuscation (security by obscurity) is not safe. Maintaining confidentiality of things that could aid in attacks is absolutely a defensive layer and improves your overall security stance.
I love the Rob Joyce quote that explained why TAO was so successful: "In many cases we know networks better than the people who designed and run them."
Anything that complicates things for an attacker improves security, at least grossly. That said, there might be counter-effects that make it a net loss or net neutral.
Explain how you detect a branched/flagged sendKey (or whatever it would be called) call in the compiled WhatsApp iOS app?
It could be interleaved in any of the many analytics tools in there too.
You have to trust the client in E2E encryption. There's literally no way around that. You need to trust the client's OS (and in some cases, other processes) too.
Binary analysis is vastly better than source code analysis, reliably detecting bugdoors via source code analysis tends to require an unrealistically deep knowledge of compiler behavior.
Empirically it doesn't look like there's a meaningful difference, does it?
Not having the source code hasn't stopped people from finding exploits in Windows (or even hardware attacks like Spectre or Meltdown). Having source code didn't protect against Heartbleed or log4j
I'd conclude it comes down to security culture (look how things changed after the Trustworthy Computing initiative, or OpenSSL vs LibreSSL) and "how many people are looking" -- in that sense, maybe "many eyes [do] make bugs shallow" but it doesn't seem like "source code availability" is the deciding factor. Rather, "what are the incentives" -- both on the internal development side and the external attacker side
I don't agree with "vastly better", but it's arguable in both direction and magnitude. I don't think you could plausibly argue that binary analysis is "vastly harder".
This comment comes across as unnecessarily aggressive and out of nowhere (Stallman?), it's really hard to parse.
Does this rewording reflect its meaning?
"You don't actually need code to evaluate security, you can analyze a binary just as well."
Because that doesn't sound correct?
But that's just my first pass, at a high level. Don't wanna overinterpret until I'm on surer ground about what the dispute is. (i.e. don't want to mind read :) )
Steelman for my current understanding is limited to "you can check if it writes files/accesses network, and if it doesn't, then by definition the chats are private and it's secure", which sounds facile. (Presumably something is being written somewhere for the whole chat thing to work; it can't be pure P2P because someone's app might not be open when you send.)
Whether the original comment knows it or not, Stallman greatly influenced the very definition of Source Code, and the claim being made here is very close to Stallman's freedom to study.
>"You don't actually need code to evaluate security, you can analyze a binary"
Correct
>"just as well"
No, of course analyzing source code is easier and analyzing binaries is harder. But it's still possible (feasible is the word used by the original comment)
>Steelman for my current understanding is limited to "you can check if it writes files/accesses network, and if it doesn't, then by definition the chats are private and it's secure",
I didn't say anything about that? I mean, those are valid tactics as part of a wider toolset, but I specifically said binaries, because a binary maps one to one with the source code. If you can find something in the source code, you can find it in the binary, and vice versa. Analyzing file accesses and network traffic, or runtime analysis of any kind, is mostly orthogonal to source code/binary static analysis, the only difference being whether you have a debug map to the source code or to the machine code.
This is a very central conflict of Free Software. What I want to make clear is that Free Software refuses to study closed source software not because it is impossible, but because it is unjustly hard. Free Software never claims it is impossible to study closed source software; it claims that source code access is a right, and its adherents prefer to reject closed source software, and thus never need to perform binary analysis.
What’s the state of the art of reverse engineering source code from binaries in the age of agentic coding? Seems like something agents should be pretty good at, but haven’t read anything about it.
I think there’s a good possibility that the technology that is LLMs could be usefully trained to decode binaries as a sort of squint-and-you-can-see-it translation problem, but I can’t imagine, eg, pre-trained GPT being particularly good at it.
I've been working on this, the results are pretty great when using the fancier models. I have successfully had gpt5.2 complete fairly complex matching decompilation projects, but also projects with more flexible requirements.
Nothing yet; agents analyze code, which is textual.
The way they analyze binaries now is by using the textual interfaces of command-line tools, and the tools used are mostly the ones supported by the foundation models at training time. Mostly you can't teach a model new tools at inference; they must be supported at training. So most providers are focused on the same tools and benchmark against them, and binary analysis is not in the zeitgeist right now; it's about production more than understanding.
No no, you of course CAN teach tool use at inference, but it's different from doing so at training time, and the models are trained to call specific tools right now.
Also MCP is not an Agent protocol, it's used in a different category. MCP is used when the user has a chatbot, sends a message, gets a response. Here we are talking about the category of products we would describe as Code Agents, including Claude Code, ChatGPT Codex, and the specific models that are trained for use in such contexts.
The idea is that of course you can tell it about certain tools at inference, but in code production tasks the LLM is trained to use string-based tools such as grep, and not language-specific tools like Go To Definition.
My source on this is Dax who is developing an Open Source clone of Claude Code called OpenCode
Claude code and cursor agent and all the coding agents can and do run MCP just fine. MCP is effectively just a prompt that says “if you want to convert a binary to hex call the ‘hexdump’ tool passing in the filename” and then a promise to treat specially formatted responses differently. Any modern LLM that can reason and solve math problems will understand and use the tools you give it. Heck I’ve even seen LLMs that were never trained to reason make tool calls.
You say they’re better with the tools they’re trained on. Maybe? But if so not much. And maybe not. Because custom tools are passed as part of the prompt and prompts go a long way to override training.
LLMs reason in text. (Except for the ones that reason in latent space.) But they can work with data in any file format as long as they’re given tools to do so.
Agents are sort of irrelevant to this discussion, no?
Like, it's assuredly harder for an agent than having access to the code, if only because there's a theoretical opportunity to misunderstand the decompile.
Alternatively, it's assuredly easier for an agent because given execution time approaches infinity, they can try all possible interpretations.
Agents meaning an AI iteratively trying different things to try to decompile the code. Presumably in some kind of guess and check loop. I don’t expect a typical LLM to be good at this on its first attempt. But I bet Cursor could make a good stab at it with the right prompt.
Ex-WhatsApp engineer here. The WhatsApp team makes so much effort to make these end-to-end encrypted messages possible. From my time there, I know for sure it is not possible to read the encrypted messages.
From a business standpoint they don't have to read these messages, since the WhatsApp Business API provides the necessary funding for the org as a whole.
Nice! Hey, question: I noticed Signal at one point had the same address on the Google Play Store as WA. Can you tell us if Signal devs shared office space with WA during the integration of the Signal protocol? Related to that, did they hold the WA devs' hands during the process, meaning at least at the time it was sort of greenlighted by Moxie or something? If this is stuff under NDA I fully understand, but anything you can share I'd love to hear.
From what you know about WA, is it possible for the servers to MitM the connection between two clients? Is there a way for a client to independently verify the identity of the other client, such as by comparing keys (is it even possible to view them?), or comparing the contents of data packets sent from one client with the ones received on the other side?
WhatsApp uses key transparency. Anyone can check what the current published keys for a user are, and be sure they get the same value as any other user. Specifically, your WA client checks that these keys are the right key.
WhatsApp has a blog post with more details available.
The legal and liability protection these messaging services get from E2EE is far too big to break it.
Besides, I get the feeling we're so cooked these days from marketing that when I get freaked out that an advert matches what I was thinking about, it's probably because they made me think about it in the first place.
How would you hide that? Unless you’re assuming nobody ever has to try and fix bugs or audit code to find it, and there’s some kind of closed off area of code that nobody thinks is suspicious. Or you maintain a complete second set of the app core libs that a few clandestine folks can access, and then hope nobody notices that the binaries don’t line up and crash logs are happening in obscured places.
> Ex-WhatsApp engineer here. The WhatsApp team makes so much effort to make these end-to-end encrypted messages possible. From my time there, I know for sure it is not possible to read the encrypted messages.
None of this makes the point you want to make. Being a former engineer. The team making "so much effort". You "knowing for sure". Like many in security, a single hole is all it takes for your privacy to pour out of your metaphorical bag of sand.
It’s not even “a single hole,” it’s that companies can change their mind at any time.
That person doesn’t work there anymore. For all we know Zuck could wake up one day and say “that’s it, we need the data and revenue from reading WhatsApp chats. Change our policy in the most low key way possible.”
Honestly, it’s too tempting isn’t it? They have the largest conversation network out there.
It doesn’t help that the company has just about zero trust built up among their customers. The whole dang company changed their name arguably to try to shed the “Facebook” baggage.
The backups are either unencrypted by default or have keys held by Meta / your backup provider. I think this means three-letter agencies can see your chats, just with a slight delay.
Another comment above mentions that you can recover conversation histories with just your phone number--if that's true then yup. The E2EE is all smoke and mirrors.
I have no doubt that rank and file engineers were not aware of the underlying functionality that allowed plain text content to be read.
Nobody would ever create a SendPlainTextToZuck() function that had to be called on every message.
It would be as simple as using a built-in PRNG for client-side key generation and then surreptitiously leaking the initial state (dozens of bytes) once, in a nonce or a signature or something, when authenticating with the server.
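A toy illustration of that kind of kleptographic leak (not any real protocol; the point is only that the "random" nonce looks uniformly random to everyone except a server that knows the layout):

    import random
    import secrets

    SEED_LEN = 8  # bytes of PRNG state leaked

    def weak_keygen(seed: int) -> bytes:
        rng = random.Random(seed)  # non-cryptographic, fully determined by seed
        return bytes(rng.randrange(256) for _ in range(32))

    def auth_nonce(seed: int) -> bytes:
        mask = secrets.token_bytes(SEED_LEN)  # genuinely random half
        leaked = bytes(m ^ s for m, s in zip(mask, seed.to_bytes(SEED_LEN, "big")))
        return mask + leaked  # server computes mask XOR leaked -> seed -> key

    seed = secrets.randbits(64)
    key = weak_keygen(seed)   # the client's "private" key
    nonce = auth_nonce(seed)  # sent in the clear during authentication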
I’ve often thought one of Zuck’s superpowers is in finding ways to get smart and moral people to do truly evil things. Sometimes it’s mind games. Sometimes it’s careful layers of obfuscation.
Here it might be: this analytics package is dynamically loaded at runtime, because reasons. This abuse flagging and review system is bundled with the analytics, because reasons. This add-on reconfigures how the analytics package behaves at runtime and has a bunch of switches nobody remembers the purpose of, but don't touch them, they're fragile.
> There’s a lawsuit against WhatsApp making the rounds today, claiming that Meta has access to plaintext. I see nothing in there that’s compelling; the whole thing sounds like a fishing expedition.
None of the statements I’ve seen from Meta, people formerly involved in WhatsApp that chimed in here (thanks!), or the quotes from the investigation are incompatible with the whistleblowers’ allegations.
At this point, I won’t trust anything short of this on the front page of an SEC filing, signed by zuck and the relevant management chain:
“The following statement is material to earnings: Facebook has never (since E2EE was rolled out) and will never access messages sent through whatsapp via any means including the encryption protocol, application backdoor moderation access or backup mechanisms. Similarly, it does not provide third parties with access to the methods, and does not have the technical capability to do so under any circumstances.”
Long story short, the world's largest privacy invaders have every incentive to subvert your privacy.
They do this at every opportunity. It's their bread and butter. It's how they make money.
The only reasonable assumption when dealing with them is that your privacy is being compromised. Proving it may not be possible but the incentive alone should be enough to convince a reasonable person that they do not deserve trust.
Just to throw in a couple of possibly outlandish theories:
1. As others have said, they could be collecting the encrypted messages and then trying to decrypt them using quantum computing; the Chinese have reportedly been trying to do this for many years now.
2. With metadata and all the information from other sources, they could infer what the conversation is about without the need to decrypt it: if I visit a page (Facebook cookies, they know), then I send a message to my friend John, and then John visits the same page (again, cookies), they can be pretty certain that the content of the message was me sharing the link.
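That second inference needs nothing but timestamps and cookies. A toy sketch of the join (all field names invented for illustration):

    from datetime import timedelta

    WINDOW = timedelta(hours=1)

    def likely_shared_links(visits: list[dict], messages: list[dict]) -> list[tuple]:
        """visits: {"user", "url", "at"}; messages: {"from", "to", "at"}.
        Flag cases where the sender saw a URL shortly before messaging
        someone who then visits the same URL shortly after."""
        hits = []
        for m in messages:
            before = {v["url"] for v in visits if v["user"] == m["from"]
                      and m["at"] - WINDOW <= v["at"] <= m["at"]}
            after = {v["url"] for v in visits if v["user"] == m["to"]
                     and m["at"] <= v["at"] <= m["at"] + WINDOW}
            hits += [(m["from"], m["to"], url) for url in before & after]
        return hits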
(1) made me chuckle. I've worked at nearly every FAANG including Meta. These companies aren't nearly as advanced or competent as you think.
I no longer work at Meta, but in my mind a more likely scenario than (1) is: a senior engineer proposes a 'Decryption at Scale' framework solely to secure their E6 promo, and writes a 40-page Google Doc to farm 'direction' points for PSC. Five PMs repost this on Workplace to celebrate the "alignment" so they can also include it in their PSCs.
The TL and PMs immediately abandon the project after ratings are locked because they already farmed the credit for it. The actual implementation gets assigned to an E4 bootcamp grad who is told by a non-technical EM to pivot 3 months in because it doesn't look like 'measurable impact' in a perf packet. The E4 gets fired to fill the layoff quota and everyone else sails off into the sunset.
Re. quantum computing: no chance, the scientific and engineering breakthroughs they would need are too outlandish, like claiming China already had a 2026-level frontier model back in 2016.
I think this is the most likely scenario. The US government is not necessarily trying to read the messages right now, in real-time. But it wants to read the messages at some point in the future.
I wonder how these investigations go. Are they just asking them if it is true? Are they working with IT specialists to technically analyze the apps? Are they requesting source code that can be demonstrated to be the same code that runs on the user devices and then analyzing that code?
That will be step 1. Fear of being caught lying to the government is such that that is usually enough. Presumably at least a handful of people would have to know about it, and nobody likes their job at Facebook enough to go to jail over it.
Companies lie to governments and the public all the time. I doubt that even if something were found and the case were lost, it would lead to prison or any truly severe punishment. No money was stolen and no lives were put at risk. At worst, it would likely end in a fine, and then it would be forgotten, especially given Meta’s repeated violations of user trust.
The reality is that most users do not seem to care. For many, WhatsApp is simply “free SMS,” tied to a phone number, so it feels familiar and easy to understand, and the broader implications are ignored.
I said this in another recent HN thread but all encryption comes down to key management. If you don’t control the keys, something else does. Sometimes that’s a hardware enclave, sometimes it’s a key derivation algorithm, sometimes it’s just a locally generated key on the filesystem.
If you never give WhatsApp a cryptographic identity then what key is it using? How are your messages seamlessly showing up on another device when you authenticate? It’s not magic, and these convenience features always weaken the crypto in some way.
WhatsApp has a feature to verify the fingerprint of another party. How many people do you think use this feature, versus how many people just assume they're safe because they read that WhatsApp has E2EE?
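For reference, the feature amounts to both parties comparing a digest over the two identity keys out of band. A sketch of the idea, loosely modeled on Signal-style safety numbers (not WhatsApp's actual encoding):

    import hashlib

    def chat_fingerprint(key_a: bytes, key_b: bytes) -> str:
        """Order-independent digest over both identity keys, rendered as
        digit groups. If both users see the same string, no third key
        (i.e. no MITM) sits between them."""
        first, second = sorted((key_a, key_b))
        digest = hashlib.sha512(first + second).digest()
        digits = "".join(str(b % 10) for b in digest[:30])
        return " ".join(digits[i:i + 5] for i in range(0, len(digits), 5))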
I witnessed something recently that points unambiguously at Whatsapp chats being not private.
Not two months ago I sent a single photo to a friend of some random MacGyver kitchen contraption I made. Never described it, just the photo with a "lol". He replied "lol". He never reshared nor discussed it with anyone else. We never spoke about this before or after. Two days later he starts seeing ads on Facebook for a proper version of the same. There's virtually no other explanation except for Meta vacuuming up and analyzing the photo. None.
Machines are really good at putting two and two together and humans are generally not. Are both of you completely sure that you didn't google anything related to the image before or after?
While Meta doesn't have access to the contents, they do know who you're texting and when.
I don't think the claim was that the commercial device never existed, but that it was too obscure for the friend to randomly and independently get targeted ads about it.
But I don't think WhatsApp takes many steps to protect media; in many cases the user really wants to back up media or share it in other apps, etc., rather than have security for media.
I want whatsapp to decrypt the messages in a secure enclave and render the message content to the screen with a secure rendering pipeline, as is done with DRM'ed video.
Compromise of the client side application or OS shouldn't break the security model.
This should be possible with current APIs, since each message could, if needed, simply be a single-frame DRM'ed video if no better approach exists (or until a better approach is built).
Signal uses the DRM APIs to mitigate threats like Microsoft Recall, but it doesn't stop the app itself from reading its own data.
I don't really see how it's possible to mitigate client compromise. You can decrypt stuff on a secure enclave but at some point the client has to pull it out and render it.
> I don't really see how it's possible to mitigate client compromise
Easy: pass laws requiring chat providers to implement interoperability standards so that users can bring their own trusted clients. You're still at risk if your recipient is using a compromised client, but that's a problem that you have the power to solve, and it's much easier to convince someone to switch a secure client if they don't have to worry about losing their contacts.
That's not permissionless afaik. "Users" can't really do it. It's frustrating that all these legislations appear to view it as a business problem rather than a private individual's right to communicate securely.
But in a way, I feel like sometimes it makes sense not to completely open everything. Take a messaging app: it makes sense not to just make it a free-for-all. As a company, if I let you interoperate with my servers that I pay for and maintain, I guess it makes sense that I may want to check who you are first. I think?
We probably can't make it a free-for-all, but for something like a messaging app, we also need to recognize that it isn't optional for functioning in society. It should be regulated more like a utility:
- Facebook can still control the identity, but there needs to be a legal recourse for getting banned, and their policies can't discriminate against viewpoints, for example
- The client specs should be open so that an alternate client can be implemented (sort of like how Telegram is currently)
I meant the platform openness aspect, that you are allowed to use alternate clients, while the identity is centralized. E2EE is largely independent of this choice.
> but there needs to be a legal recourse for getting banned
Agreed.
> The client specs should be open so that an alternate client can be implemented
An example that comes to mind is Signal, where they don't want that. They get a lot of criticism for it of course, but I think the reasoning actually makes sense: in terms of security, allowing third-party clients is a security risk. If your threat model is "people who risk their life using it", it makes sense, right?
Under the EU's Digital Markets Act, WhatsApp is considered a gatekeeper (Signal is not) and has to be open to interoperability. It seems like they do audit the implementations in order to make sure that the security is not too bad. Which makes sense again, but has a cost. For Meta, that's fine. For Signal... I don't know.
Also WhatsApp will - if I understand correctly - make it very clear that you are talking to someone on a third-party client (and again they get a lot of criticism for that). But I think it makes sense... If WhatsApp were so open that every second client was pretty much spyware, that would defeat the purpose of E2EE messaging.
Not that I strongly disagree, but just saying that it seems... complicated.
I was intending that the alternate client should exist to function as an escape hatch. I fully expect most people will still use the default one, just like how people used the official reddit/telegram client when third party ones were available. The existence of an alternative constrains how much Facebook can enshittify the experience.
E2EE is about secure transport between the endpoints. What happens to the message after the endpoint is not something an app can feasibly enforce. Having control of the clients can at most do things like enforcing deletes, which IMO is not a good idea anyway.
> every second client was pretty much spyware
Very few people will actually use one since the official app won't be outwardly too hostile, and those who do should be sufficiently discerning.
I don't think that it can work like that. If you make it fully open, you don't know what can happen. It cannot improve the security, it can only worsen it.
Suddenly you go from people using WhatsApp to people using random apps that you have no idea about, I think it's a step backward.
The "escape hatch", IMO, is an alternative messenger (like Signal). If Meta makes WhatsApp really bad, people can just switch to Signal. It's infinitely easier than moving away from AWS or the Microsoft Suite. The lock-in effect is really just that people can't be arsed to install it.
I think that the mere existence of Signal already forces Meta to keep WhatsApp relatively good. And to be fair, around me people like WhatsApp better because it has features they want and that Signal doesn't have.
>I don't really see how it's possible to mitigate client compromise.
You could of course offload plaintext input and output along with cryptographic operations and key management to separate devices that interact with the networked device unidirectionally over hardware data diodes, that prevent malware from getting in or getting the keys out.
Throw in some v3 Onion Services for p2p ciphertext routing, and decent ciphersuite and you've successfully made it to at least three watch lists just by reading this. Anyway, here's one I made earlier https://github.com/maqp/tfc
> don't really see how it's possible to mitigate client compromise.
Think of the way DRM'ed video is played. If the media player application is compromised, the video data is still secure. That's because the GPU does both the decryption and the rendering, and will not let the application read it back.
That's not what signal's doing though. It's just asking the OS nicely to not capture screen contents. There are secure ways of doing media playback, but that's not what signal's using.
Video decryption+decoding is a well-defined enough problem that you can ship silicon that does it. You can't do the same thing for the UI of a social media app.
You could put the entire app within TrustZone, but then you're not trusting the app vendor any less than you were before.
Although now I think about it more, you could have APIs for "decrypt this [text/image] with key $id, and render it as a secure overlay at coordinates ($x, $y)"
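Sketched as an interface, that idea might look like this (entirely hypothetical; no such OS API exists, and the names are invented):

    from dataclasses import dataclass

    @dataclass
    class SecureOverlayRegion:
        x: int
        y: int
        width: int
        height: int

    class SecureRenderer:
        """Hypothetical trusted OS/GPU service: the app hands over ciphertext
        and a key id; decryption and rasterization happen outside the app's
        address space, so a compromised app never sees the plaintext."""

        def render_encrypted(self, ciphertext: bytes, key_id: str,
                             region: SecureOverlayRegion) -> None:
            raise NotImplementedError("provided by the trusted OS component")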
Clients can always be compromised. I'm not talking about a client that can't be compromised, but simply a client that is not compromised out-of-the-box.
Windows Recall, intrusive addition of AI features (is there even a pinky promise that they're not training on user data?), more built-in ads, and less user control (most notably the removal of using the OS without an account - something that makes sense in the context of undisclosed theft of private information).
This was 2025. I'm excited for what 2026 will bring. Things are moving fast indeed.
“I want whatsapp to decrypt the messages in a secure enclave and render the message content to the screen with a secure rendering pipeline, as is done with DRM'ed video.”
If you are sophisticated enough to understand, and want, these things (and I believe that you are) …
… then why would you want to use WhatsApp in the first place?
This is what a layman would assume happens from Meta’s WhatsApp advertising. They show the e2e process, and have the message entirely unreadable by anyone but the phone owner.
because it used to be that the ends and the middlemen were different entities.
In the universe where they are the same entity (walled-gardens) there is only the middleman.
In such cases you either trust them or you don’t, anything more is not required because they can compromise their own endpoints in a way you can not detect.
It is a bit counter-intuitive because there's a law enforcement lobby working very hard to make sure that they can read private WhatsApp chats. I don't think it is reasonable to treat the entity that literally runs a spy agency monitoring all digital communication as the arbiter and investigator of what is and isn't private. The incentives just aren't there.
Next time you use true real independently audited e2e communication channel, don’t forget to check who is the authority who says that the "other end" is "the end" you think it is
I know the default assumption with Telegram is that they can read all your messages, but unlike WhatsApp they seem less cooperative and I never got the notion that they ever read private messages until the Macron incident, and even then they only do so if the other party reports them. How come they are able to be this exception despite not having end to end encryption by default?
>I know the default assumption with Telegram is that they can read all your messages
The client is open source. It's trivial to verify this is 100% factually happening. They have access to every group message. Every desktop message. Every message, by default. If you enable secret chats for 1:1 mobile chats, you are now disclosing to Telegram that you're actively trying to hide something from them, and if there ever was metadata worth it for Keith Alexander to kill someone over, it's that.
>they seem less cooperative and I never got the notion that they ever read private messages until the Macron incident
Thus, I wouldn't be as concerned about what they're handing EUROPOL as about what they're handing the FSB/SVR.
Even if Telegram never cooperated with Russian intelligence, who here thinks the Telegram team, which can't pull off the basic thing of "make everything E2EE" that ~all of its competition has successfully done, can harden its servers against Russian state-sponsored hackers like Fancy Bear, who obviously would never make noise about a successful breach and data exfiltration?
>How come they are able to be this exception despite not having end to end encryption by default?
They've pushed out a lie about storing cloud chats across different servers in different jurisdictions. Maybe that scared some prosecutors off. Or maybe FVEY is inside TG's servers too, and they don't like the idea of going after users, as that would incentivize deployment of usable E2EE.
Currently, the Russian government is trying to squeeze people out of Telegram and move them over to MAX: https://caspianpost.com/regions/russia-tightens-telegram-res...
WhatsApp also operates in Russia, despite Instagram and Facebook being banned. So I wouldn't count on its E2EE either.
Signal still requires a phone number and proprietary Google blobs on mobile. Many third-party Telegram clients exist - Signal allows none.
The real story is, MAX is there to scare people into Telegram. Durov isn't your friend, neither is Putin who doesn't bother blocking connections to the server.
>So I wouldn't count on its E2EE either.
This is the worst way to assess E2EE deployment. 5D-chess.
>Signal still requires a phone number and proprietary Google blobs on mobile.
Telegram also requires a phone number. If you didn't have double standards, I bet you'd have no standards.
>Many third-party Telegram clients exist
The official implementation and default encryption matters. 99.99% just assume Telegram is secure because Putin supposedly tries to block it. They don't know it's not E2EE. And no third party TG desktop client offers cross-platform E2EE or E2EE groups. IIRC there's exactly one desktop client that tries to offer E2EE 1:1 chats but that's not seamless. TG has no idea how to make seamless E2EE like Signal.
You ignoring that Signal is both open source and always E2EE, while complaining about its "proprietary blobs" and looking past TG's atrocious E2EE, speaks volumes.
> “We look forward to moving forward with those claims and note WhatsApp’s denials have all been carefully worded in a way that stops short of denying the central allegation in the complaint – that Meta has the ability to read WhatsApp messages, regardless of its claims about end-to-end encryption.”
My money is on the chats being end to end encrypted and separately uploaded to Facebook.
>being end to end encrypted and separately uploaded to Facebook
That's a cute loophole you thought up, but WhatsApp's marketing is pretty unequivocal that they can't read your messages.
>With end-to-end encryption on WhatsApp, your personal messages and calls are secured with a lock. Only you and the person you're talking to can read or listen to them, and no one else, not even WhatsApp
That's not to say it's impossible that they are secretly uploading your messages, but the implication that they could be secretly doing so while not running afoul of their own claims because of cute word games, is outright false.
>That's not to say it's impossible that they are secretly uploading your messages, but the implication that they could be secretly doing so while not running afoul of their own claims because of cute word games, is outright false.
The thing is, if they were uploading your messages, then they'd want to do something with the data.
And humans aren't great at keeping secrets.
So, if the claim is that there's a bunch of data, but everyone who is using it to great gain is completely and totally mum about it, and no one else has ever thought to question where certain inferences were coming from, and no employee ever questioned any API calls or database usage or traffic graph.
Well, that's just about the best damn kept secret in town and I hope my messages are as safe!
Where were the Facebook whistleblowers about the numerous iOS/Android gaps that let the company gain more information than it was supposed to see? Malicious VPNs, scanning other installed mobile applications, whatever. As far as I know, the big indictments have come from the outside.
AFAIK that was a separate app, and it was pretty clear that it was MITMing your connections. It's not any different than say, complaining about how there weren't any whistleblowers for fortinet (who sell enterprise firewalls).
I'm not saying they are sending the content back, but WhatsApp has to read your message or it couldn't display it, so I don't even know exactly what that particular claim means?
They most likely mean their service or their employees, but this appears to be marketing fluff and not an enforceable statement.
I wonder if keyword/sentiment extraction on the user's device counts as reading "by WhatsApp"...
There's the conspiracy theory about mentioning a product near the phone and then getting ads for it (which I don't believe), but I feel like I've mentioned products on WhatsApp chats with friends and then got an ad for them on Instagram sometime after.
Also claiming "no one else can read it" is a bit brave, what if the user's phone has spyware that takes screenshots of WhatsApp... (Technically of course it's outside of their scope to protect against this, but try explaining that to a judge who sees their claim and the reality)
The conspiracy theory exists due to quirks of human attention and the wider metadata economy though.
You mention something so you're thinking about it, you're thinking about it probably because you've seen it lately (or it's in the group of things local events are making you think about), and then later you notice an ad for that thing and because you were thinking about it actually notice the ad.
It works with anything in any media form. Like I've had it where I hear a new thing and suddenly it turns up in a book I'm reading as well. Of course people discount that because they don't suspect books of being intelligent agents.
So? You "directly profit" from picking up a bundle of cash robbers dropped as well doing the actual robbery. That doesn't mean someone who does the former is going to do the latter.
My guess is that they are end-to-end encrypted, and that because of Facebook's scale they're able to probabilistically guess at what's in the encrypted messages (e.g. a message with X hash has Y probability of containing the word "shoes").
If this was happening en-masse, wouldn't this be discovered by the many people reverse engineering WhatsApp? Reverse engineering is hard sophisticated work, but given how popular WhatsApp is plenty of independent security researchers are doing it. I'm quite skeptical Meta could hide some malicious code in WhatsApp that's breaking the E2EE without it being discovered.
I'm technical and work in security. Since it is trivial, please explain. Ideally not using a strawman like "well just run strings and look for uploadPlaintextChatsToServer()".
This was happening en masse, perhaps still does - the cloud backup was unencrypted. Originally it was encrypted. Then, one day, Google stopped counting it towards your storage quota, but it became unencrypted. But even before that, Meta had the encryption keys (and probably still does).
When you get a new phone, all you need is your phone number to retrieve the past chats from backup; nothing else. That proves, regardless of specifics, that Meta can read your chats - they can send it to any new phone.
So it doesn’t really matter that it is E2EE in transit - they just have to wait for the daily backup, and they can read it then.
Well they wouldn't be breaking e2ee, they'd be breaking the implicit promise of e2ee. The chats are still inaccessible to intermediaries, they'd just be stored elsewhere. Like Apple and Microsoft do.
I am not familiar with the state of app RE. But between code obfuscators and the difficulty of distinguishing between 'normal' phone home data and user chats when doing static analysis... I'd say it's not out of the question.
I really doubt this. Any such upload would be visible inside the WhatsApp application, which would make it the world's most exciting (and relatively straightforward) RE project. You can even start with a Java app, so it's extra easy.
Do FAANG apps have antidebug or code obfuscation? At least for Google, their apps are pretty lightly protected. The maximum extent of obfuscation is the standard compilation/optimization process that most apps go through (e.g. R8 or ProGuard).
Reverse engineering is easy when the source code is available. :)
The difference between source code in a high-level language, and AArch64 machine language, is surmountable. The effort is made easier if you can focus on calls to the crypto and networking libraries.
At some level, the machine code is the source code -- but decompiling AArch64 mobile apps into something like Java is common practice.
As GP alludes, you would be looking for a secondary pathway for message transmission. This would be difficult to hide in AArch64 code (from a skilled practitioner), and extra difficult in decompiled Java.
It would be "easy" enough, and an enormous prize, for anyone in the field.
I've done this work on other mobile apps (not WhatsApp), and the work is not out of the ordinary.
It's difficult to hide subtleties in decompiled code. And anything that looks hairbally gets special attention, if the calling sites or side effects are interesting.
(edit for edit)
> That's certainly the only way messages could be uploaded to Facebook!
Well, there's a primary pathway which should be very obvious. And if there's a secondary pathway, it's probably for telemetry etc. If there are others, or if it isn't telemetry, you dig deeper.
All secrets are out in the open at that point. There are no black boxes in mobile app code.
I've lost track of our points of disagreement here. Sure, it's work, but it's all doable.
Obfuscated code is more difficult to unravel in its original form than in its decompiled form. Decompiled code is a mess with no guideposts, but that's just a matter of time and patience to fix. It's genuinely tricky to write code that decompiles into deceptive appearances.
Original position is that it'd be difficult to hide side channel leakage of chat messages in the WhatsApp mobile app. I have not worked on the WhatsApp app, but if it's anything like the mobile apps I have analyzed, I think this is the correct position.
If the WhatsApp mobile apps are hairballs of obfuscation and misdirection, I would be a) very surprised, and b) highly suspicious. Since I don't do this work every day any more, I haven't thought much about it. But there are so many people who do this work every day, and WhatsApp is so popular, I'd be genuinely shocked if there were fewer than hundreds of people who have lightly scanned the apps for anything hairbally that would be worth further digging. Maybe I'm wrong and WhatsApp is special though. Happy to be informed if so.
> My money is on the chats being end to end encrypted and separately uploaded to Facebook.
If governments of various countries have compelled Meta to provide a backdoor and also required non-disclosure (e.g. a TCN secretly issued to Meta under Australia's Assistance and Access Act), this is how I imagined they would do it. It technically doesn't break encryption as the receiving device receives the encrypted message.
> My money is on the chats being end to end encrypted and separately uploaded to Facebook.
This is what I've suspected for a long time. I bet that's it. They can already read both ends, no need to b0rk the encryption. It's just them doing their job to protect you from fourth parties, not from themselves.
It encrypts to all the keys registered for the phone number of that user, because users switch phones but keep their number. But each new WhatsApp install gets a new private key; the old key is not shared. This feature was added later, so the old WhatsApp devs wouldn't know about it.
So it would be trivial to encrypt to the NSA key also, as done on Windows.
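In other words, fan-out encryption to a key set whose membership the server controls. A toy sketch with NaCl sealed boxes (pip install pynacl; illustrative, not WhatsApp's actual scheme):

    from nacl.public import PrivateKey, SealedBox

    def fan_out(plaintext: bytes, device_pubkeys: list) -> list[bytes]:
        """Encrypt the message once per registered device key. Nothing in
        the ciphertexts tells the sender whose keys these were, which is
        why a quietly appended extra key is invisible without key
        transparency or manual fingerprint checks."""
        return [SealedBox(pk).encrypt(plaintext) for pk in device_pubkeys]

    phone, laptop = PrivateKey.generate(), PrivateKey.generate()
    escrow = PrivateKey.generate()  # the silently added key in this scenario

    cts = fan_out(b"hi", [phone.public_key, laptop.public_key, escrow.public_key])
    assert SealedBox(escrow).decrypt(cts[2]) == b"hi"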
Facebook messenger similarly claims to be end to end encrypted, and yet if it thinks you are sending a link to a pirate site, it "fails to send". I imagine there are a great many blacklisted sites which they shadow block, despite "not being able to read your messages".
My pet conspiracy theory is that the "backup code" which "restores" encrypted messages is there to annoy you into installing the app instead of chatting on the web.
This reads like a nothingburger. Couple of quotes from the article:
> the idea that WhatsApp can selectively and retroactively access the content of [end-to-end encrypted] individual chats is a mathematical impossibility
> Steven Murdoch, professor of security engineering at UCL, said the lawsuit was “a bit strange”. “It seems to be going mostly on whistleblowers, and we don’t know much about them or their credibility,” he said. “I would be very surprised if what they are claiming is actually true.”
No one apart from the firm filing the lawsuit is actually supporting this claim. A lot of people in this thread seem very confident that it's true, and I'm not sure what precisely makes them so confident.
>has always been able to read encrypted iMessage messages
...assuming you have icloud backups enabled, which is... totally expected? What's next, complaining about bitlocker being backdoored because microsoft can read your onedrive files?
If you read the link, you would know that, contrary to your expectation, other apps advertising E2EE, such as Google's Messages app, don't allow the app maker to read your messages from your backups. And turning off backups doesn't help when everyone else has them enabled: Apple doesn't respect your backup settings on other people's accounts. Again, other apps address this problem in various ways, but not iMessage.
>If you read the link you would know that contrary to your expectation other apps advertising E2EE don't allow the app maker to read your messages.
What does that even mean? Suppose iCloud backups didn't exist, but you could still take screenshots and save them to iCloud Drive. Is that also "Apple has always been able to read encrypted iMessage messages"? Same goes for "other people having iCloud backups enabled". People can also snitch on you, or get their phones seized. I feel like people like you and the article author are just redefining the threat model of E2EE apps so they can smugly go "well, ackshually..."
It means, for example, Google Messages uses E2EE backups. Google cannot read your E2EE messages by default, period. Not from your own backup, not from other peoples' backups. No backup loophole. Most other E2EE messaging apps also do not have a backup loophole like iMessage.
It's not hard to understand why Apple uploading every message to themselves to read by default is different from somebody intentionally taking a screenshot of their own phone.
I'm less concerned with whether it is technically opt in or opt out and more concerned with whether it is commonly enabled in practice.
What would resolve my objection is if Apple either made Messages backups E2EE always, as Google did and as Apple itself does for other data categories like Keychain passwords, or excluded E2EE conversations (e.g. those with ADP users) from non-E2EE backups, as Telegram does. Anything short of that does not qualify as E2EE regardless of the defaults, and marketing it as E2EE is false advertising.
I remember reading this recently. Not saying it’s true but it got my attention
TUESDAY, NOVEMBER 25, 2025
Blind Item #7
The celebrity CEO says his new chat system is so secure that even he can't read the messages. He is lying. He reads them all the time.
It seems obvious that they can. My understanding is that for FB Messenger, the private key is stored encrypted with a key derived from the user's password. So it's not straightforward, but Meta is obviously in a position to grab the user's password when they authenticate and obtain their private key. This would probably leave traces, but someone working with company authorization could probably do it.
For WhatsApp they claim it is like Signal, with the caveat that if you have backups enabled it works like Messenger. Interestingly, since with backups enabled the key may be stored with Apple/Google rather than Meta, it might be that your phone vendor can read your WhatsApp messages but Facebook cannot.
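A minimal sketch of that reading (an assumption about Messenger's design, not a confirmed implementation): the private key is wrapped under a key derived from the password, so whoever observes the password during authentication can unwrap it without breaking any cryptography.

    # Toy password-wrapped key storage. Requires PyNaCl (pip install pynacl).
    import nacl.pwhash
    import nacl.secret
    import nacl.utils

    def wrap_key(private_key: bytes, password: bytes, salt: bytes) -> bytes:
        # The wrapping key is derived client-side from the password;
        # the server stores only the wrapped blob.
        k = nacl.pwhash.argon2id.kdf(
            nacl.secret.SecretBox.KEY_SIZE, password, salt,
            opslimit=nacl.pwhash.argon2id.OPSLIMIT_INTERACTIVE,
            memlimit=nacl.pwhash.argon2id.MEMLIMIT_INTERACTIVE,
        )
        return nacl.secret.SecretBox(k).encrypt(private_key)

    salt = nacl.utils.random(nacl.pwhash.argon2id.SALTBYTES)
    blob = wrap_key(b"\x01" * 32, b"hunter2", salt)
    # The catch: anyone who sees b"hunter2" at login can re-derive k
    # and unwrap the blob -- no encryption is "broken".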
Matrix exists and really isn't too bad to self-host if you just want a small number of people. (If you federate with other servers, then you have more things to worry about -- increased attack surface, more visibility leading to more potential attackers, and the risk of unintentionally storing illegal content (e.g. CSAM) sent by people from other servers.)
The UI of Element (the most popular Matrix client) is more or less in line with any other chat app, but I guess it depends what you mean by "on par to whatsapp". Biggest downside I've found is that you can't search your messages on the mobile clients.
It's a proprietary, closed-source application. It can do whatever it wants, and it doesn't even need to "backdoor" encryption when all it has to do is just forward everything matching some criteria to their servers (and by extension anyone they comply to). It's always one update away from dumping your entire chat history into a remote bucket, and it would still not be in contradiction with their promise of E2EE. Furthermore, it already has the functionality to send messages when reporting [0]. Facebook's Messenger also has worked that way for years. [1] There were also rumors the on-device scanning practice would be expanded to comply with surveillance proposals such as ChatControl a couple years ago. This doesn't mean it's spying on each and every message now, but it would have potential to do so and it would be feasible today more than ever before, hence the importance of software the average person can trust and isn't as easily subject to their government's tantrums about privacy.
You are also using proprietary, closed-source hardware and operating system underneath the app that can do whatever they want. This line of reasoning ultimately leads to - unless you craft every atom and every bit yourself your data isn't secure. Which may be true, but is a pointless discussion.
On Android, if you allow it to backup to your Google cloud storage, it will say the backups are encrypted. That was my experience when I set it up a few weeks ago.
Exactly who has the ability to decrypt the backup is not totally clear.
It may be a different situation for non-Android users, Android users who are not signed in with a Google account, Android users who are not using Google Play Services, etc.
You can explore your Google Cloud's Application Storage part via Rsync, AFAIK. So you can see whether your backups are encrypted or not.
I remember that you had to extract at least two keys from the Android device to be able to read the "on-device" chat storage in the days of yore, so the tech is there.
If copies of the keys are not on the Google Drive side, we can say the backups are at least "superficially" encrypted.
WhatsApp is considered insecure and banned from military use in Russia. Telegram, on the other hand, is widely used. Of course that's not definitive, just food for thought.
...that Telegram is backdoored by the Russians? The implication you're trying to make seems to be that the Russians must be choosing Telegram because it's secure, but that ignores the possibility that they're choosing Telegram because they have access to it. After all, do you think they want the possibility of their military scheming against them?
Maybe OP should clearly state their thesis rather than beating around the bush with "... just a food for thought", so we don't have to guess what he's trying to say.
I think the point was pretty clear if you are able to consider more than one point of view. People in Russia who are highly motivated, for whom it could make a life-or-death difference, and who have considerable technical skills in this area consider WhatsApp to be insecure and prohibit its use, just as their counterparts in the US consider Telegram to be insecure and prohibit its use.
Perhaps you're simply struggling with the concepts here: would it help you to understand things better to add that russia and the US ban the use of Signal by their militaries and intelligence services?
Did you detect an implication that can't be extrapolated from the text without metacontext and secondary unstated axioms, or is your mind totally blank and baffled at what these data points could indicate?
Yeah, Telegram only has opt-in 1:1 E2EE that you can't use across your devices, so either you or your buddy quickly gets tired of whipping out their phone when they're sitting at their laptop and just replies through Telegram's non-E2EE cloud chats, and that's the backdoor. The user activated it. It's "their fault".
I'm not going to promote Telegram; I just wanted to highlight that WhatsApp is not considered trustworthy by a geopolitical enemy of the US. I don't think Telegram is bad, and when your life depends on it, you can click the "Secret Chat" button; it's not a big deal.
Lots of uninformed conspiratorial comments with zero proof in here, but I'd really like WhatsApp to get their encryption audited by a reliable, independent 3rd party.
Nowadays all of the messaging pipeline on my phone is closed source and proprietary, and thus unverifiable at all.
The iPhone operating system is closed, the runtime is closed, the whatsapp client is closed, the protocol is closed… hard to believe any claim.
And i know that somebody’s gonna bring up the alleged e2e encryption… a client in control of somebody else might just leak the encryption keys from one end of the chat.
Closed systems that do not support third party clients that connect through open protocols should ALWAYS be assumed to be insecure.
>Closed systems that do not support third party clients that connect through open protocols should ALWAYS be assumed to be insecure.
So you're posting this from an open core CPU running on an open FPGA that you fabricated yourself, right? Or is this just a game of one-upmanship where people come with increasingly high standards for what counts as "secure" to signal how devoted to security they are?
it doesn't need to be open source for us to know what it's doing. its properties are well understood by the security community because it's been RE'd.
> a client in control of somebody else might just leak the encryption keys from one end of the chat.
has nothing to do with closed/open source. preventing this requires remote attestation. i don't know of any messaging app out there that really does this, closed or open source.
also, ironically remote attestation is the antithesis of open source.
Who do they expect to fall for the claims that a Facebook-owned messenger couldn't read your "encrypted" messages? It's truly funny.
Any large scale provider with headquarters in the USA will be subject to backdoors and information sharing with the government when they want to read or know what you are doing.
Me? I'd be very surprised if they can actually read encrypted messages (without pushing a malicious client update). The odds that no one at Meta would blow the whistle seem low, and a backdoor would likely be discovered by independent security researchers.
I'd be surprised as well. I know people who've worked on the WhatsApp apps specifically for years. It feels highly unlikely that they wouldn't have come across this backdoor and they wouldn't have mentioned it to me.
If there is such a back door, it hardly follows that it would be widely known within the company. From the sparse reports of Facebook/Meta being caught doing this in the past, it's done for favor trading and leverage at the highest levels.
> Any large scale provider with headquarters in the USA will be subject to backdoors and information sharing with the government when they want to read or know what you are doing.
Nation state governments do have the ability to coerce companies within their territory by default.
If you think this feature is unique to the USA, you are buying too much into a separate narrative. All countries can and will use the force of law to control companies within their borders when they see fit. The USA actually has more freedom and protections in this area than many countries, even though it’s far from perfect.
> This type of generalized defeatism does more harm than not.
Pointing out the realities of the world and how governments work isn’t defeatism.
Believing that the USA is uniquely bad and closing your eyes to how other countries work is more harmful than helpful.
No, assuming that anything besides what you can verify yourself is compromised isn't "defeatism", although I'd agree that it's overkill in many cases.
But for the data you want to absolutely keep secret? It's probably the only way to guarantee someone else somewhere cannot see it: default to assuming that if it's remote, someone will eventually be able to access it. If not today, it'll be stored and decrypted later.
This is correct. Yes, every government has the ability to use violence and coercion, but that takes coordination, among other things. There are still places, and areas within those places, where enforcement, and keeping it secret, is all but impossible.
I have reached the point where I think even the chat control discussion might be a distraction, because essentially they can already get anything. Yeah, the government needs to fill in a form to request it, but that's mostly automated, I believe.
>I have reached the point that I think even the chat control discussion might be a distraction because essentially they can already get anything.
Then why are politicians wasting time and attracting ire attempting to push it through? Same goes for the UK demanding backdoors. If they already have it, why start a big public fight over it?
> Any large scale provider with headquarters in the USA will be subject to backdoors and information sharing with the government when they want to read or know what you are doing.
That's just wrong. Signal, for example, is headquartered in the US and does not even have this capability (beyond metadata).
I do not believe them either. The swift start of the investigation by U.S. authorities only suggests there was no obstacle to opening one, not that nothing could be found. By “could not,” I mean it is not currently possible to confirm, not that there is necessarily nothing there.
Personally, I would never trust anyone big enough that it (in this case Meta) needs and wants to be deeply entangled in politics.
I always assumed Meta has a backdoor that at least allows them to compromise key individuals if the men in black ask, but a law firm representing NSO courageously defending the people? Come the fuck on.
> Our colleagues’ defence of NSO on appeal has nothing to do with the facts disclosed to us and which form the basis of the lawsuit we brought for worldwide WhatsApp users.
These representations are legally binding. If Meta were intentionally lying about them, it would invite billions of dollars of liability. They use similar terminology to Signal and the best private VPN companies: we can't read and don't retain message content, so law enforcement can't ask for it. They do keep some "meta" information and will provide it with a valid subpoena.
The latter link even clarifies Meta's interpretation of their responsibilities under "National Security Letters", which the US Government has tried to use to circumvent 4th amendment protections in the past:
> We interpret the national security letter provision as applied to WhatsApp to require the production of only two categories of information: name and length of service.
I guess we'll see if this lawsuit goes anywhere or discovery reveals anything surprising.
For context, the U.S. is also currently investigating whether Donald Trump actually won the 2020 presidential election (he didn’t), whether aspirin causes autism (it doesn’t), and whether transgenic research is woke (it’s not).
“The U.S. investigates” unfortunately does not mean as much as it used to. That said, I would rest easy in the knowledge that someone deep in the NSA already knows with absolute certainty whether the WhatsApp client app is doing anything weird. But they’re not likely to talk to a reporter or plaintiffs lawyer.
If your personal threat model at this point is not literally:
“everything I ever do can be used against me in court”
…then you are not up-to-date with the latest state of society
Privacy is most relevant when you are in a position where that information is the difference between life and death.
The average person going through their average day breaks dozens of laws because the world is a Kafkaesque surveillance capitalist society.
The amount of information that exists about the average consumer is so unbelievably vast that any litigator could make an argument against nearly any human on the planet that they are in violation of something, if there is enough pressure.
If you think you're safe in this society because you "don't do anything wrong", then you're compromised and don't even realize it.
No end-to-end encryption by default. WhatsApp has it.
No end-to-end encryption for groups. WhatsApp has it.
No end-to-end encryption on desktop. WhatsApp has it.
No break-in key recovery. WhatsApp has it.
Inferring Telegram's security from the public statements of *checks notes* a former KGB officer and FSB director -- agencies that wrote the majority of the literature on maskirovka -- isn't exactly reliable, wouldn't you agree?
Telegram has private chats. I don't pay attention to his words, indeed. Well before the Ukrainian war, Russia ran a massive campaign to block Telegram and failed on a technical level. That has never happened with WhatsApp.
The reality that most encryption enthusiasts need to accept is that true E2EE where keys don’t leave on-device HSMs leads to terrible UX — your messages are bound to individual devices. You’re forced to do local backups. If you lose your phone, your important messages are gone. Lay users don’t like this and don’t want this, generally.
Everything regarding encrypted messaging is downstream of the reality that it’s better for UX for the app developer to own the keys. Once developers have the keys, they’re going to be compelled by governments to provide them when warrants are issued. Force and violence, not mathematical proofs, are the ultimate authority.
It’s fun to get into the “conspiratorial” discussions, like where the P-256 curve constants came from or whether the HSMs have backdoors. Ultimately, none of that stuff matters. Users don’t want their messages to go poof when their phone breaks, and governments will compel you to change whatever bulletproof architecture you have to better serve their warrants.
I mean, at the very least, if their clients can read it, then they can read it through their clients, right? And if their clients can read it, it'll be because of some private key stored on the client device that they must be able to access, so they could always get that. And this is just assuming they've been transparent about how it's built; they could just have backdoors on their end.
The PIN is used when you're too lazy to set an alphanumeric PIN or to offload the backup to Apple/Google. Now sure, this is most people, but such are the foibles of E2EE -- getting E2EE "right" (e.g. supporting account recovery) requires people to memorize a complex password.
The PIN interface is backed by an HSM. The HSM performs the rate limiting, so they'd need a backdoored HSM.
That added some context I didn't have yet, thanks. I'm still not seeing how Meta, if it were a bad actor, wouldn't be able to brute-force the PIN of a particular user. If this were a black-box user terminal, sure, but Meta owns the whole stack here, so it seems plausible that you could inject yourself somewhere.
If you choose an alphanumeric PIN, they can't brute-force it because of the sheer entropy (and because the key is derived from the PIN itself).
However, most users can't be bothered to choose such a PIN. In that case they pick a 4- or 6-digit one.
To mitigate the risk of brute force, PIN attempts are rate limited by an HSM. The HSM, if it works correctly, deletes the encryption key after too many failed attempts.
Now sure, Meta could insert itself between the client and HSM and MITM to extract the PIN.
But this isn't a Meta specific gap, it's the problem with any E2EE system that doesn't require users to memorize a master password.
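To make the scheme concrete, a toy model of HSM-gated PIN recovery (illustrative only; names and limits are assumptions, not Meta's real implementation). The security comes from the attempt counter, not the PIN's entropy:

    # Toy HSM: wipes the backup key after too many wrong PIN guesses.
    import hashlib
    import hmac

    class ToyHSM:
        MAX_ATTEMPTS = 10

        def __init__(self, pin: str, backup_key: bytes):
            self._salt = b"per-record-salt"   # real HSMs use random salts
            self._pin_tag = self._tag(pin)
            self._backup_key = backup_key
            self._attempts = 0

        def _tag(self, pin: str) -> bytes:
            # A 6-digit PIN has ~20 bits of entropy; without the attempt
            # limit below, brute force would be trivial.
            return hashlib.scrypt(pin.encode(), salt=self._salt, n=2**14, r=8, p=1)

        def recover(self, guess: str) -> bytes:
            if self._backup_key is None:
                raise RuntimeError("key destroyed after too many attempts")
            self._attempts += 1
            if hmac.compare_digest(self._tag(guess), self._pin_tag):
                self._attempts = 0
                return self._backup_key
            if self._attempts >= self.MAX_ATTEMPTS:
                self._backup_key = None   # irreversible wipe
            raise ValueError("wrong PIN")

    hsm = ToyHSM("483921", backup_key=b"\x02" * 32)
    assert hsm.recover("483921") == b"\x02" * 32

The MITM point above is exactly the gap this doesn't cover: if the operator can intercept the PIN between client and HSM, the counter never trips.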
I helped design E2EE systems for a big tech company and the unsatisfying answer is that there is no such thing as "user friendly" E2EE. The company can always modify the client, or insert themselves in the key discovery process, etc. There are solutions to this (decentralized app stores and open source protocols, public key servers) but none usable by the average person.
That might be a different PIN? Messenger requires a PIN to access encrypted chats.
Every time you sign in to the web interface or re-sign into the app, you enter it. I don't remember an option for an alphanumeric PIN or for offloading it to a third party.
It set off the flamewar detector, which is the usual reason that happens.
We'll either turn off that software penalty or merge the thread into a submission of the original Bloomberg source - these things take a bit of time to sort through!
It does have an amplifying effect when issues like this happen: users who don't check in time won't see this thread due to the volume of other threads.
What even are these low effort, uninformed conspiratorial comments saturating the comment section?
Sure, Meta can obviously read encrypted messages in certain scenarios:
- you report a chat (you're just uploading the plaintext)
- you turn on their AI bot (inference runs on their GPUs)
Otherwise they cannot read anything. The app uses the same encryption protocol as Signal and it's been extensively reverse engineered. Hell, they worked with Moxie's team to get this done (https://signal.org/blog/whatsapp-complete/).
The burden of proof is on anyone that claims Meta bypassing encryption is "obviously the case."
I am really tired of HN devolving into angry uninformed hot takes and quips.
Zuck didn't buy it in good faith. It wasn't "we'll grow you big by using our resources but be absolutely faithful to the privacy terms you dictate". Evidence: Brian Acton very publicly telling people that they (Zuck, possibly Sandberg) reneged.
Zuck thinks we're "dumb fucks". That's his internet legacy: copying products, buying them up, wiping out competition.
Point taken, but I feel like going into details at this stage is redundant. There have been probably hundreds of discussions on this site regarding this topic. Books have been written about Facebook's and Zuckerberg's absent moral compass. To wit, from three days ago:
"While Zuckerberg reportedly wanted to prevent "explicit" conversations with younger teens, a February 2024 meeting summary shows he believed Meta should be "less restrictive than proposed" and wanted to "allow adults to engage in racier conversation on topics like sex." He also rejected parental controls that would have let families disable the AI feature entirely.
Nick Clegg, Meta's former head of global policy, questioned the approach in internal emails, asking if the company really wanted these products "known for" sexual interactions with teens, warning of "inevitable societal backlash."
Let's assume you're right that going into details is redundant. In that case, however, so is the generic putdown.
Put differently: even if you don't owe megacorps that don't follow basic human decency better, you owe this community better if you're participating in it.
Anyone blindly believing every random allegation is also a fool, especially when the app in question has been thoroughly reverse engineered and you can freely check for yourself that it's using the same protocol as Signal for encryption.
Allegations against a company who circumvented Android's security to track users?
I don't have any proof that Meta stores WhatsApp messages, but I feel it in my bones that they at the very least tried to do so. And if that ever comes to light, precisely nobody will be surprised.
>And if ever that comes to light, precisely nobody will be surprised.
The amount of ambient cynicism on the internet basically makes this a meaningless statement. You could plausibly make the same claim for tons of other conspiracy theories, eg. JFK was assassinated by the CIA/FBI, Bush did 9/11, covid was intentionally engineered by the chinese/US government, etc.
Meta storing WhatsApp messages requires one mental leap: they found a secret way to hoover up WhatsApp messages. Everything else is their MO, including breaking systems to their advantage and having absolutely no scruples.
On the other hand, Occam's razor can barely keep up with the mental gymnastics required to paint Bush (or even Cheney) as the mastermind behind 9/11.
That raises the question of why not just use Signal and avoid a company whose founder thinks we're all "dumbfucks" and has a long history of scandals and privacy violations?
The evidence is pretty clear that Facebook wants to do everything they legally can to track and monitor people, and they're perfectly okay crossing the line and going to court to find the boundaries.
Using a company like that for encrypted messaging seems like an unnecessary risk. Maybe they're not decrypting it, but they're undoubtedly tracking everything else about the conversation because that's what they do.
They got caught torrenting unbelievable amounts of content, an act that, committed even just a few times, can get my home Internet shut down with no recourse (and that's the best outcome). Literally nothing happened. Combine the fact that nothing legally significant ever happens to them with Zuckerberg's colossal ego and complete lack of ethical foundation, and you have quite the recipe.
And I’m not even getting into the obvious negative social/political repercussions that have come directly from Facebook and their total lack of accountability/care. They make the world worse. Aside from the inconvenience for hobbyist communities and other groups, all of which should leave Facebook anyway, we would lose nothing of value if Facebook was shut down today. The world would get slightly better.
Having to obscure your act doesn't change the fact that there are rules for me and not for them. We know for a fact they did it. Did their ISP threaten them? Did they get their internet service shut off?
Edit: they already won their first case in June against authors. I am very curious to see how that lawsuit goes. Obviously we don’t know the results yet but I would be incredibly surprised to see them lose and/or have to “undo” the training. That’s a difficult world to imagine, especially under the current US admin. Smart money is the damage is done and they’ll find some new way to be awful or otherwise break rules we can’t.
>If you have to do a thing that obscures your act it doesn’t change the fact that there are rules for me and not them. We know for a fact they did it. Did their ISP threaten them? Did they get their internet service shut off?
Is there any indication they didn't use a VPN? If they did use a VPN, how is it "there are rules for me and not them", given that anyone can also use VPN to pirate with impunity?
I don’t understand what you’re doing here. They were caught. We know they did it. There are several articles about it. They have been sued over it because it happened. I don’t think anything will come of it, but they clearly did it. It is public knowledge.
You claim there's some two tier legal system for Meta compared to the average torrenter, by virtue of their internet not getting cut off. However that's a false premise. You only get your internet cut off if you fail to use a VPN. If Meta used a VPN (like most pirates do), then their internet staying up isn't evidence of any special treatment.
If there's some two-tier treatment of Meta, it's that they're being sued more aggressively than the average pirate. If you pirated Stranger Things, you could blab all you want about your copyright infringement escapades and likely never face any legal consequences. OTOH, once word got out that Meta was torrenting books, every copyright lawyer out there went after them.
The true wealthy live by an entirely different set of rules than the rest of us, especially when they are willing to prostrate themselves to the US President.
This has always been true to some degree, but it is both more true than ever (there used to be some limits based on accepted decorum) and they just don't even try to hide it anymore.
I think the not hiding it part is what’s starting to stick in my craw. We all knew it was happening on some level, but we felt that there were at least some boundaries somewhere out there even if they were further than ours. Now it just feels like the federal government basically doesn’t exist and companies can do whatever they want to us.
if anybody believes that Facebook would allow people to send a totally encrypted message to somebody, they're out of their mind. they're pretty much in bed with law enforcement at this point. I mean, I don't know how many people have been killed in Saudi Arabia this year for writing Facebook messages to each other that were against what the government wanted, but it's probably a large number.
This reads like another low effort conspiratorial comment.
WhatsApp has been reverse engineered extensively, they worked with Moxie's team to implement the same protocol as Signal, and you can freely inspect the client binaries yourself!
If you're confident this is the case, you should provide a comment with actual technical substance backing your claims.
This should surprise nobody. Do you really think that the intelligence agencies of the US etc would allow mainstream E2E encryption? Please stop being so naive
That's the thing: it does not, and it has been known that it does not do this. The keys are stored on the server and the server sends them to your device on login. They do have some kind of machine ID encoded in them, but that is just for show.
Full version here: https://eprint.iacr.org/2025/794.pdf