In mid-June, as Israel and Iran traded missiles and drones during the so-called 12-day war, Iran’s state television called on citizens to delete WhatsApp from their smartphones. Officials claimed the messaging app was leaking user data to Israel and posing a threat to national security.
The warning followed an Israeli takedown of Iran’s senior military command in a barrage of strikes so precise that it raised immediate questions about Tehran’s vulnerability to espionage.
Whether WhatsApp was directly involved remains unclear. But the episode put a glaring spotlight on how much consumers can trust messaging platforms to keep their data private.
And more pressingly, how much access do domestic and foreign state actors really have to the digital conversations people assume are encrypted and secure?
Messaging applications like WhatsApp, Signal, Telegram, and Apple’s iMessage have become ubiquitous for private communications.
In recent years, these services have widely adopted end-to-end encryption, meaning that only the communicating users can decrypt the messages; not even the service providers can read them.
This widespread encryption presents a significant challenge for national security and intelligence agencies, which historically relied on intercepting communications.
However, even with end-to-end encryption shielding message content, national security agencies have developed an array of technical strategies to monitor suspects on messaging platforms.
These range from capturing unencrypted data where available and collecting metadata to exploiting software vulnerabilities and co-opting the service providers themselves through legal or covert means.
Company cooperation with governments varies and is often opaque, which raises critical questions about trust, transparency, and systemic vulnerabilities.
While some messaging apps promote strong end-to-end encryption, trust in their security should be measured.
Intelligence agencies, both domestic and foreign, have a long record of accessing data once thought secure.
The technological edge these agencies maintain, combined with significant information asymmetry, means the public is often unaware of what is even possible.
In this context, absolute confidence in digital privacy is, at best, misplaced.
Metadata collection
Metadata is often the first resource intelligence agencies exploit. Even with encrypted messaging, metadata remains exposed.
It includes details such as who communicated, when, for how long, and the size of messages. This seemingly peripheral information can be highly valuable for surveillance, enabling patterns, relationships, and behaviours to be inferred without accessing the actual message content.
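The kind of inference described here can be illustrated with a toy sketch. Everything below is synthetic and invented for illustration; real agencies work with call-detail records at vastly larger scale, but the principle of deriving a contact graph and an activity profile from bare metadata is the same.

```python
from collections import Counter
from datetime import datetime

# Synthetic call records: (caller, callee, ISO timestamp, duration in seconds).
# All names and values are illustrative, not real data.
records = [
    ("alice", "bob",   "2024-03-01T22:15:00", 340),
    ("alice", "bob",   "2024-03-02T22:40:00", 610),
    ("alice", "carol", "2024-03-02T09:05:00",  45),
    ("alice", "bob",   "2024-03-03T23:02:00", 420),
]

# Who talks to whom, and how often: the core of a contact graph.
pair_counts = Counter((src, dst) for src, dst, _, _ in records)

# When a target is active: an hour-of-day profile reveals routines.
hours = Counter(datetime.fromisoformat(ts).hour for _, _, ts, _ in records)

print(pair_counts.most_common(1))  # the strongest tie in the graph
print(sorted(hours))               # late-evening activity stands out
```

Note that nothing in this sketch touches message content: frequency, direction, and timing alone are enough to surface a target’s closest contact and daily rhythm.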
Metadata alone can be highly revelatory.
Former NSA and CIA chief Michael Hayden once said, “We kill people based on metadata”, highlighting how communication patterns alone, without the actual message texts, can be used to locate and target individuals.
In the context of messaging apps, metadata includes phone number, device information, when the user signed up, logs of whom a user contacts and when, IP addresses and location stamps.
WhatsApp, for instance, can provide basic subscriber information and usage logs under subpoena, and with a court order can even reveal a target’s WhatsApp contacts and which users have the target in their contacts.
Moreover, the company can be compelled to deliver near-real-time metadata and report who a user is messaging and when, updated every 15 minutes.
Some other companies – such as Signal – claim they keep the metadata they collect to a minimum for this very reason.
In the first subpoena Signal received, it could only supply the date and time a user registered and the last time they used the service. Signal says it does not store contact lists, message timestamps, or any identifiers beyond a phone number, making metadata collection very limited by design.
On a broader scale, intelligence agencies may also perform network-level traffic analysis.
For example, the NSA’s SIGINT units tap into Internet backbones and record bulk traffic. Even though the WhatsApp messages they collect remain encrypted, they can still see the encrypted packets travelling from one IP address to another.
Over time, correlating these packets’ size, timing, and frequency along with known user IP addresses or identities can yield a metadata picture of who is talking to whom.
Techniques like timing analysis can sometimes identify communication pairs. Snowden’s leaks revealed programmes like MYSTIC, which recorded the metadata, and in some countries even the content, of phone calls for analysis.
Thus, even without breaking encryption, agencies mine the signals around the encrypted content.
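As a rough illustration of the timing analysis mentioned above, the sketch below correlates packet timestamps observed at different IP addresses. The addresses and timings are invented for this example, and real traffic analysis must contend with noise, padding, and enormous volumes, but the underlying idea is simply that conversations leave matching rhythms on both sides.

```python
# Toy timing correlation on encrypted traffic.
# Timestamps (in seconds) at which packets were seen leaving each IP;
# the addresses and times are invented for this sketch.
observed = {
    "10.0.0.5": [1.00, 3.10, 7.25, 9.40],
    "10.0.0.9": [2.00, 5.50, 8.80],
    "10.0.0.7": [1.05, 3.18, 7.30, 9.46],  # closely shadows 10.0.0.5
}

def correlation(a, b, window=0.2):
    """Fraction of a's packets matched by a packet from b within `window` seconds."""
    hits = sum(any(abs(ta - tb) <= window for tb in b) for ta in a)
    return hits / len(a)

# Score every pair of observed hosts.
pairs = {}
ips = sorted(observed)
for i, x in enumerate(ips):
    for y in ips[i + 1:]:
        pairs[(x, y)] = correlation(observed[x], observed[y])

best = max(pairs, key=pairs.get)
print(best)  # the pair whose timing lines up: likely in conversation
```

The content of the packets never matters; only their sizes and timings do, which is why end-to-end encryption by itself does not defeat this class of analysis.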
Hacking the device
When communications are encrypted in transit and on providers’ servers, the easiest interception point is often the sender’s or receiver’s device before encryption or after decryption.
Intelligence agencies have invested heavily in cyber capabilities to compromise smartphones and computers, allowing them to read messages directly from a device screen or memory.
This tactic was explicitly acknowledged in the CIA’s leaked Vault 7 files, which detailed a range of malware and exploits for iOS and Android devices.
Those documents confirmed that intelligence agencies can take almost complete remote control of a user’s phone and turn it into a listening device.
Once an agency achieves root or admin access on a phone, it can bypass or directly capture any ‘secure’ app’s content, since the app must decrypt messages for the user.
Notably, the Vault 7 leaks contained no indication of cryptographic defeats against Signal or WhatsApp protocols themselves, which Open Whisper Systems (Signal’s developer) cited as validation that end-to-end encryption was forcing agencies to resort to labour-intensive, targeted hacking.
A notable instance of this tactic was the Pegasus spyware developed by the NSO Group, an Israeli cyber intelligence firm.
Pegasus became infamous for its ability to remotely and covertly infect phones.
In 2019, Pegasus was found to have exploited a zero-day vulnerability in WhatsApp’s voice-call feature: simply calling a target via WhatsApp was enough to inject the spyware, even if the call was never answered.
Once Pegasus took hold of the device, it could copy messages, record calls, activate the microphone and camera, and track location, which completely negates the protections of end-to-end encryption by stealing the data at the source.
Investigations by Citizen Lab and Amnesty International have traced Pegasus usage to dozens of countries’ intelligence or law enforcement agencies, who allegedly deployed it against journalists, activists, and political figures.
In 2021, researchers uncovered Forcedentry, a zero-click exploit developed by the NSO Group to silently install Pegasus spyware on Apple devices.
Disguised as image files sent via iMessage, the attack required no user interaction and leveraged a flaw in Apple’s CoreGraphics system to bypass built-in protections.
What made it especially alarming was its sophistication; experts at Google described it as one of the most technically advanced exploits ever seen.
Outside of NSO’s products, major intelligence agencies develop their own malware. These operations are often highly classified, and we know of them only through rare leaks and incidents.
Unlike mass surveillance, device hacking is targeted: these tools are deployed against specific persons of interest. The NSA’s Tailored Access Operations unit, for instance, reportedly has custom implants for Android and Windows phones, and the CIA’s Vault 7 files catalogued a range of similar tools for mobile devices.
The UK’s GCHQ similarly engages in “equipment interference”, which is explicitly legalised under the UK Investigatory Powers Act 2016 for national security cases.
It is clear that no matter how strong the encryption, if an intelligence service can implant spyware on a phone or computer, it effectively bypasses the encryption.
This reality has shifted the intelligence playbook, and agencies now pour resources into zero-day exploits, malware, and even supply chain compromises in order to reach encrypted data.
Server side
Another technical avenue is attacking or leveraging the servers and infrastructure of the messaging services.
While end-to-end encryption is meant to ensure servers cannot decrypt user messages, agencies have found ways to exploit server-side systems for information.
In some cases, agencies have performed bulk data theft from tech companies. The Snowden documents disclosed Muscular, an NSA/GCHQ operation that tapped into the private data links between Google’s data centres, extracting unencrypted data in transit.
Google and other firms encrypted those links after 2013, but the incident shows the value placed on infiltrating backbone connections.
For messaging apps, the equivalent would be targeting the communication between app servers or between servers and non-encrypted endpoints like Telegram’s cloud chats.
Perhaps the most direct method is compelling the provider to assist.
It was revealed in 2013 that the NSA’s PRISM programme allowed it to collect data from company servers for foreign intelligence targets, under US legal authority.
Companies like Microsoft, Google, Facebook, and Apple were listed as participants in PRISM, supposedly providing the NSA with a direct pipeline for FISA-approved data requests.
In practice, for apps like WhatsApp or iMessage, PRISM could be used to request stored data such as account info, contacts, and stored communications that were not end-to-end encrypted.
Before 2016, for example, WhatsApp did not have end-to-end encryption for all messages, so the content on their servers could have been turned over.
After end-to-end encryption was introduced, content would no longer be available, as Facebook cannot decrypt it, but other data such as cloud backups or metadata might still be obtainable through such channels.
It is also conceivable that, rather than asking the company, an agency might hack the servers. If an intelligence service breaches a messaging service’s internal network through cyber means or via an insider, it could potentially manipulate the service, for instance by tampering with how encryption keys are distributed to users.
Post-Snowden, companies have hardened their systems and publicly championed user privacy, pointing to measures such as WhatsApp’s encryption and Apple’s refusal to unlock iPhones. Yet where data is accessible, providers often do comply with lawful orders. And where providers resist, agencies might find a way in through hacking.
Illusion of privacy
Privacy, in many cases, is an illusion. Intelligence organisations have repeatedly demonstrated their ability to bypass safeguards and access information presumed secure.
Their success lies not only in advanced technology and legal reach, but also in operating far beyond the public’s awareness. The gap between what is known and what is actually possible is wide and often invisible.
Hence, digital communication, no matter how secure it seems, remains vulnerable. End-to-end encryption offers meaningful protections, but it is not a silver bullet.
Intelligence agencies around the world have adapted their tactics, focusing on metadata analysis, device-level surveillance, and infrastructure exploitation to circumvent encryption barriers.
Messaging platforms differ in their design philosophies, policies, and transparency, but all operate within a complex environment shaped by technical limitations, legal pressures, and state interests.
Users may assume their messages are private, but privacy in the digital realm depends not just on encryption protocols, but on the integrity of devices, networks, and the companies running the services.