An Israeli cybersecurity firm has revealed a critical security flaw in OpenAI's ChatGPT that could let hackers hijack user accounts without any clicks or user action, The Jerusalem Post and Ynetnews reported.
Speaking at the Black Hat 2025 conference this week in the US, Zenity co-founder and CTO Mikhail Bargury demonstrated what he called the first-ever "zero-click" exploit against the world's most widely used AI chatbot.
The attack requires only the victim's email address, which is often easy to obtain, to grant full access to past and future chats, linked services like Google Drive and even allow the AI to operate on the hacker's behalf.
In a live demo, Zenity demonstrated how a compromised ChatGPT could secretly suggest malware downloads, provide false business advice, or extract private files from connected accounts.
Similar vulnerabilities were also found in Microsoft's Copilot Studio, Salesforce Einstein, Google Gemini and other AI agent tools, enabling everything from CRM database leaks to credential theft.
Zenity reported that OpenAI and Microsoft issued swift patches, but some providers dismissed the findings as "intended behaviour".
Bargury warned that modern AI agents now open folders, send files and access emails for users, creating what he described as "a paradise for attackers with endless entry points".
