New ChatGPT Atlas browser exploit allows hidden, persistent commands

10/28/2025

New exploit in ChatGPT Atlas! A CSRF vulnerability lets attackers inject malicious instructions into the AI’s memory and execute code without the user’s knowledge.

Cybersecurity researchers have disclosed a serious vulnerability in the ChatGPT Atlas browser, developed by OpenAI. The exploit allows attackers to inject malicious instructions into the AI's memory and execute code on the victim's behalf.

What exactly is happening?

In short:

  • The "memory" feature in ChatGPT – which lets the chatbot retain information about the user between sessions – is the target of the attack.

  • The attack relies on CSRF (Cross-Site Request Forgery): a user who is logged into ChatGPT is tricked (e.g., via a link in an email) into visiting a malicious site, which silently sends a request that writes attacker-controlled instructions into ChatGPT's memory.

  • Once the memory has been "contaminated" in this way, every subsequent interaction with the bot (including in the Atlas browser) can trigger these hidden instructions, potentially leading to privilege escalation, data theft, or code execution.

  • Worse still: because the memory is tied to the account, not just to the browser or device, the planted instructions follow the user across devices and sessions.

The user is logged into ChatGPT (a valid session token sits in the browser) and lands on a malicious site that exploits the active session via CSRF. The site injects hidden instructions into ChatGPT's memory, where they remain linked to the account. On the next query, the bot consults this "contaminated" memory and can perform malicious actions without the user's knowledge.
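The flaw described above is the classic CSRF pattern: the server trusts the session cookie alone, and the browser attaches that cookie to cross-site requests automatically. The toy model below illustrates why the server cannot tell the forged write apart from a legitimate one; every name in it is invented for illustration and has nothing to do with OpenAI's actual API.

```python
# Toy model of the CSRF pattern described above. All names are
# hypothetical; this is not OpenAI's real endpoint or data model.

MEMORY = []  # stands in for the account-linked "memory" store

def handle_memory_write(request):
    # The vulnerable pattern: the server trusts the session cookie alone,
    # with no CSRF token and no Origin/Referer check.
    if request["cookies"].get("session") == "valid-session":
        MEMORY.append(request["body"]["instruction"])
        return 200
    return 401

# Legitimate write from the real site:
handle_memory_write({"cookies": {"session": "valid-session"},
                     "body": {"instruction": "remember my timezone"}})

# Forged write: a malicious page makes the victim's browser send the
# request, and the browser attaches the same session cookie
# automatically, so both requests look identical to the server.
handle_memory_write({"cookies": {"session": "valid-session"},
                     "body": {"instruction": "hidden attacker instruction"}})
```

Both writes land in the memory store, which is the whole problem: once the planted instruction is there, later queries consult it like any other remembered fact.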

Why is the Atlas browser particularly vulnerable in this case?

Atlas is particularly susceptible because it often operates with a ChatGPT account in the background — the user is logged in by default, so it only takes a click on a malicious link for the page to perform an action on their behalf without additional authorisation. Research by LayerX indicates that Atlas's anti-phishing mechanisms perform significantly worse than those in Chrome or Edge, allowing many malicious sites to go unnoticed.

Additionally, features such as "agent mode" and saved browser memories make everyday use easier, but they also enlarge the attack surface – instead of a one-off action, an attacker can plant a hidden instruction in the account's memory that stays active during later, normal-looking queries, leading to lasting contamination of the user's environment.

What does this mean for users and companies?

For users and companies, this means that it only takes one click on a malicious link for an attacker to permanently "inject" malicious data into the memory of a ChatGPT account. From that moment on, even seemingly ordinary queries can trigger unwanted actions, such as data theft or content modification.

Therefore, ChatGPT Atlas should be treated as a piece of critical infrastructure – it connects applications, identity, and intelligence in one place. Even for those who do not use Atlas, the incident shows that memory features in AI systems open a new attack vector. Until the vulnerability is fully patched, it is advisable to limit memory saving, check links carefully, and keep personal and business accounts separate.
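User precautions aside, the root cause is a well-understood web flaw, and the standard server-side defence is a per-session CSRF token: a secret embedded in the real page that a cross-site attacker page cannot read. A minimal sketch of that check, with all names hypothetical rather than taken from any real implementation:

```python
# Minimal sketch of a standard CSRF mitigation (assumed, not OpenAI's
# actual implementation): each session gets a token that only the
# legitimate page can embed in its requests.
import secrets

SESSIONS = {}  # session id -> CSRF token issued with the real page

def issue_session():
    sid, token = secrets.token_hex(16), secrets.token_hex(16)
    SESSIONS[sid] = token
    return sid, token

def handle_memory_write(cookies, body):
    expected = SESSIONS.get(cookies.get("session"))
    # Reject any request that does not carry the token from the real
    # page; a forged cross-site request cannot supply it.
    if expected is None or body.get("csrf_token") != expected:
        return 403
    return 200

sid, token = issue_session()
ok = handle_memory_write({"session": sid},
                         {"csrf_token": token, "instruction": "legit"})
forged = handle_memory_write({"session": sid},
                             {"instruction": "attacker instruction"})
```

The legitimate request passes (200) while the forged one, carrying the cookie but not the token, is rejected (403). Setting session cookies with the `SameSite` attribute is a complementary browser-level defence against the same class of attack.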

Although the exploit presented by LayerX appears realistic and dangerous, it is worth noting that full technical details have not been released, precisely to avoid making the attack easy to reproduce.

At the same time, the fact that the attack affects not only the browser itself but also the AI memory tied to the user's account means that the security model for AI browsers requires a new approach.

Whether you are an individual user or part of an organisation, treat Atlas and similar browsers as "early access" software with elevated risk, and take precautions until the market and vendors catch up.

Katarzyna Petru

Journalist, reviewer, and columnist for the "ChooseTV" portal