New ChatGPT Atlas browser exploit allows for hidden, persistent commands!

10/28/2025

New exploit in ChatGPT Atlas! A CSRF vulnerability lets attackers inject malicious instructions into the AI’s memory and execute code without the user’s knowledge.

Cybersecurity researchers have uncovered a serious vulnerability in OpenAI's ChatGPT Atlas browser: the exploit allows attackers to inject malicious instructions into the AI's memory and execute arbitrary code.

What’s actually happening?

In short:

  • The “memory” feature in ChatGPT – which lets the chatbot retain information about the user between sessions – has become a target for attacks.

  • The attack is based on CSRF (Cross-Site Request Forgery): a user who is logged into ChatGPT is tricked (e.g., via a link in an email) into visiting a malicious site, which silently sends a request that writes attacker-controlled instructions into ChatGPT's memory.

  • Once the memory is “infected”, every subsequent interaction with the bot (including in the Atlas browser) may act on these hidden instructions, leading to privilege escalation, data theft, or code execution.

  • Furthermore, because memory is linked to the account rather than to a single browser or device, the injected instructions follow the user across devices and sessions.

The user is logged into ChatGPT (holds a valid session token in the browser) and lands on a malicious site that exploits the active session via CSRF. The site injects hidden instructions into ChatGPT's memory, where they remain tied to the account. On the next query, the bot consults this "infected" memory and can perform malicious actions without the user's knowledge.

Why is the Atlas browser particularly vulnerable in this case?

Atlas is particularly susceptible because it typically runs with a ChatGPT account in the background: the user is logged in by default, so a single click on a malicious link can let a site act on their behalf without additional authorisation. LayerX's research indicates that Atlas's anti-phishing mechanisms perform significantly worse than those in Chrome or Edge, allowing many malicious sites to slip through unnoticed.

Furthermore, features such as "agent mode" and browser memories simplify tasks but also enlarge the attack surface: instead of a one-off action, an attacker can plant a hidden instruction in the account's memory that activates during later, seemingly normal queries, persistently contaminating the user's environment.

What does this mean for users and businesses?

For users and businesses, it means that just one click on a malicious link is enough for an attacker to permanently "inject" malicious data into the memory of a ChatGPT account. From that point on, even seemingly ordinary queries can trigger unwanted actions, such as data theft or content modifications.

Therefore, ChatGPT Atlas should be treated as a piece of critical infrastructure – as it connects applications, identity, and intelligence in one space. Even if someone is not using Atlas, the problem shows that memory functions in AI systems open a new attack vector. It is worth limiting memory storage, carefully checking links, and separating private accounts from business ones until the vulnerability is fully patched.

Although the exploit presented by LayerX looks realistic and threatening, it should be noted that full technical details have not been disclosed, to prevent easy reproduction of the attack.

On the other hand, the fact that the attack affects not only the browser itself but also the AI memory tied to the user's account means that the security model for AI browsers requires a new approach.

If you are a security-aware user or work in an organisation, treat Atlas and similar browsers as 'early access' software carrying higher risk, and take precautions until vendors and the market catch up.

Katarzyna Petru

Journalist, reviewer, and columnist for the "ChooseTV" portal