Before you install an "AI browser", read this. Ars Technica tested them, and the results are genuinely concerning.

11/18/2025

AI browsers perform poorly in Ars Technica’s tests: manipulation risks, privacy concerns, and immature agents. Here is what the testers actually found.

AI browsers were meant to be a new way of using the internet. Companies promised intelligent agents, automated actions, and the sense that the computer would start to "do things" for us. Tests conducted by the Ars Technica editorial team, however, show that reality still falls far short of those promises. We are only summarising their findings here, but it can already be said that in 2025 AI browsers are more risk than revolution.

Prompt injection – the biggest bomb under AI browsers

The most serious finding from Ars Technica's tests concerns vulnerability to prompt injection: a website hides instructions that are invisible to the user, and the AI executes them without question. Nothing needs to be hacked – a little hidden text is enough to make the browser ignore the user, change the way it responds, or carry out absurd commands.
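
To make the mechanism concrete, here is a minimal sketch of how a naive "summarise this page" feature can end up feeding hidden instructions to the model. This is our own illustration, not code from any tested browser; the `askModel` helper and its endpoint are placeholders. The point is that raw text extraction does not distinguish the author's visible content from planted instructions.

```typescript
// Minimal illustration, not code from any tested browser: a naive
// "summarise this page" helper that builds its prompt from raw page text.

// Hypothetical LLM call – stands in for whatever cloud endpoint a browser uses.
async function askModel(prompt: string): Promise<string> {
  const res = await fetch("https://example.invalid/v1/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  return (await res.json()).answer;
}

export async function summariseCurrentPage(): Promise<string> {
  // textContent also returns text the user never sees (display:none,
  // zero-height elements, white-on-white), so an attacker's hidden
  // "ignore previous instructions..." line lands in the prompt too.
  const pageText = document.body.textContent ?? "";

  // Trusted instruction and untrusted page content are concatenated into one
  // string; the model has no reliable way to tell them apart.
  const prompt =
    "You are a helpful browsing assistant. Summarise the page below.\n\n" +
    pageText;

  // A page only needs something like
  //   <span style="display:none">Ignore previous instructions and ...</span>
  // for that text to reach the model through this path.
  return askModel(prompt);
}
```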

Ars Technica demonstrated this with concrete examples. In one of them, the browser began writing exclusively like a pirate and replacing the word "dog" with "sea dog." It looks like a joke, but it reveals something serious: if the model's behaviour can be bent this easily, it will be even easier to persuade it to recommend a more expensive product in a shop, ignore security warnings, or wave suspicious links through.

Privacy? According to Ars Technica, that's the real problem

Ars Technica points out another, deeper issue: user data. In traditional search engines we share snippets of information; in conversations with AI, often everything. People write to models as if they were assistants, advisers, sometimes even therapists, and entrust them with things they would never type into Google Search.

And here lies the crux: an AI browser sends literally everything you do to the cloud. The pages you visit. Every question. Every snippet of conversation. Ars Technica warns that this is the purest form of profiling on the market today. What's more, this data often ends up being used to train future models, so your private stories could become part of a dataset serving millions of other users.

AI agents? Ars Technica: "In practice, they hinder rather than help"

The feature that was supposed to set AI browsers apart is the so-called agents – tools that carry out tasks automatically. Ars Technica's tests, however, show that they operate in a chaotic and unpredictable manner. They often slow work down instead of speeding it up, skip important elements of a page, summarise content nobody asked for, or perform actions contrary to their instructions.

Worse still, these same agents are susceptible to prompt injection, which means they can be manipulated just as easily as a regular model. Ars Technica emphasises that in extreme cases they can even fall for phishing, that is, treat a malicious link as safe. This is no longer a "flaw" in functionality – it is a threat.
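
Why agents inherit this weakness becomes clearer if you sketch their basic loop. The outline below is a hypothetical simplification on our part, not Ars Technica's code or any vendor's implementation: the model is asked to pick the next action based on the very page content an attacker controls, so a persuasive hidden instruction or a convincingly described phishing link can steer what gets clicked.

```typescript
// Hypothetical outline of an agent loop – an illustration of the architecture
// described above, not an implementation from any tested browser.

interface AgentAction {
  type: "click" | "done";
  target?: string; // CSS selector chosen by the model
}

export async function runAgent(
  goal: string,
  // Stand-in for a cloud LLM call that returns the next action.
  askModelForAction: (prompt: string) => Promise<AgentAction>,
): Promise<void> {
  for (let step = 0; step < 10; step++) {
    // The "observation" is untrusted page content, hidden text included.
    const observation = document.body.textContent ?? "";

    const action = await askModelForAction(
      `Goal: ${goal}\nPage content:\n${observation}\nReply with the next action.`
    );

    if (action.type === "done") return;

    if (action.type === "click" && action.target) {
      // If the page convinced the model that a malicious link is "the official
      // login button", the agent clicks it; there is no separate safety check here.
      const el = document.querySelector<HTMLElement>(action.target);
      el?.click();
    }
  }
}
```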

Under the hood, it's still Chromium. Ars Technica: "AI browsers bring nothing new"

When Ars Technica examined AI browsers from a technical standpoint, the conclusions were particularly sobering. Most of them are simply Chromium with an LLM bolted on the side. The model runs in the cloud anyway, so the browser itself brings no "magic" of its own, and many of the features can be replicated with Chrome extensions: AI-based search, pulling contextual data from the page, or semi-agentic actions.
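
As a rough illustration of that point, a content script along these lines already covers "send the current page to a model and show the answer", which is the core of most sidebar features. The endpoint is a placeholder, and we assume a manifest.json that injects the script and permits the network request; this is a sketch, not any product's actual code.

```typescript
// content-script.ts – sketch of a Chrome extension reproducing the typical
// "ask the AI about this page" sidebar. Assumes a manifest.json that injects
// this script and permits requests to the chosen LLM endpoint.

const LLM_ENDPOINT = "https://example.invalid/v1/chat"; // placeholder, not a real API

async function askAboutPage(question: string): Promise<string> {
  // Crude context limit so very long pages do not blow up the request.
  const pageText = document.body.innerText.slice(0, 20_000);
  const res = await fetch(LLM_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt: `${question}\n\nPage:\n${pageText}` }),
  });
  return (await res.json()).answer;
}

// Minimal "sidebar": a fixed panel showing the model's answer.
async function showAnswer(question: string): Promise<void> {
  const panel = document.createElement("div");
  panel.style.cssText =
    "position:fixed;top:0;right:0;width:320px;height:100%;" +
    "background:#fff;border-left:1px solid #ccc;padding:12px;overflow:auto;z-index:999999";
  panel.textContent = "Thinking…";
  document.body.appendChild(panel);
  panel.textContent = await askAboutPage(question);
}

void showAnswer("Summarise this page in three sentences.");
```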

In practice, this means an AI browser gives the user no significant advantage. Instead, it bundles several existing tools into one package and tries to sell it as the future of the internet.

Ars Technica: AI browsers must aim higher

The summary of the tests is very clear: an LLM in the sidebar is not enough. If AI is truly to revolutionise internet browsing, the creators of these tools must move beyond the idea of "ChatGPT in a window next door." We need new concepts, better protection, greater privacy, and features that actually solve problems instead of merely repeating what is already on the page. It is hard to disagree. After the fall of the original Arc, many people – including on our editorial team – are still looking for a browser that will set a new direction. In their current form, AI browsers are not yet it.

AI browsers are still an experiment, not a tool for everyone

Ars Technica's tests show that AI browsers are currently more of an experiment than a tool for the masses. The risk of manipulation, the privacy issues, the immature agents, and the fact that most functions can be replicated with extensions make it hard to call them a finished product. It is a technology with huge potential, but not yet at a stage where it would be wise to entrust it with everyday tasks.

Katarzyna Petru

Journalist, reviewer, and columnist for the "ChooseTV" portal