Technology

5 Reasons Why You Should Not Switch To An AI Browser

2025-11-22 19:30
516 views
5 Reasons Why You Should Not Switch To An AI Browser

As much of a growing influence as AI has had on our daily lives, there are plenty of reasons to hold off on switching to an AI browser.

5 Reasons Why You Should Not Switch To An AI Browser By Gozie Ibekwe Nov. 22, 2025 2:30 pm EST Picture of ChatGPT Atlas logo on a smartphone jackpress/Shutterstock

The topic of artificial intelligence is the talk of the town right now. If you turn on the TV to your local news station, odds are you'd hear the "AI" abbreviation more times than you can count. There's a reason for that: AI agents have proven exceptionally helpful in ramping up productivity across several spheres of life. As the effects of its efficiency become demonstrably clear, virtually every walk of life is finding ways to hop on the AI train: from students looking to churn out a quick essay to professionals putting together a last-minute presentation document.

While AI does make our lives easier, that ease comes at a cost when users don't draw a definitive line in the sand of what the technology can and cannot do for them. Experts have highlighted concerns about dimming intellectual curiosity due to overreliance on large language models, but that's not the only dangerous emerging trend. With the advent of AI browsers like OpenAI's ChatGPT Atlas, and Perplexity's Comet, users are now entrusting the bulk of internet surfing to artificial intelligence.

We can list several reasons why this development is an expressly bad idea. If you're familiar with even the basics of internet security, the first thing you're wondering is probably how safe your data is in the hands of even the best AI browsers. You wouldn't be wrong to question it, but we'll cover even more factors while driving home  this point: don't switch to an AI browser just yet.

Prompt Injection Risk

The risks artificial intelligence software face Nathakorn Tedsaard/Shutterstock

For years, browsing required users to scroll through web pages. The game has changed with AI browsers; AI now visits sites at its own discretion based on user interaction. Browsers can do all sorts of tasks for you: respond to your emails, book your flights with saved payment information, and even post comments on social media platforms per your instructions.

Besides the obvious privacy risks we'll cover later in this article, prompt injection attacks are surfacing with regularity. In a nutshell, prompt injection attacks are harmful inputs disguised as legitimate prompts to trick AI systems into revealing sensitive information.

These threats have been illuminated most clearly by the efforts of the team at Brave — a privacy-focused browser. Using Perplexity's Comet browser as a case study, analysts uncovered a vulnerability hidden in one of its core functionalities. Comet's AI assistant can take screenshots of websites and provide users with analysis. However, that same screenshot-taking ability means that Comet would follow instructions specifically engineered to hack AI systems that are craftily hidden in web pages.

The human user can't see these malicious commands; Brave's team hide their prompt injection instructions with lighting tricks (the instructions were written in faint light blue font on a yellow background). Using these instructions, the AI browser can be tricked into performing harmful tasks such as visiting phishing websites or feeding sensitive information to hackers. 

Social Engineering Vulnerability: Prompting Manipulation

A hacker trying to access credit card information on a laptop Jittawit.21/Getty Images

Clouding instructions on a web page via lighting tricks isn't the only danger to AI browsers that has been discovered in recent weeks. With AI agents, hackers don't necessarily have to come up with lines of malicious code to seize control of a target's computer — they can easily resort to regular conversational tricks to manipulate the base desire of the AI to assist its user.

It's an AI-coded spin on the ClickFix attacks (attempting to execute harmful commands on a victim's computer via social engineering). This variation, tested by researchers, sends a message to a victim that piques the AI's curiosity. In the experiment, the researchers claimed to be a doctor sending over a patient's test results. This message contains a link that requires a CAPTCHA solution to access, and that's where the attack takes place via prompt injection.

The AI agent is convinced it doesn't need its human's attention to solve the CAPTCHA, and so it clicks a harmful button that exposes the device to viruses. The damage isn't limited to the traditional computer viruses either; there can be tangible financial loss, too. Since the AI browser can obtain access to saved payment information, hackers can leverage the social engineering-prompt injection combo to purchase items from fake websites.

With AI browsers, humans can be excluded from the security picture, and it's still remarkably easy to trick agents into making ill-informed decisions. Its "autonomy", while one of its best features, is also a great threat.

Social Engineering Vulnerability: Fake Sidebar Risk

GitHub Copilot sidebar Nwz/Shutterstock

While the previous points have explored the AI browser's security infrastructure, some risks target users in the old-fashioned way. Replica interfaces are another way for hackers to collect valuable information from you  — so how exactly does this work?

AI browsers allow users to ask the agent a question or input a website right from launch, and there's a sidebar to initiate direct interaction during the browsing experience. That's where this type of danger resides; hackers create a lookalike sidebar that tricks the user into thinking they're talking to their trusted AI agent. Instead, information is being funneled to the wrong hands.

The illusion is orchestrated by an insidious extension that injects JavaScript into the web page interface that users see and interact with. In simple terms, you may look at your AI browser and see nothing out of the ordinary because the legitimate sidebar has been duplicated perfectly.

This has various ramifications. Researchers at SquareX found that email addresses could be hijacked, and asking questions related to cryptocurrency could lead users to phishing websites. There are undoubtedly other use cases that have yet to be discovered, and at the time of writing, none of the main players in the AI browser space have addressed these loopholes.

Privacy Concerns

Shield icon representing privacy and security ArmadilloPhotograp/Shutterstock

Growing up in the age of computers, you were probably warned repeatedly not to share any information with strangers on the World Wide Web. However, those teachings are becoming lost knowledge in the new age of computing with the advent of large language models (LLMs) and now, AI browsers.

SlashGear covered multiple reasons to avoid giving AI tools your personal information, and we're going to reiterate a few in the context of AI browsers. Securing your data should be paramount to your internet browsing experience, and there's no guarantee that your information is safe on these platforms. Cookies and consumer tracking were already a privacy issue on traditional web browsers, considering how much user data was being retained for advertising (and other) purposes.

With AI browsers, you're effectively giving AI companies like OpenAI and Perplexity access to your web history, what sites you visit and how often. Depending on your user permissions, your local files aren't safe either. You could share sensitive information with an AI browser, accidentally or otherwise. This includes e-commerce purchases, company data, creative or scientific works still in development, and even personally identifiable information.

Given that artificial intelligence is a training-intensive affair, AI browsers could well train themselves on your sensitive data, at which point it's no longer yours.

Excessive CPU and Memory Consumption

CPU monitoring software detailing usage statistics aileenchik/Shutterstock

Beyond the problems of data theft and privacy, there are other ways AI browsers can adversely affect you and your computer. It's no secret that artificial intelligence is computationally expensive to develop and maintain. However, some of those expenses can pop up on your local computer via memory consumption.

We'll use Mozilla Firefox as a case study in this regard. It's one of the most popular alternatives to Google Chrome, and it's keeping up with the times and incorporating AI into its workflow. However, these updates have been less than perfect, exemplified by July's update 141, which brought AI-enhanced tab groups into the fold. Despite Mozilla taking a cautious approach to its AI integration, users observed astronomical spikes in CPU and power usage when using Firefox after the update.

CPU usage was reported to reach thresholds of 130%. With such an overload on a computer's inner components, your laptop will start overheating, slow down, and, if left unchecked, could lead to a crash. Although one could consider this an isolated event, there have been records of poor performance when using AI browsers like ChatGPT Atlas. Lags in browsing can be irritating, and the current meta of the agentic mode is far from perfect in this regard. Simple tasks like adding items to an Amazon cart can take minutes. If speed and computer performance are important to you, AI browsers may not be up to your tastes.