When AI browses, who is in control
AI is starting to permeate every part of our lives and apps. What began as a dedicated AI tool now shows up in our office suites, video editors, photo tools, note apps, and video meetings. Helpful, yes. Also, a growing behaviour risk.
Why is this important
Researchers have just demonstrated how an AI that browses for you can be misled by hidden text on a public page. The AI reads the page, combines its observations with your permissions, and then follows those quiet instructions as if they were yours. Brave’s team demonstrated this against Perplexity’s Comet using a simple Reddit post to trigger sensitive actions.
Is this a new form of cyber attack caused by our overreliance and overconfidence in AI?
It looks like the old playbook with a new front door. The technique is indirect prompt injection. The twist is a human habit. We are allowing assistants to act within logged-in sessions because it is convenient. When we over-trust the tool, the attacker does not need to phish us. They can whisper to the AI instead. OWASP already calls out prompt injection as a top risk for LLM-powered apps. What is emerging is the scale and ease when the agent has our browser privileges.
What is the impact on you?
If you ask an AI assistant to summarise a page while your email or banking is open, it may act with your access. It can read one-time codes, switch between tabs, and access data you didn't intend to share. The tech is not evil. It is obedient. The risk lies in how we use it and where we let it roam.
What is the significance of this?
Agentic browsing is coming to many products. Traditional borders, such as the same-origin policy, are ineffective when the AI operates within your session, interpreting page content as instructions. We need clear guardrails in product design and simple habits that people can repeat without thinking.
What to do this week?
- Use AI assistants only on trusted pages. Treat random links as off limits.
- Keep a separate browser profile for AI. No saved logins. No banking. No email.
- Turn off account connections you do not need. Remove email and calendar access if they are not essential.
- Reconfirm any payment or transfer in a fresh tab before you approve it.
- Use an authenticator app or a hardware key for two-factor. Do not let an agent read codes.
- Keep the assistant in help mode. Ask it to explain and summarise, not execute.
- Update your browser and extensions as fixes ship
My take
Security is a set of small choices, repeated. The goal is not fear. The goal is clarity. Know what the tool can do. Know what you are letting it see. Know when to say no.
Call to action
- Rethink the urge to switch on the latest feature.
- Treat every new convenience as a gift that needs checking.
- Be curious and investigate.
- The most important thing is to know what you are doing and how you are using it.
Does this make sense? I would love to hear your thoughts about this topic. I look forward to reading your sharing in the comments
Source of the discovery - Brave’s write-up and disclosure timeline:
Related coverage for context:
#AlvinSRatwork ✦ #AlvinSRCyberGuy ✦ #ExecutiveDirector ✦ #CyberCulture ✦ #CyberHygiene ✦ #CyberAwareness ✦ #BusinessTechnologist ✦ #CyberHabits
I would like to thank my friends who have sent me private messages and WhatsApp messages. Your care means a lot. Many of you have asked why I did not link some of my posts back to my services. You are right. Clear links help people see how I can help. I will do more of that. But can I ask? Must every post sell? Or is there space for a simple public service when the risk is immediate? Now that I am thinking about this, is there a way to share without selling, and prompt those who have found value and are interested to engage and ask how I can assist them? (Open to suggestions) I believe I was given Top Voice because I share openly and strive to add value in everything I write, whether for companies or everyday people. That will not change. Now back to AI. Are we pasting client details into public tools? Are we using approved accounts? Who reviews outputs before they touch customers or money? What guardrails do our teams follow today? Small habits now prevent big pain later. What do you want from me here? Pure value with no link. Or value with a clear path if you want help. If you'd like, I can develop a one-page AI safety starter and comment checklist. If you want help shaping simple team habits, send me a message.