The modern-day Trojan Horse you need to watch out for when using AI products
Made with Midjourney
AI has growing benefits…and risks. AI tools have evolved from conversational chatbots to agentic assistants that can now act on your behalf. For busy professionals, the productivity gains are hard to ignore. But there’s a catch: Whenever you delegate a task to AI, you’re at risk of a prompt injection.
What’s a prompt injection? They’re malicious instructions that are hidden inside everyday content like web pages, documents, and even images. When AI scans this content, it reads these hidden commands and follows them instead of your original request. While you might prompt AI to summarize a document, it could get redirected to a new task — like emailing your bank information to a hacker.
How to protect yourself. Modern AI systems have built-in defenses, but they’re not foolproof. Experts recommend limiting AI’s access only to what’s necessary. If AI needs to scan a database, make sure that it can only access the minimum amount necessary. On a personal note, I use agentic browsers like Comet and Atlas, but keep sensitive tasks like banking out of these browsers.
Monitor what goes in and out. It’s also a good practice to only feed AI content from trusted, reputable sources. When in doubt, review documents yourself before handing them to AI. A file might look harmless, but it could be hiding a secret threat.
