manu’s bookmarks

Bookmarks with tag “security”:

Self-driving cars, drones hijacked by custom road signs

Indirect prompt injection occurs when a bot takes input data and interprets it as a command. We've seen this problem numerous times when AI bots were fed prompts via web pages or PDFs they read. Now, academics have shown that self-driving cars and autonomous drones will follow illicit instructions that have been written onto road signs.

Notepad++ Hijacked by State-Sponsored Hackers

Oof.

Niedersachsen implementiert mit „Projekt Aegis“ Schutzschirm gegen Cyberangriffe

Agentic ProbLLMs: Exploiting AI Computer-Use and Coding Agents

This talk demonstrates end-to-end prompt injection exploits that compromise agentic systems. Specifically, we will discuss exploits that target computer-use and coding agents, such as Anthropic's Claude Code, GitHub Copilot, Google Jules, Devin AI, ChatGPT Operator, Amazon Q, AWS Kiro, and others.

Putin's Bears: World's Most Dangerous Hackers

Russian state-sponsored cyber units, colloquially known as the “Bears,” operate as direct instruments of national policy.

Increase in AI generated "vulnerability reports" and CVE requests

OpenWrt’s mailing list getting hit by nonsense AI vulnerability reports.

AI slop attacks on the curl project

In these days of "vibe coding" and chatbots, users ask AIs for help with everything. Asked to find security problems in Open Source projects, AI bots tell users something that sounds right. Reporting these "findings" wastes everyone's time and causes much frustration and fatigue. Daniel shows how this looks, how it creates a DDoS on projects and how totally beyond absurd this is. With examples and insights from the curl project.