Home » Indirect Prompt Injection into coding assistants like GitHub Copilot or Cursor

Indirect Prompt Injection into coding assistants like GitHub Copilot or Cursor

Indirect prompt injection via instruction files in tools like GitHub Copilot or Cursor occurs when an attacker embeds malicious or manipulative instructions within seemingly benign project files—such as README.md, CONTRIBUTING.md, or even code comments—that are then ingested by AI coding assistants. These instruction files can subtly alter the model’s behavior, coercing it to generate insecure code, leak sensitive data, or prioritize attacker-defined logic without the user’s awareness. Since these tools rely on context from surrounding files to provide relevant completions, they are susceptible to this kind of injection, making it a stealthy and potent attack vector in collaborative or open-source development environments.

Scroll to Top