Model Context Protocol poisoning is an emerging AI attack vector where adversaries manipulate the structured context that large language models (LLMs) rely on to reason about available tools, memory, or system state. This protocol—often JSON-based—encodes tool schemas, agent metadata, or prior interactions, which the model parses during inference. By injecting misleading or adversarial data into these context fields (e.g., altering function signatures, hiding malicious payloads in descriptions, or spoofing tool responses), attackers can subvert agent behavior, bypass filters, or exfiltrate data. Unlike prompt injection, which targets natural language prompts, Model Context Protocol poisoning exploits the model’s structured “belief space,” making it stealthier and potentially more persistent across multi-turn interactions or autonomous workflows.