Living off AI POC Attack: Exploiting MCP Integrations Through Prompt Injection

Summary
Recent research highlights a vulnerability in Model Context Protocol (MCP) integrations where threat actors submit malicious support tickets containing prompt injection payloads. When internal users trigger AI actions to process these tickets, the injected commands execute with internal privileges, enabling data exfiltration and lateral movement without direct authentication.
Any LLM based system that can access sensitive data, accept untrusted input, and send information should be considered a potential security risk and requires threat modeling
See the original research article for more details.
Context and Scope
Organisations deploy MCP servers to enhance workflow automation by connecting AI models like Claude, Gemini, or other LLMs to enterprise systems such as Jira Service Management (JSM) and Confluence. Internal users invoke MCP tools to summarise tickets, generate responses, or classify issues, assuming external ticket content is benign.
The attack exploits the trust boundary between external ticket submitters and internal MCP-enabled workflows. When an internal user processes a malicious ticket through MCP, the AI executes embedded instructions with the user’s permissions. The MCP server lacks prompt isolation or validation, treating all input equally.
This pattern affects any system where AI processes untrusted external input through MCP without sandboxing. The demonstration targets Atlassian’s MCP integration, but the vulnerability is architectural, not vendor-specific.
The Attack Pattern
How It Works
- Initial Access: A threat actor could potentially submit a support ticket through a public-facing portal (JSM) containing hidden prompt injection instructions
- Execution: An internal support engineer uses MCP tools to summarise or respond to the ticket
- Privilege Escalation: The AI processes both the legitimate request and hidden instructions with the engineer’s permissions
- Data Exfiltration: The AI queries internal systems and writes sensitive data back to the public ticket
The threat actor never directly accesses internal systems. Instead, they use the support engineer as an unwitting proxy through the AI integration.
Why This Works
No Input Validation: MCP servers often pass ticket content directly to AI models without sanitisation or filtering.
Excessive Permissions: AI actions execute with full user privileges rather than limited, context-appropriate access.
Invisible Payloads: Hidden instructions using Unicode characters or comment blocks bypass human review.
Trusted Context: Internal users assume support tickets are benign, not recognising them as attack vectors.
STRIDE perspective
Further to the article, from a basic STRIDE analysis perspective we also note:
- Information Disclosure: Direct path to exfiltrate tenant data
- Elevation of Privilege: Gains internal user permissions
- Tampering: Modifies AI behaviour via injection
Conclusion
The “Living off AI” attack demonstrates how AI integrations often present a significant security risk. Without proper controls, every external input channel becomes a potential vector for privilege escalation. Organisations must recognise that connecting AI to both external and internal systems creates a security boundary that requires active management.
In this example the MCP can access sensitive data, process untrusted public issues and exfiltrate data externally via public issues.
As MCP adoption accelerates, security teams must adapt their threat models to account for AI-mediated attacks. The ease of exploitation and severity of impact make this an urgent priority for any organisation deploying AI-powered workflows.