Hidden Conductor - AI Agent Security Challenge
A deliberately vulnerable AI agent chatbot demonstrating prompt injection and local file inclusion vulnerabilities.
This application is INTENTIONALLY VULNERABLE for security training and CTF purposes. DO NOT deploy in production!
- Prompt Injection: Untrusted GitHub issue content directly influences AI behavior
- Local File Inclusion (LFI): Reads arbitrary files from the filesystem
- Data Exfiltration: Sends file contents to GitHub comments
- No Sandboxing: Full filesystem access from AI-generated code
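To make the "no sandboxing" point above concrete, here is a minimal sketch of the vulnerable pattern; the function and argument names are hypothetical and not the actual code in `main.go`:

```go
package main

import (
	"fmt"
	"os"
)

// readFileTool sketches what an unsandboxed "read file" tool handler could look
// like: the path comes from AI-generated tool arguments (which untrusted GitHub
// issue content can steer via prompt injection) and is used without validation.
func readFileTool(path string) (string, error) {
	data, err := os.ReadFile(path) // no allow-list, no path normalization -> LFI
	if err != nil {
		return "", fmt.Errorf("read %s: %w", path, err)
	}
	// The returned contents could then be exfiltrated, e.g. posted as a GitHub comment.
	return string(data), nil
}

func main() {
	// Nothing stops the model from requesting the challenge flag or any other file.
	contents, err := readFileTool("flag")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(contents)
}
```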
- Docker (optional, for containerized deployment)
- GitHub OAuth App credentials
- Google Gemini API key
- A test GitHub repository with issues enabled
- `mcp.json`: MCP server configuration (loaded by the app and scannable by Snyk)
- `main.go`: AI agent with vulnerable agentic workflows
- `mcp_client.go`: MCP client that loads the config and injects user tokens
- `mcp_config.go`: Config loader for `mcp.json`
- `flag`: The target file contestants need to exfiltrate
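The `mcp.json` listed above is the central artifact. Its exact schema is defined by `mcp_config.go`; purely as an illustration, a loader for one plausible shape (the field names below are assumptions) might look like this:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// MCPServer is an assumed shape for one remote server entry in mcp.json;
// the authoritative schema is whatever mcp_config.go defines.
type MCPServer struct {
	URL     string            `json:"url"`
	Headers map[string]string `json:"headers"` // e.g. "Authorization": "Bearer ${GITHUB_PERSONAL_ACCESS_TOKEN}"
}

// MCPConfig is the assumed top-level shape: a map of named MCP servers.
type MCPConfig struct {
	Servers map[string]MCPServer `json:"mcpServers"`
}

func main() {
	raw, err := os.ReadFile("mcp.json")
	if err != nil {
		fmt.Println("error:", err) // the real app refuses to start in this case
		return
	}
	var cfg MCPConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		fmt.Println("error:", err)
		return
	}
	for name, srv := range cfg.Servers {
		fmt.Printf("MCP server %q -> %s\n", name, srv.URL)
	}
}
```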
The app uses `mcp.json` to configure the GitHub MCP server connection:
- At startup: App validates that `mcp.json` exists
- When user authenticates:
  - User's GitHub token is obtained via OAuth
  - Token is injected into the MCP config (replaces `${GITHUB_PERSONAL_ACCESS_TOKEN}`)
  - A dedicated MCP client is created for that user's session
- During chat: MCP client connects to the remote GitHub MCP server with the user's token
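A rough sketch of the token-injection step described above, assuming the placeholder is substituted textually in the loaded config (the helper and variable names are hypothetical, not the actual `mcp_client.go` API):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

const tokenPlaceholder = "${GITHUB_PERSONAL_ACCESS_TOKEN}"

// newSessionConfig reads the shared mcp.json and substitutes the placeholder
// with the GitHub token obtained for this user via OAuth, producing a
// per-session config without mutating mcp.json on disk. A dedicated MCP client
// for the user's chat session would then be built from the returned config.
func newSessionConfig(path, userToken string) (string, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	return strings.ReplaceAll(string(raw), tokenPlaceholder, userToken), nil
}

func main() {
	// "gho_example" stands in for the token returned by the OAuth flow.
	cfg, err := newSessionConfig("mcp.json", "gho_example")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(cfg)
}
```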
This means:
- ✅ The same `mcp.json` config file is used by the vulnerable app
- ✅ Contestants can put their GitHub PAT (personal access token) in `mcp.scan.json` and scan with `mcp-scan mcp.scan.json` to find toxic flows
- ✅ Each user gets their own isolated MCP session with their credentials
⚠️ `mcp.json` is REQUIRED - the app will not start without it (no hardcoded defaults)
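A minimal sketch of that fail-fast startup check (an assumption about the behavior, not the actual `main.go` code):

```go
package main

import (
	"log"
	"os"
)

func main() {
	// There are no hardcoded defaults: if mcp.json is missing, refuse to start.
	if _, err := os.Stat("mcp.json"); err != nil {
		log.Fatalf("mcp.json is required but could not be found: %v", err)
	}
	// ... continue with OAuth setup and MCP client wiring.
}
```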