MCP Servers for Threat Intelligence
How I set up Model Context Protocol servers to bridge AI assistants with threat intelligence platforms — and what I learned about tool design along the way.
The Problem
Threat hunting involves a lot of context switching. You are enriching IOCs in OpenCTI, writing up findings in Confluence, checking your Obsidian vault for prior campaign notes, and generating images for reports — all in the same investigation. Each tool switch breaks your train of thought.
When the Model Context Protocol specification came out, I saw it as a way to let an AI assistant query these platforms directly instead of me copying and pasting between browser tabs.
What I Built
I ended up with three MCP servers in the workflow, each targeting a specific gap:
OpenCTI MCP — Connects Claude to our threat intelligence platform via pycti. Supports indicator queries, campaign searches, relationship traversal, and label management. I built it with FastMCP in Python because the existing pycti library made it straightforward. The platform holds over 500K indicators, so getting the query interface right was important.
Obsidian MCP — Full CRUD access to the threat research vault. Eleven tools covering note search, creation, tag queries, backlink discovery, and template application. Built in Node.js with the MCP SDK. This one gets the most daily use because I keep all my campaign research in Obsidian.
NanoBanana MCP — Connects to a Gemini endpoint for AI image generation. Generates visuals like attack chain diagrams and MITRE heatmaps directly from prompts. Having image generation inline saves a lot of time on reports that need diagrams.
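Stripped of the SDK plumbing, all three servers reduce to the same two-method surface: tools/list advertises JSON Schema tool definitions, and tools/call dispatches a named tool with arguments. A toy sketch in plain Python shows that shape (the real servers use FastMCP and the MCP SDK; the tool name, schema fields, and stub lookup here are illustrative, not the actual server code):

```python
import json

# Toy registry standing in for what FastMCP / the MCP SDK manage for you.
TOOLS = {
    "search_indicators": {
        "description": "Search OpenCTI indicators by name or pattern.",
        "inputSchema": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Search term, e.g. an IP or hash."},
                "limit": {"type": "integer", "description": "Max results to return.", "default": 10},
            },
            "required": ["query"],
        },
    }
}

def search_indicators(query: str, limit: int = 10) -> list[dict]:
    # Stub: the real server calls pycti against the OpenCTI platform here.
    return [{"name": f"indicator matching {query}", "score": 85}][:limit]

HANDLERS = {"search_indicators": search_indicators}

def handle(request: dict) -> dict:
    """Dispatch the two MCP methods a tool server must answer."""
    if request["method"] == "tools/list":
        return {"tools": [{"name": n, **t} for n, t in TOOLS.items()]}
    if request["method"] == "tools/call":
        params = request["params"]
        result = HANDLERS[params["name"]](**params.get("arguments", {}))
        # MCP tool results come back as typed content blocks.
        return {"content": [{"type": "text", "text": json.dumps(result)}]}
    return {"error": {"code": -32601, "message": "method not found"}}
```

Everything beyond this, transport framing, capability negotiation, schema validation, is what the SDKs handle, which is why the servers themselves were quick to build.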
What I Learned
The servers themselves were not hard to build. The harder part was designing tool interfaces that actually produce useful results in conversation. A few things I picked up:
Keep outputs small. The first version of the OpenCTI server returned full indicator objects with dozens of fields. The AI would get lost trying to extract the relevant pieces. Trimming responses to essential fields made a noticeable difference.
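The fix amounts to an allowlist projection before anything is returned to the model. A minimal sketch (this particular field subset is illustrative; the real server keeps a slightly different list):

```python
# Keep only what the model needs to reason about an indicator.
# This subset is an example, not the server's exact field list.
ESSENTIAL_FIELDS = ("name", "pattern", "valid_until", "x_opencti_score", "labels")

def trim_indicator(full: dict) -> dict:
    """Project a full OpenCTI indicator object down to essential fields."""
    return {k: full[k] for k in ESSENTIAL_FIELDS if k in full}
```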
Parameter descriptions matter. MCP tools have JSON Schema definitions for their parameters. The AI reads those descriptions to figure out how to call your tools. Spending time on clear descriptions and sensible defaults makes the tools more reliable.
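In practice that means treating each description as documentation written for the model. A hypothetical tool definition in the shape MCP uses (the tool name and wording are made up for illustration):

```json
{
  "name": "search_notes",
  "description": "Full-text search across the research vault. Returns note paths and matching snippets.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "query": {
        "type": "string",
        "description": "Search term. Supports quoted phrases, e.g. \"campaign infrastructure\"."
      },
      "limit": {
        "type": "integer",
        "description": "Maximum notes to return. Keep this small; large result sets bury the relevant hit.",
        "default": 5
      }
    },
    "required": ["query"]
  }
}
```

A description like "Search term" alone technically validates, but the extra sentence about quoted phrases is what stops the model from guessing at syntax.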
Handle errors gracefully. Network timeouts and rate limits happen. Returning structured error messages instead of stack traces lets the AI understand what went wrong and retry on its own.
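One way to do that is to wrap every outbound call and normalize failures into a small error object with a retryable flag. A sketch using only the standard library (the error shape is my own convention, not part of the MCP spec):

```python
import json
import socket
import urllib.error
import urllib.request

def call_platform(url: str, timeout: float = 5.0) -> dict:
    """Return either a result or a structured, machine-readable error."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return {"ok": True, "data": json.loads(resp.read())}
    except socket.timeout:
        return {"ok": False, "error": {
            "type": "timeout",
            "message": f"{url} did not answer within {timeout}s",
            "retryable": True,
        }}
    except urllib.error.HTTPError as e:
        # Rate limits (429) are worth retrying; most other HTTP errors are not.
        return {"ok": False, "error": {
            "type": "http", "status": e.code, "retryable": e.code == 429,
        }}
    except urllib.error.URLError as e:
        return {"ok": False, "error": {
            "type": "network", "message": str(e.reason), "retryable": True,
        }}
```

Given a payload like `{"type": "timeout", "retryable": true}`, the assistant can decide on its own to wait and retry instead of presenting a stack trace to the user.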
Build for your actual workflow, not a general one. The Obsidian MCP has a search_by_tag tool because I organize research by campaign tags. That is specific to how I work, and that specificity is what makes it useful.
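The core of such a tool is small. A simplified stand-in for the tag search over a vault of Markdown notes (the real tool also reads tags from YAML frontmatter; this sketch only matches inline `#tag` tokens):

```python
from pathlib import Path

def search_by_tag(vault: Path, tag: str) -> list[str]:
    """Return vault-relative paths of notes containing an inline #tag."""
    needle = f"#{tag}"
    hits = []
    for note in sorted(vault.rglob("*.md")):
        text = note.read_text(encoding="utf-8")
        # Whole-token match so #lockbit does not match #lockbit-3.
        if any(tok == needle for line in text.splitlines() for tok in line.split()):
            hits.append(str(note.relative_to(vault)))
    return hits
```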
How It Fits Together
These servers now plug into the TITAs inquiry pipeline — the 9-phase automation I built for handling threat intelligence requests. They also get used individually through Claude Code for ad-hoc research. What used to mean opening multiple browser tabs and manually moving data between them now happens in one conversation.
The MCP ecosystem is still early, but the pattern seems clear: small, purpose-built servers that expose clean tool interfaces for specific domains.
Related Posts
Automating Inquiry Triage with AI
How I built a 9-phase AI pipeline to handle threat intelligence inquiries that used to take a week — and what it taught me about building tools from real pain points.
From OSINT to Internal Hunting
How shifting from external OSINT to internal telemetry hunting changed the way I approach threat research — and where most of my published work actually comes from.
What Ransomware Hunting Actually Looks Like
The daily reality of proactive ransomware hunting — from YARA triggers and VirusTotal dashboards to naming new families and building attack chains from telemetry.