Agentic MCP Server

Agentic AI Hub

Most people chat with AI. I built the infrastructure that lets AI actually do the work — trigger real automations, search real knowledge, write to my tools. Custom servers, self-hosted, fully under my control.

28 MCP Tools
3 Pipelines
First, a primer

USB for AI

MCP — Model Context Protocol — is one standard plug that lets any AI client talk to any tool, without writing glue code per pairing. An MCP server exposes named tools with typed inputs; the AI calls them like functions.
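Under the hood, the "standard plug" is JSON-RPC 2.0. A sketch of the wire shape of a single tool call — the tool name, arguments, and reply text here are illustrative stand-ins, not captured traffic:

```typescript
// Illustrative JSON-RPC 2.0 envelope for an MCP tools/call request.
// Tool name and arguments are examples, not a real request from this stack.
const toolCallRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "vault.query",
    arguments: { query: "decisions about the lead pipeline" },
  },
};

// The server replies with structured content blocks.
const toolCallResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    content: [{ type: "text", text: "ranked notes go here" }],
  },
};
```

Every client/tool pairing speaks this same envelope, which is exactly why no per-pairing glue code is needed.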

Concretely, in my stack: two custom MCP servers I built from scratch — one that lets Claude drive Make.com, one that gives it read/write access to my Obsidian vault — plus self-hosted n8n workflows on a VPS for the event-driven stuff. Every chip below is a real MCP tool Claude can call while I work — the Make and Vault ones I built, the rest are third-party MCPs wired into my Claude config.

make.run-scenario
vault.query
asana.create-task
notion.query-database
make.list-scenarios
vault.write
asana.search-tasks
notion.create-page
make.get-execution-logs
asana.update-task
vault.recall
vault.read
excalidraw.save-checkpoint
stitch.generate-screen
figma.implement-design
make.update-scenario

16 of the 28 tools — click through to see what each one does

01

MCP Tool Router

21 tools across 4 groups. Claude Code sends a request, the MCP server routes to the right tool, Zod validates inputs at runtime, and the upstream API delivers the response.

Click a tool to inspect its schema, request, and response.

CC
Claude Code · agent runtime
MCP
M
MCP Server · 21 tools · 4 groups
route
Obsidian Vault · destination
Asana · 6
Notion · 6
n8n · 5
Second Brain · 4
vault_query · Semantic search
Obsidian Vault
validated · zod
z.object({
  query: z.string().describe("Natural language query"),
  types: z.array(z.string()).optional(),
})
stdio transport · type-safe
online

What you're looking at. When Claude Code wants to do anything outside of reading and writing local files — create a Notion page, trigger a Make scenario, query the vault, create an Asana task — it doesn't speak directly to those upstream APIs. It calls a named MCP tool, and an MCP server's router takes over: pick the right handler, validate the input, hit the upstream system, normalize the response back into MCP's structured content format. Every tool you see in the demo is a real handler I can call from an agent right now.

Why a router instead of one endpoint per tool. 21 tools as 21 separate processes would mean 21 connections to maintain, 21 auth setups, 21 lifecycle bugs. A single server with grouped routing keeps the surface coherent — one process per integration domain — shares auth and rate-limit logic, and lets the agent discover related tools through MCP's capability negotiation instead of guessing names. When I add a new tool, the router picks it up automatically; nothing on the client side has to change.

Why Zod sits in the middle. The schema isn't just for nice TypeScript types — it's the runtime gate. Every input from the LLM is parsed against the schema before the handler runs. Wrong type, missing field, out-of-range number — caught here, not three API calls deep. The same schema is exported as JSON Schema to the client, so the LLM sees exactly what the server expects. One source of truth, two consumers (the runtime and the model).

02

Pipeline Visualizer

Three event-driven pipelines running on a self-hosted n8n instance — voice-to-Notion capture, lead handling with discovery-call generation, and a YouTube production line — each with explicit human gates where judgment can't be automated.

Hover a node for details. Gates mark the points where I step in.

Idea Catcher

LIVE · 1 gate

Lead Pipeline

PARTIAL · 1 gate

YouTube Pipeline

STAGING · 2 gates
trigger
process
ai
human gate
output

Why pipelines, not just MCP tools. MCP tools are great for things Claude does while I'm actively working with it. Pipelines are for things that happen without me — a webhook fires at 3am, an event arrives while I'm asleep, a meeting ends and a transcript needs to land somewhere. n8n runs on a VPS I control, listens for those events, and does the work in the background. By the time I open my laptop, the work is done.

What each pipeline actually does. Idea Catcher turns voice memos from an iOS shortcut into structured Notion pages — Whisper transcribes, GPT-4o-mini cleans and tags, n8n routes to the right database, and a human-in-the-loop gate decides whether the idea becomes an Asana task. Lead Pipeline polls a Notion booking-lead DB every minute, scrapes the lead's site, generates a discovery-call brief in Notion, and waits for me to review — the final “link into pipeline” step is still on the TODO list. YouTube Pipeline takes a topic from the vault, drafts a script with Claude, waits for me to approve, renders with Remotion, then waits again before upload — two gates, because publishing is irreversible.

Why the gates matter. Full automation sounds great until the AI mistakes a voice memo for a confidential note and posts it publicly, or the script generator hallucinates a fact about a real client. Every gate is a deliberate stopping point where a human checks before irreversible work happens. I keep automation honest by drawing clear lines between “AI can do this alone” and “AI proposes, human approves.”
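The gate mechanic itself is simple: a step whose outcome is "park and wait for a human" instead of "continue". A sketch under that assumption — step names follow the YouTube pipeline described above, but this is an illustration, not the actual n8n workflow definition:

```typescript
// Illustrative gate pattern: automated steps run until a gate,
// then the pipeline parks itself and waits for human approval.
type Step = { name: string; gate?: boolean };

function runUntilGate(
  steps: Step[],
  approvedThrough: number // index of the last gate a human has approved
): { completed: string[]; waitingAt: string | null } {
  const completed: string[] = [];
  for (let i = 0; i < steps.length; i++) {
    const step = steps[i];
    if (step.gate && i > approvedThrough) {
      return { completed, waitingAt: step.name }; // park here until approved
    }
    completed.push(step.name);
  }
  return { completed, waitingAt: null };
}

// The two-gate shape of the YouTube pipeline: publishing is irreversible.
const youtubePipeline: Step[] = [
  { name: "pick topic from vault" },
  { name: "draft script with Claude" },
  { name: "approve script", gate: true },
  { name: "render with Remotion" },
  { name: "approve upload", gate: true },
  { name: "upload" },
];
```

Everything before a gate is autonomous; everything after it only runs once a human has moved `approvedThrough` past that index.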

03

Knowledge Graph

A semantic map of my Obsidian vault. Type a query, the graph highlights the notes that match by meaning — not by keyword. This is how the agent finds context before answering anything that needs project memory.

Type "automation", "mcp", or "pipeline" into the search.

vault_query
>
idle
decision · n8n over Make
learning · MCP Zod Validation
skill · Webhook Patterns
project · Portfolio Spec
fleeting · Agent Memory Idea
skill · Claude Code Workf…
project · Remotion Pipeline
7 notes · 8 links
results
awaiting query…
decision
learning
skill
project
fleeting

What the agent actually does with this. When I ask Claude something like “why did we pick n8n over Make for the lead pipeline?”, it doesn't guess. It calls vault_query with the question, gets back a ranked list of relevant notes — decisions, learnings, project entries — and then answers, grounded in what I actually decided. No hallucinated history. No invented rationale. The graph you see is the result that comes back: nodes lit by relevance, connections showing related context.

Why semantic, not keyword. A keyword search misses everything where I phrased it differently — “automation platform” vs. “workflow tool”, “decision” vs. “trade-off”. Embedding-based semantic search matches by meaning, so the agent finds the relevant note even when the query and the title don't share a single word. That's the difference between an assistant that searches your notes and one that actually understands them.

The structure makes it work. Every note is atomic — one decision per file, one learning per file, one skill per file — with typed frontmatter (type, tags, project, created). The graph isn't a cosmetic visualization; it's the actual shape of the vault. When the agent searches, it can filter by note type before even looking at content (“only show me decisions about agents in the last quarter”), which is impossible against a wall of free-form Markdown.
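A toy sketch of that query path — frontmatter type filter first, then cosine similarity over embeddings. The three-dimensional vectors and note set are stand-ins for real embedding-model output and the real vault:

```typescript
// Toy semantic search: filter by frontmatter type, then rank by cosine similarity.
// Real embeddings have hundreds of dimensions; these 3-d vectors are stand-ins.
type Note = { title: string; type: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

const vault: Note[] = [
  { title: "n8n over Make", type: "decision", embedding: [0.9, 0.1, 0.0] },
  { title: "Webhook Patterns", type: "skill", embedding: [0.2, 0.8, 0.1] },
  { title: "MCP Zod Validation", type: "learning", embedding: [0.1, 0.2, 0.9] },
];

function vaultQuery(queryEmbedding: number[], types?: string[]): Note[] {
  return vault
    .filter((n) => !types || types.includes(n.type)) // typed-frontmatter filter
    .map((n) => ({ note: n, score: cosine(queryEmbedding, n.embedding) }))
    .sort((a, b) => b.score - a.score) // rank by meaning, not keywords
    .map((r) => r.note);
}
```

The type filter runs before any similarity math, which is the "only show me decisions" move described above: cheap structured narrowing first, semantic ranking second.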

04

Agent Orchestration

What agentic work actually looks like in practice. A main agent reads a goal, plans a sequence of tool calls, spawns sub-agents to parallelize independent work, and reports back. This is execution, not chat.

Three workflows cycle: Deploy, Research, Bug Fix.

prompt
> Deploy the new feature to staging
orchestrating
idle · Hover a bar for details, or watch the cursor
claude (main)
build-validator
test-runner
0.0s
0.5s
1.1s
1.6s
2.2s
2.7s
3.2s
3.8s
4.3s
4.9s
5.4s
spawn agent
R · read
E · edit
$ · bash
Q · query
commit
MCP · n8n · Brain · tools: 19 · uptime: 47d

What “agentic” means here. Most people's mental model of AI is a chat box: you ask, it answers. An agentic workflow is fundamentally different — you give it a goal (“deploy the new feature to staging”), and it figures out the steps: read the deploy checklist from the vault, run typecheck and tests in parallel, edit the staging config, push the branch, report the result. No further input from me until something actually requires a decision.

Why sub-agents matter. A single agent doing everything sequentially is slow and context-greedy. By spawning sub-agents — one for build validation, one for tests, one for code-search — the main agent can run independent work in parallel and keep its own context clean. Each sub-agent gets a narrow job and a small context window, which is faster, cheaper, and more reliable than one giant context full of unrelated tool calls. The orchestration view above shows exactly that: bars on parallel lanes, sub-agents indented under the main thread.

What this replaces. Without orchestration, every multi-step job needs me in the loop the whole time — paste output, tell it the next step, paste again. With orchestration, the multi-step job becomes a single instruction. The three workflows in the demo (Deploy, Research, Bug Fix) are real shapes I run regularly; what used to be 30 minutes of copy-paste between Claude and the terminal is now one prompt and a finished result.

Why this is built on MCP. Every action in the timeline — vault queries, file reads, edits, bash commands, git operations, sub-agent spawns — is an MCP tool call. The agent doesn't need a custom framework to orchestrate; it needs a typed surface of tools and the freedom to call them in whatever sequence the goal demands. MCP is the substrate; orchestration is what you get for free once the substrate is right.
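The fan-out itself is ordinary async code: independent sub-agents run concurrently, sequential reasoning stays on the main thread. A sketch with illustrative agent names and steps — not the actual Claude Code spawning mechanism:

```typescript
// Illustrative orchestration: spawn sub-agents only for independent work,
// await them together, then continue sequentially in the main context.
async function subAgent(name: string): Promise<string> {
  // stand-in for real sub-agent work (build validation, tests, code search)
  return `${name}: ok`;
}

async function deployToStaging(): Promise<string[]> {
  const log: string[] = [];
  log.push("read deploy checklist from vault"); // sequential: main agent

  // parallel: genuinely independent work fans out to sub-agents
  const results = await Promise.all([
    subAgent("build-validator"),
    subAgent("test-runner"),
  ]);
  log.push(...results);

  log.push("edit staging config, push branch"); // sequential again
  return log;
}
```

The rule from the decisions list applies directly: `Promise.all` only wraps work with no ordering dependency; everything that needs the full picture stays on the main lane.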

TypeScript
MCP SDK
n8n
Obsidian
Claude Code
Node.js
make-mcp-server/src/tools/scenarios.ts
server.tool("list-scenarios", {
  teamId: z.number().describe("Make.com team ID"),
  folderId: z.number().optional(),
}, async ({ teamId, folderId }) => {
  const scenarios = await makeApi.get("/scenarios", {
    params: { teamId, folderId }
  });
  // Normalize the payload into MCP's structured content format
  return {
    content: [{ type: "text", text: JSON.stringify(scenarios.data) }]
  };
});

Key Decisions

  • 01
    MCP over REST wrappers

    The obvious shortcut would have been a custom REST client per integration — fastest to ship, hardest to scale. Picking MCP up front meant a steeper learning curve (capability negotiation, JSON-RPC transport, schema serialization) but bought protocol-level decoupling: any MCP-aware agent client uses my tools without a single line of bespoke glue. Swap Claude Code for Cursor tomorrow and nothing breaks.

  • 02
    Self-hosted n8n on a VPS, not a SaaS automation platform

    Make.com and Zapier are convenient until you hit operation limits, vendor-managed webhooks under their domain, and data sitting on someone else's server. n8n on my own VPS gives me unlimited operations, webhook endpoints under my own domain (so I can revoke or rotate without begging vendor support), and physical control over data. Make.com is still in the stack — but as a target the MCP server drives, not as the orchestrator.

  • 03
    Obsidian as the knowledge layer, not a custom database

    The temptation was to build a structured database with a schema, an admin UI, an API. The result would have been a knowledge silo nobody wanted to touch. Obsidian gives me Markdown files in a folder with YAML frontmatter — git-friendly, editor-agnostic, cloud-syncable, zero vendor lock-in. The MCP server adds the agent layer on top. If the AI side disappears tomorrow, the knowledge stays usable.

  • 04
    Atomic notes over mega-docs

    One decision per file, one learning per file, one skill per file — never sprawling combo documents. Big docs are good for humans skim-reading, terrible for agents reasoning. With atomic notes the agent can pull exactly the context it needs, score relevance precisely, and link cleanly between pieces. The graph in section 03 only works because the underlying notes are at the right granularity.

  • 05
    Human gates inside automation, not around it

    Pipelines aren't all-or-nothing. Each one is mostly autonomous but pauses at deliberate gates — “is this idea worth a task?”, “is this script ready to shoot?”, “is this video ready to publish?”. The gates are where judgment is required and reversibility is poor. Everything else runs without me. This is the opposite of the “AI assists, human does” pattern: AI does, human gates.

  • 06
    Sub-agents for parallel work, single context for sequential

    The instinct with agentic work is to spawn sub-agents for everything. That's wrong. Sub-agents add coordination cost and lose the parent's context. The right rule: spawn sub-agents only for genuinely independent work (running tests while editing code, exploring multiple files at once). Sequential reasoning stays in the main agent's context where it can see everything. The orchestration view in section 04 reflects this — most bars are on the main lane, sub-agents only appear where parallelism actually pays off.

  • 07
    No platforms, no wrappers, no orchestration frameworks

    LangChain, AutoGen, CrewAI — every framework solves a problem that goes away once you have MCP and a capable agent runtime. Custom servers in plain TypeScript, Claude Code as the runtime, n8n for event-driven jobs, Obsidian for memory. Six layers of abstraction collapse to three. Less to learn, less to upgrade, less that breaks.
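For concreteness, the typed frontmatter behind the atomic-notes decision (04) can be sketched as a type plus a sample note. The field names (type, tags, project, created) and note types come from section 03; the sample values are invented:

```typescript
// Frontmatter shape for an atomic vault note. The five note types match
// the graph legend in section 03; the sample values are illustrative.
type NoteType = "decision" | "learning" | "skill" | "project" | "fleeting";

interface AtomicNote {
  type: NoteType;
  tags: string[];
  project?: string;
  created: string; // ISO date
}

const sample: AtomicNote = {
  type: "decision",
  tags: ["automation", "n8n"],
  project: "lead-pipeline",
  created: "2025-01-15",
};
```

Because every note carries this shape, the agent can filter on `type` and `created` before touching note content at all — the structured narrowing that free-form Markdown can't offer.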