Agentic · Second Brain

Second Brain

A self-organizing knowledge vault where AI agents read, write, and connect structured notes — 140+ atomic files, one idea each.

140+ Atomic Notes · 4 Note Types · 9 MCP Tools
The vault, live

Instead of a database, the Second Brain uses Markdown files with structured frontmatter — one decision per file, one learning per file, one skill per file. Nine MCP tools give any AI agent direct read and write access to the entire knowledge base.

140+ atomic notes, linked by topic, queried semantically. Here's how it works ↓

illustrative — generated to mirror vault topology
Anatomy

What an atomic note looks like

Every file is a typed record: YAML frontmatter on top, free-form Markdown below. One concept per file. Four canonical types in the vault — decision, learning, skill, reference — plus an index page per project. Each type has a defined trigger rule for when an agent autonomously creates one.

01-projects/luccafaust-dev/decisions/portfolio-bilingual-toggle.md · type: decision
---
type: decision
project: luccafaust-dev
tags: [i18n, next-intl, toggle]
created: 2026-04-20
---
# Portfolio bilingual — EN + DE with toggle
Portfolio will ship bilingual (EN + DE), switchable via a top-nav toggle. i18n pass comes AFTER content is solid — not in parallel, to avoid drift.
**Why:** …   **How to apply:** …
01 · Knowledge Map

Every note is a node. Every line is a semantic connection. Hover to explore how decisions, learnings, and skills relate to each other.

decisions · learnings · skills · references

n8n instead of Make (decision) · Zod Validation (learning) · Webhook Patterns (skill) · MCP Protocol Spec (reference) · Agent Orchestration (skill) · MCP Server Design (skill) · Atomic Notes (learning) · Tool Schema Design (decision)
02 · Vault Explorer

The folder structure mirrors how knowledge is organized: projects, patterns, resources. Every note carries typed frontmatter — type, tags, project, creation date.

Click any note to inspect its metadata.

Vault Explorer · ~/second-brain

Zod Validation

#mcp #validation #zod
---
type: learning
tags: [mcp, validation, zod]
created: 2026-03-15
project: second-brain-bridge
---

Legend: de = decision · lr = learning · sk = skill · rf = reference
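
The explorer above is just a view over these files. As a minimal sketch of how a note could be parsed into a typed record in the server's TypeScript (the field names mirror the frontmatter shown above; using the gray-matter package for YAML extraction is an assumption, and the vault's actual readNote may differ):

// Illustrative note parser, not the vault's actual implementation.
// Assumes the gray-matter npm package for YAML frontmatter extraction.
import fs from "node:fs";
import path from "node:path";
import matter from "gray-matter";

type NoteType = "decision" | "learning" | "skill" | "reference";

interface NoteFrontmatter {
  type: NoteType;
  tags: string[];
  created: string;                 // YYYY-MM-DD
  project?: string;
  status?: "active" | "archived" | "draft";
}

interface Note {
  path: string;                    // vault-relative path
  title: string;                   // first "# " heading, else the filename
  frontmatter: NoteFrontmatter;
  content: string;                 // Markdown body below the frontmatter
}

export function readNote(vaultPath: string, relPath: string): Note {
  const raw = fs.readFileSync(path.join(vaultPath, relPath), "utf8");
  const { data, content } = matter(raw);   // splits YAML from Markdown body
  const heading = content.match(/^#\s+(.+)$/m);
  return {
    path: relPath,
    title: heading?.[1] ?? path.basename(relPath, ".md"),
    frontmatter: data as NoteFrontmatter,
    content: content.trim(),
  };
}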
03 · Vault MCP — the nine tools

Nine tools ship with the server. Three do the heavy lifting on retrieval: vault_search (semantic search via local Ollama embeddings), vault_query (structured frontmatter filter — type, project, status, tags, since), vault_recall (hybrid task-aware retrieval). The remaining six — vault_read, vault_write, vault_update, vault_context, vault_link, vault_skill — cover reading, writing, project context-packing, and linking.

Select a tool on the left to see its schema, request, and response.

Vault MCP — 9 tools

vault_query

Structured frontmatter filter

Schema
z.object({
  type: z.string().optional(),    // decision | learning | skill | reference
  project: z.string().optional(),
  status: z.string().optional(),  // active | archived | draft
  tags: z.array(z.string()).optional(),
  since: z.string().optional(),   // YYYY-MM-DD
})
Request
{
  "type": "decision",
  "project": "second-brain-bridge",
  "status": "active",
  "since": "2026-04-01"
}
Response
{
  "results": [
    {
      "path": "01-projects/second-brain-bridge/decisions/2026-04-26-data-management-iteration.md",
      "title": "Vault Data-Management Iteration",
      "frontmatter": {
        "type": "decision",
        "status": "active",
        "tags": ["vault", "mcp", "findability"]
      }
    }
  ]
}
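
Conceptually, vault_query reduces to a predicate over frontmatter, applied note by note. The sketch below shows one way that match could work; matchesQuery is a hypothetical helper, not a confirmed part of the server:

// Hypothetical vault_query predicate; illustrative, not the server's code.
interface Frontmatter {
  type: string;
  created: string;                 // YYYY-MM-DD
  project?: string;
  status?: string;
  tags?: string[];
}

interface QueryFilter {
  type?: string;
  project?: string;
  status?: string;
  tags?: string[];
  since?: string;                  // YYYY-MM-DD
}

function matchesQuery(fm: Frontmatter, q: QueryFilter): boolean {
  if (q.type && fm.type !== q.type) return false;
  if (q.project && fm.project !== q.project) return false;
  if (q.status && fm.status !== q.status) return false;
  // Every requested tag must be present on the note.
  if (q.tags && !q.tags.every(t => (fm.tags ?? []).includes(t))) return false;
  // ISO dates (YYYY-MM-DD) compare correctly as plain strings.
  if (q.since && fm.created < q.since) return false;
  return true;
}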
04 · Live Query

Watch a semantic search in real-time. The agent asks a natural question, the system finds and ranks the most relevant notes.

Try it: type "mcp", "automation", or "portfolio".

Stack: Obsidian · MCP SDK · TypeScript · Markdown · Frontmatter · YAML

How It Works

The Second Brain is an Obsidian vault structured around atomic notes — one decision, one learning, one skill per file. Every note carries frontmatter metadata: type, tags, project, status, and date. This schema enables precise, typed queries instead of full-text search.

Nine MCP tools expose the vault to AI agents. The retrieval surface is three tools: vault_search (semantic search via local Ollama embeddings), vault_query (structured frontmatter filter — no query string, only type/project/status/tags/since), and vault_recall (hybrid task-aware retrieval that merges semantic and keyword results). The remaining six — vault_read, vault_write, vault_update, vault_context, vault_link, and vault_skill — handle reading, writing, project context-packing, and linking.
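
The merge strategy behind vault_recall isn't spelled out here. One common way to fuse a semantic ranking with a keyword ranking is reciprocal rank fusion; the sketch below illustrates that idea and should not be read as the tool's confirmed algorithm:

// Reciprocal rank fusion: one plausible merge for vault_recall's two lists.
// Illustrative only; the actual merge strategy is not documented here.
interface Ranked { path: string }

function fuseRankings(semantic: Ranked[], keyword: Ranked[], k = 60): Ranked[] {
  const scores = new Map<string, number>();
  for (const list of [semantic, keyword]) {
    list.forEach((r, rank) => {
      // Each list contributes 1 / (k + rank); notes high in either list win.
      scores.set(r.path, (scores.get(r.path) ?? 0) + 1 / (k + rank + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([path]) => ({ path }));
}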

On top of the MCP layer, seven slash commands wire the vault into day-to-day flow: /brain-context, /braindump, /daily-brief, /weekly-checkin, /knowledge-consolidation, /url-dump, /auto-research.

second-brain/.mcp-server/src/index.ts
// vault_search — semantic search via local Ollama embeddings.
// Falls back to keyword search if Ollama is unreachable.
server.tool(
  "vault_search",
  "Semantische Suche ueber den Vault.",
  {
    query: z.string().describe("Search term or question"),
    limit: z.number().optional(),
    project: z.string().optional(),
    type: z.string().optional(),
  },
  async ({ query, limit = 10, project, type }) => {
    let results;
    if (ollamaAvailable) {
      const queryEmbedding = await ollama.embed(query);
      results = findTopK(queryEmbedding, embeddingsCache, limit, { project, type });
    } else {
      // Graceful degradation: keyword over title + tags + content
      const notes = listNotes(VAULT_PATH).map(p => {
        const n = readNote(VAULT_PATH, p);
        return { path: p, title: n.title, tags: n.frontmatter.tags ?? [], content: n.content };
      });
      results = keywordSearch(query, notes).slice(0, limit);
    }
    for (const r of results) {
      const note = readNote(VAULT_PATH, r.path);
      r.excerpt = note.content.slice(0, 200);
    }
    return { content: [{ type: "text", text: JSON.stringify({ results }, null, 2) }] };
  }
);

The Search Stack

"Queried semantically" is doing a lot of work in a single sentence. Here's what actually happens when an agent asks a question.

The vault is not indexed by a cloud search service. Every note is embedded locally via Ollama running nomic-embed-text on localhost:11434. A file watcher re-embeds any note that changes, stamping each vector with the embedding model's version so the index can detect and replace stale entries after a model upgrade. At query time the same model embeds the question, then cosine similarity against every cached vector gives a ranked top-K — with optional filters by project and type applied before scoring. If Ollama isn't reachable, the server falls back to keyword search — graceful degradation, not a hard failure.

Nothing leaves the machine. Zero API cost, single-digit-ms latency per query, privacy-by-design.

embeddings/ollama-client.ts
// Local embedding via Ollama — no cloud round-trip
async embed(text: string): Promise<Float32Array> {
  const prepared = this.prepareText(text); // truncate to ~6k tokens
  const res = await fetch(`${this.host}/api/embeddings`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: this.model, prompt: prepared }),
  });
  const { embedding } = await res.json() as { embedding: number[] };
  return new Float32Array(embedding);
}

// Cosine similarity — the ranking function that makes "semantic" mean something
export function cosineSimilarity(a: Float32Array, b: Float32Array): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
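
The missing piece between embed() and a ranked result list is the top-K scan. Below is a plausible findTopK matching the call site in index.ts and reusing cosineSimilarity from above; the cache entry shape, including the model stamp mentioned earlier, is an assumption:

// Sketch of findTopK: a brute-force cosine scan over the in-memory cache.
// The CacheEntry shape, including the model stamp, is assumed.
interface CacheEntry {
  path: string;
  vector: Float32Array;
  model: string;                   // embedding model that produced the vector
  project?: string;
  type?: string;
}

function findTopK(
  query: Float32Array,
  cache: CacheEntry[],
  k: number,
  filter: { project?: string; type?: string } = {},
): { path: string; score: number }[] {
  return cache
    // Filters apply before scoring, so off-topic notes are never ranked.
    .filter(e => (!filter.project || e.project === filter.project)
              && (!filter.type || e.type === filter.type))
    .map(e => ({ path: e.path, score: cosineSimilarity(query, e.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}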

Key Decisions

  • 01
    Obsidian over Custom DB

    Markdown files with frontmatter. Human-readable, git-trackable, editable without tooling. The MCP layer adds programmatic access on top — the vault stays portable.

  • 02
    Atomic Notes

    One decision = one file. No mega-docs, no nested structures. Every note is individually queryable, linkable, and surfaceable.

  • 03
    Frontmatter as Schema

    Typed fields (type, tags, project, created) enable precise queries via MCP. Agents filter by note type and project context instead of full-text search.

  • 04
    Local embeddings over a cloud API

    Ollama running nomic-embed-text on localhost. Trade-off: slightly lower embedding quality than OpenAI's text-embedding-3, in exchange for privacy-by-default, zero per-query cost, single-digit-ms latency, and a vault that works offline on a train.

  • 05
    Model-versioned index

    Each cached vector stores the name of the model that produced it. When the embedding model changes, the index manager can list stale entries and re-embed only those — no full-vault wipe, no drift between "old vectors from model A" and "new vectors from model B" polluting the ranking. (A sketch of this re-embed pass follows the list.)

  • 06
    Graceful keyword fallback

    If Ollama isn't reachable at query time, the server doesn't error — it switches to keyword search over title + tags + content. The agent gets worse ranking, but the vault stays usable. Hard failures break workflows; soft degradation doesn't.
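
To make decision 05 concrete: once every cached vector carries the name of the model that produced it, finding stale entries is a filter, and the repair pass touches only those notes. A sketch, reusing the assumed CacheEntry shape from the search-stack section:

// Sketch of the stale re-embed pass; assumes the CacheEntry shape above.
async function reembedStale(
  cache: CacheEntry[],
  currentModel: string,
  embed: (text: string) => Promise<Float32Array>,
  readContent: (path: string) => string,
): Promise<number> {
  // Only vectors produced by a different model version are recomputed.
  const stale = cache.filter(e => e.model !== currentModel);
  for (const entry of stale) {
    entry.vector = await embed(readContent(entry.path));
    entry.model = currentModel;    // stamp with the current model version
  }
  return stale.length;             // number of entries refreshed
}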