Stellaris MCP: semantic search inside your codebase

If you develop with the help of Claude, Cursor, or another AI agent, you’ve probably experienced this frustration: the AI makes mistakes that have nothing to do with its capabilities. It suggests code that duplicates a function that already exists. It hallucinates imports. It fixes one bug while creating three others, because it never saw that the logic was already handled elsewhere.

This isn’t an intelligence problem — it’s a context problem.

An AI agent, however powerful, cannot read your entire project at every exchange. That would be too slow, and above all far too expensive in tokens. So it works with what you explicitly provide, and guesses the rest. What works fine on small isolated files quickly becomes a problem on a real project with dozens of files, hundreds of functions, and an architecture that has evolved over several months.

The solution is to give the AI a way to search your code on demand — to offer it a kind of internal search engine that understands what you’re looking for, even if you don’t know the exact name of the function or file.

That’s exactly what Stellaris MCP does: an open source tool I developed and published on GitHub, designed to integrate directly with Claude Desktop (and any MCP-compatible client).

It allows the AI to precisely locate the right piece of code at the right time, without needing to load your entire project into memory.

Result: fewer errors, fewer tokens consumed, and an assistant that truly understands your codebase.

You don’t need to be an experienced developer to benefit from it — if you use Claude to help you code, even occasionally, Stellaris MCP will save you time from the very first session.

What is an MCP server?

Imagine Claude being able to press buttons on your behalf: open a file, search your database, call an external API. That’s exactly what the Model Context Protocol (MCP) enables — an open standard developed by Anthropic that gives AI agents the ability to interact with third-party tools in a structured and secure way.

In practice, an MCP server exposes “tools” — functions the AI can invoke during a conversation, as needed. Claude doesn’t use them constantly: it calls them only when relevant, the way you’d ask a colleague for specific information rather than forwarding them the entire dossier upfront.

This “on-demand” logic is what makes MCP servers so effective at saving tokens and reducing inference costs. Instead of loading 50 files into context just in case, Claude can ask precisely for what it needs, when it needs it.
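Under the hood, MCP is built on JSON-RPC 2.0: when Claude decides a tool is relevant, the client sends a `tools/call` request to the server. As a rough illustration (the tool name `search_code` and its argument are hypothetical, not necessarily what Stellaris exposes), such a request looks like this:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "search_code",
    "arguments": { "query": "function that handles user login" }
  }
}
```

The server runs the tool and returns the result in the JSON-RPC response, which the client then feeds back into the conversation.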


Stellaris MCP exposes six tools of this type, split into two complementary categories: semantic search (which understands your intentions) and structural exploration (which parses the syntax of your code).

How it works: two complementary approaches

Stellaris MCP has two operating modes, depending on what you want to do.

Search by intent, not by keyword

The first mode is semantic search. In practice, Stellaris analyses your project and creates a kind of digital fingerprint for each function, component, or class. These fingerprints are stored locally on your machine in a small hidden folder.

The advantage? When Claude needs something, it can search by intent — “the function that handles user login” — without knowing the exact name of the file or method. It’s like Google for your code: you describe what you’re looking for, and the engine finds the most relevant match.
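The mechanics behind this kind of search are straightforward to sketch: every indexed symbol is stored with an embedding vector, and a query is answered by ranking those vectors by cosine similarity against the query's embedding. Here is a minimal sketch of that ranking step, under my own simplified assumptions; the names are illustrative, not Stellaris MCP internals:

```typescript
// Toy sketch of embedding-based ranking. In a real system, the vectors
// come from an embedding model; here they are just number arrays.
type Embedding = number[];

function cosineSimilarity(a: Embedding, b: Embedding): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank indexed symbols against a query embedding, best match first.
function rank(query: Embedding, index: [string, Embedding][]): string[] {
  return [...index]
    .sort((x, y) => cosineSimilarity(query, y[1]) - cosineSimilarity(query, x[1]))
    .map(([name]) => name);
}
```

A query like "the function that handles user login" lands close to `handleLogin` in embedding space even though no keyword matches, which is what makes search by intent work.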

This indexing uses the OpenAI API (a key is required), but the cost is minimal — a few cents for an entire project — and the index is updated incrementally: only modified files are re-analysed at each startup.
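Incremental indexing of this kind typically works by hashing each file's contents and re-embedding only the files whose hash has changed since the last run. A rough sketch of that change-detection step, as my own simplification rather than the actual implementation:

```typescript
import { createHash } from "node:crypto";

// Stable fingerprint of a file's contents.
function fingerprint(source: string): string {
  return createHash("sha256").update(source).digest("hex");
}

// Compare current sources against the hashes stored at the last indexing
// run, and return only the paths that need re-embedding.
function filesToReindex(
  current: Map<string, string>, // path -> current source text
  stored: Map<string, string>   // path -> fingerprint from the last run
): string[] {
  const stale: string[] = [];
  for (const [path, source] of current) {
    if (stored.get(path) !== fingerprint(source)) stale.push(path);
  }
  return stale;
}
```

Unchanged files never touch the embedding API again, which is why keeping the index up to date costs almost nothing after the first pass.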

Three tools cover this mode:

  • Code search — in natural language, across all your source files
  • Documentation search — across your Markdown files
  • Reindexing — to force an index update

Explore the structure, without spending a single token

The second mode is entirely free: no API key, no network call. It simply parses the syntax of your code to extract its structure — exactly as an IDE would.

Three tools in this category:

  • Tree view — the complete list of all your files, with stats by language
  • File outline — all functions and classes in a file, with their line numbers
  • Symbol source — the exact code of a specific function, with its surrounding context (see below)

This mode is particularly useful for Claude to “discover” your project at the start of a session, without having to load everything into memory.
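Stellaris does this with real syntax parsing; for intuition only, here is a deliberately simplified, regex-based sketch of the kind of data a file-outline tool returns (top-level function and class names with their line numbers):

```typescript
// Simplified outline extraction. A real tool parses the AST; this toy
// version only matches top-level `function`/`class` declarations.
type OutlineEntry = {
  kind: "function" | "class";
  name: string;
  line: number; // 1-indexed
};

function outline(source: string): OutlineEntry[] {
  const entries: OutlineEntry[] = [];
  source.split("\n").forEach((text, i) => {
    const m = text.match(/^\s*(?:export\s+)?(function|class)\s+([A-Za-z_$][\w$]*)/);
    if (m) {
      entries.push({ kind: m[1] as "function" | "class", name: m[2], line: i + 1 });
    }
  });
  return entries;
}
```

Because this is pure text processing, it costs no tokens and no API calls: the structure is computed locally and only the compact outline is sent to the model.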

The real bonus: automatic context

Here’s a detail that genuinely changes the quality of results. When a typical search tool returns a function, it gives you just the function. That seems logical, but it creates a practical problem: Claude doesn’t know where the variables it uses come from, or whether similar functions already exist in the same file.

Stellaris does something smarter: when it retrieves a function for Claude, it automatically attaches what surrounds it — the file’s import list, the names of neighbouring functions, and any warning comments (TODO, FIXME, etc.) present in the file.

That’s a small overhead (around a hundred tokens), but it avoids the most common mistakes: blind refactoring, code duplication, fixes that break something else without anyone noticing. A solid quality-to-cost ratio.
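The idea is simple to sketch: alongside the requested function, collect the file's import lines and any TODO/FIXME markers. A toy version of that collection step, under my own assumptions rather than the actual implementation:

```typescript
// Toy context extraction: gather import lines and warning comments
// from a file so they can be attached to a returned function.
function fileContext(source: string): { imports: string[]; warnings: string[] } {
  const lines = source.split("\n");
  return {
    imports: lines.filter((l) => /^\s*import\b/.test(l)),
    warnings: lines.filter((l) => /\b(TODO|FIXME)\b/.test(l)),
  };
}
```

A hundred or so extra tokens of this kind of metadata tells the model which names are already in scope and which parts of the file are known to be fragile.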


Compatible with your everyday languages

Stellaris MCP natively supports TypeScript, JavaScript, React (TSX/JSX), Python, Go, Rust, PHP, HTML and CSS — as well as Markdown files for documentation. That covers the vast majority of current web and backend projects.

What does it look like in practice?

Once installed, the natural workflow with Claude becomes:

  1. At the start of a session, Claude asks to see your project structure — in a single call, it knows what exists and where
  2. When it needs something specific, it searches by intent rather than asking you to paste code
  3. When it modifies a function, it retrieves the full context before writing anything

Most of these operations are entirely free — only semantic search uses the OpenAI API, and only during initial indexing or updates. In regular use, you spend almost nothing.

Installation and Claude Desktop integration

git clone https://github.com/GDM-Pixel/stellaris-code-search.git
cd stellaris-code-search
npm install
npm run build

Then in your claude_desktop_config.json:

{
  "mcpServers": {
    "stellaris-mcp": {
      "command": "node",
      "args": ["/path/to/stellaris-code-search/dist/index.js"],
      "env": {
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}

Without an OpenAI key, the server starts normally — the three AST tools remain available.

Genuinely open source

The project is published under the MIT licence on GitHub: github.com/GDM-Pixel/stellaris-code-search

Contributions welcome — issues, pull requests, suggestions for additional language support. If you use it on your projects and find edge cases, open an issue.

This is a tool I built for my own development workflows at GDM-Pixel and on Nova-Mind. I’m sharing it because the code context problem for AI agents is universal — and an open solution is worth more than a proprietary one locked inside a SaaS.

Charles Annoni

Front-End Developer and Trainer

Charles Annoni has been helping companies with their web development since 2008. He is also a trainer in higher education.