Context Engine
Self-hosted AI retrieval stack with hybrid search, micro-chunking, and pluggable models. One command deploys enterprise-grade code indexing for any MCP client.
Context‑Engine — Key Differentiators
"Self‑hosted, code‑aware retrieval and context compression layer for AI agents: hybrid search, deep AST indexing, and pluggable models, built for private, enterprise‑grade deployment."
Fully Self‑Hosted
Run on your infra, your vector DB, your LLM, your rules. No vendor lock‑in, complete control over your data and deployment.
Hybrid Retrieval by Design
Dense + lexical fusion, path priors, symbol boosts, recency weighting, and MMR for diversity — precision engineered for code.
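As an illustrative sketch only (not Context Engine's actual implementation), hybrid fusion plus MMR can be reduced to two small pieces: a weighted combination of dense and lexical scores, followed by greedy Maximal Marginal Relevance selection that trades relevance against redundancy. All names, weights, and the similarity function here are assumptions for the example.

```python
def fuse(dense, lexical, w_dense=0.6, w_lex=0.4):
    """Weighted fusion of dense and lexical scores (assumed pre-normalized to [0, 1]).

    Path priors, symbol boosts, and recency weights would be further
    additive or multiplicative terms on the same per-document score.
    """
    ids = set(dense) | set(lexical)
    return {d: w_dense * dense.get(d, 0.0) + w_lex * lexical.get(d, 0.0) for d in ids}

def mmr(candidates, sim, k=3, lam=0.7):
    """Greedy MMR: pick the doc maximizing lam*relevance - (1-lam)*max-similarity
    to anything already selected, so near-duplicate chunks are penalized."""
    selected, pool = [], dict(candidates)
    while pool and len(selected) < k:
        best = max(
            pool,
            key=lambda d: lam * pool[d]
            - (1 - lam) * max((sim(d, s) for s in selected), default=0.0),
        )
        selected.append(best)
        del pool[best]
    return selected
```

With `lam` near 1.0 MMR degenerates to plain relevance ranking; lowering it forces the result set to spread across distinct files or symbols.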
Deep Code‑Aware Indexing
Indexes AST symbols, imports, and call sites; semantic chunking plus optional ReFRAG micro‑chunks yields tighter context windows and more precise retrieval.
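To make "AST symbols/imports/calls" concrete, here is a minimal sketch of the idea using Python's standard `ast` module; the field names and the single-language scope are assumptions of the example, not Context Engine's indexer.

```python
import ast

def extract_symbols(source: str) -> dict:
    """Walk a Python module and collect the kinds of facts a code index stores:
    function/class definitions, imported modules, and called names."""
    tree = ast.parse(source)
    out = {"defs": [], "classes": [], "imports": [], "calls": []}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            out["defs"].append(node.name)
        elif isinstance(node, ast.ClassDef):
            out["classes"].append(node.name)
        elif isinstance(node, ast.Import):
            out["imports"].extend(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            out["imports"].append(node.module)
        elif isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            out["calls"].append(node.func.id)
    return out
```

Feeding these facts into the retriever is what enables symbol boosts: a query mentioning `loads` can rank chunks that define or call `loads` above chunks that merely contain the substring.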
Pluggable Models
Swap embeddings/rerankers per workload or hardware budget — scale from laptop to cluster with the same architecture.
Powerful Features
Hybrid Search
Dense + lexical + reranker for precise code discovery
ReFRAG Chunking
Micro-chunking for optimal context retrieval
Local LLM
Prompt enhancement runs on a locally hosted model
MCP Compatible
Works with Cursor, Windsurf, Roo, Cline, and more
Get Started in Minutes
Deploy Context Engine
Three simple commands to get running
Comprehensive Documentation
Everything you need to master Context Engine