Open-source context retrieval layer for AI agents
Updated Mar 20, 2026 - Python
Local persistent memory store for LLM applications, including Claude Desktop, GitHub Copilot, Codex, Antigravity, etc.
14-stage Fusion Pipeline for LLM token compression — reversible compression, AST-aware code analysis, intelligent content routing. Zero LLM inference cost. MIT licensed.
Semantica 🧠 — A framework for building semantic layers, context graphs, and decision intelligence systems with explainability and provenance.
Plug-and-play memory for LLMs in 3 lines of code. Add persistent, intelligent, human-like memory and recall to any model in minutes.
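The idea behind such "plug-and-play memory" layers can be sketched in plain Python. This is a toy illustration of the concept (persist notes to disk, recall by relevance), not the API of any project listed here; the `Memory` class, its methods, and the keyword-overlap scoring are all invented for illustration.

```python
import json
from pathlib import Path


class Memory:
    """Toy persistent memory: stores notes on disk, recalls by keyword overlap."""

    def __init__(self, path="memory.json"):
        self.path = Path(path)
        # Reload prior notes so memory survives across sessions.
        self.notes = json.loads(self.path.read_text()) if self.path.exists() else []

    def remember(self, text):
        self.notes.append(text)
        self.path.write_text(json.dumps(self.notes))

    def recall(self, query, k=3):
        # Rank stored notes by how many query words they share.
        q = set(query.lower().split())
        ranked = sorted(self.notes, key=lambda n: -len(q & set(n.lower().split())))
        return ranked[:k]
```

Real memory layers replace the keyword overlap with embedding similarity and the JSON file with a vector store, but the three-call shape (construct, `remember`, `recall`) is the pattern these projects advertise.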
Grov automatically captures context from your private AI sessions and syncs it to a shared team memory, auto-injecting relevant memories across developers and future sessions to save tokens and time.
Open-source protocol suite standardizing LLM, Vector, Graph, and Embedding infrastructure across LangChain, LlamaIndex, AutoGen, CrewAI, Semantic Kernel, and MCP. 3,330+ conformance tests. One protocol. Any framework. Any provider.
🛡️Decision infrastructure for AI agents. Intercept actions, enforce guard policies, require approvals, and produce audit-ready decision trails.
Route inference across LLM providers. Track cost per request.
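Routing with per-request cost tracking reduces to a small accounting loop. The sketch below is a minimal illustration of the concept under assumed, made-up prices (`provider_a`, `provider_b`, and the cheapest-first policy are all hypothetical, not taken from any listed project).

```python
from dataclasses import dataclass, field

# Illustrative per-1K-token prices; real provider pricing differs.
PRICES = {"provider_a": 0.002, "provider_b": 0.0005}


@dataclass
class Router:
    """Send each request to the cheapest provider and log its cost."""
    ledger: list = field(default_factory=list)

    def route(self, tokens: int) -> str:
        provider = min(PRICES, key=PRICES.get)  # cheapest-first policy
        cost = tokens / 1000 * PRICES[provider]
        self.ledger.append((provider, tokens, cost))
        return provider

    def total_cost(self) -> float:
        return sum(cost for _, _, cost in self.ledger)
```

Production routers layer in latency, rate limits, and model capability on top of price, but the ledger-per-request pattern is what makes "track cost per request" possible.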
Distributed data mesh for real-time access, migration, and replication across diverse databases — built for AI, security, and scale.
A Rust runtime that unifies relational tables, graph relationships, and vector embeddings in a single tensor-based storage layer with distributed consensus and semantic search.
AI Infrastructure Engineer Learning Track - Production ML infrastructure curriculum (2-4 years experience)
MachineAuth provides authentication and permission infrastructure that allows AI agents to securely access APIs, tools, and services.
NPU-powered, on-device AI mobile applications using Melange.
CX Linux — AI-powered Linux OS. Natural language system administration for Ubuntu & Debian. The AI layer for Linux infrastructure.
Stop paying for AI APIs during development. LocalCloud runs everything locally - GPT-level models, databases, all free.
A curated list of awesome tools, frameworks, platforms, and resources for building scalable and efficient AI infrastructure, including distributed training, model serving, MLOps, and deployment.
Predictive memory layer for AI agents. MongoDB + Qdrant + Neo4j with multi-tier caching, custom schema support & GraphQL. 91% Stanford STARK accuracy, <100ms on-device retrieval
Zero-code LLM security & observability proxy. Real-time prompt injection detection, PII scanning, and cost control for OpenAI-compatible APIs. Built in Rust.
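Prompt-injection detection and PII scanning in a proxy amount to pattern checks applied before a request is forwarded. The following is an illustrative Python sketch of that check, assuming two toy injection phrases and two toy PII patterns; it is far cruder than what a production proxy (in Rust or otherwise) would ship.

```python
import re

# Illustrative patterns only; production scanners use far richer rule sets.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"you are now", re.I),
]
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def scan(prompt: str) -> dict:
    """Return flags a proxy could use to block or redact a request."""
    return {
        "injection": any(p.search(prompt) for p in INJECTION_PATTERNS),
        "pii": [kind for kind, p in PII_PATTERNS.items() if p.search(prompt)],
    }
```

A proxy would run this on every inbound request and either reject (injection) or redact (PII) before the prompt reaches the upstream OpenAI-compatible API.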
TME: Structured memory engine for LLM agents to plan, rollback, and reason across multi-step tasks.