# 🧠 OntoMem: The Self-Consolidating Memory
OntoMem is built on the concept of Ontology Memory: structured, coherent knowledge representation for AI systems.
Give your AI agent a "coherent" memory, not just "fragmented" retrieval.
Traditional RAG (Retrieval-Augmented Generation) systems retrieve text fragments. OntoMem maintains structured entities using Pydantic schemas and intelligent merging algorithms. It automatically consolidates fragmented observations into complete knowledge graph nodes.
It doesn't just store data; it continuously "digests" and "organizes" it.
## 📰 News & Updates

- **[2026-01-28] v0.2.0 Released:**
  - ✨ **Lookups Feature**: O(1) multi-dimensional exact-match queries with automatic index consistency during merges
  - 🔍 **Secondary Indices**: Create custom indices on any data field for blazing-fast lookups without vector-search overhead
  - 🔄 **Automatic Sync**: Indices update automatically when items merge, maintaining perfect consistency
  - 🎯 **Composite Keys**: Support for complex indexing strategies (composite keys, hierarchical keys, etc.)
- **[2026-01-21] v0.1.5 Released:**
  - 🎯 **Production Safety**: Added a `max_workers` parameter to control LLM batch-processing concurrency
  - ⚡ **Rate Limit Protection**: Prevents hitting provider API rate limits (e.g., OpenAI) and the resulting account throttling
  - 🔧 **Fine-Grained Control**: Customize concurrency per merge strategy (default: 5 workers)
- **[2026-01-19] v0.1.4 Released:**
  - **API Improvement**: Renamed the `merge_strategy` parameter to `strategy_or_merger` for better clarity and flexibility
  - **Enhancement**: Added `**kwargs` support to pass merger-specific parameters (like `rule` and `dynamic_rule` for `CUSTOM_RULE`) directly through `OMem` without pre-configuration
  - **Benefit**: Cleaner API and more intuitive usage patterns for advanced merging scenarios
- **[2026-01-19] v0.1.3 Released:**
  - **New Feature**: Added the `MergeStrategy.LLM.CUSTOM_RULE` strategy for user-defined merge logic. Inject static rules and dynamic context (via functions) directly into the LLM merger!
  - **Breaking Change**: Renamed legacy strategies for clarity: `KEEP_OLD` → `KEEP_EXISTING`, `KEEP_NEW` → `KEEP_INCOMING`, `FIELD_MERGE` → `MERGE_FIELD`
  - Learn more →
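The lookups and automatic index sync described above can be pictured with a small dictionary-based sketch. All class and method names here are illustrative, not OntoMem's actual API:

```python
# Minimal sketch of a secondary index that stays consistent when items merge.
class IndexedStore:
    def __init__(self, index_field):
        self.items = {}            # primary key -> record dict
        self.index = {}            # index_field value -> primary key (O(1) lookup)
        self.index_field = index_field

    def add(self, key, record):
        existing = self.items.get(key)
        if existing:
            # Merging: drop the stale index entry before overwriting fields.
            old_val = existing.get(self.index_field)
            self.index.pop(old_val, None)
            existing.update(record)
            record = existing
        self.items[key] = record
        self.index[record[self.index_field]] = key

    def lookup(self, value):
        # Exact-match query via the secondary index, no vector search involved.
        key = self.index.get(value)
        return self.items.get(key)

store = IndexedStore(index_field="email")
store.add("u1", {"name": "Alice", "email": "a@old.example"})
store.add("u1", {"email": "a@new.example"})   # merge updates the index too
assert store.lookup("a@old.example") is None  # stale entry removed
assert store.lookup("a@new.example")["name"] == "Alice"
```

The point of the sketch is the invariant: every merge that changes an indexed field must remove the old index entry and write the new one in the same operation.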
## ✨ Key Features
### 🧩 Schema-First & Type-Safe
Built on Pydantic. All memories are strongly typed objects. Say goodbye to `{"unknown": "dict"}` hell and embrace IDE autocomplete and type checking.
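The schema-first idea looks like this standard-library sketch (dataclasses stand in for Pydantic here so the snippet runs without dependencies; `UserProfile` is an invented example schema, not part of OntoMem):

```python
from dataclasses import dataclass, field

# Schema-first memory records: typed objects instead of untyped dicts.
@dataclass
class UserProfile:
    name: str
    likes: list[str] = field(default_factory=list)

profile = UserProfile(name="Alice", likes=["apples"])
assert profile.name == "Alice"      # attribute access with IDE autocomplete
# UserProfile(name=123)             # a type checker flags this, unlike a raw dict
```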
### 🔄 Auto-Consolidation
When you insert different pieces of information about the same entity (same ID) multiple times, OntoMem doesn't create duplicates. It intelligently merges them into a Golden Record using configurable strategies (field overrides, list merging, or LLM-powered intelligent fusion).
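The non-LLM strategies can be illustrated with a short sketch. The strategy names mirror the changelog above, but the function body is my own simplification, not OntoMem's implementation:

```python
# Illustrative merge strategies over plain dicts (a sketch, not OntoMem's code).
def merge(existing: dict, incoming: dict, strategy: str) -> dict:
    if strategy == "KEEP_EXISTING":
        return existing
    if strategy == "KEEP_INCOMING":
        return incoming
    if strategy == "MERGE_FIELD":
        merged = dict(existing)
        for key, value in incoming.items():
            if isinstance(value, list) and isinstance(merged.get(key), list):
                # Union lists, preserving first-seen order.
                merged[key] = merged[key] + [v for v in value if v not in merged[key]]
            else:
                merged[key] = value  # scalar fields: incoming overrides
        return merged
    raise ValueError(f"unknown strategy: {strategy}")

old = {"name": "Alice", "likes": ["apples"]}
new = {"likes": ["apples", "bananas"]}
assert merge(old, new, "MERGE_FIELD") == {"name": "Alice", "likes": ["apples", "bananas"]}
```

The LLM-powered strategies replace this deterministic logic with model-driven fusion, but the contract is the same: two records in, one golden record out.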
### 🔍 Hybrid Search
- Key-Value Lookup: O(1) exact entity access
- Vector Search: Built-in FAISS indexing for semantic similarity search, automatically synced
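Both access paths can be sketched in a few lines. Brute-force cosine similarity stands in for the FAISS index, and `put`/`get`/`search` are invented names for illustration:

```python
import math

store = {}    # key -> structured record (O(1) exact access)
vectors = {}  # key -> embedding, kept in sync with the structured store

def put(key, record, embedding):
    store[key] = record
    vectors[key] = embedding

def get(key):
    return store.get(key)  # key-value lookup: O(1)

def search(query_vec, k=1):
    # Semantic similarity: rank all stored embeddings by cosine similarity.
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm
    ranked = sorted(vectors, key=lambda key: cosine(query_vec, vectors[key]), reverse=True)
    return [store[key] for key in ranked[:k]]

put("alice", {"name": "Alice"}, [1.0, 0.0])
put("bob", {"name": "Bob"}, [0.0, 1.0])
assert get("alice")["name"] == "Alice"
assert search([0.9, 0.1])[0]["name"] == "Alice"
```

A real vector index (like FAISS) avoids the O(n) scan, but the hybrid contract is the same: exact lookup when you know the key, similarity search when you don't.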
### 💾 Stateful & Persistent
Save your complete memory state (structured data + vector indices) to disk and restore it in seconds on next startup.
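The save/restore round-trip amounts to serializing the state and loading it back. This is a sketch with an invented file layout (structured data only; OntoMem also persists the vector indices):

```python
import json
import os
import tempfile

def save(memory: dict, path: str) -> None:
    # Persist the structured state to disk.
    with open(path, "w") as f:
        json.dump(memory, f)

def load(path: str) -> dict:
    # Restore the state on next startup.
    with open(path) as f:
        return json.load(f)

memory = {"alice": {"likes": ["apples", "bananas"]}}
path = os.path.join(tempfile.mkdtemp(), "memory.json")
save(memory, path)
assert load(path) == memory  # state survives a restart
```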
## 🎯 Use Cases
### 🤖 AI Research Assistant
Consolidate researcher profiles, papers, and citations from multiple sources.
### 👤 Personal Knowledge Graph
Build a living profile of contacts, their preferences, skills, and interaction history from conversations.
### 🏢 Enterprise Data Hub
Unify customer/employee records from CRM, email, support tickets, and social media.
### 🧠 AI Agent Long-Term Memory
An autonomous agent accumulates experiences and observations; OntoMem keeps them organized and searchable.
## 🚀 Quick Example
```python
from ontomem import OMem, MergeStrategy
from pydantic import BaseModel
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# Define your schema
class BugFixExperience(BaseModel):
    error_signature: str
    solutions: list[str]
    prevention_tips: str

# Initialize memory
memory = OMem(
    memory_schema=BugFixExperience,
    key_extractor=lambda x: x.error_signature,
    llm_client=ChatOpenAI(model="gpt-4o"),
    embedder=OpenAIEmbeddings(),
    strategy_or_merger=MergeStrategy.LLM.BALANCED,
)

# Add experiences
memory.add(BugFixExperience(
    error_signature="ModuleNotFoundError: pandas",
    solutions=["pip install pandas"],
    prevention_tips="Check requirements.txt",
))

# Query
experience = memory.get("ModuleNotFoundError: pandas")
print(experience.solutions)  # Auto-merged across all observations!
```
## 🏆 Why OntoMem?
Most memory libraries store raw text or chat history. OntoMem stores **consolidated knowledge**.
| Feature | OntoMem 🧠 | Mem0 / Zep | LangChain Memory | Vector DBs (Pinecone/Chroma) |
|---|---|---|---|---|
| Core Storage Unit | ✅ Structured Objects (Pydantic) | Text Chunks + Metadata | Raw Chat Logs | Embedding Vectors |
| Data "Digestion" | ✅ Auto-Consolidation & Merging | Simple Extraction | ❌ Append-only | ❌ Append-only |
| Time Awareness | ✅ Time-Slicing (Daily/Session Aggregation) | ❌ Timestamp metadata only | ❌ Sequential only | ❌ Metadata filtering only |
| Conflict Resolution | ✅ LLM Logic (Synthesize/Prioritize) | ❌ Last-write-wins | ❌ None | ❌ None |
| Type Safety | ✅ Strict Schema | ⚠️ Loose JSON | ❌ String only | ❌ None |
| Ideal For | Long-term Agent Profiles, Knowledge Graphs | Simple RAG, Search | Chatbots, Context Window | Semantic Search |
## 💡 The "Consolidation" Advantage
- **Traditional RAG**: stores 50 chunks of "Alice likes apples", "Alice likes bananas". Search returns 50 fragments.
- **OntoMem**: merges them into one object: `User(name="Alice", likes=["apples", "bananas"])`. Search returns one complete truth.
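The consolidation idea is small enough to run as a sketch: repeated observations about one entity collapse into a single record instead of accumulating as fragments (the loop below is an illustration, not OntoMem's internals):

```python
# Fragmented observations, as a RAG store would receive them.
observations = [("Alice", "apples"), ("Alice", "bananas"), ("Alice", "apples")]

# Consolidation: one record per entity, duplicates absorbed.
profiles: dict[str, dict] = {}
for name, item in observations:
    profile = profiles.setdefault(name, {"name": name, "likes": []})
    if item not in profile["likes"]:
        profile["likes"].append(item)

assert profiles == {"Alice": {"name": "Alice", "likes": ["apples", "bananas"]}}
```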
## 📚 Next Steps
- Getting Started - 5-minute setup guide
- Merge Strategies - Learn about different merging approaches
- API Reference - Complete API documentation
Built with ❤️ for AI developers who believe memory is more than just search.