
🧠 OntoMem: The Self-Consolidating Memory

OntoMem is built on the concept of Ontology Memory: structured, coherent knowledge representation for AI systems.

Give your AI agent a "coherent" memory, not just "fragmented" retrieval.

OntoMem Framework Diagram

PyPI version Python 3.11+ License: Apache 2.0 PyPI downloads Documentation

Traditional RAG (Retrieval-Augmented Generation) systems retrieve text fragments. OntoMem maintains structured entities using Pydantic schemas and intelligent merging algorithms. It automatically consolidates fragmented observations into complete knowledge graph nodes.

It doesn't just store data; it continuously "digests" and "organizes" it.


📰 News & Updates

  • [2026-01-28] v0.2.0 Released:
  • ✨ Lookups Feature: O(1) multi-dimensional exact-match queries with automatic index consistency during merges
  • 🔍 Secondary Indices: Create custom indices on any data field for fast exact lookups without vector-search overhead
  • 🔄 Automatic Sync: Indices update automatically when items merge, keeping them consistent with the stored data
  • 🎯 Composite Keys: Support for complex indexing strategies (composite keys, hierarchical keys, etc.)
  • Learn more →
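The core idea behind merge-aware secondary indices can be sketched in a few lines of plain Python. This is a toy illustration, not OntoMem's actual API; the class and method names here are invented for the example:

```python
# Toy sketch of a merge-aware secondary index (illustrative, not OntoMem's API).
class IndexedStore:
    def __init__(self, index_field):
        self.items = {}              # primary key -> record (a dict)
        self.index_field = index_field
        self.index = {}              # indexed field value -> set of primary keys

    def add(self, key, record):
        if key in self.items:
            self._merge(key, record)
        else:
            self.items[key] = record
            self.index.setdefault(record[self.index_field], set()).add(key)

    def _merge(self, key, incoming):
        existing = self.items[key]
        old_val = existing[self.index_field]
        existing.update(incoming)        # naive "incoming wins" field merge
        new_val = existing[self.index_field]
        if new_val != old_val:           # keep the index consistent with the merge
            self.index[old_val].discard(key)
            self.index.setdefault(new_val, set()).add(key)

    def lookup(self, value):
        """O(1) exact-match lookup via the secondary index."""
        return [self.items[k] for k in self.index.get(value, ())]

store = IndexedStore(index_field="team")
store.add("alice", {"name": "Alice", "team": "infra"})
store.add("alice", {"name": "Alice B.", "team": "ml"})   # merge relocates the index entry
print([r["name"] for r in store.lookup("ml")])           # ['Alice B.']
```

The key point the sketch shows: when a merge changes an indexed field, the index entry must move with it, otherwise exact-match lookups silently return stale results.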

  • [2026-01-21] v0.1.5 Released:

  • 🎯 Production Safety: Added max_workers parameter to control LLM batch-processing concurrency
  • ⚡ Rate Limit Protection: Caps concurrent requests so you don't hit provider rate limits (e.g. OpenAI) and get your account throttled
  • 🔧 Fine-Grained Control: Customize concurrency per merge strategy (default: 5 workers)
  • Learn more →
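Bounded concurrency of this kind is conventionally a thread pool with a capped worker count. A generic stdlib sketch (not OntoMem's internals; merge_fn stands in for an LLM merge call):

```python
from concurrent.futures import ThreadPoolExecutor

def merge_batch(items, merge_fn, max_workers=5):
    """Run merge_fn over items with at most max_workers concurrent calls,
    so bursts of LLM requests stay under a provider's rate limit."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(merge_fn, items))   # preserves input order

results = merge_batch(range(10), lambda x: x * 2, max_workers=5)
print(results)  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

Lowering max_workers trades batch throughput for headroom under the provider's requests-per-minute quota.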

  • [2026-01-19] v0.1.4 Released:

  • API Improvement: Renamed merge_strategy parameter to strategy_or_merger for better clarity and flexibility
  • Enhancement: Added **kwargs support to directly pass merger-specific parameters (like rule and dynamic_rule for CUSTOM_RULE) through OMem without pre-configuration
  • Benefit: Cleaner API and more intuitive usage patterns for advanced merging scenarios
  • Learn more →

  • [2026-01-19] v0.1.3 Released:

  • New Feature: Added MergeStrategy.LLM.CUSTOM_RULE strategy for user-defined merge logic. Inject static rules and dynamic context (via functions) directly into the LLM merger!
  • Breaking Change: Renamed legacy strategies for clarity:
    • KEEP_OLD → KEEP_EXISTING
    • KEEP_NEW → KEEP_INCOMING
    • FIELD_MERGE → MERGE_FIELD
  • Learn more →
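The renamed strategies map to simple merge semantics, which can be shown with a toy function over plain dicts (illustrative only, not the library's implementation):

```python
def merge(existing: dict, incoming: dict, strategy: str) -> dict:
    """Toy illustration of the three non-LLM strategy names above."""
    if strategy == "KEEP_EXISTING":     # existing values win on conflict
        return {**incoming, **existing}
    if strategy == "KEEP_INCOMING":     # incoming values win on conflict
        return {**existing, **incoming}
    if strategy == "MERGE_FIELD":       # union list fields; incoming wins elsewhere
        out = {**existing, **incoming}
        for k, v in existing.items():
            if isinstance(v, list) and isinstance(incoming.get(k), list):
                out[k] = v + [x for x in incoming[k] if x not in v]
        return out
    raise ValueError(f"unknown strategy: {strategy}")

a = {"tips": ["check logs"], "owner": "alice"}
b = {"tips": ["restart"], "owner": "bob"}
print(merge(a, b, "MERGE_FIELD"))  # {'tips': ['check logs', 'restart'], 'owner': 'bob'}
```

The LLM strategies (including CUSTOM_RULE) replace this deterministic logic with model-driven fusion of the two records.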

✨ Key Features

🧩 Schema-First & Type-Safe

Built on Pydantic. All memories are strongly-typed objects. Say goodbye to {"unknown": "dict"} hell and embrace IDE autocomplete and type checking.

🔄 Auto-Consolidation

When you insert different pieces of information about the same entity (same ID) multiple times, OntoMem doesn't create duplicates. It intelligently merges them into a Golden Record using configurable strategies (field overrides, list merging, or LLM-powered intelligent fusion).

🔍 Hybrid Retrieval

  • Key-Value Lookup: O(1) exact entity access
  • Vector Search: Built-in FAISS indexing for semantic similarity search, automatically synced
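The two access paths can be illustrated with a stdlib-only toy: a dict for O(1) key access, and brute-force cosine similarity standing in for FAISS (the vectors and keys below are invented for the example):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

store = {
    "pandas-err": {"text": "ModuleNotFoundError pandas", "vec": [1.0, 0.0]},
    "numpy-err":  {"text": "ImportError numpy",          "vec": [0.8, 0.6]},
}

# 1) Key-value lookup: O(1) exact access by entity key
record = store["pandas-err"]

# 2) Semantic search: rank entities by similarity to a query embedding
query = [0.9, 0.5]
best = max(store, key=lambda k: cosine(query, store[k]["vec"]))
print(best)  # numpy-err (closest in direction to the query)
```

In OntoMem the second path is backed by a FAISS index that is kept in sync as entities merge, rather than a linear scan.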

💾 Stateful & Persistent

Save your complete memory state (structured data + vector indices) to disk and restore it in seconds on next startup.
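A save/restore round-trip of structured state can be sketched with nothing but the standard library (OntoMem's actual on-disk format is not shown here; this is only the shape of the idea):

```python
import json, os, tempfile

# Toy memory state: structured records plus their embedding vectors
state = {
    "items":   {"alice": {"likes": ["apples", "bananas"]}},
    "vectors": {"alice": [0.1, 0.2]},
}

path = os.path.join(tempfile.mkdtemp(), "memory_state.json")
with open(path, "w") as f:      # persist everything in one snapshot
    json.dump(state, f)

with open(path) as f:           # restore on next startup
    restored = json.load(f)

assert restored == state        # identical state, no re-ingestion needed
```

The payoff is cold-start time: reloading a snapshot avoids re-embedding and re-consolidating the corpus.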


🎯 Use Cases

🤖 AI Research Assistant

Consolidate researcher profiles, papers, and citations from multiple sources.

👤 Personal Knowledge Graph

Build a living profile of contacts, their preferences, skills, and interaction history from conversations.

๐Ÿข Enterprise Data Hub

Unify customer/employee records from CRM, email, support tickets, and social media.

🧠 AI Agent Long-Term Memory

An autonomous agent accumulates experiences and observations; OntoMem keeps them organized and searchable.


🚀 Quick Example

from ontomem import OMem, MergeStrategy
from pydantic import BaseModel
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# Define your schema
class BugFixExperience(BaseModel):
    error_signature: str
    solutions: list[str]
    prevention_tips: str

# Initialize memory
memory = OMem(
    memory_schema=BugFixExperience,
    key_extractor=lambda x: x.error_signature,
    llm_client=ChatOpenAI(model="gpt-4o"),
    embedder=OpenAIEmbeddings(),
    strategy_or_merger=MergeStrategy.LLM.BALANCED
)

# Add experiences
memory.add(BugFixExperience(
    error_signature="ModuleNotFoundError: pandas",
    solutions=["pip install pandas"],
    prevention_tips="Check requirements.txt"
))

# Query
experience = memory.get("ModuleNotFoundError: pandas")
print(experience.solutions)  # Auto-merged across all observations!

📊 Why OntoMem?

Most memory libraries store Raw Text or Chat History. OntoMem stores Consolidated Knowledge.

| Feature | OntoMem 🧠 | Mem0 / Zep | LangChain Memory | Vector DBs (Pinecone/Chroma) |
|---|---|---|---|---|
| Core Storage Unit | ✅ Structured Objects (Pydantic) | Text Chunks + Metadata | Raw Chat Logs | Embedding Vectors |
| Data "Digestion" | ✅ Auto-Consolidation & Merging | Simple Extraction | ❌ Append-only | ❌ Append-only |
| Time Awareness | ✅ Time-Slicing (Daily/Session Aggregation) | ❌ Timestamp metadata only | ❌ Sequential only | ❌ Metadata filtering only |
| Conflict Resolution | ✅ LLM Logic (Synthesize/Prioritize) | ❌ Last-write-wins | ❌ None | ❌ None |
| Type Safety | ✅ Strict Schema | ⚠️ Loose JSON | ❌ String only | ❌ None |
| Ideal For | Long-term Agent Profiles, Knowledge Graphs | Simple RAG, Search | Chatbots, Context Window | Semantic Search |

💡 The "Consolidation" Advantage

  • Traditional RAG: Stores 50 chunks of "Alice likes apples", "Alice likes bananas". Search returns 50 fragments.
  • OntoMem: Merges them into 1 object: User(name="Alice", likes=["apples", "bananas"]). Search returns one complete truth.
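The difference can be shown in a few lines of plain Python (toy code, not OntoMem internals):

```python
# Fragmented observations, as a RAG store would keep them: separate chunks
observations = [
    {"name": "Alice", "likes": ["apples"]},
    {"name": "Alice", "likes": ["bananas"]},
]

# Consolidation: fold all fragments about the same entity into one record
profile = {"name": "Alice", "likes": []}
for obs in observations:
    for item in obs["likes"]:
        if item not in profile["likes"]:   # deduplicate while merging
            profile["likes"].append(item)

print(profile)  # {'name': 'Alice', 'likes': ['apples', 'bananas']}
```

A retrieval hit now returns this single consolidated object instead of N overlapping fragments the caller must reconcile.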

🔗 Next Steps


Built with ❤️ for AI developers who believe memory is more than just search.