haive.agents.memory.agent
=========================

.. py:module:: haive.agents.memory.agent

.. autoapi-nested-parse::

   Memory Agent - a ReactAgent with persistent memory, KG extraction, and auto-summarization.

   Phase 1: memory tools (save/search/KG) + auto-context + auto-summarize.
   Phase 2: KG extraction from conversations + context-length pre-hook.
   Optional integration with document_modifiers (summarizer, KG extraction).

   Uses the haive.core store for persistence, NOT langmem. Supports
   InMemoryStore (dev) and PostgresStore (production).

Classes
-------

.. autoapisummary::

   haive.agents.memory.agent.ConversationExtraction
   haive.agents.memory.agent.ExtractedTriple
   haive.agents.memory.agent.MemoryAgent

Functions
---------

.. autoapisummary::

   haive.agents.memory.agent.create_memory_agent

Module Contents
---------------

.. py:class:: ConversationExtraction(/, **data)

   Bases: :py:obj:`pydantic.BaseModel`

   Structured extraction from a conversation turn.

   Create a new model by parsing and validating input data from keyword arguments.
   Raises :exc:`pydantic_core.ValidationError` if the input data cannot be validated
   to form a valid model.

   ``self`` is explicitly positional-only to allow ``self`` as a field name.

.. py:class:: ExtractedTriple(/, **data)

   Bases: :py:obj:`pydantic.BaseModel`

   A single knowledge graph triple extracted from conversation.

   Create a new model by parsing and validating input data from keyword arguments.
   Raises :exc:`pydantic_core.ValidationError` if the input data cannot be validated
   to form a valid model.

   ``self`` is explicitly positional-only to allow ``self`` as a field name.

.. py:class:: MemoryAgent(/, **data)

   Bases: :py:obj:`haive.agents.react.agent.ReactAgent`

   ReactAgent with persistent memory, KG extraction, and auto-summarization.

   Features:

   1. Memory tools bound to a store (save/search memories + KG triples)
   2. Auto-context: searches the store for relevant memories before each response
   3. Auto-summarize: when context length exceeds a threshold, summarize and store
   4. KG extraction: extracts knowledge triples from conversations (post-response)
   5. Integration points for document_modifiers (summarizer, KG pipelines)

   Store options:

   - Pass ``store=`` directly (any LangGraph BaseStore)
   - Pass ``connection_string=`` for PostgreSQL (preferred for production)
   - Default: InMemoryStore (dev only)

   Create a new model by parsing and validating input data from keyword arguments.
   Raises :exc:`pydantic_core.ValidationError` if the input data cannot be validated
   to form a valid model.

   ``self`` is explicitly positional-only to allow ``self`` as a field name.

   .. py:method:: connect_neo4j(config = None)

      Connect to Neo4j for graph-based KG queries.

      :param config: Neo4jKGConfig instance, or None for env var defaults
      :returns: Neo4jKGStore instance

   .. py:method:: create_graph_query_tool()

      Create a LangChain tool that queries the Neo4j KG with natural language.

      Uses GraphDBRAGAgent internally to generate Cypher from questions.
      Requires a Neo4j connection (:py:meth:`connect_neo4j` must be called first).

   .. py:method:: extract_kg_from_document(text, allowed_nodes = None)

      Extract KG triples from a document using GraphTransformer.

      Uses haive.agents.document_modifiers.kg for full document-level knowledge
      graph extraction (more thorough than conversation extraction).

      :param text: Document text to extract from
      :param allowed_nodes: Optional list of entity types to extract
      :returns: List of extracted triples as dicts

   .. py:method:: get_store()

      Get the underlying store for direct access.

   .. py:method:: query_kg(entity)

      Query KG triples for an entity from Neo4j.

      :param entity: Entity name to look up
      :returns: List of ``{subject, predicate, object}`` dicts

   .. py:method:: query_kg_cypher(cypher, params = None)

      Execute a raw Cypher query against the Neo4j KG.

      :param cypher: Cypher query string
      :param params: Query parameters
      :returns: List of result records

   .. py:method:: run(input_data = None, debug = None, **kwargs)

      Run with memory: load context -> respond -> extract KG -> maybe summarize.

      Flow:

      1. Pre-hook: load memory context (memories + KG triples + summaries)
      2. Execute: ReactAgent responds (may call memory tools)
      3. Post-hook: extract KG triples from the conversation
      4. Post-hook: auto-summarize if context length exceeds the threshold

   .. py:method:: sync_kg_to_neo4j()

      Sync all KG triples from the store to Neo4j.

      Creates the KG in Neo4j from existing triples in the LangGraph store.
      :py:meth:`connect_neo4j` must be called first.

      :returns: Number of triples synced

   .. py:attribute:: model_config

      Configuration for the model; should be a dictionary conforming to
      :class:`pydantic.ConfigDict`.

.. py:function:: create_memory_agent(name = 'memory_agent', store = None, connection_string = None, extra_tools = None, user_id = 'default', auto_extract_kg = True, summarize_threshold = 4000, neo4j_config = None, **kwargs)

   Factory for creating a memory agent with the store and memory tools pre-wired.

   This is the recommended way to create a MemoryAgent. It resolves the store,
   creates the memory tools, and passes them into the engine so that ReactAgent's
   tool routing works correctly.

   :param name: Agent name
   :param store: Direct store instance (takes precedence over connection_string)
   :param connection_string: PostgreSQL connection string for production
   :param extra_tools: Additional tools beyond the memory tools
   :param user_id: User ID for memory scoping
   :param auto_extract_kg: Enable automatic KG triple extraction from conversations
   :param summarize_threshold: Token count threshold for auto-summarization
   :param neo4j_config: Neo4jKGConfig instance, True for env var defaults, or None to skip
   :param \*\*kwargs: Additional MemoryAgent/ReactAgent kwargs
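The ``{subject, predicate, object}`` shape that ``query_kg`` returns can be sketched with plain stdlib dataclasses. This is an illustrative stand-in only: the real ``ExtractedTriple`` is a pydantic model, and the in-memory list plus the local ``query_kg`` helper below are assumptions, not haive APIs.

```python
from dataclasses import dataclass, asdict

# Illustrative stand-in for ExtractedTriple (the real class is a
# pydantic BaseModel): a triple is just (subject, predicate, object).
@dataclass
class Triple:
    subject: str
    predicate: str
    object: str

# Hypothetical in-memory triple list, mimicking triples persisted in a store.
triples = [
    Triple("Ada", "works_at", "Initech"),
    Triple("Ada", "knows", "Grace"),
    Triple("Grace", "works_at", "Hooli"),
]

def query_kg(entity: str) -> list[dict]:
    """Return every triple mentioning the entity, as {subject, predicate,
    object} dicts -- the documented return shape of MemoryAgent.query_kg()."""
    return [asdict(t) for t in triples if entity in (t.subject, t.object)]

print(query_kg("Ada"))
```

The real method runs this lookup against Neo4j rather than a Python list, but the returned dict shape is the same.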
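The auto-summarize post-hook in ``run`` can be approximated as: estimate the token count of the conversation and, once it exceeds ``summarize_threshold``, fold older turns into a stored summary. The sketch below is stdlib-only and hypothetical -- the 4-characters-per-token heuristic, the helper names, and the "keep the last two turns" policy are all assumptions, not the haive implementation.

```python
# Rough sketch of the auto-summarize decision in MemoryAgent.run()'s
# post-hook. All names and heuristics here are illustrative.
def estimate_tokens(messages: list[str]) -> int:
    # Crude heuristic: roughly 4 characters per token (an assumption).
    return sum(len(m) for m in messages) // 4

def maybe_summarize(messages: list[str], threshold: int = 4000) -> list[str]:
    """If the estimated context exceeds the threshold, replace older turns
    with a summary placeholder and keep only the most recent turns."""
    if estimate_tokens(messages) <= threshold:
        return messages
    recent = messages[-2:]  # keep the latest exchange verbatim
    summary = f"[summary of {len(messages) - 2} earlier turns]"
    return [summary, *recent]

short = ["hi", "hello"]
long_chat = ["x" * 3000] * 10   # ~7500 estimated tokens, above the default 4000
assert maybe_summarize(short) == short
print(len(maybe_summarize(long_chat)))  # prints 3: summary + 2 recent turns
```

In the real agent the summary is produced by an LLM (optionally via the document_modifiers summarizer) and persisted to the store, so it can be reloaded by the pre-hook on later turns.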