haive.core.engine.aug_llm.factory¶

Factory for creating LLM chain runnables from AugLLMConfig.

This module provides a specialized factory implementation that transforms AugLLMConfig configurations into executable LLM chain runnables. It enforces a clean separation between configuration (AugLLMConfig) and runtime creation (AugLLMFactory), allowing for runtime overrides and specialized handling.

Key features:

  • Runtime configuration overrides for flexible deployment

  • Structured output handling with multiple approaches (v1/v2)

  • Comprehensive tool binding with graceful fallbacks

  • Chain composition with preprocessing and postprocessing

  • Detailed logging for debugging and monitoring

The factory handles the complex process of assembling different components (LLMs, prompts, tools, parsers) into a cohesive, executable chain while respecting the configuration specifications from AugLLMConfig.

Classes¶

AugLLMFactory

Factory for creating structured LLM runnables from AugLLMConfig with flexible message handling.

SanitizedBaseModelTool

Wrapper for BaseModel tools with sanitized names.

Module Contents¶

class haive.core.engine.aug_llm.factory.AugLLMFactory(aug_config, config_params=None)[source]¶

Factory for creating structured LLM runnables from AugLLMConfig with flexible message handling.

This factory class takes an AugLLMConfig instance and transforms it into an executable LLM chain runnable, applying any runtime configuration overrides in the process. It handles the complex assembly of various components including LLM initialization, tool binding, structured output configuration, and chain composition.

The factory follows a builder pattern, handling each aspect of chain creation in discrete steps while maintaining proper validation and logging throughout the process. It provides graceful fallbacks for various scenarios and specialized handling for different tool and output configurations.

aug_config¶

The configuration object that defines how the runnable should be constructed.

Type:

AugLLMConfig

config_params¶

Runtime configuration overrides that take precedence over the settings in aug_config.

Type:

Dict[str, Any]

Examples

>>> from haive.core.engine.aug_llm.config import AugLLMConfig
>>> from haive.core.engine.aug_llm.factory import AugLLMFactory
>>>
>>> # Create a base configuration
>>> config = AugLLMConfig(name="text_summarizer", system_message="Summarize text concisely.")
>>>
>>> # Create a factory with runtime overrides
>>> factory = AugLLMFactory(
...     config,
...     config_params={"temperature": 0.3, "max_tokens": 200}
... )
>>>
>>> # Build the runnable
>>> summarizer = factory.create_runnable()
>>>
>>> # Use the runnable
>>> summary = summarizer.invoke("Long text to summarize...")

Initialize the factory with an AugLLMConfig.

Parameters:
  • aug_config (Any) – Configuration for the LLM chain

  • config_params (dict[str, Any] | None) – Optional runtime parameters to override defaults

create_runnable()[source]¶

Create the complete runnable chain with proper message handling.

Assembles a fully configured runnable chain based on the AugLLMConfig settings and any runtime overrides. This method performs several key steps:

  1. Ensures message placeholders are properly configured

  2. Initializes the LLM with appropriate parameters

  3. Binds tools to the LLM if specified

  4. Configures structured output handling

  5. Builds the complete chain with prompt templates

  6. Adds pre/post processing functions if specified

Returns:

A complete, executable LLM chain that can be invoked with input data to generate responses.

Return type:

Runnable

Raises:

ValueError – If the LLM cannot be instantiated from the configuration.

Examples

>>> factory = AugLLMFactory(config)
>>> runnable = factory.create_runnable()
>>> response = runnable.invoke("What is the capital of France?")
>>> print(response)
class haive.core.engine.aug_llm.factory.SanitizedBaseModelTool(base_model_class)[source]¶

Wrapper for BaseModel tools with sanitized names.

This ensures BaseModel tools have OpenAI-compliant names that match the force_tool_choice configuration.
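The sanitization step can be sketched as follows, assuming the OpenAI constraint that tool names contain only letters, digits, underscores, and hyphens, up to 64 characters. The helper name is illustrative, not part of the actual class API:

```python
import re

def sanitize_tool_name(name: str) -> str:
    """Reduce an arbitrary class name to an OpenAI-compliant tool name.

    Replaces any character outside [a-zA-Z0-9_-] with an underscore and
    truncates to the 64-character limit.
    """
    cleaned = re.sub(r"[^a-zA-Z0-9_-]", "_", name)
    return cleaned[:64]
```

A wrapper like SanitizedBaseModelTool can apply such a function to the BaseModel class name so that the bound tool name matches what force_tool_choice expects.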

Initialize the wrapper with the original BaseModel class.

Parameters:

base_model_class (type[pydantic.BaseModel]) – The Pydantic BaseModel class to wrap.

model_json_schema(*args, **kwargs)[source]¶

Delegate model_json_schema to the original BaseModel.

schema(*args, **kwargs)[source]¶

Delegate schema generation to the original BaseModel.