haive.core.models.llm.base¶
Base LLM configuration with model metadata support.
This module provides base classes and implementations for LLM providers with support for model metadata, context windows, and capabilities.
Deprecated since version 0.2.0: This module is deprecated. Use haive.core.models.llm.providers instead.
The individual provider configurations have been moved to separate modules
for better organization and maintainability.
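As a rough migration sketch (assuming the provider configuration classes keep the same names in the new module), only the import path needs to change:

# Deprecated location
from haive.core.models.llm.base import OpenAILLMConfig

# Preferred location going forward
from haive.core.models.llm.providers import OpenAILLMConfig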
Classes¶
AI21LLMConfig | Configuration for AI21 models.
AlephAlphaLLMConfig | Configuration for Aleph Alpha models.
AnthropicLLMConfig | Configuration for Anthropic models.
AzureLLMConfig | Configuration specific to Azure OpenAI.
BedrockLLMConfig | Configuration for AWS Bedrock models.
CohereLLMConfig | Configuration for Cohere models.
DatabricksLLMConfig | Configuration for Databricks models.
DeepSeekLLMConfig | Configuration for DeepSeek models.
FireworksAILLMConfig | Configuration for Fireworks AI models.
GeminiLLMConfig | Configuration for Google Gemini models.
GooseAILLMConfig | Configuration for GooseAI models.
GroqLLMConfig | Configuration for Groq models.
HuggingFaceLLMConfig | Configuration for Hugging Face models.
LLMConfig | Base configuration for Language Model providers with security and metadata support.
LlamaCppLLMConfig | Configuration for Llama.cpp local models.
MistralLLMConfig | Configuration for Mistral models.
MosaicMLLLMConfig | Configuration for MosaicML models.
NLPCloudLLMConfig | Configuration for NLP Cloud models.
NVIDIALLMConfig | Configuration for NVIDIA AI Endpoints models.
OllamaLLMConfig | Configuration for Ollama local models.
OpenAILLMConfig | Configuration for OpenAI models.
OpenLMLLMConfig | Configuration for OpenLM models.
PerplexityLLMConfig | Configuration for Perplexity AI models.
PetalsLLMConfig | Configuration for Petals distributed models.
ReplicateLLMConfig | Configuration for Replicate models.
TogetherAILLMConfig | Configuration for Together AI models.
UpstageLLMConfig | Configuration for Upstage models.
VertexAILLMConfig | Configuration for Google Vertex AI models.
WatsonxLLMConfig | Configuration for IBM watsonx models.
XAILLMConfig | Configuration for xAI models.
Module Contents¶
- class haive.core.models.llm.base.AI21LLMConfig(/, **data)¶
Bases:
LLMConfig
Configuration for AI21 models.
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- Parameters:
data (Any)
- instantiate(**kwargs)¶
Instantiate AI21 Chat model.
- Return type:
Any
- classmethod load_api_key(v)¶
Load API key from environment if not provided.
- Parameters:
v (pydantic.SecretStr)
- Return type:
pydantic.SecretStr
- class haive.core.models.llm.base.AlephAlphaLLMConfig(/, **data)¶
Bases:
LLMConfig
Configuration for Aleph Alpha models.
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- Parameters:
data (Any)
- instantiate(**kwargs)¶
Instantiate Aleph Alpha Chat model.
- Return type:
Any
- classmethod load_api_key(v)¶
Load API key from environment if not provided.
- Parameters:
v (pydantic.SecretStr)
- Return type:
pydantic.SecretStr
- class haive.core.models.llm.base.AnthropicLLMConfig(/, **data)¶
Bases:
LLMConfig
Configuration for Anthropic models.
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- Parameters:
data (Any)
- instantiate(**kwargs)¶
Instantiate Anthropic Chat model.
- Return type:
Any
- classmethod load_api_key(v)¶
Load API key from environment if not provided.
- Parameters:
v (pydantic.SecretStr)
- Return type:
pydantic.SecretStr
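A minimal usage sketch, assuming load_api_key falls back to an ANTHROPIC_API_KEY environment variable (the variable name is an assumption, not confirmed by this module):

import os

from haive.core.models.llm.base import AnthropicLLMConfig

os.environ.setdefault("ANTHROPIC_API_KEY", "your-key")  # assumed fallback variable

config = AnthropicLLMConfig(model="claude-3-opus-20240229")
llm = config.instantiate()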
- class haive.core.models.llm.base.AzureLLMConfig(/, **data)¶
Bases:
LLMConfig
Configuration specific to Azure OpenAI.
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- Parameters:
data (Any)
- instantiate(**kwargs)¶
Instantiate Azure OpenAI Chat model with robust error handling.
- Return type:
Any
- classmethod load_api_base(v)¶
Load API base from environment if not provided.
- classmethod load_api_key(v)¶
Load API key from environment if not provided.
- Parameters:
v (pydantic.SecretStr)
- Return type:
pydantic.SecretStr
- classmethod load_api_type(v)¶
Load API type from environment if not provided.
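A configuration sketch for Azure OpenAI; the environment variable names below are assumptions about what load_api_key / load_api_base read, not values confirmed by this module:

import os

from haive.core.models.llm.base import AzureLLMConfig

os.environ.setdefault("AZURE_OPENAI_API_KEY", "your-key")  # assumed variable name
os.environ.setdefault("AZURE_OPENAI_ENDPOINT", "https://your-resource.openai.azure.com/")  # assumed variable name

config = AzureLLMConfig(model="gpt-4")
llm = config.instantiate()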
- class haive.core.models.llm.base.BedrockLLMConfig(/, **data)¶
Bases:
LLMConfig
Configuration for AWS Bedrock models.
AWS Bedrock provides access to foundation models from various providers including Anthropic, AI21, Cohere, and Amazon’s own models.
- Parameters:
data (Any)
- model_id¶
The Bedrock model ID (e.g., ‘anthropic.claude-v2’)
- region_name¶
AWS region for Bedrock service
- aws_access_key_id¶
AWS access key (optional, uses AWS credentials chain)
- aws_secret_access_key¶
AWS secret key (optional, uses AWS credentials chain)
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- instantiate(**kwargs)¶
Instantiate AWS Bedrock Chat model.
- Return type:
Any
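A minimal sketch using the documented fields; when the AWS key fields are omitted, the standard AWS credentials chain is used:

from haive.core.models.llm.base import BedrockLLMConfig

config = BedrockLLMConfig(
    model_id="anthropic.claude-v2",  # example model ID from the field documentation above
    region_name="us-east-1",         # illustrative region
)
llm = config.instantiate()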
- class haive.core.models.llm.base.CohereLLMConfig(/, **data)¶
Bases:
LLMConfig
Configuration for Cohere models.
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- Parameters:
data (Any)
- instantiate(**kwargs)¶
Instantiate Cohere Chat model.
- Return type:
Any
- classmethod load_api_key(v)¶
Load API key from environment if not provided.
- Parameters:
v (pydantic.SecretStr)
- Return type:
pydantic.SecretStr
- class haive.core.models.llm.base.DatabricksLLMConfig(/, **data)¶
Bases:
LLMConfig
Configuration for Databricks models.
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- Parameters:
data (Any)
- instantiate(**kwargs)¶
Instantiate Databricks Chat model.
- Return type:
Any
- class haive.core.models.llm.base.DeepSeekLLMConfig(/, **data)¶
Bases:
LLMConfig
Configuration for DeepSeek models.
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- Parameters:
data (Any)
- instantiate(**kwargs)¶
Instantiate DeepSeek Chat model.
- Return type:
Any
- class haive.core.models.llm.base.FireworksAILLMConfig(/, **data)¶
Bases:
LLMConfig
Configuration for Fireworks AI models.
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- Parameters:
data (Any)
- instantiate(**kwargs)¶
Instantiate Fireworks AI Chat model.
- Return type:
Any
- classmethod load_api_key(v)¶
Load API key from environment if not provided.
- Parameters:
v (pydantic.SecretStr)
- Return type:
pydantic.SecretStr
- class haive.core.models.llm.base.GeminiLLMConfig(/, **data)¶
Bases:
LLMConfig
Configuration for Google Gemini models.
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- Parameters:
data (Any)
- instantiate(**kwargs)¶
Instantiate Google Gemini Chat model.
- Return type:
Any
- classmethod load_api_key(v)¶
Load API key from environment if not provided.
- Parameters:
v (pydantic.SecretStr)
- Return type:
pydantic.SecretStr
- class haive.core.models.llm.base.GooseAILLMConfig(/, **data)¶
Bases:
LLMConfig
Configuration for GooseAI models.
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- Parameters:
data (Any)
- instantiate(**kwargs)¶
Instantiate GooseAI Chat model.
- Return type:
Any
- classmethod load_api_key(v)¶
Load API key from environment if not provided.
- Parameters:
v (pydantic.SecretStr)
- Return type:
pydantic.SecretStr
- class haive.core.models.llm.base.GroqLLMConfig(/, **data)¶
Bases:
LLMConfig
Configuration for Groq models.
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- Parameters:
data (Any)
- instantiate(**kwargs)¶
Instantiate Groq Chat model.
- Return type:
Any
- classmethod load_api_key(v)¶
Load API key from environment if not provided.
- Parameters:
v (pydantic.SecretStr)
- Return type:
pydantic.SecretStr
- class haive.core.models.llm.base.HuggingFaceLLMConfig(/, **data)¶
Bases:
LLMConfig
Configuration for Hugging Face models.
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- Parameters:
data (Any)
- instantiate(**kwargs)¶
Instantiate Hugging Face model.
- Return type:
Any
- classmethod load_api_key(v)¶
Load API key from environment if not provided.
- Parameters:
v (pydantic.SecretStr)
- Return type:
pydantic.SecretStr
- class haive.core.models.llm.base.LLMConfig(/, **data)¶
Bases:
haive.core.common.mixins.secure_config.SecureConfigMixin, haive.core.models.metadata_mixin.ModelMetadataMixin, haive.core.models.llm.rate_limiting_mixin.RateLimitingMixin, pydantic.BaseModel
Base configuration for Language Model providers with security and metadata support.
This class provides:
1. Secure API key handling with environment variable fallbacks
2. Model metadata access (context windows, capabilities, pricing)
3. Common configuration parameters
4. Graph transformation utilities
5. Rate limiting capabilities via RateLimitingMixin
All LLM configurations inherit from this base class, providing a consistent interface for configuration, instantiation, and management of language models from various providers.
- Parameters:
data (Any)
- provider¶
The LLM provider enum value
- model¶
The specific model identifier
- name¶
Optional friendly name for the model
- api_key¶
Secure storage of API key with env fallback
- cache_enabled¶
Whether to enable response caching
- cache_ttl¶
Time-to-live for cached responses
- extra_params¶
Additional provider-specific parameters
- debug¶
Enable detailed debug output
Examples
Direct instantiation (not recommended):
config = LLMConfig(
    provider=LLMProvider.OPENAI,
    model="gpt-4",
    api_key=SecretStr("your-key"),
)
Using provider-specific config (recommended):
config = OpenAILLMConfig(
    model="gpt-4",
    temperature=0.7,
)
llm = config.instantiate()
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- check_context_window_fit(messages, tools=None, reserve_output_tokens=1000)¶
Check if messages fit within the model’s context window.
This method helps prevent “context length exceeded” errors by validating message length before making API calls.
- Args:
messages: Sequence of chat messages to check
tools: Optional sequence of function schemas for tool calls
reserve_output_tokens: Number of tokens to reserve for output
- Returns:
Dictionary with fit analysis:
{
    "fits": bool,
    "input_tokens": int,
    "context_window": int,
    "available_tokens": int,
    "tokens_over_limit": int  # 0 if fits, positive if over
}
Examples
>>> config = OpenAILLMConfig(model="gpt-3.5-turbo") >>> >>> # Check if messages fit >>> fit_check = config.check_context_window_fit(messages) >>> if not fit_check["fits"]: >>> print(f"Messages exceed context window by {fit_check['tokens_over_limit']} tokens")
- Parameters:
messages (collections.abc.Sequence[langchain_core.messages.AnyMessage])
tools (collections.abc.Sequence[dict[str, Any]] | None)
reserve_output_tokens (int)
- Return type:
- create_graph_transformer()¶
Creates an LLMGraphTransformer instance using the LLM.
- Return type:
Any
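A usage sketch; the convert_to_graph_documents call assumes the returned object follows LangChain's LLMGraphTransformer API, which is an assumption rather than something documented here:

from langchain_core.documents import Document

config = OpenAILLMConfig(model="gpt-4")
transformer = config.create_graph_transformer()

docs = [Document(page_content="Marie Curie discovered polonium and radium.")]
graph_docs = transformer.convert_to_graph_documents(docs)  # assumed LLMGraphTransformer-style method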
- estimate_cost_from_messages(messages, tools=None, include_output_estimate=True, estimated_output_tokens=None)¶
Estimate the cost of processing messages with this model.
This method combines token counting with pricing metadata to estimate costs before making API calls, helping with budget management and cost optimization.
- Args:
messages: Sequence of chat messages
tools: Optional sequence of function schemas for tool calls
include_output_estimate: Whether to include estimated output costs
estimated_output_tokens: Manual override for output token estimation
- Returns:
Dictionary with cost breakdown:
{
    "input_tokens": int,
    "input_cost": float,
    "estimated_output_tokens": int,
    "estimated_output_cost": float,
    "total_estimated_cost": float
}
Examples
>>> from langchain_core.messages import HumanMessage
>>>
>>> config = OpenAILLMConfig(model="gpt-4")
>>> messages = [HumanMessage(content="Write a short story about AI.")]
>>>
>>> cost_estimate = config.estimate_cost_from_messages(messages)
>>> print(f"Estimated total cost: ${cost_estimate['total_estimated_cost']:.6f}")
- Parameters:
messages (collections.abc.Sequence[langchain_core.messages.AnyMessage])
tools (collections.abc.Sequence[dict[str, Any]] | None)
include_output_estimate (bool)
estimated_output_tokens (int | None)
- Return type:
- estimate_cost_from_text(text, include_output_estimate=True, estimated_output_tokens=None)¶
Estimate the cost of processing a single text string.
- Args:
text: Raw text string to estimate cost for
include_output_estimate: Whether to include estimated output costs
estimated_output_tokens: Manual override for output token estimation
- Returns:
Dictionary with cost breakdown (same format as estimate_cost_from_messages)
Examples
>>> config = AnthropicLLMConfig(model="claude-3-opus-20240229")
>>> text = "Explain quantum computing in simple terms."
>>>
>>> cost_estimate = config.estimate_cost_from_text(text)
>>> print(f"Input cost: ${cost_estimate['input_cost']:.6f}")
- format_metadata_for_display()¶
Format metadata for structured display or comparison.
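A small usage sketch; the exact return shape is not documented here, so the result is simply printed:

config = OpenAILLMConfig(model="gpt-4")
print(config.format_metadata_for_display())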
- get_num_tokens(text)¶
Count tokens in a single text string.
This method instantiates the model temporarily to count tokens, preserving the serializability of the configuration object.
- Args:
text: Raw text string to count tokens for
- Returns:
Integer count of tokens in the text
Examples
>>> config = OpenAILLMConfig(model="gpt-3.5-turbo") >>> text = "Hello, world!" >>> token_count = config.get_num_tokens(text) >>> print(f"Tokens in text: {token_count}")
- get_num_tokens_from_messages(messages, tools=None)¶
Count tokens in a sequence of messages.
This method instantiates the model temporarily to count tokens, preserving the serializability of the configuration object.
- Args:
messages: Sequence of chat messages (HumanMessage, AIMessage, etc.)
tools: Optional sequence of function schemas for tool calls
- Returns:
Integer count of tokens across all messages
Examples
>>> from langchain_core.messages import HumanMessage, AIMessage
>>>
>>> config = OpenAILLMConfig(model="gpt-3.5-turbo")
>>> messages = [
...     HumanMessage(content="Translate 'Hello' to French."),
...     AIMessage(content="Bonjour"),
... ]
>>>
>>> token_count = config.get_num_tokens_from_messages(messages)
>>> print(f"Total tokens: {token_count}")
- Parameters:
messages (collections.abc.Sequence[langchain_core.messages.AnyMessage])
tools (collections.abc.Sequence[dict[str, Any]] | None)
- Return type:
- abstractmethod instantiate(**kwargs)¶
Abstract method to instantiate the configured LLM.
This method must be implemented by all provider-specific subclasses to handle the actual creation of the LLM instance.
- Parameters:
**kwargs – Additional parameters to pass to the LLM constructor
- Returns:
Instantiated LLM object ready for use
- Raises:
NotImplementedError – If not overridden by a subclass
- Return type:
langchain.chat_models.base.BaseChatModel
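A sketch of how a provider-specific subclass might override instantiate; the ChatOpenAI wiring here is illustrative and is not the actual OpenAILLMConfig implementation:

from langchain_openai import ChatOpenAI

class MyProviderLLMConfig(LLMConfig):
    def instantiate(self, **kwargs):
        # Merge stored extra_params with call-time overrides and build the chat model
        params = {**self.extra_params, **kwargs}
        return ChatOpenAI(
            model=self.model,
            api_key=self.api_key.get_secret_value() if self.api_key else None,
            **params,
        )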
- load_model_metadata()¶
Load and validate model metadata after initialization.
- Return type:
Self
- set_default_name()¶
Set a default name for the model if not provided.
- Return type:
Self
- model_config¶
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class haive.core.models.llm.base.LlamaCppLLMConfig(/, **data)¶
Bases:
LLMConfig
Configuration for Llama.cpp local models.
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- Parameters:
data (Any)
- instantiate(**kwargs)¶
Instantiate Llama.cpp Chat model.
- Return type:
Any
- class haive.core.models.llm.base.MistralLLMConfig(/, **data)¶
Bases:
LLMConfig
Configuration for Mistral models.
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- Parameters:
data (Any)
- instantiate(**kwargs)¶
Instantiate Mistral Chat model.
- Return type:
Any
- classmethod load_api_key(v)¶
Load API key from environment if not provided.
- Parameters:
v (pydantic.SecretStr)
- Return type:
pydantic.SecretStr
- class haive.core.models.llm.base.MosaicMLLLMConfig(/, **data)¶
Bases:
LLMConfig
Configuration for MosaicML models.
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- Parameters:
data (Any)
- instantiate(**kwargs)¶
Instantiate MosaicML Chat model.
- Return type:
Any
- classmethod load_api_key(v)¶
Load API key from environment if not provided.
- Parameters:
v (pydantic.SecretStr)
- Return type:
pydantic.SecretStr
- class haive.core.models.llm.base.NLPCloudLLMConfig(/, **data)¶
Bases:
LLMConfig
Configuration for NLP Cloud models.
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- Parameters:
data (Any)
- instantiate(**kwargs)¶
Instantiate NLP Cloud Chat model.
- Return type:
Any
- classmethod load_api_key(v)¶
Load API key from environment if not provided.
- Parameters:
v (pydantic.SecretStr)
- Return type:
pydantic.SecretStr
- class haive.core.models.llm.base.NVIDIALLMConfig(/, **data)¶
Bases:
LLMConfig
Configuration for NVIDIA AI Endpoints models.
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- Parameters:
data (Any)
- instantiate(**kwargs)¶
Instantiate NVIDIA Chat model.
- Return type:
Any
- class haive.core.models.llm.base.OllamaLLMConfig(/, **data)¶
Bases:
LLMConfig
Configuration for Ollama local models.
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- Parameters:
data (Any)
- instantiate(**kwargs)¶
Instantiate Ollama Chat model.
- Return type:
Any
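A local-model sketch; the model name assumes the corresponding model has already been pulled into a locally running Ollama server:

from haive.core.models.llm.base import OllamaLLMConfig

config = OllamaLLMConfig(model="llama3")
llm = config.instantiate()
print(llm.invoke("Say hello in one word.").content)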
- class haive.core.models.llm.base.OpenAILLMConfig(/, **data)¶
Bases:
LLMConfig
Configuration for OpenAI models.
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- Parameters:
data (Any)
- instantiate(**kwargs)¶
Instantiate OpenAI Chat model.
- Return type:
Any
- classmethod load_api_key(v)¶
Load API key from environment if not provided.
- Parameters:
v (pydantic.SecretStr)
- Return type:
pydantic.SecretStr
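An end-to-end sketch combining configuration, token counting, and invocation; OPENAI_API_KEY is the assumed environment fallback read by load_api_key:

from haive.core.models.llm.base import OpenAILLMConfig

config = OpenAILLMConfig(model="gpt-4", temperature=0.7)
print(config.get_num_tokens("Hello, world!"))

llm = config.instantiate()
response = llm.invoke("Summarize the benefits of configuration objects in one sentence.")
print(response.content)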
- class haive.core.models.llm.base.OpenLMLLMConfig(/, **data)¶
Bases:
LLMConfig
Configuration for OpenLM models.
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- Parameters:
data (Any)
- instantiate(**kwargs)¶
Instantiate OpenLM Chat model.
- Return type:
Any
- classmethod load_api_key(v)¶
Load API key from environment if not provided.
- Parameters:
v (pydantic.SecretStr)
- Return type:
pydantic.SecretStr
- class haive.core.models.llm.base.PerplexityLLMConfig(/, **data)¶
Bases:
LLMConfig
Configuration for Perplexity AI models.
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- Parameters:
data (Any)
- instantiate(**kwargs)¶
Instantiate Perplexity AI Chat model.
- Return type:
Any
- classmethod load_api_key(v)¶
Load API key from environment if not provided.
- Parameters:
v (pydantic.SecretStr)
- Return type:
pydantic.SecretStr
- class haive.core.models.llm.base.PetalsLLMConfig(/, **data)¶
Bases:
LLMConfig
Configuration for Petals distributed models.
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- Parameters:
data (Any)
- instantiate(**kwargs)¶
Instantiate Petals Chat model.
- Return type:
Any
- class haive.core.models.llm.base.ReplicateLLMConfig(/, **data)¶
Bases:
LLMConfig
Configuration for Replicate models.
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- Parameters:
data (Any)
- instantiate(**kwargs)¶
Instantiate Replicate Chat model.
- Return type:
Any
- classmethod load_api_key(v)¶
Load API key from environment if not provided.
- Parameters:
v (pydantic.SecretStr)
- Return type:
pydantic.SecretStr
- class haive.core.models.llm.base.TogetherAILLMConfig(/, **data)¶
Bases:
LLMConfig
Configuration for Together AI models.
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- Parameters:
data (Any)
- instantiate(**kwargs)¶
Instantiate Together AI Chat model.
- Return type:
Any
- classmethod load_api_key(v)¶
Load API key from environment if not provided.
- Parameters:
v (pydantic.SecretStr)
- Return type:
pydantic.SecretStr
- class haive.core.models.llm.base.UpstageLLMConfig(/, **data)¶
Bases:
LLMConfig
Configuration for Upstage models.
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- Parameters:
data (Any)
- instantiate(**kwargs)¶
Instantiate Upstage Chat model.
- Return type:
Any
- class haive.core.models.llm.base.VertexAILLMConfig(/, **data)¶
Bases:
LLMConfig
Configuration for Google Vertex AI models.
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- Parameters:
data (Any)
- instantiate(**kwargs)¶
Instantiate Google Vertex AI Chat model.
- Return type:
Any
- class haive.core.models.llm.base.WatsonxLLMConfig(/, **data)¶
Bases:
LLMConfig
Configuration for IBM watsonx models.
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- Parameters:
data (Any)
- instantiate(**kwargs)¶
Instantiate IBM watsonx Chat model.
- Return type:
Any
- class haive.core.models.llm.base.XAILLMConfig(/, **data)¶
Bases:
LLMConfig
Configuration for xAI models.
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- Parameters:
data (Any)
- instantiate(**kwargs)¶
Instantiate xAI Chat model.
- Return type:
Any