haive.core.models.llm.providers.groq¶
Groq Provider Module.
This module implements the Groq language model provider for the Haive framework, supporting ultra-fast inference with Groq’s Language Processing Units (LPUs).
The provider handles API key management, model configuration, and safe imports of the langchain-groq package dependencies for high-speed LLM inference.
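Since the provider handles API key management, the key is typically supplied through the environment. A minimal, hedged sketch (it is assumed here that the provider falls back to the standard GROQ_API_KEY environment variable when no key is passed explicitly; the placeholder value is not a real key):

```python
import os

# Assumption: GroqProvider reads GROQ_API_KEY from the environment
# when no api_key argument is given. Replace the placeholder with a
# real key obtained from the Groq console.
os.environ.setdefault("GROQ_API_KEY", "gsk_your_key_here")
```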
Examples
Basic usage:

    from haive.core.models.llm.providers.groq import GroqProvider

    provider = GroqProvider(
        model="mixtral-8x7b-32768",
        temperature=0.7,
        max_tokens=1000,
    )
    llm = provider.instantiate()
With streaming for real-time responses:

    provider = GroqProvider(
        model="llama2-70b-4096",
        streaming=True,
        temperature=0.1,
    )
Classes¶
GroqProvider | Groq language model provider configuration.
Module Contents¶
- class haive.core.models.llm.providers.groq.GroqProvider(/, **data)¶
Bases:
haive.core.models.llm.providers.base.BaseLLMProvider
Groq language model provider configuration.
This provider supports Groq's high-speed LLM inference, including Mixtral, Llama 2, and other models optimized to run on Language Processing Units.
- Parameters:
data (Any)
- provider¶
Always LLMProvider.GROQ
- Type:
LLMProvider
Examples
High-speed inference:
    provider = GroqProvider(
        model="mixtral-8x7b-32768",
        temperature=0.7,
        max_tokens=2000,
    )
Streaming responses:
    provider = GroqProvider(
        model="llama2-70b-4096",
        streaming=True,
        temperature=0.1,
    )
Create a new model by parsing and validating input data from keyword arguments.
Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.