haive.core.models.llm.providers.fireworks¶
Fireworks AI Provider Module.
This module implements the Fireworks AI language model provider for the Haive framework, supporting fast inference for open-source models through Fireworks’ optimized infrastructure.
The provider handles API key management, model configuration, and safe imports of the langchain-fireworks package dependencies.
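The "safe import" behavior described above can be sketched with the standard library; this is an illustrative pattern only, and the helper name below is an assumption, not the actual haive internal API:

```python
# Sketch of a "safe import" pattern for an optional dependency.
# The helper name safe_import_fireworks is hypothetical; the real
# haive internals may differ.
import importlib
import importlib.util


def safe_import_fireworks():
    """Return the langchain_fireworks module if installed, else None."""
    if importlib.util.find_spec("langchain_fireworks") is None:
        return None
    return importlib.import_module("langchain_fireworks")


# Callers can fail gracefully when the optional dependency is missing.
module = safe_import_fireworks()
available = module is not None
```

This lets the provider raise a helpful error at configuration time rather than an ImportError deep inside instantiation.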
Examples
Basic usage:
    from haive.core.models.llm.providers.fireworks import FireworksProvider

    provider = FireworksProvider(
        model="accounts/fireworks/models/mixtral-8x7b-instruct",
        temperature=0.7,
        max_tokens=1000,
    )
    llm = provider.instantiate()
With streaming:
    provider = FireworksProvider(
        model="accounts/fireworks/models/llama-v2-70b-chat",
        temperature=0.1,
        stream=True,
        top_p=0.9,
    )
Classes¶
FireworksProvider: Fireworks AI language model provider configuration.
Module Contents¶
- class haive.core.models.llm.providers.fireworks.FireworksProvider(/, **data)¶
Bases: haive.core.models.llm.providers.base.BaseLLMProvider

Fireworks AI language model provider configuration.
This provider supports high-speed inference for open-source models through Fireworks’ optimized infrastructure, including Mixtral, Llama, and others.
- Parameters:
data (Any)
- provider¶
Always LLMProvider.FIREWORKS_AI
- Type: LLMProvider
Examples
Mixtral for reasoning:
    provider = FireworksProvider(
        model="accounts/fireworks/models/mixtral-8x7b-instruct",
        temperature=0.3,
        max_tokens=2000,
    )
Llama 2 with streaming:
    provider = FireworksProvider(
        model="accounts/fireworks/models/llama-v2-70b-chat",
        temperature=0.7,
        stream=True,
        top_p=0.9,
    )
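With stream=True, the instantiated model yields output incrementally rather than as one response. The consumption pattern looks roughly like the following; a stub generator stands in for the real model call, since actual chunks come from langchain-fireworks message objects:

```python
# Illustrative streaming-consumption pattern. fake_stream is a stub
# standing in for a real streamed model call; real chunks are
# LangChain message chunks, not plain strings.
from typing import Iterator


def fake_stream() -> Iterator[str]:
    """Yield response text piece by piece, as a streaming model would."""
    yield from ["Fireworks ", "serves ", "tokens ", "incrementally."]


# Accumulate chunks as they arrive, e.g. to render partial output in a UI.
pieces = []
for chunk in fake_stream():
    pieces.append(chunk)
text = "".join(pieces)
```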
Create a new model by parsing and validating input data from keyword arguments.
Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.