HAP Models¶
The models module provides the core data structures for HAP workflows.
HAPContext¶
The HAPContext class is the central state container that flows through HAP execution.
Key Features¶
StateSchema Inheritance: Properly integrates with Haive's state management
Execution Tracking: Records the path through the graph
Metadata Storage: Keeps agent-specific information
Backward Compatibility: Supports legacy properties from earlier versions
Usage Example¶
from haive.hap.models.context import HAPContext
# Create context
context = HAPContext()
# Track execution
context.execution_path.append("analyzer")
context.execution_path.append("summarizer")
# Store metadata
context.agent_metadata["analyzer"] = {
    "duration": 1.5,
    "tokens_used": 150,
    "tool_calls": ["word_counter"],
}
# Use backward compatibility
context.inputs["text"] = "Document to process"
context.outputs["summary"] = "Processed summary"
# Serialize/deserialize
data = context.model_dump()
restored = HAPContext.model_validate(data)
HAPGraph¶
The HAPGraph class manages the workflow structure.
Graph Building¶
from haive.hap.models.graph import HAPGraph
graph = HAPGraph()
# Add nodes with agents
graph.add_agent_node("start", agent1, next_nodes=["middle"])
graph.add_agent_node("middle", agent2, next_nodes=["end"])
graph.add_agent_node("end", agent3)
# Or use entrypoints
graph.add_entrypoint_node(
    "processor",
    "mymodule.agents:ProcessorAgent",
    next_nodes=["validator"],
)
# Set entry point
graph.entry_node = "start"
# Get execution order
order = graph.topological_order() # ["start", "middle", "end"]
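The ordering returned by topological_order can be understood as a standard topological sort over next_nodes edges. A minimal standalone sketch using Kahn's algorithm (the function name and the plain-dict edge format here are illustrative, not the actual HAPGraph internals):

```python
from collections import deque

def topo_order(edges: dict) -> list:
    """Kahn's algorithm: repeatedly emit nodes whose predecessors are all done."""
    indegree = {n: 0 for n in edges}
    for succs in edges.values():
        for s in succs:
            indegree[s] = indegree.get(s, 0) + 1
    queue = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for s in edges.get(node, ()):
            indegree[s] -= 1
            if indegree[s] == 0:
                queue.append(s)
    if len(order) != len(indegree):
        raise ValueError("graph contains a cycle")
    return order

print(topo_order({"start": ["middle"], "middle": ["end"], "end": []}))
# ['start', 'middle', 'end']
```

Any node with no remaining unfinished predecessors is ready to run, which is also why a cycle makes the ordering (and the workflow) impossible.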
HAPNode¶
Individual nodes in the graph.
Node Types¶
Nodes can contain either:
Agent Instance: Direct agent object
Agent Entrypoint: String like "module:ClassName"
from haive.hap.models.graph import HAPNode
# Node with agent instance
node1 = HAPNode(
    id="worker",
    agent_instance=my_agent,
    next_nodes=["reviewer"]
)
# Node with entrypoint
node2 = HAPNode(
    id="reviewer",
    agent_entrypoint="haive.agents.simple:SimpleAgent"
)
# Load agent when needed
agent = await node2.load_agent()
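The "module:ClassName" entrypoint format allows the agent class to be imported only when load_agent is called. A sketch of how such a string can be resolved with the standard library (resolve_entrypoint is an illustrative helper, not part of the HAP API; the example resolves a stdlib class purely to demonstrate the syntax):

```python
import importlib

def resolve_entrypoint(entrypoint: str):
    """Split 'module:ClassName' and import the named attribute lazily."""
    module_name, _, attr = entrypoint.partition(":")
    module = importlib.import_module(module_name)
    return getattr(module, attr)

# Same "module:ClassName" syntax, applied to a stdlib class
OrderedDict = resolve_entrypoint("collections:OrderedDict")
```

Deferring the import this way keeps graph construction cheap and avoids importing agent code that a given run never reaches.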
Backward Compatibility¶
For backward compatibility with earlier versions, the following aliases are provided:
from haive.hap.models import (
    HAPContext,  # Canonical name, unchanged
    AgentGraph,  # Alias for HAPGraph
    AgentNode,   # Alias for HAPNode
)
Property Mappings¶
HAPContext maintains these backward-compatible properties:
| Old Property | New Field | Usage |
|---|---|---|
| `inputs` | | Input data storage |
| `outputs` | | Output data storage |
| | | State information |
| | | Metadata storage |
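One common way such backward-compatible properties are implemented is to delegate the legacy attribute to a bucket inside a new field. A minimal sketch of the pattern (ContextSketch and its field layout are illustrative stand-ins, not the actual HAPContext implementation):

```python
from typing import Any, Dict

class ContextSketch:
    """Illustrates a legacy property delegating to a new storage field."""

    def __init__(self) -> None:
        self.graph_context: Dict[str, Any] = {}

    @property
    def inputs(self) -> Dict[str, Any]:
        # Old-style ctx.inputs[...] access reads and writes the new field
        return self.graph_context.setdefault("inputs", {})

ctx = ContextSketch()
ctx.inputs["text"] = "Document to process"
print(ctx.graph_context)
# {'inputs': {'text': 'Document to process'}}
```

Because the property returns the live dict, old code that mutates `ctx.inputs` keeps working while all state actually lives in the new field.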
Model Relationships¶
HAPGraph
├── nodes: Dict[str, HAPNode]
├── entry_node: str
└── metadata: Dict[str, Any]

HAPNode
├── id: str
├── agent_instance: Optional[Agent]
├── agent_entrypoint: Optional[str]
└── next_nodes: List[str]

HAPContext (extends StateSchema)
├── execution_path: List[str]
├── agent_metadata: Dict[str, Any]
├── graph_context: Dict[str, Any]
└── legacy fields (backward compatibility)
Best Practices¶
Use Type Hints: Define clear types for all fields
Validate Early: Use Pydantic validation
Track Metadata: Store useful debugging info
Handle None: Check optional fields
Serialize Safely: Use model_dump/model_validate
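The "Handle None" practice matters for nodes in particular, since agent_instance and agent_entrypoint are both optional. A standalone sketch of defensive access (NodeSketch is a simplified stand-in, not the real HAPNode):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NodeSketch:
    """Simplified stand-in for HAPNode with its two optional agent fields."""
    id: str
    agent_instance: Optional[object] = None
    agent_entrypoint: Optional[str] = None

def describe(node: NodeSketch) -> str:
    # Check each optional field before using it
    if node.agent_instance is not None:
        return f"{node.id}: in-memory agent"
    if node.agent_entrypoint is not None:
        return f"{node.id}: lazy entrypoint {node.agent_entrypoint}"
    return f"{node.id}: no agent configured"

print(describe(NodeSketch(id="reviewer",
                          agent_entrypoint="haive.agents.simple:SimpleAgent")))
# reviewer: lazy entrypoint haive.agents.simple:SimpleAgent
```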
Common Patterns¶
Sequential Workflow¶
graph = HAPGraph()
for i, agent in enumerate(agents):
    next_nodes = [f"step_{i+1}"] if i < len(agents) - 1 else []
    graph.add_agent_node(f"step_{i}", agent, next_nodes)
graph.entry_node = "step_0"
Branching Workflow¶
graph = HAPGraph()
graph.add_agent_node("classifier", classifier, ["type_a", "type_b"])
graph.add_agent_node("type_a", handler_a)
graph.add_agent_node("type_b", handler_b)
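A branching workflow implies some routing decision after the classifier runs. A minimal sketch of such a router, assuming the classifier produces a label; the label values and the route function are made up for illustration and are not part of the HAP API:

```python
def route(classification: str) -> str:
    """Map a classifier label to the id of the next node to execute."""
    # Hypothetical labels; in practice these come from the classifier agent
    return "type_a" if classification == "a" else "type_b"

print(route("a"))  # type_a
print(route("b"))  # type_b
```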
Parallel Execution¶
graph = HAPGraph()
graph.add_agent_node("splitter", splitter, ["worker1", "worker2", "worker3"])
graph.add_agent_node("worker1", w1, ["joiner"])
graph.add_agent_node("worker2", w2, ["joiner"])
graph.add_agent_node("worker3", w3, ["joiner"])
graph.add_agent_node("joiner", joiner)