haive.agents.common.models.task_analysis.branching¶
Task branching and decomposition analysis.
This module analyzes how tasks can be broken down into subtasks, identifying parallel execution opportunities, sequential dependencies, and optimal decomposition strategies.
Classes¶
BranchType | Types of task branches and execution patterns.
TaskBranch | Individual branch in task decomposition.
TaskDecomposition | Complete task breakdown into subtasks and execution branches.
Module Contents¶
- class haive.agents.common.models.task_analysis.branching.BranchType¶
-
Types of task branches and execution patterns.
- SEQUENTIAL¶
Tasks that must be executed in order
- PARALLEL¶
Tasks that can be executed simultaneously
- CONDITIONAL¶
Tasks that depend on conditions or outcomes
- ITERATIVE¶
Tasks that repeat with feedback loops
- CONVERGENT¶
Multiple branches that merge into one
- DIVERGENT¶
One task that splits into multiple branches
- INDEPENDENT¶
Completely independent execution streams
- DEPENDENT¶
Branches with complex interdependencies
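The execution patterns above can be sketched as a plain string-valued Enum. This is an illustrative approximation, not the library source; the member names follow the documentation, but the string values chosen here are assumptions.

```python
from enum import Enum

class BranchType(str, Enum):
    # Member names mirror the documented patterns; the string values
    # are assumptions, the actual library may define them differently.
    SEQUENTIAL = "sequential"    # must execute in order
    PARALLEL = "parallel"        # can execute simultaneously
    CONDITIONAL = "conditional"  # depends on conditions or outcomes
    ITERATIVE = "iterative"      # repeats with feedback loops
    CONVERGENT = "convergent"    # multiple branches merge into one
    DIVERGENT = "divergent"      # one task splits into multiple branches
    INDEPENDENT = "independent"  # completely independent streams
    DEPENDENT = "dependent"      # complex interdependencies

print(BranchType.PARALLEL.value)
```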
- class haive.agents.common.models.task_analysis.branching.TaskBranch(/, **data)¶
Bases:
pydantic.BaseModel

Individual branch in task decomposition.
Represents a single execution path or subtask within a larger task decomposition, including its dependencies, requirements, and characteristics.
- Parameters:
data (Any)
- branch_id¶
Unique identifier for this branch
- name¶
Human-readable name for the branch
- description¶
Detailed description of what this branch accomplishes
- branch_type¶
Type of execution pattern for this branch
- estimated_effort¶
Relative effort required (1-10 scale)
- estimated_duration¶
Expected time to complete
- prerequisites¶
Other branches that must complete first
- enables¶
Branches that this branch enables
- resources_needed¶
Specific resources required for this branch
- parallel_compatible¶
Whether this can run in parallel with others
Example
from datetime import timedelta

# Finding Wimbledon winner's birthday - first branch
winner_branch = TaskBranch(
    branch_id="find_winner",
    name="Find Recent Wimbledon Winner",
    description="Look up the most recent Wimbledon championship winner",
    branch_type=BranchType.SEQUENTIAL,
    estimated_effort=3,
    estimated_duration=timedelta(minutes=5),
    prerequisites=[],
    enables=["find_birthday"],
    resources_needed=["web_search", "sports_database"],
)

# Cancer research - complex branch
research_branch = TaskBranch(
    branch_id="mechanism_research",
    name="Research Cancer Mechanisms",
    description="Deep investigation into cellular mechanisms of cancer development",
    branch_type=BranchType.ITERATIVE,
    estimated_effort=10,
    estimated_duration=timedelta(weeks=52),
    prerequisites=["literature_review", "lab_setup"],
    resources_needed=["research_lab", "expert_oncologists", "funding"],
)
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- get_duration_category()¶
Get duration category classification.
- Returns:
String describing duration category
- Return type:
str
- get_effort_category()¶
Get effort category classification.
- Returns:
String describing effort category
- Return type:
str
- has_dependencies()¶
Check if this branch has prerequisite dependencies.
- Returns:
True if branch has prerequisites
- Return type:
bool
- is_enabling()¶
Check if this branch enables other branches.
- Returns:
True if branch enables others
- Return type:
bool
- is_high_risk()¶
Check if this is a high-risk branch.
- Returns:
True if risk level is 4 or 5
- Return type:
bool
- is_likely_to_succeed(threshold=0.7)¶
Check if branch is likely to succeed.
- Parameters:
threshold (float) – Success threshold (defaults to 0.7)
- Returns:
True if the branch's estimated chance of success meets the threshold
- Return type:
bool
- model_config¶
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
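The dependency-related helpers above can be approximated with a plain dataclass stand-in for the pydantic model. The field names follow the documented attributes; the effort cut-offs and default values are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class TaskBranchSketch:
    # Simplified stand-in for the pydantic TaskBranch model; defaults
    # and the effort-category cut-offs below are assumptions.
    branch_id: str
    prerequisites: list = field(default_factory=list)
    enables: list = field(default_factory=list)
    estimated_effort: int = 3

    def has_dependencies(self) -> bool:
        # True when at least one prerequisite branch must finish first.
        return len(self.prerequisites) > 0

    def is_enabling(self) -> bool:
        # True when completing this branch unblocks other branches.
        return len(self.enables) > 0

    def get_effort_category(self) -> str:
        # Bucket the documented 1-10 effort scale into rough bands.
        if self.estimated_effort <= 3:
            return "low"
        if self.estimated_effort <= 6:
            return "medium"
        return "high"

winner = TaskBranchSketch("find_winner", enables=["find_birthday"])
print(winner.has_dependencies(), winner.is_enabling())
```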
- class haive.agents.common.models.task_analysis.branching.TaskDecomposition(/, **data)¶
Bases:
pydantic.BaseModel

Complete task breakdown into subtasks and execution branches.
Analyzes how a complex task can be decomposed into manageable subtasks, identifying execution patterns, dependencies, and optimization opportunities.
- Parameters:
data (Any)
- task_description¶
Original task being decomposed
- branches¶
List of individual execution branches
- execution_pattern¶
Overall execution pattern
- critical_path¶
Sequence of branches on the critical path
- parallelization_opportunities¶
Groups of branches that can run in parallel
- bottlenecks¶
Branches that are likely to be bottlenecks
- total_estimated_effort¶
Sum of all branch efforts
- estimated_duration_sequential¶
Duration if executed sequentially
- estimated_duration_optimal¶
Duration with optimal parallelization
Example
# Simple factual lookup task
decomposition = TaskDecomposition.decompose_task(
    task_description="Find the birthday of the most recent Wimbledon winner",
    complexity_hint="simple_research",
)

# Complex research task
decomposition = TaskDecomposition.decompose_task(
    task_description="Develop a cure for cancer",
    complexity_hint="breakthrough_research",
)

print(f"Branches: {len(decomposition.branches)}")
print(f"Critical path: {decomposition.critical_path}")
print(f"Parallelizable: {decomposition.parallelization_opportunities}")
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- calculate_parallelization_speedup()¶
Calculate potential speedup from parallelization.
- Returns:
Speedup ratio (sequential_time / optimal_time)
- Return type:
float
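The documented speedup ratio (sequential_time / optimal_time) can be computed directly from the two duration estimates. A minimal sketch; the zero-duration guard is an assumption about edge-case handling.

```python
from datetime import timedelta

def parallelization_speedup(sequential: timedelta, optimal: timedelta) -> float:
    # Speedup ratio as documented: sequential_time / optimal_time.
    # Guarding against a zero optimal duration is an assumption here.
    if optimal.total_seconds() == 0:
        return 1.0
    return sequential.total_seconds() / optimal.total_seconds()

# Three two-hour branches run back-to-back vs. fully in parallel.
speedup = parallelization_speedup(timedelta(hours=6), timedelta(hours=2))
print(speedup)
```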
- classmethod create_simple_sequential(task_description, branch_descriptions, effort_estimates=None, duration_estimates=None)¶
Create a simple sequential task decomposition.
- Parameters:
task_description (str) – Description of the overall task
branch_descriptions (list[str]) – List of branch descriptions
effort_estimates (list[int] | None) – Optional effort estimates (defaults to 3 for all)
duration_estimates (list[datetime.timedelta] | None) – Optional duration estimates (defaults to 1 hour each)
- Returns:
TaskDecomposition with sequential branches
- Return type:
TaskDecomposition
- find_independent_branches()¶
Find branches with no dependencies.
- find_terminal_branches()¶
Find branches that don’t enable anything else.
- get_complexity_metrics()¶
Get various complexity metrics for the decomposition.
- get_dependency_graph()¶
Get dependency graph as adjacency list.
- get_enables_graph()¶
Get enables graph as adjacency list.
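The adjacency-list shape returned by get_dependency_graph() can be sketched by mapping each branch_id to its prerequisites. The dict-based branch records here are a simplification of the real TaskBranch objects.

```python
def dependency_graph(branches):
    # Map each branch_id to the branches it depends on, mirroring the
    # documented adjacency-list shape of get_dependency_graph().
    return {b["branch_id"]: list(b.get("prerequisites", [])) for b in branches}

branches = [
    {"branch_id": "find_winner", "prerequisites": []},
    {"branch_id": "find_birthday", "prerequisites": ["find_winner"]},
]
graph = dependency_graph(branches)
print(graph)
```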
- get_execution_recommendations()¶
Get recommendations for optimal execution.
- validate_decomposition_consistency()¶
Validate that decomposition is internally consistent.
- Returns:
Self if validation passes
- Raises:
ValueError – If decomposition has inconsistencies
- Return type:
TaskDecomposition
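A minimal sketch of the kind of check validate_decomposition_consistency() performs: every prerequisite and enabled branch must refer to a branch that actually exists. The exact rules the library enforces are not documented here, so this is an illustrative assumption.

```python
def validate_consistency(branches):
    # Reject any prerequisite or enables reference that points to a
    # branch_id not present in the decomposition, raising ValueError
    # as the documented method does on inconsistency.
    ids = {b["branch_id"] for b in branches}
    for b in branches:
        for ref in b.get("prerequisites", []) + b.get("enables", []):
            if ref not in ids:
                raise ValueError(f"unknown branch reference: {ref!r}")
    return branches

ok = validate_consistency([
    {"branch_id": "find_winner", "enables": ["find_birthday"]},
    {"branch_id": "find_birthday", "prerequisites": ["find_winner"]},
])
print(len(ok))
```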