haive.agents.research.open_perplexity.models¶

Models for the open_perplexity research agent.

This module defines data models used for representing, tracking, and evaluating research sources, findings, and summaries. It includes enumerations for categorizing data source types, content reliability, freshness, and research depth.

Classes¶

ContentFreshness

Enumeration of content freshness levels.

ContentReliability

Enumeration of content reliability levels.

DataSourceConfig

Configuration for a data source.

DataSourceType

Enumeration of data source types.

ResearchDepth

Enumeration of research depth levels.

ResearchFinding

Model for a specific research finding.

ResearchSource

Model for tracking and evaluating research sources.

ResearchSummary

Summary of research findings and assessment.

Module Contents¶

class haive.agents.research.open_perplexity.models.ContentFreshness¶

Bases: str, enum.Enum

Enumeration of content freshness levels.

Categorizes how recent or up-to-date the information content is.

VERY_RECENT¶

Content from the last few days

RECENT¶

Content from the last few weeks

SOMEWHAT_RECENT¶

Content from the last few months

OUTDATED¶

Content from years ago

UNKNOWN¶

Content with unknown or unclear publication date

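Because these enumerations subclass both str and enum.Enum, their members compare equal to plain strings, which makes them convenient for JSON serialization and validation. The sketch below illustrates this behavior with a hypothetical mirror of ContentFreshness; the member values shown are assumptions for illustration, not taken from the actual module.

```python
from enum import Enum


class ContentFreshness(str, Enum):
    # Member values here are assumptions for illustration only;
    # the real values are defined in
    # haive.agents.research.open_perplexity.models.
    VERY_RECENT = "very_recent"
    RECENT = "recent"
    SOMEWHAT_RECENT = "somewhat_recent"
    OUTDATED = "outdated"
    UNKNOWN = "unknown"


# A str-subclassed Enum compares equal to its plain-string value,
# so members can be used directly where strings are expected.
assert ContentFreshness.RECENT == "recent"

# Constructing from the string value returns the matching member.
assert ContentFreshness("outdated") is ContentFreshness.OUTDATED
```

This pattern means a field typed as ContentFreshness accepts either the enum member or its string value during Pydantic validation.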

class haive.agents.research.open_perplexity.models.ContentReliability¶

Bases: str, enum.Enum

Enumeration of content reliability levels.

Categorizes the trustworthiness and reliability of information sources.

HIGH¶

Highly reliable sources (peer-reviewed, authoritative)

MEDIUM¶

Moderately reliable sources (reputable but not authoritative)

LOW¶

Low reliability sources (potentially biased or unverified)

UNKNOWN¶

Sources with unknown or unclear reliability


class haive.agents.research.open_perplexity.models.DataSourceConfig(/, **data)¶

Bases: pydantic.BaseModel

Configuration for a data source.

Specifies parameters for interacting with a particular data source, including API keys and search parameters.

Parameters:

data (Any)

name¶

Name of the data source

source_type¶

Type of data source

enabled¶

Whether this source is enabled

priority¶

Priority (1-10, higher = more important)

api_key¶

API key for the data source if required

max_results¶

Maximum number of results to return

search_params¶

Custom search parameters

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

classmethod validate_priority(v)¶

Ensure priority is between 1 and 10.

Parameters:

v – The priority value to validate

Returns:

The validated priority value, clamped between 1 and 10

Return type:

int
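Note that validate_priority clamps out-of-range values rather than rejecting them. The standalone sketch below shows the clamping behavior described above; it is an illustrative re-statement of the documented contract, not the module's actual implementation.

```python
def clamp_priority(v: int) -> int:
    # Clamp into the documented 1-10 range instead of raising,
    # mirroring the behaviour described for validate_priority.
    return max(1, min(10, v))


# Values below the range are raised to the minimum...
assert clamp_priority(0) == 1
# ...values above it are lowered to the maximum...
assert clamp_priority(15) == 10
# ...and in-range values pass through unchanged.
assert clamp_priority(5) == 5
```

The same clamp-instead-of-reject convention applies to the 0.0-1.0 validators on ResearchFinding.confidence, ResearchSource.relevance_score, and ResearchSummary.confidence_score.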

class haive.agents.research.open_perplexity.models.DataSourceType¶

Bases: str, enum.Enum

Enumeration of data source types.

Categorizes the different types of sources where research information can be found.

WEB¶

General web content

GITHUB¶

Code repositories and issues from GitHub

ACADEMIC¶

Academic papers and research publications

NEWS¶

News articles and press releases

SOCIAL_MEDIA¶

Content from social media platforms

DOCUMENTS¶

Uploaded or local documents

API¶

Data retrieved from APIs

OTHER¶

Any other source type not covered above


class haive.agents.research.open_perplexity.models.ResearchDepth¶

Bases: str, enum.Enum

Enumeration of research depth levels.

Categorizes the comprehensiveness and thoroughness of the research.

SUPERFICIAL¶

Basic overview with minimal sources

INTERMEDIATE¶

Moderate depth with several sources

DEEP¶

In-depth research with many high-quality sources

COMPREHENSIVE¶

Exhaustive research with extensive sources


class haive.agents.research.open_perplexity.models.ResearchFinding(/, **data)¶

Bases: pydantic.BaseModel

Model for a specific research finding.

Represents an individual insight or finding from the research, including supporting sources and confidence assessment.

Parameters:

data (Any)

finding¶

The actual finding or insight

confidence¶

Confidence level in this finding (0.0 - 1.0)

sources¶

Sources supporting this finding

explanation¶

Explanation of the finding’s significance

related_findings¶

Related findings

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

classmethod validate_confidence(v)¶

Ensure confidence is between 0 and 1.

Parameters:

v – The confidence value to validate

Returns:

The validated confidence value, clamped between 0.0 and 1.0

Return type:

float

class haive.agents.research.open_perplexity.models.ResearchSource(/, **data)¶

Bases: pydantic.BaseModel

Model for tracking and evaluating research sources.

Represents a source of information used in research, including metadata about its reliability, relevance, and content.

Parameters:

data (Any)

url¶

URL of the source

title¶

Title of the source

source_type¶

Type of data source

content_snippet¶

Snippet of relevant content

reliability¶

Assessed reliability of the source

freshness¶

Content freshness/recency

relevance_score¶

Relevance score from 0.0 to 1.0

citation¶

Formatted citation for the source

access_timestamp¶

When the source was accessed

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

classmethod validate_relevance_score(v)¶

Ensure relevance score is between 0 and 1.

Parameters:

v – The relevance score to validate

Returns:

The validated relevance score, clamped between 0.0 and 1.0

Return type:

float

class haive.agents.research.open_perplexity.models.ResearchSummary(/, **data)¶

Bases: pydantic.BaseModel

Summary of research findings and assessment.

Provides an overall summary of the research, including key findings, assessment of source quality, and confidence evaluation.

Parameters:

data (Any)

topic¶

Research topic

question¶

Specific research question

key_findings¶

Key findings from research

sources_count¶

Total number of sources consulted

high_reliability_sources¶

Number of high reliability sources

recent_sources¶

Number of recent sources

research_depth¶

Overall research depth

contradictions¶

Contradictory findings identified

confidence_score¶

Overall confidence score

limitations¶

Research limitations

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

assess_depth()¶

Assess research depth based on source counts and diversity.

Evaluates the depth of research based on the number of sources and the proportion of high reliability sources.

Returns:

The assessed research depth level

Return type:

ResearchDepth
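A heuristic of this shape can be sketched as follows. The thresholds and the exact combination of source count and reliability ratio are assumptions chosen for illustration; the actual cut-offs used by assess_depth may differ.

```python
def assess_depth(sources_count: int, high_reliability_sources: int) -> str:
    """Hypothetical sketch of a research-depth heuristic.

    Combines the total number of sources with the proportion of
    high-reliability sources, as described for assess_depth.
    All thresholds below are illustrative assumptions.
    """
    reliable_ratio = (
        high_reliability_sources / sources_count if sources_count else 0.0
    )
    if sources_count >= 15 and reliable_ratio >= 0.5:
        return "comprehensive"
    if sources_count >= 8 and reliable_ratio >= 0.3:
        return "deep"
    if sources_count >= 3:
        return "intermediate"
    return "superficial"


# A single unverified source yields only a superficial assessment,
# while many mostly-reliable sources qualify as comprehensive.
assert assess_depth(1, 0) == "superficial"
assert assess_depth(20, 12) == "comprehensive"
```

The key design point is that depth is not a function of source count alone: a large pile of low-reliability sources should not rate as deep research.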

classmethod validate_confidence_score(v)¶

Ensure confidence score is between 0 and 1.

Parameters:

v – The confidence score to validate

Returns:

The validated confidence score, clamped between 0.0 and 1.0

Return type:

float