API reference

Domain models

Core domain models for prompt definitions, rendering, and versioning.

This module contains every Pydantic model that flows through the system. Nothing here touches the database or filesystem — it is pure data and logic.

Prompt kinds and template formats:

PromptKind.STRING   — single-template prompt, rendered to a string
PromptKind.CHAT     — multi-message prompt, rendered to a list of dicts

TemplateFormat.FSTRING   — Python f-string syntax: ``{name}``
TemplateFormat.JINJA2    — Jinja2 syntax: ``{{ name }}``
TemplateFormat.MUSTACHE  — Mustache syntax: ``{{ name }}``
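The three formats express the same logical substitution. A stdlib-only sketch (illustrative, not promptdb's implementation): f-string syntax renders with str.format_map, and a small regex stands in for the double-brace engines:

```python
import re

variables = {"name": "Will"}

# f-string syntax: str.format_map handles {name} directly.
fstring_out = "Hello {name}".format_map(variables)

# Jinja2 and Mustache both write {{ name }}; a minimal regex
# substitution stands in for the real engines here.
def render_double_brace(template: str, variables: dict) -> str:
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(variables[m.group(1)]),
        template,
    )

brace_out = render_double_brace("Hello {{ name }}", variables)

print(fstring_out)  # Hello Will
print(brace_out)    # Hello Will
```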

Building a string prompt:

spec = PromptSpec(kind=PromptKind.STRING, template="Hello {name}")
spec.render_text({"name": "Will"})          # => "Hello Will"
spec.declared_variables                      # => ["name"]

Building a chat prompt:

spec = PromptSpec(
    kind=PromptKind.CHAT,
    messages=[
        ChatMessage(role=MessageRole.SYSTEM, template="You are {persona}."),
        ChatMessage(role=MessageRole.HUMAN, template="{question}"),
    ],
    partial_variables={"persona": "a helpful assistant"},
)
spec.render_messages({"question": "Hi"})
# => [{"role": "system", "content": "You are a helpful assistant."}, ...]

Prompt references:

ref = PromptRef.parse("support/triage:production")
ref.namespace   # => "support"
ref.name        # => "triage"
ref.selector    # => "production"

Version views wrap a stored version with render and LangChain helpers:

view = PromptVersionView(...)
view.render({"name": "Will"})        # => PromptRenderResult
view.as_langchain()                  # => LangChain PromptTemplate
view.wrap()                          # => ResolvedPrompt (ergonomic wrapper)
class promptdb.domain.PromptKind(*values)[source]

Bases: StrEnum

Supported prompt kinds.

Parameters:

None.

Returns:

Enumeration members for prompt families.

Return type:

PromptKind

Raises:

None.

Examples

>>> PromptKind.CHAT.value
'chat'
STRING = 'string'
CHAT = 'chat'
class promptdb.domain.TemplateFormat(*values)[source]

Bases: StrEnum

Supported template formats.

Parameters:

None.

Returns:

Enumeration members for supported renderers.

Return type:

TemplateFormat

Raises:

None.

Examples

>>> TemplateFormat.MUSTACHE.value
'mustache'
FSTRING = 'f-string'
JINJA2 = 'jinja2'
MUSTACHE = 'mustache'
class promptdb.domain.PromptAssetKind(*values)[source]

Bases: StrEnum

Kinds of relational asset records linked to a prompt version.

Parameters:

None.

Returns:

Enumeration members for persisted asset categories.

Return type:

PromptAssetKind

Raises:

None.

Examples

>>> PromptAssetKind.EXPORT_BUNDLE.value
'export_bundle'
EXPORT_BUNDLE = 'export_bundle'
ATTACHMENT = 'attachment'
EXAMPLE_DATASET = 'example_dataset'
SNAPSHOT = 'snapshot'
class promptdb.domain.MessageRole(*values)[source]

Bases: StrEnum

Supported chat message roles.

Parameters:

None.

Returns:

Enumeration members for message roles.

Return type:

MessageRole

Raises:

None.

Examples

>>> MessageRole.SYSTEM.value
'system'
SYSTEM = 'system'
HUMAN = 'human'
AI = 'ai'
GENERIC = 'generic'
class promptdb.domain.PromptMetadata(**data)[source]

Bases: BaseModel

Rich metadata attached to a prompt version.

Parameters:
  • title – Human-friendly title.

  • description – Longer description.

  • tags – Search tags.

  • owners – User or team identifiers.

  • labels – Arbitrary key-value labels.

  • source_path – Optional file path used during import.

  • user_version – Optional caller-friendly version label.

Returns:

Metadata payload.

Return type:

PromptMetadata

Raises:

None.

Examples

>>> PromptMetadata(title="Classifier", tags=["support"]).title
'Classifier'
model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}

Configuration for the model, conforming to pydantic's ConfigDict.

title: str | None
description: str | None
tags: list[str]
owners: list[str]
labels: dict[str, str]
source_path: str | None
user_version: str | None
class promptdb.domain.ChatMessage(**data)[source]

Bases: BaseModel

Concrete chat message template.

Parameters:
  • role – Message role.

  • template – Message template body.

  • name – Optional participant name.

  • additional_kwargs – Additional message metadata.

Returns:

Chat message template object.

Return type:

ChatMessage

Raises:

None.

Examples

>>> ChatMessage(role=MessageRole.HUMAN, template="{question}").template
'{question}'
model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}

Configuration for the model, conforming to pydantic's ConfigDict.

role: MessageRole
template: str
name: str | None
additional_kwargs: dict[str, Any]
class promptdb.domain.MessagePlaceholder(**data)[source]

Bases: BaseModel

Placeholder for a runtime list of messages.

Parameters:
  • variable_name – Input variable containing a list of messages.

  • optional – Whether an empty value is allowed.

Returns:

Placeholder model.

Return type:

MessagePlaceholder

Raises:

None.

Examples

>>> MessagePlaceholder(variable_name="history").variable_name
'history'
model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}

Configuration for the model, conforming to pydantic's ConfigDict.

variable_name: str
optional: bool
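Conceptually, rendering a placeholder splices a runtime list of messages into the rendered sequence. A minimal stdlib sketch of that behavior (the function and dict shapes here are illustrative, not promptdb's internals):

```python
def expand_messages(parts, variables):
    """Expand templates and placeholders into concrete message dicts."""
    rendered = []
    for part in parts:
        if "variable_name" in part:
            # Placeholder: splice in the runtime message list.
            rendered.extend(variables.get(part["variable_name"], []))
        else:
            # Concrete template: format with runtime variables.
            rendered.append({"role": part["role"],
                             "content": part["template"].format_map(variables)})
    return rendered

parts = [
    {"role": "system", "template": "You are {persona}."},
    {"variable_name": "history"},
    {"role": "human", "template": "{question}"},
]
msgs = expand_messages(parts, {
    "persona": "a helpful assistant",
    "history": [{"role": "human", "content": "earlier turn"}],
    "question": "Hi",
})
print(msgs[1]["content"])  # earlier turn
```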
class promptdb.domain.FewShotBlock(**data)[source]

Bases: BaseModel

Lightweight few-shot configuration.

Parameters:
  • examples – Example variable mappings.

  • string_template – Template used for string examples.

  • chat_messages – Message templates used for chat examples.

  • insert_at – Insertion index in chat mode.

  • example_separator – Separator used in string mode.

Returns:

Few-shot configuration.

Return type:

FewShotBlock

Raises:

ValueError – If neither a string template nor chat messages are supplied.

Examples

>>> FewShotBlock(examples=[{"x": "1"}], string_template="{x}").examples[0]["x"]
'1'
model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}

Configuration for the model, conforming to pydantic's ConfigDict.

examples: list[dict[str, Any]]
string_template: str | None
chat_messages: list[ChatMessage]
insert_at: int
example_separator: str
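In string mode, a reasonable mental model (a sketch, not the library's exact algorithm) is: render string_template once per example, then join the results with example_separator:

```python
examples = [{"q": "2+2", "a": "4"}, {"q": "3+3", "a": "6"}]
string_template = "Q: {q}\nA: {a}"
example_separator = "\n\n"

# Render each example mapping through the template, then join.
block = example_separator.join(
    string_template.format_map(ex) for ex in examples
)
print(block)
```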
class promptdb.domain.PromptSpec(**data)[source]

Bases: BaseModel

Prompt definition that can render directly or materialize into LangChain.

Parameters:
  • kind – Prompt kind.

  • template_format – Template engine.

  • template – Root template for string prompts.

  • messages – Message sequence for chat prompts.

  • input_variables – Declared required variables.

  • optional_variables – Declared optional variables.

  • partial_variables – Stored partial variables merged at render time.

  • few_shot – Optional few-shot examples.

  • metadata – Rich prompt metadata.

Returns:

Prompt definition.

Return type:

PromptSpec

Raises:

ValueError – If the shape is invalid for the selected prompt kind.

Examples

>>> PromptSpec(kind=PromptKind.STRING, template="Hello {name}").declared_variables
['name']
model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}

Configuration for the model, conforming to pydantic's ConfigDict.

kind: PromptKind
template_format: TemplateFormat
template: str | None
messages: list[ChatMessage | MessagePlaceholder]
input_variables: list[str]
optional_variables: list[str]
partial_variables: dict[str, Any]
few_shot: FewShotBlock | None
metadata: PromptMetadata
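The ValueError for an invalid shape can be pictured as a cross-field check, sketched here under the assumption that kind, template, and messages interact as documented above (this is not promptdb's validator):

```python
def check_shape(kind: str, template=None, messages=None):
    """Reject specs whose fields do not match the declared kind."""
    if kind == "string" and not template:
        raise ValueError("string prompts require a template")
    if kind == "chat" and not messages:
        raise ValueError("chat prompts require messages")

check_shape("string", template="Hello {name}")  # valid: no exception
try:
    check_shape("chat")  # no messages: invalid shape
except ValueError as exc:
    print(exc)  # chat prompts require messages
```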
property declared_variables: list[str]

Return discovered and explicitly declared variables.

Parameters:

self – Model instance.

Returns:

Sorted variable names.

Return type:

list[str]

Raises:

None.

Examples

>>> PromptSpec(kind=PromptKind.STRING, template="{x} {y}").declared_variables
['x', 'y']
merged_variables(variables=None)[source]

Merge runtime variables with stored partial variables.

Parameters:

variables (Mapping[str, Any] | None) – Runtime variables.

Returns:

Merged variables.

Return type:

dict[str, Any]

Raises:

None.

Examples

>>> spec = PromptSpec(
...     kind=PromptKind.STRING, template="{name}",
...     partial_variables={"name": "Will"},
... )
>>> spec.merged_variables({})
{'name': 'Will'}
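A minimal sketch of the merge, assuming runtime values take precedence over stored partials (the doctest above only exercises the no-conflict case):

```python
partial_variables = {"persona": "a helpful assistant", "name": "default"}
runtime = {"name": "Will"}

# Assumed precedence: runtime wins over stored partials.
merged = {**partial_variables, **runtime}
print(merged["name"])     # Will
print(merged["persona"])  # a helpful assistant
```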
render_text(variables=None)[source]

Render a string prompt.

Parameters:

variables (Mapping[str, Any] | None) – Runtime variables.

Returns:

Rendered text.

Return type:

str

Raises:

TypeError – If called on a chat prompt.

Examples

>>> spec = PromptSpec(kind=PromptKind.STRING, template="Hello {name}")
>>> spec.render_text({"name": "Will"})
'Hello Will'
render_messages(variables=None)[source]

Render a chat prompt.

Parameters:

variables (Mapping[str, Any] | None) – Runtime variables.

Returns:

Rendered message payloads.

Return type:

list[dict[str, Any]]

Raises:
  • TypeError – If called on a string prompt.

  • KeyError – If a required placeholder variable is missing.

Examples

>>> msgs = [ChatMessage(role=MessageRole.HUMAN, template="Hi {name}")]
>>> spec = PromptSpec(kind=PromptKind.CHAT, messages=msgs)
>>> spec.render_messages({"name": "Will"})[0]["content"]
'Hi Will'
to_langchain()[source]

Materialize the prompt into a LangChain prompt object.

Parameters:

self – Model instance.

Returns:

LangChain prompt object.

Return type:

Any

Raises:

ImportError – If langchain-core is unavailable.

Examples

>>> spec = PromptSpec(kind=PromptKind.STRING, template="Hi {name}")
>>> spec.to_langchain().__class__.__name__
'PromptTemplate'
promptdb.domain.render_template(template, variables, fmt)[source]

Render a template with the selected engine.

Parameters:
  • template (str) – Template text.

  • variables (Mapping[str, Any]) – Variables for substitution.

  • fmt (TemplateFormat) – Template engine.

Returns:

Rendered text.

Return type:

str

Raises:

KeyError – If a required variable is missing.

Examples

>>> render_template("Hello {name}", {"name": "Will"}, TemplateFormat.FSTRING)
'Hello Will'
promptdb.domain.extract_variables(template, fmt)[source]

Extract variable names from a template.

Parameters:
  • template (str) – Template text.

  • fmt (TemplateFormat) – Template engine.

Returns:

Sorted variable names.

Return type:

list[str]

Raises:

None.

Examples

>>> extract_variables("Hello {name}", TemplateFormat.FSTRING)
['name']
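For the f-string format, the stdlib string.Formatter can recover field names directly. A sketch of the idea behind extract_variables (sorted and deduplicated; not necessarily the library's implementation):

```python
from string import Formatter

def fstring_variables(template: str) -> list[str]:
    """Collect unique field names from an f-string template, sorted."""
    names = {field for _, field, _, _ in Formatter().parse(template) if field}
    return sorted(names)

print(fstring_variables("Hello {name}, {name} from {city}"))  # ['city', 'name']
```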
class promptdb.domain.PromptRef(**data)[source]

Bases: BaseModel

Reference to a prompt and selector.

Parameters:
  • namespace – Prompt namespace.

  • name – Prompt name.

  • selector – Alias, user-facing version label, or concrete version id.

Returns:

Reference payload.

Return type:

PromptRef

Raises:

None.

Examples

>>> PromptRef.parse("support/triage:production").selector
'production'
model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}

Configuration for the model, conforming to pydantic's ConfigDict.

namespace: str
name: str
selector: str
classmethod parse(value)[source]

Parse a compact namespace/name:selector reference.

Parameters:

value (str) – Compact reference string.

Returns:

Parsed prompt reference.

Return type:

PromptRef

Raises:

ValueError – If the input is malformed.

Examples

>>> PromptRef.parse("support/triage")
PromptRef(namespace='support', name='triage', selector='latest')
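The compact form splits on the first "/" and ":". A stdlib sketch of the parse, with "latest" as the default selector as in the doctest above (parse_ref is an illustrative stand-in, not the classmethod itself):

```python
def parse_ref(value: str):
    """Split 'namespace/name[:selector]' into its three parts."""
    resource, _, selector = value.partition(":")
    namespace, _, name = resource.partition("/")
    if not namespace or not name:
        raise ValueError(f"malformed prompt reference: {value!r}")
    return namespace, name, selector or "latest"

print(parse_ref("support/triage"))             # ('support', 'triage', 'latest')
print(parse_ref("support/triage:production"))  # ('support', 'triage', 'production')
```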
property resource_id: str

Return the stable prompt resource identifier.

Parameters:

self – Model instance.

Returns:

namespace/name identifier.

Return type:

str

Raises:

None.

Examples

>>> PromptRef(namespace="support", name="triage").resource_id
'support/triage'
property full_name: str

Return the fully-qualified prompt reference.

Parameters:

self – Model instance.

Returns:

namespace/name:selector identifier.

Return type:

str

Raises:

None.

Examples

>>> PromptRef(namespace="support", name="triage", selector="production").full_name
'support/triage:production'
class promptdb.domain.PromptRegistration(**data)[source]

Bases: BaseModel

Registration request payload.

Parameters:
  • namespace – Prompt namespace.

  • name – Prompt name.

  • spec – Prompt spec.

  • created_by – Creator identifier.

  • alias – Alias to move after registration.

Returns:

Registration payload.

Return type:

PromptRegistration

Raises:

None.

Examples

>>> spec = PromptSpec(kind=PromptKind.STRING, template="hi")
>>> PromptRegistration(namespace="x", name="y", spec=spec).name
'y'
model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}

Configuration for the model, conforming to pydantic's ConfigDict.

namespace: str
name: str
spec: PromptSpec
created_by: str | None
alias: str | None
class promptdb.domain.AliasMove(**data)[source]

Bases: BaseModel

Alias movement payload.

Parameters:
  • alias – Alias name.

  • version_id – Target version id.

Returns:

Alias movement request.

Return type:

AliasMove

Raises:

None.

Examples

>>> AliasMove(alias="production", version_id="v1").alias
'production'
model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}

Configuration for the model, conforming to pydantic's ConfigDict.

alias: str
version_id: str
class promptdb.domain.PromptAssetView(**data)[source]

Bases: BaseModel

Blob-backed asset metadata linked to a prompt version.

Parameters:
  • asset_id – Unique asset id.

  • version_id – Owning prompt version id.

  • kind – Asset kind.

  • storage_backend – Storage backend name.

  • bucket – Logical or physical bucket/container name.

  • object_key – Blob object key.

  • content_type – MIME content type.

  • byte_size – Optional object size.

  • checksum_sha256 – Optional checksum.

  • metadata_json – User-defined metadata.

  • created_at – Creation timestamp.

Returns:

Asset metadata.

Return type:

PromptAssetView

Raises:

None.

Examples

>>> av = PromptAssetView(
...     asset_id='a', version_id='v',
...     kind=PromptAssetKind.EXPORT_BUNDLE,
...     storage_backend='local', bucket='promptdb',
...     object_key='x.json',
... )
>>> av.object_key
'x.json'
model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}

Configuration for the model, conforming to pydantic's ConfigDict.

asset_id: str
version_id: str
kind: PromptAssetKind
storage_backend: str
bucket: str
object_key: str
content_type: str | None
byte_size: int | None
checksum_sha256: str | None
metadata_json: dict[str, Any]
created_at: str | None
class promptdb.domain.PromptVersionView(**data)[source]

Bases: BaseModel

API-ready view over an immutable prompt version.

Parameters:
  • version_id – Version identifier.

  • namespace – Prompt namespace.

  • name – Prompt name.

  • revision – Monotonic revision.

  • user_version – User-facing version label.

  • spec – Prompt spec.

  • created_by – Creator identifier.

  • aliases – Aliases pointing to this version.

Returns:

Prompt version view.

Return type:

PromptVersionView

Raises:

None.

Examples

>>> spec = PromptSpec(kind=PromptKind.STRING, template="hi")
>>> view = PromptVersionView(
...     version_id="v1", namespace="x", name="y",
...     revision=1, spec=spec,
... )
>>> view.revision
1
model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}

Configuration for the model, conforming to pydantic's ConfigDict.

version_id: str
namespace: str
name: str
revision: int
user_version: str | None
spec: PromptSpec
created_by: str | None
created_at: Any | None
aliases: list[str]
assets: list[PromptAssetView]
property ref: PromptRef

Return a convenient immutable reference to this exact version.

Parameters:

self – Model instance.

Returns:

Version reference.

Return type:

PromptRef

Raises:

None.

Examples

>>> spec = PromptSpec(kind=PromptKind.STRING, template="hi")
>>> view = PromptVersionView(
...     version_id="v1", namespace="x", name="y",
...     revision=1, spec=spec,
... )
>>> view.ref.full_name
'x/y:v1'
render(variables=None)[source]

Render the current version directly.

Parameters:

variables (Mapping[str, Any] | None) – Runtime variables.

Returns:

Render output.

Return type:

PromptRenderResult

Raises:

TypeError – If the prompt kind is unsupported.

Examples

>>> spec = PromptSpec(kind=PromptKind.STRING, template="Hi {name}")
>>> view = PromptVersionView(
...     version_id="v1", namespace="x", name="y",
...     revision=1, spec=spec,
... )
>>> view.render({"name": "Will"}).text
'Hi Will'
as_langchain()[source]

Materialize the current version into a LangChain prompt.

Parameters:

self – Model instance.

Returns:

LangChain prompt object.

Return type:

Any

Raises:

ImportError – If langchain-core is unavailable.

Examples

>>> spec = PromptSpec(kind=PromptKind.STRING, template="Hi {name}")
>>> view = PromptVersionView(
...     version_id="v1", namespace="x", name="y",
...     revision=1, spec=spec,
... )
>>> view.as_langchain().__class__.__name__
'PromptTemplate'
wrap()[source]

Wrap the version in an ergonomic resolved-prompt object.

Parameters:

self – Model instance.

Returns:

Wrapper exposing render and materialization helpers.

Return type:

ResolvedPrompt

Raises:

None.

Examples

>>> spec = PromptSpec(kind=PromptKind.STRING, template="Hi {name}")
>>> view = PromptVersionView(
...     version_id="v1", namespace="x", name="y",
...     revision=1, spec=spec,
... )
>>> view.wrap().ref.full_name
'x/y:v1'
class promptdb.domain.ResolvedPrompt(**data)[source]

Bases: BaseModel

Ergonomic wrapper around a resolved prompt version.

Parameters:

version – Resolved prompt version.

Returns:

Rich wrapper object.

Return type:

ResolvedPrompt

Raises:

None.

Examples

>>> spec = PromptSpec(kind=PromptKind.STRING, template="Hi {name}")
>>> version = PromptVersionView(
...     version_id="v1", namespace="x", name="y",
...     revision=1, spec=spec,
... )
>>> ResolvedPrompt(version=version).ref.full_name
'x/y:v1'
model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}

Configuration for the model, conforming to pydantic's ConfigDict.

version: PromptVersionView
property ref: PromptRef

Return an immutable reference to the concrete version.

Parameters:

self – Model instance.

Returns:

Version reference.

Return type:

PromptRef

Raises:

None.

Examples

>>> _s = PromptSpec(kind=PromptKind.STRING, template="Hi {name}")
>>> version = PromptVersionView(
...     version_id="v1", namespace="x", name="y", revision=1, spec=_s,
... )
>>> ResolvedPrompt(version=version).ref.selector
'v1'
as_langchain()[source]

Materialize the wrapped prompt as a LangChain object.

Parameters:

self – Model instance.

Returns:

LangChain prompt object.

Return type:

Any

Raises:

ImportError – If langchain-core is unavailable.

Examples

>>> _s = PromptSpec(kind=PromptKind.STRING, template="Hi {name}")
>>> version = PromptVersionView(
...     version_id="v1", namespace="x", name="y", revision=1, spec=_s,
... )
>>> ResolvedPrompt(version=version).as_langchain().__class__.__name__
'PromptTemplate'
render(variables=None)[source]

Render the wrapped prompt.

Parameters:

variables (Mapping[str, Any] | None) – Runtime variables.

Returns:

Rendered prompt output.

Return type:

PromptRenderResult

Raises:

TypeError – If the prompt kind and helper mismatch.

Examples

>>> _s = PromptSpec(kind=PromptKind.STRING, template="Hi {name}")
>>> version = PromptVersionView(
...     version_id="v1", namespace="x", name="y", revision=1, spec=_s,
... )
>>> ResolvedPrompt(version=version).render({"name": "Will"}).text
'Hi Will'
render_text(variables=None)[source]

Render the wrapped prompt as text.

Parameters:

variables (Mapping[str, Any] | None) – Runtime variables.

Returns:

Rendered text.

Return type:

str

Raises:

TypeError – If the wrapped prompt is not a string prompt.

Examples

>>> _s = PromptSpec(kind=PromptKind.STRING, template="Hi {name}")
>>> version = PromptVersionView(
...     version_id="v1", namespace="x", name="y", revision=1, spec=_s,
... )
>>> ResolvedPrompt(version=version).render_text({"name": "Will"})
'Hi Will'
render_messages(variables=None)[source]

Render the wrapped prompt as messages.

Parameters:

variables (Mapping[str, Any] | None) – Runtime variables.

Returns:

Rendered chat messages.

Return type:

list[dict[str, Any]]

Raises:

TypeError – If the wrapped prompt is not a chat prompt.

Examples

>>> _msg = ChatMessage(role=MessageRole.HUMAN, template="{question}")
>>> _s = PromptSpec(kind=PromptKind.CHAT, messages=[_msg])
>>> version = PromptVersionView(
...     version_id="v1", namespace="x", name="y", revision=1, spec=_s,
... )
>>> ResolvedPrompt(version=version).render_messages({"question": "Hi"})[0]["content"]
'Hi'
invoke(variables=None)[source]

Invoke the underlying LangChain prompt object.

Parameters:

variables (Mapping[str, Any] | None) – Runtime variables.

Returns:

LangChain prompt value.

Return type:

Any

Raises:

AttributeError – If the underlying object lacks invoke.

Examples

>>> _s = PromptSpec(kind=PromptKind.STRING, template="Hi {name}")
>>> version = PromptVersionView(
...     version_id="v1", namespace="x", name="y", revision=1, spec=_s,
... )
>>> ResolvedPrompt(version=version).invoke({"name": "Will"}).text
'Hi Will'
class promptdb.domain.PromptRenderResult(**data)[source]

Bases: BaseModel

Rendered prompt result.

Parameters:
  • ref – Prompt reference.

  • version – Resolved version.

  • text – Rendered string prompt.

  • messages – Rendered chat messages.

Returns:

Render result.

Return type:

PromptRenderResult

Raises:

None.

Examples

>>> _s = PromptSpec(kind=PromptKind.STRING, template="hi")
>>> view = PromptVersionView(
...     version_id="v1", namespace="x", name="y", revision=1, spec=_s,
... )
>>> ref = PromptRef(namespace="x", name="y")
>>> PromptRenderResult(ref=ref, version=view, text="hi").text
'hi'
model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}

Configuration for the model, conforming to pydantic's ConfigDict.

ref: PromptRef
version: PromptVersionView
text: str | None
messages: list[dict[str, Any]]

Client

Ergonomic Python client for prompt registration, resolution, and rendering.

PromptClient is the main entry point for application code. It wraps PromptService and adds compact reference parsing, file-based registration, and a ResolvedPrompt wrapper with render and LangChain helpers.

Creating a client:

from promptdb import PromptClient

# Reads PROMPTDB_* env vars (database URL, blob root, storage backend)
client = PromptClient.from_env()

# Or with explicit settings
from promptdb import AppSettings
client = PromptClient.from_env(AppSettings(
    database_url="postgresql://user:pass@localhost/promptdb",
))

Registering prompts — three approaches:

# 1. From inline text
client.register_text(
    namespace="support", name="triage",
    template="Hello {name}", kind=PromptKind.STRING,
    alias="production",
)

# 2. From a PromptSpec object
client.register_spec(namespace="support", name="triage", spec=spec)

# 3. From a YAML/JSON/text file
client.register_file(path="prompts/triage.yaml",
                     namespace="support", name="triage")

Resolving and rendering:

resolved = client.get("support/triage:production")  # ResolvedPrompt
text = resolved.render_text({"name": "Will"})
lc = resolved.as_langchain()                        # LangChain prompt
value = resolved.invoke({"name": "Will"})           # LangChain invoke

Selectors: latest, production, rev:2, 2026.04.01.1, or a UUID.
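One way to picture how a selector string might be classified (an assumption-laden sketch; the actual resolution rules live in PromptService):

```python
import uuid

def classify_selector(selector: str) -> str:
    """Guess which resolution path a selector would take (illustrative)."""
    if selector.startswith("rev:") and selector[4:].isdigit():
        return "revision"
    try:
        uuid.UUID(selector)
        return "version-id"
    except ValueError:
        return "alias-or-label"

print(classify_selector("rev:2"))       # revision
print(classify_selector("production"))  # alias-or-label
```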

class promptdb.client.PromptClient(service)[source]

Bases: object

Developer-friendly facade over PromptService.

Parameters:

service (PromptService) – Prompt service instance.

Returns:

Local prompt client.

Return type:

PromptClient

Raises:

None.

Examples

client = PromptClient.from_env()
resolved = client.get("support/triage:latest")
__init__(service)[source]

Initialize the client with a prompt service instance.

classmethod from_env(settings=None)[source]

Create a client from environment-backed settings.

Parameters:

settings (AppSettings | None) – Optional explicit settings.

Returns:

Configured client.

Return type:

PromptClient

Raises:

ValueError – If the storage backend is misconfigured.

Examples

>>> settings = AppSettings(database_url='sqlite:///:memory:')
>>> isinstance(PromptClient.from_env(settings), PromptClient)
True
register_spec(*, namespace, name, spec, created_by=None, alias='latest')[source]

Register a prompt spec.

Parameters:
  • namespace (str) – Prompt namespace.

  • name (str) – Prompt name.

  • spec (PromptSpec) – Prompt specification.

  • created_by (str | None) – Optional creator identifier.

  • alias (str | None) – Alias to move after creation.

Returns:

Stored prompt version.

Return type:

PromptVersionView

Raises:

LookupError – If alias movement fails.

Examples

version = client.register_spec(namespace='support', name='triage', spec=spec)
register_text(*, namespace, name, template, kind=PromptKind.STRING, alias='latest', created_by=None, metadata=None, template_format=TemplateFormat.FSTRING, partial_variables=None, role=MessageRole.HUMAN)[source]

Register a prompt directly from text.

Parameters:
  • namespace (str) – Prompt namespace.

  • name (str) – Prompt name.

  • template (str) – Root template or message template.

  • kind (PromptKind) – Prompt kind.

  • alias (str | None) – Alias to move after registration.

  • created_by (str | None) – Optional creator identifier.

  • metadata (PromptMetadata | None) – Optional prompt metadata.

  • template_format (TemplateFormat) – Template engine.

  • partial_variables (dict[str, Any] | None) – Stored partial variables.

  • role (MessageRole) – Chat role when kind is chat.

Returns:

Stored prompt version.

Return type:

PromptVersionView

Raises:

ValueError – If the prompt shape is invalid.

Examples

version = client.register_text(
    namespace='support',
    name='triage',
    template='Hello {name}',
)
register_file(*, path, namespace, name, kind=None, alias='latest', created_by=None, message_role=MessageRole.HUMAN, user_version=None)[source]

Register a prompt from a text or structured file.

Parameters:
  • path (str | Path) – Input file path.

  • namespace (str) – Prompt namespace.

  • name (str) – Prompt name.

  • kind (PromptKind | None) – Prompt kind for plain-text files. Ignored for structured spec files.

  • alias (str | None) – Alias to move after registration.

  • created_by (str | None) – Optional creator identifier.

  • message_role (MessageRole) – Chat role for plain-text chat prompt files.

  • user_version (str | None) – Optional user-facing version label override.

Returns:

Stored prompt version.

Return type:

PromptVersionView

Raises:

ValueError – If the file cannot be parsed into a valid prompt spec.

Examples

version = client.register_file(
    path='prompts/triage.yaml',
    namespace='support',
    name='triage',
)
resolve(ref)[source]

Resolve a prompt reference.

Parameters:

ref (PromptRef | str) – Prompt reference or compact string.

Returns:

Resolved prompt version.

Return type:

PromptVersionView

Raises:

LookupError – If resolution fails.

Examples

>>> client = PromptClient.from_env(AppSettings(database_url='sqlite:///:memory:'))
>>> version = client.register_text(namespace='x', name='y', template='Hi {name}')
>>> client.resolve('x/y:latest').version_id == version.version_id
True
get(ref)[source]

Resolve and wrap a prompt reference.

Parameters:

ref (PromptRef | str) – Prompt reference or compact string.

Returns:

Wrapped resolved prompt.

Return type:

ResolvedPrompt

Raises:

LookupError – If resolution fails.

Examples

>>> client = PromptClient.from_env(AppSettings(database_url='sqlite:///:memory:'))
>>> _ = client.register_text(namespace='x', name='y', template='Hi {name}')
>>> client.get('x/y:latest').render_text({'name': 'Will'})
'Hi Will'
render(ref, variables)[source]

Render a prompt reference directly.

Parameters:
  • ref (PromptRef | str) – Prompt reference or compact string.

  • variables (dict[str, Any]) – Runtime variables.

Returns:

Render result model.

Return type:

Any

Raises:

LookupError – If resolution fails.

Examples

>>> client = PromptClient.from_env(AppSettings(database_url='sqlite:///:memory:'))
>>> _ = client.register_text(namespace='x', name='y', template='Hi {name}')
>>> client.render('x/y:latest', {'name': 'Will'}).text
'Hi Will'
list_versions()[source]

List all stored versions.

Parameters:

None.

Returns:

Stored prompt versions.

Return type:

list[PromptVersionView]

Raises:

None.

Examples

>>> client = PromptClient.from_env(AppSettings(database_url='sqlite:///:memory:'))
>>> client.list_versions()
[]
export_to_file(ref, path)[source]

Resolve and export a version bundle to a file.

Parameters:
  • ref (PromptRef | str) – Prompt reference or compact string.

  • path (str | Path) – Output file path.

Returns:

Written file path.

Return type:

Path

Raises:

OSError – If writing fails.

Examples

client.export_to_file('support/triage:production', 'build/triage.json')
export_file(ref, path)[source]

Resolve and export a version bundle to a file.

Parameters:
  • ref (PromptRef | str) – Prompt reference or compact string.

  • path (str | Path) – Output file path.

Returns:

Written file path.

Return type:

Path

Raises:

OSError – If writing fails.

Examples

client.export_file('support/triage:production', 'build/triage.json')

Service

Orchestration layer for prompt workflows.

PromptService coordinates registration, alias movement, resolution, rendering, and export across the persistence and storage layers. Both the FastAPI API and the Rich CLI delegate to this service.

Most application code should use PromptClient instead — it wraps this service with ergonomic helpers for compact references, file registration, and LangChain materialization.

Wiring a service manually (the client does this for you):

from promptdb.db import create_all, create_session_factory
from promptdb.storage import LocalBlobStore
from promptdb.service import PromptService

create_all("sqlite:///./promptdb.sqlite3")
service = PromptService(
    session_factory=create_session_factory("sqlite:///./promptdb.sqlite3"),
    blob_store=LocalBlobStore(".blobs"),
)

Using the service:

from promptdb.domain import PromptRegistration, PromptSpec, PromptKind

version = service.register(PromptRegistration(
    namespace="support", name="triage",
    spec=PromptSpec(kind=PromptKind.STRING, template="Hi {name}"),
    alias="production",
))

resolved = service.resolve(PromptRef.parse("support/triage:production"))
result = service.render(resolved.ref, {"name": "Will"})
class promptdb.service.PromptService(session_factory, blob_store)[source]

Bases: object

Application service for prompt workflows.

Parameters:
  • session_factory – SQLAlchemy session factory (e.g. from create_session_factory).

  • blob_store – Blob storage backend (e.g. LocalBlobStore).

Returns:

Service object.

Return type:

PromptService

Raises:

None.

Examples

service = PromptService(session_factory, blob_store)
register(registration)[source]

Register a new immutable prompt version.

Parameters:

registration (PromptRegistration) – Registration payload.

Returns:

Created version.

Return type:

PromptVersionView

Raises:

SQLAlchemyError – If persistence fails.

Examples

version = service.register(registration)
move_alias(*, namespace, name, alias, version_id)[source]

Move an alias and return the target version.

Parameters:
  • namespace (str) – Prompt namespace.

  • name (str) – Prompt name.

  • alias (str) – Alias name.

  • version_id (str) – Target version id.

Returns:

Target version.

Return type:

PromptVersionView

Raises:

LookupError – If the prompt is missing.

Examples

view = service.move_alias(
    namespace='support', name='triage',
    alias='production', version_id='...',
)
resolve(ref)[source]

Resolve a prompt reference.

Parameters:

ref (PromptRef) – Prompt reference.

Returns:

Resolved version.

Return type:

PromptVersionView

Raises:

LookupError – If resolution fails.

Examples

view = service.resolve(PromptRef(namespace='support', name='triage'))
render(ref, variables)[source]

Resolve and render a prompt.

Parameters:
  • ref (PromptRef) – Prompt reference.

  • variables (dict[str, Any]) – Runtime variables.

Returns:

Rendered output.

Return type:

PromptRenderResult

Raises:

LookupError – If resolution fails.

Examples

ref = PromptRef(namespace='support', name='triage')
result = service.render(ref, {'question': 'hello'})
list_versions()[source]

List all known versions.

Parameters:

None.

Returns:

Version views.

Return type:

list[PromptVersionView]

Raises:

None.

Examples

versions = service.list_versions()
export_bundle(version, *, key_prefix='exports')[source]

Export a prompt version bundle to blob storage.

Parameters:
  • version (PromptVersionView) – Prompt version to export.

  • key_prefix (str) – Storage key prefix.

Returns:

Relational asset view linked to the stored blob.

Return type:

PromptAssetView

Raises:

OSError – If writing fails.

Examples

asset = service.export_bundle(version)
list_assets(ref)[source]

List relational blob assets for a resolved prompt version.

Parameters:

ref (PromptRef) – Prompt reference.

Returns:

Linked asset metadata.

Return type:

list[PromptAssetView]

Raises:

LookupError – If the prompt cannot be resolved.

export_to_file(version, path)[source]

Export a prompt version to a local JSON file.

Parameters:
  • version (PromptVersionView) – Prompt version to export.

  • path (str | Path) – Output file path.

Returns:

Output path.

Return type:

Path

Raises:

OSError – If writing fails.

Examples

service.export_to_file(version, 'build/version.json')

HTTP API

FastAPI HTTP API for prompt operations.

Exposes prompt registration, alias movement, resolution, rendering, version listing, and blob export over HTTP. Interactive OpenAPI docs are served at /docs when the server is running.

Starting the server:

uvicorn promptdb.api:app --reload

Endpoints (all under /api/v1 by default):

  • POST /prompts/register — register a new prompt version

  • GET  /prompts/{ns}/{name}/resolve?selector=... — resolve a reference

  • POST /prompts/{ns}/{name}/render?selector=... — render with variables

  • POST /prompts/{ns}/{name}/aliases/{alias} — move an alias

  • GET  /versions — list all stored versions

  • GET  /prompts/{ns}/{name}/assets?selector=... — list blob assets

  • GET  /exports/{ns}/{name}/{selector} — export to blob storage
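The route shapes above can be assembled programmatically. The sketch below builds the render request path and body; the /api/v1 prefix and the "variables" body field come from this page, while the helper names are purely illustrative:

```python
# Sketch only: helpers assembling request paths and bodies that match the
# route shapes listed above.
def render_path(namespace: str, name: str, selector: str,
                prefix: str = "/api/v1") -> str:
    # POST /prompts/{ns}/{name}/render?selector=...
    return f"{prefix}/prompts/{namespace}/{name}/render?selector={selector}"


def render_body(variables: dict) -> dict:
    # RenderRequest forbids extra fields, so the body carries only "variables".
    return {"variables": variables}
```

With any HTTP client, the render call then looks roughly like posting render_body({"question": "hello"}) to render_path("support", "triage", "production").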

Using the app factory in tests or custom setups:

from promptdb.api import create_app
from promptdb.settings import AppSettings

app = create_app(AppSettings(database_url="sqlite:///:memory:"))

The module-level app = create_app() instance is used by uvicorn. For production, run Alembic migrations before starting the server.

class promptdb.api.RenderRequest(**data)[source]

Bases: BaseModel

Request model for prompt rendering.

Parameters:

variables – Runtime variables.

Returns:

Render request.

Return type:

RenderRequest

Raises:

None.

Examples

>>> RenderRequest(variables={'name': 'Will'}).variables['name']
'Will'
model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}

Configuration for the model, should be a dictionary conforming to ConfigDict.

variables: dict[str, Any]
promptdb.api.build_service(settings)[source]

Build a configured prompt service.

Parameters:

settings (AppSettings) – Application settings.

Returns:

Configured service.

Return type:

PromptService

Raises:

ValueError – If storage settings are incomplete.

Examples

>>> build_service(AppSettings(database_url='sqlite:///./promptdb.sqlite3')) is not None
True
promptdb.api.create_app(settings=None)[source]

Create the FastAPI application.

Parameters:

settings (AppSettings | None) – Optional explicit settings.

Returns:

Configured app.

Return type:

FastAPI

Raises:

None.

Examples

>>> create_app().title
'promptdb'

File helpers

Load prompts from files and write specs and version bundles to disk.

Supported input formats:

  • YAML / JSON (.yaml, .yml, .json) — parsed as a full PromptSpec with kind, messages, metadata, etc.

  • Plain text (.txt, .md, .prompt, .jinja, .mustache) — the file body becomes the template. You must specify kind explicitly.

Loading a structured file:

from promptdb.files import load_prompt_file

spec = load_prompt_file("prompts/support_classifier.yaml")
print(spec.kind)       # PromptKind.CHAT
print(spec.messages)   # [ChatMessage(...), ...]

Loading a plain-text file:

spec = load_prompt_file(
    "prompts/answerer.md", kind=PromptKind.STRING,
)

Saving a spec back to disk:

from promptdb.files import save_prompt_spec

save_prompt_spec(spec, "build/classifier.yaml")   # YAML
save_prompt_spec(spec, "build/classifier.json")   # JSON

Exporting a full version bundle (includes version_id, revision, aliases):

from promptdb.files import write_version_bundle

write_version_bundle(version_view, "build/classifier.json")
promptdb.files.load_prompt_file(path, *, kind=None, message_role=MessageRole.HUMAN)[source]

Load a prompt spec from a plain-text or structured file.

Parameters:
  • path (str | Path) – File path.

  • kind (PromptKind | None) – Prompt kind for plain-text files. Structured files can omit this.

  • message_role (MessageRole) – Message role for plain-text chat prompts.

Returns:

Loaded prompt specification.

Return type:

PromptSpec

Raises:

ValueError – If a plain-text file is loaded without an explicit kind.

Examples

spec = load_prompt_file('prompts/demo.txt', kind=PromptKind.STRING)
spec = load_prompt_file('prompts/demo.yaml')
promptdb.files.save_prompt_spec(spec, path)[source]

Write a prompt spec to JSON or YAML.

Parameters:
  • spec (PromptSpec) – Prompt specification.

  • path (str | Path) – Output file path.

Returns:

Output path.

Return type:

Path

Raises:

OSError – If writing fails.

Examples

save_prompt_spec(spec, 'build/demo.yaml')
promptdb.files.write_version_bundle(version, path)[source]

Write a version bundle to a file.

Parameters:
  • version (PromptVersionView) – Prompt version to write.

  • path (str | Path) – Output file path.

Returns:

Output path.

Return type:

Path

Raises:

OSError – If writing fails.

Examples

write_version_bundle(version, 'build/version.json')

Storage

Blob storage adapters for prompt exports and artifacts.

Two adapters are provided:

  • LocalBlobStore — writes to the local filesystem. No external dependencies. Used by default and in tests.

  • MinioBlobStore — writes to an S3-compatible MinIO server. Requires the minio optional extra (pip install ooai-promptdb[minio]).

Both adapters expose the same interface: put_text, get_text, and presign_upload.

Local usage:

from promptdb.storage import LocalBlobStore

store = LocalBlobStore(".blobs")
store.put_text("exports/triage/v1.json", '{"spec": ...}')
content = store.get_text("exports/triage/v1.json")

MinIO usage:

from promptdb.storage import MinioBlobStore

store = MinioBlobStore(
    endpoint="localhost:9000",
    access_key="minioadmin",
    secret_key="minioadmin",
    bucket="promptdb",
)
store.put_text("exports/triage/v1.json", '{"spec": ...}')

The object_metadata() helper builds relational metadata dicts for persisting blob references in the prompt_assets table.

Selecting a backend is done through PROMPTDB_STORAGE_BACKEND (local or minio) in AppSettings.

class promptdb.storage.LocalBlobStore(root)[source]

Bases: object

Filesystem-backed blob store.

Parameters:

root (str | Path) – Storage root directory.

Returns:

Storage adapter.

Return type:

LocalBlobStore

Raises:

None.

Examples

>>> store = LocalBlobStore('.tmp-blobs')
>>> store.put_text('x.txt', 'x')
'x.txt'
put_text(key, content)[source]

Store text content.

Parameters:
  • key (str) – Object key.

  • content (str) – Text payload.

Returns:

Stored object key.

Return type:

str

Raises:

OSError – If writing fails.

Examples

>>> LocalBlobStore('.tmp-blobs').put_text('x.txt', 'hello')
'x.txt'
get_text(key)[source]

Read text content.

Parameters:

key (str) – Object key.

Returns:

Stored content.

Return type:

str

Raises:

FileNotFoundError – If the key does not exist.

Examples

>>> store = LocalBlobStore('.tmp-blobs')
>>> _ = store.put_text('x.txt', 'hello')
>>> store.get_text('x.txt')
'hello'
presign_upload(key, *, expires_seconds=3600)[source]

Return a pseudo upload URL for local usage.

Parameters:
  • key (str) – Object key.

  • expires_seconds (int) – Ignored expiration horizon.

Returns:

file:// URL.

Return type:

str

Raises:

None.

Examples

>>> LocalBlobStore('.tmp-blobs').presign_upload('x.txt').startswith('file://')
True
class promptdb.storage.MinioBlobStore(*, endpoint, access_key, secret_key, bucket, secure=False)[source]

Bases: object

MinIO-backed blob store.

Parameters:
  • endpoint (str) – MinIO endpoint.

  • access_key (str) – Access key.

  • secret_key (str) – Secret key.

  • bucket (str) – Bucket name.

  • secure (bool) – Whether to use TLS.

Returns:

Storage adapter.

Return type:

MinioBlobStore

Raises:

ImportError – If the MinIO package is unavailable.

Examples

store = MinioBlobStore(
    endpoint='localhost:9000',
    access_key='minioadmin',
    secret_key='minioadmin',
    bucket='promptdb',
    secure=False,
)
put_text(key, content)[source]

Upload text content.

Parameters:
  • key (str) – Object key.

  • content (str) – Text payload.

Returns:

Stored object key.

Return type:

str

Raises:

S3Error – If upload fails.

Examples

store.put_text('exports/demo.txt', 'hello')
get_text(key)[source]

Download text content.

Parameters:

key (str) – Object key.

Returns:

Text payload.

Return type:

str

Raises:

S3Error – If download fails.

Examples

body = store.get_text('exports/demo.txt')
presign_upload(key, *, expires_seconds=3600)[source]

Generate a presigned PUT URL.

Parameters:
  • key (str) – Object key.

  • expires_seconds (int) – Expiration horizon.

Returns:

Presigned URL.

Return type:

str

Raises:

S3Error – If URL generation fails.

Examples

url = store.presign_upload('exports/demo.txt')
promptdb.storage.object_metadata(store, key, *, content=None, content_type=None)[source]

Build relational metadata for a stored blob object.

Parameters:
  • store (object) – Blob store adapter.

  • key (str) – Stored object key.

  • content (str | None) – Optional content used to estimate size and checksum.

  • content_type (str | None) – Optional MIME type.

Returns:

Metadata payload for relational persistence.

Return type:

dict[str, object]

Raises:

None.

Examples

>>> meta = object_metadata(LocalBlobStore('.tmp-blobs'), 'x.txt', content='hello')
>>> meta['storage_backend']
'local'

Settings

Environment-backed application configuration.

All settings are read from PROMPTDB_* environment variables via Pydantic Settings. The defaults work for local development with SQLite and local blob storage — no external services required.

Environment variables:

  • PROMPTDB_DATABASE_URL — SQLAlchemy URL (default: sqlite:///./promptdb.sqlite3)

  • PROMPTDB_BLOB_ROOT — local blob directory (default: .blobs)

  • PROMPTDB_STORAGE_BACKEND — local or minio (default: local)

  • PROMPTDB_API_PREFIX — API route prefix (default: /api/v1)

  • PROMPTDB_MINIO_ENDPOINT — MinIO host:port (required if minio)

  • PROMPTDB_MINIO_ACCESS_KEY — MinIO access key

  • PROMPTDB_MINIO_SECRET_KEY — MinIO secret key

  • PROMPTDB_MINIO_BUCKET — MinIO bucket (default: promptdb)

  • PROMPTDB_ENABLE_METRICS — mount Prometheus /metrics (default: false)

  • PROMPTDB_ENABLE_OTEL — enable OpenTelemetry instrumentation (default: false)

  • PROMPTDB_LOG_LEVEL — root log level (default: INFO)

Usage:

from promptdb.settings import AppSettings

settings = AppSettings()                          # from env vars
settings = AppSettings(database_url="sqlite:///:memory:")  # explicit
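The PROMPTDB_ prefix convention amounts to a prefixed environment lookup. A simplified sketch follows; the real lookup is performed by pydantic-settings (case-insensitively, with validation), not by this helper:

```python
import os

# Simplified sketch of the PROMPTDB_ env-prefix convention.
def env_setting(field: str, default: str) -> str:
    return os.environ.get(f"PROMPTDB_{field.upper()}", default)
```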
class promptdb.settings.AppSettings(_case_sensitive=None, _nested_model_default_partial_update=None, _env_prefix=None, _env_prefix_target=None, _env_file=PosixPath('.'), _env_file_encoding=None, _env_ignore_empty=None, _env_nested_delimiter=None, _env_nested_max_split=None, _env_parse_none_str=None, _env_parse_enums=None, _cli_prog_name=None, _cli_parse_args=None, _cli_settings_source=None, _cli_parse_none_str=None, _cli_hide_none_type=None, _cli_avoid_json=None, _cli_enforce_required=None, _cli_use_class_docs_for_groups=None, _cli_exit_on_error=None, _cli_prefix=None, _cli_flag_prefix_char=None, _cli_implicit_flags=None, _cli_ignore_unknown_args=None, _cli_kebab_case=None, _cli_shortcuts=None, _secrets_dir=None, _build_sources=None, **values)[source]

Bases: BaseSettings

Environment-backed application settings.

Parameters:
  • database_url – SQLAlchemy database URL.

  • blob_root – Local blob storage root.

  • storage_backend – local or minio.

  • api_prefix – API route prefix.

  • service_name – Service name for logs and traces.

  • enable_metrics – Whether to expose metrics.

  • enable_otel – Whether to enable OTel wiring.

  • redis_url – Optional Redis URL.

  • minio_endpoint – MinIO endpoint.

  • minio_access_key – MinIO access key.

  • minio_secret_key – MinIO secret key.

  • minio_bucket – MinIO bucket.

  • minio_secure – Whether MinIO uses TLS.

  • log_level – Root log level.

Returns:

Loaded settings instance.

Return type:

AppSettings

Raises:

None.

Examples

>>> AppSettings(database_url="sqlite:///./x.sqlite3").storage_backend
'local'
model_config: ClassVar[SettingsConfigDict] = {'arbitrary_types_allowed': True, 'case_sensitive': False, 'cli_avoid_json': False, 'cli_enforce_required': False, 'cli_exit_on_error': True, 'cli_flag_prefix_char': '-', 'cli_hide_none_type': False, 'cli_ignore_unknown_args': False, 'cli_implicit_flags': False, 'cli_kebab_case': False, 'cli_parse_args': None, 'cli_parse_none_str': None, 'cli_prefix': '', 'cli_prog_name': None, 'cli_shortcuts': None, 'cli_use_class_docs_for_groups': False, 'enable_decoding': True, 'env_file': None, 'env_file_encoding': None, 'env_ignore_empty': False, 'env_nested_delimiter': None, 'env_nested_max_split': None, 'env_parse_enums': None, 'env_parse_none_str': None, 'env_prefix': 'PROMPTDB_', 'env_prefix_target': 'variable', 'extra': 'ignore', 'json_file': None, 'json_file_encoding': None, 'nested_model_default_partial_update': False, 'protected_namespaces': ('model_validate', 'model_dump', 'settings_customise_sources'), 'secrets_dir': None, 'toml_file': None, 'validate_default': True, 'yaml_config_section': None, 'yaml_file': None, 'yaml_file_encoding': None}

Configuration for the model, should be a dictionary conforming to ConfigDict.

database_url: str
blob_root: str
storage_backend: str
api_prefix: str
service_name: str
enable_metrics: bool
enable_otel: bool
redis_url: str | None
minio_endpoint: str | None
minio_access_key: str | None
minio_secret_key: str | None
minio_bucket: str
minio_secure: bool
log_level: str

CLI

Rich-powered CLI for local prompt operations.

The CLI provides six commands, all rendering output with Rich tables, panels, and syntax-highlighted JSON. It uses the same PromptClient and PromptService as the API.

Commands:

promptdb init                 # scaffold a workspace with sample files
promptdb list                 # list all registered prompt versions
promptdb register-file <path> <namespace> <name>  # register from file
promptdb resolve <ref>        # resolve a prompt reference to JSON
promptdb render <ref> --var key=value  # render with variables
promptdb export-file <ref> <path>     # write version bundle to disk

Prompt references use namespace/name:selector format:

promptdb resolve support/triage:production
promptdb resolve support/triage:rev:2
promptdb resolve support/triage:latest

The register-file command supports YAML, JSON, and plain text files. For plain text, specify --kind string or --kind chat.

Entry point: promptdb = promptdb.cli:main (configured in pyproject.toml).
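The entry-point wiring above corresponds to a standard [project.scripts] table in pyproject.toml, along these lines:

```toml
[project.scripts]
promptdb = "promptdb.cli:main"
```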

promptdb.cli.main(argv=None)[source]

Run the CLI.

Parameters:

argv (Sequence[str] | None) – Optional explicit command-line arguments.

Returns:

Process exit status.

Return type:

int

Raises:

SystemExit – For invalid command usage.

Examples

exit_code = main(['list'])