API reference¶
Domain models¶
Core domain models for prompt definitions, rendering, and versioning.
This module contains every Pydantic model that flows through the system. Nothing here touches the database or filesystem — it is pure data and logic.
Prompt kinds and template formats:
PromptKind.STRING — single-template prompt, rendered to a string
PromptKind.CHAT — multi-message prompt, rendered to a list of dicts
TemplateFormat.FSTRING — Python f-string syntax: ``{name}``
TemplateFormat.JINJA2 — Jinja2 syntax: ``{{ name }}``
TemplateFormat.MUSTACHE — Mustache syntax: ``{{ name }}``
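The three formats differ only in delimiter syntax. As a rough illustration of what rendering means for each style (the helper names below are hypothetical, not the library's engines — in particular, real Jinja2 and Mustache support far more than plain substitution):

```python
import re

def render_fstring(template, variables):
    # Python f-string style delimiters: {name}
    return template.format(**variables)

def render_mustache_like(template, variables):
    # Jinja2/Mustache delimiter style, substitution only: {{ name }}
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(variables[m.group(1)]),
        template,
    )

print(render_fstring("Hello {name}", {"name": "Will"}))            # Hello Will
print(render_mustache_like("Hello {{ name }}", {"name": "Will"}))  # Hello Will
```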
Building a string prompt:
spec = PromptSpec(kind=PromptKind.STRING, template="Hello {name}")
spec.render_text({"name": "Will"}) # => "Hello Will"
spec.declared_variables # => ["name"]
Building a chat prompt:
spec = PromptSpec(
kind=PromptKind.CHAT,
messages=[
ChatMessage(role=MessageRole.SYSTEM, template="You are {persona}."),
ChatMessage(role=MessageRole.HUMAN, template="{question}"),
],
partial_variables={"persona": "a helpful assistant"},
)
spec.render_messages({"question": "Hi"})
# => [{"role": "system", "content": "You are a helpful assistant."}, ...]
Prompt references:
ref = PromptRef.parse("support/triage:production")
ref.namespace # => "support"
ref.name # => "triage"
ref.selector # => "production"
Version views wrap a stored version with render and LangChain helpers:
view = PromptVersionView(...)
view.render({"name": "Will"}) # => PromptRenderResult
view.as_langchain() # => LangChain PromptTemplate
view.wrap() # => ResolvedPrompt (ergonomic wrapper)
- class promptdb.domain.PromptKind(*values)[source]¶
Bases: StrEnum
Supported prompt kinds.
- Parameters:
None.
- Returns:
Enumeration members for prompt families.
- Return type:
- Raises:
None.
Examples
>>> PromptKind.CHAT.value
'chat'
- STRING = 'string'¶
- CHAT = 'chat'¶
- class promptdb.domain.TemplateFormat(*values)[source]¶
Bases: StrEnum
Supported template formats.
- Parameters:
None.
- Returns:
Enumeration members for supported renderers.
- Return type:
- Raises:
None.
Examples
>>> TemplateFormat.MUSTACHE.value
'mustache'
- FSTRING = 'f-string'¶
- JINJA2 = 'jinja2'¶
- MUSTACHE = 'mustache'¶
- class promptdb.domain.PromptAssetKind(*values)[source]¶
Bases: StrEnum
Kinds of relational asset records linked to a prompt version.
- Parameters:
None.
- Returns:
Enumeration members for persisted asset categories.
- Return type:
- Raises:
None.
Examples
>>> PromptAssetKind.EXPORT_BUNDLE.value
'export_bundle'
- EXPORT_BUNDLE = 'export_bundle'¶
- ATTACHMENT = 'attachment'¶
- EXAMPLE_DATASET = 'example_dataset'¶
- SNAPSHOT = 'snapshot'¶
- class promptdb.domain.MessageRole(*values)[source]¶
Bases: StrEnum
Supported chat message roles.
- Parameters:
None.
- Returns:
Enumeration members for message roles.
- Return type:
- Raises:
None.
Examples
>>> MessageRole.SYSTEM.value
'system'
- SYSTEM = 'system'¶
- HUMAN = 'human'¶
- AI = 'ai'¶
- GENERIC = 'generic'¶
- class promptdb.domain.PromptMetadata(**data)[source]¶
Bases: BaseModel
Rich metadata attached to a prompt version.
- Parameters:
title – Human-friendly title.
description – Longer description.
tags – Search tags.
owners – User or team identifiers.
labels – Arbitrary key-value labels.
source_path – Optional file path used during import.
user_version – Optional caller-friendly version label.
- Returns:
Metadata payload.
- Return type:
- Raises:
None.
Examples
>>> PromptMetadata(title="Classifier", tags=["support"]).title
'Classifier'
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}¶
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class promptdb.domain.ChatMessage(**data)[source]¶
Bases: BaseModel
Concrete chat message template.
- Parameters:
role – Message role.
template – Message template body.
name – Optional participant name.
additional_kwargs – Additional message metadata.
- Returns:
Chat message template object.
- Return type:
- Raises:
None.
Examples
>>> ChatMessage(role=MessageRole.HUMAN, template="{question}").template
'{question}'
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}¶
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- role: MessageRole¶
- class promptdb.domain.MessagePlaceholder(**data)[source]¶
Bases: BaseModel
Placeholder for a runtime list of messages.
- Parameters:
variable_name – Input variable containing a list of messages.
optional – Whether an empty value is allowed.
- Returns:
Placeholder model.
- Return type:
- Raises:
None.
Examples
>>> MessagePlaceholder(variable_name="history").variable_name
'history'
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}¶
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class promptdb.domain.FewShotBlock(**data)[source]¶
Bases: BaseModel
Lightweight few-shot configuration.
- Parameters:
examples – Example variable mappings.
string_template – Template used for string examples.
chat_messages – Message templates used for chat examples.
insert_at – Insertion index in chat mode.
example_separator – Separator used in string mode.
- Returns:
Few-shot configuration.
- Return type:
- Raises:
ValueError – If neither a string template nor chat messages are supplied.
Examples
>>> FewShotBlock(examples=[{"x": "1"}], string_template="{x}").examples[0]["x"]
'1'
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}¶
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- chat_messages: list[ChatMessage]¶
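In string mode, few-shot rendering amounts to formatting each example mapping with string_template and joining the results with example_separator. The sketch below illustrates that idea only; render_few_shot_block is a hypothetical helper, not the library's implementation:

```python
def render_few_shot_block(examples, string_template, example_separator="\n\n"):
    # Format each example dict with the template, then join with the separator.
    return example_separator.join(
        string_template.format(**example) for example in examples
    )

examples = [{"q": "2+2", "a": "4"}, {"q": "3+3", "a": "6"}]
print(render_few_shot_block(examples, "Q: {q}\nA: {a}"))
```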
- class promptdb.domain.PromptSpec(**data)[source]¶
Bases: BaseModel
Prompt definition that can render directly or materialize into LangChain.
- Parameters:
kind – Prompt kind.
template_format – Template engine.
template – Root template for string prompts.
messages – Message sequence for chat prompts.
input_variables – Declared required variables.
optional_variables – Declared optional variables.
partial_variables – Stored partial variables merged at render time.
few_shot – Optional few-shot examples.
metadata – Rich prompt metadata.
- Returns:
Prompt definition.
- Return type:
- Raises:
ValueError – If the shape is invalid for the selected prompt kind.
Examples
>>> PromptSpec(kind=PromptKind.STRING, template="Hello {name}").declared_variables
['name']
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}¶
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- kind: PromptKind¶
- template_format: TemplateFormat¶
- messages: list[ChatMessage | MessagePlaceholder]¶
- few_shot: FewShotBlock | None¶
- metadata: PromptMetadata¶
- property declared_variables: list[str]¶
Return discovered and explicitly declared variables.
- Parameters:
self – Model instance.
- Returns:
Sorted variable names.
- Return type:
- Raises:
None.
Examples
>>> PromptSpec(kind=PromptKind.STRING, template="{x} {y}").declared_variables
['x', 'y']
- merged_variables(variables=None)[source]¶
Merge runtime variables with stored partial variables.
- Parameters:
- Returns:
Merged variables.
- Return type:
- Raises:
None.
Examples
>>> spec = PromptSpec(
...     kind=PromptKind.STRING, template="{name}",
...     partial_variables={"name": "Will"},
... )
>>> spec.merged_variables({})
{'name': 'Will'}
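The merge semantics — runtime values take precedence over stored partial variables — can be expressed as a plain dict merge. A minimal sketch of that precedence (merged_variables below is a standalone illustration, not the model method):

```python
def merged_variables(partials, runtime=None):
    # Runtime variables override stored partial variables on key collisions.
    return {**partials, **(runtime or {})}

print(merged_variables({"persona": "a helpful assistant"}, {"persona": "a pirate"}))
```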
- render_text(variables=None)[source]¶
Render a string prompt.
- Parameters:
- Returns:
Rendered text.
- Return type:
- Raises:
TypeError – If called on a chat prompt.
Examples
>>> spec = PromptSpec(kind=PromptKind.STRING, template="Hello {name}")
>>> spec.render_text({"name": "Will"})
'Hello Will'
- render_messages(variables=None)[source]¶
Render a chat prompt.
- Parameters:
- Returns:
Rendered message payloads.
- Return type:
- Raises:
TypeError – If called on a string prompt.
Examples
>>> msgs = [ChatMessage(role=MessageRole.HUMAN, template="Hi {name}")]
>>> spec = PromptSpec(kind=PromptKind.CHAT, messages=msgs)
>>> spec.render_messages({"name": "Will"})[0]["content"]
'Hi Will'
- to_langchain()[source]¶
Materialize the prompt into a LangChain prompt object.
- Parameters:
self – Model instance.
- Returns:
LangChain prompt object.
- Return type:
- Raises:
ImportError – If langchain-core is unavailable.
Examples
>>> spec = PromptSpec(kind=PromptKind.STRING, template="Hi {name}")
>>> spec.to_langchain().__class__.__name__
'PromptTemplate'
- promptdb.domain.render_template(template, variables, fmt)[source]¶
Render a template with the selected engine.
- Parameters:
template (str) – Template text.
variables (dict[str, Any]) – Runtime variables.
fmt (TemplateFormat) – Template format.
- Returns:
Rendered text.
- Return type:
- Raises:
KeyError – If a required variable is missing.
Examples
>>> render_template("Hello {name}", {"name": "Will"}, TemplateFormat.FSTRING)
'Hello Will'
- promptdb.domain.extract_variables(template, fmt)[source]¶
Extract variable names from a template.
- Parameters:
template (str) – Template text.
fmt (TemplateFormat) – Template format.
- Returns:
Sorted variable names.
- Return type:
- Raises:
None.
Examples
>>> extract_variables("Hello {name}", TemplateFormat.FSTRING)
['name']
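For the f-string format, variable discovery can be done with the standard library's string.Formatter, which tokenizes a template into literal and field chunks. This is an illustrative sketch of the idea (extract_fstring_variables is a hypothetical name; the library may use a different mechanism per format):

```python
from string import Formatter

def extract_fstring_variables(template):
    # Formatter().parse yields (literal, field_name, format_spec, conversion)
    # tuples; field_name is None for literal-only chunks.
    names = {field for _, field, _, _ in Formatter().parse(template) if field}
    return sorted(names)

print(extract_fstring_variables("Hello {name}, welcome to {place}"))
```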
- class promptdb.domain.PromptRef(**data)[source]¶
Bases: BaseModel
Reference to a prompt and selector.
- Parameters:
namespace – Prompt namespace.
name – Prompt name.
selector – Alias, user-facing version label, or concrete version id.
- Returns:
Reference payload.
- Return type:
- Raises:
None.
Examples
>>> PromptRef.parse("support/triage:production").selector
'production'
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}¶
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- classmethod parse(value)[source]¶
Parse a compact namespace/name:selector reference.
- Parameters:
value (str) – Compact reference string.
- Returns:
Parsed prompt reference.
- Return type:
- Raises:
ValueError – If the input is malformed.
Examples
>>> PromptRef.parse("support/triage")
PromptRef(namespace='support', name='triage', selector='latest')
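The compact grammar splits the namespace off at the first "/" and the selector off at the first ":", defaulting the selector to latest. A hypothetical re-implementation for illustration only (parse_ref is not the library's code):

```python
def parse_ref(value):
    # "namespace/name[:selector]" -> (namespace, name, selector)
    if "/" not in value:
        raise ValueError(f"malformed reference: {value!r}")
    namespace, rest = value.split("/", 1)
    name, _, selector = rest.partition(":")
    if not namespace or not name:
        raise ValueError(f"malformed reference: {value!r}")
    return namespace, name, selector or "latest"

print(parse_ref("support/triage:production"))
print(parse_ref("support/triage"))
```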
- property resource_id: str¶
Return the stable prompt resource identifier.
- Parameters:
self – Model instance.
- Returns:
namespace/name identifier.
- Return type:
- Raises:
None.
Examples
>>> PromptRef(namespace="support", name="triage").resource_id
'support/triage'
- class promptdb.domain.PromptRegistration(**data)[source]¶
Bases: BaseModel
Registration request payload.
- Parameters:
namespace – Prompt namespace.
name – Prompt name.
spec – Prompt spec.
created_by – Creator identifier.
alias – Alias to move after registration.
- Returns:
Registration payload.
- Return type:
- Raises:
None.
Examples
>>> spec = PromptSpec(kind=PromptKind.STRING, template="hi")
>>> PromptRegistration(namespace="x", name="y", spec=spec).name
'y'
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}¶
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- spec: PromptSpec¶
- class promptdb.domain.AliasMove(**data)[source]¶
Bases: BaseModel
Alias movement payload.
- Parameters:
alias – Alias name.
version_id – Target version id.
- Returns:
Alias movement request.
- Return type:
- Raises:
None.
Examples
>>> AliasMove(alias="production", version_id="v1").alias
'production'
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}¶
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class promptdb.domain.PromptAssetView(**data)[source]¶
Bases: BaseModel
Blob-backed asset metadata linked to a prompt version.
- Parameters:
asset_id – Unique asset id.
version_id – Owning prompt version id.
kind – Asset kind.
storage_backend – Storage backend name.
bucket – Logical or physical bucket/container name.
object_key – Blob object key.
content_type – MIME content type.
byte_size – Optional object size.
checksum_sha256 – Optional checksum.
metadata_json – User-defined metadata.
created_at – Creation timestamp.
- Returns:
Asset metadata.
- Return type:
- Raises:
None.
Examples
>>> av = PromptAssetView(
...     asset_id='a', version_id='v',
...     kind=PromptAssetKind.EXPORT_BUNDLE,
...     storage_backend='local', bucket='promptdb',
...     object_key='x.json',
... )
>>> av.object_key
'x.json'
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}¶
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- kind: PromptAssetKind¶
- class promptdb.domain.PromptVersionView(**data)[source]¶
Bases: BaseModel
API-ready view over an immutable prompt version.
- Parameters:
version_id – Version identifier.
namespace – Prompt namespace.
name – Prompt name.
revision – Monotonic revision.
user_version – User-facing version label.
spec – Prompt spec.
created_by – Creator identifier.
aliases – Aliases pointing to this version.
- Returns:
Prompt version view.
- Return type:
- Raises:
None.
Examples
>>> spec = PromptSpec(kind=PromptKind.STRING, template="hi")
>>> view = PromptVersionView(
...     version_id="v1", namespace="x", name="y",
...     revision=1, spec=spec,
... )
>>> view.revision
1
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}¶
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- spec: PromptSpec¶
- assets: list[PromptAssetView]¶
- property ref: PromptRef¶
Return a convenient immutable reference to this exact version.
- Parameters:
self – Model instance.
- Returns:
Version reference.
- Return type:
- Raises:
None.
Examples
>>> spec = PromptSpec(kind=PromptKind.STRING, template="hi")
>>> view = PromptVersionView(
...     version_id="v1", namespace="x", name="y",
...     revision=1, spec=spec,
... )
>>> view.ref.full_name
'x/y:v1'
- render(variables=None)[source]¶
Render the current version directly.
- Parameters:
- Returns:
Render output.
- Return type:
- Raises:
TypeError – If the prompt kind is unsupported.
Examples
>>> spec = PromptSpec(kind=PromptKind.STRING, template="Hi {name}")
>>> view = PromptVersionView(
...     version_id="v1", namespace="x", name="y",
...     revision=1, spec=spec,
... )
>>> view.render({"name": "Will"}).text
'Hi Will'
- as_langchain()[source]¶
Materialize the current version into a LangChain prompt.
- Parameters:
self – Model instance.
- Returns:
LangChain prompt object.
- Return type:
- Raises:
ImportError – If langchain-core is unavailable.
Examples
>>> spec = PromptSpec(kind=PromptKind.STRING, template="Hi {name}")
>>> view = PromptVersionView(
...     version_id="v1", namespace="x", name="y",
...     revision=1, spec=spec,
... )
>>> view.as_langchain().__class__.__name__
'PromptTemplate'
- wrap()[source]¶
Wrap the version in an ergonomic resolved-prompt object.
- Parameters:
self – Model instance.
- Returns:
Wrapper exposing render and materialization helpers.
- Return type:
- Raises:
None.
Examples
>>> spec = PromptSpec(kind=PromptKind.STRING, template="Hi {name}")
>>> view = PromptVersionView(
...     version_id="v1", namespace="x", name="y",
...     revision=1, spec=spec,
... )
>>> view.wrap().ref.full_name
'x/y:v1'
- class promptdb.domain.ResolvedPrompt(**data)[source]¶
Bases: BaseModel
Ergonomic wrapper around a resolved prompt version.
- Parameters:
version – Resolved prompt version.
- Returns:
Rich wrapper object.
- Return type:
- Raises:
None.
Examples
>>> spec = PromptSpec(kind=PromptKind.STRING, template="Hi {name}")
>>> version = PromptVersionView(
...     version_id="v1", namespace="x", name="y",
...     revision=1, spec=spec,
... )
>>> ResolvedPrompt(version=version).ref.full_name
'x/y:v1'
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}¶
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- version: PromptVersionView¶
- property ref: PromptRef¶
Return an immutable reference to the concrete version.
- Parameters:
self – Model instance.
- Returns:
Version reference.
- Return type:
- Raises:
None.
Examples
>>> _s = PromptSpec(kind=PromptKind.STRING, template="Hi {name}")
>>> version = PromptVersionView(
...     version_id="v1", namespace="x", name="y", revision=1, spec=_s,
... )
>>> ResolvedPrompt(version=version).ref.selector
'v1'
- as_langchain()[source]¶
Materialize the wrapped prompt as a LangChain object.
- Parameters:
self – Model instance.
- Returns:
LangChain prompt object.
- Return type:
- Raises:
ImportError – If langchain-core is unavailable.
Examples
>>> _s = PromptSpec(kind=PromptKind.STRING, template="Hi {name}")
>>> version = PromptVersionView(
...     version_id="v1", namespace="x", name="y", revision=1, spec=_s,
... )
>>> ResolvedPrompt(version=version).as_langchain().__class__.__name__
'PromptTemplate'
- render(variables=None)[source]¶
Render the wrapped prompt.
- Parameters:
- Returns:
Rendered prompt output.
- Return type:
- Raises:
TypeError – If the prompt kind and helper mismatch.
Examples
>>> _s = PromptSpec(kind=PromptKind.STRING, template="Hi {name}")
>>> version = PromptVersionView(
...     version_id="v1", namespace="x", name="y", revision=1, spec=_s,
... )
>>> ResolvedPrompt(version=version).render({"name": "Will"}).text
'Hi Will'
- render_text(variables=None)[source]¶
Render the wrapped prompt as text.
- Parameters:
- Returns:
Rendered text.
- Return type:
- Raises:
TypeError – If the wrapped prompt is not a string prompt.
Examples
>>> _s = PromptSpec(kind=PromptKind.STRING, template="Hi {name}")
>>> version = PromptVersionView(
...     version_id="v1", namespace="x", name="y", revision=1, spec=_s,
... )
>>> ResolvedPrompt(version=version).render_text({"name": "Will"})
'Hi Will'
- render_messages(variables=None)[source]¶
Render the wrapped prompt as messages.
- Parameters:
- Returns:
Rendered chat messages.
- Return type:
- Raises:
TypeError – If the wrapped prompt is not a chat prompt.
Examples
>>> _msg = ChatMessage(role=MessageRole.HUMAN, template="{question}")
>>> _s = PromptSpec(kind=PromptKind.CHAT, messages=[_msg])
>>> version = PromptVersionView(
...     version_id="v1", namespace="x", name="y", revision=1, spec=_s,
... )
>>> ResolvedPrompt(version=version).render_messages({"question": "Hi"})[0]["content"]
'Hi'
- invoke(variables=None)[source]¶
Invoke the underlying LangChain prompt object.
- Parameters:
- Returns:
LangChain prompt value.
- Return type:
- Raises:
AttributeError – If the underlying object lacks
invoke.
Examples
>>> _s = PromptSpec(kind=PromptKind.STRING, template="Hi {name}") >>> version = PromptVersionView( ... version_id="v1", namespace="x", name="y", revision=1, spec=_s, ... ) >>> ResolvedPrompt(version=version).invoke({"name": "Will"}).text 'Hi Will'
- class promptdb.domain.PromptRenderResult(**data)[source]¶
Bases: BaseModel
Rendered prompt result.
- Parameters:
ref – Prompt reference.
version – Resolved version.
text – Rendered string prompt.
messages – Rendered chat messages.
- Returns:
Render result.
- Return type:
- Raises:
None.
Examples
>>> _s = PromptSpec(kind=PromptKind.STRING, template="hi")
>>> view = PromptVersionView(
...     version_id="v1", namespace="x", name="y", revision=1, spec=_s,
... )
>>> ref = PromptRef(namespace="x", name="y")
>>> PromptRenderResult(ref=ref, version=view, text="hi").text
'hi'
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}¶
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- version: PromptVersionView¶
Client¶
Ergonomic Python client for prompt registration, resolution, and rendering.
PromptClient is the main entry point for application code. It wraps
PromptService and adds compact reference parsing,
file-based registration, and a ResolvedPrompt
wrapper with render and LangChain helpers.
Creating a client:
from promptdb import PromptClient
# Reads PROMPTDB_* env vars (database URL, blob root, storage backend)
client = PromptClient.from_env()
# Or with explicit settings
from promptdb import AppSettings
client = PromptClient.from_env(AppSettings(
database_url="postgresql://user:pass@localhost/promptdb",
))
Registering prompts — three approaches:
# 1. From inline text
client.register_text(
namespace="support", name="triage",
template="Hello {name}", kind=PromptKind.STRING,
alias="production",
)
# 2. From a PromptSpec object
client.register_spec(namespace="support", name="triage", spec=spec)
# 3. From a YAML/JSON/text file
client.register_file(path="prompts/triage.yaml",
namespace="support", name="triage")
Resolving and rendering:
resolved = client.get("support/triage:production") # ResolvedPrompt
text = resolved.render_text({"name": "Will"})
lc = resolved.as_langchain() # LangChain prompt
value = resolved.invoke({"name": "Will"}) # LangChain invoke
Selectors: latest, production, rev:2, 2026.04.01.1, or a UUID.
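One plausible way a resolver could distinguish these selector forms is by checking the concrete shapes first and treating anything else as an alias or user-facing version label. This classify_selector sketch is a guess at that dispatch for illustration, not the library's actual resolution logic:

```python
import uuid

def classify_selector(selector):
    # Order matters: concrete identifiers are recognized before falling
    # back to alias / user-version lookup.
    if selector == "latest":
        return "latest"
    if selector.startswith("rev:") and selector[4:].isdigit():
        return "revision"
    try:
        uuid.UUID(selector)
        return "version_id"
    except ValueError:
        pass
    return "alias_or_user_version"

print(classify_selector("rev:2"))
print(classify_selector("production"))
```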
- class promptdb.client.PromptClient(service)[source]¶
Bases: object
Developer-friendly facade over PromptService.
- Parameters:
service (PromptService) – Prompt service instance.
- Returns:
Local prompt client.
- Return type:
- Raises:
None.
Examples
client = PromptClient.from_env()
resolved = client.get("support/triage:latest")
- classmethod from_env(settings=None)[source]¶
Create a client from environment-backed settings.
- Parameters:
settings (AppSettings | None) – Optional explicit settings.
- Returns:
Configured client.
- Return type:
- Raises:
ValueError – If the storage backend is misconfigured.
Examples
>>> settings = AppSettings(database_url='sqlite:///:memory:')
>>> isinstance(PromptClient.from_env(settings), PromptClient)
True
- register_spec(*, namespace, name, spec, created_by=None, alias='latest')[source]¶
Register a prompt spec.
- Parameters:
- Returns:
Stored prompt version.
- Return type:
- Raises:
LookupError – If alias movement fails.
Examples
version = client.register_spec(namespace='support', name='triage', spec=spec)
- register_text(*, namespace, name, template, kind=PromptKind.STRING, alias='latest', created_by=None, metadata=None, template_format=TemplateFormat.FSTRING, partial_variables=None, role=MessageRole.HUMAN)[source]¶
Register a prompt directly from text.
- Parameters:
namespace (
str) – Prompt namespace.name (
str) – Prompt name.template (
str) – Root template or message template.kind (
PromptKind) – Prompt kind.metadata (
PromptMetadata|None) – Optional prompt metadata.template_format (
TemplateFormat) – Template engine.partial_variables (
dict[str,Any] |None) – Stored partial variables.role (
MessageRole) – Chat role whenkindischat.
- Returns:
Stored prompt version.
- Return type:
- Raises:
ValueError – If the prompt shape is invalid.
Examples
version = client.register_text( namespace='support', name='triage', template='Hello {name}', )
- register_file(*, path, namespace, name, kind=None, alias='latest', created_by=None, message_role=MessageRole.HUMAN, user_version=None)[source]¶
Register a prompt from a text or structured file.
- Parameters:
namespace (str) – Prompt namespace.
name (str) – Prompt name.
kind (PromptKind | None) – Prompt kind for plain-text files. Ignored for structured spec files.
message_role (MessageRole) – Chat role for plain-text chat prompt files.
user_version (str | None) – Optional user-facing version label override.
- Returns:
Stored prompt version.
- Return type:
- Raises:
FileNotFoundError – If the file does not exist.
ValueError – If kind is omitted for plain-text files.
Examples
version = client.register_file(
    path='prompts/triage.yaml',
    namespace='support', name='triage',
)
- resolve(ref)[source]¶
Resolve a prompt reference.
- Parameters:
- Returns:
Resolved prompt version.
- Return type:
- Raises:
LookupError – If resolution fails.
Examples
>>> client = PromptClient.from_env(AppSettings(database_url='sqlite:///:memory:'))
>>> version = client.register_text(namespace='x', name='y', template='Hi {name}')
>>> client.resolve('x/y:latest').version_id == version.version_id
True
- get(ref)[source]¶
Resolve and wrap a prompt reference.
- Parameters:
- Returns:
Wrapped resolved prompt.
- Return type:
- Raises:
LookupError – If resolution fails.
Examples
>>> client = PromptClient.from_env(AppSettings(database_url='sqlite:///:memory:'))
>>> _ = client.register_text(namespace='x', name='y', template='Hi {name}')
>>> client.get('x/y:latest').render_text({'name': 'Will'})
'Hi Will'
- render(ref, variables)[source]¶
Render a prompt reference directly.
- Parameters:
- Returns:
Render result model.
- Return type:
- Raises:
LookupError – If resolution fails.
Examples
>>> client = PromptClient.from_env(AppSettings(database_url='sqlite:///:memory:'))
>>> _ = client.register_text(namespace='x', name='y', template='Hi {name}')
>>> client.render('x/y:latest', {'name': 'Will'}).text
'Hi Will'
- list_versions()[source]¶
List all stored versions.
- Parameters:
None.
- Returns:
Stored prompt versions.
- Return type:
- Raises:
None.
Examples
>>> client = PromptClient.from_env(AppSettings(database_url='sqlite:///:memory:'))
>>> client.list_versions()
[]
- export_to_file(ref, path)[source]¶
Resolve and export a version bundle to a file.
- Parameters:
- Returns:
Written file path.
- Return type:
- Raises:
LookupError – If resolution fails.
OSError – If writing fails.
Examples
client.export_to_file('support/triage:production', 'build/triage.json')
- export_file(ref, path)[source]¶
Resolve and export a version bundle to a file.
- Parameters:
- Returns:
Written file path.
- Return type:
- Raises:
LookupError – If resolution fails.
OSError – If writing fails.
Examples
client.export_file('support/triage:production', 'build/triage.json')
Service¶
Orchestration layer for prompt workflows.
PromptService coordinates registration, alias movement, resolution,
rendering, and export across the persistence and storage layers. Both the
FastAPI API and the Rich CLI delegate to this service.
Most application code should use PromptClient
instead — it wraps this service with ergonomic helpers for compact references,
file registration, and LangChain materialization.
Wiring a service manually (the client does this for you):
from promptdb.db import create_all, create_session_factory
from promptdb.storage import LocalBlobStore
from promptdb.service import PromptService
create_all("sqlite:///./promptdb.sqlite3")
service = PromptService(
session_factory=create_session_factory("sqlite:///./promptdb.sqlite3"),
blob_store=LocalBlobStore(".blobs"),
)
Using the service:
from promptdb.domain import PromptRegistration, PromptSpec, PromptKind
version = service.register(PromptRegistration(
namespace="support", name="triage",
spec=PromptSpec(kind=PromptKind.STRING, template="Hi {name}"),
alias="production",
))
resolved = service.resolve(PromptRef.parse("support/triage:production"))
result = service.render(resolved.ref, {"name": "Will"})
- class promptdb.service.PromptService(session_factory, blob_store)[source]¶
Bases: object
Application service for prompt workflows.
- Parameters:
session_factory (sessionmaker[Session]) – SQLAlchemy session factory.
blob_store (LocalBlobStore | MinioBlobStore) – Storage adapter with put_text and get_text.
- Returns:
Service object.
- Return type:
- Raises:
None.
Examples
service = PromptService(session_factory, blob_store)
- register(registration)[source]¶
Register a new immutable prompt version.
- Parameters:
registration (PromptRegistration) – Registration payload.
- Returns:
Created version.
- Return type:
- Raises:
SQLAlchemyError – If persistence fails.
Examples
version = service.register(registration)
- move_alias(*, namespace, name, alias, version_id)[source]¶
Move an alias and return the target version.
- Parameters:
- Returns:
Target version.
- Return type:
- Raises:
LookupError – If the prompt is missing.
Examples
view = service.move_alias(
    namespace='support', name='triage',
    alias='production', version_id='...',
)
- resolve(ref)[source]¶
Resolve a prompt reference.
- Parameters:
ref (PromptRef) – Prompt reference.
- Returns:
Resolved version.
- Return type:
- Raises:
LookupError – If resolution fails.
Examples
view = service.resolve(PromptRef(namespace='support', name='triage'))
- render(ref, variables)[source]¶
Resolve and render a prompt.
- Parameters:
- Returns:
Rendered output.
- Return type:
- Raises:
LookupError – If resolution fails.
Examples
ref = PromptRef(namespace='support', name='triage')
result = service.render(ref, {'question': 'hello'})
- list_versions()[source]¶
List all known versions.
- Parameters:
None.
- Returns:
Version views.
- Return type:
- Raises:
None.
Examples
versions = service.list_versions()
- export_bundle(version, *, key_prefix='exports')[source]¶
Export a prompt version bundle to blob storage.
- Parameters:
version (PromptVersionView) – Prompt version to export.
key_prefix (str) – Storage key prefix.
- Returns:
Relational asset view linked to the stored blob.
- Return type:
- Raises:
OSError – If writing fails.
Examples
key = service.export_bundle(version)
- list_assets(ref)[source]¶
List relational blob assets for a resolved prompt version.
- Parameters:
ref (PromptRef) – Prompt reference.
- Returns:
Linked asset metadata.
- Return type:
- Raises:
LookupError – If the prompt cannot be resolved.
- export_to_file(version, path)[source]¶
Export a prompt version to a local JSON file.
- Parameters:
version (PromptVersionView) – Prompt version.
path – Output file path.
- Returns:
Output path.
- Return type:
- Raises:
OSError – If writing fails.
Examples
service.export_to_file(version, 'build/version.json')
HTTP API¶
FastAPI HTTP API for prompt operations.
Exposes prompt registration, alias movement, resolution, rendering, version
listing, and blob export over HTTP. Interactive OpenAPI docs are served at
/docs when the server is running.
Starting the server:
uvicorn promptdb.api:app --reload
Endpoints (all under /api/v1 by default):
POST /prompts/register — register a new prompt version
GET /prompts/{ns}/{name}/resolve?selector=... — resolve a reference
POST /prompts/{ns}/{name}/render?selector=... — render with variables
POST /prompts/{ns}/{name}/aliases/{alias} — move an alias
GET /versions — list all stored versions
GET /prompts/{ns}/{name}/assets?selector=... — list blob assets
GET /exports/{ns}/{name}/{selector} — export to blob storage
Using the app factory in tests or custom setups:
from promptdb.api import create_app
from promptdb.settings import AppSettings
app = create_app(AppSettings(database_url="sqlite:///:memory:"))
The module-level app = create_app() instance is used by uvicorn. For
production, run Alembic migrations before starting the server.
- class promptdb.api.RenderRequest(**data)[source]¶
Bases: BaseModel
Request model for prompt rendering.
- Parameters:
variables – Runtime variables.
- Returns:
Render request.
- Return type:
- Raises:
None.
Examples
>>> RenderRequest(variables={'name': 'Will'}).variables['name']
'Will'
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}¶
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- promptdb.api.build_service(settings)[source]¶
Build a configured prompt service.
- Parameters:
settings (AppSettings) – Application settings.
- Returns:
Configured service.
- Return type:
- Raises:
ValueError – If storage settings are incomplete.
Examples
>>> build_service(AppSettings(database_url='sqlite:///./promptdb.sqlite3')) is not None
True
- promptdb.api.create_app(settings=None)[source]¶
Create the FastAPI application.
- Parameters:
settings (AppSettings | None) – Optional explicit settings.
- Returns:
Configured app.
- Return type:
FastAPI
- Raises:
None.
Examples
>>> create_app().title
'promptdb'
File helpers¶
Load prompts from files and write specs and version bundles to disk.
Supported input formats:
YAML / JSON (.yaml, .yml, .json) — parsed as a full PromptSpec with kind, messages, metadata, etc.
Plain text (.txt, .md, .prompt, .jinja, .mustache) — the file body becomes the template. You must specify kind explicitly.
Loading a structured file:
from promptdb.files import load_prompt_file
spec = load_prompt_file("prompts/support_classifier.yaml")
print(spec.kind) # PromptKind.CHAT
print(spec.messages) # [ChatMessage(...), ...]
Loading a plain-text file:
spec = load_prompt_file(
"prompts/answerer.md", kind=PromptKind.STRING,
)
Saving a spec back to disk:
from promptdb.files import save_prompt_spec
save_prompt_spec(spec, "build/classifier.yaml") # YAML
save_prompt_spec(spec, "build/classifier.json") # JSON
Exporting a full version bundle (includes version_id, revision, aliases):
from promptdb.files import write_version_bundle
write_version_bundle(version_view, "build/classifier.json")
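A version bundle can be approximated as plain JSON. The sketch below is illustrative: only the version_id, revision, and aliases fields are named by the docs, and the exact on-disk schema is an assumption.

```python
import json
from pathlib import Path

def sketch_write_bundle(spec_dict, version_id, revision, aliases, path):
    """Write a JSON bundle shaped like the documented fields."""
    bundle = {
        "version_id": version_id,
        "revision": revision,
        "aliases": aliases,
        "spec": spec_dict,  # the prompt spec payload itself
    }
    Path(path).write_text(json.dumps(bundle, indent=2), encoding="utf-8")
    return path
```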
- promptdb.files.load_prompt_file(path, *, kind=None, message_role=MessageRole.HUMAN)[source]¶
Load a prompt spec from a plain-text or structured file.
- Parameters:
path – Path to the prompt file.
kind (PromptKind | None) – Prompt kind for plain-text files. Structured files can omit this.
message_role (MessageRole) – Message role for plain-text chat prompts.
- Returns:
Loaded prompt specification.
- Return type:
- Raises:
FileNotFoundError – If the file is missing.
ValueError – If kind is omitted for plain-text files.
Examples
spec = load_prompt_file('prompts/demo.txt', kind=PromptKind.STRING)
spec = load_prompt_file('prompts/demo.yaml')
- promptdb.files.save_prompt_spec(spec, path)[source]¶
Write a prompt spec to JSON or YAML.
- Parameters:
spec (PromptSpec) – Prompt specification.
path – Output path (.yaml, .yml, or .json).
- Returns:
Output path.
- Return type:
- Raises:
ValueError – If the suffix is unsupported.
OSError – If writing fails.
Examples
save_prompt_spec(spec, 'build/demo.yaml')
- promptdb.files.write_version_bundle(version, path)[source]¶
Write a version bundle to a file.
- Parameters:
version (PromptVersionView) – Prompt version.
path – Output path.
- Returns:
Output path.
- Return type:
- Raises:
OSError – If writing fails.
Examples
write_version_bundle(version, 'build/version.json')
Storage¶
Blob storage adapters for prompt exports and artifacts.
Two adapters are provided:
LocalBlobStore — writes to the local filesystem. No external dependencies. Used by default and in tests.
MinioBlobStore — writes to an S3-compatible MinIO server. Requires the minio optional extra (pip install ooai-promptdb[minio]).
Both adapters expose the same interface: put_text, get_text, and
presign_upload.
Local usage:
from promptdb.storage import LocalBlobStore
store = LocalBlobStore(".blobs")
store.put_text("exports/triage/v1.json", '{"spec": ...}')
content = store.get_text("exports/triage/v1.json")
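The filesystem adapter's behavior can be approximated in a few lines. TinyLocalStore below is an illustrative stand-in, not the shipped LocalBlobStore; it covers only the put_text/get_text half of the documented interface:

```python
from pathlib import Path

class TinyLocalStore:
    """Minimal filesystem blob store with a put_text/get_text shape."""

    def __init__(self, root):
        self.root = Path(root)

    def put_text(self, key, content):
        path = self.root / key
        # Keys may contain slashes, so create nested prefix directories.
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(content, encoding="utf-8")
        return key

    def get_text(self, key):
        # Raises FileNotFoundError if the key was never stored.
        return (self.root / key).read_text(encoding="utf-8")
```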
MinIO usage:
from promptdb.storage import MinioBlobStore
store = MinioBlobStore(
endpoint="localhost:9000",
access_key="minioadmin",
secret_key="minioadmin",
bucket="promptdb",
)
store.put_text("exports/triage/v1.json", '{"spec": ...}')
The object_metadata() helper builds relational metadata dicts for
persisting blob references in the prompt_assets table.
Selecting a backend is done through PROMPTDB_STORAGE_BACKEND (local
or minio) in AppSettings.
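The backend selection can be sketched as a plain environment lookup. choose_backend is a hypothetical helper mirroring the documented variable name and default, not the actual AppSettings logic:

```python
import os

def choose_backend(env=None):
    """Pick the blob backend name from PROMPTDB_STORAGE_BACKEND."""
    env = os.environ if env is None else env
    backend = env.get("PROMPTDB_STORAGE_BACKEND", "local").lower()
    if backend not in {"local", "minio"}:
        raise ValueError(f"unknown storage backend: {backend}")
    return backend
```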
- class promptdb.storage.LocalBlobStore(root)[source]¶
Bases: object
Filesystem-backed blob store.
- Parameters:
root – Root directory for stored blobs.
- Returns:
Storage adapter.
- Return type:
- Raises:
None.
Examples
>>> store = LocalBlobStore('.tmp-blobs')
>>> store.put_text('x.txt', 'x')
'x.txt'
- put_text(key, content)[source]¶
Store text content.
- Parameters:
key (str) – Object key.
content (str) – Text content.
- Returns:
Stored object key.
- Return type:
- Raises:
OSError – If writing fails.
Examples
>>> LocalBlobStore('.tmp-blobs').put_text('x.txt', 'hello')
'x.txt'
- get_text(key)[source]¶
Read text content.
- Parameters:
key (str) – Object key.
- Returns:
Stored content.
- Return type:
- Raises:
FileNotFoundError – If the key does not exist.
Examples
>>> store = LocalBlobStore('.tmp-blobs')
>>> _ = store.put_text('x.txt', 'hello')
>>> store.get_text('x.txt')
'hello'
- class promptdb.storage.MinioBlobStore(*, endpoint, access_key, secret_key, bucket, secure=False)[source]¶
Bases: object
MinIO-backed blob store.
- Parameters:
endpoint – MinIO host:port.
access_key – MinIO access key.
secret_key – MinIO secret key.
bucket – Target bucket.
secure – Whether to use TLS.
- Returns:
Storage adapter.
- Return type:
- Raises:
ImportError – If the MinIO package is unavailable.
Examples
store = MinioBlobStore(
    endpoint='localhost:9000',
    access_key='minioadmin',
    secret_key='minioadmin',
    bucket='promptdb',
    secure=False,
)
- put_text(key, content)[source]¶
Upload text content.
- Parameters:
key (str) – Object key.
content (str) – Text content.
- Returns:
Stored object key.
- Return type:
- Raises:
S3Error – If upload fails.
Examples
store.put_text('exports/demo.txt', 'hello')
- get_text(key)[source]¶
Download text content.
- Parameters:
key (str) – Object key.
- Returns:
Text payload.
- Return type:
- Raises:
S3Error – If download fails.
Examples
body = store.get_text('exports/demo.txt')
- promptdb.storage.object_metadata(store, key, *, content=None, content_type=None)[source]¶
Build relational metadata for a stored blob object.
- Parameters:
store – Blob store adapter.
key – Object key.
content – Optional text content.
content_type – Optional MIME content type.
- Returns:
Metadata payload for relational persistence.
- Return type:
- Raises:
None.
Examples
>>> meta = object_metadata(LocalBlobStore('.tmp-blobs'), 'x.txt', content='hello')
>>> meta['storage_backend']
'local'
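The shape of such a metadata payload can be sketched as follows. Only the storage_backend key is confirmed by the example above; the other keys, and the helper name, are illustrative assumptions:

```python
import hashlib

def sketch_object_metadata(backend_name, key, content=None, content_type=None):
    """Build a relational-metadata-style dict for a stored blob."""
    meta = {"storage_backend": backend_name, "object_key": key}
    if content is not None:
        raw = content.encode("utf-8")
        meta["size_bytes"] = len(raw)           # payload size in bytes
        meta["sha256"] = hashlib.sha256(raw).hexdigest()  # content checksum
    if content_type is not None:
        meta["content_type"] = content_type
    return meta
```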
Settings¶
Environment-backed application configuration.
All settings are read from PROMPTDB_* environment variables via
Pydantic Settings. The defaults work for local development with SQLite and
local blob storage — no external services required.
Environment variables:
PROMPTDB_DATABASE_URL — SQLAlchemy URL (default: sqlite:///./promptdb.sqlite3)
PROMPTDB_BLOB_ROOT — local blob directory (default: .blobs)
PROMPTDB_STORAGE_BACKEND — local or minio (default: local)
PROMPTDB_API_PREFIX — API route prefix (default: /api/v1)
PROMPTDB_MINIO_ENDPOINT — MinIO host:port (required if minio)
PROMPTDB_MINIO_ACCESS_KEY — MinIO access key
PROMPTDB_MINIO_SECRET_KEY — MinIO secret key
PROMPTDB_MINIO_BUCKET — MinIO bucket (default: promptdb)
PROMPTDB_ENABLE_METRICS — mount the Prometheus /metrics endpoint (default: false)
PROMPTDB_ENABLE_OTEL — enable OpenTelemetry instrumentation (default: false)
PROMPTDB_LOG_LEVEL — root log level (default: INFO)
Usage:
from promptdb.settings import AppSettings
settings = AppSettings() # from env vars
settings = AppSettings(database_url="sqlite:///:memory:") # explicit
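The PROMPTDB_ prefix convention can be illustrated with a stdlib dataclass. TinySettings is a sketch of the env-prefix lookup only, not the real Pydantic Settings machinery, and it covers just three of the documented fields:

```python
import os
from dataclasses import dataclass, fields

@dataclass
class TinySettings:
    """Defaults mirror the documented local-development defaults."""
    database_url: str = "sqlite:///./promptdb.sqlite3"
    storage_backend: str = "local"
    api_prefix: str = "/api/v1"

    @classmethod
    def from_env(cls, env=None):
        env = os.environ if env is None else env
        kwargs = {}
        for f in fields(cls):
            # database_url -> PROMPTDB_DATABASE_URL, etc.
            value = env.get("PROMPTDB_" + f.name.upper())
            if value is not None:
                kwargs[f.name] = value
        return cls(**kwargs)
```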
- class promptdb.settings.AppSettings(_case_sensitive=None, _nested_model_default_partial_update=None, _env_prefix=None, _env_prefix_target=None, _env_file=PosixPath('.'), _env_file_encoding=None, _env_ignore_empty=None, _env_nested_delimiter=None, _env_nested_max_split=None, _env_parse_none_str=None, _env_parse_enums=None, _cli_prog_name=None, _cli_parse_args=None, _cli_settings_source=None, _cli_parse_none_str=None, _cli_hide_none_type=None, _cli_avoid_json=None, _cli_enforce_required=None, _cli_use_class_docs_for_groups=None, _cli_exit_on_error=None, _cli_prefix=None, _cli_flag_prefix_char=None, _cli_implicit_flags=None, _cli_ignore_unknown_args=None, _cli_kebab_case=None, _cli_shortcuts=None, _secrets_dir=None, _build_sources=None, **values)[source]¶
Bases: BaseSettings
Environment-backed application settings.
- Parameters:
database_url – SQLAlchemy database URL.
blob_root – Local blob storage root.
storage_backend – local or minio.
api_prefix – API route prefix.
service_name – Service name for logs and traces.
enable_metrics – Whether to expose metrics.
enable_otel – Whether to enable OTel wiring.
redis_url – Optional Redis URL.
minio_endpoint – MinIO endpoint.
minio_access_key – MinIO access key.
minio_secret_key – MinIO secret key.
minio_bucket – MinIO bucket.
minio_secure – Whether MinIO uses TLS.
log_level – Root log level.
- Returns:
Loaded settings instance.
- Return type:
- Raises:
None.
Examples
>>> AppSettings(database_url="sqlite:///./x.sqlite3").storage_backend 'local'
- model_config: ClassVar[SettingsConfigDict] = {'arbitrary_types_allowed': True, 'case_sensitive': False, 'cli_avoid_json': False, 'cli_enforce_required': False, 'cli_exit_on_error': True, 'cli_flag_prefix_char': '-', 'cli_hide_none_type': False, 'cli_ignore_unknown_args': False, 'cli_implicit_flags': False, 'cli_kebab_case': False, 'cli_parse_args': None, 'cli_parse_none_str': None, 'cli_prefix': '', 'cli_prog_name': None, 'cli_shortcuts': None, 'cli_use_class_docs_for_groups': False, 'enable_decoding': True, 'env_file': None, 'env_file_encoding': None, 'env_ignore_empty': False, 'env_nested_delimiter': None, 'env_nested_max_split': None, 'env_parse_enums': None, 'env_parse_none_str': None, 'env_prefix': 'PROMPTDB_', 'env_prefix_target': 'variable', 'extra': 'ignore', 'json_file': None, 'json_file_encoding': None, 'nested_model_default_partial_update': False, 'protected_namespaces': ('model_validate', 'model_dump', 'settings_customise_sources'), 'secrets_dir': None, 'toml_file': None, 'validate_default': True, 'yaml_config_section': None, 'yaml_file': None, 'yaml_file_encoding': None}¶
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
CLI¶
Rich-powered CLI for local prompt operations.
The CLI provides six commands, all rendering output with Rich tables, panels,
and syntax-highlighted JSON. It uses the same PromptClient
and PromptService as the API.
Commands:
promptdb init # scaffold a workspace with sample files
promptdb list # list all registered prompt versions
promptdb register-file <path> <namespace> <name> # register from file
promptdb resolve <ref> # resolve a prompt reference to JSON
promptdb render <ref> --var key=value # render with variables
promptdb export-file <ref> <path> # write version bundle to disk
Prompt references use namespace/name:selector format:
promptdb resolve support/triage:production
promptdb resolve support/triage:rev:2
promptdb resolve support/triage:latest
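The reference grammar above can be sketched as follows. parse_ref is a hypothetical stand-in for PromptRef.parse, assuming the selector is everything after the first colon, so a selector like rev:2 survives intact:

```python
from dataclasses import dataclass

@dataclass
class Ref:
    namespace: str
    name: str
    selector: str

def parse_ref(text):
    """Parse a 'namespace/name:selector' prompt reference."""
    namespace, _, rest = text.partition("/")
    if not namespace or not rest:
        raise ValueError(f"bad prompt reference: {text!r}")
    # Only the first ':' splits name from selector; the selector
    # may itself contain a colon (e.g. 'rev:2').
    name, _, selector = rest.partition(":")
    if not name or not selector:
        raise ValueError(f"bad prompt reference: {text!r}")
    return Ref(namespace, name, selector)
```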
The register-file command supports YAML, JSON, and plain text files.
For plain text, specify --kind string or --kind chat.
Entry point: promptdb = promptdb.cli:main (configured in pyproject.toml).