The Big Picture
promptlibretto assembles prompts from composable pieces. Instead of one monolithic string, you define named sections — personas, sentiment, examples, injections — and an assembly_order that weaves them together.
Tune and test in the studio, then Export Model JSON. In your app, one call loads the whole thing:
from promptlibretto import load_registry
eng = load_registry("support_bot.json")
result = await eng.run(state={
    "selections": {"personas": "empathetic"},
})
Step through to see each layer build the prompt on the right →
Registry & Sections
A registry is a JSON object containing named sections. Each section holds a list of items. The assembly_order is a list of dot-notation tokens — like persona.context or sentiment.nudges — that control what goes into the prompt and in what order.
Built-in section types: base_context, personas, sentiment, static_injections, runtime_injections, output_prompt_directions, examples, prompt_endings.
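Putting those pieces together, a minimal registry might look like the sketch below. This is an illustration of the described shape only — the exact top-level layout and item field names (id, text, nudges) are assumptions, not a confirmed schema:

```json
{
  "personas": [
    { "id": "empathetic", "text": "You are a warm, patient support agent." },
    { "id": "concise", "text": "You answer in as few words as possible." }
  ],
  "sentiment": [
    { "id": "calm", "nudges": ["Stay measured.", "Acknowledge frustration."] }
  ],
  "assembly_order": ["persona.context", "sentiment.nudges"]
}
```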
Selections
At run time you select which item from each section to use. Required sections default to the first item; optional ones are skipped unless explicitly selected.
The Random toggle re-rolls the selection on every generate — useful for personas or sentiment that should vary across runs.
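The selection rules above can be sketched in plain Python. This is a behavioral illustration of what the text describes, not promptlibretto's actual implementation:

```python
import random

def resolve_selection(section_items, selected_id=None, required=True, randomize=False):
    """Pick one item from a section per the rules described above."""
    if randomize:
        # Random toggle: re-roll the pick on every generate
        return random.choice(section_items)
    if selected_id is not None:
        # Explicit selection by id
        return next(item for item in section_items if item["id"] == selected_id)
    # Required sections fall back to the first item; optional ones are skipped
    return section_items[0] if required else None

personas = [{"id": "empathetic"}, {"id": "stern"}]
resolve_selection(personas, "stern")         # explicit pick
resolve_selection(personas)                  # required default: first item
resolve_selection(personas, required=False)  # optional, unselected: skipped
```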
Runtime Modes & Template Vars
List fields inside an item (nudges, examples, directives) have per-array runtime modes: all, none, index:N, or random:K. Use random:1 to pick a single nudge each run.
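The four per-array modes can be modeled as a small resolver — again a sketch of the described behavior, not the library's code:

```python
import random

def apply_runtime_mode(values, mode):
    """Resolve a list field according to its runtime mode string."""
    if mode == "all":
        return list(values)
    if mode == "none":
        return []
    if mode.startswith("index:"):
        # index:N keeps only the N-th entry
        return [values[int(mode.split(":", 1)[1])]]
    if mode.startswith("random:"):
        # random:K samples K entries without replacement
        k = int(mode.split(":", 1)[1])
        return random.sample(values, k)
    raise ValueError(f"unknown runtime mode: {mode}")

nudges = ["be brief", "cite sources", "ask a follow-up"]
apply_runtime_mode(nudges, "index:1")   # → ["cite sources"]
apply_runtime_mode(nudges, "random:1")  # one nudge, varying per run
```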
Template vars fill {placeholders} in section text. Conditional fragments on base_context items only render when their variable has a value — so an unfilled {sublocation} never leaves a broken sentence.
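The conditional-fragment rule can be illustrated like this. The fragment representation here is hypothetical (promptlibretto's actual syntax may differ); the point is that a fragment whose variable is unset drops out whole, so no half-filled sentence survives:

```python
import re

def render_fragments(fragments, values):
    """Render template fragments, dropping any whose variables are unset."""
    out = []
    for frag in fragments:
        names = re.findall(r"\{(\w+)\}", frag)
        # Skip the entire fragment if any of its variables is missing or empty
        if all(values.get(n) for n in names):
            out.append(frag.format(**values))
    return " ".join(out)

fragments = ["You are in {location}.", "Specifically, near {sublocation}."]
render_fragments(fragments, {"location": "the lobby"})
# → "You are in the lobby."  (no broken "Specifically, near ." sentence)
```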
Generation & Output Policy
Each registry carries a generation block (temperature, top_p, max_tokens, retries, …) and an optional output_policy (length caps, forbidden substrings, required patterns, prefix stripping).
When you hit Generate, the engine assembles the prompt, calls your local model directly from the browser, then validates the response against the policy. If validation fails and retries > 0, it tries again. The Debug Trace panel shows the exact prompt, response, and resolved config for every attempt.
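The generate-validate-retry loop can be sketched as follows. This is a behavioral illustration, not promptlibretto's internals, and the policy field names (max_chars, forbidden, strip_prefix) are assumptions:

```python
def validate(text, policy):
    """Check a response against an output policy; return a list of violations."""
    problems = []
    if len(text) > policy.get("max_chars", float("inf")):
        problems.append("too long")
    for bad in policy.get("forbidden", []):
        if bad in text:
            problems.append(f"forbidden substring: {bad!r}")
    return problems

def generate_with_retries(call_model, prompt, policy, retries):
    """Call the model, retrying while validation fails."""
    for attempt in range(retries + 1):
        text = call_model(prompt)
        # Optional prefix stripping happens before validation
        prefix = policy.get("strip_prefix", "")
        if prefix and text.startswith(prefix):
            text = text[len(prefix):]
        if not validate(text, policy):
            return text
    return text  # last attempt is returned even if it still fails
```

A fake model makes the retry path visible: if the first response trips a forbidden-substring check, the second call is used instead.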
Pre-Generate & Export
Pre-Generate resolves and displays the assembled prompt before any LLM call — inspect it, then hit Generate.
Export Model JSON copies the full registry with your current selections, modes, sliders, and generation overrides baked in. Load it back in your app:
eng = load_registry("support_bot.json")
result = await eng.run(state=...)
Builder
The Builder page (/builder) is a visual form for constructing a registry from scratch — no JSON editing required.
Add sections, fill in items, drag-and-drop the assembly order, set generation and output policy, then hit Open in Studio to start tuning immediately.