Project Class

The Project class is the main entry point for interacting with a Prompt Lockbox project via the Python SDK. It represents your entire prompt library and provides methods for finding, creating, and managing all prompts within it.

Initialization

Project(path=None)

Initializes the Project, finding its root directory and loading its configuration.
Parameters
path
str | Path | None
An optional path to a directory within the project. If None, it searches upwards from the current working directory to find the project root.
Raises
FileNotFoundError
exception
If no plb.toml file is found, indicating it’s not a valid project.
Example
from prompt_lockbox.api import Project

# Initialize from anywhere inside the project
project = Project()

Properties

root

The root pathlib.Path object of the Prompt Lockbox project.
Returns
root
pathlib.Path
The absolute path to the project’s root directory.

Methods

get_prompt(identifier)

Finds a single prompt by its name, ID, or file path. If a name is provided, it returns the Prompt object for the latest version.
Parameters
identifier
string
required
The name, ID, or file path string of the prompt to find.
Returns
prompt
Prompt | None
A Prompt object if found, otherwise None.
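A minimal sketch, assuming a prompt named "summarizer" exists in the project (the name is illustrative):
Example
# get_prompt returns None on a miss, so guard before use
prompt = project.get_prompt("summarizer")
if prompt is not None:
    print(prompt.name, prompt.version)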

list_prompts()

Returns a list of all prompts found in the project.
Returns
prompts
List[Prompt]
A list of Prompt objects for every valid prompt file found.
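A short sketch of iterating over the whole library:
Example
for prompt in project.list_prompts():
    print(f"{prompt.name} (v{prompt.version})")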

create_prompt(…)

Creates a new prompt file on disk from the given metadata.
Parameters
name
string
required
The name of the new prompt.
version
string
default:"1.0.0"
The starting semantic version for the prompt.
author
str | None
The author of the prompt. If not provided, it attempts to use the current Git user.
description
string
A short, human-readable summary of the prompt’s purpose.
namespace
str | List[str] | None
A list of strings to organize the prompt in a hierarchy (e.g., ['billing', 'invoices']).
tags
List[str] | None
A list of lowercase keywords for search and discovery.
intended_model
str
The specific LLM this prompt is designed for (e.g., openai/gpt-4o-mini).
notes
str
Any extra comments, warnings, or usage instructions.
model_parameters
Dict[str, Any] | None
A dictionary of model parameters to be stored.
linked_prompts
List[str] | None
A list of IDs of other prompts that are related to this one.
Returns
new_prompt
Prompt
A Prompt object representing the newly created file.
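A sketch of creating a prompt; every field value below is illustrative:
Example
new_prompt = project.create_prompt(
    name="invoice-summarizer",
    description="Summarizes an invoice for the billing team.",
    namespace=["billing", "invoices"],
    tags=["billing", "summarization"],
    intended_model="openai/gpt-4o-mini",
)
print(new_prompt.path)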

search(…)

Searches for prompts using a specified method.
Parameters
query
str
required
The search query string.
method
string
default:"fuzzy"
The search method to use. Choices: fuzzy, hybrid, splade.
limit
int
default:"10"
The maximum number of results to return.
alpha
float
(Hybrid search only) A float between 0.0 (keyword) and 1.0 (semantic) to balance the search.
Returns
results
list[dict]
A list of result dictionaries, sorted by relevance.
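A sketch of a hybrid search; this assumes an index has already been built (see index() below), and the exact keys of each result dictionary are not shown here:
Example
results = project.search("summarize an invoice", method="hybrid", limit=5)
for result in results:
    print(result)  # result dictionaries are sorted by relevance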

index(method='hybrid')

Builds a search index for all prompts to enable advanced search.
method
str
default:"hybrid"
The indexing method to use. Choices: hybrid or splade.
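For example:
Example
# Build the hybrid index once; search(method="hybrid") can then use it
project.index(method="hybrid")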

lint()

Validates all prompt files in the project for correctness and consistency.
Returns
report
dict
A dictionary of results categorized by check type.
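A sketch of linting the whole project; the exact check-type keys are not shown here:
Example
report = project.lint()
for check_type, results in report.items():
    print(check_type, results)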

get_status_report()

Generates a report of the lock status of all prompts.
Returns
report
dict
A dictionary categorizing prompts into locked, unlocked, tampered, and missing.
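A sketch, assuming the report keys match the four categories above:
Example
status = project.get_status_report()
print(status["tampered"])  # assumed key; likewise locked/unlocked/missing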

document_all(prompts_to_document=None)

Uses an AI to automatically document a list of prompts, or all prompts if none are provided.
Parameters
prompts_to_document
List[Prompt] | None
A specific list of Prompt objects to document. If None, all prompts in the project will be processed.
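A sketch of documenting a subset ("summarizer" is an illustrative name):
Example
prompt = project.get_prompt("summarizer")
if prompt is not None:
    project.document_all(prompts_to_document=[prompt])

# Or document every prompt in the project
project.document_all()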

get_ai_config()

Retrieves the AI configuration from the project’s plb.toml file.
Returns
config
dict
A dictionary with provider and model keys.
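For example:
Example
config = project.get_ai_config()
print(config["provider"], config["model"])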

Prompt Class

The Prompt class represents a single, versioned prompt file. It is the primary object you’ll work with to render, validate, and manage an individual prompt.

Properties

These are read-only attributes that provide quick access to the prompt’s data.

path

The absolute pathlib.Path to the prompt’s .yml file.

data

A dict containing all the parsed data from the YAML file.

name

The name of the prompt as a string.

version

The version of the prompt as a string (e.g., “1.0.0”).

description

The description of the prompt as a string.

required_variables

A set of all undeclared template variables found in the template string (e.g., {{ user_name }}).

Methods

render(strict=True, **kwargs)

Renders the prompt template, injecting the given variables.
Parameters
strict
bool
default:"True"
If True, the method will raise an UndefinedError for any missing variables. If False, it will render missing variables as <<variable_name>> in the output string.
**kwargs
any
The key-value pairs to inject into the template. These will override any default_inputs specified in the prompt file.
Returns
rendered_text
str
The final, rendered prompt text as a string.
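A sketch, assuming prompt is a Prompt object and customer_name is one of its template variables (both illustrative):
Example
# Strict by default: raises UndefinedError if a variable is missing
text = prompt.render(customer_name="Jane Doe")

# Lenient: missing variables are rendered as <<variable_name>>
draft = prompt.render(strict=False)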

run(**kwargs)

Renders the prompt, calls the configured LLM, and returns a structured result.
Parameters
**kwargs
any
The key-value pairs to inject into the template before sending the prompt to the LLM.
Returns
result
dict
A dictionary containing the rendered prompt, LLM output, model details, and usage statistics.
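A sketch; customer_name is an illustrative variable and the exact result keys are not shown here:
Example
result = prompt.run(customer_name="Jane Doe")
print(result)  # rendered prompt, LLM output, model details, usage statistics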

execute(**kwargs)

Renders the prompt with the given variables and executes it against the configured AI provider, returning the live response. This is the primary method for getting an AI response from a prompt.
Parameters
**kwargs
Any
The variables to inject into the template before execution, provided as keyword arguments (e.g., customer_name="Jane Doe").
Returns
response
str | dict
The return type depends on whether an output_schema is defined in your prompt’s .yml file.
  • If no output_schema is present (default): It returns a str containing the raw text response from the language model.
  • If an output_schema is present: It returns a dict containing the structured, parsed data that conforms to your schema.
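A sketch; customer_name is an illustrative variable:
Example
response = prompt.execute(customer_name="Jane Doe")
# str without an output_schema in the .yml file, dict with one
print(response)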

lock()

Creates a lock entry for this prompt in the project’s lockfile. This records the file’s current SHA256 hash and a timestamp, marking it as secure.

unlock()

Removes the lock entry for this prompt from the project’s lockfile, allowing it to be edited.

verify()

Verifies the integrity of this prompt against the lockfile.
Returns
verification_status
tuple[bool, str]
A tuple containing a boolean (True if secure) and a status string ('OK', 'UNLOCKED', 'TAMPERED').
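A sketch of the full lock/verify cycle:
Example
prompt.lock()                        # record the current SHA256 hash
is_secure, status = prompt.verify()  # e.g. (True, 'OK')
prompt.unlock()                      # allow edits again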

new_version(bump_type='minor', author=None)

Creates a new, version-bumped copy of this prompt file and returns a Prompt object for the new file.
Parameters
bump_type
str
default:"minor"
The type of version bump to perform. Choices: major, minor, patch.
author
str | None
The author for the new version. If None, it defaults to the author of the source prompt or the current Git user.
Returns
new_prompt
Prompt
A new Prompt object representing the newly created file.
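For example, bumping 1.0.0 to 1.1.0:
Example
v2 = prompt.new_version(bump_type="minor")
print(v2.version)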

document()

Uses an AI to analyze the prompt’s template and automatically generate and save a description and tags for it. The original file’s comments and layout are preserved.
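For example:
Example
# Writes a generated description and tags back to the file (needs AI config)
prompt.document()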

get_critique(note=…)

Gets an AI-powered critique and suggestions for improving the prompt. This method does not modify the file.
Parameters
note
str
A specific instruction for the AI on how to improve the prompt (e.g., “Make it more robust”).
Returns
critique_data
dict
A dictionary containing the ‘critique’, ‘suggestions’, and ‘improved_template’.

improve(improved_template)

Overwrites the prompt’s template block with a new version and updates its last_update timestamp. The original file’s comments and layout are preserved.
Parameters
improved_template
str
required
The new template string to write to the file. This is typically sourced from the result of .get_critique().
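A sketch of the critique-then-improve loop, using the documented result keys:
Example
critique = prompt.get_critique(note="Make it more robust")
print(critique["critique"])
print(critique["suggestions"])
prompt.improve(critique["improved_template"])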
Congrats! Explore more.