Project Class
The Project class is the main entry point for interacting with a Prompt Lockbox project via the Python SDK. It represents your entire prompt library and provides methods for finding, creating, and managing all prompts within it.
Initialization
Project(path=None)
Initializes the Project, finding its root directory and loading its configuration.

Parameters
- path: An optional path to a directory within the project. If None, it searches upwards from the current working directory to find the project root.

Raises
- An error if no plb.toml file is found, indicating the directory is not part of a valid project.
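A minimal initialization sketch. The `prompt_lockbox` import path is an assumption; adjust it to however the SDK is packaged in your environment:

```python
# Assumed import path for the SDK; adjust if your package differs.
from prompt_lockbox import Project

# Search upwards from the current working directory for plb.toml.
project = Project()

# Or point at a known directory inside the project explicitly.
project = Project(path="/path/to/my-prompt-library")

# The root property resolves to the directory containing plb.toml.
print(project.root)
```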
Properties
root
The root pathlib.Path object of the Prompt Lockbox project.

Returns
- The absolute path to the project's root directory.
Methods
get_prompt(identifier)
Finds a single prompt by its name, ID, or file path. If a name is provided, it returns the Prompt object for the latest version.

Parameters
- identifier: The name, ID, or file path string of the prompt to find.

Returns
- A Prompt object if found, otherwise None.
list_prompts()
Returns a list of all prompts found in the project.

Returns
- A list of Prompt objects for every valid prompt file found.
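A short lookup sketch combining both methods. The `prompt_lockbox` import path and the prompt name `summarize-ticket` are hypothetical:

```python
from prompt_lockbox import Project  # assumed import path

project = Project()

# Enumerate every prompt in the library.
for prompt in project.list_prompts():
    print(prompt.name, prompt.version)

# Look up one prompt; get_prompt returns None rather than raising.
prompt = project.get_prompt("summarize-ticket")  # hypothetical prompt name
if prompt is None:
    print("No prompt with that name, ID, or path")
```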
create_prompt(…)
Creates a new prompt file on disk from the given metadata.

Parameters
- The name of the new prompt.
- The starting semantic version for the prompt.
- The author of the prompt. If not provided, it attempts to use the current Git user.
- A short, human-readable summary of the prompt's purpose.
- A list of strings to organize the prompt in a hierarchy (e.g., ['billing', 'invoices']).
- A list of lowercase keywords for search and discovery.
- The specific LLM this prompt is designed for (e.g., openai/gpt-4o-mini).
- Any extra comments, warnings, or usage instructions.
- A dictionary of model parameters to be stored.
- A list of IDs of other prompts that are related to this one.

Returns
- A Prompt object representing the newly created file.
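A creation sketch. The keyword argument names below are illustrative guesses (the source lists parameter descriptions but not names); check your SDK version's actual signature:

```python
from prompt_lockbox import Project  # assumed import path

project = Project()

# Keyword names here are illustrative, not authoritative.
prompt = project.create_prompt(
    name="invoice-reminder",
    version="1.0.0",
    description="Drafts a polite payment reminder for an overdue invoice.",
    namespace=["billing", "invoices"],
    tags=["billing", "email"],
    intended_model="openai/gpt-4o-mini",
)
print(prompt.path)  # path of the newly created .yml file
```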
search(…)
Searches for prompts using a specified method.

Parameters
- The search query string.
- The search method to use. Choices: fuzzy, hybrid, splade.
- The maximum number of results to return.
- (Hybrid search only) A float between 0.0 (pure keyword) and 1.0 (pure semantic) that balances the search.

Returns
- A list of result dictionaries, sorted by relevance.
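A search sketch. The import path and argument names are assumptions based on the parameter descriptions above:

```python
from prompt_lockbox import Project  # assumed import path

project = Project()

# Build the index first; hybrid and splade search require it.
project.index(method="hybrid")

# Argument names are illustrative guesses.
results = project.search("refund policy", method="hybrid", limit=5)
for result in results:
    print(result)  # result dictionaries, sorted by relevance
```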
index(method='hybrid')
Builds a search index for all prompts to enable advanced search.

Parameters
- method: The indexing method to use. Choices: hybrid or splade.
lint()
Validates all prompt files in the project for correctness and consistency.

Returns
- A dictionary of results categorized by check type.
get_status_report()
Generates a report of the lock status of all prompts.

Returns
- A dictionary categorizing prompts into locked, unlocked, tampered, and missing.
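A sketch of consuming the status report, assuming the four category keys named above and the hypothetical `prompt_lockbox` import path:

```python
from prompt_lockbox import Project  # assumed import path

project = Project()
report = project.get_status_report()

# Walk the four lock-state categories described above.
for state in ("locked", "unlocked", "tampered", "missing"):
    for entry in report.get(state, []):
        print(f"{state}: {entry}")
```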
document_all(prompts_to_document=None)
Uses an AI to automatically document a list of prompts, or all prompts if none are provided.

Parameters
- prompts_to_document: A specific list of Prompt objects to document. If None, all prompts in the project will be processed.
get_ai_config()
Retrieves the AI configuration from the project's plb.toml file.

Returns
- A dictionary with provider and model keys.
Prompt Class
The Prompt class represents a single, versioned prompt file. It is the primary object you'll work with to render, validate, and manage an individual prompt.
Properties
These are read-only attributes that provide quick access to the prompt's data.

path
The absolute pathlib.Path to the prompt's .yml file.

data
A dict containing all the parsed data from the YAML file.

name
The name of the prompt as a string.

version
The version of the prompt as a string (e.g., "1.0.0").

description
The description of the prompt as a string.

required_variables
A set of all undeclared template variables found in the template string.

Methods
render(strict=True, **kwargs)
Renders the prompt's template with the provided variables.

Parameters
- strict: If True, the method raises an UndefinedError for any missing variables. If False, missing variables are rendered as <<variable_name>> in the output string.
- **kwargs: The key-value pairs to inject into the template. These override any default_inputs specified in the prompt file.

Returns
- The final, rendered prompt text as a string.
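A rendering sketch. The import path, prompt name, and variable names are hypothetical:

```python
from prompt_lockbox import Project  # assumed import path

prompt = Project().get_prompt("invoice-reminder")  # hypothetical name

# Strict mode (default): raises UndefinedError if a variable is missing.
text = prompt.render(customer_name="Jane Doe", amount="$120.00")

# Lenient mode: missing variables appear as <<variable_name>> placeholders,
# which is handy for previewing partially filled templates.
preview = prompt.render(strict=False, customer_name="Jane Doe")
print(preview)
```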
run(**kwargs)
Renders the prompt, calls the configured LLM, and returns a structured result.

Parameters
- **kwargs: The key-value pairs to inject into the template before sending the prompt to the LLM.

Returns
- A dictionary containing the rendered prompt, LLM output, model details, and usage statistics.
execute(**kwargs)
Renders the prompt with the given variables and executes it against the configured AI provider, returning the live response. This is the primary method for getting an AI response from a prompt.

Parameters
- **kwargs: The variables to inject into the template before execution, provided as keyword arguments (e.g., customer_name="Jane Doe").

Returns
- The return type depends on whether an output_schema is defined in the prompt's .yml file:
  - If no output_schema is present (the default), it returns a str containing the raw text response from the language model.
  - If an output_schema is present, it returns a dict containing the structured, parsed data that conforms to the schema.
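A sketch of handling both return shapes, with a hypothetical import path and prompt name:

```python
from prompt_lockbox import Project  # assumed import path

prompt = Project().get_prompt("invoice-reminder")  # hypothetical name
response = prompt.execute(customer_name="Jane Doe")

# str when the prompt has no output_schema, dict when it does.
if isinstance(response, dict):
    print("structured:", response)
else:
    print("raw text:", response)
```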
lock()
Creates a lock entry for this prompt in the project's lockfile. This records the file's current SHA256 hash and a timestamp, marking it as secure.

unlock()
Removes the lock entry for this prompt from the project's lockfile, allowing it to be edited.

verify()
Verifies the integrity of this prompt against the lockfile.

Returns
- A tuple containing a boolean (True if secure) and a status string ('OK', 'UNLOCKED', or 'TAMPERED').
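A lock/verify/unlock round trip, with a hypothetical import path and prompt name:

```python
from prompt_lockbox import Project  # assumed import path

prompt = Project().get_prompt("invoice-reminder")  # hypothetical name

prompt.lock()  # record the current SHA256 hash in the lockfile

# verify() returns (is_secure, status); status is 'OK', 'UNLOCKED',
# or 'TAMPERED' if the file changed after locking.
is_secure, status = prompt.verify()
print(is_secure, status)

prompt.unlock()  # allow edits again
```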
new_version(bump_type='minor', author=None)
Creates a new, version-bumped copy of this prompt file and returns a Prompt object for the new file.

Parameters
- bump_type: The type of version bump to perform. Choices: major, minor, patch.
- author: The author for the new version. If None, it defaults to the author of the source prompt or the current Git user.

Returns
- A new Prompt object representing the newly created file.
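A version-bump sketch, with a hypothetical import path and prompt name:

```python
from prompt_lockbox import Project  # assumed import path

prompt = Project().get_prompt("invoice-reminder")  # hypothetical name

# A minor bump takes e.g. 1.0.0 to 1.1.0 and returns a Prompt
# object for the newly created file; the original is untouched.
new_prompt = prompt.new_version(bump_type="minor")
print(new_prompt.version)
```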
document()
Uses an AI to analyze the prompt's template and automatically generate and save a description and tags for it. The original file's comments and layout are preserved.

get_critique(note=…)
Gets an AI-powered critique and suggestions for improving the prompt. This method does not modify the file.

Parameters
- note: A specific instruction for the AI on how to improve the prompt (e.g., "Make it more robust").

Returns
- A dictionary containing the 'critique', 'suggestions', and 'improved_template'.
improve(improved_template)
Overwrites the prompt's template block with a new version and updates its last_update timestamp. The original file's comments and layout are preserved.

Parameters
- improved_template: The new template string to write to the file. This is typically sourced from the result of .get_critique().
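A critique-then-improve workflow combining the two methods above, with a hypothetical import path and prompt name:

```python
from prompt_lockbox import Project  # assumed import path

prompt = Project().get_prompt("invoice-reminder")  # hypothetical name

# Ask the AI for a critique; this call does not modify the file.
review = prompt.get_critique(note="Make it more robust")
print(review["critique"])
for suggestion in review["suggestions"]:
    print("-", suggestion)

# Write the suggested template back, preserving comments and layout.
prompt.improve(review["improved_template"])
```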