This guide shows the final and most important step: integrating a version-controlled prompt into your Python application using the SDK. While the plb
command line is perfect for managing your library, the Python SDK is how you’ll bring your prompts to life in a production environment.
We will write a simple Python script that loads our joke-generator prompt, renders it with a specific topic, and prints the result.
Best Practice: The “Latest Version” Advantage
A key feature of the SDK is that project.get_prompt("name") automatically resolves to the latest semantic version of that prompt. This means you can publish v1.1.0 of joke-generator and your application code will start using it immediately, with no code changes required.
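For example, suppose you have just published v1.1.0 of joke-generator. A minimal sketch of how you could confirm which version your application picked up, assuming the result structure shown in the full example below:

from prompt_lockbox.api import Project

project = Project()
prompt = project.get_prompt("joke-generator")  # resolves to the latest version, e.g. v1.1.0

# The run() result reports whichever version was actually used.
result = prompt.run(topic="computers")
print(result["prompt_version"])  # -> '1.1.0'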
Here is a complete example of a Python script that uses the SDK to prepare the prompt and then gets a completion from an LLM.
from prompt_lockbox.api import Project
from rich import print  # For pretty printing

# --- Part 1: Prepare the Prompt with Prompt Lockbox ---

# 1. Initialize your project.
try:
    project = Project()
except FileNotFoundError:
    print("Error: Not a PromptLockbox project. Have you run `plb init`?")
    exit()

# 2. Get a prompt by its name.
#    The SDK automatically finds the LATEST version.
prompt = project.get_prompt("joke-generator")
if not prompt:
    print("Error: Prompt 'joke-generator' not found!")
    exit()

# --- Part 2: Run the Prompt and Get a Response ---

# 3. The `run()` method handles everything:
#    - It renders the template with your variables.
#    - It calls the configured LLM API.
#    - It returns a structured response object.
print("--- Calling the LLM via prompt.run()... ---")
try:
    result = prompt.run(topic="computers")

    # The result is a dictionary-like object with all the details.
    print("\n--- ✅ Success! Here is the structured result: ---")
    print(result)
except Exception as e:
    print(f"Error during prompt execution: {e}")
Here’s the terminal output:
--- Calling the LLM via prompt.run()... ---

--- ✅ Success! Here is the structured result: ---
{
    'prompt_name': 'joke-generator',
    'prompt_version': '1.1.0',
    'rendered_prompt': 'Tell me a dad joke about computers.',
    'llm_response': {
        'output_text': 'Why did the computer keep sneezing?\n\nIt had a virus!',
        'model_used': 'gpt-4o-mini',
        'usage': {
            'input_tokens': 15,
            'output_tokens': 12,
            'total_tokens': 27
        }
    }
}
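Because the result is a dictionary-like object, you can pull out just the pieces you need. A small sketch, assuming the nesting shown in the output above:

# Extract the model's text and the token count from the structured result.
joke = result["llm_response"]["output_text"]
tokens = result["llm_response"]["usage"]["total_tokens"]
print(f"{joke}\n({tokens} tokens used)")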
Next Steps
You’ve seen how to manage prompts on the command line and how the SDK provides a simple run() method to handle rendering and LLM execution.
Where to Go Next?
Check out our Use Cases section to learn how to:
- Set up CI/CD pipelines to automatically lint and verify your prompts.
- Manage complex team workflows with namespaces and status flags.
- Leverage advanced search to build a truly discoverable prompt library.