This setup is required before you can use any AI feature in the toolkit.

This guide walks you through setting up LLMs for use within the toolkit, whether you’re connecting to a popular cloud-based provider, running a local model, or integrating your own custom model. Follow the steps below to get started with the setup that best fits your workflow.

Install Provider Libraries

Before configuring a provider, make sure you have its Python library installed.

  • For OpenAI: pip install openai
  • For Anthropic: pip install anthropic
  • For Hugging Face: pip install transformers torch litellm
  • For Ollama: pip install ollama
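
To confirm a provider library installed correctly, you can try importing it from the command line; a quick sanity check for the OpenAI library (swap in the package for your provider):

# Prints the installed version if the import succeeds
python -c "import openai; print(openai.__version__)"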

There are two ways to configure LLMs in this toolkit:

  1. Automated Setup (Recommended): Use the interactive plb configure-ai command.
  2. Manual Setup: Manually create a .env file for secrets and edit your plb.toml for settings.

We recommend the automated setup for a guided, error-free experience.

Automated Setup

This interactive command is the easiest and safest way to set up your LLM. It will guide you through selecting a provider, entering your API key securely, choosing a model, and configuring additional settings.

Here are the steps:

Step 1: Start the Configuration Panel

In your terminal, run the command:

plb configure-ai

Step 2: Choose Your Provider

The command will present a list of common providers. Use the arrow keys to make a selection and press Enter.

? Select your LLM Provider: 
> OpenAI
  Anthropic
  Ollama (Local)
  Huggingface
  ...
Step 3: Configure Your Settings

Based on your chosen provider, follow the guided instructions in the CLI.

Configuring mainstream, lightweight, cloud-based LLMs is straightforward, but local models and Hugging Face setups require a few additional steps. See the details below.

Once you select a provider, follow these steps:

  1. Enter your API key:
? Please enter your OpenAI API Key (will be stored in .env): 
sk-********************
  2. Select a model:
? Enter the default model name (e.g., gpt-4o-mini): gpt-4o-mini

You can also set custom model parameters such as max_tokens and temperature in the plb.toml file.
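
As an illustration, here is a sketch of such a section, assuming the toolkit reads extra generation parameters from the same [ai] block as provider and model (max_tokens and temperature are the names mentioned above; confirm the exact keys your version supports):

# In plb.toml (the extra parameters below are illustrative)
[ai]
provider = "openai"
model = "gpt-4o-mini"
max_tokens = 1024
temperature = 0.2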

For a local (Ollama) or Hugging Face setup, the CLI will prompt you for a few additional provider-specific settings.

Step 4: Configuration Complete!

Once you’ve completed the steps for your chosen provider, the command will save all settings to your plb.toml and .env files and confirm that the setup is complete.

Success! Configuration saved. You are now ready to use AI features!

Manual Setup

If you prefer to manage your configuration files directly, you can set them up manually.

Always add .env to your .gitignore file to prevent your secret keys from being committed to version control.
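
For example, from your project root:

# Append .env to .gitignore so secrets never reach version control
echo ".env" >> .gitignore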

Step 1: Create the .env file for API Keys/Tokens

In your project’s root directory, create a .env file and add the corresponding environment variable for your provider.

# Inside your .env file

# For OpenAI
OPENAI_API_KEY="sk-..."

# For Hugging Face
HUGGING_FACE_HUB_TOKEN="hf_..."
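
If you use Anthropic, the anthropic Python SDK conventionally reads its key from ANTHROPIC_API_KEY; assuming the toolkit follows the same convention, add:

# For Anthropic (standard SDK variable; assumed to be what the toolkit reads)
ANTHROPIC_API_KEY="sk-ant-..."
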
Step 2: Edit the plb.toml file

Next, open your project’s plb.toml file. Add or edit the [ai] section to specify the provider and model you intend to use.

# In plb.toml
[ai]
provider = "openai"
model = "gpt-4o-mini"
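
If you run a local model through Ollama instead, here is a minimal sketch, assuming provider accepts the "ollama" value shown in the interactive menu and model takes the name of a model you have already pulled locally:

# In plb.toml: "llama3" is an example model name,
# pulled beforehand with "ollama pull llama3"
[ai]
provider = "ollama"
model = "llama3"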

Once both files are created and saved, your manual configuration is complete.

You can check that everything is in place by running plb configure-ai status.

Advanced: Using a Custom or Self-Hosted Model

Under Development

This feature is currently under development! Check out our GitHub for updates.

Next Steps

You can now check out the AI Features page!