This setup is required before you can use any AI feature in the toolkit.
## Install Provider Libraries

Before configuring a provider, make sure you have its Python library installed:
- For OpenAI: `pip install openai`
- For Anthropic: `pip install anthropic`
- For HuggingFace: `pip install transformers torch litellm`
- For Ollama: `pip install ollama`
- …
There are two ways to configure a provider:

- Automated Setup (Recommended): Use the interactive `plb configure-ai` command.
- Manual Setup: Manually create a `.env` file for secrets and edit your `plb.toml` for settings.
## Automated Setup
This interactive command is the easiest and safest way to set up your LLM. It will guide you through selecting a provider, entering your API key securely, choosing a model, and configuring additional settings. Here are the steps:

### 1. Start the Configuration Panel
In your terminal, run the command:
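```bash
plb configure-ai
```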
### 2. Choose Your Provider

The command will present a list of common providers. Use the arrow keys to make a selection and press Enter.
### 3. Configure Your Settings

Based on your choice of provider, follow the guided instructions in the CLI:

- Enter your API key.
- Select a model.

You can also set custom model parameters like `max_tokens`, `temperature`, etc. in the `plb.toml` file (see the sketch below).

Configuring mainstream, lightweight, cloud-based LLMs is straightforward, but using a local model via Ollama or a Hugging Face model requires a few additional steps. Please see the details below.
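For illustration, here is a minimal sketch of what that `[ai]` section might look like. The section name and the `max_tokens`/`temperature` parameters come from this guide; the `provider` and `model` key names are assumptions, so confirm them against the file that `plb configure-ai` generates.

```toml
# plb.toml — illustrative sketch; "provider" and "model" are assumed
# key names, confirm against your generated file
[ai]
provider = "openai"     # which backend to use
model = "gpt-4o-mini"   # model name for that provider
max_tokens = 1024       # optional cap on response length
temperature = 0.7       # optional sampling temperature
```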
### Local Model Configuration (Ollama)
You can check out this blog post to learn how to download and use Ollama.
- Ensure Ollama is Running: The wizard will first remind you to start the Ollama server in a separate terminal.
- Set the Model Name: Enter the name of the Ollama model you have pulled (e.g., `llama3`). No API key is needed.
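For reference, starting the server and pulling a model are standard Ollama CLI commands:

```bash
# In a separate terminal, start the Ollama server
ollama serve

# Download the model you plan to use (llama3 is the example above)
ollama pull llama3
```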
### Huggingface Model Configuration

Using a Hugging Face model requires you to create an account at Hugging Face and generate an access token. You can check out this blog post to learn more.
- Enter Your HF Token: Provide your HF token when prompted. This will be securely stored in your `.env` file.
- Set the Model Repository ID: Enter the full repository ID of the model you want to use (in `namespace/model-name` form, e.g., `mistralai/Mistral-7B-Instruct-v0.2`).
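As an illustration, the stored entry might look like the following; `HF_TOKEN` is an assumed variable name, and the wizard writes whichever key the toolkit actually expects:

```bash
# .env — HF_TOKEN is an assumed variable name; the configure wizard
# writes the exact key the toolkit expects
HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxx
```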
### 4. Configuration Complete!

Once you've completed the steps for your chosen provider, the command will save all settings to your `plb.toml` and `.env` files and confirm that the setup is complete.

## Manual Setup
If you prefer to manage your configuration files directly, you can set them up manually. Always add `.env` to your `.gitignore` file to prevent your secret keys from being committed to version control.
### 1. Create the .env file for API Keys/Tokens

In your project's root directory, create a `.env` file and add the corresponding environment variable for your provider (see the example below).
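For example, assuming plb's provider integrations read the standard variable names used by each client library (set only the one for your provider):

```bash
# .env — standard variable names for each provider's client library;
# uncomment/set only the one you need
OPENAI_API_KEY=sk-...
# ANTHROPIC_API_KEY=sk-ant-...
# HF_TOKEN=hf_...
```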
### 2. Edit the plb.toml file

Next, open your project's `plb.toml` file. Add or edit the `[ai]` section to specify the provider and model you intend to use (see the sketch below).

Once both files are created and saved, your manual configuration is complete. You can check that it worked by running `plb configure-ai status`.
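For instance, a local Ollama setup might look like this, under the same assumed key names as in the sketch above:

```toml
# plb.toml — assumed key names, confirm against your own file
[ai]
provider = "ollama"
model = "llama3"   # local models need no API key
```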
## Advanced: Using a Custom or Self-Hosted Model

Under Development: This feature is currently under development! Check out our GitHub for updates.