Step-by-step guide to LLM configuration
This setup is required before you can use any AI feature in the toolkit.
This guide will walk you through setting up LLMs for use within the toolkit, whether you're connecting to a popular cloud-based provider, running a local model, or integrating your own custom model. Follow the steps below to get started with the setup that best fits your workflow.
Install Provider Libraries
Before configuring a provider, make sure you have its Python library installed.
```shell
# OpenAI
pip install openai

# Anthropic
pip install anthropic

# Hugging Face
pip install transformers torch litellm

# Ollama
pip install ollama
```
There are two ways you can configure LLMs in this toolkit: run the interactive `plb configure-ai` command, or manually create a `.env` file for secrets and edit your `plb.toml` file for settings. We recommend using the command for a guided, error-free experience.
This interactive command is the easiest and safest way to set up your LLM. It will guide you through selecting a provider, entering your API key securely, choosing a model, and configuring additional settings.
Here are the steps:
Start the Configuration Panel
In your terminal, run the command:
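```shell
plb configure-ai
```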
Choose Your Provider
The command will present a list of common providers. Use the arrow keys to make a selection and press Enter.
Configure Your Settings
Based on the provider you choose, follow the guided instructions in the CLI.
Configuring mainstream, lightweight, cloud-based LLMs is straightforward, but local models and Hugging Face setups require a few additional steps. Please see the details below.
Once you select the provider, follow these steps:
You can also set custom model parameters such as `max_tokens` and `temperature` in the `plb.toml` file.
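For example, such settings might look like the sketch below. The exact key names and section layout are assumptions; check the `plb.toml` generated by `plb configure-ai` for the authoritative names.

```toml
[ai]
# Illustrative values only
max_tokens = 1024
temperature = 0.7
```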
For Local or Huggingface Setup:
Local Model Configuration (Ollama)
You can check out this blog post to learn how to download and use Ollama.
Using a local model via Ollama requires an extra step:
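A typical extra step, assuming Ollama is installed and running, is to pull the model you plan to use (the model name below is only an example):

```shell
# Download a model so the toolkit can use it locally
ollama pull llama3.2

# Verify the model is available
ollama list
```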
Huggingface Model Configuration
Using a Hugging Face model requires you to create a Hugging Face account and generate an access token. You can check out this blog post to learn more.
To use models from Hugging Face:
Add your access token to the `.env` file.
Configuration Complete!
Once you've completed the steps for your chosen provider, the command will save all settings to your `plb.toml` and `.env` files and confirm that the setup is complete.
If you prefer to manage your configuration files directly, you can set them up manually.
Always add .env to your .gitignore file to prevent your secret keys from being committed to version control.
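A minimal `.gitignore` entry for this:

```
# .gitignore
.env
```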
Create the .env file for API Keys/Tokens
In your project's root directory, create a `.env` file and add the corresponding environment variable for your provider.
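A sketch of what this file might look like. The variable names follow each provider library's common conventions (e.g. `OPENAI_API_KEY`); confirm the exact names the toolkit expects.

```
# .env — include only the entry for your chosen provider
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
HF_TOKEN=hf_...
```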
Edit the plb.toml file
Next, open your project’s plb.toml file. Add or edit the [ai] section to specify the provider and model you intend to use.
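A minimal sketch of that section (the key names here are assumptions; compare against a `plb.toml` produced by `plb configure-ai`):

```toml
[ai]
provider = "openai"   # e.g. "anthropic", "ollama", "huggingface"
model = "gpt-4o"      # any model supported by your provider
```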
Once both files are created and saved, your manual configuration is complete.
You can also check that everything is set up correctly by running `plb configure-ai status`.
Under Development
This feature is currently under development! Check out our GitHub for updates.
You can now check out the AI Features page!