Getting Started
Check out our Cloud offering at Promptly, or follow the instructions below to deploy LLMStack on your own infrastructure. If you are using Promptly, you can skip this section and go straight to the Key Concepts section.
Prerequisites
LLMStack is installed with pip and runs its services in Docker containers, so you will need working Python (with pip) and Docker installations before proceeding.
Installation
You can install LLMStack locally using the following command:
pip install llmstack
LLMStack comes with a default admin account whose credentials are admin and promptly. Be sure to change the password from the admin panel after logging in.
If you are on Windows, please use WSL2 (Windows Subsystem for Linux) to install LLMStack. You can follow Microsoft's instructions to install WSL2. Once you are in a WSL2 terminal, you can install LLMStack using the above command.
Once installed, you can start LLMStack using the following command:
llmstack
LLMStack should automatically open your browser to the login page at http://localhost:3000. If it does not, you can open http://localhost:3000 manually to log in to the platform.
LLMStack creates a config file in your home directory at ~/.llmstack/config to store its configuration. You can change the port and other settings in this file. Refer to the configuration section for more information.
When you start LLMStack for the first time, it will download the required docker images. This may take a few minutes depending on your internet connection.
If you are deploying LLMStack on a server, make sure to update allowed_hosts and csrf_trusted_origins in the ~/.llmstack/config file to include the hostname of your server. Refer to the configuration section for more information.
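As an illustration, the server-related entries might look like the following. The key names come from the config file itself, but the exact syntax is an assumption here; match whatever format your generated ~/.llmstack/config file already uses.

```ini
allowed_hosts=localhost,llmstack.example.com
csrf_trusted_origins=https://llmstack.example.com
```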
You can add your own keys for providers like OpenAI, Cohere, Stability, etc., from the Settings page. If you want to provide default keys for all users of your LLMStack instance, you can add them to the ~/.llmstack/config file.
Upgrading
To upgrade LLMStack to the latest release, you can run the following command:
pip install llmstack --upgrade
Key Concepts
Processors
Processors are the basic building blocks in LLMStack. They process input from the user or from a previous processor in a chain, take some action, and optionally generate a response. LLMStack comes with built-in processors such as OpenAI's ChatGPT and Image Generation, Stability's Image Generation, and more. You can also create your own processors and add them to LLMStack.
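The chaining idea can be pictured with a minimal sketch. This is not LLMStack's actual processor interface, just an illustration of the concept: each processor transforms the output of the previous one.

```python
# Conceptual sketch only -- not LLMStack's real processor API.
# Each "processor" is modeled as a function from input text to output text,
# and an app is a chain of processors applied in order.
from typing import Callable, List

Processor = Callable[[str], str]

def run_chain(processors: List[Processor], user_input: str) -> str:
    """Feed the user input through each processor in sequence."""
    data = user_input
    for process in processors:
        data = process(data)
    return data

# Two toy processors standing in for real ones (e.g. a prompt template
# followed by an LLM call):
template = lambda text: f"Summarize: {text}"
fake_llm = lambda prompt: prompt.upper()

result = run_chain([template, fake_llm], "hello world")
```

Real processors would call external APIs instead of manipulating strings, but the control flow is the same: output of one step becomes input to the next.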
Tools
Tools are processors that perform an action when used in the context of an agent. For example, you can use the ChatGPT processor with an essay-writing prompt as a tool in an Agent app, and the agent will invoke that tool whenever it needs to generate an essay.
Providers
Providers supply the underlying functionality to processors. For example, OpenAI's ChatGPT processor uses OpenAI's API to generate text. LLMStack comes with built-in providers such as OpenAI, Cohere, and Stability, and providers also act as namespaces for processors. You can create your own providers and add them to LLMStack.
Apps
Apps are the final product of LLMStack, created by chaining multiple processors together. LLMStack provides a visual editor to build apps. Apps can be shared with other users of the LLMStack installation and can be invoked via APIs, from the UI, or triggered from Slack, Discord, etc.
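Invoking an app over HTTP can be sketched as below. The endpoint path (/api/apps/&lt;id&gt;/run), the Token authorization scheme, and the payload shape are assumptions for illustration; check your instance's API documentation for the exact contract.

```python
# Illustrative sketch of calling a published LLMStack app over HTTP.
# Endpoint path, auth header, and payload shape are assumptions.
import json
import urllib.request

def build_run_url(base_url: str, app_id: str) -> str:
    """Assumed run endpoint for an app (illustrative)."""
    return f"{base_url}/api/apps/{app_id}/run"

def run_app(base_url: str, app_id: str, token: str, input_data: dict) -> dict:
    """POST the app input as JSON and return the parsed response."""
    req = urllib.request.Request(
        build_run_url(base_url, app_id),
        data=json.dumps({"input": input_data}).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Token {token}",  # assumed auth scheme
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

A Slack or Discord trigger would ultimately hit the same kind of run endpoint on the app's behalf.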
Agents
Agents are autonomous apps that can perform tasks on your behalf, using the available processors as tools. For example, you can create an agent that acts as an SDR (Sales Development Representative) and sends emails to your leads, using the processors provided by LLMStack as its tools.
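The tool-use relationship can be sketched generically. This is not LLMStack's agent implementation: in LLMStack the underlying LLM decides which tool to call, whereas here the choice is passed in explicitly just to show the dispatch.

```python
# Generic illustration of an agent using processors as named tools.
# The tool names and dispatch logic are invented for this example.
from typing import Callable, Dict

class Agent:
    def __init__(self) -> None:
        # Registry mapping tool names to callables (processors).
        self.tools: Dict[str, Callable[[str], str]] = {}

    def register_tool(self, name: str, tool: Callable[[str], str]) -> None:
        self.tools[name] = tool

    def act(self, tool_name: str, task: str) -> str:
        """Dispatch the task to the requested tool and return its output."""
        return self.tools[tool_name](task)

agent = Agent()
agent.register_tool("write_essay", lambda topic: f"Essay about {topic}")
output = agent.act("write_essay", "sales outreach")
```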