Usage

Using SmartGPT.

Installation

Prerequisites: Rust and Cargo.

  1. Clone the Repository.

git clone https://github.com/Cormanz/smartgpt.git

Alternatively, create a GitHub Codespace and run it there.

  2. Run it for the first time with cargo run --release, which will auto-generate a config.yml.

  3. Fill in and optionally modify your config.yml, then run it again.
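
Put together, a first run outside Docker looks roughly like this (use whatever editor you like for the config step):

git clone https://github.com/Cormanz/smartgpt.git
cd smartgpt
cargo run --release   # first run auto-generates config.yml
# fill in config.yml, then run it again:
cargo run --release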

With Docker

  1. Install docker compose, preferably the latest stable version.

  2. Clone the repository with git clone https://github.com/Cormanz/smartgpt.git && cd smartgpt.

  3. Build the docker image with: docker compose build

  4. Run it in release mode with docker compose run --rm smartgpt. This will create a config.yml for you.

  5. Adjust the config to your liking, and execute it once again.
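
For reference, here is the whole Docker flow in one place (the same commands as the steps above):

git clone https://github.com/Cormanz/smartgpt.git && cd smartgpt
docker compose build
docker compose run --rm smartgpt   # first run creates config.yml
# adjust config.yml to your liking, then run it again:
docker compose run --rm smartgpt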

Local Models

SmartGPT has experimental support for local models using our llm-rs integration. Anywhere you see llm:, you can try to swap in a local model:

local:
    type: llama # llama / bloom / gpt2 / gptj / neox
    model path: PATH
    context tokens: 2048
    mmap: true # optional
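
As a rough sketch of that swap, assuming the local: block nests under the existing llm: key (this page doesn't show the surrounding config.yml structure, so treat the placement as an assumption and PATH as a placeholder for your model file):

llm:
    local:
        type: llama # llama / bloom / gpt2 / gptj / neox
        model path: PATH # path to the model weights on disk
        context tokens: 2048
        mmap: true # optional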