Usage

Using SmartGPT.

Installation

Prerequisites: Rust and Cargo

  1. Clone the repository:

git clone https://github.com/Cormanz/smartgpt.git

Alternatively, create a GitHub Codespace and run it there.

  2. Run it for the first time with cargo run --release, which will auto-generate a config.yml.

  3. Fill in and optionally adjust your config.yml, then run it again (the full command sequence is sketched below).
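
Put together, a typical first run looks like the sketch below (assuming a Unix-like shell and that Rust and Cargo are already installed):

git clone https://github.com/Cormanz/smartgpt.git
cd smartgpt
cargo run --release    # first run auto-generates config.yml
# fill in config.yml, then run again:
cargo run --release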

With Docker

  1. Install Docker Compose, preferably the latest stable version.

  2. Clone the repository with git clone https://github.com/Cormanz/smartgpt.git && cd smartgpt.

  3. Build the Docker image with docker compose build.

  4. Run it in release mode with docker compose run --rm smartgpt. This will create a config.yml for you.

  5. Adjust the config to your liking, then run it again (the full sequence is sketched below).
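
End to end, the Docker workflow is sketched below (assuming Docker and Docker Compose are installed):

git clone https://github.com/Cormanz/smartgpt.git && cd smartgpt
docker compose build
docker compose run --rm smartgpt    # first run creates config.yml
# adjust config.yml, then run again:
docker compose run --rm smartgpt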

Local Models

SmartGPT has experimental support for local models using our llm-rs integration. Anywhere you see llm: in your config, you can try swapping in a local model:

local:
    type: llama # llama / bloom / gpt2 / gptj / neox
    model path: PATH
    context tokens: 2048
    mmap: true #optional
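
For example, once the relevant llm: section is switched to a local: block like the one above and model path points at a model file on disk, SmartGPT is run the same way as before:

# after editing config.yml to use a local model, run again
# (or docker compose run --rm smartgpt for the Docker setup)
cargo run --release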
