Usage
Using SmartGPT.
Installation
Prerequisites: Rust and Cargo
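If you don't have Rust and Cargo yet, one common way to set them up is via rustup. This is a general Rust toolchain sketch, not a SmartGPT-specific requirement:

```bash
# Install Rust and Cargo via rustup (see https://rustup.rs)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Verify the toolchain is available
rustc --version
cargo --version
```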
- Clone the repository with `git clone https://github.com/Cormanz/smartgpt.git`. Alternatively, create a GitHub Codespace and run it there.
- Run it for the first time with `cargo run --release`, which will auto-generate a `config.yml`.
- Fill in and optionally modify your `config.yml`, then run it again. The full sequence is sketched below.
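Putting those steps together, a typical first run from source looks roughly like this (the `$EDITOR` line is just a placeholder for whatever you use to edit the config):

```bash
# Clone and enter the repository
git clone https://github.com/Cormanz/smartgpt.git
cd smartgpt

# First run auto-generates config.yml
cargo run --release

# Fill in config.yml, then run again
$EDITOR config.yml   # use your editor of choice
cargo run --release
```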
With Docker
- Install `docker compose`, preferably the latest stable version.
- Clone the repository with `git clone https://github.com/Cormanz/smartgpt.git && cd smartgpt`.
- Build the Docker image with `docker compose build`.
- Run it in release mode with `docker compose run --rm smartgpt`. This will create a `config.yml` for you.
- Adjust the config to your liking, then run it again. The full sequence is sketched below.
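For reference, the whole Docker flow can be sketched as follows (again, the `$EDITOR` line is a placeholder):

```bash
# Clone the repository and enter it
git clone https://github.com/Cormanz/smartgpt.git && cd smartgpt

# Build the image
docker compose build

# First run creates config.yml
docker compose run --rm smartgpt

# Adjust the generated config, then run again
$EDITOR config.yml   # use your editor of choice
docker compose run --rm smartgpt
```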
Local Models
SmartGPT has experimental support for local models using our llm-rs integration. Anywhere you see `llm:`, you can try to swap in a local model:
```yaml
local:
    type: llama # llama / bloom / gpt2 / gptj / neox
    model path: PATH
    context tokens: 2048
    mmap: true # optional
```