cocktailpeanut / dalai

The simplest way to run LLaMA on your local machine
https://cocktailpeanut.github.io/dalai

Added `ctx_size` to the config #424

Open pratyushtiwary opened 1 year ago

pratyushtiwary commented 1 year ago

Issue: when running the model on systems with low RAM, it throws a segmentation fault because of the default context size, which is 2 GB.

Changes: this PR adds a new `ctx_size` config option that lets the user change the context size directly from the UI.
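A minimal sketch of how a user-supplied `ctx_size` config value might be forwarded to the underlying llama.cpp binary as a CLI flag. The `buildArgs` helper and the `512` fallback are assumptions for illustration, not dalai's actual implementation:

```javascript
// Hypothetical helper: turn a dalai-style config object into CLI args.
// The ctx_size value from the UI overrides a conservative fallback so
// low-RAM systems can request a smaller context window.
function buildArgs(config) {
  const ctxSize = Number(config.ctx_size) || 512; // assumed fallback
  return [
    "--model", config.model,
    "--ctx_size", String(ctxSize),
    "--prompt", config.prompt,
  ];
}

// Example: a low-RAM user requests a smaller context window.
const args = buildArgs({ model: "7B", prompt: "hello", ctx_size: 256 });
console.log(args.join(" "));
```

Validating and clamping the value before spawning the process keeps a bad UI input from reintroducing the segmentation fault this PR works around.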