simonw / llm-gpt4all

Plugin for LLM adding support for the GPT4All collection of models
Apache License 2.0

Error: The model `gpt-4-32k` does not exist or you do not have access to it. #11

Open · SuperBruceJia opened this issue 10 months ago

SuperBruceJia commented 10 months ago
```
$ llm models list
OpenAI Chat: gpt-3.5-turbo (aliases: 3.5, chatgpt)
OpenAI Chat: gpt-3.5-turbo-16k (aliases: chatgpt-16k, 3.5-16k)
OpenAI Chat: gpt-4 (aliases: 4, gpt4)
OpenAI Chat: gpt-4-32k (aliases: 4-32k)
gpt4all: orca-mini-7b - Mini Orca, 3.53GB download, needs 8GB RAM (installed)
gpt4all: GPT4All-13B-snoozy - Snoozy, 7.58GB download, needs 16GB RAM (installed)
gpt4all: ggml-all-MiniLM-L6-v2-f16 - Bert, 43.41MB download, needs 1GB RAM
gpt4all: orca-mini-3b - Mini Orca (Small), 1.80GB download, needs 4GB RAM
gpt4all: llama-2-7b-chat - Llama-2-7B Chat, 3.53GB download, needs 8GB RAM
gpt4all: ggml-model-gpt4all-falcon-q4_0 - GPT4All Falcon, 3.78GB download, needs 8GB RAM
gpt4all: ggml-replit-code-v1-3b - Replit, 4.84GB download, needs 4GB RAM
gpt4all: wizardlm-13b-v1 - Wizard v1.1, 6.82GB download, needs 16GB RAM
gpt4all: orca-mini-13b - Mini Orca (Large), 6.82GB download, needs 16GB RAM
gpt4all: starcoderbase-3b-ggml - Starcoder (Small), 6.99GB download, needs 8GB RAM
gpt4all: nous-hermes-13b - Hermes, 7.58GB download, needs 16GB RAM
gpt4all: wizardLM-13B-Uncensored - Wizard Uncensored, 7.58GB download, needs 16GB RAM
gpt4all: starcoderbase-7b-ggml - Starcoder, 16.63GB download, needs 16GB RAM
LlamaModel: llama-2-7b-chat.ggmlv3.q8_0 (aliases: llama2-chat, l2c)
LlamaModel: llama-2-13b-chat.ggmlv3.q8_0 (aliases: llama2-chat-13b)
```

```
$ llm -m gpt-4-32k 'What do you think of Boston University and City University of Hong Kong'
Error: The model `gpt-4-32k` does not exist or you do not have access to it. Learn more: https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4.
```

gdbing commented 9 months ago

`gpt-4-32k` is an OpenAI-hosted model, not one of the models available through gpt4all, so it doesn't run locally. To use it you need an OpenAI API key, and your OpenAI account also needs access to that specific model, which is what the error message is pointing at.
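
For reference, a minimal sketch of both paths, assuming you either have an OpenAI key or want to use the `orca-mini-7b` model shown as installed in the listing above:

```
# OpenAI-hosted model: needs an API key (and account access to the model)
$ llm keys set openai
$ llm -m gpt-4 'What do you think of Boston University?'

# Local alternative: one of the installed gpt4all models from `llm models list`
$ llm -m orca-mini-7b 'What do you think of Boston University?'
```

If your account does have access to gpt-4-32k, the original `llm -m gpt-4-32k ...` invocation should work once the key is set.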