mudler / LocalAI

:robot: The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. Features: Generate Text, Audio, Video, Images, Voice Cloning, Distributed inference
https://localai.io
MIT License

Error when GPT-16K model is used in the telegram-bot example #2636

Open · greygoo opened this issue 4 months ago

greygoo commented 4 months ago

LocalAI version:

quay.io/go-skynet/local-ai:v1.18.0-ffmpeg, rebuilt with GO_TAGS=stablediffusion

Environment, CPU architecture, OS, and Version:

RTX 4060 / Ryzen 5700 / 32 GB RAM

Describe the bug

NOTE: The affected code is not in the LocalAI repo but in https://github.com/mudler/chatgpt_telegram_bot; it is only referenced from the example documentation.

When selecting the GPT-16K model in the bot settings and sending a prompt, the bot throws an error:

Something went wrong during completion. Reason: could not load model - all backends returned error: 11 errors occurred:
 * failed loading model
 * failed loading model
 * failed loading model
 * failed loading model
 * failed loading model
 * failed loading model
 * failed loading model
 * failed loading model
 * failed loading model
 * failed loading model
 * failed loading model

 {"error":{"code":500,"message":"could not load model - all backends returned error: 11 errors occurred:\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\n","type":""}} 500 {'error': {'code': 500, 'message': 'could not load model - all backends returned error: 11 errors occurred:\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\n', 'type': ''}} <CIMultiDictProxy('Date': 'Sun, 23 Jun 2024 00:30:29 GMT', 'Content-Type': 'application/json', 'Content-Length': '406')>

Also looking at the LocalAI logs (see below), it looks like the model file is simply missing.

To Reproduce

1. Set up the telegram-bot example against LocalAI.
2. Select the GPT-16K model in the bot settings.
3. Send a prompt.
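For reference, a minimal sketch that reproduces the failing request outside of Telegram, assuming LocalAI listens on its default http://localhost:8080 (the bot sends an equivalent request through the OpenAI client; endpoint and payload are taken from the logs below):

```python
# Reproduce the failing chat completion directly against LocalAI's
# OpenAI-compatible API.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "gpt-3.5-turbo-16k",  # the name the GPT-16K bot entry sends
        "messages": [{"role": "user", "content": "why can't pigs fly?"}],
        "temperature": 0.7,
        "max_tokens": 1000,
    },
    timeout=120,
)
print(resp.status_code)  # 500 while /models/gpt-3.5-turbo-16k does not exist
print(resp.json())       # {'error': {'code': 500, 'message': 'could not load model - ...'}}
```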

Expected behavior

The model clicked in the bot settings is selected and used to answer the prompt.

Suggested fixes would be to either remove the unavailable model entries from the bot's configuration, or to add the missing model to LocalAI.

Let me know which you'd prefer, and I can take care of it. I don't really know what I'm doing yet when adding a model, so in that case I might come back on Discord with issues I run into. The other options are done quickly without help.
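If the add-a-model route is preferred, the usual way to make the name resolve in LocalAI is a model definition YAML in the models directory that maps the OpenAI-style name to a real model file. A sketch with placeholder file names (the referenced GGUF is an assumption and would have to be downloaded separately):

```yaml
# /models/gpt-3.5-turbo-16k.yaml (hypothetical)
name: gpt-3.5-turbo-16k       # the name the bot requests
backend: llama-cpp
context_size: 16384
parameters:
  model: some-16k-model.gguf  # placeholder; must actually exist in /models
```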

Logs

Log from LocalAI:

api-1                 | 12:30AM DBG Request received: {"model":"gpt-3.5-turbo-16k","file":"","language":"","response_format":"","size":"","prompt":null,"instruction":"","input":null,"stop":null,"messages":[{"role":"system"},{"role":"user","content":"why can't pigs fly?"}],"stream":false,"echo":false,"top_p":1,"top_k":0,"temperature":0.7,"max_tokens":1000,"n":0,"batch":0,"f16":false,"ignore_eos":false,"repeat_penalty":0,"n_keep":0,"mirostat_eta":0,"mirostat_tau":0,"mirostat":0,"seed":0,"mode":0,"step":0}
api-1                 | 12:30AM DBG Parameter Config: &{OpenAIRequest:{Model:gpt-3.5-turbo-16k File: Language: ResponseFormat: Size: Prompt:<nil> Instruction: Input:<nil> Stop:<nil> Messages:[] Stream:false Echo:false TopP:1 TopK:80 Temperature:0.7 Maxtokens:1000 N:0 Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 Seed:0 Mode:0 Step:0} Name: StopWords:[] Cutstrings:[] TrimSpace:[] ContextSize:512 F16:false Threads:4 Debug:true Roles:map[] Embeddings:false Backend: TemplateConfig:{Completion: Chat: Edit:} MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:0 ImageGenerationAssets: PromptCachePath: PromptCacheAll:false PromptStrings:[] InputStrings:[] InputToken:[]}
api-1                 | 12:30AM DBG Loading model 'gpt-3.5-turbo-16k' greedly
api-1                 | 12:30AM DBG [llama] Attempting to load
api-1                 | 12:30AM DBG Loading model llama from gpt-3.5-turbo-16k
api-1                 | 12:30AM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | error loading model: failed to open /models/gpt-3.5-turbo-16k: No such file or directory
api-1                 | llama_init_from_file: failed to load model
api-1                 | 12:30AM DBG [llama] Fails: failed loading model
api-1                 | 12:30AM DBG [gpt4all] Attempting to load
api-1                 | 12:30AM DBG Loading model gpt4all from gpt-3.5-turbo-16k
api-1                 | 12:30AM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | load_gpt4all_model: error 'No such file or directory'
api-1                 | 12:30AM DBG [gpt4all] Fails: failed loading model
api-1                 | 12:30AM DBG [gptneox] Attempting to load
api-1                 | 12:30AM DBG Loading model gptneox from gpt-3.5-turbo-16k
api-1                 | 12:30AM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | gpt_neox_model_load: failed to open '/models/gpt-3.5-turbo-16k'
api-1                 | gpt_neox_bootstrap: failed to load model from '/models/gpt-3.5-turbo-16k'
api-1                 | 12:30AM DBG [gptneox] Fails: failed loading model
api-1                 | 12:30AM DBG [bert-embeddings] Attempting to load
api-1                 | 12:30AM DBG Loading model bert-embeddings from gpt-3.5-turbo-16k
api-1                 | 12:30AM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | bert_load_from_file: failed to open '/models/gpt-3.5-turbo-16k'
api-1                 | bert_bootstrap: failed to load model from '/models/gpt-3.5-turbo-16k'
api-1                 | 12:30AM DBG [bert-embeddings] Fails: failed loading model
api-1                 | 12:30AM DBG [gptj] Attempting to load
api-1                 | 12:30AM DBG Loading model gptj from gpt-3.5-turbo-16k
api-1                 | 12:30AM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | gptj_model_load: failed to open '/models/gpt-3.5-turbo-16k'
api-1                 | gptj_bootstrap: failed to load model from '/models/gpt-3.5-turbo-16k'
api-1                 | 12:30AM DBG [gptj] Fails: failed loading model
api-1                 | 12:30AM DBG [gpt2] Attempting to load
api-1                 | 12:30AM DBG Loading model gpt2 from gpt-3.5-turbo-16k
api-1                 | 12:30AM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | gpt2_model_load: failed to open '/models/gpt-3.5-turbo-16k'
api-1                 | gpt2_bootstrap: failed to load model from '/models/gpt-3.5-turbo-16k'
api-1                 | 12:30AM DBG [gpt2] Fails: failed loading model
api-1                 | 12:30AM DBG [dolly] Attempting to load
api-1                 | 12:30AM DBG Loading model dolly from gpt-3.5-turbo-16k
api-1                 | 12:30AM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | dollyv2_model_load: failed to open '/models/gpt-3.5-turbo-16k'
api-1                 | dolly_bootstrap: failed to load model from '/models/gpt-3.5-turbo-16k'
api-1                 | 12:30AM DBG [dolly] Fails: failed loading model
api-1                 | 12:30AM DBG [falcon] Attempting to load
api-1                 | 12:30AM DBG Loading model falcon from gpt-3.5-turbo-16k
api-1                 | 12:30AM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | falcon_model_load: failed to open '/models/gpt-3.5-turbo-16k'
api-1                 | falcon_bootstrap: failed to load model from '/models/gpt-3.5-turbo-16k'
api-1                 | 12:30AM DBG [falcon] Fails: failed loading model
api-1                 | 12:30AM DBG [mpt] Attempting to load
api-1                 | 12:30AM DBG Loading model mpt from gpt-3.5-turbo-16k
api-1                 | 12:30AM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | mpt_model_load: failed to open '/models/gpt-3.5-turbo-16k'
api-1                 | mpt_bootstrap: failed to load model from '/models/gpt-3.5-turbo-16k'
api-1                 | 12:30AM DBG [mpt] Fails: failed loading model
api-1                 | 12:30AM DBG [replit] Attempting to load
api-1                 | 12:30AM DBG Loading model replit from gpt-3.5-turbo-16k
api-1                 | 12:30AM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | replit_model_load: failed to open '/models/gpt-3.5-turbo-16k'
api-1                 | replit_bootstrap: failed to load model from '/models/gpt-3.5-turbo-16k'
api-1                 | 12:30AM DBG [replit] Fails: failed loading model
api-1                 | 12:30AM DBG [starcoder] Attempting to load
api-1                 | 12:30AM DBG Loading model starcoder from gpt-3.5-turbo-16k
api-1                 | 12:30AM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | starcoder_model_load: failed to open '/models/gpt-3.5-turbo-16k'
api-1                 | starcoder_bootstrap: failed to load model from '/models/gpt-3.5-turbo-16k'
api-1                 | 12:30AM DBG [starcoder] Fails: failed loading model
api-1                 | [172.25.0.3]:53384  500  -  POST     /v1/chat/completions
chatgpt_telegram_bot  | Something went wrong during completion. Reason: could not load model - all backends returned error: 11 errors occurred:
chatgpt_telegram_bot  |         * failed loading model
chatgpt_telegram_bot  |         * failed loading model
chatgpt_telegram_bot  |         * failed loading model
chatgpt_telegram_bot  |         * failed loading model
chatgpt_telegram_bot  |         * failed loading model
chatgpt_telegram_bot  |         * failed loading model
chatgpt_telegram_bot  |         * failed loading model
chatgpt_telegram_bot  |         * failed loading model
chatgpt_telegram_bot  |         * failed loading model
chatgpt_telegram_bot  |         * failed loading model
chatgpt_telegram_bot  |         * failed loading model
chatgpt_telegram_bot  | 
chatgpt_telegram_bot  |  {"error":{"code":500,"message":"could not load model - all backends returned error: 11 errors occurred:\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\n","type":""}} 500 {'error': {'code': 500, 'message': 'could not load model - all backends returned error: 11 errors occurred:\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\t* failed loading model\n\n', 'type': ''}} <CIMultiDictProxy('Date': 'Sun, 23 Jun 2024 00:30:29 GMT', 'Content-Type': 'application/json', 'Content-Length': '406')>

Additional context

greygoo commented 4 months ago

I just noticed that after restarting the container, it started downloading the gpt4all model. I will double-check once that has finished; however, since the config points at gpt-3.5-turbo-16k, I doubt that will fix it.
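Whatever the container downloads, an easy way to verify this is to ask the API which model names it actually serves. A sketch against the OpenAI-compatible /v1/models endpoint, again assuming the default localhost:8080 address:

```python
# List the model names the running LocalAI instance serves; if
# 'gpt-3.5-turbo-16k' is missing here, the bot entry cannot work.
import requests

models = requests.get("http://localhost:8080/v1/models", timeout=30).json()
print(sorted(m["id"] for m in models["data"]))
```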

greygoo commented 4 months ago

Just added a one-line fix to remove the entries.

greygoo commented 4 months ago

Please hold off on merging: I ran into another issue with Stable Diffusion that seems to be fixed in a newer container version (v2.17.1 instead of v1.18.0-ffmpeg). With that version, gpt-4 also threw a different error, which indicates that it did find the model. I'm going to retest with the newer container version and update the issue accordingly.

greygoo commented 4 months ago

Log for GPT-16K:

api-1                 | 2:28PM DBG Request received: {"model":"gpt-3.5-turbo-16k","language":"","n":0,"top_p":1,"top_k":null,"temperature":0.7,"max_tokens":1000,"echo":false,"batch":0,"ignore_eos":false,"repeat_penalty":0,"n_keep":0,"frequency_penalty":0,"presence_penalty":0,"tfz":null,"typical_p":null,"seed":null,"negative_prompt":"","rope_freq_base":0,"rope_freq_scale":0,"negative_prompt_scale":0,"use_fast_tokenizer":false,"clip_skip":0,"tokenizer":"","file":"","size":"","prompt":null,"instruction":"","input":null,"stop":null,"messages":[{"role":"system","content":"As an advanced chatbot Assistant, your primary goal is to assist users to the best of your ability. This may involve answering questions, providing helpful information, or completing tasks based on user input. In order to effectively assist users, it is important to be detailed and thorough in your responses. Use examples and evidence to support your points and justify your recommendations or solutions. Remember to always prioritize the needs and satisfaction of the user. Your ultimate goal is to provide a helpful and enjoyable experience for the user.\nIf user asks you about programming or asks to write code do not answer his question, but be sure to advise him to switch to a special mode \\\"👩🏼‍💻e Assistant\\\" by sending the command /mode to chat.\n"},{"role":"user","content":"why can't pigs fly?"}],"functions":null,"function_call":null,"stream":false,"mode":0,"step":0,"grammar":"","grammar_json_functions":null,"grammar_json_name":null,"backend":"","model_base_name":""}
api-1                 | 2:28PM DBG guessDefaultsFromFile: not a GGUF file
api-1                 | 2:28PM DBG Configuration read: &{PredictionOptions:{Model:gpt-3.5-turbo-16k Language: N:0 TopP:0xc00042e0c0 TopK:0xc00042e1c8 Temperature:0xc00042e0a0 Maxtokens:0xc00042e0a8 Echo:false Batch:0 IgnoreEOS:false RepeatPenalty:0 Keep:0 FrequencyPenalty:0 PresencePenalty:0 TFZ:0xc00042e218 TypicalP:0xc00042e210 Seed:0xc00042e278 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name: F16:0xc00042e1a8 Threads:0xc00042e1a0 Debug:0xc00042e270 Roles:map[] Embeddings:false Backend: TemplateConfig:{Chat: ChatMessage: Completion: Edit: Functions: UseTokenizerTemplate:false JoinChatMessagesByCharacter:<nil>} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: ResponseFormat: ResponseFormatMap:map[] FunctionsConfig:{DisableNoAction:false GrammarConfig:{ParallelCalls:false DisableParallelNewLines:false MixedMode:false NoMixedFreeString:false NoGrammar:false Prefix: ExpectStringsAfterJSON:false} NoActionFunctionName: NoActionDescriptionName: ResponseRegex:[] JSONRegexMatch:[] ReplaceFunctionResults:[] ReplaceLLMResult:[] CaptureLLMResult:[] FunctionName:false} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0xc00042e208 MirostatTAU:0xc00042e200 Mirostat:0xc00042e1e8 NGPULayers:0xc00042e248 MMap:0xc00042e270 MMlock:0xc00042e271 LowVRAM:0xc00042e271 Grammar: StopWords:[] Cutstrings:[] TrimSpace:[] TrimSuffix:[] ContextSize:0xc00042e168 NUMA:false LoraAdapter: LoraBase: LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 MMProj: FlashAttention:false NoKVOffloading:false RopeScaling: ModelType: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{CUDA:false PipelineType: SchedulerType: EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder: ControlNet:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} TTSConfig:{Voice: VallE:{AudioPath:}} CUDA:false DownloadFiles:[] Description: Usage:}
api-1                 | 2:28PM DBG Parameters: &{PredictionOptions:{Model:gpt-3.5-turbo-16k Language: N:0 TopP:0xc00042e0c0 TopK:0xc00042e1c8 Temperature:0xc00042e0a0 Maxtokens:0xc00042e0a8 Echo:false Batch:0 IgnoreEOS:false RepeatPenalty:0 Keep:0 FrequencyPenalty:0 PresencePenalty:0 TFZ:0xc00042e218 TypicalP:0xc00042e210 Seed:0xc00042e278 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name: F16:0xc00042e1a8 Threads:0xc00042e1a0 Debug:0xc00042e270 Roles:map[] Embeddings:false Backend: TemplateConfig:{Chat: ChatMessage: Completion: Edit: Functions: UseTokenizerTemplate:false JoinChatMessagesByCharacter:<nil>} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: ResponseFormat: ResponseFormatMap:map[] FunctionsConfig:{DisableNoAction:false GrammarConfig:{ParallelCalls:false DisableParallelNewLines:false MixedMode:false NoMixedFreeString:false NoGrammar:false Prefix: ExpectStringsAfterJSON:false} NoActionFunctionName: NoActionDescriptionName: ResponseRegex:[] JSONRegexMatch:[] ReplaceFunctionResults:[] ReplaceLLMResult:[] CaptureLLMResult:[] FunctionName:false} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0xc00042e208 MirostatTAU:0xc00042e200 Mirostat:0xc00042e1e8 NGPULayers:0xc00042e248 MMap:0xc00042e270 MMlock:0xc00042e271 LowVRAM:0xc00042e271 Grammar: StopWords:[] Cutstrings:[] TrimSpace:[] TrimSuffix:[] ContextSize:0xc00042e168 NUMA:false LoraAdapter: LoraBase: LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 MMProj: FlashAttention:false NoKVOffloading:false RopeScaling: ModelType: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{CUDA:false PipelineType: SchedulerType: EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder: ControlNet:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} TTSConfig:{Voice: VallE:{AudioPath:}} CUDA:false DownloadFiles:[] Description: Usage:}
api-1                 | 2:28PM DBG Prompt (before templating): As an advanced chatbot Assistant, your primary goal is to assist users to the best of your ability. This may involve answering questions, providing helpful information, or completing tasks based on user input. In order to effectively assist users, it is important to be detailed and thorough in your responses. Use examples and evidence to support your points and justify your recommendations or solutions. Remember to always prioritize the needs and satisfaction of the user. Your ultimate goal is to provide a helpful and enjoyable experience for the user.
api-1                 | If user asks you about programming or asks to write code do not answer his question, but be sure to advise him to switch to a special mode \"👩🏼‍💻e Assistant\" by sending the command /mode to chat.
api-1                 | 
api-1                 | why can't pigs fly?
api-1                 | 2:28PM DBG Prompt (after templating): As an advanced chatbot Assistant, your primary goal is to assist users to the best of your ability. This may involve answering questions, providing helpful information, or completing tasks based on user input. In order to effectively assist users, it is important to be detailed and thorough in your responses. Use examples and evidence to support your points and justify your recommendations or solutions. Remember to always prioritize the needs and satisfaction of the user. Your ultimate goal is to provide a helpful and enjoyable experience for the user.
api-1                 | If user asks you about programming or asks to write code do not answer his question, but be sure to advise him to switch to a special mode \"👩🏼‍💻e Assistant\" by sending the command /mode to chat.
api-1                 | 
api-1                 | why can't pigs fly?
api-1                 | 2:28PM DBG Loading from the following backends (in order): [llama-cpp llama-ggml gpt4all llama-cpp-fallback stablediffusion piper rwkv whisper huggingface bert-embeddings /build/backend/python/coqui/run.sh /build/backend/python/parler-tts/run.sh /build/backend/python/diffusers/run.sh /build/backend/python/petals/run.sh /build/backend/python/transformers/run.sh /build/backend/python/rerankers/run.sh /build/backend/python/vall-e-x/run.sh /build/backend/python/exllama/run.sh /build/backend/python/openvoice/run.sh /build/backend/python/sentencetransformers/run.sh /build/backend/python/sentencetransformers/run.sh /build/backend/python/bark/run.sh /build/backend/python/mamba/run.sh /build/backend/python/autogptq/run.sh /build/backend/python/vllm/run.sh /build/backend/python/exllama2/run.sh /build/backend/python/transformers-musicgen/run.sh]
api-1                 | 2:28PM INF Trying to load the model 'gpt-3.5-turbo-16k' with the backend '[llama-cpp llama-ggml gpt4all llama-cpp-fallback stablediffusion piper rwkv whisper huggingface bert-embeddings /build/backend/python/coqui/run.sh /build/backend/python/parler-tts/run.sh /build/backend/python/diffusers/run.sh /build/backend/python/petals/run.sh /build/backend/python/transformers/run.sh /build/backend/python/rerankers/run.sh /build/backend/python/vall-e-x/run.sh /build/backend/python/exllama/run.sh /build/backend/python/openvoice/run.sh /build/backend/python/sentencetransformers/run.sh /build/backend/python/sentencetransformers/run.sh /build/backend/python/bark/run.sh /build/backend/python/mamba/run.sh /build/backend/python/autogptq/run.sh /build/backend/python/vllm/run.sh /build/backend/python/exllama2/run.sh /build/backend/python/transformers-musicgen/run.sh]'
api-1                 | 2:28PM INF [llama-cpp] Attempting to load
api-1                 | 2:28PM INF Loading model 'gpt-3.5-turbo-16k' with backend llama-cpp
api-1                 | 2:28PM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | 2:28PM DBG Loading Model gpt-3.5-turbo-16k with gRPC (file: /models/gpt-3.5-turbo-16k) (backend: llama-cpp): {backendString:llama-cpp model:gpt-3.5-turbo-16k threads:8 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc00015bd48 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh openvoice:/build/backend/python/openvoice/run.sh parler-tts:/build/backend/python/parler-tts/run.sh petals:/build/backend/python/petals/run.sh rerankers:/build/backend/python/rerankers/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
api-1                 | 2:28PM INF [llama-cpp] attempting to load with AVX2 variant
api-1                 | 2:28PM DBG Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/llama-cpp-avx2
api-1                 | 2:28PM DBG GRPC Service for gpt-3.5-turbo-16k will be running at: '127.0.0.1:43923'
api-1                 | 2:28PM DBG GRPC Service state dir: /tmp/go-processmanager977353088
api-1                 | 2:28PM DBG GRPC Service Started
api-1                 | 2:28PM DBG GRPC(gpt-3.5-turbo-16k-127.0.0.1:43923): stdout Server listening on 127.0.0.1:43923
api-1                 | 2:28PM DBG GRPC Service Ready
api-1                 | 2:28PM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:<nil>} sizeCache:0 unknownFields:[] Model:gpt-3.5-turbo-16k ContextSize:512 Seed:197285243 NBatch:512 F16Memory:false MLock:false MMap:true VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:99999999 MainGPU: TensorSplit: Threads:8 LibrarySearchPath: RopeFreqBase:0 RopeFreqScale:0 RMSNormEps:0 NGQA:0 ModelFile:/models/gpt-3.5-turbo-16k Device: UseTriton:false ModelBaseName: UseFastTokenizer:false PipelineType: SchedulerType: CUDA:false CFGScale:0 IMG2IMG:false CLIPModel: CLIPSubfolder: CLIPSkip:0 ControlNet: Tokenizer: LoraBase: LoraAdapter: LoraScale:0 NoMulMatQ:false DraftModel: AudioPath: Quantization: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 MMProj: RopeScaling: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0 Type: FlashAttention:false NoKVOffload:false}
api-1                 | 2:28PM DBG GRPC(gpt-3.5-turbo-16k-127.0.0.1:43923): stdout {"timestamp":1719152938,"level":"ERROR","function":"load_model","line":464,"message":"unable to load model","model":"/models/gpt-3.5-turbo-16k"}
api-1                 | 2:28PM DBG GRPC(gpt-3.5-turbo-16k-127.0.0.1:43923): stderr llama_model_load: error loading model: llama_model_loader: failed to load model from /models/gpt-3.5-turbo-16k
api-1                 | 2:28PM DBG GRPC(gpt-3.5-turbo-16k-127.0.0.1:43923): stderr 
api-1                 | 2:28PM DBG GRPC(gpt-3.5-turbo-16k-127.0.0.1:43923): stderr llama_load_model_from_file: failed to load model
api-1                 | 2:28PM DBG GRPC(gpt-3.5-turbo-16k-127.0.0.1:43923): stderr llama_init_from_gpt_params: error: failed to load model '/models/gpt-3.5-turbo-16k'
api-1                 | 2:28PM INF [llama-cpp] Fails: could not load model: rpc error: code = Canceled desc = 
api-1                 | 2:28PM INF [llama-ggml] Attempting to load
api-1                 | 2:28PM INF Loading model 'gpt-3.5-turbo-16k' with backend llama-ggml
api-1                 | 2:28PM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | 2:28PM DBG Loading Model gpt-3.5-turbo-16k with gRPC (file: /models/gpt-3.5-turbo-16k) (backend: llama-ggml): {backendString:llama-ggml model:gpt-3.5-turbo-16k threads:8 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc00015bd48 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh openvoice:/build/backend/python/openvoice/run.sh parler-tts:/build/backend/python/parler-tts/run.sh petals:/build/backend/python/petals/run.sh rerankers:/build/backend/python/rerankers/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
api-1                 | 2:28PM DBG Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/llama-ggml
api-1                 | 2:28PM DBG GRPC Service for gpt-3.5-turbo-16k will be running at: '127.0.0.1:36389'
api-1                 | 2:28PM DBG GRPC Service state dir: /tmp/go-processmanager1305482590
api-1                 | 2:28PM DBG GRPC Service Started
api-1                 | 2:28PM DBG GRPC(gpt-3.5-turbo-16k-127.0.0.1:36389): stderr 2024/06/23 14:28:58 gRPC Server listening at 127.0.0.1:36389
api-1                 | 2:29PM DBG GRPC Service Ready
api-1                 | 2:29PM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:<nil>} sizeCache:0 unknownFields:[] Model:gpt-3.5-turbo-16k ContextSize:512 Seed:197285243 NBatch:512 F16Memory:false MLock:false MMap:true VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:99999999 MainGPU: TensorSplit: Threads:8 LibrarySearchPath: RopeFreqBase:0 RopeFreqScale:0 RMSNormEps:0 NGQA:0 ModelFile:/models/gpt-3.5-turbo-16k Device: UseTriton:false ModelBaseName: UseFastTokenizer:false PipelineType: SchedulerType: CUDA:false CFGScale:0 IMG2IMG:false CLIPModel: CLIPSubfolder: CLIPSkip:0 ControlNet: Tokenizer: LoraBase: LoraAdapter: LoraScale:0 NoMulMatQ:false DraftModel: AudioPath: Quantization: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 MMProj: RopeScaling: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0 Type: FlashAttention:false NoKVOffload:false}
api-1                 | 2:29PM DBG GRPC(gpt-3.5-turbo-16k-127.0.0.1:36389): stderr create_gpt_params: loading model /models/gpt-3.5-turbo-16k
api-1                 | 2:29PM DBG GRPC(gpt-3.5-turbo-16k-127.0.0.1:36389): stderr error loading model: failed to open /models/gpt-3.5-turbo-16k: No such file or directory
api-1                 | 2:29PM DBG GRPC(gpt-3.5-turbo-16k-127.0.0.1:36389): stderr llama_load_model_from_file: failed to load model
api-1                 | 2:29PM DBG GRPC(gpt-3.5-turbo-16k-127.0.0.1:36389): stderr llama_init_from_gpt_params: error: failed to load model '/models/gpt-3.5-turbo-16k'
api-1                 | 2:29PM INF [llama-ggml] Fails: could not load model: rpc error: code = Unknown desc = failed loading model
api-1                 | 2:29PM DBG GRPC(gpt-3.5-turbo-16k-127.0.0.1:36389): stderr load_binding_model: error: unable to load model
api-1                 | 2:29PM INF [gpt4all] Attempting to load
api-1                 | 2:29PM INF Loading model 'gpt-3.5-turbo-16k' with backend gpt4all
api-1                 | 2:29PM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | 2:29PM DBG Loading Model gpt-3.5-turbo-16k with gRPC (file: /models/gpt-3.5-turbo-16k) (backend: gpt4all): {backendString:gpt4all model:gpt-3.5-turbo-16k threads:8 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc00015bd48 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh openvoice:/build/backend/python/openvoice/run.sh parler-tts:/build/backend/python/parler-tts/run.sh petals:/build/backend/python/petals/run.sh rerankers:/build/backend/python/rerankers/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
api-1                 | 2:29PM DBG Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/gpt4all
api-1                 | 2:29PM DBG GRPC Service for gpt-3.5-turbo-16k will be running at: '127.0.0.1:46721'
api-1                 | 2:29PM DBG GRPC Service state dir: /tmp/go-processmanager2238751203
api-1                 | 2:29PM DBG GRPC Service Started
api-1                 | 2:29PM DBG GRPC(gpt-3.5-turbo-16k-127.0.0.1:46721): stderr 2024/06/23 14:29:00 gRPC Server listening at 127.0.0.1:46721
api-1                 | 2:29PM DBG GRPC Service Ready
api-1                 | 2:29PM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:<nil>} sizeCache:0 unknownFields:[] Model:gpt-3.5-turbo-16k ContextSize:512 Seed:197285243 NBatch:512 F16Memory:false MLock:false MMap:true VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:99999999 MainGPU: TensorSplit: Threads:8 LibrarySearchPath:/tmp/localai/backend_data/backend-assets/gpt4all RopeFreqBase:0 RopeFreqScale:0 RMSNormEps:0 NGQA:0 ModelFile:/models/gpt-3.5-turbo-16k Device: UseTriton:false ModelBaseName: UseFastTokenizer:false PipelineType: SchedulerType: CUDA:false CFGScale:0 IMG2IMG:false CLIPModel: CLIPSubfolder: CLIPSkip:0 ControlNet: Tokenizer: LoraBase: LoraAdapter: LoraScale:0 NoMulMatQ:false DraftModel: AudioPath: Quantization: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 MMProj: RopeScaling: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0 Type: FlashAttention:false NoKVOffload:false}
api-1                 | 2:29PM DBG GRPC(gpt-3.5-turbo-16k-127.0.0.1:46721): stderr load_model: error 'No such file or directory'
api-1                 | 2:29PM INF [gpt4all] Fails: could not load model: rpc error: code = Unknown desc = failed loading model
api-1                 | 2:29PM INF [llama-cpp-fallback] Attempting to load
api-1                 | 2:29PM INF Loading model 'gpt-3.5-turbo-16k' with backend llama-cpp-fallback
api-1                 | 2:29PM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | 2:29PM DBG Loading Model gpt-3.5-turbo-16k with gRPC (file: /models/gpt-3.5-turbo-16k) (backend: llama-cpp-fallback): {backendString:llama-cpp-fallback model:gpt-3.5-turbo-16k threads:8 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc00015bd48 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh openvoice:/build/backend/python/openvoice/run.sh parler-tts:/build/backend/python/parler-tts/run.sh petals:/build/backend/python/petals/run.sh rerankers:/build/backend/python/rerankers/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
api-1                 | 2:29PM DBG Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/llama-cpp-fallback
api-1                 | 2:29PM DBG GRPC Service for gpt-3.5-turbo-16k will be running at: '127.0.0.1:33945'
api-1                 | 2:29PM DBG GRPC Service state dir: /tmp/go-processmanager3858725189
api-1                 | 2:29PM DBG GRPC Service Started
api-1                 | 2:29PM DBG GRPC(gpt-3.5-turbo-16k-127.0.0.1:33945): stdout Server listening on 127.0.0.1:33945
api-1                 | 2:29PM DBG GRPC Service Ready
api-1                 | 2:29PM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:<nil>} sizeCache:0 unknownFields:[] Model:gpt-3.5-turbo-16k ContextSize:512 Seed:197285243 NBatch:512 F16Memory:false MLock:false MMap:true VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:99999999 MainGPU: TensorSplit: Threads:8 LibrarySearchPath:/tmp/localai/backend_data/backend-assets/gpt4all RopeFreqBase:0 RopeFreqScale:0 RMSNormEps:0 NGQA:0 ModelFile:/models/gpt-3.5-turbo-16k Device: UseTriton:false ModelBaseName: UseFastTokenizer:false PipelineType: SchedulerType: CUDA:false CFGScale:0 IMG2IMG:false CLIPModel: CLIPSubfolder: CLIPSkip:0 ControlNet: Tokenizer: LoraBase: LoraAdapter: LoraScale:0 NoMulMatQ:false DraftModel: AudioPath: Quantization: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 MMProj: RopeScaling: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0 Type: FlashAttention:false NoKVOffload:false}
api-1                 | 2:29PM DBG GRPC(gpt-3.5-turbo-16k-127.0.0.1:33945): stdout {"timestamp":1719152944,"level":"ERROR","function":"load_model","line":464,"message":"unable to load model","model":"/models/gpt-3.5-turbo-16k"}
api-1                 | 2:29PM DBG GRPC(gpt-3.5-turbo-16k-127.0.0.1:33945): stderr llama_model_load: error loading model: llama_model_loader: failed to load model from /models/gpt-3.5-turbo-16k
api-1                 | 2:29PM DBG GRPC(gpt-3.5-turbo-16k-127.0.0.1:33945): stderr 
api-1                 | 2:29PM DBG GRPC(gpt-3.5-turbo-16k-127.0.0.1:33945): stderr llama_load_model_from_file: failed to load model
api-1                 | 2:29PM DBG GRPC(gpt-3.5-turbo-16k-127.0.0.1:33945): stderr llama_init_from_gpt_params: error: failed to load model '/models/gpt-3.5-turbo-16k'
api-1                 | 2:29PM INF [llama-cpp-fallback] Fails: could not load model: rpc error: code = Canceled desc = 
api-1                 | 2:29PM INF [stablediffusion] Attempting to load
api-1                 | 2:29PM INF Loading model 'gpt-3.5-turbo-16k' with backend stablediffusion
api-1                 | 2:29PM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | 2:29PM DBG Loading Model gpt-3.5-turbo-16k with gRPC (file: /models/gpt-3.5-turbo-16k) (backend: stablediffusion): {backendString:stablediffusion model:gpt-3.5-turbo-16k threads:8 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc00015bd48 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh openvoice:/build/backend/python/openvoice/run.sh parler-tts:/build/backend/python/parler-tts/run.sh petals:/build/backend/python/petals/run.sh rerankers:/build/backend/python/rerankers/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
api-1                 | 2:29PM DBG Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/stablediffusion
api-1                 | 2:29PM DBG GRPC Service for gpt-3.5-turbo-16k will be running at: '127.0.0.1:44945'
api-1                 | 2:29PM DBG GRPC Service state dir: /tmp/go-processmanager2792440387
api-1                 | 2:29PM DBG GRPC Service Started
api-1                 | 2:29PM DBG GRPC(gpt-3.5-turbo-16k-127.0.0.1:44945): stderr 2024/06/23 14:29:04 gRPC Server listening at 127.0.0.1:44945
api-1                 | 2:29PM INF Success ip=127.0.0.1 latency="43.581µs" method=GET status=200 url=/readyz
api-1                 | 2:29PM DBG GRPC Service Ready
api-1                 | 2:29PM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:<nil>} sizeCache:0 unknownFields:[] Model:gpt-3.5-turbo-16k ContextSize:512 Seed:197285243 NBatch:512 F16Memory:false MLock:false MMap:true VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:99999999 MainGPU: TensorSplit: Threads:8 LibrarySearchPath:/tmp/localai/backend_data/backend-assets/gpt4all RopeFreqBase:0 RopeFreqScale:0 RMSNormEps:0 NGQA:0 ModelFile:/models/gpt-3.5-turbo-16k Device: UseTriton:false ModelBaseName: UseFastTokenizer:false PipelineType: SchedulerType: CUDA:false CFGScale:0 IMG2IMG:false CLIPModel: CLIPSubfolder: CLIPSkip:0 ControlNet: Tokenizer: LoraBase: LoraAdapter: LoraScale:0 NoMulMatQ:false DraftModel: AudioPath: Quantization: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 MMProj: RopeScaling: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0 Type: FlashAttention:false NoKVOffload:false}
api-1                 | 2:29PM INF [stablediffusion] Fails: could not load model: rpc error: code = Unknown desc = stat /models/gpt-3.5-turbo-16k: no such file or directory
api-1                 | 2:29PM INF [piper] Attempting to load
api-1                 | 2:29PM INF Loading model 'gpt-3.5-turbo-16k' with backend piper
api-1                 | 2:29PM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | 2:29PM DBG Loading Model gpt-3.5-turbo-16k with gRPC (file: /models/gpt-3.5-turbo-16k) (backend: piper): {backendString:piper model:gpt-3.5-turbo-16k threads:8 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc00015bd48 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh openvoice:/build/backend/python/openvoice/run.sh parler-tts:/build/backend/python/parler-tts/run.sh petals:/build/backend/python/petals/run.sh rerankers:/build/backend/python/rerankers/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
api-1                 | 2:29PM DBG Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/piper
api-1                 | 2:29PM DBG GRPC Service for gpt-3.5-turbo-16k will be running at: '127.0.0.1:37721'
api-1                 | 2:29PM DBG GRPC Service state dir: /tmp/go-processmanager3190342740
api-1                 | 2:29PM DBG GRPC Service Started
api-1                 | 2:29PM DBG GRPC(gpt-3.5-turbo-16k-127.0.0.1:37721): stderr 2024/06/23 14:29:06 gRPC Server listening at 127.0.0.1:37721
api-1                 | 2:29PM DBG GRPC Service Ready
api-1                 | 2:29PM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:<nil>} sizeCache:0 unknownFields:[] Model:gpt-3.5-turbo-16k ContextSize:512 Seed:197285243 NBatch:512 F16Memory:false MLock:false MMap:true VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:99999999 MainGPU: TensorSplit: Threads:8 LibrarySearchPath:/tmp/localai/backend_data/backend-assets/espeak-ng-data RopeFreqBase:0 RopeFreqScale:0 RMSNormEps:0 NGQA:0 ModelFile:/models/gpt-3.5-turbo-16k Device: UseTriton:false ModelBaseName: UseFastTokenizer:false PipelineType: SchedulerType: CUDA:false CFGScale:0 IMG2IMG:false CLIPModel: CLIPSubfolder: CLIPSkip:0 ControlNet: Tokenizer: LoraBase: LoraAdapter: LoraScale:0 NoMulMatQ:false DraftModel: AudioPath: Quantization: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 MMProj: RopeScaling: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0 Type: FlashAttention:false NoKVOffload:false}
api-1                 | 2:29PM INF [piper] Fails: could not load model: rpc error: code = Unknown desc = unsupported model type /models/gpt-3.5-turbo-16k (should end with .onnx)
api-1                 | 2:29PM INF [rwkv] Attempting to load
api-1                 | 2:29PM INF Loading model 'gpt-3.5-turbo-16k' with backend rwkv
api-1                 | 2:29PM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | 2:29PM DBG Loading Model gpt-3.5-turbo-16k with gRPC (file: /models/gpt-3.5-turbo-16k) (backend: rwkv): {backendString:rwkv model:gpt-3.5-turbo-16k threads:8 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc00015bd48 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh openvoice:/build/backend/python/openvoice/run.sh parler-tts:/build/backend/python/parler-tts/run.sh petals:/build/backend/python/petals/run.sh rerankers:/build/backend/python/rerankers/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
api-1                 | 2:29PM DBG Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/rwkv
api-1                 | 2:29PM DBG GRPC Service for gpt-3.5-turbo-16k will be running at: '127.0.0.1:34281'
api-1                 | 2:29PM DBG GRPC Service state dir: /tmp/go-processmanager1981478481
api-1                 | 2:29PM DBG GRPC Service Started
api-1                 | 2:29PM DBG GRPC(gpt-3.5-turbo-16k-127.0.0.1:34281): stderr 2024/06/23 14:29:08 gRPC Server listening at 127.0.0.1:34281
api-1                 | 2:29PM DBG GRPC Service Ready
api-1                 | 2:29PM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:<nil>} sizeCache:0 unknownFields:[] Model:gpt-3.5-turbo-16k ContextSize:512 Seed:197285243 NBatch:512 F16Memory:false MLock:false MMap:true VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:99999999 MainGPU: TensorSplit: Threads:8 LibrarySearchPath:/tmp/localai/backend_data/backend-assets/espeak-ng-data RopeFreqBase:0 RopeFreqScale:0 RMSNormEps:0 NGQA:0 ModelFile:/models/gpt-3.5-turbo-16k Device: UseTriton:false ModelBaseName: UseFastTokenizer:false PipelineType: SchedulerType: CUDA:false CFGScale:0 IMG2IMG:false CLIPModel: CLIPSubfolder: CLIPSkip:0 ControlNet: Tokenizer: LoraBase: LoraAdapter: LoraScale:0 NoMulMatQ:false DraftModel: AudioPath: Quantization: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 MMProj: RopeScaling: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0 Type: FlashAttention:false NoKVOffload:false}
api-1                 | 2:29PM DBG GRPC(gpt-3.5-turbo-16k-127.0.0.1:34281): stderr Failed to open file /models/gpt-3.5-turbo-16k
api-1                 | 2:29PM DBG GRPC(gpt-3.5-turbo-16k-127.0.0.1:34281): stderr /build/sources/go-rwkv.cpp/rwkv.cpp/rwkv_model_loading.inc:155: file.file
api-1                 | 2:29PM DBG GRPC(gpt-3.5-turbo-16k-127.0.0.1:34281): stderr 
api-1                 | 2:29PM DBG GRPC(gpt-3.5-turbo-16k-127.0.0.1:34281): stderr /build/sources/go-rwkv.cpp/rwkv.cpp/rwkv.cpp:63: rwkv_load_model_from_file(file_path, *ctx->model)
api-1                 | 2:29PM DBG GRPC(gpt-3.5-turbo-16k-127.0.0.1:34281): stderr 2024/06/23 14:29:10 InitFromFile /models/gpt-3.5-turbo-16k failed
api-1                 | 2:29PM INF [rwkv] Fails: could not load model: rpc error: code = Unavailable desc = error reading from server: EOF
api-1                 | 2:29PM INF [whisper] Attempting to load
api-1                 | 2:29PM INF Loading model 'gpt-3.5-turbo-16k' with backend whisper
api-1                 | 2:29PM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | 2:29PM DBG Loading Model gpt-3.5-turbo-16k with gRPC (file: /models/gpt-3.5-turbo-16k) (backend: whisper): {backendString:whisper model:gpt-3.5-turbo-16k threads:8 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc00015bd48 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh openvoice:/build/backend/python/openvoice/run.sh parler-tts:/build/backend/python/parler-tts/run.sh petals:/build/backend/python/petals/run.sh rerankers:/build/backend/python/rerankers/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
api-1                 | 2:29PM DBG Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/whisper
api-1                 | 2:29PM DBG GRPC Service for gpt-3.5-turbo-16k will be running at: '127.0.0.1:46099'
api-1                 | 2:29PM DBG GRPC Service state dir: /tmp/go-processmanager1854948697
api-1                 | 2:29PM DBG GRPC Service Started
api-1                 | 2:29PM DBG GRPC(gpt-3.5-turbo-16k-127.0.0.1:46099): stderr 2024/06/23 14:29:10 gRPC Server listening at 127.0.0.1:46099
api-1                 | 2:29PM DBG GRPC Service Ready
api-1                 | 2:29PM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:<nil>} sizeCache:0 unknownFields:[] Model:gpt-3.5-turbo-16k ContextSize:512 Seed:197285243 NBatch:512 F16Memory:false MLock:false MMap:true VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:99999999 MainGPU: TensorSplit: Threads:8 LibrarySearchPath:/tmp/localai/backend_data/backend-assets/espeak-ng-data RopeFreqBase:0 RopeFreqScale:0 RMSNormEps:0 NGQA:0 ModelFile:/models/gpt-3.5-turbo-16k Device: UseTriton:false ModelBaseName: UseFastTokenizer:false PipelineType: SchedulerType: CUDA:false CFGScale:0 IMG2IMG:false CLIPModel: CLIPSubfolder: CLIPSkip:0 ControlNet: Tokenizer: LoraBase: LoraAdapter: LoraScale:0 NoMulMatQ:false DraftModel: AudioPath: Quantization: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 MMProj: RopeScaling: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0 Type: FlashAttention:false NoKVOffload:false}
api-1                 | 2:29PM INF [whisper] Fails: could not load model: rpc error: code = Unknown desc = stat /models/gpt-3.5-turbo-16k: no such file or directory
api-1                 | 2:29PM INF [huggingface] Attempting to load
api-1                 | 2:29PM INF Loading model 'gpt-3.5-turbo-16k' with backend huggingface
api-1                 | 2:29PM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | 2:29PM DBG Loading Model gpt-3.5-turbo-16k with gRPC (file: /models/gpt-3.5-turbo-16k) (backend: huggingface): {backendString:huggingface model:gpt-3.5-turbo-16k threads:8 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc00015bd48 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh openvoice:/build/backend/python/openvoice/run.sh parler-tts:/build/backend/python/parler-tts/run.sh petals:/build/backend/python/petals/run.sh rerankers:/build/backend/python/rerankers/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
api-1                 | 2:29PM DBG Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/huggingface
api-1                 | 2:29PM DBG GRPC Service for gpt-3.5-turbo-16k will be running at: '127.0.0.1:37915'
api-1                 | 2:29PM DBG GRPC Service state dir: /tmp/go-processmanager2149354598
api-1                 | 2:29PM DBG GRPC Service Started
api-1                 | 2:29PM DBG GRPC(gpt-3.5-turbo-16k-127.0.0.1:37915): stderr 2024/06/23 14:29:12 gRPC Server listening at 127.0.0.1:37915
api-1                 | 2:29PM DBG GRPC Service Ready
api-1                 | 2:29PM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:<nil>} sizeCache:0 unknownFields:[] Model:gpt-3.5-turbo-16k ContextSize:512 Seed:197285243 NBatch:512 F16Memory:false MLock:false MMap:true VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:99999999 MainGPU: TensorSplit: Threads:8 LibrarySearchPath:/tmp/localai/backend_data/backend-assets/espeak-ng-data RopeFreqBase:0 RopeFreqScale:0 RMSNormEps:0 NGQA:0 ModelFile:/models/gpt-3.5-turbo-16k Device: UseTriton:false ModelBaseName: UseFastTokenizer:false PipelineType: SchedulerType: CUDA:false CFGScale:0 IMG2IMG:false CLIPModel: CLIPSubfolder: CLIPSkip:0 ControlNet: Tokenizer: LoraBase: LoraAdapter: LoraScale:0 NoMulMatQ:false DraftModel: AudioPath: Quantization: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 MMProj: RopeScaling: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0 Type: FlashAttention:false NoKVOffload:false}
api-1                 | 2:29PM INF [huggingface] Fails: could not load model: rpc error: code = Unknown desc = no huggingface token provided
api-1                 | 2:29PM INF [bert-embeddings] Attempting to load
api-1                 | 2:29PM INF Loading model 'gpt-3.5-turbo-16k' with backend bert-embeddings
api-1                 | 2:29PM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | 2:29PM DBG Loading Model gpt-3.5-turbo-16k with gRPC (file: /models/gpt-3.5-turbo-16k) (backend: bert-embeddings): {backendString:bert-embeddings model:gpt-3.5-turbo-16k threads:8 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc00015bd48 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh openvoice:/build/backend/python/openvoice/run.sh parler-tts:/build/backend/python/parler-tts/run.sh petals:/build/backend/python/petals/run.sh rerankers:/build/backend/python/rerankers/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
api-1                 | 2:29PM DBG Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/bert-embeddings
api-1                 | 2:29PM DBG GRPC Service for gpt-3.5-turbo-16k will be running at: '127.0.0.1:40355'
api-1                 | 2:29PM DBG GRPC Service state dir: /tmp/go-processmanager737712383
api-1                 | 2:29PM DBG GRPC Service Started
api-1                 | 2:29PM DBG GRPC(gpt-3.5-turbo-16k-127.0.0.1:40355): stderr 2024/06/23 14:29:14 gRPC Server listening at 127.0.0.1:40355
api-1                 | 2:29PM DBG GRPC Service Ready
api-1                 | 2:29PM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:<nil>} sizeCache:0 unknownFields:[] Model:gpt-3.5-turbo-16k ContextSize:512 Seed:197285243 NBatch:512 F16Memory:false MLock:false MMap:true VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:99999999 MainGPU: TensorSplit: Threads:8 LibrarySearchPath:/tmp/localai/backend_data/backend-assets/espeak-ng-data RopeFreqBase:0 RopeFreqScale:0 RMSNormEps:0 NGQA:0 ModelFile:/models/gpt-3.5-turbo-16k Device: UseTriton:false ModelBaseName: UseFastTokenizer:false PipelineType: SchedulerType: CUDA:false CFGScale:0 IMG2IMG:false CLIPModel: CLIPSubfolder: CLIPSkip:0 ControlNet: Tokenizer: LoraBase: LoraAdapter: LoraScale:0 NoMulMatQ:false DraftModel: AudioPath: Quantization: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 MMProj: RopeScaling: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0 Type: FlashAttention:false NoKVOffload:false}
api-1                 | 2:29PM DBG GRPC(gpt-3.5-turbo-16k-127.0.0.1:40355): stderr bert_load_from_file: failed to open '/models/gpt-3.5-turbo-16k'
api-1                 | 2:29PM DBG GRPC(gpt-3.5-turbo-16k-127.0.0.1:40355): stderr bert_bootstrap: failed to load model from '/models/gpt-3.5-turbo-16k'
api-1                 | 2:29PM INF [bert-embeddings] Fails: could not load model: rpc error: code = Unknown desc = failed loading model
api-1                 | 2:29PM INF [/build/backend/python/coqui/run.sh] Attempting to load
api-1                 | 2:29PM INF Loading model 'gpt-3.5-turbo-16k' with backend /build/backend/python/coqui/run.sh
api-1                 | 2:29PM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | 2:29PM DBG Loading Model gpt-3.5-turbo-16k with gRPC (file: /models/gpt-3.5-turbo-16k) (backend: /build/backend/python/coqui/run.sh): {backendString:/build/backend/python/coqui/run.sh model:gpt-3.5-turbo-16k threads:8 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc00015bd48 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh openvoice:/build/backend/python/openvoice/run.sh parler-tts:/build/backend/python/parler-tts/run.sh petals:/build/backend/python/petals/run.sh rerankers:/build/backend/python/rerankers/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
api-1                 | 2:29PM INF [/build/backend/python/coqui/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/coqui/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
api-1                 | 2:29PM INF [/build/backend/python/parler-tts/run.sh] Attempting to load
api-1                 | 2:29PM INF Loading model 'gpt-3.5-turbo-16k' with backend /build/backend/python/parler-tts/run.sh
api-1                 | 2:29PM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | 2:29PM DBG Loading Model gpt-3.5-turbo-16k with gRPC (file: /models/gpt-3.5-turbo-16k) (backend: /build/backend/python/parler-tts/run.sh): {backendString:/build/backend/python/parler-tts/run.sh model:gpt-3.5-turbo-16k threads:8 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc00015bd48 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh openvoice:/build/backend/python/openvoice/run.sh parler-tts:/build/backend/python/parler-tts/run.sh petals:/build/backend/python/petals/run.sh rerankers:/build/backend/python/rerankers/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
api-1                 | 2:29PM INF [/build/backend/python/parler-tts/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/parler-tts/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
api-1                 | 2:29PM INF [/build/backend/python/diffusers/run.sh] Attempting to load
api-1                 | 2:29PM INF Loading model 'gpt-3.5-turbo-16k' with backend /build/backend/python/diffusers/run.sh
api-1                 | 2:29PM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | 2:29PM DBG Loading Model gpt-3.5-turbo-16k with gRPC (file: /models/gpt-3.5-turbo-16k) (backend: /build/backend/python/diffusers/run.sh): {backendString:/build/backend/python/diffusers/run.sh model:gpt-3.5-turbo-16k threads:8 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc00015bd48 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh openvoice:/build/backend/python/openvoice/run.sh parler-tts:/build/backend/python/parler-tts/run.sh petals:/build/backend/python/petals/run.sh rerankers:/build/backend/python/rerankers/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
api-1                 | 2:29PM INF [/build/backend/python/diffusers/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/diffusers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
api-1                 | 2:29PM INF [/build/backend/python/petals/run.sh] Attempting to load
api-1                 | 2:29PM INF Loading model 'gpt-3.5-turbo-16k' with backend /build/backend/python/petals/run.sh
api-1                 | 2:29PM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | 2:29PM DBG Loading Model gpt-3.5-turbo-16k with gRPC (file: /models/gpt-3.5-turbo-16k) (backend: /build/backend/python/petals/run.sh): {backendString:/build/backend/python/petals/run.sh model:gpt-3.5-turbo-16k threads:8 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc00015bd48 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh openvoice:/build/backend/python/openvoice/run.sh parler-tts:/build/backend/python/parler-tts/run.sh petals:/build/backend/python/petals/run.sh rerankers:/build/backend/python/rerankers/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
api-1                 | 2:29PM INF [/build/backend/python/petals/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/petals/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
api-1                 | 2:29PM INF [/build/backend/python/transformers/run.sh] Attempting to load
api-1                 | 2:29PM INF Loading model 'gpt-3.5-turbo-16k' with backend /build/backend/python/transformers/run.sh
api-1                 | 2:29PM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | 2:29PM DBG Loading Model gpt-3.5-turbo-16k with gRPC (file: /models/gpt-3.5-turbo-16k) (backend: /build/backend/python/transformers/run.sh): {backendString:/build/backend/python/transformers/run.sh model:gpt-3.5-turbo-16k threads:8 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc00015bd48 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh openvoice:/build/backend/python/openvoice/run.sh parler-tts:/build/backend/python/parler-tts/run.sh petals:/build/backend/python/petals/run.sh rerankers:/build/backend/python/rerankers/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
api-1                 | 2:29PM INF [/build/backend/python/transformers/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/transformers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
api-1                 | 2:29PM INF [/build/backend/python/rerankers/run.sh] Attempting to load
api-1                 | 2:29PM INF Loading model 'gpt-3.5-turbo-16k' with backend /build/backend/python/rerankers/run.sh
api-1                 | 2:29PM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | 2:29PM DBG Loading Model gpt-3.5-turbo-16k with gRPC (file: /models/gpt-3.5-turbo-16k) (backend: /build/backend/python/rerankers/run.sh): {backendString:/build/backend/python/rerankers/run.sh model:gpt-3.5-turbo-16k threads:8 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc00015bd48 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh openvoice:/build/backend/python/openvoice/run.sh parler-tts:/build/backend/python/parler-tts/run.sh petals:/build/backend/python/petals/run.sh rerankers:/build/backend/python/rerankers/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
api-1                 | 2:29PM INF [/build/backend/python/rerankers/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/rerankers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
api-1                 | 2:29PM INF [/build/backend/python/vall-e-x/run.sh] Attempting to load
api-1                 | 2:29PM INF Loading model 'gpt-3.5-turbo-16k' with backend /build/backend/python/vall-e-x/run.sh
api-1                 | 2:29PM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | 2:29PM DBG Loading Model gpt-3.5-turbo-16k with gRPC (file: /models/gpt-3.5-turbo-16k) (backend: /build/backend/python/vall-e-x/run.sh): {backendString:/build/backend/python/vall-e-x/run.sh model:gpt-3.5-turbo-16k threads:8 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc00015bd48 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh openvoice:/build/backend/python/openvoice/run.sh parler-tts:/build/backend/python/parler-tts/run.sh petals:/build/backend/python/petals/run.sh rerankers:/build/backend/python/rerankers/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
api-1                 | 2:29PM INF [/build/backend/python/vall-e-x/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/vall-e-x/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
api-1                 | 2:29PM INF [/build/backend/python/exllama/run.sh] Attempting to load
api-1                 | 2:29PM INF Loading model 'gpt-3.5-turbo-16k' with backend /build/backend/python/exllama/run.sh
api-1                 | 2:29PM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | 2:29PM DBG Loading Model gpt-3.5-turbo-16k with gRPC (file: /models/gpt-3.5-turbo-16k) (backend: /build/backend/python/exllama/run.sh): {backendString:/build/backend/python/exllama/run.sh model:gpt-3.5-turbo-16k threads:8 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc00015bd48 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh openvoice:/build/backend/python/openvoice/run.sh parler-tts:/build/backend/python/parler-tts/run.sh petals:/build/backend/python/petals/run.sh rerankers:/build/backend/python/rerankers/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
api-1                 | 2:29PM INF [/build/backend/python/exllama/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/exllama/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
api-1                 | 2:29PM INF [/build/backend/python/openvoice/run.sh] Attempting to load
api-1                 | 2:29PM INF Loading model 'gpt-3.5-turbo-16k' with backend /build/backend/python/openvoice/run.sh
api-1                 | 2:29PM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | 2:29PM DBG Loading Model gpt-3.5-turbo-16k with gRPC (file: /models/gpt-3.5-turbo-16k) (backend: /build/backend/python/openvoice/run.sh): {backendString:/build/backend/python/openvoice/run.sh model:gpt-3.5-turbo-16k threads:8 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc00015bd48 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh openvoice:/build/backend/python/openvoice/run.sh parler-tts:/build/backend/python/parler-tts/run.sh petals:/build/backend/python/petals/run.sh rerankers:/build/backend/python/rerankers/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
api-1                 | 2:29PM INF [/build/backend/python/openvoice/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/openvoice/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
api-1                 | 2:29PM INF [/build/backend/python/sentencetransformers/run.sh] Attempting to load
api-1                 | 2:29PM INF Loading model 'gpt-3.5-turbo-16k' with backend /build/backend/python/sentencetransformers/run.sh
api-1                 | 2:29PM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | 2:29PM DBG Loading Model gpt-3.5-turbo-16k with gRPC (file: /models/gpt-3.5-turbo-16k) (backend: /build/backend/python/sentencetransformers/run.sh): {backendString:/build/backend/python/sentencetransformers/run.sh model:gpt-3.5-turbo-16k threads:8 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc00015bd48 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh openvoice:/build/backend/python/openvoice/run.sh parler-tts:/build/backend/python/parler-tts/run.sh petals:/build/backend/python/petals/run.sh rerankers:/build/backend/python/rerankers/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
api-1                 | 2:29PM INF [/build/backend/python/sentencetransformers/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/sentencetransformers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
api-1                 | 2:29PM INF [/build/backend/python/sentencetransformers/run.sh] Attempting to load
api-1                 | 2:29PM INF Loading model 'gpt-3.5-turbo-16k' with backend /build/backend/python/sentencetransformers/run.sh
api-1                 | 2:29PM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | 2:29PM DBG Loading Model gpt-3.5-turbo-16k with gRPC (file: /models/gpt-3.5-turbo-16k) (backend: /build/backend/python/sentencetransformers/run.sh): {backendString:/build/backend/python/sentencetransformers/run.sh model:gpt-3.5-turbo-16k threads:8 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc00015bd48 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh openvoice:/build/backend/python/openvoice/run.sh parler-tts:/build/backend/python/parler-tts/run.sh petals:/build/backend/python/petals/run.sh rerankers:/build/backend/python/rerankers/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
api-1                 | 2:29PM INF [/build/backend/python/sentencetransformers/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/sentencetransformers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
api-1                 | 2:29PM INF [/build/backend/python/bark/run.sh] Attempting to load
api-1                 | 2:29PM INF Loading model 'gpt-3.5-turbo-16k' with backend /build/backend/python/bark/run.sh
api-1                 | 2:29PM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | 2:29PM DBG Loading Model gpt-3.5-turbo-16k with gRPC (file: /models/gpt-3.5-turbo-16k) (backend: /build/backend/python/bark/run.sh): {backendString:/build/backend/python/bark/run.sh model:gpt-3.5-turbo-16k threads:8 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc00015bd48 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh openvoice:/build/backend/python/openvoice/run.sh parler-tts:/build/backend/python/parler-tts/run.sh petals:/build/backend/python/petals/run.sh rerankers:/build/backend/python/rerankers/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
api-1                 | 2:29PM INF [/build/backend/python/bark/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/bark/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
api-1                 | 2:29PM INF [/build/backend/python/mamba/run.sh] Attempting to load
api-1                 | 2:29PM INF Loading model 'gpt-3.5-turbo-16k' with backend /build/backend/python/mamba/run.sh
api-1                 | 2:29PM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | 2:29PM DBG Loading Model gpt-3.5-turbo-16k with gRPC (file: /models/gpt-3.5-turbo-16k) (backend: /build/backend/python/mamba/run.sh): {backendString:/build/backend/python/mamba/run.sh model:gpt-3.5-turbo-16k threads:8 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc00015bd48 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh openvoice:/build/backend/python/openvoice/run.sh parler-tts:/build/backend/python/parler-tts/run.sh petals:/build/backend/python/petals/run.sh rerankers:/build/backend/python/rerankers/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
api-1                 | 2:29PM INF [/build/backend/python/mamba/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/mamba/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
api-1                 | 2:29PM INF [/build/backend/python/autogptq/run.sh] Attempting to load
api-1                 | 2:29PM INF Loading model 'gpt-3.5-turbo-16k' with backend /build/backend/python/autogptq/run.sh
api-1                 | 2:29PM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | 2:29PM DBG Loading Model gpt-3.5-turbo-16k with gRPC (file: /models/gpt-3.5-turbo-16k) (backend: /build/backend/python/autogptq/run.sh): {backendString:/build/backend/python/autogptq/run.sh model:gpt-3.5-turbo-16k threads:8 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc00015bd48 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh openvoice:/build/backend/python/openvoice/run.sh parler-tts:/build/backend/python/parler-tts/run.sh petals:/build/backend/python/petals/run.sh rerankers:/build/backend/python/rerankers/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
api-1                 | 2:29PM INF [/build/backend/python/autogptq/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/autogptq/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
api-1                 | 2:29PM INF [/build/backend/python/vllm/run.sh] Attempting to load
api-1                 | 2:29PM INF Loading model 'gpt-3.5-turbo-16k' with backend /build/backend/python/vllm/run.sh
api-1                 | 2:29PM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | 2:29PM DBG Loading Model gpt-3.5-turbo-16k with gRPC (file: /models/gpt-3.5-turbo-16k) (backend: /build/backend/python/vllm/run.sh): {backendString:/build/backend/python/vllm/run.sh model:gpt-3.5-turbo-16k threads:8 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc00015bd48 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh openvoice:/build/backend/python/openvoice/run.sh parler-tts:/build/backend/python/parler-tts/run.sh petals:/build/backend/python/petals/run.sh rerankers:/build/backend/python/rerankers/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
api-1                 | 2:29PM INF [/build/backend/python/vllm/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/vllm/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
api-1                 | 2:29PM INF [/build/backend/python/exllama2/run.sh] Attempting to load
api-1                 | 2:29PM INF Loading model 'gpt-3.5-turbo-16k' with backend /build/backend/python/exllama2/run.sh
api-1                 | 2:29PM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | 2:29PM DBG Loading Model gpt-3.5-turbo-16k with gRPC (file: /models/gpt-3.5-turbo-16k) (backend: /build/backend/python/exllama2/run.sh): {backendString:/build/backend/python/exllama2/run.sh model:gpt-3.5-turbo-16k threads:8 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc00015bd48 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh openvoice:/build/backend/python/openvoice/run.sh parler-tts:/build/backend/python/parler-tts/run.sh petals:/build/backend/python/petals/run.sh rerankers:/build/backend/python/rerankers/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
api-1                 | 2:29PM INF [/build/backend/python/exllama2/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/exllama2/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
api-1                 | 2:29PM INF [/build/backend/python/transformers-musicgen/run.sh] Attempting to load
api-1                 | 2:29PM INF Loading model 'gpt-3.5-turbo-16k' with backend /build/backend/python/transformers-musicgen/run.sh
api-1                 | 2:29PM DBG Loading model in memory from file: /models/gpt-3.5-turbo-16k
api-1                 | 2:29PM DBG Loading Model gpt-3.5-turbo-16k with gRPC (file: /models/gpt-3.5-turbo-16k) (backend: /build/backend/python/transformers-musicgen/run.sh): {backendString:/build/backend/python/transformers-musicgen/run.sh model:gpt-3.5-turbo-16k threads:8 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc00015bd48 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh openvoice:/build/backend/python/openvoice/run.sh parler-tts:/build/backend/python/parler-tts/run.sh petals:/build/backend/python/petals/run.sh rerankers:/build/backend/python/rerankers/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
api-1                 | 2:29PM INF [/build/backend/python/transformers-musicgen/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/transformers-musicgen/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
api-1                 | 2:29PM ERR Server error error="could not load model - all backends returned error: [llama-cpp]: could not load model: rpc error: code = Canceled desc = \n[llama-ggml]: could not load model: rpc error: code = Unknown desc = failed loading model\n[gpt4all]: could not load model: rpc error: code = Unknown desc = failed loading model\n[llama-cpp-fallback]: could not load model: rpc error: code = Canceled desc = \n[stablediffusion]: could not load model: rpc error: code = Unknown desc = stat /models/gpt-3.5-turbo-16k: no such file or directory\n[piper]: could not load model: rpc error: code = Unknown desc = unsupported model type /models/gpt-3.5-turbo-16k (should end with .onnx)\n[rwkv]: could not load model: rpc error: code = Unavailable desc = error reading from server: EOF\n[whisper]: could not load model: rpc error: code = Unknown desc = stat /models/gpt-3.5-turbo-16k: no such file or directory\n[huggingface]: could not load model: rpc error: code = Unknown desc = no huggingface token provided\n[bert-embeddings]: could not load model: rpc error: code = Unknown desc = failed loading model\n[/build/backend/python/coqui/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/coqui/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/parler-tts/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/parler-tts/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/diffusers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/diffusers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/petals/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/petals/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/transformers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/transformers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/rerankers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/rerankers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/vall-e-x/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/vall-e-x/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/exllama/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/exllama/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/openvoice/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/openvoice/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/sentencetransformers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/sentencetransformers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/sentencetransformers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/sentencetransformers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/bark/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/bark/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/mamba/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/mamba/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/autogptq/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/autogptq/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/vllm/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/vllm/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/exllama2/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/exllama2/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/transformers-musicgen/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/transformers-musicgen/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS" ip=172.25.0.3 latency=20.126329723s method=POST status=500 url=/v1/chat/completions
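
For reference, if the fix ends up being to add the model rather than remove it from the bot, a minimal LocalAI model definition that makes the name resolvable could look like the sketch below. This is only a sketch: the backend choice, context size, and gguf filename are placeholders I picked for illustration, not values from this setup.

# /models/gpt-3.5-turbo-16k.yaml (hypothetical example)
name: gpt-3.5-turbo-16k            # must match the model name the bot requests
backend: llama-cpp                 # assumed backend; any chat-capable backend works
context_size: 16384                # the 16k context the model name implies
parameters:
  model: some-local-model.gguf     # placeholder: a gguf file actually present in /models

With a definition like this in place, a request for gpt-3.5-turbo-16k would be routed to the single configured backend instead of being probed against every backend in turn, which is what produces the wall of errors in the bot log below.
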
chatgpt_telegram_bot  | Something went wrong during completion. Reason: could not load model - all backends returned error: [llama-cpp]: could not load model: rpc error: code = Canceled desc = 
chatgpt_telegram_bot  | [llama-ggml]: could not load model: rpc error: code = Unknown desc = failed loading model
chatgpt_telegram_bot  | [gpt4all]: could not load model: rpc error: code = Unknown desc = failed loading model
chatgpt_telegram_bot  | [llama-cpp-fallback]: could not load model: rpc error: code = Canceled desc = 
chatgpt_telegram_bot  | [stablediffusion]: could not load model: rpc error: code = Unknown desc = stat /models/gpt-3.5-turbo-16k: no such file or directory
chatgpt_telegram_bot  | [piper]: could not load model: rpc error: code = Unknown desc = unsupported model type /models/gpt-3.5-turbo-16k (should end with .onnx)
chatgpt_telegram_bot  | [rwkv]: could not load model: rpc error: code = Unavailable desc = error reading from server: EOF
chatgpt_telegram_bot  | [whisper]: could not load model: rpc error: code = Unknown desc = stat /models/gpt-3.5-turbo-16k: no such file or directory
chatgpt_telegram_bot  | [huggingface]: could not load model: rpc error: code = Unknown desc = no huggingface token provided
chatgpt_telegram_bot  | [bert-embeddings]: could not load model: rpc error: code = Unknown desc = failed loading model
chatgpt_telegram_bot  | [/build/backend/python/coqui/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/coqui/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
chatgpt_telegram_bot  | [/build/backend/python/parler-tts/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/parler-tts/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
chatgpt_telegram_bot  | [/build/backend/python/diffusers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/diffusers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
chatgpt_telegram_bot  | [/build/backend/python/petals/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/petals/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
chatgpt_telegram_bot  | [/build/backend/python/transformers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/transformers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
chatgpt_telegram_bot  | [/build/backend/python/rerankers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/rerankers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
chatgpt_telegram_bot  | [/build/backend/python/vall-e-x/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/vall-e-x/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
chatgpt_telegram_bot  | [/build/backend/python/exllama/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/exllama/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
chatgpt_telegram_bot  | [/build/backend/python/openvoice/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/openvoice/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
chatgpt_telegram_bot  | [/build/backend/python/sentencetransformers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/sentencetransformers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
chatgpt_telegram_bot  | [/build/backend/python/sentencetransformers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/sentencetransformers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
chatgpt_telegram_bot  | [/build/backend/python/bark/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/bark/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
chatgpt_telegram_bot  | [/build/backend/python/mamba/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/mamba/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
chatgpt_telegram_bot  | [/build/backend/python/autogptq/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/autogptq/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
chatgpt_telegram_bot  | [/build/backend/python/vllm/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/vllm/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
chatgpt_telegram_bot  | [/build/backend/python/exllama2/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/exllama2/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
chatgpt_telegram_bot  | [/build/backend/python/transformers-musicgen/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/transformers-musicgen/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS {"error":{"code":500,"message":"could not load model - all backends returned error: [llama-cpp]: could not load model: rpc error: code = Canceled desc = \n[llama-ggml]: could not load model: rpc error: code = Unknown desc = failed loading model\n[gpt4all]: could not load model: rpc error: code = Unknown desc = failed loading model\n[llama-cpp-fallback]: could not load model: rpc error: code = Canceled desc = \n[stablediffusion]: could not load model: rpc error: code = Unknown desc = stat /models/gpt-3.5-turbo-16k: no such file or directory\n[piper]: could not load model: rpc error: code = Unknown desc = unsupported model type /models/gpt-3.5-turbo-16k (should end with .onnx)\n[rwkv]: could not load model: rpc error: code = Unavailable desc = error reading from server: EOF\n[whisper]: could not load model: rpc error: code = Unknown desc = stat /models/gpt-3.5-turbo-16k: no such file or directory\n[huggingface]: could not load model: rpc error: code = Unknown desc = no huggingface token provided\n[bert-embeddings]: could not load model: rpc error: code = Unknown desc = failed loading model\n[/build/backend/python/coqui/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/coqui/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/parler-tts/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/parler-tts/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/diffusers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/diffusers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/petals/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/petals/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/transformers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/transformers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/rerankers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/rerankers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/vall-e-x/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/vall-e-x/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/exllama/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/exllama/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/openvoice/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/openvoice/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/sentencetransformers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/sentencetransformers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/sentencetransformers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/sentencetransformers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/bark/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/bark/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/mamba/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/mamba/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/autogptq/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/autogptq/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/vllm/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/vllm/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/exllama2/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/exllama2/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/transformers-musicgen/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/transformers-musicgen/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS","type":""}} 500 {'error': {'code': 500, 'message': 'could not load model - all backends returned error: [llama-cpp]: could not load model: rpc error: code = Canceled desc = \n[llama-ggml]: could not load model: rpc error: code = Unknown desc = failed loading model\n[gpt4all]: could not load model: rpc error: code = Unknown desc = failed loading model\n[llama-cpp-fallback]: could not load model: rpc error: code = Canceled desc = \n[stablediffusion]: could not load model: rpc error: code = Unknown desc = stat /models/gpt-3.5-turbo-16k: no such file or directory\n[piper]: could not load model: rpc error: code = Unknown desc = unsupported model type /models/gpt-3.5-turbo-16k (should end with .onnx)\n[rwkv]: could not load model: rpc error: code = Unavailable desc = error reading from server: EOF\n[whisper]: could not load model: rpc error: code = Unknown desc = stat /models/gpt-3.5-turbo-16k: no such file or directory\n[huggingface]: could not load model: rpc error: code = Unknown desc = no huggingface token provided\n[bert-embeddings]: could not load model: rpc error: code = Unknown desc = failed loading model\n[/build/backend/python/coqui/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/coqui/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/parler-tts/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/parler-tts/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/diffusers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/diffusers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/petals/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/petals/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/transformers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/transformers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/rerankers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/rerankers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/vall-e-x/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/vall-e-x/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/exllama/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/exllama/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/openvoice/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/openvoice/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/sentencetransformers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/sentencetransformers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/sentencetransformers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/sentencetransformers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/bark/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/bark/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/mamba/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/mamba/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/autogptq/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/autogptq/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/vllm/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/vllm/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/exllama2/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/exllama2/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/transformers-musicgen/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/transformers-musicgen/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS', 'type': ''}} <CIMultiDictProxy('Date': 'Sun, 23 Jun 2024 14:29:16 GMT', 'Content-Type': 'application/json', 'Content-Length': '4983')>
chatgpt_telegram_bot  | Exception while handling an update:
chatgpt_telegram_bot  | Traceback (most recent call last):
chatgpt_telegram_bot  |   File "/code/bot/bot.py", line 351, in message_handle_fn
chatgpt_telegram_bot  |     answer, (n_input_tokens, n_output_tokens), n_first_dialog_messages_removed = await chatgpt_instance.send_message(
chatgpt_telegram_bot  |                                                                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
chatgpt_telegram_bot  |   File "/code/bot/openai_utils.py", line 40, in send_message
chatgpt_telegram_bot  |     r = await openai.ChatCompletion.acreate(
chatgpt_telegram_bot  |         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
chatgpt_telegram_bot  |   File "/usr/local/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 45, in acreate
chatgpt_telegram_bot  |     return await super().acreate(*args, **kwargs)
chatgpt_telegram_bot  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
chatgpt_telegram_bot  |   File "/usr/local/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 217, in acreate
chatgpt_telegram_bot  |     response, _, api_key = await requestor.arequest(
chatgpt_telegram_bot  |                            ^^^^^^^^^^^^^^^^^^^^^^^^^
chatgpt_telegram_bot  |   File "/usr/local/lib/python3.11/site-packages/openai/api_requestor.py", line 382, in arequest
chatgpt_telegram_bot  |     resp, got_stream = await self._interpret_async_response(result, stream)
chatgpt_telegram_bot  |                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
chatgpt_telegram_bot  |   File "/usr/local/lib/python3.11/site-packages/openai/api_requestor.py", line 728, in _interpret_async_response
chatgpt_telegram_bot  |     self._interpret_response_line(
chatgpt_telegram_bot  |   File "/usr/local/lib/python3.11/site-packages/openai/api_requestor.py", line 765, in _interpret_response_line
chatgpt_telegram_bot  |     raise self.handle_error_response(
chatgpt_telegram_bot  | openai.error.APIError: could not load model - all backends returned error: [llama-cpp]: could not load model: rpc error: code = Canceled desc = 
chatgpt_telegram_bot  | [llama-ggml]: could not load model: rpc error: code = Unknown desc = failed loading model
chatgpt_telegram_bot  | [gpt4all]: could not load model: rpc error: code = Unknown desc = failed loading model
chatgpt_telegram_bot  | [llama-cpp-fallback]: could not load model: rpc error: code = Canceled desc = 
chatgpt_telegram_bot  | [stablediffusion]: could not load model: rpc error: code = Unknown desc = stat /models/gpt-3.5-turbo-16k: no such file or directory
chatgpt_telegram_bot  | [piper]: could not load model: rpc error: code = Unknown desc = unsupported model type /models/gpt-3.5-turbo-16k (should end with .onnx)
chatgpt_telegram_bot  | [rwkv]: could not load model: rpc error: code = Unavailable desc = error reading from server: EOF
chatgpt_telegram_bot  | [whisper]: could not load model: rpc error: code = Unknown desc = stat /models/gpt-3.5-turbo-16k: no such file or directory
chatgpt_telegram_bot  | [huggingface]: could not load model: rpc error: code = Unknown desc = no huggingface token provided
chatgpt_telegram_bot  | [bert-embeddings]: could not load model: rpc error: code = Unknown desc = failed loading model
chatgpt_telegram_bot  | [/build/backend/python/coqui/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/coqui/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
chatgpt_telegram_bot  | [/build/backend/python/parler-tts/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/parler-tts/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
chatgpt_telegram_bot  | [/build/backend/python/diffusers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/diffusers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
chatgpt_telegram_bot  | [/build/backend/python/petals/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/petals/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
chatgpt_telegram_bot  | [/build/backend/python/transformers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/transformers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
chatgpt_telegram_bot  | [/build/backend/python/rerankers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/rerankers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
chatgpt_telegram_bot  | [/build/backend/python/vall-e-x/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/vall-e-x/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
chatgpt_telegram_bot  | [/build/backend/python/exllama/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/exllama/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
chatgpt_telegram_bot  | [/build/backend/python/openvoice/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/openvoice/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
chatgpt_telegram_bot  | [/build/backend/python/sentencetransformers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/sentencetransformers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
chatgpt_telegram_bot  | [/build/backend/python/sentencetransformers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/sentencetransformers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
chatgpt_telegram_bot  | [/build/backend/python/bark/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/bark/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
chatgpt_telegram_bot  | [/build/backend/python/mamba/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/mamba/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
chatgpt_telegram_bot  | [/build/backend/python/autogptq/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/autogptq/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
chatgpt_telegram_bot  | [/build/backend/python/vllm/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/vllm/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
chatgpt_telegram_bot  | [/build/backend/python/exllama2/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/exllama2/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
chatgpt_telegram_bot  | [/build/backend/python/transformers-musicgen/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/transformers-musicgen/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS {"error":{"code":500,"message":"could not load model - all backends returned error: [same per-backend errors as listed above]","type":""}} 500 {'error': {'code': 500, 'message': 'could not load model - all backends returned error: [same per-backend errors as listed above]', 'type': ''}} <CIMultiDictProxy('Date': 'Sun, 23 Jun 2024 14:29:16 GMT', 'Content-Type': 'application/json', 'Content-Length': '4983')>
chatgpt_telegram_bot  | 
chatgpt_telegram_bot  | During handling of the above exception, another exception occurred:
chatgpt_telegram_bot  | 
chatgpt_telegram_bot  | Traceback (most recent call last):
chatgpt_telegram_bot  |   File "/usr/local/lib/python3.11/site-packages/telegram/ext/_application.py", line 1104, in process_update
chatgpt_telegram_bot  |     await coroutine
chatgpt_telegram_bot  |   File "/usr/local/lib/python3.11/site-packages/telegram/ext/_handler.py", line 141, in handle_update
chatgpt_telegram_bot  |     return await self.callback(update, context)
chatgpt_telegram_bot  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
chatgpt_telegram_bot  |   File "/code/bot/bot.py", line 418, in message_handle
chatgpt_telegram_bot  |     await task
chatgpt_telegram_bot  |   File "/code/bot/bot.py", line 402, in message_handle_fn
chatgpt_telegram_bot  |     await update.message.reply_text(error_text)
chatgpt_telegram_bot  |   File "/usr/local/lib/python3.11/site-packages/telegram/_message.py", line 1041, in reply_text
chatgpt_telegram_bot  |     return await self.get_bot().send_message(
chatgpt_telegram_bot  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
chatgpt_telegram_bot  |   File "/usr/local/lib/python3.11/site-packages/telegram/ext/_extbot.py", line 2598, in send_message
chatgpt_telegram_bot  |     return await super().send_message(
chatgpt_telegram_bot  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^
chatgpt_telegram_bot  |   File "/usr/local/lib/python3.11/site-packages/telegram/_bot.py", line 331, in decorator
chatgpt_telegram_bot  |     result = await func(*args, **kwargs)  # skipcq: PYL-E1102
chatgpt_telegram_bot  |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^
chatgpt_telegram_bot  |   File "/usr/local/lib/python3.11/site-packages/telegram/_bot.py", line 760, in send_message
chatgpt_telegram_bot  |     return await self._send_message(
chatgpt_telegram_bot  |            ^^^^^^^^^^^^^^^^^^^^^^^^^
chatgpt_telegram_bot  |   File "/usr/local/lib/python3.11/site-packages/telegram/ext/_extbot.py", line 488, in _send_message
chatgpt_telegram_bot  |     result = await super()._send_message(
chatgpt_telegram_bot  |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
chatgpt_telegram_bot  |   File "/usr/local/lib/python3.11/site-packages/telegram/_bot.py", line 512, in _send_message
chatgpt_telegram_bot  |     result = await self._post(
chatgpt_telegram_bot  |              ^^^^^^^^^^^^^^^^^
chatgpt_telegram_bot  |   File "/usr/local/lib/python3.11/site-packages/telegram/_bot.py", line 419, in _post
chatgpt_telegram_bot  |     return await self._do_post(
chatgpt_telegram_bot  |            ^^^^^^^^^^^^^^^^^^^^
chatgpt_telegram_bot  |   File "/usr/local/lib/python3.11/site-packages/telegram/ext/_extbot.py", line 326, in _do_post
chatgpt_telegram_bot  |     return await self.rate_limiter.process_request(
chatgpt_telegram_bot  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
chatgpt_telegram_bot  |   File "/usr/local/lib/python3.11/site-packages/telegram/ext/_aioratelimiter.py", line 247, in process_request
chatgpt_telegram_bot  |     return await self._run_request(
chatgpt_telegram_bot  |            ^^^^^^^^^^^^^^^^^^^^^^^^
chatgpt_telegram_bot  |   File "/usr/local/lib/python3.11/site-packages/telegram/ext/_aioratelimiter.py", line 203, in _run_request
chatgpt_telegram_bot  |     return await callback(*args, **kwargs)
chatgpt_telegram_bot  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
chatgpt_telegram_bot  |   File "/usr/local/lib/python3.11/site-packages/telegram/_bot.py", line 450, in _do_post
chatgpt_telegram_bot  |     return await request.post(
chatgpt_telegram_bot  |            ^^^^^^^^^^^^^^^^^^^
chatgpt_telegram_bot  |   File "/usr/local/lib/python3.11/site-packages/telegram/request/_baserequest.py", line 165, in post
chatgpt_telegram_bot  |     result = await self._request_wrapper(
chatgpt_telegram_bot  |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
chatgpt_telegram_bot  |   File "/usr/local/lib/python3.11/site-packages/telegram/request/_baserequest.py", line 328, in _request_wrapper
chatgpt_telegram_bot  |     raise BadRequest(message)
chatgpt_telegram_bot  | telegram.error.BadRequest: Message is too long
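
The second traceback is a follow-on failure: after the completion request fails, the bot tries to relay LocalAI's full error text back to Telegram, and Telegram's Bot API rejects messages longer than 4096 characters, hence the final `telegram.error.BadRequest: Message is too long`. A minimal sketch of a guard the bot could add, reusing the `error_text`/`update` names visible in the traceback above (the helper name and exact call site are hypothetical):

```python
# Hypothetical guard for the bot's error path: truncate error_text before
# calling update.message.reply_text(), since Telegram's Bot API rejects
# messages over 4096 characters. Helper name and wiring are illustrative.
TELEGRAM_MAX_MESSAGE_LENGTH = 4096

async def reply_with_error(update, error_text: str) -> None:
    if len(error_text) > TELEGRAM_MAX_MESSAGE_LENGTH:
        # Keep the head of the message, which carries the HTTP status and cause
        error_text = error_text[: TELEGRAM_MAX_MESSAGE_LENGTH - 1] + "…"
    await update.message.reply_text(error_text)
```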
greygoo commented 4 months ago

The requests for the two models look different: for GPT-4 there is no model in the request, while for GPT-16K there is. So these might be different problems; I'll see if I can figure out why one has no model in it.
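
One way to narrow this down is to hit LocalAI's OpenAI-compatible endpoints directly and compare with what the bot sends. A reproduction sketch (the base URL assumes LocalAI's default port and is an assumption about this setup):

```python
# Reproduction sketch: ask LocalAI which model names it knows, then replay
# the chat request the bot sends when the GPT-16K model is selected.
# BASE_URL is an assumption (LocalAI's default port); adjust as needed.
import requests

BASE_URL = "http://localhost:8080"

# List the models LocalAI has configured
print(requests.get(f"{BASE_URL}/v1/models").json())

# Replay the bot's request for the 16K model
resp = requests.post(
    f"{BASE_URL}/v1/chat/completions",
    json={
        "model": "gpt-3.5-turbo-16k",
        "messages": [{"role": "user", "content": "why can't pigs fly?"}],
    },
)
print(resp.status_code, resp.text)
```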

greygoo commented 4 months ago

Was clicking around some more and ran into https://github.com/mudler/LocalAI/issues/2639. It looks like GPT-4 model access needs to be fixed first, so I'm removing that part from this report and creating a new issue so GPT-4 can be enabled. I'll modify the PR to only remove GPT-16K.

greygoo commented 4 months ago

Updated the PR; it now only removes the GPT-16K model from the /settings menu.