mudler / LocalAI

:robot: The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. Features: Generate Text, Audio, Video, Images, Voice Cloning, Distributed inference
https://localai.io
MIT License

When using the llama embedding model, got 500 error #1198

Open netandreus opened 11 months ago

netandreus commented 11 months ago

LocalAI version:

commit 8034ed3473fb1c8c6f5e3864933c442b377be52e (HEAD -> master, origin/master, origin/HEAD)
Author: Jesús Espino <jespinog@gmail.com>
Date:   Sun Oct 15 09:17:41 2023 +0200

Environment, CPU architecture, OS, and Version: Apple M2 Max, macOS (per the Metal initialization in the logs below).

Describe the bug: Trying to use llama embeddings with the following model and config:

model: https://huggingface.co/TheBloke/Llama-2-7B-GGUF/resolve/main/llama-2-7b.Q4_0.gguf

./models/text-embedding-ada-002.yaml

f16: true
gpu_layers: 1
name: text-embedding-ada-002
backend: llama
embeddings: true
parameters:
  model: llama-2-7b.Q4_0.gguf
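
To reproduce, the referenced GGUF needs to be present in the models directory; a sketch of fetching it, with the ./models path assumed from the config and logs in this report:

wget -P ./models https://huggingface.co/TheBloke/Llama-2-7B-GGUF/resolve/main/llama-2-7b.Q4_0.gguf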

Client side (request and response):

(base) andrey@m2 ~ % curl http://localhost:8080/v1/embeddings -H "Content-Type: application/json" -d '{
    "model": "text-embedding-ada-002",
    "input": "Test"
}'
{"error":{"code":500,"message":"rpc error: code = ResourceExhausted desc = grpc: received message larger than max (400000002 vs. 4194304)","type":""}}%

Server side:

(base) andrey@m2 current % ./local-ai --debug
6:18PM DBG no galleries to load
6:18PM INF Starting LocalAI using 4 threads, with models path: /Users/andrey/sandbox/llm/current/models
6:18PM INF LocalAI version: v1.30.0 (274ace289823a8bacb7b4987b5c961b62d5eee99)
6:18PM DBG Model: gpt-3.5-turbo (config: {PredictionOptions:{Model:falcon-7b-instruct-q4_0.gguf Language: N:0 TopP:0.65 TopK:40 Temperature:0.9 Maxtokens:0 Echo:false Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:gpt-3.5-turbo F16:true Threads:0 Debug:false Roles:map[] Embeddings:true Backend: TemplateConfig:{Chat: ChatMessage: Completion: Edit: Functions:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:1 MMap:false MMlock:false LowVRAM:false Grammar: StopWords:[] Cutstrings:[] TrimSpace:[] ContextSize:2000 NUMA:false LoraAdapter: LoraBase: NoMulMatQ:false DraftModel: NDraft:0 Quantization:} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{PipelineType: SchedulerType: CUDA:false EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} VallE:{AudioPath:}})
6:18PM DBG Model: text-embedding-ada-002 (config: {PredictionOptions:{Model:llama-2-7b.Q4_0.gguf Language: N:0 TopP:0 TopK:0 Temperature:0 Maxtokens:0 Echo:false Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:text-embedding-ada-002 F16:true Threads:0 Debug:false Roles:map[] Embeddings:true Backend:llama TemplateConfig:{Chat: ChatMessage: Completion: Edit: Functions:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:1 MMap:false MMlock:false LowVRAM:false Grammar: StopWords:[] Cutstrings:[] TrimSpace:[] ContextSize:0 NUMA:false LoraAdapter: LoraBase: NoMulMatQ:false DraftModel: NDraft:0 Quantization:} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{PipelineType: SchedulerType: CUDA:false EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} VallE:{AudioPath:}})
6:18PM DBG Model: text-embedding (config: {PredictionOptions:{Model:ggml-model-q4_0.bin Language: N:0 TopP:0 TopK:0 Temperature:0 Maxtokens:0 Echo:false Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:text-embedding F16:true Threads:0 Debug:false Roles:map[] Embeddings:true Backend:bert-embeddings TemplateConfig:{Chat: ChatMessage: Completion: Edit: Functions:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:1 MMap:false MMlock:false LowVRAM:false Grammar: StopWords:[] Cutstrings:[] TrimSpace:[] ContextSize:0 NUMA:false LoraAdapter: LoraBase: NoMulMatQ:false DraftModel: NDraft:0 Quantization:} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{PipelineType: SchedulerType: CUDA:false EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} VallE:{AudioPath:}})
6:18PM DBG Extracting backend assets files to /tmp/localai/backend_data

 ┌───────────────────────────────────────────────────┐
 │                   Fiber v2.49.2                   │
 │               http://127.0.0.1:8080               │
 │       (bound on host 0.0.0.0 and port 8080)       │
 │                                                   │
 │ Handlers ............ 71  Processes ........... 1 │
 │ Prefork ....... Disabled  PID .............. 6260 │
 └───────────────────────────────────────────────────┘

6:19PM DBG Request received:
6:19PM DBG Parameter Config: &{PredictionOptions:{Model:llama-2-7b.Q4_0.gguf Language: N:0 TopP:0 TopK:0 Temperature:0 Maxtokens:0 Echo:false Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:text-embedding-ada-002 F16:true Threads:4 Debug:true Roles:map[] Embeddings:true Backend:llama TemplateConfig:{Chat: ChatMessage: Completion: Edit: Functions:} PromptStrings:[] InputStrings:[Test] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:1 MMap:false MMlock:false LowVRAM:false Grammar: StopWords:[] Cutstrings:[] TrimSpace:[] ContextSize:0 NUMA:false LoraAdapter: LoraBase: NoMulMatQ:false DraftModel: NDraft:0 Quantization:} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{PipelineType: SchedulerType: CUDA:false EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} VallE:{AudioPath:}}
6:19PM DBG Loading model llama from llama-2-7b.Q4_0.gguf
6:19PM DBG Loading model in memory from file: /Users/andrey/sandbox/llm/current/models/llama-2-7b.Q4_0.gguf
6:19PM DBG Loading GRPC Model llama: {backendString:llama model:llama-2-7b.Q4_0.gguf threads:4 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0x14000102d00 externalBackends:map[] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false}
6:19PM DBG Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/llama
6:19PM DBG GRPC Service for llama-2-7b.Q4_0.gguf will be running at: '127.0.0.1:60196'
6:19PM DBG GRPC Service state dir: /var/folders/f9/1b1jz83s4ysfn9zfncbsb8y40000gn/T/go-processmanager3618332483
6:19PM DBG GRPC Service Started
rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:60196: connect: connection refused"
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr 2023/10/20 18:19:21 gRPC Server listening at 127.0.0.1:60196
6:19PM DBG GRPC Service Ready
6:19PM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:<nil>} sizeCache:0 unknownFields:[] Model:llama-2-7b.Q4_0.gguf ContextSize:0 Seed:0 NBatch:512 F16Memory:true MLock:false MMap:false VocabOnly:false LowVRAM:false Embeddings:true NUMA:false NGPULayers:1 MainGPU: TensorSplit: Threads:4 LibrarySearchPath: RopeFreqBase:0 RopeFreqScale:0 RMSNormEps:0 NGQA:0 ModelFile:/Users/andrey/sandbox/llm/current/models/llama-2-7b.Q4_0.gguf Device: UseTriton:false ModelBaseName: UseFastTokenizer:false PipelineType: SchedulerType: CUDA:false CFGScale:0 IMG2IMG:false CLIPModel: CLIPSubfolder: CLIPSkip:0 Tokenizer: LoraBase: LoraAdapter: NoMulMatQ:false DraftModel: AudioPath: Quantization:}
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr create_gpt_params: loading model /Users/andrey/sandbox/llm/current/models/llama-2-7b.Q4_0.gguf
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llama_model_loader: loaded meta data with 19 key-value pairs and 291 tensors from /Users/andrey/sandbox/llm/current/models/llama-2-7b.Q4_0.gguf (version GGUF V2 (latest))
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llama_model_loader: - tensor    0:                token_embd.weight q4_0     [  4096, 32000,     1,     1 ]
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llama_model_loader: - tensor    1:           blk.0.attn_norm.weight f32      [  4096,     1,     1,     1 ]
...
blk.31.attn_v.weight q4_0     [  4096,  4096,     1,     1 ]
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llama_model_loader: - tensor  290:               output_norm.weight f32      [  4096,     1,     1,     1 ]
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llama_model_loader: - kv   0:                       general.architecture str
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llama_model_loader: - kv   1:                               general.name str
...
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llama_model_loader: - kv  18:               general.quantization_version u32
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llama_model_loader: - type  f32:   65 tensors
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llama_model_loader: - type q4_0:  225 tensors
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llama_model_loader: - type q6_K:    1 tensors
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llm_load_print_meta: format         = GGUF V2 (latest)
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llm_load_print_meta: arch           = llama
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llm_load_print_meta: vocab type     = SPM
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llm_load_print_meta: n_vocab        = 32000
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llm_load_print_meta: n_merges       = 0
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llm_load_print_meta: n_ctx_train    = 4096
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llm_load_print_meta: n_ctx          = 512
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llm_load_print_meta: n_embd         = 4096
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llm_load_print_meta: n_head         = 32
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llm_load_print_meta: n_head_kv      = 32
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llm_load_print_meta: n_layer        = 32
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llm_load_print_meta: n_rot          = 128
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llm_load_print_meta: n_gqa          = 1
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llm_load_print_meta: f_norm_eps     = 1.0e-05
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llm_load_print_meta: f_norm_rms_eps = 1.0e-05
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llm_load_print_meta: n_ff           = 11008
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llm_load_print_meta: freq_base      = 10000.0
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llm_load_print_meta: freq_scale     = 1
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llm_load_print_meta: model type     = 7B
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llm_load_print_meta: model ftype    = mostly Q4_0
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llm_load_print_meta: model size     = 6.74 B
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llm_load_print_meta: general.name   = LLaMA v2
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llm_load_print_meta: BOS token = 1 '<s>'
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llm_load_print_meta: EOS token = 2 '</s>'
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llm_load_print_meta: UNK token = 0 '<unk>'
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llm_load_print_meta: LF token  = 13 '<0x0A>'
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llm_load_tensors: ggml ctx size = 3647.96 MB
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llm_load_tensors: mem required  = 3647.96 MB (+  256.00 MB per state)
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr ..................................................................................................
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llama_new_context_with_model: kv self size  =  256.00 MB
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr ggml_metal_init: allocating
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr ggml_metal_init: found device: Apple M2 Max
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr ggml_metal_init: picking default device: Apple M2 Max
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr ggml_metal_init: loading '/tmp/localai/backend_data/backend-assets/grpc/ggml-metal.metal'
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr ggml_metal_init: loaded kernel_add                            0x157e05b00 | th_max = 1024 | th_width =   32
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr ggml_metal_init: loaded kernel_add_row
 ...
0x157f06580 | th_max =  768 | th_width =   32
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr ggml_metal_init: loaded kernel_mul_mm_q4_K_f32                0x157e102c0 | th_max =  768 | th_width =   32
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr ggml_metal_init: loaded kernel_mul_mm_q5_K_f32                0x157e10c10 | th_max =  768 | th_width =   32
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr ggml_metal_init: loaded kernel_mul_mm_q6_K_f32                0x158a07d10 | th_max =  768 | th_width =   32
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr ggml_metal_init: loaded kernel_rope                           0x158a085d0 | th_max = 1024 | th_width =   32
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr ggml_metal_init: loaded kernel_alibi_f32                      0x14a00c0e0 | th_max = 1024 | th_width =   32
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr ggml_metal_init: loaded kernel_cpy_f32_f16                    0x14a00cab0 | th_max = 1024 | th_width =   32
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr ggml_metal_init: loaded kernel_cpy_f32_f32                    0x14a00d360 | th_max = 1024 | th_width =   32
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr ggml_metal_init: loaded kernel_cpy_f16_f16                    0x14a00dc10 | th_max = 1024 | th_width =   32
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr ggml_metal_init: hasUnifiedMemory              = true
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr ggml_metal_init: recommendedMaxWorkingSetSize  = 73728.00 MB
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr ggml_metal_init: maxTransferRate               = built-in GPU
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llama_new_context_with_model: compute buffer total size =   73.47 MB
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr llama_new_context_with_model: max tensor size =   102.54 MB
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr ggml_metal_add_buffer: allocated 'data            ' buffer, size =  3647.97 MB, ( 3648.47 / 73728.00)
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr ggml_metal_add_buffer: allocated 'eval            ' buffer, size =     1.48 MB, ( 3649.95 / 73728.00)
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr ggml_metal_add_buffer: allocated 'kv              ' buffer, size =   258.00 MB, ( 3907.95 / 73728.00)
6:19PM DBG GRPC(llama-2-7b.Q4_0.gguf-127.0.0.1:60196): stderr ggml_metal_add_buffer: allocated 'alloc           ' buffer, size =    72.02 MB, ( 3979.97 / 73728.00)
[127.0.0.1]:60195 500 - POST /v1/embeddings
localai-bot commented 11 months ago

:warning::warning::warning::warning::warning:

Hi! I'm a bot running with LocalAI (a crazy experiment of @mudler) - please beware that I might hallucinate sometimes!

but... I can also be funny or helpful :smile_cat:, and I can generally provide good tips or point out places to look in the documentation or the code, based on what you wrote in the issue.

Don't engage in conversation with me, I don't support replying (yet)!

:warning::warning::warning::warning::warning:

afonsoguerra commented 10 months ago

+1

I'm also interested in better documentation/updates on how to use different embeddings. sentencetransformers is too restrictive, and bert.cpp is OK but very unstable; it segfaults a lot on my system. I would also like to use bigger models than the default mini one, but I can't find a good one. Leveraging llama.cpp embeddings would likely be ideal. Thanks!

rupurt commented 10 months ago

@afonsoguerra likewise. Do you have any links on how to set up sentencetransformers with LocalAI? From the default docs I get an error with autogptq:

make[1]: *** [Makefile:4: autogptq] Error 1
make[1]: Leaving directory '/Users/alex/workspace/mudler/LocalAI/backend/python/autogptq'
make: *** [Makefile:405: prepare-extra-conda-environments] Error 2
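
(For reference, an embeddings config using the sentencetransformers backend generally looks like the sketch below, per the LocalAI embeddings docs; the model name is just an example and the exact fields should be checked against the current documentation.)

name: text-embedding            # arbitrary example name
backend: sentencetransformers
embeddings: true
parameters:
  model: all-MiniLM-L6-v2       # example sentence-transformers model id
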
yonitjio commented 5 months ago

This should be merged with issue https://github.com/mudler/LocalAI/issues/1617