mudler / LocalAI

:robot: The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. Features: Generate Text, Audio, Video, Images, Voice Cloning, Distributed, P2P inference
https://localai.io
MIT License

Home Assistant integration with Extended OpenAI Conversation and LocalAI #2702

Closed · CyberGWJ closed 4 months ago

CyberGWJ commented 4 months ago

**LocalAI version:** 2.18.1

**Environment, CPU architecture, OS, and Version:** Linux localAI 6.1.0-22-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.94-1 (2024-06-21) x86_64 GNU/Linux

**Describe the bug**
When using the integration with Home Assistant, the Assist chat returns the error "Sorry, I had a problem talking to OpenAI: Connection error." Running LocalAI with debugging enabled shows the error: `panic: Unrecognized schema: map[]`

**To Reproduce**

1. Have a working LocalAI instance (tested using curl requests to the models and received a response; see the example request below).
2. Install the Home Assistant integration Extended OpenAI Conversation.
3. LocalAI receives the request, starts processing it, then hits the error.

Note: all models were tested prior to the HA integration.
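
For reference, step 1 was verified with requests of roughly this shape against LocalAI's OpenAI-compatible chat completions endpoint (the port and model name below are taken from the log output further down; adjust to your setup):

```sh
curl http://localhost:8989/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "phi-2-chat:Q8_0",
        "messages": [{"role": "user", "content": "How are you feeling?"}],
        "temperature": 0.5
      }'
```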

**Expected behavior**
A response from the AI model is returned.

**Logs**

```
root@localAI:~# local-ai --debug
9:32PM INF env file found, loading environment variables from file envFile=/etc/localai.env
9:32PM DBG Setting logging to debug
9:32PM INF Starting LocalAI using 6 threads, with models path: /usr/share/local-ai/models
9:32PM INF LocalAI version: ()
9:32PM DBG CPU capabilities: [3dnowprefetch abm adx aes apic arat arch_capabilities arch_perfmon avx avx2 bmi1 bmi2 clflush clflushopt cmov constant_tsc cpuid cpuid_fault cx16 cx8 de ept ept_ad erms f16c flexpriority flush_l1d fma fpu fsgsbase fxsr ht hypervisor ibpb ibrs invpcid invpcid_single lahf_lm lm mca mce md_clear mmx movbe mpx msr mtrr nopl nx pae pat pcid pclmulqdq pdcm pdpe1gb pge pni popcnt pse pse36 pti rdrand rdseed rdtscp rep_good sep smap smep ss ssbd sse sse2 sse4_1 sse4_2 ssse3 stibp syscall tpr_shadow tsc tsc_adjust tsc_deadline_timer tsc_known_freq umip vme vmx vnmi vpid x2apic xgetbv1 xsave xsavec xsaveopt xsaves xtopology]
9:32PM DBG GPU count: 1
9:32PM DBG GPU: card #0 @0000:00:02.0 -> driver: 'bochs-drm' class: 'Display controller' vendor: 'unknown' product: 'unknown'
9:32PM DBG guessDefaultsFromFile: template already set name=gpt-4
9:32PM DBG guessDefaultsFromFile: template already set name=Hermes-2-Pro-Llama-3-8B-Q5_K_M
9:32PM DBG guessDefaultsFromFile: template already set name=luna
9:32PM DBG guessDefaultsFromFile: template already set name=luna5
9:32PM DBG guessDefaultsFromFile: template already set name=phi-2-chat:Q8_0
9:32PM INF Preloading models from /usr/share/local-ai/models

Model name: gpt-4

Model name: Hermes-2-Pro-Llama-3-8B-Q5_K_M

Model name: luna

Model name: luna5

Model name: phi-2-chat:Q8_0

9:32PM DBG Model: Hermes-2-Pro-Llama-3-8B-Q5_K_M (config: {PredictionOptions:{Model:Hermes-2-Pro-Llama-3-8B-Q5_K_M.gguf Language: Translate:false N:0 TopP:0xc000b773a8 TopK:0xc000b773b0 Temperature:0xc000b773b8 Maxtokens:0xc000b773e8 Echo:false Batch:0 IgnoreEOS:false RepeatPenalty:0 RepeatLastN:0 Keep:0 FrequencyPenalty:0 PresencePenalty:0 TFZ:0xc000b773e0 TypicalP:0xc000b773d8 Seed:0xc000b77400 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:Hermes-2-Pro-Llama-3-8B-Q5_K_M F16:0xc000b773a0 Threads:0xc000b77398 Debug:0xc000b773f8 Roles:map[] Embeddings:false Backend: TemplateConfig:{Chat:{{.Input -}} <|im_start|>assistant ChatMessage:<|im_start|>{{if eq .RoleName "assistant"}}assistant{{else if eq .RoleName "system"}}system{{else if eq .RoleName "tool"}}tool{{else if eq .RoleName "user"}}user{{end}} {{- if .FunctionCall }}

{{- else if eq .RoleName "tool" }} {{- end }} {{- if .Content}} {{.Content }} {{- end }} {{- if .FunctionCall}} {{toJson .FunctionCall}} {{- end }} {{- if .FunctionCall }}

{{- else if eq .RoleName "tool" }} {{- end }}<|im_end|> Completion:{{.Input}} Edit: Functions:<|im_start|>system You are a function calling AI model. Here are the available tools:

{{range .Functions}} {'type': 'function', 'function': {'name': '{{.Name}}', 'description': '{{.Description}}', 'parameters': {{toJson .Parameters}} }} {{end}}

You should call the tools provided to you sequentially Please use XML tags to record your reasoning and planning before you call the functions as follows:

{step-by-step reasoning and plan in bullet points}

For each function call return a json object with function name and arguments within XML tags as follows:

{"arguments": , "name": } <|im_end|> {{.Input -}} <|im_start|>assistant UseTokenizerTemplate:false JoinChatMessagesByCharacter:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: ResponseFormat: ResponseFormatMap:map[] FunctionsConfig:{DisableNoAction:true GrammarConfig:{ParallelCalls:false DisableParallelNewLines:false MixedMode:true NoMixedFreeString:false NoGrammar:false Prefix: ExpectStringsAfterJSON:false PropOrder:} NoActionFunctionName: NoActionDescriptionName: ResponseRegex:[] JSONRegexMatch:[(?s)(.*?) (?s)(.*?)] ReplaceFunctionResults:[{Key:(?s)^[^{\[]* Value:} {Key:(?s)[^}\]]*$ Value:} {Key:'([^']*?)' Value:_DQUOTE_${1}_DQUOTE_} {Key:\\" Value:__TEMP_QUOTE__} {Key:' Value:'} {Key:_DQUOTE_ Value:"} {Key:__TEMP_QUOTE__ Value:"} {Key:(?s).* Value:}] ReplaceLLMResult:[{Key:(?s).* Value:}] CaptureLLMResult:[] FunctionName:true} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0xc000b773d0 MirostatTAU:0xc000b773c8 Mirostat:0xc000b773c0 NGPULayers:0xc000b773f0 MMap:0xc000b77340 MMlock:0xc000b773f9 LowVRAM:0xc000b773f9 Grammar: StopWords:[<|im_end|> <|eot_id|> <|end_of_text|>] Cutstrings:[] TrimSpace:[] TrimSuffix:[] ContextSize:0xc000b77348 NUMA:false LoraAdapter: LoraBase: LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 MMProj: FlashAttention:false NoKVOffloading:false RopeScaling: ModelType: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{CUDA:false PipelineType: SchedulerType: EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder: ControlNet:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} TTSConfig:{Voice: VallE:{AudioPath:}} CUDA:false DownloadFiles:[] Description: Usage:})
9:32PM DBG Model: gpt-4 (config: {PredictionOptions:{Model:Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf Language: Translate:false N:0 TopP:0xc000b76fd8 TopK:0xc000b76fe0 Temperature:0xc000b76fe8 Maxtokens:0xc000b77018 Echo:false Batch:0 IgnoreEOS:false RepeatPenalty:0 RepeatLastN:0 Keep:0 FrequencyPenalty:0 PresencePenalty:0 TFZ:0xc000b77010 TypicalP:0xc000b77008 Seed:0xc000b77030 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:gpt-4 F16:0xc000b76fd0 Threads:0xc000b76fc8 Debug:0xc000b77028 Roles:map[] Embeddings:false Backend: TemplateConfig:{Chat:{{.Input -}} <|im_start|>assistant ChatMessage:<|im_start|>{{if eq .RoleName "assistant"}}assistant{{else if eq .RoleName "system"}}system{{else if eq .RoleName "tool"}}tool{{else if eq .RoleName "user"}}user{{end}} {{- if .FunctionCall }} {{- else if eq .RoleName "tool" }} {{- end }} {{- if .Content}} {{.Content }} {{- end }} {{- if .FunctionCall}} {{toJson .FunctionCall}} {{- end }} {{- if .FunctionCall }} {{- else if eq .RoleName "tool" }} {{- end }}<|im_end|> Completion:{{.Input}} Edit: Functions:<|im_start|>system You are a function calling AI model. Here are the available tools: {{range .Functions}} {'type': 'function', 'function': {'name': '{{.Name}}', 'description': '{{.Description}}', 'parameters': {{toJson .Parameters}} }} {{end}} You should call the tools provided to you sequentially Please use XML tags to record your reasoning and planning before you call the functions as follows: {step-by-step reasoning and plan in bullet points} For each function call return a json object with function name and arguments within XML tags as follows: {"arguments": , "name": } <|im_end|> {{.Input -}} <|im_start|>assistant UseTokenizerTemplate:false JoinChatMessagesByCharacter:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: ResponseFormat: ResponseFormatMap:map[] FunctionsConfig:{DisableNoAction:true GrammarConfig:{ParallelCalls:false DisableParallelNewLines:false MixedMode:true NoMixedFreeString:false NoGrammar:false Prefix: ExpectStringsAfterJSON:false PropOrder:} NoActionFunctionName: NoActionDescriptionName: ResponseRegex:[] JSONRegexMatch:[(?s)(.*?) (?s)(.*?)] ReplaceFunctionResults:[{Key:(?s)^[^{\[]* Value:} {Key:(?s)[^}\]]*$ Value:} {Key:'([^']*?)' Value:_DQUOTE_${1}_DQUOTE_} {Key:\\" Value:__TEMP_QUOTE__} {Key:' Value:'} {Key:_DQUOTE_ Value:"} {Key:__TEMP_QUOTE__ Value:"} {Key:(?s).* Value:}] ReplaceLLMResult:[{Key:(?s).* Value:}] CaptureLLMResult:[] FunctionName:true} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0xc000b77000 MirostatTAU:0xc000b76ff8 Mirostat:0xc000b76ff0 NGPULayers:0xc000b77020 MMap:0xc000b76da0 MMlock:0xc000b77029 LowVRAM:0xc000b77029 Grammar: StopWords:[<|im_end|> <|eot_id|> <|end_of_text|>] Cutstrings:[] TrimSpace:[] TrimSuffix:[] ContextSize:0xc000b76ea0 NUMA:false LoraAdapter: LoraBase: LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 MMProj: FlashAttention:false NoKVOffloading:false RopeScaling: ModelType: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{CUDA:false PipelineType: SchedulerType: EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder: ControlNet:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} TTSConfig:{Voice: VallE:{AudioPath:}} CUDA:false DownloadFiles:[] Description: Usage:})
9:32PM DBG Model: luna (config: {PredictionOptions:{Model:luna-ai-llama2-uncensored.Q6_K.gguf Language: Translate:false N:0 TopP:0xc000b77640 TopK:0xc000b77608 Temperature:0xc000b77620 Maxtokens:0xc000b776e0 Echo:false Batch:0 IgnoreEOS:false RepeatPenalty:0 RepeatLastN:0 Keep:0 FrequencyPenalty:0 PresencePenalty:0 TFZ:0xc000b776d8 TypicalP:0xc000b776d0 Seed:0xc000b776f8 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:luna F16:0xc000b77678 Threads:0xc000b77670 Debug:0xc000b776f0 Roles:map[assistant:ASSISTANT: system:SYSTEM: user:USER:] Embeddings:false Backend:llama TemplateConfig:{Chat:lunademo-chat ChatMessage: Completion:lunademo-completion Edit: Functions: UseTokenizerTemplate:false JoinChatMessagesByCharacter:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: ResponseFormat: ResponseFormatMap:map[] FunctionsConfig:{DisableNoAction:false GrammarConfig:{ParallelCalls:false DisableParallelNewLines:false MixedMode:false NoMixedFreeString:false NoGrammar:false Prefix: ExpectStringsAfterJSON:false PropOrder:} NoActionFunctionName: NoActionDescriptionName: ResponseRegex:[] JSONRegexMatch:[] ReplaceFunctionResults:[] ReplaceLLMResult:[] CaptureLLMResult:[] FunctionName:false} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0xc000b776c8 MirostatTAU:0xc000b776c0 Mirostat:0xc000b776b8 NGPULayers:0xc000b776e8 MMap:0xc000b77679 MMlock:0xc000b776f1 LowVRAM:0xc000b776f1 Grammar: StopWords:[] Cutstrings:[] TrimSpace:[] TrimSuffix:[] ContextSize:0xc000b77660 NUMA:false LoraAdapter: LoraBase: LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 MMProj: FlashAttention:false NoKVOffloading:false RopeScaling: ModelType: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{CUDA:false PipelineType: SchedulerType: EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder: ControlNet:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} TTSConfig:{Voice: VallE:{AudioPath:}} CUDA:false DownloadFiles:[] Description: Usage:})
9:32PM DBG Model: luna5 (config: {PredictionOptions:{Model:luna-ai-llama2-uncensored.Q5_K_M.gguf Language: Translate:false N:0 TopP:0xc000b779a0 TopK:0xc000b77968 Temperature:0xc000b77980 Maxtokens:0xc000b77a40 Echo:false Batch:0 IgnoreEOS:false RepeatPenalty:0 RepeatLastN:0 Keep:0 FrequencyPenalty:0 PresencePenalty:0 TFZ:0xc000b77a38 TypicalP:0xc000b77a30 Seed:0xc000b77a58 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:luna5 F16:0xc000b779d8 Threads:0xc000b779d0 Debug:0xc000b77a50 Roles:map[assistant:ASSISTANT: system:SYSTEM: user:USER:] Embeddings:false Backend:llama TemplateConfig:{Chat:luna5demo-chat ChatMessage: Completion:luna5demo-completion Edit: Functions: UseTokenizerTemplate:false JoinChatMessagesByCharacter:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: ResponseFormat: ResponseFormatMap:map[] FunctionsConfig:{DisableNoAction:false GrammarConfig:{ParallelCalls:false DisableParallelNewLines:false MixedMode:false NoMixedFreeString:false NoGrammar:false Prefix: ExpectStringsAfterJSON:false PropOrder:} NoActionFunctionName: NoActionDescriptionName: ResponseRegex:[] JSONRegexMatch:[] ReplaceFunctionResults:[] ReplaceLLMResult:[] CaptureLLMResult:[] FunctionName:false} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0xc000b77a28 MirostatTAU:0xc000b77a20 Mirostat:0xc000b77a18 NGPULayers:0xc000b77a48 MMap:0xc000b779d9 MMlock:0xc000b77a51 LowVRAM:0xc000b77a51 Grammar: StopWords:[] Cutstrings:[] TrimSpace:[] TrimSuffix:[] ContextSize:0xc000b779c0 NUMA:false LoraAdapter: LoraBase: LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 MMProj: FlashAttention:false NoKVOffloading:false RopeScaling: ModelType: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{CUDA:false PipelineType: SchedulerType: EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder: ControlNet:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} TTSConfig:{Voice: VallE:{AudioPath:}} CUDA:false DownloadFiles:[] Description: Usage:})
9:32PM DBG Model: phi-2-chat:Q8_0 (config: {PredictionOptions:{Model:phi-2-layla-v1-chatml-Q8_0.gguf Language: Translate:false N:0 TopP:0xc000b77c88 TopK:0xc000b77c90 Temperature:0xc000b77c98 Maxtokens:0xc000b77cc8 Echo:false Batch:0 IgnoreEOS:false RepeatPenalty:0 RepeatLastN:0 Keep:0 FrequencyPenalty:0 PresencePenalty:0 TFZ:0xc000b77cc0 TypicalP:0xc000b77cb8 Seed:0xc000b77ce0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:phi-2-chat:Q8_0 F16:0xc000b77c60 Threads:0xc000b77c78 Debug:0xc000b77cd8 Roles:map[] Embeddings:false Backend: TemplateConfig:{Chat:{{.Input}} <|im_start|>assistant ChatMessage:<|im_start|>{{ .RoleName }} {{.Content}}<|im_end|> Completion:{{.Input}} Edit: Functions: UseTokenizerTemplate:false JoinChatMessagesByCharacter:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: ResponseFormat: ResponseFormatMap:map[] FunctionsConfig:{DisableNoAction:false GrammarConfig:{ParallelCalls:false DisableParallelNewLines:false MixedMode:false NoMixedFreeString:false NoGrammar:false Prefix: ExpectStringsAfterJSON:false PropOrder:} NoActionFunctionName: NoActionDescriptionName: ResponseRegex:[] JSONRegexMatch:[] ReplaceFunctionResults:[] ReplaceLLMResult:[] CaptureLLMResult:[] FunctionName:false} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0xc000b77cb0 MirostatTAU:0xc000b77ca8 Mirostat:0xc000b77ca0 NGPULayers:0xc000b77cd0 MMap:0xc000b77c61 MMlock:0xc000b77cd9 LowVRAM:0xc000b77cd9 Grammar: StopWords:[<|im_end|>] Cutstrings:[] TrimSpace:[] TrimSuffix:[] ContextSize:0xc000b77c50 NUMA:false LoraAdapter: LoraBase: LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 MMProj: FlashAttention:false NoKVOffloading:false RopeScaling: ModelType: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{CUDA:false PipelineType: SchedulerType: EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder: ControlNet:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} TTSConfig:{Voice: VallE:{AudioPath:}} CUDA:false DownloadFiles:[] Description: Usage:})
9:32PM DBG Extracting backend assets files to /tmp/localai/backend_data
9:32PM DBG processing api keys runtime update
9:32PM DBG processing external_backends.json
9:32PM DBG external backends loaded from external_backends.json
9:32PM INF core/startup process completed!
9:32PM DBG No configuration file found at /tmp/localai/upload/uploadedFiles.json
9:32PM DBG No configuration file found at /tmp/localai/config/assistants.json
9:32PM DBG No configuration file found at /tmp/localai/config/assistantsFile.json
9:32PM INF LocalAI API is listening! Please connect to the endpoint for API documentation. endpoint=http://0.0.0.0:8989
9:32PM DBG Request received: {"model":"phi-2-chat:Q8_0","language":"","translate":false,"n":0,"top_p":1,"top_k":null,"temperature":0.5,"max_tokens":150,"echo":false,"batch":0,"ignore_eos":false,"repeat_penalty":0,"repeat_last_n":0,"n_keep":0,"frequency_penalty":0,"presence_penalty":0,"tfz":null,"typical_p":null,"seed":null,"negative_prompt":"","rope_freq_base":0,"rope_freq_scale":0,"negative_prompt_scale":0,"use_fast_tokenizer":false,"clip_skip":0,"tokenizer":"","file":"","size":"","prompt":null,"instruction":"","input":null,"stop":null,"messages":[{"role":"system","content":"You possess the knowledge of all the universe, answer any question given to you truthfully and to your fullest ability. \nYou are also a smart home manager who has been given permission to control my smart home which is powered by Home Assistant.\nI will provide you information about my smart home along, you can truthfully make corrections or respond in polite and concise language.\n\nCurrent Time: 2024-07-02 21:32:42.655296-04:00\n\nAvailable Devices:\n```csv\nentity_id,name,state,aliases\ntodo.shopping_list,Shopping List,0,\nmedia_player.pioneer_vsx_932_e87cce,Pioneer VSX-932 E87CCE,unknown,\nsensor.thermostat_air_temperature,Thermostat Air temperature,81.0,\nsensor.thermostat_humidity,Thermostat Humidity,52.0,\nclimate.thermostat,Thermostat ,heat_cool,\nfan.thermostat,Thermostat ,on,\nmedia_player.samsung_qn90ba_85,Samsung QN90BA 85,off,\nlight.reading_light_light,Reading,off,\nlight.yard_light,Yard Light,on,\nlock.front_door_door_lock,Front Door Door lock,locked,\nsensor.front_door_temperature,Front Door Temperature,77.0,\nswitch.ups_ha_switch,UPS HA Switch,on,\nswitch.ups_computer_switch,UPS Computer Switch,on,\nswitch.sengled_e1c_nb7_switch,sengled E1C-NB7 Switch,on,\n```\n\nThe current state of devices is provided in Available Devices.\nOnly use the execute_services function when smart home actions are requested.\nDo not tell me what you're thinking about doing either, just do it.\nIf I ask you about the current state of the home, or many devices I have, or how many devices are in a specific state, just respond with the accurate information but do not call the execute_services function.\nIf I ask you what time or date it is be sure to respond in a human readable format.\nIf you don't have enough information to execute a smart home command then specify what other information you need."},{"role":"user","content":"How are you feeling?"}],"functions":[{"name":"execute_services","description":"Use this function to execute service of devices in Home Assistant.","parameters":{"properties":{"list":{"items":{"properties":{"domain":{"description":"The domain of the service","type":"string"},"service":{"description":"The service to be called","type":"string"},"service_data":{"description":"The service data object to indicate what to control.","properties":{"entity_id":{"description":"The entity_id retrieved from available devices. It must start with domain, followed by dot character.","type":"string"}},"required":["entity_id"],"type":"object"}},"required":["domain","service","service_data"],"type":"object"},"type":"array"}},"type":"object"}}],"function_call":"auto","stream":false,"mode":0,"step":0,"grammar":"","grammar_json_functions":null,"grammar_json_name":null,"backend":"","model_base_name":""}
9:32PM DBG guessDefaultsFromFile: template already set name=phi-2-chat:Q8_0
9:32PM DBG Configuration read: &{PredictionOptions:{Model:phi-2-layla-v1-chatml-Q8_0.gguf Language: Translate:false N:0 TopP:0xc000512638 TopK:0xc000b77c90 Temperature:0xc000512630 Maxtokens:0xc000512608 Echo:false Batch:0 IgnoreEOS:false RepeatPenalty:0 RepeatLastN:0 Keep:0 FrequencyPenalty:0 PresencePenalty:0 TFZ:0xc000b77cc0 TypicalP:0xc000b77cb8 Seed:0xc000b77ce0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:phi-2-chat:Q8_0 F16:0xc000b77c60 Threads:0xc000b77c78 Debug:0xc000512730 Roles:map[] Embeddings:false Backend: TemplateConfig:{Chat:{{.Input}} <|im_start|>assistant ChatMessage:<|im_start|>{{ .RoleName }} {{.Content}}<|im_end|> Completion:{{.Input}} Edit: Functions: UseTokenizerTemplate:false JoinChatMessagesByCharacter:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString:auto functionCallNameString: ResponseFormat: ResponseFormatMap:map[] FunctionsConfig:{DisableNoAction:false GrammarConfig:{ParallelCalls:false DisableParallelNewLines:false MixedMode:false NoMixedFreeString:false NoGrammar:false Prefix: ExpectStringsAfterJSON:false PropOrder:} NoActionFunctionName: NoActionDescriptionName: ResponseRegex:[] JSONRegexMatch:[] ReplaceFunctionResults:[] ReplaceLLMResult:[] CaptureLLMResult:[] FunctionName:false} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0xc000b77cb0 MirostatTAU:0xc000b77ca8 Mirostat:0xc000b77ca0 NGPULayers:0xc000b77cd0 MMap:0xc000b77c61 MMlock:0xc000b77cd9 LowVRAM:0xc000b77cd9 Grammar: StopWords:[<|im_end|>] Cutstrings:[] TrimSpace:[] TrimSuffix:[] ContextSize:0xc000b77c50 NUMA:false LoraAdapter: LoraBase: LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 MMProj: FlashAttention:false NoKVOffloading:false RopeScaling: ModelType: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{CUDA:false PipelineType: SchedulerType: EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder: ControlNet:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} TTSConfig:{Voice: VallE:{AudioPath:}} CUDA:false DownloadFiles:[] Description: Usage:}
9:32PM DBG Response needs to process functions
panic: Unrecognized schema: map[]

goroutine 77 [running]:
github.com/mudler/LocalAI/pkg/functions.(*JSONSchemaConverter).visit(0xc000427990, 0xc0004c4960, {0x0, 0x0}, 0xc0004c4960)
    /home/runner/_work/LocalAI/LocalAI/pkg/functions/grammar_json_schema.go:304 +0x69b
github.com/mudler/LocalAI/pkg/functions.(*JSONSchemaConverter).Grammar(0xc000427990, 0xc0004c4960, {0xc0009ff568, 0x1, 0x1})
    /home/runner/_work/LocalAI/LocalAI/pkg/functions/grammar_json_schema.go:336 +0x85
github.com/mudler/LocalAI/pkg/functions.(*JSONSchemaConverter).GrammarFromBytes(0xc000427990, {0xc000512e98, 0x2, 0x8}, {0xc0009ff568, 0x1, 0x1})
    /home/runner/_work/LocalAI/LocalAI/pkg/functions/grammar_json_schema.go:343 +0x89
github.com/mudler/LocalAI/pkg/functions.JSONFunctionStructureFunction.Grammar({{0x0, 0x0, 0x0}, {0x0, 0x0, 0x0}, 0x0}, {0xc0009ff568, 0x1, 0x1})
    /home/runner/_work/LocalAI/LocalAI/pkg/functions/grammar_json_schema.go:405 +0xf8
github.com/mudler/LocalAI/core/http/endpoints/openai.ChatEndpoint.func3(0xc0002b4c08)
    /home/runner/_work/LocalAI/LocalAI/core/http/endpoints/openai/chat.go:234 +0xe9d
github.com/gofiber/fiber/v2.(*Ctx).Next(0xc000c9cbf0?)
    /home/runner/go/pkg/mod/github.com/gofiber/fiber/v2@v2.52.4/ctx.go:1027 +0x3d
github.com/mudler/LocalAI/core/http.App.func5(0xc0002b4c08)
    /home/runner/_work/LocalAI/LocalAI/core/http/app.go:164 +0x227
github.com/gofiber/fiber/v2.(*App).next(0xc000236f08, 0xc0002b4c08)
    /home/runner/go/pkg/mod/github.com/gofiber/fiber/v2@v2.52.4/router.go:145 +0x1be
github.com/gofiber/fiber/v2.(*Ctx).Next(0xc0002b4c08?)
    /home/runner/go/pkg/mod/github.com/gofiber/fiber/v2@v2.52.4/ctx.go:1030 +0x4d
github.com/mudler/LocalAI/core/http.App.LocalAIMetricsAPIMiddleware.func8(0xc0002b4c08)
    /home/runner/_work/LocalAI/LocalAI/core/http/endpoints/localai/metrics.go:38 +0xa5
github.com/gofiber/fiber/v2.(*Ctx).Next(0xc0002b4c08?)
    /home/runner/go/pkg/mod/github.com/gofiber/fiber/v2@v2.52.4/ctx.go:1027 +0x3d
github.com/gofiber/contrib/fiberzerolog.New.func1(0xc0002b4c08)
    /home/runner/go/pkg/mod/github.com/gofiber/contrib/fiberzerolog@v1.0.0/zerolog.go:36 +0xb7
github.com/gofiber/fiber/v2.(*App).next(0xc000236f08, 0xc0002b4c08)
    /home/runner/go/pkg/mod/github.com/gofiber/fiber/v2@v2.52.4/router.go:145 +0x1be
github.com/gofiber/fiber/v2.(*App).handler(0xc000236f08, 0x49d44f?)
    /home/runner/go/pkg/mod/github.com/gofiber/fiber/v2@v2.52.4/router.go:172 +0x78
github.com/valyala/fasthttp.(*Server).serveConn(0xc0002dca00, {0x4110c258, 0xc0009ff2c0})
    /home/runner/go/pkg/mod/github.com/valyala/fasthttp@v1.51.0/server.go:2359 +0xe70
github.com/valyala/fasthttp.(*workerPool).workerFunc(0xc000626820, 0xc0007216c0)
    /home/runner/go/pkg/mod/github.com/valyala/fasthttp@v1.51.0/workerpool.go:224 +0xa4
github.com/valyala/fasthttp.(*workerPool).getCh.func1()
    /home/runner/go/pkg/mod/github.com/valyala/fasthttp@v1.51.0/workerpool.go:196 +0x32
created by github.com/valyala/fasthttp.(*workerPool).getCh in goroutine 1
    /home/runner/go/pkg/mod/github.com/valyala/fasthttp@v1.51.0/workerpool.go:195 +0x190
root@localAI:~#
```

**Additional context**

I got my previous issue resolved by changing the VM CPU type.

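The panic originates in LocalAI's JSON-schema-to-grammar converter (`pkg/functions/grammar_json_schema.go` in the stack trace above), which runs because the request carries `"function_call": "auto"`. What follows is a minimal, self-contained sketch of that failure mode, not LocalAI's actual code: a schema walker that maps recognized schema shapes to grammar rules and panics on anything else, so a node that deserializes to an empty map (`map[]`) aborts the whole request instead of returning an API error.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// visit sketches a recursive schema-to-grammar pass: every recognized
// keyword maps to a grammar rule, and an unrecognized node is fatal.
func visit(schema map[string]interface{}) string {
	switch {
	case schema["properties"] != nil:
		return "object-rule"
	case schema["items"] != nil:
		return "array-rule"
	case schema["type"] != nil:
		return fmt.Sprintf("%v-rule", schema["type"])
	default:
		// The branch the stack trace lands in: an empty schema map
		// matches none of the recognized shapes.
		panic(fmt.Sprintf("Unrecognized schema: %v", schema))
	}
}

func main() {
	// A well-formed parameter schema converts fine...
	ok := map[string]interface{}{}
	_ = json.Unmarshal([]byte(`{"type":"object","properties":{"entity_id":{"type":"string"}}}`), &ok)
	fmt.Println(visit(ok)) // object-rule

	// ...but a node that deserializes to an empty object reproduces
	// the symptom: panic: Unrecognized schema: map[]
	empty := map[string]interface{}{}
	_ = json.Unmarshal([]byte(`{}`), &empty)
	fmt.Println(visit(empty))
}
```

Worth noting from the logs: the failing request targets `phi-2-chat:Q8_0`, whose config has an empty `Functions:` template and `FunctionName:false`, yet the function-handling path still runs because the request sets `function_call` to `auto`.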
CyberGWJ commented 4 months ago

I found a couple of solutions. If I use version 2.8.2, it works. I am testing another integration tool.
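
For anyone wanting to try the same pin on a bare-metal install like the one above, a rough sketch of the downgrade; the release asset name is an assumption based on LocalAI's usual binary naming and should be verified against the v2.8.2 release page:

```sh
# Hypothetical asset name -- check https://github.com/mudler/LocalAI/releases/tag/v2.8.2
curl -Lo local-ai https://github.com/mudler/LocalAI/releases/download/v2.8.2/local-ai-avx2-Linux-x86_64
chmod +x local-ai
./local-ai --debug
```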

CyberGWJ commented 4 months ago

Cannot get it to work. Will look for another solution.