Closed benda95280 closed 3 months ago
This happens when the LLM outputs nothing; it is already fixed in the latest version. Thanks!
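The failure mode described (an empty LLM completion being fed back into the agent loop, producing the repeating `ASSISTANT`/`SYSTEM` exchanges seen in the log below) can be guarded against with a simple emptiness check. This is an illustrative sketch only, not AIlice's actual code; the function name and return shape are hypothetical:

```python
def handle_llm_output(raw: str) -> dict:
    """Return a parsed reply, or a retry signal when the model emits nothing.

    Hypothetical guard illustrating the kind of fix described above;
    the real AIlice implementation differs.
    """
    text = raw.strip()
    if not text:
        # Without this check, an empty completion is echoed back to the
        # agent, which re-issues the same !CALL and spins forever.
        return {"status": "retry", "reason": "empty LLM output"}
    return {"status": "ok", "content": text}

print(handle_llm_output(""))        # empty output -> retry signal
print(handle_llm_output("list the files")["status"])  # normal output -> ok
```

The point is simply that an empty string must be treated as a distinct error/retry case rather than as a valid reply.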
No idea if this is better ... ?
root@ailice:/home/benda/AIlice/AIlice/ailice# python3 AIliceWeb.py --modelID=lm-studio:AbLoGa/Chat-Mistral-7b-openorca-q4-gguf --share=true
config.json is located at /root/.config/ailice
In order to simplify installation and usage, we have set local execution as the default behavior, which means AI has complete control over the local environment. To prevent irreversible losses due to potential AI errors, you may consider one of the following two methods: the first one, run AIlice in a virtual machine; the second one, install Docker, use the provided Dockerfile to build an image and container, and modify the relevant configurations in config.json. For detailed instructions, please refer to the documentation.
storage started.
browser started.
arxiv started.
google started.
duckduckgo started.
scripter started.
computer started.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
We now start the vector database. Note that this may include downloading the model weights, so it may take some time.
Vector database has been started. returned msg: vector database has been switched to a non-persistent version. tokenizer: bert-base-uncased, model: nomic-ai/nomic-embed-text-v1
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Running on local URL: http://127.0.0.1:7860
IMPORTANT: You are using gradio version 4.19.2, however version 4.29.0 is available, please upgrade.
--------
Running on public URL: https://44bece524680185594.gradio.live
This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
ASSISTANT_AIlice: !CALL<!|"coder-proxy","file_listing","./current_directory"|!>
ASSISTANT_file_listing:
SYSTEM_AIlice: Agent file_listing returned:
ASSISTANT_AIlice: !CALL<!|"coder-proxy","file_listing","./current_directory"|!>
ASSISTANT_file_listing:
SYSTEM_AIlice: Agent file_listing returned:
ASSISTANT_AIlice: !CALL<!|"coder-proxy","file_listing","./current_directory"|!>
ASSISTANT_file_listing:
SYSTEM_AIlice: Agent file_listing returned:
ASSISTANT_AIlice: !CALL<!|"coder-proxy","file_listing","./current_directory"|!>
ASSISTANT_file_listing:
SYSTEM_AIlice: Agent file_listing returned:
ASSISTANT_AIlice: !CALL<!|"coder-proxy","file_listing","./current_directory"|!>
ASSISTANT_file_listing: ^C
LM Studio logs: LM-Studio.log
Yes, this is the expected outcome after the fix.
The 7B models used to be able to complete some simple tasks, but as AIlice's design has become more complex, these models are no longer adequate and often perform poorly. We therefore recommend replacing them with a stronger model. In the future, we will consider fine-tuning a 7B model specifically for AIlice to see whether it can achieve the desired results.
Additionally, a model based on hf:Open-Orca/Mistral-7B-OpenOrca might perform slightly better, but it would still be impractical. Currently, achieving practical results requires GPT-4o; the best open-source model available (as far as I know) is Mixtral-8x22b-instruct, but even it cannot execute tasks as smoothly as GPT-4o.
Hello,
My first try seems to generate an error:
Launched with the command line : python3 AIliceWeb.py --modelID=lm-studio:AbLoGa/Chat-Mistral-7b-openorca-q4-gguf
LM Studio log: LM-Studio.log