Closed: iwoomi closed this issue 1 year ago
Please try the latest commit to see if it solves your issue. Note: if you are running the web client on another machine, you need to set the endpoint to `http://{LAN_ip_of_the_server}:{port}/` at `web/src/api/hugginggpt.ts`, line 9.
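For illustration, a sketch of what that line might look like after the change; the actual identifier in `web/src/api/hugginggpt.ts` may differ, and `192.168.1.10:8004` is a placeholder for your server's LAN IP and configured port:

```typescript
// web/src/api/hugginggpt.ts, line 9 (sketch; the real variable name may differ).
// Point the web client at the machine running the HuggingGPT server instead of
// localhost when the client runs on a different machine.
const HUGGINGGPT_BASE_URL = "http://192.168.1.10:8004/"; // http://{LAN_ip_of_the_server}:{port}/
```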
Thank you, now I can use it, but I still have some questions (I'm a newbie).
Why do I have only the "default" option in the select area, and not many options as shown in https://github.com/microsoft/JARVIS/issues/79#issuecomment-1499760848? Is it that I need a pro version of OpenAI or Hugging Face?
As shown below, I tell HuggingGPT to generate a cat under a window, but it cannot. Is this normal? Where should I change the config to solve this problem, or do I need a pro version of Hugging Face?
I run this command to start the server, and run `npm run dev` to start the web server. It opens this URL: http://localhost:9999/#/. Then I click the gear button → input my OpenAI token → click "save" → refresh the web page. But there is still only one "default" option here (not as described in the issue comment linked above).
When I submit "hello", it acts normally, but when I submit "draw a cat", it returns "something seems wrong".
Here is the error log output in the terminal:
```
INFO:__main__:********************************************************************************
INFO:__main__:input: Hello
DEBUG:__main__:[{'role': 'system', 'content': '#1 Task Planning Stage: The AI assistant can parse user input to several tasks: [{"task": task, "id": task_id, "dep": dependency_task_id, "args": {"text": text or
```

My `lite.yaml` is as below (both the OpenAI and Hugging Face accounts are the free version):
```yaml
openai:
  key: sk-xxxxxxxxxxxxxxxxxxxxxxxx # "gradio" (set when request) or your_personal_key
huggingface:
  token: hf_xxxxxxxxxxxxxxxxxxxxxxxx # required: huggingface token @ https://huggingface.co/settings/tokens
dev: false
debug: true
log_file: logs/debug.log
model: text-davinci-003 # currently only support text-davinci-003, we will support more open-source LLMs in the future
use_completion: true
inference_mode: huggingface # local, huggingface or hybrid, prefer hybrid
local_deployment: minimal # minimal, standard or full, prefer full
num_candidate_models: 5
max_description_length: 100
proxy: http://127.0.0.1:1087 # optional: your proxy server "http://ip:port"
http_listen:
  host: 0.0.0.0
  port: 8004 # needs to be consistent with endpoint: `http://localhost:8004/` @ web/src/api/hugginggpt.ts line 9
local_inference_endpoint:
  host: localhost
  port: 8005
logit_bias:
  parse_task: 0.1
  choose_model: 5
tprompt:
  parse_task: >-
    #1 Task Planning Stage: The AI assistant can parse user input to several tasks: [{"task": task, "id": task_id, "dep": dependency_task_id, "args": {"text": text or
```

I notice that there are 3 lines in `lite.yaml` for the local inference endpoint (excerpted below). But I'm using `inference_mode: huggingface`, so theoretically it should not use these three lines. However, if I comment out these 3 lines, it throws an error, so I have to leave them there (uncommented). Since I'm using the "huggingface" inference mode, of course I have no local inference endpoint running on my machine, so why does it show an error?
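For reference, these are the three lines in question, copied from the `lite.yaml` above:

```yaml
# These three lines must stay present even with inference_mode: huggingface;
# commenting them out is what throws the error described above.
local_inference_endpoint:
  host: localhost
  port: 8005
```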
So, does anybody know what's going on?