rhohndorf / Auto-Llama-cpp

Uses Auto-GPT with Llama.cpp
MIT License
384 stars 68 forks

error in running docker build #22

Open jcl2023 opened 1 year ago

jcl2023 commented 1 year ago

Duplicates

Steps to reproduce 🕹

When I ran `docker run -p 80:3000 auto-llama1`, I got the following error:

```
Welcome to Auto-Llama! Enter the name of your AI and its role below. Entering nothing will load defaults.
Name your AI: For example, 'Entrepreneur-GPT'
AI Name: Traceback (most recent call last):
  File "/app/main.py", line 313, in <module>
    prompt = construct_prompt()
             ^^^^^^^^^^^^^^^^^^
  File "/app/main.py", line 205, in construct_prompt
    config = prompt_user()
             ^^^^^^^^^^^^^
  File "/app/main.py", line 231, in prompt_user
    ai_name = utils.clean_input("AI Name: ")
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/utils.py", line 3, in clean_input
    return input(prompt)
           ^^^^^^^^^^^^^
EOFError: EOF when reading a line
```

Any idea how to fix it?

Current behavior 😯

No response

Expected behavior 🤔

No response

Your prompt 📝

# Paste your prompt here
chiu0602 commented 1 year ago

It seems the application does not expose a port, so `-p` is not needed. Also, can you try the command below? It enables an interactive TTY for communicating with the app.

`docker run -it auto-llama1`
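As a side note, the traceback itself hints at the cause: `utils.clean_input` calls Python's built-in `input()`, which raises `EOFError` when stdin is closed, as it is in a container started without `-i`/`-t`. A minimal sketch reproducing that failure outside Docker (the `StringIO` stand-in for a detached stdin is an assumption for illustration):

```python
import io
import sys

# Simulate a container with no interactive stdin attached:
# replace sys.stdin with an empty stream, so the first read hits EOF.
sys.stdin = io.StringIO("")

try:
    # This mirrors utils.clean_input("AI Name: ") from the traceback.
    ai_name = input("AI Name: ")
except EOFError:
    # input() raises EOFError as soon as readline() returns nothing,
    # which is exactly what `docker run` without -it produces.
    print("caught EOFError: no stdin attached")
```

Running with `-it` keeps stdin open and attaches a TTY, so `input()` blocks for the user's answer instead of hitting EOF immediately.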