Run any Llama 2 locally with gradio UI on GPU or CPU from anywhere (Linux/Windows/Mac). Use `llama2-wrapper` as your local llama2 backend for Generative Agents/Apps.
When we try to run llama2-webui in a Docker container, we face the following issue:
the local URL cannot be exposed outside the container, and if we use the Gradio share link we cannot determine the route, or which port and IP to map.
Running on local URL: http://127.0.0.1:7861
Running on public URL: https://748b3f0963e1e60211.gradio.live
Can someone help with this? We are configuring a Llama 2 server.
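A minimal sketch of what usually fixes this: by default Gradio binds to 127.0.0.1, which is unreachable from outside the container, so bind it to 0.0.0.0 (Gradio honors the `GRADIO_SERVER_NAME` environment variable) and publish the port with `-p`. The image name `llama2-webui` below is a placeholder for whatever image you built; 7861 matches the port in the log above.

```shell
# Bind Gradio to all interfaces inside the container and publish port 7861
# on the host. "llama2-webui" is a hypothetical image name -- substitute
# the image you actually built.
docker run \
  -e GRADIO_SERVER_NAME=0.0.0.0 \
  -p 7861:7861 \
  llama2-webui
```

After this, the UI should be reachable at http://<host-ip>:7861 without needing the gradio.live share link.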