intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, GraphRAG, DeepSpeed, Axolotl, etc.
Apache License 2.0
6.75k stars · 1.27k forks

Ubuntu 22.04, Run Text Generation WebUI on Intel GPU: Could not create share link #11653

Open taotao1-1 opened 4 months ago

taotao1-1 commented 4 months ago

Running on local URL: http://127.0.0.1:7860

Could not create share link. Please check your internet connection or our status page: https://status.gradio.app.
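For context, a minimal sketch of the Gradio behavior behind this warning (the interface, server name, and port here are illustrative assumptions, not the WebUI's actual code): with `share=True`, Gradio makes an outbound request to its tunnel service to mint a public URL, and that request fails if internet access or a proxy is broken; the local URL keeps working either way.

```python
import gradio as gr

def echo(text: str) -> str:
    return text

demo = gr.Interface(fn=echo, inputs="text", outputs="text")

# share=True asks Gradio's tunnel service for a public URL; if that
# outbound request fails (no internet, misconfigured proxy), Gradio
# prints "Could not create share link" while http://127.0.0.1:7860
# still serves locally. share=False skips the tunnel entirely.
demo.launch(server_name="127.0.0.1", server_port=7860, share=False)
```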


lei-sun-intel commented 4 months ago

Have you tried running `unset http_proxy` before launching your Python script?
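If unsetting the variable in the shell is inconvenient, one hedged alternative (an assumption on my part, not something ipex-llm or the WebUI documents) is to drop the proxy variables from the process environment at the top of the launch script, before Gradio starts:

```python
import os

# Remove proxy settings for this process only, so Gradio's
# share-link request is not routed through a misconfigured proxy.
# The tuple covers the common lower- and upper-case spellings.
for var in ("http_proxy", "https_proxy", "HTTP_PROXY", "HTTPS_PROXY"):
    os.environ.pop(var, None)
```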