Bedoshady opened 6 months ago
This can be resolved by modifying the my_open function in the "block_requests.py" file under the modules directory.
```python
# Kindly provided by our friend WizardLM-30B
def my_open(*args, **kwargs):
    filename = str(args[0])
    if filename.endswith("index.html"):
        with original_open(*args, **kwargs) as f:
            file_contents = f.read()
        file_contents = file_contents.replace("cdnjs.cloudflare.com", "127.0.0.1")
        file_contents = file_contents.replace(
            "</head>",
            '\n        <script src="file/js/katex/katex.min.js"></script>'
            '\n        <script src="file/js/katex/auto-render.min.js"></script>'
            '\n        <script src="file/js/highlightjs/highlight.min.js"></script>'
            '\n        <script src="file/js/highlightjs/highlightjs-copy.min.js"></script>'
            "\n        <script>hljs.addPlugin(new CopyButtonPlugin());</script>"
            "\n    </head>",
        )
        return io.StringIO(file_contents)
    else:
        return original_open(*args, **kwargs)
```
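For context, this snippet relies on block_requests.py having saved a reference to the builtin `open` under the name `original_open` and installed `my_open` in its place. A minimal, self-contained sketch of that monkey-patching pattern (the wrapper body here is illustrative, not the exact webui code):

```python
import builtins
import io

original_open = builtins.open  # keep a handle on the real open()

def my_open(*args, **kwargs):
    filename = str(args[0])
    if filename.endswith("index.html"):
        # Read via the real open(), rewrite the contents in memory...
        with original_open(*args, **kwargs) as f:
            file_contents = f.read()
        file_contents = file_contents.replace("cdnjs.cloudflare.com", "127.0.0.1")
        # ...and hand back a file-like object with the modified text.
        return io.StringIO(file_contents)
    return original_open(*args, **kwargs)

builtins.open = my_open  # all later open() calls now go through the wrapper
```

Anything in the process that later calls `open("…/index.html")` transparently receives the rewritten contents.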
Though depending on how old your installation is, you may need to omit the newer replace call:
```python
# Kindly provided by our friend WizardLM-30B
def my_open(*args, **kwargs):
    filename = str(args[0])
    if filename.endswith("index.html"):
        with original_open(*args, **kwargs) as f:
            file_contents = f.read()
        file_contents = file_contents.replace("cdnjs.cloudflare.com", "127.0.0.1")
        return io.StringIO(file_contents)
    else:
        return original_open(*args, **kwargs)
```
I had to do this after updating to the latest versions of PyTorch, flash-attention 2, and Gradio.
I don't know how you figured it out, but it worked. Can you tell me how you debugged this error? It wasn't clear to me.
The most recent call in the traceback is:

```
File "H:\Downloads\text-generation-webui-main\modules\block_requests.py", line 46, in my_open
    file_contents = file_contents.replace(b'\t\t<script\n\t\t\tsrc="https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.3.9/iframeResizer.contentWindow.min.js"\n\t\t\tasync\n\t\t></script>', b'')
TypeError: replace() argument 1 must be str, not bytes
```
First I had to understand why this was being done. The pull request mentions that it's to remove the call to Cloudflare (seemingly a privacy measure to help prevent fingerprinting or tracking).

The original replace call removed the entire line, and for some reason this no longer behaves the same way with the latest version of Gradio. The error says the first argument must be a str, not bytes. Switching this to use StringIO and removing the binary (b) prefix from each string literal resolves the issue.
While I am not completely sure on the exact change that happened on Gradio's end, it might have to do with how they handle storing configs per session now instead of bundling them. Or it could be that they changed how headers are added (sync instead of async). I could be wrong on both accounts.
This fix results in a "not found" error in the network trace, but the benefit is that it is simpler to read and debug and is less hacky.
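Why returning a StringIO works at all: io.StringIO is a drop-in stand-in for a text-mode file object, so callers that expect `open()` to hand back something with `read()` and context-manager support keep working. A generic illustration (not the webui code):

```python
import io

# StringIO wraps an in-memory str in the text-file interface.
f = io.StringIO("<head></head>")
first = f.read()  # behaves like a text-mode file's read()
f.seek(0)         # rewind, just like a real file
with f:           # supports the context-manager protocol as well
    second = f.read()
```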
Actually, it is not Gradio but Jinja that causes the problem. Starting with Jinja 3.1.3, templates are opened in text mode instead of binary mode.
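A quick way to see why that mode change breaks the old patch: the same file read in text mode yields str, in binary mode bytes, and str.replace() rejects bytes patterns. This is a generic illustration (temp file and contents are made up):

```python
import os
import tempfile

# Write a small file, then read it back in both modes.
path = os.path.join(tempfile.mkdtemp(), "template.html")
with open(path, "w") as f:
    f.write("<head></head>")

text = open(path, "r").read()   # str   -- what Jinja 3.1.3+ gets
data = open(path, "rb").read()  # bytes -- what the old patch assumed

ok = text.replace("<head>", "")  # str pattern on str: fine
try:
    text.replace(b"<head>", b"")  # bytes pattern on str
    raised = False
except TypeError:  # replace() argument 1 must be str, not bytes
    raised = True
```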
I have the same problem. I did a git pull and a pip install -r requirements.txt, and got this error. In requirements.txt it is Jinja2==3.1.2, but somehow 3.1.3 is installed. Completely no clue what's happening.
```
(textgen) user@host:~/miniconda3/envs/textgen/lib/python3.11/site-packages/jinja2$ pip show Jinja2
Name: Jinja2
Version: 3.1.2
Summary: A very fast and expressive template engine.
Home-page: https://palletsprojects.com/p/jinja/
Author: Armin Ronacher
Author-email: armin.ronacher@active-4.com
License: BSD-3-Clause
Location: /home/user/miniconda3/envs/textgen/lib/python3.11/site-packages
Requires: MarkupSafe
Required-by: altair, Flask, gradio, jupyter_server, jupyterlab, jupyterlab_server, llama_cpp_python, llama_cpp_python_cuda, llama_cpp_python_cuda_tensorcores, nbconvert, torch

(textgen) user@host:~/miniconda3/envs/textgen/lib/python3.11/site-packages/jinja2$ conda list | grep -i jinja
jinja2                    3.1.2          pypi_0    pypi

(textgen) user@host:~/miniconda3/envs/textgen/lib/python3.11/site-packages/jinja2$ python -c "import jinja2; print(jinja2.__path__, jinja2.__version__)"
['/home/user/miniconda3/envs/textgen/lib/python3.11/site-packages/jinja2'] 3.1.3
```
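A mismatch like this (pip metadata says 3.1.2 but the import reports 3.1.3) usually means Python is loading a different copy of the package than the one the installed metadata describes. A small helper to compare the two sides directly (`diagnose` is a hypothetical name, not part of any library):

```python
import importlib
import importlib.metadata

def diagnose(modname, distname=None):
    """Report where a module is imported from vs. what package metadata says."""
    mod = importlib.import_module(modname)
    loaded = getattr(mod, "__version__", "unknown")  # version baked into the code
    try:
        installed = importlib.metadata.version(distname or modname)
    except importlib.metadata.PackageNotFoundError:
        installed = "no metadata found"  # e.g. stdlib modules have no dist metadata
    return mod.__file__, loaded, installed

# Example (assumes Jinja2 is installed):
#   path, loaded, installed = diagnose("jinja2", "Jinja2")
#   If loaded != installed, the file at `path` is a stale or shadowing copy.
```

If the two versions disagree, checking `mod.__file__` against `sys.path` ordering usually reveals the shadowing copy.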
For me, starting update_wizard_windows.bat with Option A fixed it.
If you use a manual proxy server on Windows, you may need to set the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables:
```python
import os
from urllib.request import getproxies

proxies = getproxies()  # read the system proxy settings (registry, env, etc.)
if "http" in proxies:   # keys are absent when no proxy is configured
    os.environ["http_proxy"] = proxies["http"]
if "https" in proxies:
    os.environ["https_proxy"] = proxies["https"]
os.environ["no_proxy"] = "localhost, 127.0.0.1/8, ::1"
```
@Bedoshady this issue should be resolved now that https://github.com/oobabooga/text-generation-webui/pull/5976 has been merged.
Let me know if you have any questions, I'm happy to help.
Describe the bug
I tried installing the server and running it, which worked at first, but I couldn't load my Llama 3 models. I typed this command

```
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
```

then it stopped working. I tried different solutions but none of them worked.

Is there an existing issue for this?
Reproduction
Install the web text UI normally via Miniconda, then run this command:

```
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
```
Screenshot
No response
Logs
System Info