jiayev / GPT4V-Image-Captioner

GNU General Public License v3.0

HTTP Error: 500 Server Error: Internal Server Error for url: http://127.0.0.1:8000/v1/chat/completions #4

Closed huangxin1745 closed 10 months ago

huangxin1745 commented 10 months ago

I get an error when using VQA.

SleeeepyZhou commented 10 months ago

Could you share a screenshot of the error?

huangxin1745 commented 10 months ago

I downloaded chat again, then ran the install once more. Now this is what happens when I click switch:

Press any key to continue . . .
Running on local URL: http://127.0.0.1:8848

To create a public link, set share=True in launch().
Retrying...
========Use torch type as:torch.bfloat16 with device:cuda========

A matching Triton is not available, some optimizations will not be enabled. Error caught was: No module named 'triton'
Please 'pip install apex'
Retrying...
False

===================================BUG REPORT===================================
D:\GPT4V-Image-Captioner\GPT4V-Image-Captioner\myenv\lib\site-packages\bitsandbytes\cuda_setup\main.py:166: UserWarning: Welcome to bitsandbytes. For bug reports, please run

python -m bitsandbytes

warn(msg)

CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
The following directories listed in your path were found to be non-existent: {WindowsPath('/usr/local/cuda/lib64')}
DEBUG: Possible options found for libcudart.so: set()
CUDA SETUP: PyTorch settings found: CUDA_VERSION=121, Highest Compute Capability: 8.9.
CUDA SETUP: To manually override the PyTorch CUDA version please see: https://github.com/TimDettmers/bitsandbytes/blob/main/how_to_use_nonpytorch_cuda.md
CUDA SETUP: Loading binary D:\GPT4V-Image-Captioner\GPT4V-Image-Captioner\myenv\lib\site-packages\bitsandbytes\libbitsandbytes_cuda121.so...
argument of type 'WindowsPath' is not iterable
CUDA SETUP: Problem: The main issue seems to be that the main CUDA runtime library was not detected.
CUDA SETUP: Solution 1: To solve the issue the libcudart.so location needs to be added to the LD_LIBRARY_PATH variable
CUDA SETUP: Solution 1a): Find the cuda runtime library via: find / -name libcudart.so 2>/dev/null
CUDA SETUP: Solution 1b): Once the library is found add it to the LD_LIBRARY_PATH: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:FOUND_PATH_FROM_1a
CUDA SETUP: Solution 1c): For a permanent solution add the export from 1b into your .bashrc file, located at ~/.bashrc
CUDA SETUP: Solution 2: If no library was found in step 1a) you need to install CUDA.
CUDA SETUP: Solution 2a): Download CUDA install script: wget https://github.com/TimDettmers/bitsandbytes/blob/main/cuda_install.sh
CUDA SETUP: Solution 2b): Install desired CUDA version to desired location. The syntax is bash cuda_install.sh CUDA_VERSION PATH_TO_INSTALL_INTO.
CUDA SETUP: Solution 2b): For example, "bash cuda_install.sh 113 ~/local/" will download CUDA 11.3 and install into the folder ~/local

Traceback (most recent call last):
  File "D:\GPT4V-Image-Captioner\GPT4V-Image-Captioner\myenv\lib\site-packages\transformers\utils\import_utils.py", line 1382, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "D:\GPT4V-Image-Captioner\GPT4V-Image-Captioner\myenv\lib\site-packages\transformers\integrations\bitsandbytes.py", line 11, in <module>
    import bitsandbytes as bnb
  File "D:\GPT4V-Image-Captioner\GPT4V-Image-Captioner\myenv\lib\site-packages\bitsandbytes\__init__.py", line 6, in <module>
    from . import cuda_setup, utils, research
  File "D:\GPT4V-Image-Captioner\GPT4V-Image-Captioner\myenv\lib\site-packages\bitsandbytes\research\__init__.py", line 1, in <module>
    from . import nn
  File "D:\GPT4V-Image-Captioner\GPT4V-Image-Captioner\myenv\lib\site-packages\bitsandbytes\research\nn\__init__.py", line 1, in <module>
    from .modules import LinearFP8Mixed, LinearFP8Global
  File "D:\GPT4V-Image-Captioner\GPT4V-Image-Captioner\myenv\lib\site-packages\bitsandbytes\research\nn\modules.py", line 8, in <module>
    from bitsandbytes.optim import GlobalOptimManager
  File "D:\GPT4V-Image-Captioner\GPT4V-Image-Captioner\myenv\lib\site-packages\bitsandbytes\optim\__init__.py", line 6, in <module>
    from bitsandbytes.cextension import COMPILED_WITH_CUDA
  File "D:\GPT4V-Image-Captioner\GPT4V-Image-Captioner\myenv\lib\site-packages\bitsandbytes\cextension.py", line 20, in <module>
    raise RuntimeError('''
RuntimeError: CUDA Setup failed despite GPU being available. Please run the following command to get more information:

    python -m bitsandbytes

    Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
    to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
    and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "D:\GPT4V-Image-Captioner\GPT4V-Image-Captioner\cog_openai_api.py", line 407, in <module>
    load_mod(MODEL_PATH)
  File "D:\GPT4V-Image-Captioner\GPT4V-Image-Captioner\cog_openai_api.py", line 371, in load_mod
    model = AutoModelForCausalLM.from_pretrained(
  File "D:\GPT4V-Image-Captioner\GPT4V-Image-Captioner\myenv\lib\site-packages\transformers\models\auto\auto_factory.py", line 561, in from_pretrained
    return model_class.from_pretrained(
  File "D:\GPT4V-Image-Captioner\GPT4V-Image-Captioner\myenv\lib\site-packages\transformers\modeling_utils.py", line 3476, in from_pretrained
    from .integrations import get_keys_to_not_convert, replace_with_bnb_linear
  File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
  File "D:\GPT4V-Image-Captioner\GPT4V-Image-Captioner\myenv\lib\site-packages\transformers\utils\import_utils.py", line 1372, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "D:\GPT4V-Image-Captioner\GPT4V-Image-Captioner\myenv\lib\site-packages\transformers\utils\import_utils.py", line 1384, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import transformers.integrations.bitsandbytes because of the following error (look up to see its traceback):

    CUDA Setup failed despite GPU being available. Please run the following command to get more information:

    python -m bitsandbytes

    Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
    to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
    and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues

Retrying... Retrying... Retrying...
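
For reference, the RuntimeError above asks you to run python -m bitsandbytes and check whether the CUDA libraries can be located. A minimal sketch of a similar check in plain Python is below; it assumes the project's myenv environment is activated and uses only standard torch/bitsandbytes attributes (nothing here is taken from this repository's code).

    # Run inside the activated myenv environment, e.g.: python check_cuda_setup.py
    import torch

    print("torch:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("torch built against CUDA:", torch.version.cuda)
        print("GPU:", torch.cuda.get_device_name(0))

    try:
        import bitsandbytes as bnb  # the import that fails in the traceback above
        print("bitsandbytes:", getattr(bnb, "__version__", "unknown"))
    except Exception as exc:
        print("bitsandbytes import failed:", exc)

If torch reports CUDA as available but the bitsandbytes import still fails, the problem is the bitsandbytes installation itself rather than the GPU or driver.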

SleeeepyZhou commented 10 months ago

bitsandbytes wasn't installed properly. Try installing it again; if that doesn't work, I'll look into uploading a bitsandbytes package later.
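
In case it helps others hitting the same thing: a quick way to confirm what actually got installed into myenv after a reinstall is sketched below. The pip command in the comment is only an illustration of a forced reinstall, not a step taken from this thread.

    # Illustrative only: a forced reinstall inside myenv might look like
    #   pip install --force-reinstall bitsandbytes
    # Afterwards, confirm the installed version and that the import no longer fails.
    from importlib.metadata import PackageNotFoundError, version

    try:
        print("bitsandbytes", version("bitsandbytes"))
    except PackageNotFoundError:
        print("bitsandbytes is not installed in this environment")
        raise SystemExit(1)

    import bitsandbytes  # will raise the same "CUDA Setup failed" error if the install is still broken
    print("import OK")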

huangxin1745 commented 10 months ago

The problem is solved after reinstalling, thanks.

SleeeepyZhou commented 10 months ago

Great 🎉

CCpt5 commented 10 months ago

Update: OpenAI has switched to a pay-as-you-go billing system since I last used it. My account had $0 in it; I put $10 in and now it's working.

Thanks for your efforts! (here because of the reddit post btw).

== Having this problem as well. I've tried manually installing from the requirements.txt file without luck. A new version of bitsandbytes (0.42.0) was released on Jan 8 (the first update in years?). Perhaps a version older than 0.42.0 is needed? I'm also trying to verify that it's not something with my network blocking the connection...
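
Regarding the network question: a quick probe of the endpoint from the issue title can separate "server unreachable" from "server reached but returning 500". The sketch below assumes the local API is running on port 8000; the payload follows the generic OpenAI chat-completions shape, and the model name is just a placeholder, not taken from this project.

    # Hypothetical probe of the local endpoint; adjust the model name to whatever
    # the local server actually serves (placeholder below).
    import requests

    url = "http://127.0.0.1:8000/v1/chat/completions"
    payload = {
        "model": "cogvlm-chat",  # placeholder, not confirmed by this thread
        "messages": [{"role": "user", "content": "hello"}],
    }

    try:
        resp = requests.post(url, json=payload, timeout=30)
        print("HTTP", resp.status_code)
        print(resp.text[:500])  # a 500 body often hints at the server-side failure
    except requests.ConnectionError as exc:
        print("could not reach the server (blocked or not running):", exc)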