I followed the installation method using install_windows.bat on Windows 11, then ran start_windows.bat and installed the missing dependencies as shown in the image below.
Error message:
configuration_cogagent.py: 100%|██████████████████████████████████████████████████████████| 1.80k/1.80k [00:00<?, ?B/s]
.gitattributes: 100%|█████████████████████████████████████████████████████████████| 1.52k/1.52k [00:00<00:00, 1.30MB/s]
README.md: 100%|██████████████████████████████████████████████████████████████████████████| 7.46k/7.46k [00:00<?, ?B/s]
generation_config.json: 100%|█████████████████████████████████████████████████████████| 137/137 [00:00<00:00, 91.2kB/s]
config.json: 100%|████████████████████████████████████████████████████████████████| 1.10k/1.10k [00:00<00:00, 1.09MB/s]
cross_visual.py: 100%|████████████████████████████████████████████████████████████| 32.6k/32.6k [00:00<00:00, 32.6MB/s]
model-00008-of-00008.safetensors: 100%|███████████████████████████████████████████| 1.78G/1.78G [01:10<00:00, 25.2MB/s]
I:\Image Captioning\GPT4V-Image-Captioner\myenv\lib\site-packages\huggingface_hub\file_download.py:149: UserWarning: `huggingface_hub` cache-system uses symlinks by default to efficiently store duplicated files but your machine does not support them in I:\Image Captioning\GPT4V-Image-Captioner. Caching files will still work but in a degraded version that might require more space on your disk. This warning can be disabled by setting the `HF_HUB_DISABLE_SYMLINKS_WARNING` environment variable. For more details, see https://huggingface.co/docs/huggingface_hub/how-to-cache#limitations.
To support symlinks on Windows, you either need to activate Developer Mode or to run Python as an administrator. In order to see activate developer mode, see this article: https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development
warnings.warn(message)
model.safetensors.index.json: 100%|█████████████████████████████████████████████████| 177k/177k [00:00<00:00, 6.77MB/s]
modeling_cogagent.py: 100%|███████████████████████████████████████████████████████████████| 42.5k/42.5k [00:00<?, ?B/s]
util.py: 100%|████████████████████████████████████████████████████████████████████████████| 19.0k/19.0k [00:00<?, ?B/s]
visual.py: 100%|██████████████████████████████████████████████████████████████████████████| 5.45k/5.45k [00:00<?, ?B/s]
model-00003-of-00008.safetensors: 100%|███████████████████████████████████████████| 4.98G/4.98G [03:02<00:00, 27.3MB/s]
model-00007-of-00008.safetensors: 100%|███████████████████████████████████████████| 4.95G/4.95G [03:13<00:00, 25.5MB/s]
model-00004-of-00008.safetensors: 100%|███████████████████████████████████████████| 4.98G/4.98G [03:59<00:00, 20.8MB/s]
model-00001-of-00008.safetensors: 100%|███████████████████████████████████████████| 4.97G/4.97G [04:13<00:00, 19.6MB/s]
model-00006-of-00008.safetensors: 100%|███████████████████████████████████████████| 4.95G/4.95G [04:17<00:00, 19.2MB/s]
model-00002-of-00008.safetensors: 100%|███████████████████████████████████████████| 4.98G/4.98G [05:11<00:00, 16.0MB/s]
model-00005-of-00008.safetensors: 100%|███████████████████████████████████████████| 4.98G/4.98G [05:14<00:00, 15.8MB/s]
Fetching 18 files: 100%|███████████████████████████████████████████████████████████████| 18/18 [05:16<00:00, 17.61s/it]
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
0it [00:00, ?it/s]
bin I:\Image Captioning\GPT4V-Image-Captioner\myenv\lib\site-packages\bitsandbytes\libbitsandbytes_cpu.so
I:\Image Captioning\GPT4V-Image-Captioner\myenv\lib\site-packages\bitsandbytes\cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. "
function 'cadam32bit_grad_fp32' not found
[2024-02-14 03:14:37,126] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-02-14 03:14:37,674] torch.distributed.elastic.multiprocessing.redirects: [WARNING] NOTE: Redirects are currently not supported in Windows or MacOs.
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 2.1.1+cu121 with CUDA 1201 (you have 2.1.1+cpu)
Python 3.10.11 (you have 3.10.11)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
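Based on the xFormers warning above, it looks like the venv ended up with the CPU-only PyTorch wheel (2.1.1+cpu) while xFormers expects the cu121 build. As a quick sanity check (just a sketch on my part; it assumes the script is run with the venv's interpreter, e.g. myenv\Scripts\python.exe), something like this shows which build is active:

```python
# Check which PyTorch build the venv is using.
# Assumption: run with the venv interpreter (myenv\Scripts\python.exe).
import torch

print(torch.__version__)          # "2.1.1+cpu" would match the warning above
print(torch.version.cuda)         # None for a CPU-only wheel
print(torch.cuda.is_available())  # False without a CUDA-enabled build
```

The "2.1.1+cpu" reported by xFormers and the bitsandbytes "compiled without GPU support" warning both point at the same CPU-only install.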
This is the output from installing the dependencies through the API, as shown in the photo:
install_windows.bat output: