gokayfem / ComfyUI_VLM_nodes

Custom ComfyUI nodes for Vision Language Models, Large Language Models, Image to Music, Text to Music, Consistent and Random Creative Prompt Generation
Apache License 2.0
308 stars · 24 forks

change of model download path? #32

Closed RYG81 closed 4 months ago

RYG81 commented 4 months ago

Hi, I'm not sure if this is expected, but the download path for models keeps changing. Sometimes they download into the custom node folder, sometimes into the cache folder under the huggingface hub folder, and now we have a folder for them under the models folder. Is there any way to pin downloads to a specific folder?

gokayfem commented 4 months ago

I pushed an update to the repo yesterday; all of the files now download into models/LLavacheckpoints, which is more convenient.
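For anyone who wants to pin the location explicitly rather than rely on defaults, a minimal sketch is below. The function name and the ComfyUI-root argument are my own; `HF_HUB_CACHE` is the environment variable that `huggingface_hub` consults for its cache directory, so setting it before any model loads redirects subsequent downloads.

```python
import os
import tempfile
from pathlib import Path

def llava_checkpoints_dir(comfy_root: str) -> str:
    """Build the models/LLavacheckpoints path under a ComfyUI root and
    point the Hugging Face hub cache at it, so downloads land there."""
    target = Path(comfy_root) / "models" / "LLavacheckpoints"
    target.mkdir(parents=True, exist_ok=True)
    # huggingface_hub reads HF_HUB_CACHE when no explicit cache_dir is given
    os.environ["HF_HUB_CACHE"] = str(target)
    return str(target)

# demo with a throwaway root; in ComfyUI this would be the install directory
demo_dir = llava_checkpoints_dir(tempfile.mkdtemp())
```

This is a sketch of the general mechanism, not the repo's actual implementation.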

RYG81 commented 4 months ago

So how can I move my old downloads to this new location and save the download data and time?

gokayfem commented 4 months ago

You can move them and it will work; even if you don't move them, I think Hugging Face will still find them. Also, be careful to move only the files that start with 'files_for...'.

RYG81 commented 4 months ago

[screenshot] So I just move this folder's data into the new folder under the models folder?

gokayfem commented 4 months ago

Yes, you can move the folder files_for_uform_gen2_qwen to models/LLavacheckpoints. Don't forget to update the repository.
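The move itself can be done by hand in a file manager, but a small script avoids half-moved folders. This is an illustrative helper, not part of the repo; the folder names come from the thread above.

```python
import shutil
import tempfile
from pathlib import Path

def move_model_folder(src: str, dst_parent: str) -> Path:
    """Move one downloaded model folder into the new checkpoint location,
    leaving it untouched if it has already been moved."""
    src_path = Path(src)
    dst = Path(dst_parent) / src_path.name
    if dst.exists():
        return dst  # already moved on a previous run
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.move(str(src_path), str(dst))
    return dst

# demo with throwaway directories standing in for the old cache and
# the new models/LLavacheckpoints folder
old = Path(tempfile.mkdtemp()) / "files_for_uform_gen2_qwen"
old.mkdir()
(old / "model.safetensors").write_text("placeholder")
new_parent = Path(tempfile.mkdtemp()) / "LLavacheckpoints"
moved = move_model_folder(str(old), str(new_parent))
```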

RYG81 commented 4 months ago

Thanks for the guidance. Hope to see more vision model support added. All the best, and keep up the good work.

gokayfem commented 4 months ago

> thanks for the guidance. Hope to have more vision model support added. All the best and keep good work.

Which one do you want?

RYG81 commented 4 months ago

Suggestion for a vision model: https://huggingface.co/spaces/Vision-CAIR/MiniGPT-v2. Also, it keeps re-downloading models. I have created all the nodes and downloaded all the required models into the folder, but every time I run it, the models download again under the user's .cache\huggingface\hub folder. Please fix this; the downloads take very long and are hard to interrupt.

Also, I suggest printing the download folder location so that we know it's downloading to the right path.
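Printing the target directory is cheap to add. A sketch of how a node could report it, mirroring the precedence `huggingface_hub` itself uses to resolve its cache (`HF_HUB_CACHE` first, then `HF_HOME/hub`, then the default under the user's home); the function name is my own:

```python
import os

def resolved_download_dir() -> str:
    """Report where huggingface_hub will place downloads, following the
    library's precedence: HF_HUB_CACHE, then HF_HOME/hub, then the
    default ~/.cache/huggingface/hub."""
    if os.environ.get("HF_HUB_CACHE"):
        return os.environ["HF_HUB_CACHE"]
    if os.environ.get("HF_HOME"):
        return os.path.join(os.environ["HF_HOME"], "hub")
    return os.path.join(os.path.expanduser("~"), ".cache", "huggingface", "hub")

# a node could log this once at load time so users can see the target path
print("models will download to:", resolved_download_dir())
```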

RYG81 commented 4 months ago

I am getting this error when using Kosmos-2 and InternLM:

Error occurred when executing Kosmos2model:

Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated : 9.50 GiB
Requested : 508.10 MiB
Device limit : 8.00 GiB
Free (according to CUDA) : 0 bytes
PyTorch limit (set by user-supplied memory fraction) : 17179869184.00 GiB

File "D:\000AI\ComfyUI\ComfyUI\execution.py", line 149, in recursive_execute
    obj = class_def()
File "D:\000AI\ComfyUI\ComfyUI\custom_nodes\ComfyUI_VLM_nodes\nodes\kosmos2.py", line 55, in __init__
    self.predictor = KosmosModelPredictor()
File "D:\000AI\ComfyUI\ComfyUI\custom_nodes\ComfyUI_VLM_nodes\nodes\kosmos2.py", line 21, in __init__
    self.model = AutoModelForVision2Seq.from_pretrained(self.model_path).to(self.device)
File "D:\000AI\ComfyUI\python_embeded\Lib\site-packages\transformers\modeling_utils.py", line 2595, in to
    return super().to(*args, **kwargs)
File "D:\000AI\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1152, in to
    return self._apply(convert)
File "D:\000AI\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 802, in _apply
    module._apply(fn)
File "D:\000AI\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 802, in _apply
    module._apply(fn)
File "D:\000AI\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 802, in _apply
    module._apply(fn)
File "D:\000AI\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 825, in _apply
    param_applied = fn(param)
File "D:\000AI\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1150, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
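The arithmetic behind this error: 9.50 GiB already allocated plus a 508 MiB request exceeds the 8 GiB card. A quick back-of-the-envelope check (Kosmos-2 is roughly a 1.6B-parameter model; the exact count and the loading dtype used by the node are assumptions here):

```python
def weights_gib(num_params: float, bytes_per_param: int) -> float:
    """VRAM needed just to hold the weights; activations, buffers and
    the CUDA context come on top of this."""
    return num_params * bytes_per_param / 1024**3

# from_pretrained loads in float32 (4 bytes/param) unless a smaller
# torch_dtype is passed; float16 halves the footprint.
fp32_gib = weights_gib(1.6e9, 4)  # ≈ 5.96 GiB
fp16_gib = weights_gib(1.6e9, 2)  # ≈ 2.98 GiB
```

With other models already resident on an 8 GiB device, even the float32 weights alone leave little headroom, which is consistent with the traceback failing inside `.to(self.device)`.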

gokayfem commented 4 months ago

InternLM is a heavy model, even in its quantized version. Also, it's not downloading; it's fetching your old files from the root of this repository.

RYG81 commented 4 months ago

https://github.com/Sanster/VLM-demos — this repo lists many VLMs, in case there is a way to integrate all of them into ComfyUI.

RYG81 commented 4 months ago

Models keep re-downloading. I have used uform-gen2 before and have all the models in that location, but it still keeps downloading them.

https://github.com/gokayfem/ComfyUI_VLM_nodes/assets/10570236/c99498e1-b27c-4abc-b4cd-b33c331e044d
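One way a node could avoid re-downloading is to check the hub cache for an existing snapshot and, if found, load with `local_files_only=True`. This is a sketch, not the repo's code; the helper name is mine, the directory layout (`models--<org>--<name>/snapshots/<revision>`) is the real `huggingface_hub` cache layout, and the revision folder in the demo is fake.

```python
import tempfile
from pathlib import Path

def have_local_snapshot(cache_dir: str, repo_id: str) -> bool:
    """True when the hub cache already holds a snapshot for repo_id,
    so callers can pass local_files_only=True to from_pretrained and
    skip hitting the network entirely."""
    snapshots = Path(cache_dir) / ("models--" + repo_id.replace("/", "--")) / "snapshots"
    return snapshots.is_dir() and any(snapshots.iterdir())

# demo: fake a cached snapshot for one repo in a throwaway cache dir
# ("abc123" stands in for a real commit-hash revision folder)
cache = tempfile.mkdtemp()
rev = Path(cache) / "models--microsoft--kosmos-2-patch14-224" / "snapshots" / "abc123"
rev.mkdir(parents=True)
```

Setting the environment variable `HF_HUB_OFFLINE=1` is a coarser alternative that forces `huggingface_hub` to use only cached files.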