Closed · enkie358 closed this issue 2 months ago
You have "Load in 4-bit" selected and the device is set to GPU, right?
It is set to load in 4-bit but the machine I was testing this on was set to CPU.
That model can only be used with a GPU.
But the program should not have let you proceed if you selected CPU. Did you modify the code?
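For illustration, here is a simplified sketch (not taggui's actual code) of the kind of guard described above: refuse to proceed when "Load in 4-bit" is combined with the CPU device, since bitsandbytes 4-bit quantization requires a CUDA GPU. The function name and parameters are hypothetical, and a real check would also verify that a GPU is actually present via `torch.cuda.is_available()`:

```python
def check_device_for_4bit(device: str, load_in_4bit: bool) -> None:
    """Raise before model loading if the selected settings cannot work together.

    Hypothetical guard: 4-bit (bitsandbytes) quantized models can only run on a
    CUDA GPU, so loading must be blocked when the device is set to CPU.
    """
    if load_in_4bit and device.lower() != "cuda":
        raise ValueError(
            '4-bit quantized models require a CUDA GPU: set the device to GPU '
            'or choose a non-quantized model.'
        )
```

With a guard like this in place, selecting CPU together with "Load in 4-bit" would produce a clear error message instead of a confusing failure deep inside model loading.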
No, I did not. Thank you for letting me know!
I get this error when I try to caption images with this model:
Loading internlm/internlm-xcomposer2-vl-7b-4bit...
Traceback (most recent call last):
  File "auto_captioning\captioning_thread.py", line 532, in run
  File "auto_captioning\captioning_thread.py", line 528, in run
  File "auto_captioning\captioning_thread.py", line 415, in run_captioning
  File "auto_captioning\captioning_thread.py", line 271, in load_processor_and_model
  File "transformers\models\auto\auto_factory.py", line 558, in from_pretrained
  File "transformers\modeling_utils.py", line 3451, in from_pretrained
OSError: internlm/internlm-xcomposer2-vl-7b-4bit does not appear to have a file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt or flax_model.msgpack.
Any advice would be appreciated!