lllyasviel / Omost


Whenever I run the program I get this error #70

Open · e7rnal opened this issue 3 weeks ago

e7rnal commented 3 weeks ago

```
(Omost) C:\Users\aj214\Desktop\Omost>python gradio_app.py
C:\Users\aj214\Desktop\Omost\lib_omost\pipeline.py:64: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  alphas_cumprod = torch.tensor(np.cumprod(alphas, axis=0), dtype=torch.float32)
Unload to CPU: CLIPTextModel
Unload to CPU: UNet2DConditionModel
Unload to CPU: AutoencoderKL
Unload to CPU: CLIPTextModel
Unused kwargs: ['_load_in_4bit', '_load_in_8bit', 'quant_method']. These kwargs are not used in <class 'transformers.utils.quantization_config.BitsAndBytesConfig'>.
Traceback (most recent call last):
  File "C:\Users\aj214\Desktop\Omost\gradio_app.py", line 75, in <module>
    llm_model = AutoModelForCausalLM.from_pretrained(
  File "C:\Users\aj214\.conda\envs\Omost\lib\site-packages\transformers\models\auto\auto_factory.py", line 563, in from_pretrained
    return model_class.from_pretrained(
  File "C:\Users\aj214\.conda\envs\Omost\lib\site-packages\transformers\modeling_utils.py", line 3703, in from_pretrained
    hf_quantizer.validate_environment(device_map=device_map)
  File "C:\Users\aj214\.conda\envs\Omost\lib\site-packages\transformers\quantizers\quantizer_bnb_8bit.py", line 86, in validate_environment
    raise ValueError(
ValueError: Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit the quantized model. If you want to dispatch the model on the CPU or the disk while keeping these modules in 32-bit, you need to set load_in_8bit_fp32_cpu_offload=True and pass a custom device_map to from_pretrained. Check https://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu for more details.
```

I am using an NVIDIA RTX 2050 with 4 GB of VRAM and 8 GB of shared GPU memory.
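
For context, the ValueError itself describes the workaround: the 8-bit quantized model does not fit in 4 GB of VRAM, so the layers that spill over must be allowed to stay on the CPU in fp32. A minimal sketch of that is below; the model ID is a placeholder for whatever gradio_app.py loads at line 75, and note that in current transformers the flag on BitsAndBytesConfig is spelled llm_int8_enable_fp32_cpu_offload (the error message uses an older name):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Allow modules that do not fit in GPU RAM to remain on the CPU in fp32
# instead of raising the ValueError shown in the traceback above.
quant_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_enable_fp32_cpu_offload=True,
)

llm_model = AutoModelForCausalLM.from_pretrained(
    "your/llm-model-id",              # placeholder: the model gradio_app.py uses
    quantization_config=quant_config,
    device_map="auto",                # lets accelerate split layers between GPU and CPU
)
```

Even with offloading, running an 8B-parameter LLM with only 4 GB of VRAM will likely be very slow, since most layers end up in system RAM; that may be the practical limit here rather than a bug.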

saifulbabo67646 commented 2 weeks ago

I'm facing the same error; any solution would be very helpful.