ximinng / DiffSketcher

[NIPS 2023] Official implementation for "DiffSketcher: Text Guided Vector Sketch Synthesis through Latent Diffusion Models" https://arxiv.org/abs/2306.14685
https://ximinng.github.io/DiffSketcher-project/
MIT License

OverflowError: cannot fit 'int' into an index-sized integer #14

Closed ysm2000 closed 8 months ago

ysm2000 commented 8 months ago

DDIMScheduler {
  "_class_name": "DDIMScheduler",
  "_diffusers_version": "0.20.2",
  "beta_end": 0.012,
  "beta_schedule": "scaled_linear",
  "beta_start": 0.00085,
  "clip_sample": false,
  "clip_sample_range": 1.0,
  "dynamic_thresholding_ratio": 0.995,
  "num_train_timesteps": 1000,
  "prediction_type": "epsilon",
  "rescale_betas_zero_snr": false,
  "sample_max_value": 1.0,
  "set_alpha_to_one": false,
  "skip_prk_steps": true,
  "steps_offset": 1,
  "thresholding": false,
  "timestep_spacing": "leading",
  "trained_betas": null
}

prompt: a photo of Sydney opera house
negative_prompt: None

Traceback (most recent call last):
  File "/home/DiffSketcher/run_painterly_render.py", line 129, in <module>
    main(args, seed_range)
  File "/home/DiffSketcher/run_painterly_render.py", line 48, in main
    pipe.painterly_rendering(args.prompt)
  File "/home/DiffSketcher/pipelines/painter/diffsketcher_pipeline.py", line 241, in painterly_rendering
    target_file, attention_map = self.extract_ldm_attn(prompt)
  File "/home/DiffSketcher/pipelines/painter/diffsketcher_pipeline.py", line 120, in extract_ldm_attn
    outputs = self.diffusion(prompt=[prompts],
  File "/home/anaconda3/envs/diffsketcher/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/DiffSketcher/methods/painter/diffsketcher/ASDS_pipeline.py", line 141, in __call__
    text_embeddings = self._encode_prompt(
  File "/home/anaconda3/envs/diffsketcher/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py", line 307, in _encode_prompt
    text_inputs = self.tokenizer(
  File "/home/anaconda3/envs/diffsketcher/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2829, in __call__
    encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs)
  File "/home/anaconda3/envs/diffsketcher/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2915, in _call_one
    return self.batch_encode_plus(
  File "/home/anaconda3/envs/diffsketcher/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 3106, in batch_encode_plus
    return self._batch_encode_plus(
  File "/home/anaconda3/envs/diffsketcher/lib/python3.10/site-packages/transformers/tokenization_utils.py", line 807, in _batch_encode_plus
    batch_outputs = self._batch_prepare_for_model(
  File "/home/anaconda3/envs/diffsketcher/lib/python3.10/site-packages/transformers/tokenization_utils.py", line 879, in _batch_prepare_for_model
    batch_outputs = self.pad(
  File "/home/anaconda3/envs/diffsketcher/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 3313, in pad
    outputs = self._pad(
  File "/home/anaconda3/envs/diffsketcher/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 3682, in _pad
    encoded_inputs["attention_mask"] = encoded_inputs["attention_mask"] + [0] * difference
OverflowError: cannot fit 'int' into an index-sized integer

Sorry to bother you, but I'd like to know why this error occurred. Is it because my GPU memory is insufficient (I only have a 2080), or is there another reason?
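For what it's worth, a minimal sketch of the failing mechanism (independent of DiffSketcher itself): the final frame is list repetition, `[0] * difference`, and CPython refuses to build a list longer than `sys.maxsize` items. In `transformers` this typically happens when the tokenizer's `model_max_length` is left at its "very large" sentinel while `padding="max_length"` is requested, so the padding `difference` becomes astronomically large. The sentinel value and the CLIP context length of 77 mentioned below are assumptions, not something confirmed in this thread.

```python
# Reproduce the OverflowError mechanism without any ML libraries.
VERY_LARGE_INTEGER = int(1e30)  # stand-in for transformers' "no limit" sentinel

seq_len = 7                               # tokens in the encoded prompt
difference = VERY_LARGE_INTEGER - seq_len # padding length that would be requested

raised = False
try:
    # Same shape as the failing line in tokenization_utils_base._pad:
    attention_mask = [1] * seq_len + [0] * difference
except OverflowError as e:
    raised = True
    msg = str(e)

print(raised, msg)  # True cannot fit 'int' into an index-sized integer
```

If this is the cause, passing an explicit bound to the tokenizer (e.g. `truncation=True, max_length=77`, where 77 is CLIP's usual context length) or pinning `transformers`/`diffusers` to the versions in the repo's requirements would be the first things to try; both are suggestions, not confirmed fixes.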

ximinng commented 8 months ago

Hello ysm2000,

I'm not sure what caused the error, but I'm certain it won't run on a 2080 GPU. By the way, an Nvidia 3090 is a good choice. Enabling xformers and grad_checkpoint can greatly reduce VRAM usage.
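(In `diffusers`, these flags map to calls along the lines of `pipe.enable_xformers_memory_efficient_attention()` and `pipe.unet.enable_gradient_checkpointing()`; the exact method names depend on the diffusers version, so treat them as assumptions. The memory/compute trade-off behind gradient checkpointing can be sketched on a toy module with plain PyTorch:)

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

# Toy stand-in for a UNet block: with checkpointing, intermediate
# activations are discarded on the forward pass and recomputed during
# backward, trading extra compute for lower peak VRAM.
class Block(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

torch.manual_seed(0)
block = Block()
x = torch.randn(8, 64, requires_grad=True)

y_plain = block(x)                                  # keeps all activations
y_ckpt = checkpoint(block, x, use_reentrant=False)  # recomputes in backward

y_ckpt.sum().backward()  # gradients still flow normally
print(torch.allclose(y_plain, y_ckpt))  # True: same values, lower memory
```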

Best regards, Ximing

ysm2000 commented 8 months ago

> Hello ysm2000,
>
> I'm not sure what caused the error, but I'm certain it won't run on a 2080 GPU. By the way, an Nvidia 3090 is a good choice. Enabling xformers and grad_checkpoint can greatly reduce VRAM usage.
>
> Best regards, Ximing

Thank you for answering my question! I'll try again after switching to a larger GPU.