dvlab-research / MGM

Official repo for "Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models"

When using stable-diffusion: AttributeError: 'NoneType' object has no attribute 'tokenize' #131

Open · ALR-alr opened this issue 4 days ago

ALR-alr commented 4 days ago

```
Traceback (most recent call last):
  File "/home/anaconda3/envs/mgm/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/anaconda3/envs/mgm/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/Projects/miniGemini/MGM/mgm/serve/cli.py", line 239, in <module>
    main(args)
  File "/home/Projects/miniGemini/MGM/mgm/serve/cli.py", line 215, in main
    output_img = pipe(prompt, negative_prompt=common_neg_prompt).images[0]
  File "/home/anaconda3/envs/mgm/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/anaconda3/envs/mgm/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py", line 1138, in __call__
    ) = self.encode_prompt(
  File "/home/anaconda3/envs/mgm/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py", line 406, in encode_prompt
    prompt = self.maybe_convert_prompt(prompt, tokenizer)
  File "/home/anaconda3/envs/mgm/lib/python3.10/site-packages/diffusers/loaders/textual_inversion.py", line 137, in maybe_convert_prompt
    prompts = [self._maybe_convert_prompt(p, tokenizer) for p in prompts]
  File "/home/anaconda3/envs/mgm/lib/python3.10/site-packages/diffusers/loaders/textual_inversion.py", line 137, in <listcomp>
    prompts = [self._maybe_convert_prompt(p, tokenizer) for p in prompts]
  File "/home/anaconda3/envs/mgm/lib/python3.10/site-packages/diffusers/loaders/textual_inversion.py", line 161, in _maybe_convert_prompt
    tokens = tokenizer.tokenize(prompt)
AttributeError: 'NoneType' object has no attribute 'tokenize'
```

This is with torch==2.0.1 and diffusers==0.26.3, but my CUDA is 12.2, which doesn't match torch==2.0.1+cu117. Could that mismatch be the cause?
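
For context, the sketch below is a standalone way to see where the None comes from; the checkpoint ID is a placeholder and this is not the exact code in mgm/serve/cli.py. If one of the tokenizer components the pipeline expects ends up as None after loading, encode_prompt() eventually calls .tokenize() on None and raises exactly this error.

```python
# Standalone sketch (placeholder checkpoint ID, not necessarily what cli.py loads).
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# SDXL carries two tokenizers; if one the pipeline expects is None
# (e.g. a missing tokenizer subfolder in the checkpoint), the prompt
# encoding step fails with the AttributeError above.
print(pipe.tokenizer, pipe.tokenizer_2)

image = pipe("a photo of a cat", negative_prompt="blurry").images[0]
```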

ALR-alr commented 4 days ago

I followed the command in the diffusers README, `pip install --upgrade diffusers[torch]`, to make sure my versions match, but it didn't help. I found a similar question at https://github.com/huggingface/diffusers/issues, but no good solution there. Finally I replaced StableDiffusionXLPipeline with StableDiffusionPipeline, and it works now.
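
For anyone hitting the same thing, the swap looks roughly like this. It is only a sketch: the checkpoint ID and the prompts are placeholders, not the values cli.py actually builds.

```python
# Workaround sketch: use the SD pipeline instead of the SDXL one.
import torch
from diffusers import StableDiffusionPipeline  # was: StableDiffusionXLPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder checkpoint ID
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a photo of a cat"                # placeholder for the prompt cli.py builds
common_neg_prompt = "blurry, low quality"  # placeholder negative prompt

output_img = pipe(prompt, negative_prompt=common_neg_prompt).images[0]
output_img.save("output.png")
```

Note this only sidesteps the SDXL text-encoder path; it doesn't explain why the SDXL pipeline's tokenizer ends up as None in the first place.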