vladmandic / automatic

SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
https://github.com/vladmandic/automatic
GNU Affero General Public License v3.0

[Issue]: Mixture tiling - unhashable type `list` #2778

Closed · lbeltrame closed this issue 8 months ago

lbeltrame commented 8 months ago

Issue Description

Aside from issue #2777, a separate problem occurs when using mixture tiling.

Model: Animagine 3 XL

Prompt (one line per tile):
1girl, long hair, blue eyes, cinematic composition, cinematic lighting, cinematic angle, anime artwork, anime style, key visual, vibrant, studio anime, highly detailed, anime coloring, newest, masterpiece, best quality
1girl, short hair, yellow eyes, cinematic composition, cinematic lighting, cinematic angle, anime artwork, anime style, key visual, vibrant, studio anime, highly detailed, anime coloring, newest, masterpiece, best quality

Parameters: X components: 2, X spacing: 0.5

transformers doesn't seem to like the input it is given, as this exception occurs:

05:00:36-814831 ERROR    gradio call: TypeError                                 
╭───────────────────── Traceback (most recent call last) ──────────────────────╮
│ /notebooks/automatic/modules/call_queue.py:31 in f                           │
│                                                                              │
│   30 │   │   │   try:                                                        │
│ ❱ 31 │   │   │   │   res = func(*args, **kwargs)                             │
│   32 │   │   │   │   progress.record_results(id_task, res)                   │
│                                                                              │
│ /notebooks/automatic/modules/txt2img.py:87 in txt2img                        │
│                                                                              │
│   86 │   p.script_args = args                                                │
│ ❱ 87 │   processed = modules.scripts.scripts_txt2img.run(p, *args)           │
│   88 │   if processed is None:                                               │
│                                                                              │
│ /notebooks/automatic/modules/scripts.py:500 in run                           │
│                                                                              │
│   499 │   │   parsed = p.per_script_args.get(script.title(), args[script.arg │
│ ❱ 500 │   │   processed = script.run(p, *parsed)                             │
│   501 │   │   s.record(script.title())                                       │
│                                                                              │
│ /notebooks/automatic/scripts/mixture_tiling.py:88 in run                     │
│                                                                              │
│   87 │   │   shared.log.debug(f'Tiling: args={p.task_args}')                 │
│ ❱ 88 │   │   processed: processing.Processed = processing.process_images(p)  │
│   89 │   │   # restore pipeline and params                                   │
│                                                                              │
│ /notebooks/automatic/modules/processing.py:768 in process_images             │
│                                                                              │
│    767 │   │   │   with context_hypertile_vae(p), context_hypertile_unet(p): │
│ ❱  768 │   │   │   │   res = process_images_inner(p)                         │
│    769                                                                       │
│                                                                              │
│                           ... 4 frames hidden ...                            │
│                                                                              │
│ /usr/local/lib/python3.10/dist-packages/diffusers/pipelines/stable_diffusion │
│                                                                              │
│    392 │   │   │   │   if isinstance(self, TextualInversionLoaderMixin):     │
│ ❱  393 │   │   │   │   │   prompt = self.maybe_convert_prompt(prompt, tokeni │
│    394                                                                       │
│                                                                              │
│ /usr/local/lib/python3.10/dist-packages/diffusers/loaders/textual_inversion. │
│                                                                              │
│   136 │   │                                                                  │
│ ❱ 137 │   │   prompts = [self._maybe_convert_prompt(p, tokenizer) for p in p │
│   138                                                                        │
│                                                                              │
│ /usr/local/lib/python3.10/dist-packages/diffusers/loaders/textual_inversion. │
│                                                                              │
│   136 │   │                                                                  │
│ ❱ 137 │   │   prompts = [self._maybe_convert_prompt(p, tokenizer) for p in p │
│   138                                                                        │
│                                                                              │
│ /usr/local/lib/python3.10/dist-packages/diffusers/loaders/textual_inversion. │
│                                                                              │
│   159 │   │   """                                                            │
│ ❱ 160 │   │   tokens = tokenizer.tokenize(prompt)                            │
│   161 │   │   unique_tokens = set(tokens)                                    │
│                                                                              │
│ /usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils.py:5 │
│                                                                              │
│    584 │   │   for i, token in enumerate(tokens):                            │
│ ❱  585 │   │   │   if token in no_split_token:                               │
│    586 │   │   │   │   tok_extended = self._added_tokens_decoder.get(self._a │
╰──────────────────────────────────────────────────────────────────────────────╯
TypeError: unhashable type: 'list'

Version Platform Description

04:40:11-578078 INFO Starting SD.Next
04:40:11-583778 INFO Logger: file="/notebooks/automatic/sdnext.log"
level=INFO size=2596258 mode=append
04:40:11-585001 INFO Python 3.10.10 on Linux
04:40:11-623486 INFO Version: app=sd.next updated=2024-01-30 hash=076e16c8
url=https://github.com/vladmandic/automatic/tree/dev
04:40:11-755913 INFO Updating main repository
04:40:12-162781 INFO Upgraded to version: 076e16c8 Tue Jan 30 01:40:35 2024 +0300
04:40:12-168144 INFO Platform: arch=x86_64 cpu=x86_64 system=Linux
release=5.19.0-45-generic python=3.10.10

Relevant log output

04:52:18-310248 ERROR    Exception: unhashable type: 'list'                     
04:52:18-311359 ERROR    Arguments: args=('task(xg85zb1a21z1zo5)', '1girl,  long
                         hair, blue eyes,  cinematic composition, cinematic     
                         lighting, cinematic angle, anime artwork, anime style, 
                         key visual, vibrant, studio anime,  highly detailed,   
                         anime coloring, newest, masterpiece, best quality      
                         \n1girl, short hair, yellow eyes,  cinematic           
                         composition, cinematic lighting, cinematic angle, anime
                         artwork, anime style, key visual, vibrant, studio      
                         anime,  highly detailed, anime coloring, newest,       
                         masterpiece, best quality \n', 'lowres, worst quality, 
                         low quality, bad anatomy, bad hands, text, error,      
                         missing fingers, extra digit, fewer digits, cropped,   
                         worst quality, low quality, normal quality, jpeg       
                         artifacts, signature, watermark, username, blurry,     
                         artist name,  photo, deformed, black and white,        
                         realism, disfigured, low contrast, lipgloss, long hair,
                         braid, medium hair, bob cut. lantern, from behind', [],
                         28, 2, 2, True, False, False, 1, 1, 7, 7, 0.7, 0, 2,   
                         2376152481.0, -1.0, 0, 0, 0, 1216, 832, True, 0.45, 2, 
                         'RealESRGAN 4x+ Anime6B', True, 20, 0, 0, 5, 0.8, '',  
                         '', False, 4, 0.95, False, 1, 1, False, 0.6, 1, [], 2, 
                         3, 1, 1, 0.8, 8, 64, True, 2, 1, 0.5, 0.5, False,      
                         False, 'positive', 'comma', 0, False, False, '',       
                         'None', True, 0, 'None', 2, True, 1, 0, 0, '', [], 0,  
                         '', [], 0, '', [], False, True, False, False, False,   
                         False, 0, [], 30, '', 4, [], 1, '', '', '', '', 'None',
                         16, 'None', 1, True, 'None', 2, True, 1, 0, True,      
                         'none', 3, 4, 0.25, 0.25, 'none', 0.5, None, False,    
                         False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '',
                         'ad_negative_prompt': '', 'ad_confidence': 0.75,       
                         'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0,        
                         'ad_mask_max_ratio': 1, 'ad_x_offset': 0,              
                         'ad_y_offset': 0, 'ad_dilate_erode': 4,                
                         'ad_mask_merge_invert': 'None', 'ad_mask_blur': 15,    
                         'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked':
                         True, 'ad_inpaint_only_masked_padding': 0,             
                         'ad_use_inpaint_width_height': False,                  
                         'ad_inpaint_width': 512, 'ad_inpaint_height': 512,     
                         'ad_use_steps': False, 'ad_steps': 28,                 
                         'ad_use_cfg_scale': False, 'ad_cfg_scale': 7,          
                         'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same 
                         checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same  
                         VAE', 'ad_use_sampler': False, 'ad_sampler': 'Default',
                         'ad_use_noise_multiplier': False,                      
                         'ad_noise_multiplier': 1, 'ad_use_clip_skip': False,   
                         'ad_clip_skip': 1, 'ad_restore_face': False,           
                         'ad_controlnet_model': 'None', 'ad_controlnet_module': 
                         'None', 'ad_controlnet_weight': 1,                     
                         'ad_controlnet_guidance_start': 0,                     
                         'ad_controlnet_guidance_end': 1, 'is_api': ()},        
                         {'ad_model': 'None', 'ad_prompt': '',                  
                         'ad_negative_prompt': '', 'ad_confidence': 0.3,        
                         'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0,        
                         'ad_mask_max_ratio': 1, 'ad_x_offset': 0,              
                         'ad_y_offset': 0, 'ad_dilate_erode': 4,                
                         'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4,     
                         'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked':
                         True, 'ad_inpaint_only_masked_padding': 32,            
                         'ad_use_inpaint_width_height': False,                  
                         'ad_inpaint_width': 512, 'ad_inpaint_height': 512,     
                         'ad_use_steps': False, 'ad_steps': 28,                 
                         'ad_use_cfg_scale': False, 'ad_cfg_scale': 7,          
                         'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same 
                         checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same  
                         VAE', 'ad_use_sampler': False, 'ad_sampler': 'Default',
                         'ad_use_noise_multiplier': False,                      
                         'ad_noise_multiplier': 1, 'ad_use_clip_skip': False,   
                         'ad_clip_skip': 1, 'ad_restore_face': False,           
                         'ad_controlnet_model': 'None', 'ad_controlnet_module': 
                         'None', 'ad_controlnet_weight': 1,                     
                         'ad_controlnet_guidance_start': 0,                     
                         'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 
                         "{'ad_model': 'face_yolov8n.pt', 'ad_prompt': '',      
                         'ad_negative_prompt': '', 'ad_confidence': 0.8,        
                         'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0,        
                         'ad_mask_max_ratio': 1, 'ad_x_offset': 0,              
                         'ad_y_offset': 0, 'ad_dilate_erode': 4,                
                         'ad_mask_merge_invert': 'None', 'ad_mask_blur': 15,    
                         'ad_denoising_strength': 0.35,                         
                         'ad_inpaint_only_masked': True,                        
                         'ad_inpaint_only_masked_padding': 0,                   
                         'ad_use_inpaint_width_height': False,                  
                         'ad_inpaint_width': 512, 'ad_inpaint_height': 512,     
                         'ad_use_steps': False, 'ad_steps': 28,                 
                         'ad_use_cfg_scale': False, 'ad_cfg_scale': 7,          
                         'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same 
                         checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same  
                         VAE', 'ad_use_sampler': False, 'ad_sampler': 'UniPC',  
                         'ad_use_noise_multiplier': False,                      
                         'ad_noise_multiplier': 1, 'ad_use_clip_skip': False,   
                         'ad_clip_skip': 1, 'ad_restore_face': False,           
                         'ad_controlnet_model': 'None', 'ad_controlnet_module': 
                         'None', 'ad_controlnet_weight': 1,                     
                         'ad_controlnet_guidance_start': 0,                     
                         'ad_controlnet_guidance_end': 1, 'is_api': ()}",       
                         "{'ad_model': 'None', 'ad_prompt': '',                 
                         'ad_negative_prompt': '', 'ad_confidence': 0.3,        
                         'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0,        
                         'ad_mask_max_ratio': 1, 'ad_x_offset': 0,              
                         'ad_y_offset': 0, 'ad_dilate_erode': 4,                
                         'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4,     
                         'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked':
                         True, 'ad_inpaint_only_masked_padding': 32,            
                         'ad_use_inpaint_width_height': False,                  
                         'ad_inpaint_width': 512, 'ad_inpaint_height': 512,     
                         'ad_use_steps': False, 'ad_steps': 28,                 
                         'ad_use_cfg_scale': False, 'ad_cfg_scale': 7,          
                         'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same 
                         checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same  
                         VAE', 'ad_use_sampler': False, 'ad_sampler': 'UniPC',  
                         'ad_use_noise_multiplier': False,                      
                         'ad_noise_multiplier': 1, 'ad_use_clip_skip': False,   
                         'ad_clip_skip': 1, 'ad_restore_face': False,           
                         'ad_controlnet_model': 'None', 'ad_controlnet_module': 
                         'None', 'ad_controlnet_weight': 1,                     
                         'ad_controlnet_guidance_start': 0,                     
                         'ad_controlnet_guidance_end': 1, 'is_api': ()}",       
                         'False', 'keyword prompt', 'None', 'False', 'False',   
                         True, False, 1, False, False, False, 1.1, 1.5, 100,    
                         0.7, False, False, True, False, False, 0,              
                         'Gustavosta/MagicPrompt-Stable-Diffusion', '', False,  
                         False, False, '', 1, True, False, '', 'Lerp', False,   
                         'NCNF:1,1,1,1,1,1,1,1,0,0,0,1,1,1,1,0,0\nNCNF=disable_O
                         UT03-04-05_OUT10-11:1,1,1,1,1,1,1,1,0,0,0,1,1,1,1,0,0\n
                         \nNIN:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nNIN=disable_IN
                         -ALL_MID00:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\n\nNOFACE:
                         1,1,1,1,1,1,1,1,0,0,0,1,1,1,1,1,1\nFACE:0,0,0,0,0,0,0,0
                         .2,1,1,0.7,0,0,0,0,0,0\nBG:0,0,0,0.5,1,1,0,1,0,0,0,0,0,
                         0,0,1,1\nUP-BODY:0,1,1,1,1,0.5,0,1,1,0,0,0,0,0.5,0.5,0,
                         0.5\nBG2:0,0,0.5,1,0.7,1,0.7,0.7,0.4,0,0,0,0,0,0,0.5,1\
                         nBOOB:0.5,1,1,1,0,0,0,0.5,1,0,0,1,0.3,0,0,0,1\nBODY:0,0
                         .5,0.5,1,1,0.5,0.5,0,0.5,0.5,0.2,1,1,1,1,1,1\nGAFU:0.5,
                         0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\n\nANGRYFACE:0,0,0,0,0,
                         0,0,0,1,1,1,1,0,0,0,0,0\nNIKKORI:0,0,0,0,0,0,0,0,1,1,1,
                         1,0,0,0,0,0\nTHINKINGFACE:0,0,0,0,0,0,0,0,1,1,1,1,0,0,0
                         ,0,0\nEYES:0,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0\n\n\n\nLyN
                         OFACE:1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,1,0,0,0,0,1,1,1,1,1
                         ,1\nLyFACE:1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,1,0.4,0,
                         0,0,0,0,0\nLyRiki:1,0,0,0,0,0,0,0,3,0,0,0,0,1,0,0,0,0,0
                         ,0,0,0,0,0,0,0\n\nKOUZU:1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,1
                         ,1,1,1,1,1\nNOOUT:0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1\n',
                          True, 0, 'values', '0,0.25,0.5,0.75,1', 'Block ID',   
                         'IN05-OUT05', 'none', '', '0.5,1',                     
                         'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09
                         ,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT0
                         6,OUT07,OUT08,OUT09,OUT10,OUT11', 1.0, 'black', '20',  
                         False,                                                 
                         'ATTNDEEPON:IN05-OUT05:attn:1\n\nATTNDEEPOFF:IN05-OUT05
                         :attn:0\n\nPROJDEEPOFF:IN05-OUT05:proj:0\n\nXYZ:::1\n',
                          False, False, False, False, False, 'Matrix',          
                         'Columns', 'Mask', 'Prompt', '1,1', '0.2', False,      
                         False, False, 'Attention', [False], '0', '0', '0.4',   
                         None, '0', '0', False) kwargs={}                       
04:52:18-323204 ERROR    gradio call: TypeError                                 
╭───────────────────── Traceback (most recent call last) ──────────────────────╮
│ /notebooks/automatic/modules/call_queue.py:31 in f                           │
│                                                                              │
│   30 │   │   │   try:                                                        │
│ ❱ 31 │   │   │   │   res = func(*args, **kwargs)                             │
│   32 │   │   │   │   progress.record_results(id_task, res)                   │
│                                                                              │
│ /notebooks/automatic/modules/txt2img.py:87 in txt2img                        │
│                                                                              │
│   86 │   p.script_args = args                                                │
│ ❱ 87 │   processed = modules.scripts.scripts_txt2img.run(p, *args)           │
│   88 │   if processed is None:                                               │
│                                                                              │
│ /notebooks/automatic/modules/scripts.py:500 in run                           │
│                                                                              │
│   499 │   │   parsed = p.per_script_args.get(script.title(), args[script.arg │
│ ❱ 500 │   │   processed = script.run(p, *parsed)                             │
│   501 │   │   s.record(script.title())                                       │
│                                                                              │
│ /notebooks/automatic/scripts/mixture_tiling.py:88 in run                     │
│                                                                              │
│   87 │   │   shared.log.debug(f'Tiling: args={p.task_args}')                 │
│ ❱ 88 │   │   processed: processing.Processed = processing.process_images(p)  │
│   89 │   │   # restore pipeline and params                                   │
│                                                                              │
│ /notebooks/automatic/modules/processing.py:768 in process_images             │
│                                                                              │
│    767 │   │   │   with context_hypertile_vae(p), context_hypertile_unet(p): │
│ ❱  768 │   │   │   │   res = process_images_inner(p)                         │
│    769                                                                       │
│                                                                              │
│                           ... 4 frames hidden ...                            │
│                                                                              │
│ /usr/local/lib/python3.10/dist-packages/diffusers/pipelines/stable_diffusion │
│                                                                              │
│    392 │   │   │   │   if isinstance(self, TextualInversionLoaderMixin):     │
│ ❱  393 │   │   │   │   │   prompt = self.maybe_convert_prompt(prompt, tokeni │
│    394                                                                       │
│                                                                              │
│ /usr/local/lib/python3.10/dist-packages/diffusers/loaders/textual_inversion. │
│                                                                              │
│   136 │   │                                                                  │
│ ❱ 137 │   │   prompts = [self._maybe_convert_prompt(p, tokenizer) for p in p │
│   138                                                                        │
│                                                                              │
│ /usr/local/lib/python3.10/dist-packages/diffusers/loaders/textual_inversion. │
│                                                                              │
│   136 │   │                                                                  │
│ ❱ 137 │   │   prompts = [self._maybe_convert_prompt(p, tokenizer) for p in p │
│   138                                                                        │
│                                                                              │
│ /usr/local/lib/python3.10/dist-packages/diffusers/loaders/textual_inversion. │
│                                                                              │
│   159 │   │   """                                                            │
│ ❱ 160 │   │   tokens = tokenizer.tokenize(prompt)                            │
│   161 │   │   unique_tokens = set(tokens)                                    │
│                                                                              │
│ /usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils.py:5 │
│                                                                              │
│    584 │   │   for i, token in enumerate(tokens):                            │
│ ❱  585 │   │   │   if token in no_split_token:                               │
│    586 │   │   │   │   tok_extended = self._added_tokens_decoder.get(self._a │
╰──────────────────────────────────────────────────────────────────────────────╯
TypeError: unhashable type: 'list'
04:52:18-476645 INFO     High memory utilization: GPU=88% RAM=32% {'ram':       
                         {'used': 14.22, 'total': 44.08}, 'gpu': {'used': 13.9, 
                         'total': 15.73}, 'retries': 0, 'oom': 0}               
05:00:35-304068 INFO     High memory utilization: GPU=88% RAM=32% {'ram':       
                         {'used': 14.22, 'total': 44.08}, 'gpu': {'used': 13.9, 
                         'total': 15.73}, 'retries': 0, 'oom': 0}               
05:00:35-769867 ERROR    Pipeline switch error: from=StableDiffusionXLPipeline  
                         to=StableDiffusionTilingPipeline                       
                         StableDiffusionTilingPipeline.__init__() missing 1     
                         required positional argument: 'safety_checker'         
05:00:35-771258 ERROR    Pipeline switch: TypeError                             
╭───────────────────── Traceback (most recent call last) ──────────────────────╮
│ /notebooks/automatic/modules/sd_models.py:1009 in switch_pipe                │
│                                                                              │
│   1008 │   │   │   │   │   components_skipped.append(item)                   │
│ ❱ 1009 │   │   │   new_pipe = cls(**pipe_dict)                               │
│   1010 │   │   │   switch_mode = 'auto'                                      │
╰──────────────────────────────────────────────────────────────────────────────╯
TypeError: StableDiffusionTilingPipeline.__init__() missing 1 required positional argument: 'safety_checker'
05:00:36-406898 INFO     High memory utilization: GPU=88% RAM=32% {'ram':       
                         {'used': 14.22, 'total': 44.08}, 'gpu': {'used': 13.9, 
                         'total': 15.73}, 'retries': 0, 'oom': 0}               
05:00:36-802446 ERROR    Exception: unhashable type: 'list'                     
05:00:36-803520 ERROR    Arguments: args=('task(1ofrqo161xjapvk)', '1girl,  long
                         hair, blue eyes,  cinematic composition, cinematic     
                         lighting, cinematic angle, anime artwork, anime style, 
                         key visual, vibrant, studio anime,  highly detailed,   
                         anime coloring, newest, masterpiece, best quality      
                         \n1girl, short hair, yellow eyes,  cinematic           
                         composition, cinematic lighting, cinematic angle, anime
                         artwork, anime style, key visual, vibrant, studio      
                         anime,  highly detailed, anime coloring, newest,       
                         masterpiece, best quality \n', 'lowres, worst quality, 
                         low quality, bad anatomy, bad hands, text, error,      
                         missing fingers, extra digit, fewer digits, cropped,   
                         worst quality, low quality, normal quality, jpeg       
                         artifacts, signature, watermark, username, blurry,     
                         artist name,  photo, deformed, black and white,        
                         realism, disfigured, low contrast, lipgloss, long hair,
                         braid, medium hair, bob cut. lantern, from behind', [],
                         28, 2, 2, True, False, False, 1, 1, 7, 7, 0.7, 0, 2,   
                         2376152481.0, -1.0, 0, 0, 0, 1216, 832, True, 0.45, 2, 
                         'RealESRGAN 4x+ Anime6B', True, 20, 0, 0, 5, 0.8, '',  
                         '', False, 4, 0.95, False, 1, 1, False, 0.6, 1, [], 2, 
                         3, 1, 1, 0.8, 8, 64, True, 2, 1, 0.5, 0.5, False,      
                         False, 'positive', 'comma', 0, False, False, '',       
                         'None', True, 0, 'None', 2, True, 1, 0, 0, '', [], 0,  
                         '', [], 0, '', [], False, True, False, False, False,   
                         False, 0, [], 30, '', 4, [], 1, '', '', '', '', 'None',
                         16, 'None', 1, True, 'None', 2, True, 1, 0, True,      
                         'none', 3, 4, 0.25, 0.25, 'none', 0.5, None, False,    
                         False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '',
                         'ad_negative_prompt': '', 'ad_confidence': 0.75,       
                         'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0,        
                         'ad_mask_max_ratio': 1, 'ad_x_offset': 0,              
                         'ad_y_offset': 0, 'ad_dilate_erode': 4,                
                         'ad_mask_merge_invert': 'None', 'ad_mask_blur': 15,    
                         'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked':
                         True, 'ad_inpaint_only_masked_padding': 0,             
                         'ad_use_inpaint_width_height': False,                  
                         'ad_inpaint_width': 512, 'ad_inpaint_height': 512,     
                         'ad_use_steps': False, 'ad_steps': 28,                 
                         'ad_use_cfg_scale': False, 'ad_cfg_scale': 7,          
                         'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same 
                         checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same  
                         VAE', 'ad_use_sampler': False, 'ad_sampler': 'Default',
                         'ad_use_noise_multiplier': False,                      
                         'ad_noise_multiplier': 1, 'ad_use_clip_skip': False,   
                         'ad_clip_skip': 1, 'ad_restore_face': False,           
                         'ad_controlnet_model': 'None', 'ad_controlnet_module': 
                         'None', 'ad_controlnet_weight': 1,                     
                         'ad_controlnet_guidance_start': 0,                     
                         'ad_controlnet_guidance_end': 1, 'is_api': ()},        
                         {'ad_model': 'None', 'ad_prompt': '',                  
                         'ad_negative_prompt': '', 'ad_confidence': 0.3,        
                         'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0,        
                         'ad_mask_max_ratio': 1, 'ad_x_offset': 0,              
                         'ad_y_offset': 0, 'ad_dilate_erode': 4,                
                         'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4,     
                         'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked':
                         True, 'ad_inpaint_only_masked_padding': 32,            
                         'ad_use_inpaint_width_height': False,                  
                         'ad_inpaint_width': 512, 'ad_inpaint_height': 512,     
                         'ad_use_steps': False, 'ad_steps': 28,                 
                         'ad_use_cfg_scale': False, 'ad_cfg_scale': 7,          
                         'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same 
                         checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same  
                         VAE', 'ad_use_sampler': False, 'ad_sampler': 'Default',
                         'ad_use_noise_multiplier': False,                      
                         'ad_noise_multiplier': 1, 'ad_use_clip_skip': False,   
                         'ad_clip_skip': 1, 'ad_restore_face': False,           
                         'ad_controlnet_model': 'None', 'ad_controlnet_module': 
                         'None', 'ad_controlnet_weight': 1,                     
                         'ad_controlnet_guidance_start': 0,                     
                         'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 
                         "{'ad_model': 'face_yolov8n.pt', 'ad_prompt': '',      
                         'ad_negative_prompt': '', 'ad_confidence': 0.8,        
                         'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0,        
                         'ad_mask_max_ratio': 1, 'ad_x_offset': 0,              
                         'ad_y_offset': 0, 'ad_dilate_erode': 4,                
                         'ad_mask_merge_invert': 'None', 'ad_mask_blur': 15,    
                         'ad_denoising_strength': 0.35,                         
                         'ad_inpaint_only_masked': True,                        
                         'ad_inpaint_only_masked_padding': 0,                   
                         'ad_use_inpaint_width_height': False,                  
                         'ad_inpaint_width': 512, 'ad_inpaint_height': 512,     
                         'ad_use_steps': False, 'ad_steps': 28,                 
                         'ad_use_cfg_scale': False, 'ad_cfg_scale': 7,          
                         'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same 
                         checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same  
                         VAE', 'ad_use_sampler': False, 'ad_sampler': 'UniPC',  
                         'ad_use_noise_multiplier': False,                      
                         'ad_noise_multiplier': 1, 'ad_use_clip_skip': False,   
                         'ad_clip_skip': 1, 'ad_restore_face': False,           
                         'ad_controlnet_model': 'None', 'ad_controlnet_module': 
                         'None', 'ad_controlnet_weight': 1,                     
                         'ad_controlnet_guidance_start': 0,                     
                         'ad_controlnet_guidance_end': 1, 'is_api': ()}",       
                         "{'ad_model': 'None', 'ad_prompt': '',                 
                         'ad_negative_prompt': '', 'ad_confidence': 0.3,        
                         'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0,        
                         'ad_mask_max_ratio': 1, 'ad_x_offset': 0,              
                         'ad_y_offset': 0, 'ad_dilate_erode': 4,                
                         'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4,     
                         'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked':
                         True, 'ad_inpaint_only_masked_padding': 32,            
                         'ad_use_inpaint_width_height': False,                  
                         'ad_inpaint_width': 512, 'ad_inpaint_height': 512,     
                         'ad_use_steps': False, 'ad_steps': 28,                 
                         'ad_use_cfg_scale': False, 'ad_cfg_scale': 7,          
                         'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same 
                         checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same  
                         VAE', 'ad_use_sampler': False, 'ad_sampler': 'UniPC',  
                         'ad_use_noise_multiplier': False,                      
                         'ad_noise_multiplier': 1, 'ad_use_clip_skip': False,   
                         'ad_clip_skip': 1, 'ad_restore_face': False,           
                         'ad_controlnet_model': 'None', 'ad_controlnet_module': 
                         'None', 'ad_controlnet_weight': 1,                     
                         'ad_controlnet_guidance_start': 0,                     
                         'ad_controlnet_guidance_end': 1, 'is_api': ()}",       
                         'False', 'keyword prompt', 'None', 'False', 'False',   
                         True, False, 1, False, False, False, 1.1, 1.5, 100,    
                         0.7, False, False, True, False, False, 0,              
                         'Gustavosta/MagicPrompt-Stable-Diffusion', '', False,  
                         False, False, '', 1, True, False, '', 'Lerp', False,   
                         'NCNF:1,1,1,1,1,1,1,1,0,0,0,1,1,1,1,0,0\nNCNF=disable_O
                         UT03-04-05_OUT10-11:1,1,1,1,1,1,1,1,0,0,0,1,1,1,1,0,0\n
                         \nNIN:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nNIN=disable_IN
                         -ALL_MID00:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\n\nNOFACE:
                         1,1,1,1,1,1,1,1,0,0,0,1,1,1,1,1,1\nFACE:0,0,0,0,0,0,0,0
                         .2,1,1,0.7,0,0,0,0,0,0\nBG:0,0,0,0.5,1,1,0,1,0,0,0,0,0,
                         0,0,1,1\nUP-BODY:0,1,1,1,1,0.5,0,1,1,0,0,0,0,0.5,0.5,0,
                         0.5\nBG2:0,0,0.5,1,0.7,1,0.7,0.7,0.4,0,0,0,0,0,0,0.5,1\
                         nBOOB:0.5,1,1,1,0,0,0,0.5,1,0,0,1,0.3,0,0,0,1\nBODY:0,0
                         .5,0.5,1,1,0.5,0.5,0,0.5,0.5,0.2,1,1,1,1,1,1\nGAFU:0.5,
                         0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\n\nANGRYFACE:0,0,0,0,0,
                         0,0,0,1,1,1,1,0,0,0,0,0\nNIKKORI:0,0,0,0,0,0,0,0,1,1,1,
                         1,0,0,0,0,0\nTHINKINGFACE:0,0,0,0,0,0,0,0,1,1,1,1,0,0,0
                         ,0,0\nEYES:0,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0\n\n\n\nLyN
                         OFACE:1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,1,0,0,0,0,1,1,1,1,1
                         ,1\nLyFACE:1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,1,0.4,0,
                         0,0,0,0,0\nLyRiki:1,0,0,0,0,0,0,0,3,0,0,0,0,1,0,0,0,0,0
                         ,0,0,0,0,0,0,0\n\nKOUZU:1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,1
                         ,1,1,1,1,1\nNOOUT:0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1\n',
                          True, 0, 'values', '0,0.25,0.5,0.75,1', 'Block ID',   
                         'IN05-OUT05', 'none', '', '0.5,1',                     
                         'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09
                         ,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT0
                         6,OUT07,OUT08,OUT09,OUT10,OUT11', 1.0, 'black', '20',  
                         False,                                                 
                         'ATTNDEEPON:IN05-OUT05:attn:1\n\nATTNDEEPOFF:IN05-OUT05
                         :attn:0\n\nPROJDEEPOFF:IN05-OUT05:proj:0\n\nXYZ:::1\n',
                          False, False, False, False, False, 'Matrix',          
                         'Columns', 'Mask', 'Prompt', '1,1', '0.2', False,      
                         False, False, 'Attention', [False], '0', '0', '0.4',   
                         None, '0', '0', False) kwargs={}                       
05:00:36-814831 ERROR    gradio call: TypeError                                 
╭───────────────────── Traceback (most recent call last) ──────────────────────╮
│ /notebooks/automatic/modules/call_queue.py:31 in f                           │
│                                                                              │
│   30 │   │   │   try:                                                        │
│ ❱ 31 │   │   │   │   res = func(*args, **kwargs)                             │
│   32 │   │   │   │   progress.record_results(id_task, res)                   │
│                                                                              │
│ /notebooks/automatic/modules/txt2img.py:87 in txt2img                        │
│                                                                              │
│   86 │   p.script_args = args                                                │
│ ❱ 87 │   processed = modules.scripts.scripts_txt2img.run(p, *args)           │
│   88 │   if processed is None:                                               │
│                                                                              │
│ /notebooks/automatic/modules/scripts.py:500 in run                           │
│                                                                              │
│   499 │   │   parsed = p.per_script_args.get(script.title(), args[script.arg │
│ ❱ 500 │   │   processed = script.run(p, *parsed)                             │
│   501 │   │   s.record(script.title())                                       │
│                                                                              │
│ /notebooks/automatic/scripts/mixture_tiling.py:88 in run                     │
│                                                                              │
│   87 │   │   shared.log.debug(f'Tiling: args={p.task_args}')                 │
│ ❱ 88 │   │   processed: processing.Processed = processing.process_images(p)  │
│   89 │   │   # restore pipeline and params                                   │
│                                                                              │
│ /notebooks/automatic/modules/processing.py:768 in process_images             │
│                                                                              │
│    767 │   │   │   with context_hypertile_vae(p), context_hypertile_unet(p): │
│ ❱  768 │   │   │   │   res = process_images_inner(p)                         │
│    769                                                                       │
│                                                                              │
│                           ... 4 frames hidden ...                            │
│                                                                              │
│ /usr/local/lib/python3.10/dist-packages/diffusers/pipelines/stable_diffusion │
│                                                                              │
│    392 │   │   │   │   if isinstance(self, TextualInversionLoaderMixin):     │
│ ❱  393 │   │   │   │   │   prompt = self.maybe_convert_prompt(prompt, tokeni │
│    394                                                                       │
│                                                                              │
│ /usr/local/lib/python3.10/dist-packages/diffusers/loaders/textual_inversion. │
│                                                                              │
│   136 │   │                                                                  │
│ ❱ 137 │   │   prompts = [self._maybe_convert_prompt(p, tokenizer) for p in p │
│   138                                                                        │
│                                                                              │
│ /usr/local/lib/python3.10/dist-packages/diffusers/loaders/textual_inversion. │
│                                                                              │
│   136 │   │                                                                  │
│ ❱ 137 │   │   prompts = [self._maybe_convert_prompt(p, tokenizer) for p in p │
│   138                                                                        │
│                                                                              │
│ /usr/local/lib/python3.10/dist-packages/diffusers/loaders/textual_inversion. │
│                                                                              │
│   159 │   │   """                                                            │
│ ❱ 160 │   │   tokens = tokenizer.tokenize(prompt)                            │
│   161 │   │   unique_tokens = set(tokens)                                    │
│                                                                              │
│ /usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils.py:5 │
│                                                                              │
│    584 │   │   for i, token in enumerate(tokens):                            │
│ ❱  585 │   │   │   if token in no_split_token:                               │
│    586 │   │   │   │   tok_extended = self._added_tokens_decoder.get(self._a │
╰──────────────────────────────────────────────────────────────────────────────╯
TypeError: unhashable type: 'list'

Backend

Diffusers

Branch

Dev

Model

SD-XL


vladmandic commented 8 months ago

Mixture tiling sets the prompt to be a list, while any normal pipeline expects a str, so if the tiling pipeline is not set, this error is expected. I'm closing this and continuing in #2777, as that is the root cause.
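
For anyone landing here from the same traceback: the final `TypeError` is not specific to diffusers or transformers. It is Python rejecting a list in a set-membership test, which is what happens once a per-tile prompt list reaches a code path that expects a single prompt string. Below is a minimal, hedged sketch of that failure mode; the set contents and variable names are illustrative only and mimic the shape of the tokenizer loop shown in the traceback, not its actual implementation.

```python
# Minimal sketch of the failure mode: a set-membership test hashes the
# candidate, and a Python list is unhashable. Names are illustrative.
no_split_token = {"<|startoftext|>", "<|endoftext|>"}  # hypothetical added-token set

# What the tokenizer loop expects to see: plain string tokens.
for token in ["1girl", ",", "long", "hair"]:
    _ = token in no_split_token  # fine: str is hashable

# What it effectively receives when a list of per-tile prompts slips through
# unconverted: the "token" is itself a list.
bad_token = ["1girl, long hair, ...", "1girl, short hair, ..."]
try:
    _ = bad_token in no_split_token
except TypeError as exc:
    print(exc)  # -> unhashable type: 'list'
```

In this report the list reaches the regular StableDiffusionXLPipeline because the switch to StableDiffusionTilingPipeline fails earlier in the log (missing `safety_checker`), which is why the discussion continues in #2777.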