Bismuth-Consultancy-BV / MLOPs

Machine Learning Toolset for Houdini
BSD 3-Clause "New" or "Revised" License

'Convert Model' shelf tool broken in H20/MLOPs 3.0 #142

Open spelafort opened 6 months ago

spelafort commented 6 months ago

The Convert Model shelf tool appears broken in the newer version of MLOPs. Inputting a checkpoint directory (absolute or relative path, with or without a config file) produces the same error as providing no checkpoint file at all:

Traceback (most recent call last):
  File "C:\Users/USER/Documents/GitHub/MLOPs/scripts/python\mlops_utils.py", line 557, in on_accept
    convert_model.convert(
  File "C:\Users/USER/Documents/GitHub/MLOPs/scripts/python\sdpipeline\convert_model.py", line 12, in convert
    pipe = download_from_original_stable_diffusion_ckpt(
TypeError: download_from_original_stable_diffusion_ckpt() got an unexpected keyword argument 'checkpoint_path'

The traceback appears consistently whenever the 'Convert' button is clicked.
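For reference, checking the signature of the installed converter shows the mismatch. In recent diffusers builds the first parameter appears to have been renamed to `checkpoint_path_or_dict` (an assumption based on the current diffusers source), which would explain why every input produces the same `TypeError`:

```python
import inspect
from diffusers.pipelines.stable_diffusion.convert_from_ckpt import (
    download_from_original_stable_diffusion_ckpt,
)

# Newer diffusers builds expose `checkpoint_path_or_dict` rather than the
# `checkpoint_path` keyword that convert_model.py passes, hence the TypeError.
print(inspect.signature(download_from_original_stable_diffusion_ckpt))
```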

Tr1dae commented 5 months ago

Same for me - MLOPs 3 - Houdini 20.0.625

olop2 commented 5 months ago

Hi from France! I have spent some time debugging the Convert Model node, and I'm now able to convert my SDXL and even Turbo models!

Here is what I did; I hope it works for you too:

1) In "MLOPS/scripts/python/sdpipeline/convert_model.py", on line 13, change "checkpoint_path=checkpoint_file," to "checkpoint_path_or_dict=checkpoint_file," (see the sketch after this list).

2) Go to "/MLOPS/data/dependencies/python/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py". At line 1212 there is "StableDiffusionPipeline,"; add a new line below it with "StableDiffusionXLPipeline,".

3) Just after that, at line 1220, replace "pipeline_class = StableDiffusionPipeline" with "pipeline_class = StableDiffusionXLPipeline".

4) After that, Convert Model should work normally, but if you work with an XL model, don't forget to choose "StableDiffusionXL - Autodetect" (or select it manually) in the Solver tab of your pipeline node parameters.
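Here is a rough sketch of what the patched call in convert_model.py can end up looking like. Only the renamed keyword comes from step 1 above; the surrounding function shape, the `output_path`/`is_sdxl` parameters and the `from_safetensors` handling are assumptions for illustration. Passing `pipeline_class` directly to the converter would also avoid editing the bundled convert_from_ckpt.py from steps 2 and 3, since the function accepts it as an argument:

```python
# Sketch of a patched convert() in MLOPs/scripts/python/sdpipeline/convert_model.py.
# Everything except the checkpoint_path_or_dict rename is an assumption.
from diffusers import StableDiffusionXLPipeline
from diffusers.pipelines.stable_diffusion.convert_from_ckpt import (
    download_from_original_stable_diffusion_ckpt,
)

def convert(checkpoint_file, output_path, is_sdxl=True):
    pipe = download_from_original_stable_diffusion_ckpt(
        checkpoint_path_or_dict=checkpoint_file,  # step 1: renamed keyword
        from_safetensors=checkpoint_file.endswith(".safetensors"),
        # Passing the class here replaces the manual edits from steps 2-3:
        pipeline_class=StableDiffusionXLPipeline if is_sdxl else None,
    )
    pipe.save_pretrained(output_path)
```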

Finally, with my old 8 GB VRAM GPU I hit the famous "out of memory" message and could not cook an image even at 512x512 with the XL pipeline, so I looked for somewhere to pass a --lowvram argument like in ComfyUI, but I didn't find one and my Python skills are too limited. Instead (though I'm not really sure this is what fixed it), I simply reinstalled some dependencies (accelerate, transformers and xformers) with "pip install <dependency> -U" (I'm on Manjaro Linux), and after that my GPU memory behaves like ComfyUI in --lowvram mode. It just works, even at 1024x1024. Magical. I haven't tested other workflows yet, so I don't know what else works or doesn't.
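For what it's worth, there is no --lowvram switch in diffusers itself, but the pipelines expose built-in memory savers that behave similarly. A minimal sketch outside of MLOPs (the model path and prompt are placeholders; `enable_model_cpu_offload()` needs accelerate installed, which may be why the reinstall helped):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Placeholder model path; any converted SDXL checkpoint folder should work.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "path/to/converted_sdxl_model", torch_dtype=torch.float16
)
pipe.enable_attention_slicing()     # compute attention in smaller chunks
pipe.enable_model_cpu_offload()     # keep idle submodules in system RAM (requires accelerate)

image = pipe("a test render", height=1024, width=1024).images[0]
image.save("test.png")
```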

I'm sure there is a lot more that could be done with the MLOPs data and the dependency file to make it work better; maybe we just have to wait for a new update from the MLOPs team.

Cheers!