alpertunga-bile / prompt-generator-comfyui

Custom AI prompt generator node for ComfyUI
MIT License

The following error was printed using the female-positive_generator_v4 version. There are no issues with the V2 and V3 versions. Please provide an answer. Thank you! [bug] #13

Open DreamLoveBetty opened 5 months ago

DreamLoveBetty commented 5 months ago

The following error was printed when using the female-positive_generator_v4 version. There are no issues with the V2 and V3 versions. Please provide an answer. Thank you! Also, should we consider refining the model to reduce memory usage? Currently, inference with SDXL plus the V4 version exceeds 24 GB of memory usage.

```
Traceback (most recent call last):
  File "D:\Program\ComfyUI\custom_nodes\prompt-generator-comfyui\generator\generate.py", line 57, in try_wo_onnx_pipeline
    self.pipe = get_bettertransformer_pipeline(model_name=model_path)
  File "D:\Program\ComfyUI\custom_nodes\prompt-generator-comfyui\generator\model.py", line 103, in get_bettertransformer_pipeline
    model = get_model(model_name, use_device_map=True)
  File "D:\Program\ComfyUI\custom_nodes\prompt-generator-comfyui\generator\model.py", line 38, in get_model
    model = AutoModelForCausalLM.from_pretrained(
  File "D:\Program\ComfyUI\venv\lib\site-packages\transformers\models\auto\auto_factory.py", line 561, in from_pretrained
    return model_class.from_pretrained(
  File "D:\Program\ComfyUI\venv\lib\site-packages\transformers\modeling_utils.py", line 3565, in from_pretrained
    model.load_adapter(
  File "D:\Program\ComfyUI\venv\lib\site-packages\transformers\integrations\peft.py", line 180, in load_adapter
    peft_config = PeftConfig.from_pretrained(
  File "D:\Program\ComfyUI\venv\lib\site-packages\peft\config.py", line 151, in from_pretrained
    return cls.from_peft_type(**kwargs)
  File "D:\Program\ComfyUI\venv\lib\site-packages\peft\config.py", line 118, in from_peft_type
    return config_cls(**kwargs)
TypeError: LoraConfig.__init__() got an unexpected keyword argument 'layer_replication'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\Program\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "D:\Program\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "D:\Program\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "D:\Program\ComfyUI\custom_nodes\prompt-generator-comfyui\prompt_generator.py", line 265, in generate
    generator = Generator(model_path, is_accelerate)
  File "D:\Program\ComfyUI\custom_nodes\prompt-generator-comfyui\generator\generate.py", line 48, in __init__
    self.try_wo_onnx_pipeline(model_path)
  File "D:\Program\ComfyUI\custom_nodes\prompt-generator-comfyui\generator\generate.py", line 59, in try_wo_onnx_pipeline
    self.pipe = get_default_pipeline(model_path)
  File "D:\Program\ComfyUI\custom_nodes\prompt-generator-comfyui\generator\model.py", line 65, in get_default_pipeline
    model = get_model(model_name)
  File "D:\Program\ComfyUI\custom_nodes\prompt-generator-comfyui\generator\model.py", line 38, in get_model
    model = AutoModelForCausalLM.from_pretrained(
  File "D:\Program\ComfyUI\venv\lib\site-packages\transformers\models\auto\auto_factory.py", line 561, in from_pretrained
    return model_class.from_pretrained(
  File "D:\Program\ComfyUI\venv\lib\site-packages\transformers\modeling_utils.py", line 3565, in from_pretrained
    model.load_adapter(
  File "D:\Program\ComfyUI\venv\lib\site-packages\transformers\integrations\peft.py", line 180, in load_adapter
    peft_config = PeftConfig.from_pretrained(
  File "D:\Program\ComfyUI\venv\lib\site-packages\peft\config.py", line 151, in from_pretrained
    return cls.from_peft_type(**kwargs)
  File "D:\Program\ComfyUI\venv\lib\site-packages\peft\config.py", line 118, in from_peft_type
    return config_cls(**kwargs)
TypeError: LoraConfig.__init__() got an unexpected keyword argument 'layer_replication'
```
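
For reference, this `TypeError` usually indicates a `peft` version mismatch: `layer_replication` is a `LoraConfig` field added in a newer `peft` release, so an older installed `peft` rejects it when reading the adapter's `adapter_config.json`. Upgrading `peft` (`pip install -U peft`) is the clean fix; as a stopgap, the unrecognized key can be stripped from the adapter config. A minimal sketch of that stopgap, assuming a local adapter directory (the path below is illustrative, not a real install location):

```python
# Stopgap sketch: drop a config key the installed peft version does not
# recognize before loading the adapter. Upgrading peft is the cleaner fix.
import json
from pathlib import Path

# Illustrative path: point this at the downloaded v4 adapter directory.
cfg_path = Path(r"D:\models\female-positive_generator_v4\adapter_config.json")

cfg = json.loads(cfg_path.read_text())
# 'layer_replication' comes from a newer peft release; an older
# LoraConfig.__init__ raises TypeError on this unexpected keyword.
cfg.pop("layer_replication", None)
cfg_path.write_text(json.dumps(cfg, indent=2))
```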

alpertunga-bile commented 5 months ago

Hello, thanks for reporting. I hope my use of version numbers did not mislead you; I use versions to differentiate the models rather than to indicate which one is better.

Having said this, the v2 version is the most recently trained model and the v4 model is an experimental model. Training is not finished because the dataset is growing every day. Here is some information about the current models:

Actually, I am focused on training the v2 model because it trains faster and, thanks to its smaller size, more people can use it. The v4 model is experimental: it exists to check whether the dataset is good and large enough for a big model, and to see the outputs a big model produces. I haven't tried using the model myself because of my limited VRAM, and because I don't think it's ready to generate prompts yet. Because it is a LoRA model, I have to implement quantization and work out how to load it.
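
For the VRAM side, one possible direction is loading the base model in 4-bit with bitsandbytes before attaching the LoRA adapter. A minimal sketch of that idea, not the node's actual loading code; the model id and adapter path below are assumptions for illustration (the thread mentions westlake-7B-V2 as the base):

```python
# Sketch: attach the v4 LoRA adapter to a 4-bit quantized base model
# to reduce VRAM usage. Ids/paths are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "senseable/WestLake-7B-v2"          # assumed base model id
adapter_dir = "female-positive_generator_v4"  # assumed local adapter dir

bnb = BitsAndBytesConfig(
    load_in_4bit=True,                     # store weights in 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # compute still runs in fp16
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_dir)  # LoRA on top
```

A 7B base in 4-bit needs roughly 4-5 GB of VRAM instead of roughly 14 GB in fp16, which should leave room for SDXL alongside it.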

I am working on the quantization part of the problem right now. I hope the fix and the model will be ready soon. Thank you for reporting an error that I need to look into.

DreamLoveBetty commented 5 months ago

Thank you for your answer. Yes, I misunderstood before; the subconscious always thinks bigger is better... Haha. In addition, I actually loaded the "westlake-7B-V2" base model while using V4, but it still reported an error. However, your answer made me let go of my obsession. The V3 version is sufficient for my use; I had tried to compare it with llama3, but llama3 is also too large. T_T