[Closed] K-O-N-B closed this issue 1 month ago
I get the same error with the 8B model; it was still working fine about two weeks ago.
H:\ComfyUI-qiuye\ComfyUI\models\clip\siglip-so400m-patch14-384
H:\ComfyUI-qiuye\ComfyUI\models\LLM\Meta-Llama-3.1-8B
!!! Exception during processing !!! H:\ComfyUI-qiuye\ComfyUI\models\LLM\Meta-Llama-3.1-8B does not appear to have a file named config.json. Checkout 'https://huggingface.co/H:\ComfyUI-qiuye\ComfyUI\models\LLM\Meta-Llama-3.1-8B/tree/None' for available files.
Traceback (most recent call last):
  File "H:\ComfyUI-qiuye\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "H:\ComfyUI-qiuye\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "H:\ComfyUI-qiuye\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "H:\ComfyUI-qiuye\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "H:\ComfyUI-qiuye\ComfyUI\custom_nodes\Comfyui_CXH_joy_caption\Joy_caption_node.py", line 135, in gen
    self.loadCheckPoint()
  File "H:\ComfyUI-qiuye\ComfyUI\custom_nodes\Comfyui_CXH_joy_caption\Joy_caption_node.py", line 107, in loadCheckPoint
    tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, use_fast=False)
  File "H:\ComfyUI-qiuye\ComfyUI.ext\Lib\site-packages\transformers\models\auto\tokenization_auto.py", line 854, in from_pretrained
    config = AutoConfig.from_pretrained(
  File "H:\ComfyUI-qiuye\ComfyUI.ext\Lib\site-packages\transformers\models\auto\configuration_auto.py", line 976, in from_pretrained
    config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "H:\ComfyUI-qiuye\ComfyUI.ext\Lib\site-packages\transformers\configuration_utils.py", line 632, in get_config_dict
    config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "H:\ComfyUI-qiuye\ComfyUI.ext\Lib\site-packages\transformers\configuration_utils.py", line 689, in _get_config_dict
    resolved_config_file = cached_file(
  File "H:\ComfyUI-qiuye\ComfyUI.ext\Lib\site-packages\transformers\utils\hub.py", line 373, in cached_file
    raise EnvironmentError(
OSError: H:\ComfyUI-qiuye\ComfyUI\models\LLM\Meta-Llama-3.1-8B does not appear to have a file named config.json. Checkout 'https://huggingface.co/H:\ComfyUI-qiuye\ComfyUI\models\LLM\Meta-Llama-3.1-8B/tree/None' for available files.
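The OSError above means the local model folder does not contain a complete Hugging Face snapshot: `from_pretrained` requires at least `config.json` to be present. A minimal pre-flight check could look like this (the helper name `check_local_model` is hypothetical, not part of the plugin):

```python
from pathlib import Path


def check_local_model(path: Path) -> None:
    """Fail early with a clear message if a local HF model folder is
    incomplete. transformers' AutoTokenizer / AutoConfig .from_pretrained
    need config.json in the folder to treat it as a local model."""
    if not (path / "config.json").is_file():
        raise FileNotFoundError(
            f"{path} is missing config.json; re-download the full model "
            "snapshot (config.json, tokenizer files, weights) into this folder."
        )


# e.g. check_local_model(Path(r"H:\ComfyUI-qiuye\ComfyUI\models\LLM\Meta-Llama-3.1-8B"))
```

If the check fails, re-downloading the whole model repository into `models\LLM\Meta-Llama-3.1-8B` (not just the weight files) resolves this particular error.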
I'm at a loss: minicpm no longer works either. Could upgrading ComfyUI have broken every node in this plugin?
Got the same error. I also wanted to use this ComfyUI node to generate prompts from images.
This issue has been fixed by ihmily's commit, submitted 4 days ago. You just need to go to line 181 of joy_caption_node.py and replace "str(DEVICE)" with "device_type=DEVICE.type".
This error is easy to fix; ihmily has already submitted a commit, which is awaiting review. If you want to use it right away, you can make the change yourself: go to line 181 of joy_caption_node.py and replace "str(DEVICE)" with "device_type=DEVICE.type".
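A minimal sketch of why the one-line change works, assuming the plugin's module-level `DEVICE` is a `torch.device` such as `torch.device("cuda:0")`:

```python
import torch

# Fall back to CPU so this sketch runs on any machine.
DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Broken call (line 181 before the fix):
#   with torch.amp.autocast_mode.autocast(str(DEVICE), enabled=True):
# str(torch.device("cuda:0")) is "cuda:0", and autocast rejects device
# strings that carry an index, hence the RuntimeError.

# Fixed call: device.type drops the ":0" index and keeps only the backend
# name ("cuda" or "cpu"), which is exactly what autocast expects.
with torch.amp.autocast_mode.autocast(device_type=DEVICE.type, enabled=True):
    y = torch.ones(2, 2) @ torch.ones(2, 2)
```

Because `.type` is `"cpu"` on CPU-only machines, the same line also keeps working without a GPU.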
ok, just wait for the update.
Unused kwargs: ['_load_in_4bit', '_load_in_8bit', 'quant_method']. These kwargs are not used in <class 'transformers.utils.quantization_config.BitsAndBytesConfig'>.
We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).
!!! Exception during processing !!! User specified an unsupported autocast device_type 'cuda:0'
Traceback (most recent call last):
  File "H:\ComfyUI-qiuye\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "H:\ComfyUI-qiuye\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "H:\ComfyUI-qiuye\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "H:\ComfyUI-qiuye\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "H:\ComfyUI-qiuye\ComfyUI\custom_nodes\Comfyui_CXH_joy_caption\Joy_caption_node.py", line 181, in gen
    with torch.amp.autocast_mode.autocast(str(DEVICE), enabled=True):
  File "H:\ComfyUI-qiuye\ComfyUI.ext\Lib\site-packages\torch\amp\autocast_mode.py", line 241, in __init__
    raise RuntimeError(
RuntimeError: User specified an unsupported autocast device_type 'cuda:0'