expected: loading the new .json example transformer workflow 'hello there, david!' and running it should produce a translation
outcome: the model was downloaded and loaded, then an error message appeared immediately
Logs
C:\Users\Public\.venv_hiddenswitch\Lib\site-packages\kornia\feature\lightglue.py:44: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
@torch.cuda.amp.custom_fwd(cast_inputs=torch.float32)
Starting server
To see the GUI go to: http://127.0.0.1:8188
config.json: 100%|████████████████████████████████████████████████████████████████████████████| 808/808 [00:00<?, ?B/s]
generation_config.json: 100%|█████████████████████████████████████████████████████████████████| 189/189 [00:00<?, ?B/s]
special_tokens_map.json: 100%|████████████████████████████████████████████████████████████| 3.55k/3.55k [00:00<?, ?B/s]
README.md: 100%|██████████████████████████████████████████████████████████████████████████| 7.65k/7.65k [00:00<?, ?B/s]
.gitattributes: 100%|█████████████████████████████████████████████████████████████████████| 1.22k/1.22k [00:00<?, ?B/s]
tokenizer_config.json: 100%|██████████████████████████████████████████████████████████████████| 564/564 [00:00<?, ?B/s]
sentencepiece.bpe.model: 100%|████████████████████████████████████████████████████| 4.85M/4.85M [00:00<00:00, 5.68MB/s]
tokenizer.json: 100%|█████████████████████████████████████████████████████████████| 17.3M/17.3M [00:01<00:00, 9.30MB/s]
pytorch_model.bin: 100%|██████████████████████████████████████████████████████████| 5.48G/5.48G [02:08<00:00, 42.5MB/s]
Fetching 9 files: 100%|██████████████████████████████████████████████████████████████████| 9/9 [02:09<00:00, 14.42s/it]
An error occurred while executing a workflow
Traceback (most recent call last):
File "C:\Users\Public\.venv_hiddenswitch\Lib\site-packages\comfy\cmd\execution.py", line 370, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Public\.venv_hiddenswitch\Lib\site-packages\comfy\cmd\execution.py", line 241, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Public\Python\Lib\contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\Public\.venv_hiddenswitch\Lib\site-packages\comfy\cmd\execution.py", line 217, in map_node_over_list
process_inputs(input_dict, i)
File "C:\Users\Public\.venv_hiddenswitch\Lib\site-packages\comfy\cmd\execution.py", line 206, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Public\.venv_hiddenswitch\Lib\site-packages\comfy_extras\nodes\nodes_language.py", line 265, in execute
kwargs_to_try = ({"torch_dtype": unet_dtype(supported_dtypes=(torch.bfloat16, torch.float16, torch.float32)),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Public\.venv_hiddenswitch\Lib\site-packages\comfy\model_management.py", line 724, in unet_dtype
if dt == torch.bfloat16 and should_use_bf16(device, model_params=model_params):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Public\.venv_hiddenswitch\Lib\site-packages\comfy\model_management.py", line 1164, in should_use_bf16
free_model_memory = maximum_vram_for_weights(device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Public\.venv_hiddenswitch\Lib\site-packages\comfy\model_management.py", line 693, in maximum_vram_for_weights
return get_total_memory(device) * 0.88 - minimum_inference_memory()
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Public\.venv_hiddenswitch\Lib\site-packages\comfy\model_management.py", line 149, in get_total_memory
_, mem_total_cuda = torch.cuda.mem_get_info(dev)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Public\.venv_hiddenswitch\Lib\site-packages\torch\cuda\memory.py", line 684, in mem_get_info
device = _get_device_index(device)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Public\.venv_hiddenswitch\Lib\site-packages\torch\cuda\_utils.py", line 38, in _get_device_index
return _torch_get_device_index(device, optional, allow_cpu)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Public\.venv_hiddenswitch\Lib\site-packages\torch\_utils.py", line 803, in _get_device_index
raise ValueError(
ValueError: Expected a torch.device with a specified index or an integer, but got:cuda
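The traceback shows `torch.cuda.mem_get_info(dev)` being handed a CUDA device with no index, which torch's `_get_device_index` rejects. A likely workaround (an assumption — this sketch is not comfy's actual code) is to normalize the device to carry an explicit index before the call:

```python
import torch

def with_index(device):
    """Return a torch.device that always carries an explicit index.

    Hypothetical helper illustrating the fix; comfy's get_total_memory
    would call this on `dev` before torch.cuda.mem_get_info(dev).
    """
    device = torch.device(device)
    if device.type == "cuda" and device.index is None:
        # Default to device 0 here; torch.cuda.current_device() would be
        # the usual choice once CUDA is initialized.
        return torch.device("cuda", 0)
    return device

print(with_index("cuda"))    # cuda:0
print(with_index("cuda:1"))  # cuda:1
```

Constructing the `torch.device` objects does not touch the GPU, so the normalization itself is safe even before CUDA is initialized.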