lopezguan opened this issue 2 weeks ago
# ComfyUI Error Report

## Error Details

- **Exception Message:** 'context'

## Stack Trace

```
  File "E:\ComfyUI-aki-v1.4\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "E:\ComfyUI-aki-v1.4\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "E:\ComfyUI-aki-v1.4\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "E:\ComfyUI-aki-v1.4\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "E:\ComfyUI-aki-v1.4\custom_nodes\comfyui-ollama\CompfyuiOllama.py", line 248, in ollama_generate_advance
    return (response['response'], response['context'],)
```
## System Information - **ComfyUI Version:** v0.2.4-16-g30c0c81 - **Arguments:** E:\ComfyUI-aki-v1.4\main.py --auto-launch --preview-method auto --disable-cuda-malloc - **OS:** nt - **Python Version:** 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)] - **Embedded Python:** false - **PyTorch Version:** 2.3.1+cu121 ## Devices - **Name:** cuda:0 NVIDIA GeForce RTX 3080 Ti : cudaMallocAsync - **Type:** cuda - **VRAM Total:** 12884377600 - **VRAM Free:** 11609833472 - **Torch VRAM Total:** 0 - **Torch VRAM Free:** 0 ![image](https://github.com/user-attachments/assets/310f65de-7559-4e0c-9a25-cdbf0b74e1a3)
What version of Ollama are you using? This was a bug in some versions, where `context` was omitted from the response.
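To check whether your local Ollama build is affected, here is a small sketch that calls `/api/generate` directly and reports whether `context` comes back. It assumes the default server at `http://localhost:11434` and a model you have already pulled (`llama3` here is just a placeholder):

```python
import json
import urllib.request

# Assumed defaults: adjust the host/port and model name for your setup.
OLLAMA_URL = "http://localhost:11434/api/generate"
payload = {"model": "llama3", "prompt": "Say hi.", "stream": False}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.loads(resp.read().decode("utf-8"))

# Affected Ollama versions omit 'context' here, which is what the node trips over.
print("keys:", sorted(body.keys()))
print("has context:", "context" in body)
```

If `context` is missing from that output, updating Ollama itself (rather than the node) should resolve the error.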