Kosinkadink / ComfyUI-VideoHelperSuite

Nodes related to video workflows
GNU General Public License v3.0

Help! Is this error causing the black frames? RuntimeWarning for invalid value #223

Open abumolly opened 1 month ago

abumolly commented 1 month ago

I really don't know how to solve this problem. I'm getting this warning:

```
custom_nodes/ComfyUI-VideoHelperSuite/videohelpersuite/nodes.py:95: RuntimeWarning: invalid value encountered in cast
  return tensor_to_int(tensor, 8).astype(np.uint8)
```

Is this error the reason the AnimateDiff workflow generates black frames? I'd really appreciate a reply.
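As background, this warning is what NumPy emits when a float array containing NaN or Inf values is cast to an integer type, which is the kind of cast nodes.py performs when converting frames to 8-bit. A minimal sketch of the mechanism, using a hypothetical all-NaN frame rather than the actual workflow data:

```python
import numpy as np

# Hypothetical frame tensor that picked up NaNs earlier in the pipeline.
frame = np.full((4, 4, 3), np.nan, dtype=np.float32)

# Roughly what the cast in nodes.py does: scale to 0-255 and cast to uint8.
# Casting NaN to an integer type emits
# "RuntimeWarning: invalid value encountered in cast",
# and the resulting values are undefined (often 0, i.e. a black pixel).
as_uint8 = (frame * 255).astype(np.uint8)
print(as_uint8)
```

So the warning is usually a symptom rather than the cause: by the time the frames reach VideoHelperSuite they already contain invalid values, which matches the diagnosis in the reply below.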

AustinMroz commented 1 month ago

The black outputs are a fairly common issue and are the result of something going wrong earlier in the workflow. If you pass the output into a Preview Image node, you'll get the same black output.

These can be quite inconvenient to debug, but I'd suggest you start with the example txt2img workflow and change one thing at a time until it breaks and produces black output.

(attached: t2i_wf example workflow image)

abumolly commented 1 month ago

Thank you for your suggestion. I did use the original txt2img workflow. Actually, everything was fine before I installed TensorFlow and PyTorch, following ChatGPT-4o's advice to improve rendering efficiency on my Mac M3. For now, all other workflows render fine except the AnimateDiff ones, and reinstalling AnimateDiff did not help.

(attached: workflow screenshot)

abumolly commented 1 month ago

That's the log:

```
ComfyUI/custom_nodes/ComfyUI-VideoHelperSuite/videohelpersuite/nodes.py:95: RuntimeWarning: invalid value encountered in cast
  return tensor_to_int(tensor, 8).astype(np.uint8)
```

Kosinkadink commented 1 month ago

If everything was working before ChatGPT blindly told you to edit venv stuff, then your venv is likely messed up. You can create a new venv, set it up from scratch, and avoid any and all AI-generated install suggestions.
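A minimal sketch of rebuilding the environment from scratch on macOS, assuming a standard ComfyUI checkout in a `ComfyUI/` directory with a venv named `venv` (directory names are illustrative; the ComfyUI README linked below has the authoritative steps):

```bash
cd ComfyUI
rm -rf venv                       # discard the suspect environment
python3 -m venv venv              # create a fresh virtual environment
source venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt   # reinstall ComfyUI's own dependencies
```

Custom nodes such as AnimateDiff-Evolved and VideoHelperSuite then need their own `requirements.txt` installed into the same venv, if they ship one.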

abumolly commented 1 month ago

> If everything was working before ChatGPT blindly told you to edit venv stuff, then your venv is likely messed up. You can create a new venv, set it up from scratch, and avoid any and all AI-generated install suggestions.

Thank you so much for your reply. I'm not sure whether the problem is with environment.yaml, since I got the error below when I created the new venv and prepared to reinstall AnimateDiff. Do I have to change environment.yaml for macOS?

```
$ conda env create -f environment.yaml
Retrieving notices: ...working... done
Channels:

LibMambaUnsatisfiableError: Encountered problems while solving:

Could not solve for environment specs
The following package could not be installed
└─ pytorch-cuda 11.7* is not installable because it requires
   └─ cuda 11.7.* , which does not exist (perhaps a missing channel).
```

Kosinkadink commented 1 month ago

ComfyUI's readme has a guide for installing ComfyUI on Mac: https://github.com/comfyanonymous/ComfyUI?tab=readme-ov-file#apple-mac-silicon

ComfyUI does not use conda for its environment in the default install, just pip. I'm not sure what guide you are following, but it does not apply to Mac at all; CUDA is an NVIDIA-GPU-only thing, so it is not relevant for Mac installs.
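On Apple Silicon, PyTorch runs on the MPS backend instead of CUDA, so a quick sanity check of a freshly built environment looks something like the sketch below (the exact install commands are in the README linked above):

```python
import torch

# On an Apple Silicon Mac, a healthy install should report MPS, not CUDA.
print("PyTorch version:", torch.__version__)
print("MPS available: ", torch.backends.mps.is_available())
print("CUDA available:", torch.cuda.is_available())  # expected to be False on a Mac
```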

abumolly commented 1 month ago

Really appreciate your help. I dropped conda and reinstalled AnimateDiff, but the problem is still there: black output with this error:

```
ComfyUI/custom_nodes/ComfyUI-VideoHelperSuite/videohelpersuite/nodes.py:95: RuntimeWarning: invalid value encountered in cast
  return tensor_to_int(tensor, 8).astype(np.uint8)
```

(attached: screenshot 2024-06-04 15:54:30)

The problem seems to be with the AnimateDiff model, since everything is fine if I remove the AnimateDiff-related nodes.

abumolly commented 1 month ago
(attached: screenshot 2024-06-04 16:08:37)

For this simple workflow, no errors are reported, but the outputs are black.

```
got prompt
[rgthree] Using rgthree's optimized recursive execution.
Requested to load SD1ClipModel
Loading 1 new model
[AnimateDiffEvo] - INFO - Regular AnimateDiff activated - latents passed in (8) less or equal to context_length None.
[AnimateDiffEvo] - INFO - Using motion module mm_sd_v15.ckpt:v1.
100%|███████████████████████████████████████████| 20/20 [01:00<00:00,  3.05s/it]
Prompt executed in 64.34 seconds
```

Kosinkadink commented 1 month ago

See if you still get black images when you use a V2 or V3 motion model with that same workflow.

abumolly commented 1 month ago

Hi, I changed the sampler to dpmpp_2m_sde and lowered the denoise value to 0.3, and it can generate some images now. Not sure why, but the problem seems solved...

(attached: screenshot 2024-06-04 19:10:22)

Denoise 0.5 is OK, but at 0.8 the output turns black...