Closed · sober54 closed this 1 year ago
Oops. I didn't mean to push that :P
Also, in another patch I fixed determinism; the seed wasn't being passed through. Flat and normal output should now match, so you can split the RGB channels to grab certain cells and make masks from them, as sketched below.
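For illustration, a minimal sketch of that channel-splitting idea in PIL (the filename and threshold are placeholders; WAS Suite has its own channel and mask nodes for this):

```python
from PIL import Image

# Load a Voronoi noise image where each RGB channel isolates a different set of cells.
image = Image.open("voronoi.png").convert("RGB")  # placeholder filename
r, g, b = image.split()

# Threshold one channel into a binary mask covering its brightest cells.
mask = r.point(lambda v: 255 if v > 128 else 0).convert("1")
mask.save("cell_mask.png")
```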
Thank you for your work ":P". Your node is functioning now, but I have encountered a new issue in my workflow. Some context: my workflow combines two types of noise, a colorful grain noise image and your Voronoi Noise image, to generate a new noise image. I have tried all the noise nodes, and your Voronoi Noise node adds the most detail to the generated image, though I am unsure whether it is the shape or the color that influences the result. This method makes the generated image controllable on many levels: many parameters can be adjusted, and they are all reflected in the final result, such as color, sharpness, and detail. However, after this update the two types of noise can no longer blend together:

```
Error occurred when executing VAEEncode:

too many indices for tensor of dimension 3

  File "C:\Users\sober\compyui\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "C:\Users\sober\compyui\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "C:\Users\sober\compyui\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "C:\Users\sober\compyui\ComfyUI_windows_portable\ComfyUI\nodes.py", line 276, in encode
    t = vae.encode(pixels[:,:,:,:3])
```

Forgive my limited GPT English; I cannot fully express my gratitude and respect. Love from China, and I appreciate your efforts.
This is something I generated; I feel it is good in terms of detail and color, using only prompt words and a 1.5× latent upscale.
Ah, that is because the noise is natural color (RGB) while the Voronoi is linear grayscale. I suppose we would need a Linear to RGB node, similar to RGBA to RGB.
Alright, you'll need to fix the order of inputs on the nodes, but I've added optionals, including "RGB_output", which will convert the images to RGB or Linear on output. There are also Images to RGB and Images to Linear nodes now under the WAS Suite/Image menu.
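A minimal sketch of what such a conversion amounts to in PIL (illustrative only; not necessarily how the Images to RGB and Images to Linear nodes are implemented):

```python
from PIL import Image

def image_to_rgb(image: Image.Image) -> Image.Image:
    """Replicate a single grayscale (linear) channel across R, G, and B."""
    return image.convert("RGB")

def image_to_linear(image: Image.Image) -> Image.Image:
    """Collapse an RGB image to a single grayscale channel (luma-weighted)."""
    return image.convert("L")
```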
The new noise looks very juicy; I can't wait to try it. Thanks again for your great work. The reason I use this workflow is to generate decent images in 20 iterations on my low-performance graphics card, so your node has practically saved my life. I can even calculate the exact amount of lifetime you saved. Love from China again, and thanks again.
Would a number of images from the noise nodes help? Like if you select 4, you'd get four noise images, with each image's seed incremented?
I will test and provide feedback as soon as possible, because the two noise nodes I am using can already provide rich variation by changing parameters. Additionally, changing the parameters of the blend node can also bring about changes. So, I mostly use fixed seeds.
I actually use your latent blend node, which provides functionality similar to Photoshop's layer blending modes; see the sketch below.
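For illustration, a minimal sketch of Photoshop-style blending applied to latent tensors (these are the standard blend-mode formulas, not necessarily the node's exact implementation):

```python
import torch

def blend_latents(a: torch.Tensor, b: torch.Tensor,
                  mode: str = "multiply", strength: float = 0.5) -> torch.Tensor:
    """Blend latent b over latent a with a Photoshop-style mode, mixed by strength."""
    if mode == "multiply":
        blended = a * b
    elif mode == "screen":
        blended = 1.0 - (1.0 - a) * (1.0 - b)
    elif mode == "add":
        blended = a + b
    else:  # "normal"
        blended = b
    return a * (1.0 - strength) + blended * strength
```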
The latent blend node still gets the "too many indices for tensor of dimension 3" error in color mode.
In black and white, with the modulator at max or min, it just works. Your magic is restored, and I'm so happy. I will adjust the input order and try the latent blending method again. Thank you for your great work.
```
Error occurred when executing VAEEncode:

too many indices for tensor of dimension 3

  File "C:\Users\sober\compyui\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "C:\Users\sober\compyui\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "C:\Users\sober\compyui\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "C:\Users\sober\compyui\ComfyUI_windows_portable\ComfyUI\nodes.py", line 276, in encode
    t = vae.encode(pixels[:,:,:,:3])
```

The latent blend node still gives this when I use two of your Voronoi Noise nodes, but it works fine when I use the old color plasma node. I believe that after synthesizing multiple noise images, the resulting latent space should also have these small color blocks. These latent-space images have enough white, black, and other colors to make the lighting and shadows in the image look more natural, and the small details are also more prominent due to the presence of shadows. These are just my speculations, and I don't know if the algorithm reaches this level, but based on my observations there is indeed such an effect. With your help, the pasta monster is now flying.
I couldn't get the "too many indices for tensor of dimension 3" error using the blend node myself. Here is the workflow I tried:
I think I understand now. The RGB switch seems to control the output RGB channels rather than specific colors; whether a colored image is generated is controlled by the flat switch. dpmpp-3m always breaks the face; I just discovered this yesterday, so I drew this grid: https://drive.google.com/file/d/10fXekMQwwBQWXwVLX4tdVn8jt3HZxRXB/view?usp=drive_link. Thank you again, sir; I can't thank you enough.
No problem. Btw, in the workflow I found that blending in your starting noise before doing the final steps, like I did in that workflow above, somehow adds extra detail when the second KSampler (Advanced) starts diffusing. I don't know how or why, but it does.
I don't know either. I can't test the XL model, but it also happens with the 1.5 model. I use the TTN suite, and its advanced sampler has an option to add noise, so the same thing should be happening here. https://github.com/tinyterra/ComfyUI_tinyterraNodes
If you manually input the last digit of the blend value, 0.00X (X being a specific digit), you can still adjust these details even with a fixed seed.
I have been playing around with building starting noise too, with the power fractal shader. I create RGB noise with three randomly seeded nodes and encode them to latents, run each through a KSampler (Advanced) for one step separately, Blend Latents on those results, and then use that as the starting latent for generations; a sketch of the idea follows.
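A rough sketch of that pipeline in plain PyTorch, with the single sampler step abstracted away (all names and shapes here are illustrative stand-ins for the ComfyUI graph, not its actual API):

```python
import torch

def seeded_latent_noise(seed: int, shape=(1, 4, 96, 96)) -> torch.Tensor:
    """Deterministic latent-shaped noise from a seed (4 channels at 1/8 image size)."""
    gen = torch.Generator().manual_seed(seed)
    return torch.randn(shape, generator=gen)

# Three independently seeded latents, standing in for the three encoded noise
# images after their single KSampler (Advanced) step.
latents = [seeded_latent_noise(seed) for seed in (1, 2, 3)]

# Blend them pairwise (linear interpolation as a stand-in for Blend Latents),
# then use the result as the starting latent for the real generation.
blended = torch.lerp(latents[0], latents[1], 0.5)
start_latent = torch.lerp(blended, latents[2], 0.5)
print(start_latent.shape)  # torch.Size([1, 4, 96, 96])
```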
Holy... good one. I think I know what's going on: it's indeed the black and white that are at play. The AI has learned how to draw black, but we're only giving it a brown or colored latent input.
I have added another node after your node to generate noise for the image, but that node has RGBA channels, and it seems that transparency is also affecting it.
By using your node and changing the modulator value to 8, all the black is gathered in the center of the image. Then, applying the subsequent node to the transparency channel, I have been able to create many things today that resemble non-human creations. However, since I'm not using my usual workflow, I didn't save them. You could also give it a try. I also discovered today that the 2.1 model is very useful.
https://drive.google.com/file/d/1871LDE2t7w8Wtfw8RRdPrFbZOHhYvVbR/view?usp=drive_link This is that workflow. I can only use it for drawing, but I feel you could create something great with it.
That's interesting on the last one, wow. Going to have to check it out.
I'm glad you decided to give it a try. With a sufficiently high CFG value, along with the effect of node #18, that kind of picture will appear. #15 only provides color, and I'm unsure if the a and b values of node #18 control the position or width. The width of the image I drew is 768, and I set the a value to half of the width. The content in the middle of the picture starts to become very rich. I'm sharing this workflow with you.
That's really amazing. I could see those being the visual effects applied to some sort of character in an epic movie. Main villain or something.
My workflow is more like stacking garbage; yours is more functional and enjoyable. I will try to add more controllable steps to your workflow, such as incorporating the node for content expansion. Then I will test whether it is compatible with LoRA, even for the detailed LoRA aspects. Even with the 1.5 model, I have achieved some good results.
Can I attach this issue link and the workflow link to the post I published on Civitai? I think your workflow works better and only requires your node package, making it simpler. Looooove from China. Or you could build a workflow and publish it to Civitai yourself; I would be the first to download it.
Yeah that's fine, though should this be kept open?
I think it would be great if you could publish a new workflow on Civitai, as you would be able to explain how it works more clearly. This should help close this issue.
Kinda thinking of making a node based on the process that outputs a latent.
lmao,please make it happen
Thank you, sir. I apologize for taking so long to provide feedback. I spent a long time testing, and the higher the octaves and exponent, the more detail is available; however, a high lacunarity value is needed to showcase that detail. Just before 34, where it fails with 'Result too large', should be the setting with the most detail. With this setting, accessories even interact with the skin on the face. I have only seen this mug culture outside of China, in movies; I wonder if you like it too. In my country, people who provide high-quality code and share it are referred to as 'Cyber Buddha' (赛博菩萨).
The alpha channel can also be used as a latent. I created such an image yesterday without using any LoRA.
These are all really cool, and they all seem to have a very unique crisp look to them that I haven't really seen before. That's cool. I like the "Cyber Buddha" saying; it's wholesome. Great work here. This really shows how this node can shine.
You are the comfy Cyber Buddha. If someone had done this during the 1.5 era, I think MJ would have had no chance. My computer configuration is too low; it takes more than 120 seconds to generate a 1024² image, so there are too few samples to draw any conclusions. We still need someone from Civitai to test which type of noise is actually at work and to find more specific control methods. If you agree, I will attach the address of this issue. Thank you for your efforts, and thank you for all the great nodes. 1.5 / 2.1 / XL / MJ: https://lexica.art/?prompt=833f8d1b-c313-4e34-8256-28a4cca64f73
Wow. That's really impressive actually. And thank you for the kind words friend.
Maybe you can help me add some comparison images to this repo's paper: https://github.com/WASasquatch/PPF_Noise_ComfyUI/blob/main/perlin_power_fractals_and_latent_diffusion.md
Thank God I can finally be of some help to you. Shall I upload some images that I think are good to Google Drive? Would that be enough?
I found all my prompts on Civitai and Lexica, just to test if I could recreate the images on Civitai.
Or I can now conduct some specific tests, such as examining the variations in texture, color, details, and other aspects using the same prompts
If you can do some grid comparisons, that would be great. Maybe comparing against base ComfyUI noise in the regular (simple) KSampler. I will try to do some tomorrow if I have time; it's late here now. I spent all evening trying to write a paper describing the process and its potential, haha.
Test environment description: I will work with the 2.1 model (rmada merge) and lock the seed at 1 to generate 768×768 images. I will use the advanced KSampler built into ComfyUI, with both KSamplers set to the same values, and no negative prompts for either sampler. I believe this creates a fair platform to compare the two latents. Additionally, to account for the impact of hardware temperature on image quality, each test will have a five-minute interval. I have also provided a download for the workflow used in this test. The test content will be limited by prompt. All prompts will compare the following aspects against a black background: rainbow (color comparison), plastic bag (transparency, i.e., how white is handled, and details such as folds, i.e., black contrast), metal ball (texture comparison; it's difficult to quantify, I just like metal balls), and glass cup.
I will test it this way, and I will also add some fluffy things into it. I should also add some scenes, such as a cozy bed; my previous workflow did generate that cozy feeling.
It should be done, sir. I have uploaded the images to the README.md. You may need to organize them yourself, so I don't mess up what you have written.
Thank you! I'll take a look as soon as I am at the computer.
I need more colored pencils vs I need to draw that face well
Haha nice. :3
By the way, I just pushed contrast and brightness to the repo. Try 0.45 brightness and -0.75 contrast vs. the default 0.0. You'll notice it seems to add a lot of detail.
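For intuition, a minimal sketch of what a generic brightness/contrast adjustment on a noise image amounts to (this is a standard formula, not necessarily the node's exact math):

```python
import torch

def adjust_brightness_contrast(image: torch.Tensor,
                               brightness: float = 0.0,
                               contrast: float = 0.0) -> torch.Tensor:
    """Brightness as an offset, contrast as scaling around mid-gray, for images in [0, 1]."""
    image = image + brightness                       # shift all values up or down
    image = (image - 0.5) * (1.0 + contrast) + 0.5   # expand/compress around 0.5
    return image.clamp(0.0, 1.0)

noise = torch.rand(1, 512, 512, 3)                   # a (B, H, W, C) noise image
adjusted = adjust_brightness_contrast(noise, brightness=0.45, contrast=-0.75)
```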
Okay, sir. I will start extensive testing. Thank you for your contribution. Thank you, thank you, thank you, thank you
The perfect smoke and accurate lighting that I have been anticipating have finally appeared
this......
```
Error occurred when executing Image Voronoi Noise Filter:

setting an array element with a sequence.

  File "C:\Users\sober\compyui\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "C:\Users\sober\compyui\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "C:\Users\sober\compyui\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "C:\Users\sober\compyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\was-node-suite-comfyui\WAS_Node_Suite.py", line 4221, in voronoi_noise_filter
    image = WTools.worley_noise(height=width, width=height, density=density, option=modulator, use_broadcast_ops=True, flat=(flat == "True")).image
  File "C:\Users\sober\compyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\was-node-suite-comfyui\WAS_Node_Suite.py", line 2082, in __init__
    self.image = self.generateImage(option, flat_mode=flat)
  File "C:\Users\sober\compyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\was-node-suite-comfyui\WAS_Node_Suite.py", line 2116, in generateImage
    non_flat_black_adjusted[h, w] = self.data[h, w] + self.data[closest_point_idx]
```