ttddee / Cascade

Node-based image editor with GPU acceleration.

No wiring up of parameters? #55

Open · andybak opened this issue 2 years ago

andybak commented 2 years ago

Seems like a missed opportunity. When I opened up "checkerboard", for example, I expected there to be inputs for each of the parameters to allow them to be controlled per pixel.

ttddee commented 2 years ago

Could you elaborate on what functionality you are looking for? What would you like to wire the parameters to? And what do you mean by "controlled per pixel"?

andybak commented 2 years ago

In most of the node-based systems I'm familiar with, nearly all parameters for a node are also inputs.

For example, for checkerboard, you've only got a single input, whereas I would have expected size, aspect, phase, etc. to also accept inputs (of the appropriate types).

This is where the majority of the power of node-based systems comes from, and the reason they approach the power and flexibility of scripting directly.

Without this, you have a much more limited and inflexible system.

https://nodes.io/ and https://cables.gl/home are good examples. As is the shader editor in Unity or Unreal.

"controlled per pixel"?

You can choose to evaluate input parameters for each pixel or just once for the whole image. The former allows a much wider range of potential results.
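
To make the distinction concrete, here's a minimal NumPy sketch (not Cascade's API; the function names are made up) of the same operation with its parameter evaluated once for the whole image versus once per pixel:

```python
import numpy as np

def scale_uniform(img, strength):
    # the parameter is evaluated once for the whole image
    return np.clip(img * strength, 0.0, 1.0)

def scale_per_pixel(img, strength_map):
    # the parameter is evaluated per pixel, e.g. wired in from another node
    return np.clip(img * strength_map[..., None], 0.0, 1.0)

img = np.random.rand(64, 64, 3)
ramp = np.tile(np.linspace(0.0, 1.5, 64), (64, 1))  # a horizontal gradient
once = scale_uniform(img, 0.75)
wired = scale_per_pixel(img, ramp)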

ttddee commented 2 years ago

I see what you are saying, but IMO, in an image editing context, exposing all kinds of inputs actually does not make a lot of sense.

Every node represents one image editing operation and all the parameters are exposed through the GUI. Take for example the Checkerboard node. It creates a checkerboard and lets you define the size and color of the checkers. That's it. If you want to manipulate single pixels this node won't let you do that. Another one will though, depending on what you want.
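
For illustration, that node boils down to something like this sketch (toy NumPy; the parameter names are illustrative, not Cascade's actual ones):

```python
import numpy as np

def checkerboard(width, height, size, color_a, color_b):
    # the cell parity of (x // size, y // size) decides a pixel's color
    yy, xx = np.mgrid[0:height, 0:width]
    parity = ((xx // size) + (yy // size)) % 2
    return np.where(parity[..., None] == 0, color_a, color_b)

board = checkerboard(256, 256, 32,
                     np.array([1.0, 1.0, 1.0]),  # one checker color
                     np.array([0.1, 0.1, 0.1]))  # the other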

Image editing/compositing packages usually do not expose parameters through inputs, see Nuke, Natron, Autodesk Flame. What's passed down the graph is the image and you can manipulate any pixel you want in any way. But you need to use the right operation. Just like in Photoshop where you can't ask the blur filter to change the colors of an image.

The power of a node graph comes mainly from its non-destructive nature, because every operation is perfectly describable as metadata.
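
As a sketch (a made-up schema, not Cascade's actual file format), the whole graph can be captured as plain data and revisited at any time:

```python
# Every operation plus its parameter values; the original pixels
# are never touched, so any step can be changed later.
graph = {
    "nodes": [
        {"id": 0, "type": "Read", "params": {"path": "input.png"}},
        {"id": 1, "type": "Blur", "params": {"strength": 60}},
    ],
    "edges": [{"from": 0, "to": 1}],  # the image flows down the graph
}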

Another reason for exposing parameters directly in the graph is animation. Cascade is not designed for animation though, so no gains there.

Maybe there is a use case I am missing. Can you tell me a concrete operation that you would like to do on an image that is not possible now and would be made possible by this?

andybak commented 2 years ago

> Can you tell me a concrete operation that you would like to do on an image that is not possible now and would be made possible by this?

I think this is one of those "if you don't get it, then you don't get it" things. For me it's just such an obvious force-multiplier. It turns nodes from "baked-in limited use" things into building blocks for creative expression.

Take checkerboard. If the checker frequency accepts an input from a gradient, then you've got an infinitely flexible, warpable pattern generator.

A more prosaic example would be blur - blur strength should be a grayscale input to allow variable-strength blurs that are derived from the source image itself.

I just can't imagine not doing this kind of thing in a node-based tool. And it falls so naturally out of the UI paradigm.

ttddee commented 2 years ago

> I think this is one of those "if you don't get it, then you don't get it" things.

Sounds a bit condescending, but sure, I'll take it.

The operations in a system like a node graph are separated as much as possible for good reason. It prevents "baked-in limited use".

It can be confusing if you are not used to node-based image editing tools, but I will try to explain:

> Take checkerboard. If the checker frequency accepts an input from a gradient, then you've got an infinitely flexible, warpable pattern generator.

This is no problem. It's just not combined into one big node that can do everything. Just use a checkerboard node and then a warp node. Now, if you decide later that you don't like the checker, you can just replace it with a different image and keep the warp. That would not be possible if the warp was built into the checker node.
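
In sketch form (toy NumPy stand-ins, not Cascade's nodes), the generator and the warp stay independent:

```python
import numpy as np

def make_checker(n, size):
    # stand-in for the Checkerboard node
    yy, xx = np.mgrid[0:n, 0:n]
    return (((xx // size) + (yy // size)) % 2).astype(float)

def warp_rows(img, offsets):
    # stand-in for a Warp node: displace each row horizontally
    return np.stack([np.roll(row, int(dx)) for row, dx in zip(img, offsets)])

n = 256
pattern = make_checker(n, 32)                        # replaceable input image
offsets = 16 * np.sin(np.linspace(0, 2 * np.pi, n))  # per-row displacement
warped = warp_rows(pattern, offsets)

Swap `pattern` for any other image and the warp step stays untouched.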

Keep in mind that most generative algorithms are a little more complex and still have to be implemented somewhere. They don't just manifest by combining some input values.

> A more prosaic example would be blur - blur strength should be a grayscale input to allow variable-strength blurs that are derived from the source image itself.

Yes. A greyscale input is called a channel, or a mask. That can be derived from any image and used anywhere in the current implementation.

andybak commented 2 years ago

> Sounds a bit condescending, but sure, I'll take it.

Sorry - I didn't mean it to come across like that!

> A greyscale input is called a channel, or a mask. That can be derived from any image and used anywhere in the current implementation.

Unless I misunderstand you, that's a different thing. A mask controls the compositing of one node with another. I'm talking about an input that affects the parameters of a node. So a blur strength of 60 and a 50% gray input would result in a strength of 30. This is evaluated per pixel and looks very different to a 50% blend of blurred and unblurred.
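
To sketch what I mean (naive, deliberately slow NumPy, not Cascade code; both function names are made up): scaling the blur radius per pixel is a different operation from crossfading between a sharp and a blurred copy.

```python
import numpy as np

def variable_blur(img, strength_map, max_radius):
    # the PARAMETER varies per pixel: a 50% grey pixel gets half the
    # blur radius, not half of a fully blurred image (naive box blur)
    h, w = img.shape[:2]
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            r = int(round(max_radius * strength_map[y, x]))
            out[y, x] = img[max(0, y - r):y + r + 1,
                            max(0, x - r):x + r + 1].mean(axis=(0, 1))
    return out

def mask_blend(sharp, blurred, mask):
    # the COMPOSITE varies per pixel: crossfade between two fixed results
    m = mask[..., None]
    return sharp * (1.0 - m) + blurred * m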

Another, simpler example would be a node that shifts the hue. The amount of hue shift could be derived from a circular gradient: black = none, 50% = 180 degrees shifted, etc.
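
Something like this sketch (stdlib colorsys, purely illustrative):

```python
import numpy as np
import colorsys

def hue_shift_per_pixel(img, shift_map):
    # shift_map in 0..1 maps to 0..360 degrees of hue rotation per pixel
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            h, l, s = colorsys.rgb_to_hls(*img[y, x])
            out[y, x] = colorsys.hls_to_rgb((h + shift_map[y, x]) % 1.0, l, s)
    return out

# circular gradient as the control input: no shift at the centre,
# a full rotation at the far corners
n = 128
yy, xx = np.mgrid[0:n, 0:n]
shift = np.hypot(yy - n / 2, xx - n / 2)
shift /= shift.max()

img = np.random.rand(n, n, 3)
shifted = hue_shift_per_pixel(img, shift)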

The power of this is that the input itself can be generated from the input image, e.g. take the input, blur it, and use the blurred version to control the strength of a parameter for a third node.

ttddee commented 2 years ago

I get what you are saying. On some select nodes that would add more possibilities, true. For many operations, though, blending with a mask is the same as regulating the intensity (see the sketch below). I'll give it some thought and mock something up.
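
For a linear operation you can check that equivalence directly (toy NumPy, a simple multiply standing in for "intensity"):

```python
import numpy as np

img = np.random.rand(64, 64, 3)
mask = np.random.rand(64, 64)[..., None]  # greyscale control channel
s = 1.8                                   # the node's uniform strength

# blend the processed and unprocessed image through the mask ...
blended = img * (1 - mask) + (img * s) * mask
# ... versus driving the parameter per pixel with the same mask
per_pixel = img * (1 + mask * (s - 1))

assert np.allclose(blended, per_pixel)  # identical for a linear operation

For non-linear operations like blur or a hue rotation they do diverge, which is what the project file below shows.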

Here is a project file showing the difference in the case of blur and hue: huetest.zip

Just add your own image in the Read Node.