I'm sure this isn't on your list of things because it's been absent for so long, but this is actually one of the best preprocessors, especially now that we have unified ControlNet models that accept various preprocessor inputs.
DSINE is objectively better than any Depth preprocessor, giving superior detail and a stronger sense of depth for inference. It doesn't just compete with Depth; it has its own behavior. The images DSINE produces are often richer than Depth's, and prompt adherence is better, not just because of ControlNet model training, but because of the nature of the preprocessor's output itself. DSINE uses shape and color to infer detail, while Depth uses only shape and shade, which has a more limited dynamic range for ControlNet inference. With the color and shape fidelity of normal maps, you get a much better representation of the input image overall, with more latitude for variation in color, lighting, and depth of field, while still adhering to the context of the content. It quite literally puts Depth preprocessors to shame.
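To illustrate the dynamic-range point above, here's a minimal NumPy sketch (not tied to DSINE or any specific preprocessor) of why a normal map carries more information per pixel than a depth map: depth stores a single shade per pixel, while a normal map encodes a full 3-D surface orientation as an RGB color.

```python
import numpy as np

# A depth map stores one value per pixel: distance, rendered as a shade of gray.
depth = np.array([[0.2, 0.5],
                  [0.5, 0.9]])                       # H x W, single channel
depth_gray = (depth * 255).astype(np.uint8)          # 256 possible shades per pixel

# A normal map stores a 3-D surface orientation per pixel, mapped to RGB:
# x -> R, y -> G, z -> B, with each component remapped from [-1, 1] to [0, 255].
normals = np.array([[[0.0,  0.0, 1.0],               # facing the camera
                     [0.7,  0.0, 0.714]],            # tilted right
                    [[0.0,  0.7, 0.714],             # tilted up
                     [-0.7, 0.0, 0.714]]])           # tilted left
normals /= np.linalg.norm(normals, axis=-1, keepdims=True)   # unit-length vectors
normal_rgb = ((normals * 0.5 + 0.5) * 255).astype(np.uint8)  # H x W x 3

print(depth_gray.shape)   # (2, 2)    -> shade only
print(normal_rgb.shape)   # (2, 2, 3) -> full orientation encoded in color
```

Two surfaces at the same distance but tilted differently are identical in a depth map yet clearly distinct in a normal map, which is the extra signal a ControlNet can condition on.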
Normal BAE is not a very good preprocessor at this stage, so without DSINE, normal preprocessing becomes pretty useless as a feature altogether. It was merely OK during SD 1.5, when 512x512 was common, but not now that 1024x1024 and greater resolutions are possible.
DSINE also outperforms all the third-party Normal preprocessors in "Spaces", so those aren't a good alternative either. When DSINE already exists, leaving it out of Forge is almost a step backward.