Closed: shawnington closed this issue 4 months ago
it's right here: https://github.com/liusida/ComfyUI-Login/blob/main/load_image_incognito.py
AFAIK, a VAE is not a secure way to protect an image unless you train your own VAE from scratch. Many VAEs can decode the latent back to more or less the original image, just with a little distortion.
Thanks! Not sure how I missed that sitting there.
That is indeed true, but the distortion itself adds a bit of obfuscation from the original source: after conversion back to an image, the result would no longer match any hash of the original image.
So if someone obtained the latent and decoded it, matching it to an image on your system would not be trivial, since a simple hash comparison fails. It's not much extra security, but it is some, and I believe any extra layer of security or obfuscation is worth having.
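The hash-mismatch point can be sketched as follows. This is not code from the repo: `vae_roundtrip` is a stand-in that perturbs pixel values slightly, the way `decode(encode(x))` would for a real VAE.

```python
# Sketch: why a hash comparison fails after a lossy VAE round-trip.
import hashlib
import random

def sha256(pixels: bytes) -> str:
    return hashlib.sha256(pixels).hexdigest()

def vae_roundtrip(pixels: bytes, seed: int = 0) -> bytes:
    # Stand-in for decode(encode(image)): each pixel shifts by -1, 0, or +1,
    # mimicking the small reconstruction error of a real VAE.
    rng = random.Random(seed)
    return bytes(min(255, max(0, p + rng.choice((-1, 0, 1)))) for p in pixels)

original = bytes(range(256))      # toy 16x16 grayscale "image"
decoded = vae_roundtrip(original)

# The images look nearly identical, but the hashes no longer match,
# so a naive hash lookup against the original file fails.
print(sha256(original) == sha256(decoded))
```

Even a one-bit difference anywhere in the pixel data changes the digest completely, which is exactly why the decoded image can't be tied back to the source file by hash alone.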
Do you have any thoughts on that idea?
I don't think I'd be against adding additional distortions.
I was looking for it and couldn't find it.
I was contemplating this issue again as I finally decided to open source all of my custom nodes. While reworking my image resizing node (which outputs scaling data so an upscale can restore the correct resolution for recomposing), I realized that nothing prevents another node like mine from intercepting the image and saving it. So I was going to add an option to output only a VAE-encoded image, so that the non-encoded image data never escapes the loading node.
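A minimal sketch of that idea: a loader that encodes immediately and only ever returns the latent, so raw pixels never cross a node boundary. None of these names come from ComfyUI; `toy_encode` is a stand-in for a real VAE's `encode`, and the class is hypothetical.

```python
# Hypothetical sketch: a load node whose only output is the latent.
class IncognitoLoader:
    def __init__(self, encode):
        self._encode = encode  # e.g. a VAE's encode function

    def load(self, pixels):
        latent = self._encode(pixels)  # encode inside the node...
        return latent                  # ...and expose only the latent

def toy_encode(pixels):
    # Stand-in for a VAE encoder: downsample by averaging adjacent pairs.
    return [sum(pair) / 2 for pair in zip(pixels[::2], pixels[1::2])]

loader = IncognitoLoader(toy_encode)
latent = loader.load([10, 12, 200, 202])
print(latent)  # [11.0, 201.0]
```

The design point is that downstream nodes connected to this one only ever receive the latent, so a malicious node in the graph has nothing to intercept except already-encoded (and, with added distortion, lossy) data.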