[Open] anilrgukt opened this issue 8 years ago
Dear Lucas,
The texture samples generated in your paper with the MCGSM and RIDE (figure 3): are they sampled from a model trained only on that particular texture, or from a single model trained on all textures and then biased towards that one texture?
Thanks, Anil
Hi Anil,
in the paper it's one model trained for each texture.
A few years ago we experimented with an MCGSM trained on natural images and only retraining a few parameters for each texture. This worked quite well, so I would expect something similar to also work well with RIDE, especially if you don't have a large texture to train on.
If the texture is small, augmenting the training data by flipping the images can also help (train.py --augment).
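Doing the flipping by hand is also straightforward; here is a minimal numpy sketch (placeholder names, not the actual internals of train.py):

```python
import numpy as np

def augment_flips(patches):
    """Quadruple a batch of image patches (N, H, W) by adding
    horizontally, vertically, and doubly flipped copies."""
    return np.concatenate([
        patches,
        patches[:, :, ::-1],    # horizontal flip
        patches[:, ::-1, :],    # vertical flip
        patches[:, ::-1, ::-1], # both flips
    ], axis=0)
```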
Lucas
Dear Lucas,
In your paper titled 'Modeling Natural Image Statistics', I have a question about the implementation details of figure 1.8 (attached below). How were the 70% missing pixels estimated? By sampling from the distribution, or by gradient ascent or some other optimization of the MCGSM's density?
Also, a general question: have you tried other image processing tasks, such as denoising or deblurring, with the MCGSM or RIDE models? We observed that the model prefers much smoother solutions.
Thanks, Anil
Hi Anil,
to remove 70% of the pixels, I sample a value from a uniform distribution for each pixel and consider the pixel missing if the value is less than 0.7. If I remember correctly, I used L-BFGS to reconstruct the pixels.
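In code, the masking and reconstruction would look roughly like this (a minimal sketch, not the actual script; model_loglik_and_grad is a hypothetical placeholder for the trained model's log-likelihood and gradient, implemented here as a toy smoothness prior so the example runs):

```python
import numpy as np
from scipy.optimize import minimize

def model_loglik_and_grad(img):
    # Hypothetical stand-in for the trained density model (e.g. an MCGSM):
    # a toy smoothness prior so that the sketch runs end to end. Replace
    # with the model's log-likelihood and its gradient w.r.t. pixel values.
    dx = np.diff(img, axis=1)
    dy = np.diff(img, axis=0)
    loglik = -0.5 * (dx**2).sum() - 0.5 * (dy**2).sum()
    grad = np.zeros_like(img)
    grad[:, :-1] += dx
    grad[:, 1:] -= dx
    grad[:-1, :] += dy
    grad[1:, :] -= dy
    return loglik, grad

rng = np.random.RandomState(0)
image = np.clip(rng.normal(0.5, 0.2, size=(32, 32)), 0., 1.)  # stand-in image
mask = rng.uniform(size=image.shape) < 0.7  # True = missing (~70% of pixels)

def neg_loglik(x):
    # Fill in the missing pixels with the current estimate and return the
    # negative log-likelihood and its gradient at the missing pixels.
    img = image.copy()
    img[mask] = x
    loglik, grad = model_loglik_and_grad(img)
    return -loglik, -grad[mask]

# Initialize missing pixels with the mean of the observed ones and run L-BFGS.
x0 = np.full(mask.sum(), image[~mask].mean())
result = minimize(neg_loglik, x0, jac=True, method='L-BFGS-B')

reconstruction = image.copy()
reconstruction[mask] = result.x
```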
I played around with the MCGSM for denoising, but not with RIDE. For denoising, I followed a different approach (optimization didn't work so well): I trained a conditional model, p(image | noisy image), for each noise level and then sampled from the model to denoise an image. I averaged multiple samples to estimate the conditional mean. It worked okay, but not great in terms of PSNR.
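As a rough sketch of that procedure (cond_model.sample is a hypothetical stand-in for sampling p(image | noisy image) from the conditional model trained for the matching noise level):

```python
import numpy as np

def denoise(cond_model, noisy, num_samples=10):
    """Estimate the conditional mean E[image | noisy image] by
    averaging samples from a conditional model p(image | noisy image)."""
    samples = [cond_model.sample(noisy) for _ in range(num_samples)]
    return np.mean(samples, axis=0)
```

Averaging more samples moves the estimate closer to the conditional mean, which is what PSNR rewards, but it is also why the results come out smooth.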
That said, using a more advanced model like RIDE and taking a single sample should give perceptually much more pleasing results (less blurry, more high-frequency content).