Open shindahye58 opened 1 week ago
@shindahye58 Thank you for the question:)
You're asking when a traditional filter performs better than an autoencoder (or other deep models).
Before starting, I should note that PSNR may differ from what we actually perceive.
PSNR (Peak Signal-to-Noise Ratio) is defined with R as the maximum pixel value, while MSE (Mean Squared Error) is the average squared pixel difference between the clean and noisy images. If our autoencoder (or another generative model) generates clean-looking images that visually appeal to humans but differ from the original, the PSNR score may be low despite the visual quality. In such cases, alternative metrics, such as the Inception Score, may be more suitable for evaluating image quality.
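To make the definition concrete, here is a minimal NumPy sketch of PSNR, computed as 10·log10(R²/MSE). The helper name `psnr` and the toy images are just for illustration:

```python
import numpy as np

def psnr(clean, test, R=255.0):
    """Peak Signal-to-Noise Ratio in dB: 10 * log10(R^2 / MSE)."""
    mse = np.mean((np.asarray(clean, dtype=np.float64)
                   - np.asarray(test, dtype=np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no noise at all
    return 10.0 * np.log10(R ** 2 / mse)

# A lightly corrupted copy scores higher PSNR than a heavily corrupted one.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
light = np.clip(clean + rng.normal(0, 2, clean.shape), 0, 255)
heavy = np.clip(clean + rng.normal(0, 25, clean.shape), 0, 255)
print(psnr(clean, light), psnr(clean, heavy))
```

Note that PSNR only measures per-pixel agreement with the reference, which is exactly why a visually pleasing but pixel-wise different reconstruction can score poorly.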
In general, it is very challenging—almost impossible—to surpass deep models specifically designed for denoising (e.g., autoencoders) using traditional filter-based methods, provided we have adequate supervised data. For instance, a bilateral filter operates on a small local neighborhood (e.g., 3x3) with a fixed, hand-designed weighting rule, giving it far less expressive power than a deep model with many learned layers. However, traditional filters are lightweight and easy to implement as preprocessing tools, making them valuable especially in cases where the noise can be mathematically modeled, such as salt-and-pepper or Gaussian noise.
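As a concrete illustration of the salt-and-pepper case, here is a small sketch using NumPy and SciPy (assuming `scipy` is installed): a 3x3 median filter removes most impulse noise and raises PSNR, with no training data required.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def psnr(clean, test, R=255.0):
    """Peak Signal-to-Noise Ratio in dB: 10 * log10(R^2 / MSE)."""
    mse = np.mean((clean - test) ** 2)
    return 10.0 * np.log10(R ** 2 / mse)

rng = np.random.default_rng(0)
# Toy "natural" image: random pixels smoothed so neighbors correlate.
clean = uniform_filter(rng.integers(0, 256, (128, 128)).astype(np.float64), 5)

# Salt-and-pepper noise: corrupt ~10% of pixels to 0 or 255.
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.10
noisy[mask] = rng.choice([0.0, 255.0], size=int(mask.sum()))

# A 3x3 median filter replaces each pixel with its neighborhood median,
# which discards isolated extreme values almost entirely.
denoised = median_filter(noisy, size=3)

print(psnr(clean, noisy), psnr(clean, denoised))
```

For Gaussian noise the median filter helps less, and learned models tend to pull ahead; the impulse-noise case above is where the classical filter's hand-built prior matches the noise model exactly.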
Hello Professor,
I noticed in the lecture notes that PSNR (Peak Signal-to-Noise Ratio) is commonly used to measure the performance of denoising techniques. I’m curious about how traditional filter-based methods, such as the median filter, compare to autoencoder-based methods in terms of PSNR. In what situations would filter-based methods outperform autoencoder-based approaches, and vice versa? Also, are there specific types of noise or images where one method is clearly more effective than the other?