Closed ghost closed 6 years ago
What's your computing device? If it's a GPU, I'd guess the communication between the CPU and GPU, or the hard-disk I/O, may be the bottleneck. And if it's a CPU, then finishing an epoch of MNIST in 20 seconds is actually quite fast.
I'm using a CPU, but with fully connected layers only I finish an epoch within a second. Even if I increase the number of layers in the network shown above, it still takes 20 seconds. It seems that it's not the computation that's causing the slowness but something else. It's probably not CPU-GPU communication, since I'm running on a CPU.
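One way to narrow this down (a hypothetical sketch, not from the repo): time the training step and the data pipeline separately, so you can tell whether the 20 seconds is spent in computation or elsewhere. The helper below is generic; the commented calls to `model.train_on_batch` are an assumed Keras API usage, substitute your own model and batches.

```python
import time

def mean_seconds_per_call(fn, repeats=10):
    """Time a zero-argument callable and return the mean seconds per call."""
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - start) / repeats

# Hypothetical usage with a compiled Keras model and one batch in memory:
#   t_step = mean_seconds_per_call(lambda: model.train_on_batch(x_batch, y_batch))
#   print(t_step * steps_per_epoch)  # predicted epoch time if compute-bound
# If this predicted time is far below 20 s, the bottleneck is not the math
# (e.g. it could be data loading, callbacks, or logging instead).
```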
@XifengGuo could you please suggest a value of lam_recon to use for RGB images at 100×100 pixel resolution? I am not sure whether 100*100*3*0.0005 = 15 is the right value to use.
@raaju-shiv I think a value in the range 0.1~1.0 would be better.
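For reference, the scaling rule implied in the question above can be written out explicitly. This is a sketch of that arithmetic only (the helper name `lam_recon_for` is mine, not from the repo): the per-pixel reconstruction weight of 0.0005 multiplied by the number of pixel values in one image.

```python
def lam_recon_for(height, width, channels, per_pixel_weight=0.0005):
    """Scale the reconstruction-loss weight with the image size,
    so the reconstruction term stays comparable across resolutions."""
    return per_pixel_weight * height * width * channels

print(lam_recon_for(28, 28, 1))    # MNIST: 0.0005 * 784  -> ~0.392
print(lam_recon_for(100, 100, 3))  # 100x100 RGB: 0.0005 * 30000 -> 15
```

As the reply notes, the formula gives 15 for 100×100 RGB, but a smaller value in the 0.1~1.0 range may work better in practice.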
I am noticing that even after reducing all the parameters that control the model's size, an epoch still takes 20 seconds to complete. Any idea why that is?
So the summary looks like this: