I was running a few checks on how I expected convolutions with random weights to affect the KA measure, and it seems that when I run the slm model with just an fbcorr layer, I get very unexpected output (as measured by the KA metric).
For example, the KA mean drops well below the pixel baseline for an slm with a single fbcorr layer, while performing the same operation by hand with other convolution code shows an improvement over pixels.
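For concreteness, here is a minimal sketch of the kind of by-hand check described above, assuming KA here means centered kernel alignment between the feature Gram matrix and an ideal label Gram matrix; the random filterbank, toy data, and alignment formula are illustrative stand-ins, not the actual slm/fbcorr code:

```python
import numpy as np
from scipy.signal import correlate2d

rng = np.random.RandomState(0)

def random_fbcorr(images, n_filters=16, fsize=5):
    # Correlate each image with a bank of random filters
    # (a stand-in for an slm fbcorr layer, not the real one).
    filters = rng.randn(n_filters, fsize, fsize)
    feats = []
    for img in images:
        maps = [correlate2d(img, f, mode='valid') for f in filters]
        feats.append(np.concatenate([m.ravel() for m in maps]))
    return np.asarray(feats)

def kernel_alignment(X, y):
    # Centered kernel alignment between the linear feature kernel
    # K = X X^T and the ideal label kernel Y = y y^T
    # (an assumption about what "KA" computes, not the project's code).
    K = X @ X.T
    Y = np.outer(y, y)
    n = len(y)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    Kc, Yc = H @ K @ H, H @ Y @ H
    return np.sum(Kc * Yc) / (np.linalg.norm(Kc) * np.linalg.norm(Yc))

# Toy data: 20 random 16x16 "images" with +/-1 labels.
images = rng.randn(20, 16, 16)
y = np.sign(rng.randn(20))

pixels_ka = kernel_alignment(images.reshape(20, -1), y)
fbcorr_ka = kernel_alignment(random_fbcorr(images), y)
print(pixels_ka, fbcorr_ka)
```

With this hand-rolled version, the random-filterbank features should not fall far below pixels, which is what makes the slm output surprising.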
Note that a lot of slm models will perform worse than pixels. Try with remove_mean=False for the first layers. Sorry, I won't be able to debug this further.
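If remove_mean controls whether the local mean is subtracted from the input before the filterbank correlation (an assumption about the slm option; the snippet below is not the actual implementation), here is a quick numpy illustration of why toggling it could swing the KA numbers:

```python
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.signal import correlate2d

# Hypothetical illustration of what remove_mean might toggle: whether
# the local mean is subtracted from the input before correlation.
rng = np.random.RandomState(0)
img = rng.randn(32, 32) + 5.0          # image with a large DC offset
filt = rng.randn(5, 5)

raw = correlate2d(img, filt, mode='valid')
demeaned = correlate2d(img - uniform_filter(img, size=5), filt,
                       mode='valid')

# With a big DC component in the input, mean removal drastically
# changes the response statistics, which could plausibly explain the
# KA differences reported above.
print(raw.std(), demeaned.std())
```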
Here's an IPython notebook demonstrating this (the slm experiments are at the end): http://nbviewer.ipython.org/urls/dl.dropbox.com/u/1688838/explore_ka_convolutions.ipynb
Can anyone reproduce this?