Closed: thelukester92 closed this issue 7 years ago
Currently, all layers provide inputs() and outputs() methods that report the sizes of their inputs and outputs. Perhaps we should make this interface more informative. For example, instead of just outputs(), I think we need:
```cpp
size_t channels();           // returns the number of output channels
size_t dims();               // returns the number of output dimensions
size_t dim_vals(size_t dim); // returns the number of output values in the specified dimension
```
outputs() would be equivalent to:

```cpp
size_t outputs()
{
	size_t n = channels();
	for(size_t i = 0; i < dims(); i++)
		n *= dim_vals(i);
	return n;
}
```
Fully-connected layers would always output a one-dimensional vector with one channel. Element-wise layers would pass these meta-values straight through. Pooling layers would scale down the dim_vals. Convolutional layers would derive their output dimensions from their filter configuration, and so on.
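To make the idea concrete, here is a minimal sketch of how a couple of layer types could implement the proposed interface. The class and method names (Block, FullyConnected, MaxPooling) are illustrative, not the actual Waffles API:

```cpp
#include <cstddef>

// Hypothetical base class exposing the proposed meta-value interface
class Block
{
public:
	virtual ~Block() {}
	virtual size_t channels() = 0;           // number of output channels
	virtual size_t dims() = 0;               // number of output dimensions
	virtual size_t dim_vals(size_t dim) = 0; // size of the given dimension

	// outputs() derived from the meta-values, as proposed above
	size_t outputs()
	{
		size_t n = channels();
		for(size_t i = 0; i < dims(); i++)
			n *= dim_vals(i);
		return n;
	}
};

// Fully-connected: always one channel, one dimension
class FullyConnected : public Block
{
public:
	explicit FullyConnected(size_t units) : m_units(units) {}
	size_t channels() override { return 1; }
	size_t dims() override { return 1; }
	size_t dim_vals(size_t) override { return m_units; }
private:
	size_t m_units;
};

// Pooling: passes channels through and scales down each dimension
class MaxPooling : public Block
{
public:
	MaxPooling(Block& upstream, size_t pool) : m_up(upstream), m_pool(pool) {}
	size_t channels() override { return m_up.channels(); }
	size_t dims() override { return m_up.dims(); }
	size_t dim_vals(size_t dim) override { return m_up.dim_vals(dim) / m_pool; }
private:
	Block& m_up;
	size_t m_pool;
};
```

With this shape, a downstream block never needs to know the concrete type of its upstream block; it only queries the meta-values.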
New changes have made that whole interface irrelevant. However, GLayerConvolutional2D is presently commented out, so let's repurpose this issue to track getting GLayerConvolutional2D uncommented again.
GLayerConvolutional2D is now back. To support thread safety, the forwardProp, backProp, and updateGradient methods now copy two or three GLayerConvolutional2D::Image objects per call. Luke, I think that should be reasonably efficient, but you might want to take a look in case you have a better idea.
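For anyone skimming this later, the copy-per-call pattern looks roughly like the sketch below. This is not the actual GLayerConvolutional2D code; the Image struct and sumRegion function are invented for illustration. The idea is that Image is a cheap view (pointer plus metadata), so each thread copies it locally and mutates only its own cursor fields, while the underlying pixel buffer is shared read-only:

```cpp
#include <cstddef>
#include <vector>

// Illustrative view-like struct: cheap to copy, points into shared data
struct Image
{
	const double* data;          // shared, read-only pixel buffer
	size_t width, height, channels;
	size_t dx, dy;               // per-call offsets that a method mutates

	double at(size_t x, size_t y, size_t c) const
	{
		return data[((dy + y) * width + (dx + x)) * channels + c];
	}
};

// Copies the shared view into a local before mutating its offsets,
// so concurrent callers never race on dx/dy
double sumRegion(const Image& shared, size_t ox, size_t oy, size_t w, size_t h)
{
	Image local = shared; // cheap copy; offset mutation is now thread-private
	local.dx = ox;
	local.dy = oy;
	double s = 0.0;
	for(size_t y = 0; y < h; y++)
		for(size_t x = 0; x < w; x++)
			s += local.at(x, y, 0);
	return s;
}
```

Since only the small header is copied, not the pixel data, the per-call overhead should stay constant regardless of image size.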
Since the original intent of this issue report was to improve the interface, we should probably now take another look at our interface for all of the block types, and open issues to improve them.
This isn't working:
The reason is that the downstream GLayerConvolutional2D expects an upstream GLayerConvolutional2D from which to determine its input image dimensions. I'm not sure how to make this work.
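One possible direction (purely hypothetical, not the current Waffles API): let the convolutional block accept explicit input image dimensions at construction, and only validate that the upstream block's flat output count matches, rather than requiring the upstream block to be a GLayerConvolutional2D. A sketch:

```cpp
#include <cstddef>

// Hypothetical conv block that takes input dimensions explicitly,
// so any upstream block type can feed it
class Conv2D
{
public:
	Conv2D(size_t inW, size_t inH, size_t inC, size_t kernel)
		: m_inW(inW), m_inH(inH), m_inC(inC), m_kernel(kernel) {}

	size_t inputs() const { return m_inW * m_inH * m_inC; }

	// "valid" convolution output size (no padding, stride 1)
	size_t outW() const { return m_inW - m_kernel + 1; }
	size_t outH() const { return m_inH - m_kernel + 1; }

	// Validate against any upstream block's flat output count
	bool accepts(size_t upstreamOutputs) const
	{
		return upstreamOutputs == inputs();
	}
private:
	size_t m_inW, m_inH, m_inC, m_kernel;
};
```

The trade-off is that the user must restate the image shape when the upstream block cannot supply it, but it removes the hard dependency on the upstream block's concrete type.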