AlexDBlack opened 5 years ago
Not a complete solution, but it might help to allow the use of `-2` in reshapes to mean "keep the same dimension as in the original array". E.g. `SD.zero(4, 5, 3).reshape(-2, -1)` would have shape `[4, 15]`. Optionally, you could limit `-2`s to all be before any non-`-2`s, so the positional mapping stays unambiguous.

It wouldn't quite fix your example, but it would make working with variable minibatches much easier, especially for things like flatten.
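For concreteness, here's a minimal sketch of how such a spec could be resolved against an input shape. Nothing here is existing ND4J API; `ReshapeSpec` and `resolve` are hypothetical names for illustration only:

```java
import java.util.Arrays;

public class ReshapeSpec {
    // Hypothetical resolution of the proposed spec: -2 copies the input
    // dimension at the same position, -1 infers the remaining size from the
    // total element count, and positive values pass through unchanged.
    static long[] resolve(long[] inShape, long[] spec) {
        long[] out = new long[spec.length];
        long known = 1;
        int inferIdx = -1;
        for (int i = 0; i < spec.length; i++) {
            if (spec[i] == -2) {
                out[i] = inShape[i];      // keep dimension i of the input
            } else if (spec[i] == -1) {
                inferIdx = i;             // defer: infer from remaining elements
                continue;
            } else {
                out[i] = spec[i];
            }
            known *= out[i];
        }
        long total = 1;
        for (long d : inShape) total *= d;
        if (inferIdx >= 0) out[inferIdx] = total / known;
        return out;
    }

    public static void main(String[] args) {
        // [4, 5, 3] with spec [-2, -1] -> [4, 15]
        System.out.println(Arrays.toString(
                resolve(new long[]{4, 5, 3}, new long[]{-2, -1})));
    }
}
```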
@rnett That's not a bad idea. It wouldn't solve all cases - consider `[a,b,c].reshape(1,a,b,c)` - you can't use `-2` there, since it inserts a new leading dimension rather than keeping an existing one. My only thought is whether this would deviate from numpy/TF behaviour (they don't support `-2` in reshapes AFAIK, only `-1`s).
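For the record, that particular `[a,b,c] -> [1,a,b,c]` case can already be handled with `expandDims`, which works with fully dynamic shapes. A minimal sketch, assuming the usual SameDiff `placeHolder` and `expandDims` signatures:

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.api.buffer.DataType;

public class PrependDim {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        // Fully dynamic input shape [a, b, c]
        SDVariable x = sd.placeHolder("x", DataType.FLOAT, -1, -1, -1);
        // [a, b, c] -> [1, a, b, c]: a new leading dim is inserted, which the
        // proposed -2 (which only *keeps* existing dims) cannot express
        SDVariable y = sd.expandDims(x, 0);
    }
}
```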
Reminder: once this is implemented, let's update capsnet as discussed here - https://github.com/deeplearning4j/deeplearning4j/pull/7391#issuecomment-480116153
So, say I do `x.shape()` -> SDVariable with values `[16,8,3,3]` on some iteration. Now I want to make a new variable with shape `[16,8,10,10]` - i.e., "same as x but with the last 2 values changed to some constant". Currently I can only think of ugly ways to do this: shape + slice + concat with a constant, or alternatively scatter ops...
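For reference, a minimal sketch of the shape + slice + concat approach just described, assuming the SameDiff `slice(input, begin, size)`, `concat`, and `fill` signatures:

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.api.buffer.DataType;
import org.nd4j.linalg.factory.Nd4j;

public class DynamicShapeWorkaround {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        // x has a dynamic shape, e.g. [16, 8, 3, 3] on some iteration
        SDVariable x = sd.placeHolder("x", DataType.FLOAT, -1, -1, -1, -1);

        // Keep the first two entries of x's runtime shape vector...
        SDVariable leading = sd.slice(x.shape(), new int[]{0}, new int[]{2});
        // ...and append the constant trailing dims [10, 10]
        SDVariable trailing = sd.constant(Nd4j.createFromArray(10L, 10L));
        SDVariable newShape = sd.concat(0, leading, trailing);

        // New variable of shape [16, 8, 10, 10], filled with zeros
        SDVariable y = sd.fill(newShape, DataType.FLOAT, 0.0);
    }
}
```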