lunixbochs opened this issue 4 years ago
Note: it looks like this generally works, but also SAME convolution breaks when I do this - it seems SAME expects to know the input dimensions at compile time #891
Looks like #891 has been addressed. @lunixbochs - you say things are generally working. Should this issue be closed? If not, please provide details on what is still not working.
from #891:
```python
flexible = ct.RangeDim()
@mb.program(input_specs=[mb.TensorSpec(shape=(1, 100, 100, flexible.symbol)),])
```
Is it possible to use `RangeDim` like this without manually using the `.symbol` property? It's been a while, but I think I had to dig that out of the source, and it wasn't really documented.
It would be nice to use a `RangeDim` directly in the `mb.TensorSpec()` shape here instead of the symbol indirection.

Context:
I'm trying to write a converter for a model type with flexible input dimensions, because the automatic conversion doesn't work and it's not worth my time fixing it. (I'm already generating the PyTorch model from yet another framework, flashlight/wav2letter, so I'd rather generate the right ops directly for MIL than try to make the stacked abstractions make sense.)
I originally got it working with the old-style NeuralNetworkBuilder, but couldn't figure out how to specify flexible input features. I just found the MIL builder and ported my code to that. It seems to accept a `RangeDim` in the program decorator for a flexible input shape as shown, but only if I use the `.symbol` property on the `RangeDim` (which I had to dig around in the source to find). Passing the `RangeDim` directly feels more intuitive to me.
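To make the request concrete, here is a hedged sketch based on the snippet quoted from #891 above. It assumes the coremltools 4.x-era MIL builder (`coremltools.converters.mil.Builder`) and the undocumented `RangeDim.symbol` property described in this thread; the commented-out line at the end shows the hypothetical direct usage being requested, which was not supported at the time. This is an illustration of the API shape, not a verified implementation:

```python
import coremltools as ct
from coremltools.converters.mil import Builder as mb

# Current workaround: extract the underlying symbol from the RangeDim.
# (.symbol is undocumented; the reporter found it by reading the source.)
flexible = ct.RangeDim()

@mb.program(input_specs=[mb.TensorSpec(shape=(1, 100, 100, flexible.symbol))])
def prog(x):
    # Body elided; any MIL ops built on x would go here.
    return x

# Requested behavior (hypothetical, not supported at the time of this issue):
# pass the RangeDim directly, with no .symbol indirection.
# @mb.program(input_specs=[mb.TensorSpec(shape=(1, 100, 100, flexible))])
```

The design question is just ergonomics: `TensorSpec` could unwrap a `RangeDim` to its symbol internally, so callers never need to know the symbol type exists.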