Currently, `Flatten` layers are skipped in the PyTorch parser. Additionally, this operation is not flagged as unsupported, so the model will parse but exhibit incorrect behavior. This PR adds support for these layers by adding them to the parser. The optimizer pass that converts operations to `channels_last` for PyTorch models is adapted to transpose the input of the `Flatten` operation so that the output elements end up in the correct order.
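To make the layout issue concrete, here is a minimal, self-contained sketch (not the optimizer pass itself) showing why flattening a channels-last tensor directly scrambles the element order, and why transposing back to channels-first before the flatten restores the order the original PyTorch model produces:

```python
import torch

# Toy NCHW tensor, as a PyTorch model would see it.
x = torch.arange(2 * 3 * 3).reshape(1, 2, 3, 3)
# The same data in channels_last (NHWC) layout, as the optimizer pass produces.
x_cl = x.permute(0, 2, 3, 1)

flat_ref = torch.flatten(x, start_dim=1)       # reference: elements in (C, H, W) order
flat_cl = torch.flatten(x_cl, start_dim=1)     # naive flatten of NHWC: (H, W, C) order
flat_fix = torch.flatten(x_cl.permute(0, 3, 1, 2), start_dim=1)  # transpose first, then flatten

print(torch.equal(flat_ref, flat_cl))   # False: order is scrambled
print(torch.equal(flat_ref, flat_fix))  # True: transpose restores the original order
```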
Type of change
For a new feature or function, please create an issue first to discuss it with us before submitting a pull request.
Note: Please delete options that are not relevant.
[x] Bug fix (non-breaking change that fixes an issue)
[x] New feature (non-breaking change which adds functionality)
Tests
Verified that a simple model with a Conv2D and a `Flatten` operation gives correct results, both for the `torch.nn.Flatten` and `torch.flatten()` interfaces to this operation in PyTorch. A pytest has been added to verify this.
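For illustration, a rough sketch of what such a pytest could look like. The exact hls4ml calls and their arguments (`config_from_pytorch_model`, `convert_from_pytorch_model`, the chosen shapes and tolerances) are assumptions here and may differ between hls4ml versions and from the test actually added in this PR:

```python
import numpy as np
import torch
import torch.nn as nn

import hls4ml


class ConvFlattenModel(nn.Module):
    """Toy model with a Conv2D followed by a Flatten (hypothetical test fixture)."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 4, kernel_size=3)
        self.flatten = nn.Flatten()

    def forward(self, x):
        return self.flatten(self.conv(x))


def test_conv2d_flatten():
    model = ConvFlattenModel().eval()
    x = torch.randn(1, 3, 8, 8)

    # Assumed hls4ml API; argument names and order may differ between versions.
    config = hls4ml.utils.config_from_pytorch_model(model, granularity='model')
    hls_model = hls4ml.converters.convert_from_pytorch_model(
        model,
        (None, 3, 8, 8),
        hls_config=config,
        output_dir='hls4mlprj_pytorch_flatten',
        io_type='io_parallel',
    )
    hls_model.compile()

    y_ref = model(x).detach().numpy().flatten()
    y_hls = hls_model.predict(x.detach().numpy()).flatten()
    np.testing.assert_allclose(y_hls, y_ref, rtol=1e-2, atol=1e-2)
```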
LGTM. I also added support for fully parsing `start_dim` and `end_dim` of `nn.Flatten` to our `Reshape`. Once tests pass, I'll merge this if there are no objections in the meantime.
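As an aside, the `start_dim`/`end_dim` handling can be thought of as computing the target shape of an equivalent `Reshape`. The helper below is purely illustrative (it is not the parser code in this PR): it collapses the dimensions in `[start_dim, end_dim]` into one, matching `nn.Flatten` semantics.

```python
def flatten_target_shape(input_shape, start_dim=1, end_dim=-1):
    """Target shape for a Reshape equivalent to nn.Flatten(start_dim, end_dim).

    `input_shape` includes the batch dimension, mirroring how PyTorch counts
    dimensions. Illustrative helper only, not the code added in this PR.
    """
    ndim = len(input_shape)
    start = start_dim % ndim
    end = end_dim % ndim
    collapsed = 1
    for d in input_shape[start : end + 1]:
        collapsed *= d
    return list(input_shape[:start]) + [collapsed] + list(input_shape[end + 1 :])


# nn.Flatten() defaults (start_dim=1, end_dim=-1): flatten everything but the batch dim.
assert flatten_target_shape((1, 2, 3, 3)) == [1, 18]
# nn.Flatten(start_dim=2, end_dim=3): flatten only the spatial dims.
assert flatten_target_shape((1, 2, 3, 3), start_dim=2, end_dim=3) == [1, 2, 9]
```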
Checklist
[x] I have run `pre-commit` on the files I edited or added.