oke-aditya opened 2 years ago
I'm a junior developer and all my thoughts are probably a step too far. So please forgive me if I'm wrong.
Don't worry @oke-aditya , that is what issues are for. You've put in quite some work to compile this, so even if we totally disagree (not saying that we are) you don't need to be sorry at all. Thanks a lot for your work!
I'm all for reducing duplications. For my own projects I have written ConvBNReLU quite a few times, so having this in a central location is a good thing to have. I can't say much about the other blocks that you have proposed, but even if downstream libraries don't need them, I consider reducing duplications in our own code a win.
"I'm not sure whether layers should be torchvision.layers or torchvision.nn"

For the naming issue I would go for torchvision.nn, to follow torch.nn and torchtext.nn. In our implementations, I would promote import torchvision.nn as tvnn as the canonical import, to avoid possible conflicts with the imports of the other domain libraries.
"Should torchvision.nn contain losses?"

I don't see why not. I'm not sure we want to push it as a feature, i.e. make it a priority to add something there, but if someone wants to contribute a loss I think we can accept it. So I would only put the general model blocks in there at first and see what the community wants.
"Portability"

Since torchvision is a proper library (it can be installed and imported from), I wouldn't worry about the needs of someone copy-pasting our code. Plus, in the MobileNetV2 example you gave, we import internal stuff, so copy-pasting the code is already not possible.
"Layer Customization"

Yes, there is always a trade-off between customizability and ease of use. Given that we implement quite a few important models, IMO we can use them as the "limit" for customizability. That means all of our own models should be covered without hacky workarounds or special casing. Everything beyond that probably warrants a custom implementation by the user. We can always make it more general in the future if there is a need.
"Additionally I will recommend not hurrying this feature, we could create torchvision.experimental.nn and start working out things. (or probably I can try in a fork)"

The standard way is to "mark" (in the documentation as well as in the release notes) a module as being in a prototype state. The aim of this is twofold:
Thanks @oke-aditya for bringing up this topic! Just sharing my 2 cents. I think there are two kinds of APIs: developer APIs and public APIs. Developer APIs are the helper functions/utils that we use within the codebase to improve dev efficiency and avoid code duplication. Public APIs are the products we offer to the community and whose backward compatibility we commit to.

For example, I would treat layers like ConvNormAct as a developer API, since it's just a shortcut for composing three operators together. Can we put this kind of API under torchvision.ops.misc and only use it within torchvision?
For layers like SqueezeExcitation that are backed by papers and referenced in many model architectures, we could make them public to the community. Putting them in torchvision.nn or torchvision.ops both sound good to me.
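For concreteness, a hedged sketch of what such a public SqueezeExcitation block could look like, following the squeeze-and-excitation design from the paper; the constructor arguments here are illustrative, not an agreed-upon API:

```python
import torch
from torch import nn

class SqueezeExcitation(nn.Module):
    """Sketch of a generic squeeze-and-excitation block.

    The activations are configurable instead of hard-coded,
    in the spirit of the genericity discussed in this issue.
    """
    def __init__(self, channels, squeeze_ratio=4,
                 activation=nn.ReLU, scale_activation=nn.Sigmoid):
        super().__init__()
        squeeze = channels // squeeze_ratio
        self.avgpool = nn.AdaptiveAvgPool2d(1)      # "squeeze" step
        self.fc1 = nn.Conv2d(channels, squeeze, 1)  # channel reduction
        self.fc2 = nn.Conv2d(squeeze, channels, 1)  # channel restoration
        self.activation = activation()
        self.scale_activation = scale_activation()

    def forward(self, x):
        s = self.avgpool(x)
        s = self.activation(self.fc1(s))
        s = self.scale_activation(self.fc2(s))
        return x * s  # re-scale the input channel-wise

x = torch.randn(2, 64, 8, 8)
out = SqueezeExcitation(64)(x)
# out has the same shape as x
```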
Hi @kazhang, I agree with your thoughts. We should expose only those layers that are referenced in model architectures. It would be nice to reduce internal code duplication by keeping them under nn.misc.py, or perhaps naming these classes with a leading _, or keeping them in a _.py file in the layers/nn folder. I would prefer nn, so as to separate model building logic from model pre/post processing and helper logic.
Picks up from discussion in https://github.com/pytorch/vision/pull/4293#discussion_r696471536
🚀 Feature
API for Commonly used Layers in building models.
Motivation
Huge code duplication is involved in building the very basic blocks of large neural networks. Some of these blocks can be standardized and re-used. They could also be offered to end users as an API so that downstream libraries can build models easily. Examples of such duplication are SqueezeExcitation, ConvBNReLU, etc.
Pitch
Create an API called torchvision.nn or torchvision.layers.
Our implementations need to be generic, not locked to particular activation functions, channel counts, etc. These can simply be classes based on nn.Module. An example, which a user can then use directly:
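A minimal sketch of what such a generic block could look like; the names ConvBNActivation, norm_layer and activation_layer are illustrative, not an existing torchvision API:

```python
import torch
from torch import nn

class ConvBNActivation(nn.Sequential):
    """Hypothetical generic conv + norm + activation block.

    The norm and activation layers are swappable rather than
    hard-coded, as proposed in this issue.
    """
    def __init__(self, in_channels, out_channels, kernel_size=3,
                 stride=1, groups=1,
                 norm_layer=nn.BatchNorm2d, activation_layer=nn.ReLU):
        padding = (kernel_size - 1) // 2  # "same" padding for odd kernels
        super().__init__(
            nn.Conv2d(in_channels, out_channels, kernel_size, stride,
                      padding, groups=groups, bias=False),
            norm_layer(out_channels),
            activation_layer(inplace=True),
        )

# a user swaps the activation without touching the block itself
block = ConvBNActivation(3, 32, activation_layer=nn.ReLU6)
out = block(torch.randn(1, 3, 32, 32))
# out.shape == (1, 32, 32, 32)
```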
Also layers can be mixed into new custom models.
E.g.
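A sketch of mixing such a shared block into a new model; TinyNet and the inline ConvBNReLU are made-up stand-ins for what the API would provide:

```python
import torch
from torch import nn

# stand-in for a shared layer the proposed API would provide
class ConvBNReLU(nn.Sequential):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__(
            nn.Conv2d(in_ch, out_ch, 3, stride, 1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

class TinyNet(nn.Module):
    """Custom model composed from the shared block."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            ConvBNReLU(3, 16, stride=2),
            ConvBNReLU(16, 32, stride=2),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        x = self.pool(self.features(x)).flatten(1)
        return self.classifier(x)

logits = TinyNet()(torch.randn(2, 3, 32, 32))
# logits.shape == (2, 10)
```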
Points to Consider
We have torchvision.ops, then why layers?

Ops are transforms that do manipulations for the pre-processing and post-processing of structures such as Boxes, Masks, Anchors, etc. These are not used in "model building" but are optional steps for specific models; examples are NMS, IoU, RoI, etc. One doesn't need ops for every model. E.g. you don't need RoIAlign for DETR, and you don't compute IoU for segmentation masks.
With a separate API there can be a clear distinction between layers for models and operators for tasks such as detection and segmentation.

Should torchvision.nn contain losses?

This is tricky, and for now I see no clear winner. PyTorch does not differentiate the API for losses and layers: e.g. we have nn.Conv2d, which builds a convolutional layer, but also nn.CrossEntropyLoss and nn.MSELoss, which build loss functions.

I'm not sure whether layers should be torchvision.layers or torchvision.nn (if implemented, of course). Users don't need to worry about colliding namespaces. They can do:
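The code sample that followed here was lost in extraction; presumably it showed aliased imports along these lines (torchvision.nn does not exist, so its import is left commented out):

```python
import torch.nn as nn
# import torchvision.nn as tvnn  # hypothetical, once the module exists

# the two namespaces stay distinct, so names do not collide:
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())
# se = tvnn.SqueezeExcitation(8)  # hypothetical usage
```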
Note that nn seems to be the convention adopted by torchtext.

Other points to consider:
Portability: - Currently most of the torchvision models are easily copy-pasteable. E.g. we can easily copy-paste the mobilenetv2.py file and edit it on the go to customize models. By adding such an API we would reduce internal code duplication, but these files would no longer be single standalone files for models.
Layer Customization: - Layer customization has far too many options to consider. E.g. there are several possible implementations of the BasicBlock of ResNet, or slight modifications of the inverted residual layer. One can't create an implementation that will suit everyone's needs; if one tries to, the API becomes significantly complicated.
TorchScript: - We shouldn't hamper the TorchScript compatibility of any model while implementing the above API.
Additional context
Some candidates for layers
Also quantizable versions of these! Quantizable versions will allow users to directly fuse the models.
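As an illustration of the fusion point, a sketch using PyTorch's eager-mode fusion utility on a hand-rolled conv-BN-ReLU block; the ConvBNReLU class here is a stand-in, not an existing torchvision layer:

```python
import torch
from torch import nn
from torch.ao.quantization import fuse_modules

class ConvBNReLU(nn.Module):
    """Stand-in for a shared conv + BN + ReLU layer."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))

block = ConvBNReLU(3, 16).eval()  # conv-bn fusion requires eval mode
fused = fuse_modules(block, [["conv", "bn", "relu"]])
# bn and relu are folded into the conv; the leftover slots become Identity
```

With a standardized block whose submodules have known names, users (and torchvision itself) could apply this fusion uniformly before quantization.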
Additionally, I will recommend not hurrying this feature; we could create torchvision.experimental.nn and start working things out (or probably I can try in a fork). Linking plans: #3911 #4187

P.S. I'm a junior developer and all my thoughts are probably a step too far. So please forgive me if I'm wrong.
cc @datumbox @pmeier @NicolasHug @fmassa