Integrating the Best of TF into PyTorch, for Machine Learning, Natural Language Processing, and Text Generation. This is part of the CASL project: http://casl-project.ai/
When subclassing existing modules, a common use case I've encountered is to automatically fill certain arguments of the super class constructor based on HParams values. By design, HParams are realized in the base class (ModuleBase), where the default values are filled in and user-provided values are type-checked. However, in order to call the base class constructor, all its arguments must be filled, and these arguments may rely on default HParams values, which are not available yet.
As a result, one must explicitly realize the HParams in the derived class. But there is no way to prevent the base class from realizing the HParams again, and in cases with multiple levels of inheritance, HParams realization could happen multiple times.
For example, let's say I'm writing a CharCNN module and want to subclass texar.modules.encoders.Conv1DEncoder. The signature of the constructor for Conv1DEncoder is: __init__(self, in_channels: int, in_features: Optional[int] = None, hparams=None). I have a field named "char_embed_dim" in default_hparams, which I am using to fill the in_channels argument. To do that, I have to realize HParams.
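To make the double-realization concrete, here is a minimal, self-contained sketch of the situation. The HParams, ModuleBase, and Conv1DEncoder classes below are simplified stand-ins for the real texar classes (no type-checking, flat dicts only), and the counter is only there to show how many times realization happens:

```python
from typing import Any, Dict, Optional

# Simplified stand-ins for texar's HParams / ModuleBase / Conv1DEncoder,
# just to reproduce the double-realization problem (not the real code).
realization_count = 0

class HParams:
    def __init__(self, hparams: Optional[Dict[str, Any]],
                 default_hparams: Dict[str, Any]):
        global realization_count
        realization_count += 1  # count how often realization happens
        merged = dict(default_hparams)
        if hparams is not None:
            merged.update(hparams)  # real HParams also type-checks here
        self._params = merged

    def __getattr__(self, name: str) -> Any:
        return self._params[name]

class ModuleBase:
    def __init__(self, hparams=None):
        # Base class realizes HParams: fills defaults, type-checks.
        self._hparams = HParams(hparams, self.default_hparams())

    @staticmethod
    def default_hparams() -> Dict[str, Any]:
        return {}

class Conv1DEncoder(ModuleBase):
    def __init__(self, in_channels: int, hparams=None):
        self.in_channels = in_channels
        super().__init__(hparams=hparams)

class CharCNN(Conv1DEncoder):
    def __init__(self, hparams=None):
        # Must realize HParams here just to read "char_embed_dim"...
        hp = HParams(hparams, self.default_hparams())
        # ...but ModuleBase realizes them again in its constructor.
        super().__init__(in_channels=hp.char_embed_dim, hparams=hparams)

    @staticmethod
    def default_hparams() -> Dict[str, Any]:
        return {"char_embed_dim": 100}

model = CharCNN()
print(realization_count)  # realized twice for a single module
```

With deeper inheritance chains the counter grows accordingly: each level that needs an HParams value before calling super() adds one more realization.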
One possible solution is to change the constructor of HParams: __init__(self, hparams, default_hparams). If hparams is already an HParams instance, the constructor skips realization and uses it as-is. This way, when a subclass realizes HParams, it can pass the realized HParams into the super class constructor, thus preventing repeated realization.
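A sketch of what that constructor change might look like (simplified, without the real type-checking; not the actual texar implementation):

```python
from typing import Any, Dict, Optional, Union

class HParams:
    """Proposed constructor: accept an already-realized HParams and
    short-circuit, so realization happens at most once per module."""

    def __init__(self,
                 hparams: Union["HParams", Dict[str, Any], None],
                 default_hparams: Dict[str, Any]):
        if isinstance(hparams, HParams):
            # Already realized by a subclass: reuse its values directly
            # instead of merging and type-checking a second time.
            self._params = hparams._params
            return
        merged = dict(default_hparams)
        if hparams is not None:
            merged.update(hparams)  # real HParams also type-checks here
        self._params = merged

    def __getattr__(self, name: str) -> Any:
        return self._params[name]

# A subclass would then realize once and forward the instance:
hp = HParams({"char_embed_dim": 50}, {"char_embed_dim": 100})
hp_again = HParams(hp, {"char_embed_dim": 100})  # no re-realization
```

Since realized and unrealized values flow through the same parameter, this change is backward-compatible: existing call sites that pass dicts or None behave exactly as before.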