When using the flag_gems library, a warning is triggered after calling flag_gems.enable() or on entering a with flag_gems.use_gems() block:
UserWarning: Warning only once for all operators, other operators may also be overrided.
This warning indicates that a previously registered kernel for a specific operator is being overridden. It appears in the following context:
In [1]: import flag_gems
In [2]: flag_gems.enable()
/usr/local/lib/python3.10/dist-packages/torch/library.py:169: UserWarning: Warning only once for all operators, other operators may also be overrided.
Overriding a previously registered kernel for the same operator and the same dispatch key
operator: aten::add.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> Tensor
registered at aten/src/ATen/RegisterSchema.cpp:6
dispatch key: CUDA
previous kernel: registered at ../aten/src/ATen/LegacyBatchingRegistrations.cpp:1079
new kernel: registered at /dev/null:241 (Triggered internally at ../aten/src/ATen/core/dispatch/OperatorEntry.cpp:153.)
self.m.impl(name, dispatch_key if dispatch_key != "" else "CompositeImplicitAutograd", fn)
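
The same warning is emitted for the context-manager form mentioned above. A minimal usage sketch (assuming a CUDA device is available; the tensor shapes are illustrative):

import torch
import flag_gems

# The registration that triggers the warning happens when the gems
# kernels are installed; inside the block, eligible ops route to them.
x = torch.randn(1024, device="cuda")
y = torch.randn(1024, device="cuda")
with flag_gems.use_gems():
    z = x + y  # aten::add.Tensor now dispatches to the flag_gems kernel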
The warning is also displayed when running unit tests with pytest.
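
The log above also shows what the dispatcher objects to: a second kernel being installed for aten::add.Tensor under the CUDA dispatch key. The same message can be reproduced outside flag_gems with torch.library; the following is a hypothetical illustration, not flag_gems code:

import torch

# Registering another kernel for an operator that already has one on
# the same dispatch key is exactly what emits this warning (once per
# process, as the message itself says).
lib = torch.library.Library("aten", "IMPL")  # keep the handle alive

def my_add(self, other, alpha=1):
    # Do not call aten::add.Tensor here: the CUDA key would re-dispatch
    # back into this function. Express the result through other ops.
    return self - (other * -alpha)

lib.impl("add.Tensor", my_add, "CUDA")  # triggers the UserWarning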
Related PR
To address this issue, a recent pull request was made to suppress this specific warning when flag_gems registers its kernels with the PyTorch dispatcher.
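
The PR's exact change is not reproduced here, but the effect can be sketched at the call site with Python's standard warnings module; the filter pattern below is an assumption based on the message text:

import warnings
import flag_gems

# Silence only this specific UserWarning while the gems kernels are
# being registered; other warnings are unaffected.
with warnings.catch_warnings():
    warnings.filterwarnings(
        "ignore",
        message="Warning only once for all operators",
        category=UserWarning,
    )
    flag_gems.enable()

For pytest runs, the same filter can be applied declaratively with a filterwarnings = ignore:Warning only once for all operators entry in the test configuration.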