FlagOpen / FlagGems

FlagGems is an operator library for large language models implemented in Triton Language.

Warning Triggered During PyTorch Dispatch #191

Closed 2niuhe closed 1 month ago

2niuhe commented 1 month ago

When using the flag_gems library, specifically after calling flag_gems.enable() or entering a with flag_gems.use_gems() block, a warning is triggered:

UserWarning: Warning only once for all operators, other operators may also be overrided.

The warning indicates that a previously registered kernel for an operator is being overridden by a flag_gems implementation. It appears in the following context:

In [1]: import flag_gems

In [2]: flag_gems.enable()
/usr/local/lib/python3.10/dist-packages/torch/library.py:169: UserWarning: Warning only once for all operators,  other operators may also be overrided.
  Overriding a previously registered kernel for the same operator and the same dispatch key
  operator: aten::add.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> Tensor
    registered at aten/src/ATen/RegisterSchema.cpp:6
  dispatch key: CUDA
  previous kernel: registered at ../aten/src/ATen/LegacyBatchingRegistrations.cpp:1079
       new kernel: registered at /dev/null:241 (Triggered internally at ../aten/src/ATen/core/dispatch/OperatorEntry.cpp:153.)
  self.m.impl(name, dispatch_key if dispatch_key != "" else "CompositeImplicitAutograd", fn)
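
For context, the warning originates from PyTorch's torch.library registration path: registering an implementation for a dispatch key that already has a kernel emits exactly this message (the traceback above points at torch/library.py). Below is a minimal sketch that reproduces the same warning; the stub kernel and registration are illustrative and are not how flag_gems itself installs its Triton kernels.

```python
import torch

def triton_add_stub(self, other, alpha=1):
    # A real override would launch a Triton kernel here; this stub only
    # exists to demonstrate the registration path.
    raise NotImplementedError("illustrative stub")

# aten::add.Tensor already has a CUDA kernel, so re-registering it for
# the CUDA dispatch key triggers the same "Overriding a previously
# registered kernel" UserWarning quoted above.
lib = torch.library.Library("aten", "IMPL")
lib.impl("add.Tensor", triton_add_stub, "CUDA")
```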

The warning is also displayed when running the unit tests with pytest.

Related PR: to address this issue, pull request #190 was opened to suppress this specific warning during PyTorch dispatch calls.
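
The diff in that PR is not reproduced here, but because the warning is emitted while the overrides are being registered, a user-side workaround is to filter it around the enable() call. This is a sketch, not necessarily what #190 does:

```python
import warnings

import flag_gems

with warnings.catch_warnings():
    # Match the start of the UserWarning quoted above; the registration
    # happens inside enable(), so the filter only needs to cover it.
    warnings.filterwarnings(
        "ignore",
        message="Warning only once for all operators",
        category=UserWarning,
    )
    flag_gems.enable()
```

The same pattern can be wrapped in an autouse session-scoped fixture in conftest.py to keep pytest runs quiet.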

StrongSpoon commented 1 month ago

The warning is expected: it indicates that the flag_gems replacement of the ATen kernels succeeded.
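
One way to confirm the replacement beyond the warning is to profile an op and inspect which GPU kernel actually ran; after flag_gems.enable(), an elementwise add should be served by a Triton-generated kernel rather than ATen's native CUDA kernel. Kernel names vary across versions, so treat this as a sketch:

```python
import torch

import flag_gems

flag_gems.enable()  # emits the UserWarning discussed above

x = torch.randn(1024, device="cuda")
y = torch.randn(1024, device="cuda")

with torch.profiler.profile(
    activities=[torch.profiler.ProfilerActivity.CUDA]
) as prof:
    z = x + y
    torch.cuda.synchronize()

# A Triton kernel name in this table indicates the override is active.
print(prof.key_averages().table(sort_by="cuda_time_total"))
```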

Bowen12992 commented 1 month ago

https://github.com/FlagOpen/FlagGems/pull/190