Open DuanYaQi opened 4 months ago
Could you please provide demo code to reproduce this problem?
You can add the following code at the beginning of the `triton.compile()` function to reproduce the issue:

```python
tmp = torch.rand(256).cuda()
tmp.ne(0)
```

This recursively compiles `ne.Scalar`. Alternatively, we could confirm that the `triton.compile()` pipeline does not depend on torch, but I haven't seen any description of that guarantee anywhere in the community.
FlagGems commit: https://github.com/FlagOpen/FlagGems/commit/3c62c9ce0dee6f8b1b22d2e01e37443aa28512ce
Python: 3.10.12
Torch: 2.3.1
Triton: 2.3.1
Pytest: 8.2.2
In Triton at commit fc7a8e35819bda632bdcf1cf75fd9abe4d4e077a, `JITFunction` treats every argument that has a type annotation as a constant, which is not the expected behavior: only arguments annotated with `tl.constexpr` should be treated as constexprs.
```python
def __init__(self, fn, version=None, do_not_specialize=None):
    ...
    # annotations
    self.annotations = {self.arg_names.index(name): ty for name, ty in fn.__annotations__.items()}
    self.__annotations__ = fn.__annotations__
    # index of constexprs: note that EVERY annotated argument is indexed
    # here, not just those annotated with tl.constexpr
    self.constexprs = [self.arg_names.index(ann) for ann in self.__annotations__.keys()]
```
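To see the effect of that logic in isolation, here is a minimal stand-alone sketch. The helper `buggy_constexpr_indices` is a hypothetical stand-in that mimics the quoted constexpr indexing; it is not the real `JITFunction`:

```python
import inspect

def buggy_constexpr_indices(fn):
    """Mimics the quoted logic: EVERY annotated argument is
    treated as a constexpr, regardless of the annotation type."""
    arg_names = list(inspect.signature(fn).parameters)
    return [arg_names.index(name) for name in fn.__annotations__]

# A kernel-like function: only BLOCK is meant to be a constexpr,
# but `n: int` gets swept in as well.
def kernel(x_ptr, n: int, BLOCK: "tl.constexpr"):
    pass

print(buggy_constexpr_indices(kernel))  # → [1, 2]: both n and BLOCK
```

With this logic, the runtime value passed for `n` is baked into the compiled kernel instead of being treated as a regular argument, which is what trips up annotated wrappers such as the ones `pointwise_dynamic` generates.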
So as a workaround, you can remove the type annotations from all parameters other than the `tl.constexpr` ones. To change the code generated by `pointwise_dynamic`, you need to modify `generate_pointwise_kernel` in `src/flag_gems/utils/pointwise_dynamic.py`.
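For comparison, this sketch shows the intended behavior the workaround restores: only `tl.constexpr`-annotated parameters become constexprs. The `constexpr` class here is a hypothetical sentinel standing in for `triton.language.constexpr`:

```python
import inspect

class constexpr:
    """Hypothetical stand-in for triton.language.constexpr."""

def constexpr_indices(fn):
    # Corrected logic: only arguments annotated with the constexpr
    # sentinel are treated as compile-time constants.
    arg_names = list(inspect.signature(fn).parameters)
    return [arg_names.index(name)
            for name, ty in fn.__annotations__.items()
            if ty is constexpr]

# Workaround applied: the `int` annotation on `n` is dropped,
# so only BLOCK remains a constexpr.
def kernel(x_ptr, n, BLOCK: constexpr):
    pass

print(constexpr_indices(kernel))  # → [2]: only BLOCK
```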
Issue

There is an identified issue in the `triton.compile()` pipeline: `flag_gems.use_gems()` stays active all the time, which leads to infinite recursion when certain functions are compiled. Specifically, if the `torch.ne.Scalar` function is invoked during the `triton.compile()` pipeline, it triggers another call to `triton.compile()` via `lib.impl("ne.Scalar", ne_scalar, "CUDA")` in `FlagGems/src/flag_gems/__init__.py::enable()`, causing an infinite loop and eventually a stack overflow.