Eutenacity closed this issue 8 months ago.
I know the reason: just update Python to a version newer than 3.10.10.
But then another error appears:
Traceback (most recent call last):
  File "/home/wenxianglin/scattermoe/scattermoe-main/demo.py", line 25
Adjusting num_stages may help.
I tested this on an A6000.
Updating the version of Triton solved all my problems.
Good to know your problem's fixed. I did test mostly on an A100 and a Titan RTX, so it will be good to know about device issues, but they might largely be Triton problems.
This might be related: https://github.com/openai/triton/issues/1589
I can confirm that upgrading Python fixed this issue. It would be good to add an instruction to the README.
I'm using Python 3.12.7, but every test fails.
Sorry, I am not familiar with Triton.
After running pytest, an AssertionError occurs:
../../miniconda3/envs/dsmii/lib/python3.10/site-packages/torch/nn/modules/module.py:1518: in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
../../miniconda3/envs/dsmii/lib/python3.10/site-packages/torch/nn/modules/module.py:1527: in _call_impl
    return forward_call(*args, **kwargs)
scattermoe/mlp.py:83: in forward
    h = self.experts(
../../miniconda3/envs/dsmii/lib/python3.10/site-packages/torch/nn/modules/module.py:1518: in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
../../miniconda3/envs/dsmii/lib/python3.10/site-packages/torch/nn/modules/module.py:1527: in _call_impl
    return forward_call(*args, **kwargs)
scattermoe/parallel_experts.py:142: in forward
    results = ParallelLinear.apply(
../../miniconda3/envs/dsmii/lib/python3.10/site-packages/torch/autograd/function.py:539: in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
scattermoe/parallel_experts.py:14: in forward
    output = kernels.ops.scatter2scatter(
scattermoe/kernels/ops.py:146: in scatter2scatter
    _scatter2scatter[grid](
../../miniconda3/envs/dsmii/lib/python3.10/site-packages/triton/runtime/autotuner.py:114: in run
    ret = self.fn.run(*args, num_warps=config.num_warps, num_stages=config.num_stages, **kwargs, **config.kwargs)
../../miniconda3/envs/dsmii/lib/python3.10/site-packages/triton/runtime/autotuner.py:232: in run
    return self.fn.run(*args, **kwargs)