CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. #92
File "/home/pritam/i2vgen-xl/tools/modules/unet/util.py", line 253, in forward
out = xformers.ops.memory_efficient_attention(
File "/home/pritam/anaconda3/envs/vgen/lib/python3.8/site-packages/xformers/ops.py", line 574, in memory_efficient_attention
return op.forward_no_grad(
File "/home/pritam/anaconda3/envs/vgen/lib/python3.8/site-packages/xformers/ops.py", line 189, in forward_no_grad
return cls.FORWARD_OPERATOR(
File "/home/pritam/anaconda3/envs/vgen/lib/python3.8/site-packages/torch/_ops.py", line 143, in call
return self._op(*args, **kwargs or {})
Does this ops.py have anything to do with it?
I am attaching the full error here. Please help.
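Following the hint in the error message itself, one debugging step is to re-run with CUDA launches made synchronous, so the stack trace points at the actual failing kernel instead of a later API call. A minimal sketch (setting the variable for the current shell session; it slows execution, so use it only while debugging):

```shell
# Force synchronous CUDA kernel launches so errors are reported
# at the call that triggered them, not at a later API call.
export CUDA_LAUNCH_BLOCKING=1
echo "CUDA_LAUNCH_BLOCKING=$CUDA_LAUNCH_BLOCKING"
```

You can also prefix a single run instead, e.g. `CUDA_LAUNCH_BLOCKING=1 python <your_script>.py`, to avoid affecting other processes in the shell.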
i2v.txt