Maghoumi / pytorch-softdtw-cuda

Fast CUDA implementation of (differentiable) soft dynamic time warping for PyTorch
MIT License

Grid size (4) < 2 * SM count (164) will likely result in GPU under utilization due to low occupancy. #21

Open 18445864529 opened 2 years ago

18445864529 commented 2 years ago

Thank you for your great work! This is very useful and works like a charm for my use case. I do have one quick question, though:

```
/net/papilio/storage2/bowenz/anaconda3/envs/zbw/lib/python3.9/site-packages/numba/cuda/compiler.py:726: NumbaPerformanceWarning: Grid size (4) < 2 * SM count (164) will likely result in GPU under utilization due to low occupancy.
  warn(NumbaPerformanceWarning(msg))
```

At the very beginning of the run, this warning appeared three times, but training seems to proceed without issue. What does this imply? Can I simply ignore the warning? Thank you again!
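For context: the warning comes from Numba's CUDA dispatcher, which flags kernel launches whose grid has fewer blocks than twice the GPU's streaming multiprocessor count (here 82 SMs, hence the threshold of 164). In this library the soft-DTW kernel appears to be launched with one block per sequence in the batch, so a batch size of 4 yields a grid of 4 blocks. The following sketch is hypothetical (not code from this repository) and just reproduces the same warning with a trivial kernel:

```python
# Hypothetical minimal repro of the NumbaPerformanceWarning, assuming a
# CUDA-capable GPU and Numba installed. The kernel is illustrative only.
import numpy as np
from numba import cuda


@cuda.jit
def scale(x, factor):
    # One thread per element; guard against out-of-range indices.
    i = cuda.grid(1)
    if i < x.size:
        x[i] *= factor


x = cuda.to_device(np.ones(4 * 128, dtype=np.float32))
# A grid of only 4 blocks (e.g. batch size 4) cannot occupy a GPU with
# 82 SMs, so Numba warns: "Grid size (4) < 2 * SM count (164) ...".
scale[4, 128](x, 2.0)
cuda.synchronize()
```

The warning is purely about throughput, not correctness: with only 4 blocks in flight, most of the GPU's SMs sit idle during the kernel, but the computed results are still valid.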

Whatever314 commented 11 months ago

See my answer: https://stackoverflow.com/a/77618835/23058072
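If the warning is expected for your workload (a small batch simply cannot fill a large GPU) and it clutters the logs, it can also be suppressed with Python's standard warnings filter. A minimal sketch, assuming a recent Numba where the warning class lives in `numba.core.errors` (older releases expose it as `numba.errors.NumbaPerformanceWarning`):

```python
# Minimal sketch for silencing the occupancy warning, assuming it is safe
# to ignore in your setting. Import path assumes a recent Numba version.
import warnings
from numba.core.errors import NumbaPerformanceWarning

warnings.filterwarnings("ignore", category=NumbaPerformanceWarning)
```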