Closed: aweaver1fandm closed this issue 1 year ago
Ideally, you should tune the grid size to your GPU and to the per-observation parameters (nchans and tsamp). You can try tweaking it to gain some performance.
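A common way to tune this is to derive the block count from the data size instead of hard-coding it. The helper below is a minimal sketch, not part of your_candmaker's actual code; the function name and the 256-threads-per-block default are illustrative assumptions.

```python
import math

def launch_config(n_elements, threads_per_block=256):
    """Pick a 1-D launch configuration covering n_elements work items.

    Illustrative only: the helper name and default block size are
    assumptions, not your_candmaker's API. Rounds the block count up
    so every element is covered.
    """
    blocks = math.ceil(n_elements / threads_per_block)
    return blocks, threads_per_block

# e.g. a 256 x 256 frequency/time cutout
blocks, threads = launch_config(256 * 256)
print(blocks, threads)  # 256 256
```

With Numba's CUDA support, such a configuration is passed at launch as `kernel[blocks, threads](...)`; a larger block count for the same work raises occupancy and can make the warning go away.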
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Describe the bug
Every time we run your_candmaker using a GPU we get the following warning:
/numba/cuda/dispatcher.py:488: NumbaPerformanceWarning: Grid size 64 will likely result in GPU under-utilization due to low occupancy.
I understand it's a warning and not an error, but it logs a ton of these warnings. We've run your_candmaker with several different input files, and they all produce this issue.
To Reproduce
your_candmaker.py -c $FIL.csv -fs 256 -ts 256 -g $SLURM_JOB_GPUS -o ./candidates/
Expected behavior
Not sure if the code can be modified to use a better grid size, or if the warnings can be suppressed, similar to https://stackoverflow.com/questions/29347987/why-cant-i-suppress-numpy-warnings
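If tuning the grid size isn't an option, the warning can be filtered with Python's standard warnings machinery. This is a sketch: it matches on the message text so it runs without Numba installed, and the `warnings.warn` call at the end is a stand-in for the warning the kernels emit.

```python
import warnings

# Ignore any warning whose message mentions GPU under-utilization.
# With Numba installed you could filter on the category instead, e.g.
#   from numba.core.errors import NumbaPerformanceWarning
#   warnings.simplefilter("ignore", category=NumbaPerformanceWarning)
warnings.filterwarnings("ignore", message=".*under-utilization.*")

# Stand-in for the warning triggered at kernel launch (illustrative only).
warnings.warn("Grid size 64 will likely result in GPU under-utilization "
              "due to low occupancy.")
```

The filter would need to be installed before the first kernel launch, e.g. near the top of your_candmaker.py.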
Versions (please provide the versions of the following packages):
Additional context
We have your_candmaker built into its own Conda environment.
We are always running this code on a compute cluster with a Tesla V100 GPU. We have not tried it on other GPUs.