DavideRossi opened this issue 3 weeks ago
cc: @pnunna93
Hi @DavideRossi, could you please share Python and HIP traces for the script?
For the Python trace, you can add this before the line where the script hangs: `import faulthandler; faulthandler.dump_traceback_later(10, repeat=True)`. You can stop the run once the stack trace no longer changes.
Please run the script with `AMD_LOG_LEVEL=3` for the HIP trace.
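For reference, a minimal sketch of how that trace setup might look in the reproduction script; the hang location and script name are placeholders, not taken from the original report:

```python
import faulthandler

# Dump the stack of every thread to stderr every 10 seconds; the last
# few dumps show where the script is stuck once they stop changing.
faulthandler.dump_traceback_later(10, repeat=True)

# ... the call that hangs goes here (placeholder) ...

# For the HIP trace, run the same script with the log level raised, e.g.:
#   AMD_LOG_LEVEL=3 python your_script.py
```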
Please also share the torch version and your machine details, i.e. the outputs of `pip show torch` and `rocminfo`.
Hi, I'm sorry, but I mistakenly deleted the container I was using for these tests. In any case, I was able to trace the problem to accelerate when using `device_map="auto"`: the same code with `device_map="cuda:0"` was not hanging. I'm now trying to replicate the whole process with a new container.
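For context, a minimal sketch of the two loading paths being compared; the model id is a placeholder, and `load_in_8bit` follows the linked example, so treat the details as assumptions:

```python
from transformers import AutoModelForCausalLM

model_id = "facebook/opt-350m"  # placeholder, not the model from the report

# Hangs on this setup: accelerate shards the model across all three MI210s.
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", load_in_8bit=True
)

# Does not hang: everything is placed on a single GPU, bypassing
# accelerate's automatic device placement.
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="cuda:0", load_in_8bit=True
)
```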
Now I'm having issues with 8-bit support, but I'll post in the #538 thread about that.
System Info
An AMD EPYC system with 3 MI210 GPUs; it's quite a complex setup. The system uses Slurm to schedule batch jobs, which usually take the form of `apptainer run` containers. The image I'm using has ROCm 6.0.2 on Ubuntu 22.04.
Reproduction
I followed the installation instructions at https://github.com/TimDettmers/bitsandbytes/blob/multi-backend-refactor/docs/source/rocm_installation.mdx, except that I checked out the `multi-backend-refactor` branch (I hope that was the right thing to do). Then I tried running the example at https://github.com/TimDettmers/bitsandbytes/blob/multi-backend-refactor/examples/int8_inference_huggingface.py; I just added a line to print the value of `max_memory` right after it is set (I also ran a modified version forcing the use of only one GPU).
right after it is set (I also run a modified version forcing the use of only one GPU). The jobs just hangs after printing:Then goes on timeout after one hour. I'm willing to help debug the issue, just tell me how I can help.
Expected behavior
This is what I get when running the same example with an A100 (after a few seconds):