Open lijun-1999 opened 6 months ago
Do your V100 GPUs have 32GB or 16GB of memory? The training part requires 32GB V100 GPUs.
And have you made any changes to the code?
Dear Professor, hello! I am using a server with four V100 GPUs (32GB each). When I run the SimKGC project without any modifications in this environment, I encounter the following error:
```
Traceback (most recent call last):
  File "main.py", line 22, in result
```
Previously, in a different environment (not the four-V100 server), the code raised a different error, which I modified the code to fix. After moving the modified code to the four-V100 server, I started getting the "CUDA out of memory" error instead. Could you please give me some guidance on how to address this issue? Best regards!
At 2024-06-03 18:31:59, "Liang Wang" wrote the reply quoted above.
Hello, I noticed that your README.md says a "CUDA out of memory" error may be due to limited hardware resources. However, I am using a server with four 32GB V100 GPUs, so why am I still facing this problem? Moreover, reducing the batch size did not resolve it.
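As a sanity check on whether an OOM at a given batch size is even plausible on a 32GB card, a rough back-of-envelope estimate can help. The sketch below is illustrative only: the constants (a ~110M-parameter BERT-base encoder, sequence length 50, an activation fudge factor of 12, AdamW keeping roughly 3 extra copies of the parameters) are assumptions for the sake of the arithmetic, not values taken from SimKGC itself.

```python
# Back-of-envelope per-GPU memory estimate for training a BERT-base-sized
# encoder. All constants below are illustrative assumptions, not
# measurements from SimKGC.

def activation_mem_gb(batch_size, seq_len, hidden=768, layers=12, bytes_per=4):
    # Very rough: per transformer layer, keep on the order of
    # (fudge factor) x batch x seq x hidden float32 activations
    # for attention and MLP intermediates; fudge factor 12 is a guess.
    floats = 12 * layers * batch_size * seq_len * hidden
    return floats * bytes_per / 1024**3

def param_mem_gb(params=110e6, bytes_per=4, optimizer_copies=3):
    # AdamW keeps params + grads + two moment buffers (~4x params total).
    return params * bytes_per * (1 + optimizer_copies) / 1024**3

for bs in (1024, 256):
    total = param_mem_gb() + activation_mem_gb(bs, seq_len=50)
    print(f"batch {bs}: ~{total:.1f} GB per GPU")
    # batch 1024: ~22.7 GB per GPU
    # batch 256:  ~6.9 GB per GPU
```

Under these assumptions, a large batch fits a 32GB V100 but not a 16GB one, and a quartered batch size should fall well within either. If a much smaller batch still OOMs, the problem is likely elsewhere, e.g. an oversized in-batch negative pool, tensors accumulated across steps without being freed, or allocator fragmentation, rather than the nominal batch size.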