Thanks for providing STELLAR.
I am currently trying to run STELLAR on the HuBMAP demo dataset on our cluster. Although the documentation states that it should finish quite quickly, it has been running for more than 24 hours. I can see that the GPU is being used, but only around 2.5 MB of memory is allocated. The loss is also being printed, so training does seem to be progressing, but I am not sure what is wrong.
My environment:
My Slurm file:
This is the GPU usage:
I have not changed any of the scripts. Does anyone have any suggestions?
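Since only ~2.5 MB of GPU memory is in use, one quick sanity check is whether tensors actually land on the GPU inside the job. This is a minimal sketch (assuming the PyTorch stack that STELLAR builds on; it is a generic CUDA check, not STELLAR-specific):

```python
import torch

# Is CUDA visible inside the Slurm job at all?
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    # Name of the device the job was allocated
    print("Device:", torch.cuda.get_device_name(0))

    # Allocate a tensor on the GPU and confirm memory usage grows.
    # If memory_allocated stays near zero during training, the model
    # and data are most likely still on the CPU.
    x = torch.randn(1000, 1000, device="cuda")
    print("Tensor device:", x.device)
    print("Bytes allocated:", torch.cuda.memory_allocated(0))
```

If `torch.cuda.is_available()` prints `False` inside the job, the issue is likely the Slurm GPU allocation or the CUDA build of PyTorch rather than STELLAR itself.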