Hi! I tried to train a StarE model on WD50K dataset on a single GeForce RTX 3090 and it takes 12h for one epoch. Is it the case when the authors ran it or did I miss out on something (I didn't modify your code)? If this is really the case then training for 400 epochs as default setting suggests seems impossible for users.
Hi, make sure you're starting the `run.py` script with the parameter `DEVICE cuda` so everything runs on the GPU. I'd guess it's slow right now because everything is running on the CPU.
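A minimal sketch of the invocation, assuming `run.py` parses space-separated `KEY VALUE` pairs as described above (the `DATASET` argument and the CUDA sanity check are illustrative additions, not taken from this thread):

```shell
# Hypothetical invocation — pass DEVICE cuda so training runs on the GPU.
# Verify CUDA is visible to PyTorch first; if this prints False,
# training will stay on the CPU regardless of the flag.
python -c "import torch; print(torch.cuda.is_available())"

# DATASET wd50k is an assumed example value; keep your other defaults.
python run.py DEVICE cuda DATASET wd50k
```

If the availability check prints `False`, the bottleneck is the environment (driver or PyTorch build without CUDA support), not the script's arguments.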