Closed: albertotono closed this issue 1 year ago
I changed line 59 in eval_helper.py
to
pair_vis(gen_pcs[worse_ten].to(device), ref_pcs[worse_ten].to(device),
titles, subtitles, writer, step=step)
Let's see if this works. Also, how can I change the eval size to be smaller than 256, or bigger?
It seems worse_ten is a CUDA tensor and gen_pcs is a CPU tensor; my torch version (1.10.2) seems to be okay with such indexing. What's the torch version you are using?
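The mismatch described here is easy to reproduce and fix in a few lines. This is a minimal sketch; the shapes and index values below are hypothetical placeholders, not taken from the repo:

```python
import torch

# Hypothetical shapes for illustration: 256 generated point clouds of
# 2048 points each (gen_pcs lives on the CPU in the reported setup).
gen_pcs = torch.randn(256, 2048, 3)

# worse_ten holds indices of samples to visualize; in the issue it was a
# CUDA tensor, and indexing a CPU tensor with a CUDA index tensor fails
# on recent PyTorch versions.
worse_ten = torch.arange(10)  # stand-in for the real index tensor

# The fix: move the index tensor to the same device as the data first.
subset = gen_pcs[worse_ten.cpu()]
assert subset.shape == (10, 2048, 3)
```

Alternatively, both tensors can be moved to the GPU (as in the modified pair_vis call above) rather than moving the indices to the CPU; either way, the data and the indices must share a device.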
Moved worse_ten to cpu; could you try again? Regarding "eval less than 256": are you referring to the batch size or the size of the eval data?

Yes, it is working now. I am using PyTorch 2.0 (cu117). With 2 A6000 Ada GPUs it took me around 24 hours, equivalent to 12.5 km driven by an average ICE car, for a total of 3.11 kg of CO2 (https://mlco2.github.io/impact/). I am happy to share the weights of the VAE (DM me) with other researchers if it can be helpful and reduce CO2 emissions. Thank you so much.
np, I also fixed that part. Thanks for the clarification.
Hi @albertotono is there any way you could share those weights you mentioned? Thank you so much in advance. If you could send them by email (mail: michallatkos@gmail.com), I would be extremely grateful.
@ZENGXH Thanks again for the amazing work and quick reply to maintaining this repo
While training the VAE I ran into this issue during evaluation. For some reason it didn't stop the training; the terminal just kept hanging there.