Closed fan-hd closed 2 years ago
Hi, @fan-hd, are you sure you generated the training dataset with 1000 samples? What is your output? Is it just a folder with images?
On a side note, we have significantly improved our dataset generation pipeline. For instance, we now use the EEVEE rendering engine instead of Cycles, which is much faster. Also, instead of saving everything in folders, we save files either in HDF5 or in ZIP archives, e.g. https://github.com/rozumden/render-blur
Best, Denys
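A minimal sketch of the ZIP-based saving Denys mentions, using only the standard library (the function name and archive layout here are illustrative assumptions, not the render-blur repo's actual API):

```python
import zipfile

def save_sequence_to_zip(zip_path, seq_name, frame_blobs):
    """Append one rendered sequence to a single ZIP archive instead of
    writing thousands of small files into nested folders."""
    with zipfile.ZipFile(zip_path, "a", compression=zipfile.ZIP_DEFLATED) as zf:
        for i, blob in enumerate(frame_blobs):
            # blob is already-encoded image bytes (e.g. a PNG from the renderer)
            zf.writestr(f"{seq_name}/frame_{i:04d}.png", blob)
```

Keeping one archive per object (or one for the whole split) avoids the filesystem overhead of millions of tiny files, which is the main practical win over the folder layout.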
Hi, @rozumden , I'm sure there are 1000 sequences for each object. The training set contains 50 object folders. Each folder contains 1000 blurred images and a GT folder, and the GT folder holds 1000 sequences of 24 frames corresponding to the blurred images.
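A quick sanity check for the layout described above (the exact folder and file names are assumptions based on this description, not the generator's guaranteed output):

```python
from pathlib import Path

def check_dataset_layout(root, n_objects=50, n_seqs=1000, n_frames=24):
    """Verify: <root>/<object>/ holds n_seqs blurred images plus a GT/
    subfolder with n_seqs sequence folders of n_frames sharp frames each."""
    root = Path(root)
    objects = sorted(d for d in root.iterdir() if d.is_dir())
    assert len(objects) == n_objects, f"expected {n_objects} object folders, got {len(objects)}"
    for obj in objects:
        blurred = list(obj.glob("*.png"))
        assert len(blurred) == n_seqs, f"{obj.name}: {len(blurred)} blurred images"
        seqs = sorted(d for d in (obj / "GT").iterdir() if d.is_dir())
        assert len(seqs) == n_seqs, f"{obj.name}/GT: {len(seqs)} sequences"
        for seq in seqs:
            frames = list(seq.glob("*.png"))
            assert len(frames) == n_frames, f"{seq}: {len(frames)} frames"
```

Running this over a freshly generated dataset would catch a partially completed generation run before spending GPU time on training.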
Besides, I'm not sure how you chose the video sequences for the validation set. The validation set contains only 1000 samples. Is there any rule for picking videos from Sports-1M?
Thanks a lot.
I also checked my folder and it was 72GB. I'm sorry for the confusion and my mistake in README. It's fixed now.
Sequences for the validation set are chosen randomly. It should all be done automatically. There is no need to pick the videos manually.
Got it.
Thanks for your reply.
Hi, @rozumden , would you mind providing the random seeds for data generation and network initialization? My reproduced model shows a performance drop when tested on the benchmarks (about 0.5 in PSNR, 0.02 in SSIM, and 0.03 in TIoU).
Thanks a lot.
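For reference when comparing such numbers, PSNR is a fixed function of the mean squared error; a minimal stand-alone version (the repository's own evaluation code may differ in averaging and data range):

```python
import math

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio between two images, given as flat
    sequences of floats in [0, max_val]."""
    mse = sum((p - g) ** 2 for p, g in zip(pred, gt)) / len(gt)
    if mse == 0.0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Since PSNR is logarithmic, a drop of 0.5 corresponds to roughly 12% higher MSE (10^0.05 ≈ 1.12), so it is a noticeable but not drastic gap.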
Hi, do you mean that you regenerated the training dataset and retrained the network, everything from scratch?
Yes.
For how many epochs did you train? In our final model, we trained for 50 epochs.
Unfortunately, we don't have access to random seeds anymore. Network initialization is already provided in this repository. We initialize the encoder with ResNet50 weights. The rendering network is initialized randomly (and we again don't have random seeds). I don't think that random initialization is the reason why you have the performance drop.
I've just noticed that in main_settings.py, latent learning is disabled by default (g_use_latent_learning = False) due to the increased memory consumption during training. Have you turned it on?
For latent training, the dataset must be augmented with additional backgrounds by running shapeblur_addbg(folder) from run_addbg.py.
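Putting those two steps together, the changes before retraining would look roughly like this (a sketch against the settings and scripts named above; the dataset path is a placeholder and exact module locations may differ):

```python
# In main_settings.py: enable latent learning
# (disabled by default because of memory consumption during training).
g_use_latent_learning = True

# One-off dataset augmentation with additional backgrounds, run from
# the repository root before starting training.
from run_addbg import shapeblur_addbg

folder = "/path/to/generated/training/dataset"  # placeholder path
shapeblur_addbg(folder)
```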
Yes, I trained for 50 epochs and turned on the latent learning during training.
Hi, did you manage to achieve higher scores? Maybe I can also share with you our generated training dataset (if you already have a ShapeNetv2 license). Then, we can see if this performance drop is due to the training dataset or something else.
Hi, thanks for your generous sharing.
I have a question about training set generation in your work. I generated a training set following your code; its size is about 100GB, far less than 1TB. Is there anything wrong?
Thanks.