JZhao12 closed this issue 2 years ago
Hi @JZhao12
The original Augmented Autoencoder is trained object-wise, but our newer Multipath-Encoders, which can be found under the multipath branch of this repository, can be trained jointly on >100 objects that share the same latent space. For the latter you need multiple GPUs for training. See the paper for details.
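To make the shared-latent-space idea concrete, here is a minimal sketch (names, shapes, and stub functions are all hypothetical, not the repository's actual code): many objects go through one shared encoder, so they live in the same latent space, while each object keeps its own decoder head for reconstruction.

```python
# Hedged sketch of the Multipath-Encoder idea: all objects share ONE encoder
# (hence one latent space), each object has its OWN decoder head.
# Everything below is an illustrative stub, not the real network.

LATENT_DIM = 8

def shared_encoder(image_crop):
    """Single encoder used for every object (stand-in for a CNN)."""
    # Deterministic pseudo-features derived from the input, just for the demo.
    return [hash((image_crop, i)) % 100 / 100.0 for i in range(LATENT_DIM)]

def make_decoder(object_id):
    """Per-object decoder head operating on the shared latent code."""
    def decode(latent_code):
        # A real decoder would reconstruct the object's image crop.
        return [object_id + v for v in latent_code]
    return decode

# One decoder per object, e.g. >100 objects trained jointly.
decoders = {obj_id: make_decoder(obj_id) for obj_id in range(128)}

# During joint training, every crop passes through the SAME encoder, but its
# reconstruction loss flows through its object's OWN decoder:
z = shared_encoder("crop_of_object_42")
recon = decoders[42](z)
print(len(z), len(recon))  # latent and reconstruction lengths
```

The key property is that because the encoder weights are shared, latent codes of different objects are directly comparable, which is what allows joint training on many objects.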
Hi Martin, thanks for your reply.
May I ask what 'object-wise' means? Did you train, for example, 10 AAEs on 10 different objects respectively? Or did you train the objects sequentially, i.e. train a single AAE on the first object and then continue training that same AAE on the next object?
Thank you very much! Jianyu
Yes, the AAEs are trained per object. You can still easily run 30 AAEs in parallel at inference time on a modern GPU. The multipath encoder is trained on many objects.
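The per-object setup described above can be sketched as follows (a hypothetical illustration, not the repository's actual inference code): one independently trained encoder per object, with detections dispatched to the matching encoder at inference time.

```python
# Minimal sketch of per-object AAEs at inference (hypothetical stubs):
# each object has its own trained encoder; a detection is routed to the
# encoder of the object it was classified as.

def make_encoder(object_id, latent_dim=4):
    """Stand-in for one object's trained AAE encoder network."""
    def encode(image_crop):
        # A real encoder would run a CNN forward pass on the crop and the
        # resulting code would index a codebook of rotations.
        return [object_id * 0.1 + i for i in range(latent_dim)]
    return encode

# One independently trained encoder per object (e.g. 30 objects).
encoders = {obj_id: make_encoder(obj_id) for obj_id in range(30)}

def infer(detections):
    """Route each detected crop to its object's encoder."""
    return {obj_id: encoders[obj_id](crop) for obj_id, crop in detections}

codes = infer([(0, "crop_a"), (7, "crop_b")])
print(sorted(codes))   # → [0, 7]
print(len(codes[7]))   # → 4
```

On a GPU the per-object forward passes can of course be batched rather than looped, which is what makes running dozens of AAEs side by side cheap at inference time.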
Thank you for your reply! It is really kind of you! Merry Christmas:)
Hi, sorry to disturb you, I would like to ask a question:
How did you train on all the data? Did you train each object separately, or did you train everything together with batch size 64 and learning rate 1e-4?
Thank you very much! Jianyu