Open cheryyunl opened 8 months ago
Thanks for your feedback. For the layer parameters, you should set `system.ae_model.in_dim` to the parameter count. You may also need to change the autoencoder hyperparameters such as `fold_rate` or the layer number; the autoencoder details are in https://github.com/NUS-HPC-AI-Lab/Neural-Network-Diffusion/blob/main/core/module/modules/encoder.py.
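As a rough illustration of how these hyperparameters interact, here is a minimal sketch. The names `in_dim` and `fold_rate` follow this thread, but the shrink-per-layer formula is an assumption for illustration, not the actual logic in encoder.py:

```python
# Hedged sketch (assumed behavior, not the repo's code): each encoder
# layer shrinks the flattened parameter vector by a factor of fold_rate,
# so in_dim, fold_rate, and the layer number together fix the latent size.

def latent_size(in_dim: int, fold_rate: int, num_layers: int) -> int:
    """Latent length after num_layers folding steps of factor fold_rate."""
    size = in_dim
    for _ in range(num_layers):
        size = -(-size // fold_rate)  # ceil division, keeps at least 1 element
    return size

# e.g. 50,000 parameters, fold_rate 5, 4 layers -> 50000 / 5**4 = 80
print(latent_size(50000, 5, 4))
```

With numbers like these you can see why a larger `fold_rate` or an extra layer is the lever for keeping the latent small when `in_dim` grows to the full parameter count.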
Thank you so much for your quick reply! In encoder.py, for the entire-network parameter generator, if in_dim = 50,000, won't training put a lot of pressure on memory and training time? Is there any dimensionality-reduction operation?
The problems you ran into are not accidental; we also find large model parameters tricky to handle.
What we can share is that Latent_AE_cnn_big under core/module/modules/autoencoder.py is the autoencoder model for large parameter counts. It also takes a long time to train, and large-scale parameter generation is something we plan to address in future work.
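To make the memory concern concrete, the in_dim of the autoencoder equals the length of the model's flattened parameter vector. A small hedged sketch (illustrative only, not the repo's code) of how that count is reached:

```python
# Illustrative sketch: in_dim equals the total flattened parameter count,
# which is why entire networks push it past 50,000. The example shapes
# below are hypothetical, not from the paper's models.

def flatten_params(shapes):
    """Return the total flattened length for a list of weight-tensor shapes."""
    total = 0
    for shape in shapes:
        n = 1
        for d in shape:
            n *= d
        total += n
    return total

# e.g. a small head: conv (16,3,3,3), bias (16,), fc (10,1024), bias (10,)
print(flatten_params([(16, 3, 3, 3), (16,), (10, 1024), (10,)]))  # 10698
```

Even a single fully connected layer mapping a 50,000-dim input to a same-size output would need 2.5 billion weights, which is why a convolutional/folding autoencoder that compresses the vector stage by stage is used instead.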
Thanks for your feedback again.
Thank you!!! I will try using Latent_AE_cnn_big to train the autoencoder. One additional question: for the entire-network parameters, does the paper also use 200 checkpoints to train the parameter encoder? For Latent_AE_cnn_big, that number of samples looks small. Have you tried using more checkpoints?
Thanks again!!
Sorry to hear that you are having such difficulty, but I have never tried more checkpoints. We will update p-diff to make large-parameter generation easier in the future.
Hi @1zeryu, would you mind adding more details about your code in the README? I am having trouble understanding the hierarchical structure and how to make modifications.
Sure, I will do it soon. Thanks for your feedback.
Hi, I find that in the experiments on entire networks, the parameter count is very large (more than 50,000 or 100,000). How do you use the encoder to train the parameter embedding? Is in_dim set to the parameter count? Could you provide the training details (especially for the autoencoder) for entire-network generation?
Thank you for your reply!!!