Open markjenei94 opened 2 months ago
Hi Mark,
Thanks for your question, and you're correct in your understanding. The current code does perform inference over timesteps that were used for training, and the latent code is indeed retrieved from a pre-trained list using the time index.
For nowcasting with unseen input samples, you can generate a new latent code using:
new_latent = nn.Parameter(torch.randn(1, latent_size) * 0.01)  # small random init; a bare torch.FloatTensor would hold uninitialized values
Then, freeze all the parameters of the model by doing:
for param in model.parameters():
    param.requires_grad = False
Next, create an optimizer that optimizes only the new latent code:
optimizer = optim.AdamW([new_latent], lr=lr)
With the network weights frozen, you can then optimize new_latent against the unseen inputs at inference time, which lets the model handle new samples during nowcasting. Let me know if you have any further questions or need clarification!
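Putting the three steps together, a minimal sketch of the resulting test-time latent-optimization loop might look like the following. Note this is an illustration, not the repository's exact API: the model's forward signature `model(inputs, latent)`, the MSE loss, and the names `fit_new_latent`, `latent_size`, `steps` are all assumptions for the sketch.

```python
import torch
import torch.nn as nn
import torch.optim as optim

def fit_new_latent(model, inputs, targets, latent_size, lr=1e-3, steps=500):
    """Optimize a fresh latent code for an unseen sample while keeping the
    trained network weights frozen (auto-decoder-style inference).
    The forward signature model(inputs, latent) is a hypothetical stand-in."""
    # 1. Create a fresh, trainable latent code (small random init)
    new_latent = nn.Parameter(torch.randn(1, latent_size) * 0.01)

    # 2. Freeze all parameters of the trained model
    for param in model.parameters():
        param.requires_grad = False

    # 3. Optimizer over the new latent code only
    optimizer = optim.AdamW([new_latent], lr=lr)

    loss_fn = nn.MSELoss()  # assumed reconstruction loss for the sketch
    for _ in range(steps):
        optimizer.zero_grad()
        pred = model(inputs, new_latent)  # hypothetical forward signature
        loss = loss_fn(pred, targets)
        loss.backward()  # gradients flow only into new_latent
        optimizer.step()
    return new_latent.detach()
```

After the loop converges, the returned latent plays the same role as an entry of `self.latents` does for the training timesteps.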
Hi Xihaier,
Oh, that all makes sense now, thank you very much for the prompt answer, and also for publishing this method - it is very promising!
Hi, I have a question about inference with unseen input samples.
What I found is that the latent code is obtained by simply passing the time index to the model and taking the corresponding code from a list of pre-trained codes:
latent = self.latents[idx, :]
in _MMGNetnet.py - line 186
Did I miss something, or is there some code change needed to construct these new latent codes?