sudongwang-upc opened this issue 7 months ago
When sampling, we require that we generate self.model.num_parallel_samples
at every sampling iteration. Hence, we repeat all the tensors that many times.
Thank you for your response to this issue. Let me elaborate on my question again.
Thanks for elaborating.
Feel free to follow up! :)
@ashok-arjun May I ask, is there a quick way to extract the mean and std of the t-distribution that the model predicts?
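Not from the maintainers, but since the default head parameterizes a Student's t by (df, loc, scale), the mean and std have closed forms that can be computed directly from the predicted parameters. A minimal sketch (the parameter values below are made up for illustration):

```python
import math

def student_t_mean_std(df, loc, scale):
    """Closed-form moments of a location-scale Student's t.

    mean = loc                            (defined for df > 1)
    std  = scale * sqrt(df / (df - 2))    (defined for df > 2)
    """
    if df <= 2:
        raise ValueError("std is undefined for df <= 2")
    return loc, scale * math.sqrt(df / (df - 2))

# Hypothetical predicted parameters for one time step.
mean, std = student_t_mean_std(df=5.0, loc=2.0, scale=1.5)
```

Alternatively, if you hold the `torch.distributions` object the model builds, its `.mean` and `.stddev` properties give the same values without hand-rolling the formulas.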
Why repeat here? The results generated after the repeat (i.e., the parameters of the distribution) should be identical, right? What is the purpose of this?
https://github.com/time-series-foundation-models/lag-llama/blob/9965e30faff6087992a13717ec2f3c2cc3967fd5/lag_llama/gluon/lightning_module.py#L221-L226
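The point of the repeat can be sketched in plain Python: the repeated rows carry identical parameters, but each row draws an independent sample, which is what yields `num_parallel_samples` distinct trajectories per series. (The loc/scale values are made up, and a Gaussian stands in for the Student's t the model actually parameterizes.)

```python
import random

random.seed(0)

num_parallel_samples = 3  # stands in for self.model.num_parallel_samples
# Hypothetical distribution parameters for a batch of two series.
loc = [0.0, 10.0]
scale = [1.0, 2.0]

# Repeat each batch entry num_parallel_samples times, mirroring the
# repeat_interleave calls in the linked code: the effective batch
# becomes batch_size * num_parallel_samples rows.
loc_rep = [m for m in loc for _ in range(num_parallel_samples)]
scale_rep = [s for s in scale for _ in range(num_parallel_samples)]

# Identical parameters per repeated row, but independent draws per row,
# so each series gets num_parallel_samples different sample paths.
samples = [random.gauss(m, s) for m, s in zip(loc_rep, scale_rep)]
```

So the repeat is not about changing the predicted parameters; it is about batching the Monte Carlo draws so all parallel sample paths are generated in one forward pass per step.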