Before, the standard deviation was applied to the mean difference between the current and last weights (a scalar), which made the evolution parameter zero already in the first iteration. As a result, the deBroeg algorithm only ran once when using .autodiff(). Removing the standard deviation fixes the issue.
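A minimal sketch of the issue (function and variable names are my own, not the actual implementation): the standard deviation of a scalar is always zero, so any convergence loop gated on it terminates immediately, whereas the mean absolute difference itself is a usable evolution metric.

```python
import numpy as np

def evolution(current_weights, last_weights):
    """Convergence metric: mean absolute difference between the
    current and previous weight vectors (a scalar)."""
    return np.mean(np.abs(current_weights - last_weights))

last = np.array([1.0, 2.0, 3.0])
current = np.array([1.5, 2.5, 3.5])

metric = evolution(current, last)          # 0.5 -> loop keeps running
broken = np.std(evolution(current, last))  # std of a scalar is 0.0,
                                           # so the old check stopped
                                           # after one iteration
print(metric, broken)
```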
I also compared using the maximum difference versus the mean difference between the current and last weights for a few example dates. In the dates I've tested, the results are identical, but the mean-difference method uses fewer iterations. This is why I have settled on the mean-difference approach for now.
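A short illustration of why the mean-difference criterion converges in fewer iterations (a toy sketch under my own assumptions, not code from the repository): the maximum difference is always at least as large as the mean difference, so it crosses a fixed tolerance threshold later.

```python
import numpy as np

rng = np.random.default_rng(0)
last = rng.uniform(size=100)
current = last + rng.normal(scale=1e-3, size=100)

diff = np.abs(current - last)
mean_evolution = np.mean(diff)  # criterion settled on
max_evolution = np.max(diff)    # alternative that was tested

# max >= mean for any array, so `max_evolution < tol` is satisfied
# later than `mean_evolution < tol`, i.e. more iterations are needed
print(mean_evolution, max_evolution)
```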
Furthermore, I've added two lines to handle any mask applied to a 'Fluxes' instance via the 'mask_stars' function. Specifically, I identify stars with a standard deviation of zero in the first iteration (since 'mask_stars' sets masked stars to -1 for all times) and then apply that mask to the resulting weights after all iterations. Previously, the weights of these masked stars were only set to zero in the first iteration; as soon as the weights are derived from the binned white noise of the light curve (from the second iteration onwards), the masked stars received non-zero values, which defeats the purpose of masking.
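The two added steps can be sketched roughly as follows (a toy example with made-up array shapes, not the actual 'Fluxes' internals): masked stars are constant at -1 over time, so their per-star standard deviation is exactly zero, which gives a boolean mask that is re-applied to the final weights.

```python
import numpy as np

# toy flux array: 4 stars x 5 time points; star 2 stands in for a
# star masked by mask_stars (flux set to -1 for all times)
fluxes = np.abs(np.random.default_rng(1).normal(1.0, 0.1, (4, 5)))
fluxes[2] = -1.0

# first iteration: detect masked stars via zero std over time
mask = np.std(fluxes, axis=1) == 0.0

# ... iterative weight refinement would run here ...
weights = np.ones(4)

# after the final iteration: zero out the weights of masked stars,
# so later iterations cannot re-introduce non-zero values for them
weights[mask] = 0.0
print(weights)
```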