Open maxbriel opened 5 months ago
Because it doesn't happen with NN interpolation, the cause is not in the grids but in the interpolator itself. This might be a case where the interpolator output could be improved by enforcing physical limits, although this one would be trickier to handle.
Right now it's hard to tell what the issue is, as I'm not able to reproduce it without the correct sim_props. If the sim_props are available somewhere on QUEST, I could take a closer look at the simplex vertices being used to carry out the barycentric interpolation. I suspect that some of those vertices' final masses are higher than the query point's starting mass, which can theoretically push the weighted average above the initial value, since the weighting is a function of the distances.
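To illustrate the suspected failure mode, here is a minimal sketch (not POSYDON code; the grid coordinates and mass values are made up) of barycentric interpolation inside a triangle of grid points. If every vertex's *final* mass exceeds the query point's *initial* mass, the weighted average must exceed it too, no matter what the barycentric weights are:

```python
import numpy as np

def barycentric_weights(vertices, point):
    """Solve for the barycentric coordinates of `point` in a 2-D triangle."""
    T = np.column_stack([vertices[0] - vertices[2], vertices[1] - vertices[2]])
    w01 = np.linalg.solve(T, point - vertices[2])
    return np.append(w01, 1.0 - w01.sum())

# Hypothetical grid vertices in (log10 M1, log10 P) space.
vertices = np.array([[1.50, 3.0], [1.70, 3.2], [1.60, 3.6]])
# Final masses stored at those vertices -- all above the query's initial mass.
final_masses = np.array([38.0, 55.0, 47.0])

query = np.array([1.55, 3.25])        # initial mass = 10**1.55 ~ 35.5 Msun
w = barycentric_weights(vertices, query)
interpolated_final = w @ final_masses

print(interpolated_final > 10**query[0])  # final exceeds initial: True
```

The weights here are all positive (the query lies inside the simplex), so no single weighting scheme can pull the result back below the initial mass; only the stored vertex values matter.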
Something else to consider: since the masses are very high and the period is also pretty high, we are in a very sparse part of the grid, given that we are in log space. So the data used to perform the interpolation (~8 binaries) could very well come from binaries with considerably higher starting masses, and the weighted average of their data then produces a mass greater than the starting mass.
I don't recall the post-processing taking care of these types of physical constraints, and looking at the paper I don't think it does (some of the symbols, like M_sys, M_tri, etc., are confusing to me, so I'm not sure). While this is ultimately an issue in how we weight the simulations deemed pertinent to the interpolation, I think the easiest fix would be adding a correction in the post-processing step. That needs some consideration, since to my knowledge certain systems (like those that undergo reverse MT) can legitimately increase in stellar mass.
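A sketch of what such a post-processing correction could look like (the function and class names here are hypothetical, not the actual POSYDON API): clamp the interpolated final mass so a star cannot end more massive than it started, while exempting classes where mass gain is physical.

```python
# Assumed class labels; only illustrative.
MASS_GAIN_ALLOWED = {"reverse_MT"}

def clamp_final_mass(m_initial, m_final_interp, interp_class):
    """Return a corrected final mass for one star (hypothetical helper)."""
    if interp_class in MASS_GAIN_ALLOWED:
        return m_final_interp              # mass gain is physically possible
    return min(m_final_interp, m_initial)  # otherwise enforce M_final <= M_ZAMS

print(clamp_final_mass(35.5, 42.4, "stable_MT"))   # clamped to 35.5
print(clamp_final_mass(12.0, 14.3, "reverse_MT"))  # allowed to grow: 14.3
```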
About the issue of sparse data due to log space: I'd expect the interpolation to happen in log space if the underlying grid is in log space, which it is. If the interpolation is done in log space, the weighting shouldn't be an issue, as explained by @philipp-rajah .
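A small sketch of why the choice of space matters for the weighting (made-up numbers, not grid data): a query at the geometric mean of two log-spaced grid points is equidistant from both in log space, but linear-space distances strongly favor the lower-mass point.

```python
import numpy as np

masses = np.array([10.0, 100.0])  # two grid points, log-spaced
query = 10**1.5                   # geometric mean of 10 and 100 (~31.6)

# Linear-space inverse-distance weights favor the low-mass point...
d_lin = np.abs(masses - query)
w_lin = (1 / d_lin) / (1 / d_lin).sum()

# ...while log-space distances correctly treat the query as equidistant.
d_log = np.abs(np.log10(masses) - np.log10(query))
w_log = (1 / d_log) / (1 / d_log).sum()

print(w_lin)  # skewed toward the 10 Msun point
print(w_log)  # ~[0.5, 0.5]
```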
About the reverse MT: this should be a different interpolation class. Besides that, I'd need to check the data, but even in reverse MT we shouldn't see the primary become more massive than it was initially. If you see this in the grid data, please let me know.
Apologies if I did not explain myself correctly. While I still think the weighting is an issue, the main issue is that the closest data points in the sparse regions of the grid can be dissimilar from the point we want to interpolate. It is therefore entirely plausible that the final mass comes out higher than the initial mass if the simulations used for the interpolation have much larger masses than the point we want to interpolate.
For the reverse MT, I was referring to the constraints that we apply. The constraints/physical corrections we apply are independent of class. So if we add a constraint that keeps the final mass from being larger than the initial one, it could impact other classes where the final value is supposed to be larger than the initial value. If no final masses are supposed to be larger than the initial value, then this is irrelevant, and it sounds like that is the case. But I am wondering if class-wise constraints might be something we need in the future?
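If class-wise constraints do turn out to be needed, one possible shape for them is a registry mapping each interpolation class to its own list of checks, rather than one global correction (everything below is a hypothetical sketch; the class labels and functions are not POSYDON names):

```python
def no_mass_gain(m_initial, m_final):
    """Cap the final mass at the initial mass."""
    return min(m_final, m_initial)

# Hypothetical per-class constraint registry; reverse MT is exempt
# from the no-mass-gain rule because accretion can grow the star.
CONSTRAINTS = {
    "stable_MT": [no_mass_gain],
    "no_MT": [no_mass_gain],
    "reverse_MT": [],
}

def apply_constraints(interp_class, m_initial, m_final):
    """Run every constraint registered for this class, in order."""
    for constraint in CONSTRAINTS.get(interp_class, []):
        m_final = constraint(m_initial, m_final)
    return m_final
```

The advantage of the registry layout is that adding a new class, or a class-specific exception, does not touch the correction logic of the other classes.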
When running a binary with large masses, I'm getting $M_\mathrm{final}$ larger than the initial $M_\mathrm{ZAMS}$ back from the interpolator.
For example, consider the following binary at Z=1e-4, where S1_mass increases without an interaction occurring.