nmdickson opened 4 years ago
Two testing scripts, one for the data loading and one for the likelihoods, make the most sense to me.
This will require creating a test dataset, i.e. a test cluster HDF resource. It will need to have all the usual trappings of any cluster, and be able to run through every likelihood, so I think for now I'll borrow the data from both M62 and 47Tuc, where they exist. If necessary (or expedient) I'll adjust the actual variable values as we go.
I've also placed the `PM_tot` and `PM_ratio` variables into the `high_mass` subgroup, as it's a good idea to test on these subgroups. Note that you cannot have a mix of variables inside and outside subgroups (whenever a subgroup exists, it removes the parent group's data). I'm not sure when that decision was made, but it's how things stand for now, so it's how we'll test as well.
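As a rough sketch, the test resource could be written with `h5py` along these lines. Only the `high_mass` subgroup and the `PM_tot`/`PM_ratio` variables come from the discussion above; the parent group name (`proper_motion`), the file name, and the data values are placeholders for illustration, not the project's actual layout:

```python
import numpy as np
import h5py

# Build a minimal test cluster HDF resource (structure is illustrative only).
with h5py.File("test_cluster.hdf5", "w") as f:
    pm = f.create_group("proper_motion")  # hypothetical parent group name
    # All variables live inside the subgroup: when a subgroup exists it
    # replaces the parent group's data, so nothing sits directly under
    # `proper_motion` itself.
    hm = pm.create_group("high_mass")
    hm.create_dataset("PM_tot", data=np.array([5.1, 4.8, 5.3]))
    hm.create_dataset("PM_ratio", data=np.array([0.95, 1.02, 0.99]))

# Read it back to confirm the subgroup layout.
with h5py.File("test_cluster.hdf5", "r") as f:
    assert "proper_motion/high_mass/PM_tot" in f
    assert "proper_motion/high_mass/PM_ratio" in f
```

Keeping every variable inside the subgroup mirrors the no-mixing rule described above, so the data-loading tests exercise the same structure real clusters use.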
Significant progress made in 357533d on data tests.
This should be bumped up in priority now, especially testing that the fitting scripts can run correctly, so that new branches can easily be checked before they are PR'd.
It's probably past time to add unit tests, just to make sure changes aren't breaking things like the likelihood functions or the data reading.
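For the likelihood side, the unit tests could look something like the sketch below. The project's actual likelihood functions aren't shown here, so `gaussian_loglike` is a hypothetical stand-in; the point is the shape of the tests (finite output, sensible ordering), not the specific function:

```python
import numpy as np

def gaussian_loglike(data, model, err):
    """Hypothetical stand-in for one of the project's likelihood functions."""
    return -0.5 * np.sum(((data - model) / err) ** 2
                         + np.log(2 * np.pi * err ** 2))

def test_loglike_is_finite():
    # A likelihood evaluated on valid inputs should never be NaN or inf.
    rng = np.random.default_rng(0)
    data = rng.normal(size=50)
    assert np.isfinite(gaussian_loglike(data, np.zeros(50), np.ones(50)))

def test_perfect_model_beats_offset_model():
    # The exact model should score strictly better than a shifted one.
    data = np.arange(5.0)
    err = np.ones(5)
    best = gaussian_loglike(data, data, err)
    worse = gaussian_loglike(data, data + 1.0, err)
    assert best > worse
```

Tests of this shape run under `pytest` with no fixtures needed, and each new branch could run them against the test cluster resource before a PR is opened.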