Below are my thoughts on the next items to work on and develop. This is primarily for @maxgarman, and I've roughly sorted them from near term to longer term. The details also get fuzzier the further down the list we go...
Add error bars to the normalized intensity plot. There are a few uncertainty sources in the peak table right now; I'm not sure whether it's better to try to combine them, or to show a series of error bars offset from each data point to give a sense of the scale. @dnewton600 might have some ideas along these lines...
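A rough sketch of the offset idea, assuming the plot is built with plotly graph objects (the source names, column layout, and offset size are placeholders for whatever is actually in the peak table):

```python
import plotly.graph_objects as go

# Rough sketch: one error-bar series per uncertainty source in the peak table,
# offset slightly in x so the bars don't sit on top of each other.
# The source names and the 0.1-degree offset are placeholders.
def intensity_plot_with_errors(two_theta, norm_intensity, error_sources):
    """error_sources: dict mapping source name -> array of 1-sigma errors."""
    fig = go.Figure()
    fig.add_trace(go.Scatter(x=two_theta, y=norm_intensity,
                             mode="markers", name="normalized intensity"))
    for i, (name, errs) in enumerate(error_sources.items(), start=1):
        fig.add_trace(go.Scatter(
            x=[t + 0.1 * i for t in two_theta],   # small x offset per source
            y=norm_intensity,
            mode="markers",
            marker=dict(size=3),                  # small marker, bars carry the info
            error_y=dict(type="data", array=errs, visible=True),
            name=f"{name} uncertainty"))
    return fig
```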
Create an automated testing script. When we finish a round of edits, I'd like to run several test cases to check that we're still getting the same answers. Right now I think we do this manually and don't record anything. The main things to record are the example number, phase fraction(s), and number of peaks fit.
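One possible shape for this, sketched as a pytest regression check against a stored baseline file; `run_analysis` and the baseline layout are hypothetical stand-ins for the app's real entry point and output:

```python
import json
import pytest

from analysis import run_analysis  # hypothetical import; point at the real entry point

# Regression sketch: re-run each recorded example and compare the quantities we
# care about (phase fractions, number of peaks fit) to a stored baseline.
with open("tests/baseline_results.json") as f:
    BASELINE = json.load(f)  # e.g. {"example_01": {"phase_fractions": [...], "n_peaks": 12}, ...}

@pytest.mark.parametrize("example_id", sorted(BASELINE))
def test_example_matches_baseline(example_id):
    result = run_analysis(example_id)  # hypothetical: whatever runs one example end to end
    expected = BASELINE[example_id]
    assert result["n_peaks"] == expected["n_peaks"]
    assert result["phase_fractions"] == pytest.approx(expected["phase_fractions"], rel=1e-3)
```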
With @cmschenck and @creuzige, complete the interaction volume code. I've got some ideas: use the lozenge-shaped interaction area (volume), create a grid of points inside that area, and then use a filled contour to show the locations of diffracted, fluoresced, and absorbed intensity. I'm pretty sure @cmschenck has some ideas about this as well.
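A very rough sketch of that idea: grid the cross-section, mask to the lozenge where the incident and diffracted beams overlap, and fill a contour with a placeholder Beer-Lambert attenuation. The geometry and coefficients here are stand-ins for whatever @cmschenck and @creuzige work out:

```python
import numpy as np
import plotly.graph_objects as go

# Placeholder sketch: grid the sample cross-section (x along the surface, z as
# depth), keep only points inside the lozenge where the incident and diffracted
# beams overlap, and show a simple Beer-Lambert attenuation as a filled contour.
def interaction_volume_contour(beam_width=0.5, two_theta_deg=90.0, mu=2.0, depth=1.0):
    theta = np.radians(two_theta_deg / 2.0)
    x = np.linspace(-2.0, 2.0, 400)
    z = np.linspace(0.0, depth, 200)
    X, Z = np.meshgrid(x, z)

    # Bands swept out by the incident and diffracted beams (both at angle theta
    # to the surface); their overlap is the lozenge-shaped interaction area.
    in_beam = np.abs(X - Z / np.tan(theta)) <= beam_width / 2
    out_beam = np.abs(X + Z / np.tan(theta)) <= beam_width / 2
    inside = in_beam & out_beam

    # Stand-in physics: attenuate along the in + out path length.
    path = 2.0 * Z / np.sin(theta)
    intensity = np.where(inside, np.exp(-mu * path), np.nan)

    return go.Figure(go.Contour(x=x, y=-z, z=intensity,
                                colorbar=dict(title="relative intensity")))
```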
Use the uncertainty algorithm to sort and rank the sources of uncertainty in the phase fraction from high to low. Right now we're kind of eyeballing it: for most of the examples it's the variation in normalized intensity. But some peaks have pretty poor fits/intensities, so those might start to contribute. @dnewton600 might have some ideas on when it's "better" to remove a peak from fitting if it contributes significantly to the uncertainty.
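One simple way the ranking step could look once per-source variance contributions to the phase fraction are available (the source names and numbers below are made up):

```python
# Sketch of the ranking step, once per-source variance contributions to the
# phase fraction are available (source names and numbers below are made up).
def rank_uncertainty_sources(variance_contributions):
    """variance_contributions: dict mapping source name -> variance in the phase fraction."""
    total = sum(variance_contributions.values())
    ranked = sorted(variance_contributions.items(), key=lambda kv: kv[1], reverse=True)
    for name, var in ranked:
        print(f"{name:<32s} {var:.2e}  ({100 * var / total:.1f}% of total)")
    return ranked

rank_uncertainty_sources({
    "normalized intensity variation": 4.0e-4,
    "peak fit uncertainty": 1.5e-4,
    "counting statistics": 2.0e-5,
})
```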
Prototype report output. I'd like to wrap all the plots and data up into a .pdf, .cif, .json, or .csv file. Not sure of the best way to organize that right now. Bonus points if you can take the exported report (or some subset of it) and feed it back into the program for analysis...
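A sketch of the round-trip part using JSON, with placeholder keys and values; .csv/.pdf could then be derived, export-only views of the same dict:

```python
import json

# Sketch of the round-trip idea: write the fitted results to JSON (easy to read
# back into the program), and treat .csv/.pdf as derived, export-only views.
# All keys and values below are placeholders.
def export_report(results, path="report.json"):
    with open(path, "w") as f:
        json.dump(results, f, indent=2)

def import_report(path="report.json"):
    with open(path) as f:
        return json.load(f)

report = {
    "example": "example_05",
    "phase_fractions": {"austenite": 0.18, "ferrite": 0.82},
    "peaks": [{"hkl": "111", "two_theta": 43.6, "norm_intensity": 1.02, "sigma": 0.05}],
}
export_report(report)
assert import_report()["phase_fractions"] == report["phase_fractions"]
```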
Add composition variation to uncertainties. Likely a longer term project...
Thanks, Adam, for putting these here. I'll put some comments here that I'll reiterate in the meeting today:
Bullet point 1 (error bars): I think it may be good to add overlaid error bars, where the inner error bar measures the uncertainty in fitting and the outer includes both the fitting and normalization uncertainties.
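A sketch of what that could look like in plotly, assuming the two components combine in quadrature (the quadrature combination is an assumption here, not a decision):

```python
import numpy as np
import plotly.graph_objects as go

# Sketch of the overlaid bars: outer = fit and normalization uncertainties
# combined in quadrature, inner = fit uncertainty only (drawn thicker so it
# reads as the inner bar). Quadrature is an assumption, not a decision.
def overlaid_error_bars(two_theta, norm_intensity, sigma_fit, sigma_norm):
    sigma_total = np.sqrt(np.asarray(sigma_fit) ** 2 + np.asarray(sigma_norm) ** 2)
    fig = go.Figure()
    fig.add_trace(go.Scatter(x=two_theta, y=norm_intensity, mode="markers",
                             name="fit + normalization",
                             error_y=dict(type="data", array=sigma_total,
                                          visible=True, thickness=1)))
    fig.add_trace(go.Scatter(x=two_theta, y=norm_intensity, mode="markers",
                             name="fit only",
                             error_y=dict(type="data", array=sigma_fit,
                                          visible=True, thickness=3)))
    return fig
```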
Bullet point 2 (automated testing): the dash library includes an automated testing framework (https://dash.plotly.com/testing), which would be great to look into. I think this is a good idea to bring up.
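For reference, a minimal example in the style of the dash testing docs; `dash[testing]` provides the `dash_duo` pytest fixture, and a real test would import our app and check the phase-fraction output elements instead of the toy layout here:

```python
from dash import Dash, html

# Minimal example in the style of the dash testing docs (pip install "dash[testing]").
# The dash_duo pytest fixture starts the app in a Selenium-driven browser; a real
# test would import our app and check the phase-fraction output elements instead.
def test_app_smoke(dash_duo):
    app = Dash(__name__)
    app.layout = html.Div("hello", id="status")
    dash_duo.start_server(app)
    dash_duo.wait_for_text_to_equal("#status", "hello", timeout=4)
    assert dash_duo.get_logs() == [], "browser console should be clean"
```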
Bullet point 4 (uncertainty ranking): the nice thing about using the hierarchical Bayesian approach is that we won't have to worry about 'discarding' any data. If the uncertainty from a fitted peak is very high, the model essentially treats the peak as providing no information (and it won't affect the uncertainty estimates of the phase fraction). This would not be the case if we did something simple like using an average of the normalized intensities.
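Not the hierarchical model itself, but a toy numpy illustration of the precision-weighting intuition (numbers made up): the poorly fit peak gets essentially zero weight, while the plain average is pulled around by it.

```python
import numpy as np

# Toy illustration (not the hierarchical model): with precision (1/sigma^2)
# weighting, a peak with a huge fit uncertainty contributes almost nothing,
# while a plain average treats it like any other peak. Numbers are made up.
norm_intensity = np.array([1.00, 1.05, 0.98, 1.60])  # last peak is a poor fit
sigma = np.array([0.03, 0.04, 0.03, 0.80])

weights = 1.0 / sigma**2
weighted_mean = np.sum(weights * norm_intensity) / np.sum(weights)

print(f"plain average:      {norm_intensity.mean():.3f}")
print(f"precision-weighted: {weighted_mean:.3f} "
      f"(poor peak carries {weights[-1] / weights.sum():.2%} of the weight)")
```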