Closed: sophie22 closed this issue 1 year ago
Hi, @sophie22. This looks great. The only thing I have to add is whether it's worth standardising the level of precision reported for each test, which I think would come under the remit of this issue. Most of them are already in a form that I think is fine, but there are some where I've been sloppy. Sorry!
I would say the following for the ACR modules:
- Geometric Accuracy -> 2 d.p.
- Ghosting -> 3 d.p.
- Slice Position -> 2 d.p.
- Slice Thickness -> 2 d.p.
- SNR -> 2 d.p.
- MTF -> estimated rotation angle -> 1 d.p.; MTF values -> 2 d.p.
- Uniformity -> 2 d.p.
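For illustration, the precision table above could be applied with a small helper like the one below. This is only a sketch: the task keys and result-dict layout are assumptions for the example, not hazen's actual API.

```python
# Hypothetical per-task precision map based on the list above.
# Task names and dict shapes here are illustrative assumptions.
TASK_PRECISION = {
    "geometric_accuracy": 2,
    "ghosting": 3,
    "slice_position": 2,
    "slice_thickness": 2,
    "snr": 2,
    "mtf_rotation_angle": 1,
    "mtf": 2,
    "uniformity": 2,
}

def round_results(task: str, values: dict) -> dict:
    """Round every float in a task's result dict to the task's standard precision."""
    ndigits = TASK_PRECISION[task]
    return {
        key: round(val, ndigits) if isinstance(val, float) else val
        for key, val in values.items()
    }
```

For example, `round_results("ghosting", {"ghosting_percentage": 0.123456})` would return `{"ghosting_percentage": 0.123}`.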
Hi @YassineAzma, that is a great idea, and I agree that precision should be standardised as well. Thank you for providing the number of decimal places to display for the ACR tasks; I had confirmation from @elizaGSTT and Becky that the same precision is appropriate for the MagNET phantom values as well.
Is your feature request related to a problem? Please describe.
Current result dictionaries have varying keys and varying levels of nesting, which makes it difficult to write a standardised parser in the web-app for storing result values and metadata in a relational database. Files with examples of the result output for every task: output.txt ACR_output.txt
Describe the solution you'd like
output proposed.txt ACR_output proposed.txt
This issue does not cover the SNR task that has additional complexity, addressed in a separate issue.
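To make the parsing concern concrete, a flattened, uniform result structure maps directly onto a relational table. The keys and table schema below are purely illustrative assumptions; the actual proposed format is in the attached `output proposed.txt` / `ACR_output proposed.txt` files.

```python
import sqlite3

# Hypothetical flattened result record: one row per measured value,
# with metadata alongside it. Key names are assumptions, not the
# project's actual proposed schema.
flat_results = [
    {"task": "slice_thickness", "file": "example.dcm",
     "measurement": "slice_width_mm", "value": 5.02},
    {"task": "ghosting", "file": "example.dcm",
     "measurement": "ghosting_percentage", "value": 0.123},
]

# A uniform structure like this needs no per-task parsing logic.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE results (task TEXT, file TEXT, measurement TEXT, value REAL)"
)
conn.executemany(
    "INSERT INTO results VALUES (:task, :file, :measurement, :value)",
    flat_results,
)
rows = conn.execute("SELECT task, value FROM results ORDER BY task").fetchall()
```

The point of the sketch is that once every task emits the same flat shape, a single `executemany` call stores all results; with the current variable nesting, each task would need bespoke traversal code.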