Clearly it would be preferable if we could make use of modeling work already done, instead of reinventing the wheel (yet again). The main starting point should be the work the various participants have already done. Hopefully you would be able to upload that here, and also to present it to the rest of us?
In addition we should probably check with existing standards to see if we can take over something from there, or at least not wantonly break with it. A couple of places to look might be
mmCIF (https://www.ebi.ac.uk/pdbe/docs/documentation/mmcif.html). This is highly authoritative, is used for PDB deposition, and has already modeled the processing of multi-sweep data sets. On the flip side it is huge and quite heavyweight to work with, so it might take some effort to isolate the records that are relevant to us (unless someone is already on top of this??)
Does anyone have any other source we ought to check?