We can differentiate between records from score sets, experiments, etc. by looking at what fields are included, but this is imperfect and makes some tasks (like choosing what view models to use for validation in MaveTools) harder.
I propose adding a new field to the view models called `type` or `recordType` or similar that identifies what view model was used to create it. This could get extended to label meta-analysis score sets or other possibly-forthcoming record types, like records describing clinical annotations/strength of evidence or base editor data, that might have slightly different fields or validation constraints.
The benefit is that it makes validation of uploads and downstream processing of records much more explicit. The downside is that we have to figure out exactly what labels go in the field, and it also makes the API output slightly more verbose.
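To illustrate the idea, here is a minimal sketch of discriminator-based dispatch using plain dataclasses. The class names, the `recordType` field name, and the `parse_record` helper are all hypothetical, not MaveDB's actual view models, which would carry many more fields:

```python
from dataclasses import dataclass
from typing import Literal

# Hypothetical view models: each carries a fixed discriminator value.
@dataclass
class ScoreSet:
    title: str
    recordType: Literal["scoreSet"] = "scoreSet"

@dataclass
class Experiment:
    title: str
    recordType: Literal["experiment"] = "experiment"

# A consumer (e.g. a validator in MaveTools) can dispatch on the explicit
# label instead of inferring the type from which optional fields are present.
MODELS = {"scoreSet": ScoreSet, "experiment": Experiment}

def parse_record(data: dict):
    """Choose the view model named by the record's recordType field."""
    model = MODELS[data["recordType"]]
    return model(**{k: v for k, v in data.items() if k != "recordType"})
```

With this in place, `parse_record({"recordType": "scoreSet", "title": "Example"})` returns a `ScoreSet` without any field-sniffing heuristics.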