ukanga opened this issue 2 months ago
The implementation plan needs to cater for nested repeats.

Example of nested repeats:
{
  "_id": 2533748,
  "_tags": [],
  "_uuid": "2f684d87-d3c9-4bb2-adab-6e18aaa8bf89",
  "_notes": [],
  "_edited": false,
  "_status": "submitted_via_web",
  "_version": "202411290755",
  "_duration": "",
  "_xform_id": 22511,
  "_attachments": [],
  "_geolocation": [null, null],
  "_media_count": 0,
  "_total_media": 0,
  "formhub/uuid": "50ac8510ca7f4a4b979b4104f8f5b9fa",
  "_submitted_by": "johndoe",
  "_date_modified": "2024-11-29T07:56:59.824285+00:00",
  "hospital_repeat": [
    {
      "hospital_repeat/hospital": "Aga Khan",
      "hospital_repeat/child_repeat": [
        {
          "hospital_repeat/child_repeat/sex": "male",
          "hospital_repeat/child_repeat/name": "Zakayo",
          "hospital_repeat/child_repeat/birthweight": 3.3
        },
        {
          "hospital_repeat/child_repeat/sex": "female",
          "hospital_repeat/child_repeat/name": "Melania",
          "hospital_repeat/child_repeat/birthweight": 3.5
        }
      ]
    },
    {
      "hospital_repeat/hospital": "Mama Lucy",
      "hospital_repeat/child_repeat": [
        {
          "hospital_repeat/child_repeat/sex": "female",
          "hospital_repeat/child_repeat/name": "Winnie",
          "hospital_repeat/child_repeat/birthweight": 3.1
        }
      ]
    }
  ],
  "meta/instanceID": "uuid:2f684d87-d3c9-4bb2-adab-6e18aaa8bf89",
  "_submission_time": "2024-11-29T07:56:59.491726+00:00",
  "_xform_id_string": "nested_repeats",
  "_bamboo_dataset_id": "",
  "_media_all_received": true
}
Example nested repeats export
nested_repeats_2024_11_29_08_00_34_821888.csv
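For context, each repeat response becomes a set of indexed columns in the CSV, so the widest submission seen determines the full header set. A rough illustration of the repeat headers implied by the submission above, assuming the indexed `group[n]/question` naming for repeats (the attached CSV is not reproduced here, so treat the exact names as an assumption):

```python
# Illustrative only: repeat headers implied by 2 hospital_repeat entries,
# each with up to 2 child_repeat entries. Exact header naming may differ.
repeat_headers = [
    "hospital_repeat[1]/hospital",
    "hospital_repeat[1]/child_repeat[1]/name",
    "hospital_repeat[1]/child_repeat[1]/sex",
    "hospital_repeat[1]/child_repeat[1]/birthweight",
    "hospital_repeat[1]/child_repeat[2]/name",
    "hospital_repeat[1]/child_repeat[2]/sex",
    "hospital_repeat[1]/child_repeat[2]/birthweight",
    # ... and the same set again under hospital_repeat[2]
]
```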
When looping through the data to build the repeat columns, we also build the attachment URI. We can avoid looping through the submissions to build the repeat columns, but it seems we still need to do it to build the attachment URI.
The idea for this issue is to eliminate the first one, the loop that builds the repeat columns. We shouldn't have to.

> The idea for this issue is to eliminate the first one, the loop that builds the repeat columns. We shouldn't have to.
Noted. After further investigation, I confirmed that the first loop creates the columns and the second loop relies on the columns already having been created by the first. We can only eliminate the first loop through this proposal:
We need to keep track of the maximum number of responses for each repeat question from the submissions received.
For every new submission, we'll check the repeat questions. For each one of them, we'll check the number of responses in the incoming submission and compare with the current known maximum value. We'll update the current maximum value if the incoming value is greater.
The number of occurrences will be stored in a new metadata JSON field in the XFormVersion model.
Example
{
  "repeats_export_cols": {
    "hospital_repeat": 2,
    "child_repeat": 2
  }
}
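As an illustration of the tracking step, here is a minimal sketch of counting the repeat responses in an incoming submission, based on the flat-JSON layout in the example above (the function and its names are hypothetical, not existing code):

```python
def count_repeat_responses(submission, counts=None):
    """Walk a submission dict and record how many entries each repeat
    group has, keeping the largest count seen per repeat name."""
    if counts is None:
        counts = {}
    for key, value in submission.items():
        # Repeat answers arrive as lists of dicts, e.g. "hospital_repeat".
        if isinstance(value, list) and value and isinstance(value[0], dict):
            repeat_name = key.split("/")[-1]
            counts[repeat_name] = max(counts.get(repeat_name, 0), len(value))
            for entry in value:
                count_repeat_responses(entry, counts)
    return counts


# For the example submission above this yields:
# {"hospital_repeat": 2, "child_repeat": 2}
```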
When generating the CSV, we use repeats_export_cols to build the columns.
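A rough sketch of how the export could then expand the repeat columns from repeats_export_cols instead of scanning submissions. The `group[n]/question` header naming and the way the form structure is passed in are assumptions for illustration, not the existing export code:

```python
def build_repeat_columns(repeat_name, child_fields, repeats_export_cols):
    """Expand a repeat group into indexed column headers, using the
    stored maximum count instead of scanning every submission."""
    columns = []
    # Fall back to a single occurrence if no count has been stored yet.
    max_count = repeats_export_cols.get(repeat_name, 1)
    for i in range(1, max_count + 1):
        prefix = f"{repeat_name}[{i}]"
        for field in child_fields:
            if isinstance(field, tuple):
                # Nested repeat: (name, its own child fields).
                nested_name, nested_fields = field
                columns += [
                    f"{prefix}/{col}"
                    for col in build_repeat_columns(
                        nested_name, nested_fields, repeats_export_cols
                    )
                ]
            else:
                columns.append(f"{prefix}/{field}")
    return columns


# Example, using the counts stored in the metadata example above:
cols = build_repeat_columns(
    "hospital_repeat",
    ["hospital", ("child_repeat", ["name", "sex", "birthweight"])],
    {"hospital_repeat": 2, "child_repeat": 2},
)
# cols[:4] -> ['hospital_repeat[1]/hospital',
#              'hospital_repeat[1]/child_repeat[1]/name',
#              'hospital_repeat[1]/child_repeat[1]/sex',
#              'hospital_repeat[1]/child_repeat[1]/birthweight']
```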
Constraints
repeats_export_cols should be updated when a submission is deleted or updated. When a submission is updated, we can follow the same process as if it were a new submission. For deletion, say the deleted submission had the highest number of hospital_repeat responses, how do we determine the successor? Do we traverse all submissions? Solution: _If the submission being deleted had the highest number of any of the repeat responses, we delete the key repeats_export_cols and let it be re-built when generating the next CSV export. When generating the CSV, we check if repeats_export_cols is present; if not, we build it._

Multiple submissions may update the count for the same repeat, e.g. hospital_repeat, potentially at the same time; this will result in race conditions. Solution: we need to perform atomic updates at the database level and not at the application level:
from django.db import connection


def update_repeats_export_col(pk, repeat, incoming_max):
    """
    Atomically update a repeat's export number of columns.

    The GREATEST comparison happens inside a single SQL statement, so
    concurrent submissions cannot overwrite each other's maxima with a
    stale read-modify-write.
    """
    # NOTE: the path assumes the counts live under
    # metadata["repeats_export_cols"], as in the example above.
    with connection.cursor() as cursor:
        cursor.execute(
            """
            UPDATE logger_xformversion
            SET metadata = jsonb_set(
                -- ensure the parent key exists so jsonb_set can create
                -- the leaf entry for this repeat
                jsonb_set(
                    COALESCE(metadata, '{}'::jsonb),
                    '{repeats_export_cols}',
                    COALESCE(metadata->'repeats_export_cols', '{}'::jsonb)
                ),
                ARRAY['repeats_export_cols', %s],
                -- keep the larger of the stored maximum and the incoming count
                GREATEST(
                    COALESCE((metadata->'repeats_export_cols'->>%s)::int, 0),
                    %s
                )::text::jsonb
            )
            WHERE id = %s
            """,
            [repeat, repeat, incoming_max, pk],
        )
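A hypothetical call site, combining this with the counting sketch above when a submission is processed (names such as submission_json and xform_version are placeholders, not existing hooks):

```python
# Hypothetical hook after a submission is saved.
counts = count_repeat_responses(submission_json)
for repeat_name, incoming_max in counts.items():
    update_repeats_export_col(xform_version.pk, repeat_name, incoming_max)
```

Doing the GREATEST comparison inside the UPDATE is what keeps this safe; an application-level read-then-compare would reintroduce the race.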
The worst case in terms of performance is that we'll still need to loop through the submissions if repeats_export_cols is not present. But this will rarely happen for a form.
> The number of occurrences will be stored in a new metadata JSON field in the XFormVersion model.

Why the XFormVersion and not the XForm model? We don't currently refer to the XFormVersion during an export.
Could we consider using the MetaData class instead, which we have used historically, for any additional information for the form?
> repeats_export_cols should be updated when a submission is deleted or updated. When a submission is updated, we can follow the same process as if it were a new submission. For deletion, say the deleted submission had the highest number of hospital_repeat responses, how do we determine the successor? Do we traverse all submissions? Solution: _If the submission being deleted had the highest number of any of the repeat responses, we delete the key repeats_export_cols and let it be re-built when generating the next CSV export._
Could we also keep a count of the number of records/submissions that have the highest number of any of the repeats? Hence, during a deletion, we reduce the counter and only delete it when the submissions counter is zero?
> Could we consider using the MetaData class instead, which we have used historically, for any additional information for the form?
Sure, we can use the MetaData model. I had chosen XFormVersion because the form schema is tied to a specific version. Hence, during export, we check the version being exported and use the repeat metadata for that version.
> When a submission is updated, we can follow the same process as if it were a new submission.
I had proposed this, but on further thought, a submission that had the highest number of repeat responses can be edited to reduce the repeats. Hence, editing a submission should be handled similarly to deletion.
> Could we also keep a count of the number of records/submissions that have the highest number of any of the repeats? Hence, during a deletion, we reduce the counter and only delete it when the submissions counter is zero?
I think this will work, because there could be other submissions with the same number of repeats, and there is no need for deletion unless the counter is 0.
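One possible shape for the metadata if this counter is added, purely for illustration (the field names are not settled):

```python
metadata = {
    "repeats_export_cols": {
        # max: highest number of responses seen for the repeat;
        # count: how many submissions currently have that maximum.
        "hospital_repeat": {"max": 2, "count": 3},
        "child_repeat": {"max": 2, "count": 1},
    }
}
# On deleting or editing a submission that had `max` responses for a repeat,
# decrement `count`; only when it reaches 0 does the entry need to be
# dropped and re-built at the next export.
```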
The current process of CSV exports in Ona Data requires parsing all records to determine all the column headers specifically for repeat data. There is an opportunity to optimise this process such that not all records need to be parsed to determine all the required columns.
All possible columns for a form in Ona Data can be determined by reading the XForm definition of the form in XML or JSON, or even from the source XLSForm. For repeat questions, the number of repeat responses can be evaluated whenever a submission is received, and the stored number can be updated, in an accessible manner for each repeat group, whenever a higher number is identified. During a CSV export, there should then be no need to parse all submissions to determine the depth of a repeat group.