Open kaligrafy opened 2 years ago
Having large logs can make the interview hang or be very slow
First, do logs even have to go to the frontend?
they should not, but right now we just append to an array so they do. This should be done server-side
and logs could be simplified a lot while keeping the same information!
They are server-side only, so that is not the issue, though they do seem to make the update queries a bit slower
As part of the redesign, and to ensure confidentiality, logs will go in a separate table, indexed to allow quick writes, with one entry per log containing a timestamp, the survey ID, a nullable user ID (null means the participant), and the field updated.
Should we log participant data and validated data in separate tables? Or add a boolean field in the single log table?
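For illustration, here is a minimal sketch of what one row of the proposed log table could look like, written in Python. The class and field names are hypothetical, not a final schema, and the `is_validated` flag shows the single-table-with-boolean option from the question above:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class InterviewLogEntry:
    """Hypothetical shape of one row in the proposed log table."""
    timestamp: datetime
    survey_id: int
    user_id: Optional[int]  # None = the participant themselves
    field_updated: str
    is_validated: bool = False  # single-table option instead of a separate validated-data table

def log_rows_for_edit(timestamp, survey_id, user_id, fields):
    """One row per updated field, as proposed (one entry per log)."""
    return [InterviewLogEntry(timestamp, survey_id, user_id, f) for f in fields]
```

With one row per updated field, inserts stay small and the table can be queried per survey, per user, or per field without parsing JSON.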
In case we want to use the logs to get the exact amount of time a user spends on an interview, here's a post about SQL queries that can be tweaked to return this info: https://stackoverflow.com/questions/30877926/how-to-group-following-rows-by-not-unique-value/30880137#30880137
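The linked answer uses a "gaps and islands" grouping (consecutive rows with the same non-unique value form one run). A sketch of the same logic in Python, assuming log rows reduced to `(timestamp, section)` events ordered in time (the input shape is an assumption, not the actual log format):

```python
from itertools import groupby
from datetime import datetime

def time_per_section(rows):
    """rows: iterable of (timestamp, section) events.
    Groups consecutive rows with the same section (the 'islands' of the
    linked SQL approach) and sums the duration of each run, where a run
    ends when the next run starts."""
    ordered = sorted(rows, key=lambda r: r[0])
    # Collapse consecutive identical sections into (section, first_ts, last_ts) runs.
    runs = []
    for section, grp in groupby(ordered, key=lambda r: r[1]):
        grp = list(grp)
        runs.append((section, grp[0][0], grp[-1][0]))
    totals = {}
    for i, (section, start, last) in enumerate(runs):
        # The run's end is the next run's start; the final run has no known end.
        end = runs[i + 1][1] if i + 1 < len(runs) else last
        totals[section] = totals.get(section, 0) + (end - start).total_seconds()
    return totals
```

The same grouping can be done in SQL with window functions once the logs live in their own table, which is one argument for the one-row-per-log design.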
@greenscientist You were mentioning that we could use tools like Prometheus for edit logging? I'd like your input on this issue.
Here are the requirements:
In order to analyze the flow of the interview post-survey, to study for example the "fardeau du répondant" (participant burden), we need to know the sequence of actions, how much time was spent on each question, whether a given answer was changed multiple times, etc.
A "thin" version of the requirement is to be able to simply know how much time a user has spent on each section of the survey. In this case, it is not required to track each edit.
Another requirement is to track which users touched which interviews, either for validation or editing. That is partly for basic security (who accessed what), but also to track the work done by people. For specific roles (for example an 'interviewer' for phone surveys), one wants to be able to output, for each user, how many interviews they touched, how many they started, how many they completed, etc. (This last requirement is issue #43, but it is somewhat related to this one.)
Note that each survey can have its own directives regarding what to track. The above requirements are examples from one ongoing survey, but other surveys may have different ones, or none at all. Evolution should be able to support whatever will allow the survey maintainers to properly get the information they want/need.
Current implementation: Upon each edit, we save a timestamp, along with the fields that were changed/removed and their values. This is saved as an array in a JSON field of the interviews table. To track the time spent on sections, there's a _sections field in the responses, which has the timestamp and name of each section started and all actions done on the sections.
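For illustration, the current append-to-a-JSON-array approach looks roughly like this (the key names are assumptions for the sketch, not the actual schema); it shows why the whole array has to be rewritten, and keeps growing, on every update:

```python
from copy import deepcopy

def append_log(interview, timestamp, values_by_path, unset_paths):
    """Mimics the current approach: every edit appends one entry to a
    'logs' array stored in a JSON field of the interviews table, so the
    entire array is rewritten on each update and grows without bound."""
    interview.setdefault('logs', []).append({
        'timestamp': timestamp,
        'valuesByPath': deepcopy(values_by_path),  # changed/added values
        'unsetPaths': list(unset_paths),           # removed values
    })
    return interview
```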
Problems:
Possible naive solution:
Still use the database, but with separate tables, one row per log, not indexed so that insertion stays cheap
But is there something more complete and purpose-built for this, that is not too hard to implement and integrate with Evolution?
A quick remark on this. This is a user-flow-tracking "need". A tracing tool like Zipkin could be useful here. There might be better tools now, or something directly integrated with OpenTelemetry. Will investigate.
"Evolution should be able to support whatever" we need to stop being in the business of supporting everything in the world. We need more restricted specifications. Unless we want to spend all our time working on this.
OK, we really have 3 separate requirements here. They can probably be implemented separately. One system could provide all of them, but that might over-complicate the solution, so we will try not to make it a priority.
Maybe drop old ones or alert...