Closed georgeslabreche closed 6 years ago
How invested do we want to be with UX?
If you are asking about how all this subtasking and validation should be defined: for MVP I'd start with having all of that in the code.
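To make "having all of that in the code" concrete, here is a minimal sketch of what a code-defined task with subtasks and a validation rule could look like. All names here (`Task`, `validate`, `is_verified`) are hypothetical illustrations, not an existing API in this project:

```python
# Hypothetical sketch: Task/validate/is_verified are illustrative names,
# not part of this project's codebase.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Task:
    name: str
    validate: Callable[[Dict], bool]           # validation rule lives in code
    subtasks: List["Task"] = field(default_factory=list)

    def is_verified(self, data: Dict) -> bool:
        # A task counts as verified when its own rule passes
        # and all of its subtasks are verified too.
        return self.validate(data) and all(
            sub.is_verified(data) for sub in self.subtasks
        )

# Example: a transcription task with one spellcheck subtask.
transcribe = Task(
    name="transcribe",
    validate=lambda d: bool(d.get("text")),
    subtasks=[Task(name="spellcheck",
                   validate=lambda d: d.get("checked", False))],
)

print(transcribe.is_verified({"text": "hello", "checked": True}))  # True
print(transcribe.is_verified({"text": "hello"}))                   # False
```

For the MVP, keeping definitions like this in code avoids building any configuration UI up front.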
But it would be good to show results in the admin dashboard: how many tasks are verified, and how much data is verified.
Hmm, for the dashboard we'd have to reflectively process the models created by the developer. Unless I am overthinking it?
> reflectively process the models
I don't get this sentence. What does it mean, what would be the effect?
How are we making the dashboard flexible enough to account for all the models a dev can create? I guess we just have a framework/template around tracking task statuses.
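One way a "framework/template around tracking task statuses" could sidestep reflective processing of developer models: the framework owns a status registry keyed by task id, and the dashboard reads only that. This is a sketch under that assumption; `Status`, `TaskTracker`, and the method names are made up for illustration:

```python
# Sketch of a model-agnostic status-tracking layer (all names hypothetical):
# the dashboard only ever sees statuses, never the developer-defined models.
from enum import Enum
from collections import Counter

class Status(Enum):
    PENDING = "pending"
    COMPLETED = "completed"
    VERIFIED = "verified"

class TaskTracker:
    """Framework-owned registry keyed by task id."""
    def __init__(self):
        self._statuses = {}

    def set_status(self, task_id: str, status: Status) -> None:
        self._statuses[task_id] = status

    def counts(self) -> Counter:
        # What the dashboard would render, regardless of the dev's models.
        return Counter(s.value for s in self._statuses.values())

tracker = TaskTracker()
tracker.set_status("t1", Status.VERIFIED)
tracker.set_status("t2", Status.PENDING)
print(tracker.counts())  # Counter({'verified': 1, 'pending': 1})
```

The design choice here is that developer models never need to be introspected; they only have to report status transitions to the framework.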
Oh yes. That's a good question.
Regarding statistics: for the MVP I would just rely on statistics over tasks/documents, i.e. verified tasks / total number of tasks (the total can grow while the project runs, because subtasks can be created dynamically) and verified documents / documents imported (the so-called amount of data verified). These are different measures, and the second would require the notion of a "document" (apart from the existing task).
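The two MVP statistics above can be sketched as a straightforward computation. The `tasks`/`documents` records here are stand-ins, not the project's actual models:

```python
# Illustrative computation of the two MVP statistics; the task and
# document records below are hypothetical stand-ins.

def ratio(part: int, whole: int) -> float:
    # Guard against division by zero before any data is imported.
    return part / whole if whole else 0.0

tasks = [{"id": 1, "verified": True},
         {"id": 2, "verified": False},
         {"id": 3, "verified": True}]   # total may grow as subtasks are added
documents = [{"id": "d1", "verified": True},
             {"id": "d2", "verified": False}]

tasks_verified = ratio(sum(t["verified"] for t in tasks), len(tasks))
data_verified = ratio(sum(d["verified"] for d in documents), len(documents))

print(f"verified tasks: {tasks_verified:.0%}")          # 67%
print(f"amount of data verified: {data_verified:.0%}")  # 50%
```

Because subtasks appear dynamically, the task ratio can drop when new subtasks are created, which is why the document-based ratio is the more stable "amount of data verified" figure.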
Regarding UX related to verification: what about manual verification (#5)? This would require showing the data sent, but not necessarily in a model view.
We have addressed that in dedicated issues: #16 #23
ACCEPTANCE CRITERIA: The demo project should showcase the following features:
QUESTIONS: