ChrisBoesch / singpathfire

An AngularFire-based version of SingPath
http://chrisboesch.github.io/singpathfire
MIT License

Track landing/start problem times and problem solved times in backend. #6

Open SingaporeClouds opened 9 years ago

dinoboff commented 9 years ago

Resolution time tracking is implemented:

TODO:

SingaporeClouds commented 9 years ago

I'm unclear on why the verifier can't save the solve duration when it records that the problem has been solved. The verifier already has access to the REST API and knows which user just successfully solved the problem. It just needs to record the duration as well.

userSolveTimes//// -> {"solved": true, "duration": 54}
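
A minimal sketch of that write, assuming the verifier talks to Firebase over REST with Python's `requests` library and that the elided path segments follow the same $pathId/$levelId/$problemId/$publicId layout used elsewhere in this thread; the app URL and secret are placeholders:

```python
import requests

FIREBASE = "https://singpathfire.firebaseio.com"  # placeholder app URL
AUTH = {"auth": "<FIREBASE_SECRET>"}              # placeholder credential

def record_solve_time(path_id, level_id, problem_id, public_id, duration):
    """Record the solve outcome and duration as soon as verification passes."""
    url = "%s/userSolveTimes/%s/%s/%s/%s.json" % (
        FIREBASE, path_id, level_id, problem_id, public_id)
    requests.patch(url, params=AUTH,
                   json={"solved": True, "duration": duration})
```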

On the API, there will be a way to fetch the solveTimePercentiles for each problem on a path.

problemSolveTimePercentiles/// -> {"10": 15, "50": 56, "90": 210}

Each time a player solves a new problem, it also shouldn't take much time (in the background or via an async GET) to recalculate the player's cumulative solve time percentile for the path. The process would fetch the problemSolveTimePercentiles for all the problems on the path and all the userSolveTimes on that path, then recalculate an average of the user's solve time percentiles across all problems solved.

userSolveTimePercentile/// -> {"percentile": 42, "solved": 14}
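
A sketch of that background recalculation, assuming the {"10": ..., "50": ..., "90": ...} shape above and simple linear interpolation between those marks (the interpolation choice is my assumption, not something the backend does today):

```python
def duration_to_percentile(duration, marks):
    """Piecewise-linear interpolation of a duration onto the 10/50/90 marks."""
    points = sorted((float(v), int(k)) for k, v in marks.items())
    if duration <= points[0][0]:
        return points[0][1]
    for (t0, p0), (t1, p1) in zip(points, points[1:]):
        if duration <= t1:
            return p0 + (p1 - p0) * (duration - t0) / (t1 - t0)
    return points[-1][1]

def user_path_percentile(user_solve_times, problem_percentiles):
    """Average the user's per-problem percentiles across all solved problems."""
    durations = {pid: rec["duration"] for pid, rec in user_solve_times.items()
                 if rec.get("solved") and pid in problem_percentiles}
    if not durations:
        return None
    pcts = [duration_to_percentile(d, problem_percentiles[pid])
            for pid, d in durations.items()]
    return {"percentile": round(sum(pcts) / len(pcts)),
            "solved": len(durations)}
```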

The GUI can then fetch the user's userSolveTimePercentile for a path and the problemSolveTimePercentiles for the problems in a task group to estimate how long each user will need to solve those problems.
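
For the GUI estimate, the same marks can be read in the other direction: project the user's path percentile onto a problem's marks to guess a solve time (again just a sketch of the interpolation idea):

```python
def estimate_duration(user_percentile, marks):
    """Map a user percentile onto a problem's 10/50/90 marks (inverse of above)."""
    points = sorted((int(k), float(v)) for k, v in marks.items())
    if user_percentile <= points[0][0]:
        return points[0][1]
    for (p0, t0), (p1, t1) in zip(points, points[1:]):
        if user_percentile <= p1:
            return t0 + (t1 - t0) * (user_percentile - p0) / (p1 - p0)
    return points[-1][1]

# e.g. the 42nd-percentile user above on the example problem:
# estimate_duration(42, {"10": 15, "50": 56, "90": 210})  # ~48 seconds
```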

userSolveTimes and userSolveTimePercentile can get updated in the background after each new problem a player solves. problemSolveTimePercentiles will likely get updated by a cron job each night. We can either recalculate everything or maintain an object that keeps track of which problems have had new solutions submitted that day.

problemsSolvedSinceLastUpdate/// -> true
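
A possible shape for that nightly job, assuming the dirty flags live under a problemsSolvedSinceLastUpdate node and flattening the userSolveTimes path to a single problem id for brevity (all paths here are hypothetical):

```python
import requests

FIREBASE = "https://singpathfire.firebaseio.com"  # placeholder app URL
AUTH = {"auth": "<FIREBASE_SECRET>"}

def recompute_problem_percentiles(problem_id):
    """Derive the 10/50/90 marks for one problem from its userSolveTimes."""
    url = "%s/userSolveTimes/%s.json" % (FIREBASE, problem_id)
    records = requests.get(url, params=AUTH).json() or {}
    durations = sorted(r["duration"] for r in records.values()
                       if r.get("solved"))
    if not durations:
        return
    marks = {str(p): durations[min(len(durations) - 1,
                                   len(durations) * p // 100)]
             for p in (10, 50, 90)}
    requests.put("%s/problemSolveTimePercentiles/%s.json"
                 % (FIREBASE, problem_id), params=AUTH, json=marks)

def nightly_update():
    """Recompute marks only for problems flagged since the last run."""
    flags_url = FIREBASE + "/problemsSolvedSinceLastUpdate.json"
    dirty = requests.get(flags_url, params=AUTH).json() or {}
    for problem_id in dirty:
        recompute_problem_percentiles(problem_id)
    requests.delete(flags_url, params=AUTH)  # clear flags after a full pass
```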

dinoboff commented 9 years ago

Like I said, time tracking is implemented. You can find it at /sinpath/resolutions/$pathId/$levelId/$problemId/$publicId with startedAt and endedAt fields.

The only (minor) issue I have is with the stats update and the lack of transactions in the REST API. I am thinking of updating stats either in a task queue that disallows concurrent tasks or in a cron job.

SingaporeClouds commented 9 years ago

Ok. Based on my past experience with GAE task queues, I recommend keeping the queues in Firebase if at all possible. Any cron job running on GAE would then check the Firebase queue object for tasks to run, and delete each task only once it has completed.
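
For instance, a GAE cron handler could drain a Firebase-hosted queue like this, deleting each task only after it completes; the queue path and the run_task callback are illustrative:

```python
import requests

QUEUE = "https://singpathfire.firebaseio.com/queue/tasks"  # hypothetical path
AUTH = {"auth": "<FIREBASE_SECRET>"}

def drain_queue(run_task):
    """Run every queued task; a task that raises stays queued for the next run."""
    tasks = requests.get(QUEUE + ".json", params=AUTH).json() or {}
    for task_id, task in sorted(tasks.items()):
        run_task(task)
        requests.delete("%s/%s.json" % (QUEUE, task_id), params=AUTH)
```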

Firebase has a transaction API. You can look at how they update counters. So running in a single thread on GAE should be fine initially.

dinoboff commented 9 years ago

Transactions are not available in the REST API, which the backend has to use.

SingaporeClouds commented 9 years ago

Ok. So single-threaded task processing from the backend it is.


SingaporeClouds commented 9 years ago

One option to process verification jobs concurrently would be to have a counter and a security rule, as discussed in this Stack Overflow answer:

http://stackoverflow.com/questions/23041800/firebase-transactions-via-rest-api

{ job: 123, update_counter: 0, started: null }

// Firebase security rule:
{
  ".write": "newData.hasChild('update_counter')",
  "update_counter": {
    ".validate": "newData.val() === data.val() + 1"
  }
}

The 1st verifier could look for jobs with the counter at zero, increment the counter to 1, and set a started time as it attempts to process the verification job. The 2nd verifier could likewise look for verification jobs in the queue at counter zero, increment them to 1, and set a started time as it tries to process them. If there are no jobs at counter zero, the verifiers could look for jobs with counter==1 where some amount of time (say 30 seconds) has passed since started, and then attempt to update the counter, process, and delete those jobs.

Once a job has been attempted 5 times and the counter has been incremented to 5, an error message could be saved and the stalling/error-causing job could be deleted. This same approach might scale to 4, 8, or 16 verifiers, since verifiers would only try to process counter==0 tasks and then counter==1 tasks that have been idle for some amount of time.
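
A sketch of how a verifier might run that claim protocol over REST, given the job object and security rule above; a rejected write (non-2xx response) means another verifier won the race. The queue path is hypothetical:

```python
import time
import requests

JOBS = "https://singpathfire.firebaseio.com/queue/jobs"  # hypothetical path
AUTH = {"auth": "<FIREBASE_SECRET>"}

def try_claim(job_id, job):
    """Claim a job by bumping update_counter by exactly 1 via a full PUT.

    The .validate rule only accepts newData.val() === data.val() + 1, so if
    another verifier already claimed this job, Firebase rejects the write.
    """
    claimed = dict(job, update_counter=job["update_counter"] + 1,
                   started=int(time.time() * 1000))
    resp = requests.put("%s/%s.json" % (JOBS, job_id),
                        params=AUTH, json=claimed)
    return resp.ok  # non-2xx: we lost the race, move on

def next_job(stale_after_ms=30 * 1000, max_attempts=5):
    """Prefer unclaimed jobs (counter 0), then stalled claims (counter 1+)."""
    jobs = requests.get(JOBS + ".json", params=AUTH).json() or {}
    now = int(time.time() * 1000)
    for job_id, job in jobs.items():
        if job.get("update_counter", 0) == 0 and try_claim(job_id, job):
            return job_id, job
    for job_id, job in jobs.items():
        counter = job.get("update_counter", 0)
        stalled = now - (job.get("started") or 0) > stale_after_ms
        if 0 < counter < max_attempts and stalled and try_claim(job_id, job):
            return job_id, job
    return None
```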

We could automatically launch additional verifiers once the average job processing time increases beyond some limit, and shut down the additional verifiers once processing time falls back below some threshold. We can start by just figuring out how and when to go from 0 to 1, 1 to 2, 2 to 1, and 1 to 0 verifiers.
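
As a starting point, that scaling decision could be a simple hysteresis rule over the measured average job time; the thresholds below are placeholders to tune:

```python
SCALE_UP_SECS = 60    # average job time that justifies one more verifier
SCALE_DOWN_SECS = 15  # average job time that lets us drop one
MAX_VERIFIERS = 2     # start small: reason only about 0, 1 and 2

def desired_verifiers(current, avg_job_secs, queue_empty):
    """Hysteresis: scale up above one threshold, down below a lower one."""
    if queue_empty:
        return 0
    if avg_job_secs > SCALE_UP_SECS and current < MAX_VERIFIERS:
        return current + 1
    if avg_job_secs < SCALE_DOWN_SECS and current > 1:
        return current - 1
    return max(current, 1)  # keep at least one verifier while work remains
```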