AuburnACM / auacm

The Auburn ACM Website
Apache License 2.0

Problem timing #100

Closed WilliamHester closed 8 years ago

WilliamHester commented 8 years ago

After months of waiting and doing everything but Python development, it's finally here.

It should be good to go, but feel free to make any comments you have about it. Be sure to try it out to see what you think.

One thing this branch (sadly) does not do is any sort of timing of the different programming languages to derive real multipliers. I think that would certainly be a worthwhile project, though.

Fixes #15

WilliamHester commented 8 years ago

One thing we might want to add to this is some sort of status readout. Right now, it's silent until it's done timing all of the problems. I could add that as either default or via a command-line argument.

BrandonLMorris commented 8 years ago

Yeah, we should probably log it somewhere or something. I don't guess we have any logging, do we?

WilliamHester commented 8 years ago

I added logging, so we should be just about good to go.

BrandonLMorris commented 8 years ago

As for the real multipliers, I don't think we were ever planning on generating them directly in the project. I was under the impression that we would guess until we got around to empirically testing them on problems with different complexities. That is to say, the multipliers would be hard-coded and not generated by AUACM.

WilliamHester commented 8 years ago

I would like to see some performance stats on multiple machines of Java vs Python vs C++, etc. If it happens that the numbers are about the same, then it would make sense to just hardcode the values in, but if we get varying results, then it would make more sense to actually generate those as well. It would be a larger undertaking for a different PR though.

BrandonLMorris commented 8 years ago

Why would multiple machines make a difference? They're all going to be judged on the same computer.

I guess I figure that language discrepancy << algorithmic discrepancy. So the multiplier need not be precise, but reasonable. If it becomes an issue, we can handle it then.
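The hard-coded multiplier idea discussed here could be sketched as follows. The multiplier values, dictionary, and function name are all hypothetical illustrations, not actual AUACM code or measured numbers:

```python
# Hypothetical sketch of hard-coded per-language time-limit multipliers.
# The values below are illustrative guesses, not measurements from AUACM.
LANGUAGE_MULTIPLIERS = {
    'c++': 1.0,     # treated as the baseline
    'java': 2.0,    # JVM startup and runtime overhead
    'python': 3.0,  # interpreted, typically the slowest of the three
}

def time_limit(base_seconds, language):
    """Scale a problem's base time limit by its language multiplier."""
    return base_seconds * LANGUAGE_MULTIPLIERS.get(language, 1.0)

print(time_limit(2.0, 'python'))  # 6.0
```

Since the multiplier "need not be precise, but reasonable," rough constants like these would only matter at the margins relative to algorithmic differences.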

WilliamHester commented 8 years ago

As for outputting SQL, that's not necessary. If you check out the problem_timer.py file, you can see that if it finds a correct solution with a time greater than that of the previous solution, it updates the problem in the database and commits the change. The logging is purely so that someone can see what's going on as it happens.
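The update behavior described above, commit a new time limit whenever a correct solution runs slower than the stored one, might look roughly like this. The function, model fields, and session object are assumptions for illustration, not the actual problem_timer.py code:

```python
# Hedged sketch of "update the time limit if a slower correct solution
# is found". `problem` and `db` stand in for the real SQLAlchemy
# model/session used in auacm; names here are hypothetical.
import logging

logger = logging.getLogger(__name__)

def record_solution_time(problem, elapsed_seconds, db):
    """If a correct solution took longer than the stored limit, update it."""
    if elapsed_seconds > problem.time_limit:
        # Log so someone watching can see progress as it happens.
        logger.info('Problem %s time limit: %s -> %s',
                    problem.id, problem.time_limit, elapsed_seconds)
        problem.time_limit = elapsed_seconds
        db.session.commit()
```

Because the commit happens inline, no separate SQL output step is needed; the log line exists only for visibility.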

As for the error, that's odd, but it shouldn't be caused by this PR. I opened the JS debugger and noticed that it's actually receiving the "good" message from the backend, it's just not doing anything with it. I'll look more into that tomorrow/this weekend to see if I can find the source of that issue.

WilliamHester commented 8 years ago

This branch does not kick off the problem timing from problem creation. Since we probably want to do more than just kick it off (we probably want to verify that the submitted judge solution works), I think I will leave that out of this update. As it stands, this is just a utility to generate all of the problems' times, though it is not far off from being able to integrate with problem uploading. That's another good reason to make the update_status function a callback rather than binding it to the judge itself: you could, for example, call back with whether or not a judge solution works rather than creating a submission for that user.
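The callback approach described above, passing update_status in rather than binding it to the judge, could be sketched like this. All names and the judging logic here are hypothetical, not the actual auacm API:

```python
# Sketch of decoupling status reporting from the judge via a callback.
# `run_judge`, its parameters, and the verdict strings are illustrative.
def run_judge(solution, on_status):
    """Judge a solution, reporting progress through the supplied callback."""
    on_status('running')
    verdict = 'good' if solution.get('correct') else 'wrong answer'
    on_status(verdict)
    return verdict

# The timer can log statuses, while a problem-upload flow could instead
# collect them to verify the judge solution without creating a submission:
events = []
run_judge({'correct': True}, events.append)
print(events)  # ['running', 'good']
```

With the callback injected, each caller decides what a status update means, which is exactly what integrating with problem upload would need.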

BrandonLMorris commented 8 years ago

This is getting stale. I can make the required changes myself if necessary. The site is 70% live, and this is crucial to being feature-complete.

WilliamHester commented 8 years ago

I think this PR should be considered done and merged in. After that, it would make sense to make another one with support for kicking off problem timing when a problem is created.

BrandonLMorris commented 8 years ago

You're right. I didn't realize/forgot that status updates had been corrected.