Open TroySigX opened 1 year ago
Not really. For example, a problem may ask to print the shortest path from one node to another. Since there might be more than one shortest path, the sample output only prints one of them. Can CompetiTest check such multiple-answer problems?
What if `output_compare_method` accepted three arguments: output, expected output, and input? Then you could write a function to check the correctness of your solution.
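A sketch of what this proposed signature could look like (hypothetical: the plugin's current `output_compare_method` does not receive the input, and `my_checker` is a made-up user function):

```lua
-- HYPOTHETICAL three-argument variant of output_compare_method.
-- The extra `input` argument would let a custom checker verify
-- properties of the answer instead of comparing strings literally.
output_compare_method = function(output, expected_output, input)
  -- e.g. parse `input`, then check that `output` describes a valid
  -- path whose length equals the known shortest distance
  return my_checker(input, output) -- my_checker: user-written, not part of the plugin
end
```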
My concern is that programmers using this plugin usually compete in online contests like Codeforces, where writing an output-checking function is inconvenient because contests are very time-sensitive. Is there any solution?
One possible solution I can think of is to create a server that lets programmers check their output without actually submitting (for example, creating a dummy user on Codeforces and letting coders submit through that user, with the plugin checking only the sample test cases, not the hidden ones). This would make it easier to reuse the output-checking functions already written by the testers.
I could add a way to store more than one answer in a testcase, so that a testcase is correct when output matches one of the provided expected outputs. This can already be done with a workaround:
- insert a special character such as `|` between possible answers when writing the expected output
- assign a function to `output_compare_method` that splits `expected_output` into a list, using the special character as separator, and checks whether `output` belongs to the list

> One possible solution I can think of is to create a server for programmers to "not actually" submit to check their output (like creating a dummy user on Codeforces and let coders submit through that user, but the plugin should only check for the sample test cases, not the hidden test cases on Codeforces).
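The separator workaround might look like this in a competitest configuration (a sketch: `output_compare_method` as a two-argument custom function is the plugin's documented hook, but the `|` separator convention and the trimming details are my own choices):

```lua
require("competitest").setup({
  -- Expected outputs are written as "answer1|answer2|answer3";
  -- the testcase passes if the output matches any of them.
  output_compare_method = function(output, expected_output)
    for answer in string.gmatch(expected_output, "([^|]+)") do
      -- trim surrounding whitespace/newlines before comparing
      if vim.trim(output) == vim.trim(answer) then
        return true
      end
    end
    return false
  end,
})
```

Note that this only works when each alternative answer fits on the separator-delimited format; multi-line answers would need a different separator.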
This is unlikely, and probably dummy users aren't allowed on many competitive programming platforms.
> This is easier to utilize the output checking function that has been written before by testers.
A better way would be to use a checker function provided by testers, though I don't know if they're available during a contest.
I think it's nearly impossible to generate all answers to put into the checker during contest.
> like creating a dummy user on Codeforces and let coders submit through that user
Some platforms do this, such as vjudge, but it won't work for running contests: your code would first be submitted by the dummy user, and if you then submitted it again yourself you would be flagged as a cheater.
I don't think we can implement this feature in the way you want (automated); manual test code would have to be written. I also believe this feature is related to interactive problems and stress testing, so we could try to implement all three features with the same effort. @xeluxee what do you think?
This is the current script I use to test interactive problems. I also noticed that I can use it to stress-test problems and turn it into a more general script, abandoning the other bash script I used for stress testing.
Testing interactive problems is also a feature to be added. But what I meant was that in problems where more than one answer is valid (e.g. there might be multiple paths from source to sink with the shortest length), it is almost impossible to enumerate all the answers in the testing function, since their number might be very large.
> it is almost impossible to generate all the answers in the testing function since the number might be very large.
Agree. The only way (I think) to test such a problem is to write code that checks your output against the input, which is similar to stress testing and to testing interactive problems.
We need an interactor which will do the following:
Assign a function to `output_compare_method` (see configuration for further details). In this case it's likely you want to use local configuration.
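With local configuration the custom comparator applies only to the problems that need it. A sketch of what that could look like (assuming a per-directory `.competitest.lua` file that returns a table of options; the tolerance-based comparison is just an illustration, not part of the plugin):

```lua
-- .competitest.lua placed in the problem's directory (local configuration)
return {
  output_compare_method = function(output, expected_output)
    -- Illustration: accept any answer whose numeric value matches
    -- within 1e-6, a common pattern for floating-point problems.
    local a, b = tonumber(output), tonumber(expected_output)
    if a and b then
      return math.abs(a - b) <= 1e-6
    end
    -- fall back to exact comparison for non-numeric outputs
    return output == expected_output
  end,
}
```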