Closed hugobowne closed 7 years ago
Thanks for pointing this out, since it seems very important to sort out before things like the MCMC course. For now, a quick fix is modifying `test_object_after_expression` to reset the seed via the `pre_code` argument (tested it quickly in the teach editor):
*** =sct

```
def inner_test():
    import numpy as np
    # test_function("numpy.random.exponential", index=1, do_eval=False)
    # test_function("numpy.random.exponential", index=2, do_eval=False)
    test_object_after_expression(
        "t1",
        context_vals=[2, 3, 1],
        undefined_msg="have you defined `t1`?",
        incorrect_msg="are you sure you assigned the correct value to `t1`?",
        pre_code="np.random.seed(42)")
```
It works, great! One note: `pre_code` is not discussed in the wiki (https://github.com/datacamp/pythonwhat/wiki/test_object_after_expression); the page could include a simple version of this example.
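As a sanity check outside pythonwhat, here's a plain-numpy sketch of why the `pre_code` seed reset works: re-seeding immediately before each evaluation makes the random draws identical, so comparing the resulting values is stable. The `draw` helper here is just a stand-in for the student's random expression, not anything from the course.

```python
import numpy as np

def draw():
    # Stand-in for the student's random expression, e.g. the exponential
    # draws behind t1 in the original SCT.
    return np.random.exponential(scale=2, size=3)

np.random.seed(42)   # what pre_code="np.random.seed(42)" effectively injects
a = draw()

np.random.seed(42)   # reset again before the second evaluation
b = draw()

assert (a == b).all()  # identical draws, so the value comparison is stable
```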
Ah, I looked at one of the other "expression" tests to find it. I think the main problem here is that there are two sources of API documentation: the source code and the wiki docs. For example, `pre_code` was well documented in the source, but not in the wiki.
I've added an explanation of `pre_code` to the wiki, but the bigger issue will be resolved by generating API docs from the source code, and then using the wiki for examples, tutorials, and FAQs (issue #82).
Since the teach editor already displays a pop-up with function signatures, would it be crazy to include the function docstring below? @vincentvankrunkelsven
Oops, reopening because it's not clear the original SCTs should have ever failed in the first place!
Just paired w/Filip on it. The python backend doesn't re-run the pre-exercise-code in the solution process (since it might load in a bunch of datasets, do a fair amount of computation, etc.). Because re-submitting does re-run the pre-exercise-code for the submission, their generators will become out of sync if there are SCTs that generate random values.
The `pre_code` solution is a quick fix. It may be useful to let instructors tell pythonwhat to set the seed for all SCTs, etc.
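pythonwhat doesn't offer such a seed-everything switch as of this thread; purely as a hypothetical sketch, an instructor-side wrapper could default `pre_code` to a seed reset for every SCT call. Both `with_seed` and the dummy SCT below are invented for illustration and are not part of the pythonwhat API.

```python
# Hypothetical helper, not part of pythonwhat: wrap any SCT call so the
# evaluated expression always runs after a seed reset via pre_code.
def with_seed(seed, sct, *args, **kwargs):
    kwargs.setdefault("pre_code", "np.random.seed(%d)" % seed)
    return sct(*args, **kwargs)

# Usage sketch with a dummy SCT standing in for test_object_after_expression:
def dummy_sct(name, pre_code=None):
    return (name, pre_code)

print(with_seed(42, dummy_sct, "t1"))  # ('t1', 'np.random.seed(42)')
```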
@machow v interesting.
I don't explicitly generate random values in the SCT. Does `test_function_definition()` implicitly do this?

Regarding:

> The python backend doesn't re-run the pre-exercise-code in the solution process
I can't see why I do not experience the same problem with this example (which works fine):
*** =pre_exercise_code

```
import numpy as np
np.random.seed(42)
```

*** =sample_code

```
x = np.random.rand(1000)
```

*** =solution

```
x = np.random.rand(1000)
```

*** =sct

```
test_object("x")
success_msg("Great work! We'll put the function to use in the next exercise.")
```
Ah, good point! I left out a critical detail: AFAIK it only runs the solution code on the initial submission, so resubmitting only reruns the submission code and SCTs. That SCT works because it doesn't generate anything random; it only checks something in the solution and submission environments.
The problem with test_function_definition in the initial post is that the SCT itself is generating something random in the solution environment each time, but the seed is never reset.
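To make the desync concrete, here's a minimal numpy sketch: two `RandomState` objects stand in for the solution and submission interpreters, both seeded identically at startup. One extra draw on the solution side is enough to knock the streams out of sync.

```python
import numpy as np

# Two independent generator states, mirroring the solution and submission
# processes after the pre-exercise code seeds both with 42.
solution_rng = np.random.RandomState(42)
submission_rng = np.random.RandomState(42)

# In sync: both processes draw the same first value.
assert solution_rng.rand() == submission_rng.rand()

# The SCT draws an extra random value in the solution process only...
_ = solution_rng.rand()

# ...so subsequent draws differ between the two processes.
assert solution_rng.rand() != submission_rng.rand()
```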
In the following, the solution code is accepted the first time I hit submit, but not the second, e.g. if I write an incorrect solution and then edit it to the correct solution, OR even if I just submit the solution code twice.

The message thrown is "are you sure you assigned the correct value to `t1`?", so it's something to do with how the random numbers are generated.

*** =pre_exercise_code

*** =sample_code

*** =solution

*** =sct
Note that the issue does NOT occur in the following case (without user-defined functions):

*** =pre_exercise_code

*** =sample_code

*** =solution

*** =sct