datacamp / pythonwhat

Verify Python code submissions and auto-generate meaningful feedback messages.
http://pythonwhat.readthedocs.io/
GNU Affero General Public License v3.0

Problem with SCT & numpy.random #118

Closed: hugobowne closed this issue 7 years ago

hugobowne commented 8 years ago

In the following, the solution code is accepted the first time I hit submit, but not the second.

E.g., if I write an incorrect solution and then edit it into the correct one, or even if I just submit the solution code twice.

The message thrown is "are you sure you assigned the correct value to `t1`?", so it's something to do with how the random numbers are generated.

*** =pre_exercise_code

import numpy as np

np.random.seed(42)

*** =sample_code

def successive_poisson(tau1, tau2, size=1):
    # Draw samples out of the first exponential distribution
    t1 = np.random.exponential(tau1, size=size)

    # Draw samples out of the second exponential distribution
    t2 = np.random.exponential(tau2, size=size)

    return t1 + t2

*** =solution

def successive_poisson(tau1, tau2, size=1):
    # Draw samples out of the first exponential distribution
    t1 = np.random.exponential(tau1, size=size)

    # Draw samples out of the second exponential distribution
    t2 = np.random.exponential(tau2, size=size)

    return t1 + t2

*** =sct


def inner_test():
    import numpy as np
    #test_function("numpy.random.exponential", index=1, do_eval=False)
    #test_function("numpy.random.exponential", index=2, do_eval=False)
    test_object_after_expression(
        "t1",
        context_vals=[2, 3, 1],
        undefined_msg="have you defined `t1`?",
        incorrect_msg="are you sure you assigned the correct value to `t1`?")

# TODO: `results` arg!
import numpy as np
# Test: successive_poisson() definition
test_function_definition("successive_poisson", body=inner_test, #results=[[np.random.rand(5)]],
    wrong_result_msg="Are you returning the correct values in `successive_poisson`?"
)

success_msg("Great work! We'll put the function to use in the next exercise.")

Note that the issue does NOT occur in the following case (without user-defined functions):

*** =pre_exercise_code

import numpy as np
np.random.seed(42)

*** =sample_code

x = np.random.rand(1000)

*** =solution

x = np.random.rand(1000)

*** =sct


test_object("x")

success_msg("Great work! We'll put the function to use in the next exercise.")
machow commented 8 years ago

Thanks for pointing this out; it seems very important to sort out before things like the MCMC course. For now, a quick fix is modifying test_object_after_expression to reset the seed via the pre_code argument (I tested it quickly in the Teach editor):

*** =sct


def inner_test():
    import numpy as np
    #test_function("numpy.random.exponential", index=1, do_eval=False)
    #test_function("numpy.random.exponential", index=2, do_eval=False)
    test_object_after_expression(
        "t1",
        context_vals=[2, 3, 1],
        undefined_msg="have you defined `t1`?",
        incorrect_msg="are you sure you assigned the correct value to `t1`?",
        pre_code="np.random.seed(42)")
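
For intuition, here is a minimal sketch in plain numpy (outside pythonwhat, my own illustration) of why the fix works: reseeding immediately before each evaluation pins the exponential draw to the same value, no matter how many times the expression is re-run.

import numpy as np

# Without reseeding, successive draws advance the generator and differ.
np.random.seed(42)
first = np.random.exponential(2, size=1)
second = np.random.exponential(2, size=1)
assert not np.allclose(first, second)

# Reseeding before each draw (what pre_code does here) pins the value.
np.random.seed(42)
a = np.random.exponential(2, size=1)
np.random.seed(42)
b = np.random.exponential(2, size=1)
assert np.allclose(a, b)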
hugobowne commented 8 years ago

It works! Great! pre_code is not discussed in the wiki: https://github.com/datacamp/pythonwhat/wiki/test_object_after_expression

You could include a simple version of this example.

machow commented 8 years ago

Ah, I looked at one of the other "expression" tests to find it. I think the main problem here is that there are two sources of API documentation: the source code and the wiki docs. For example, pre_code was well documented in the source, but not in the wiki.

I've added an explanation of pre_code to the wiki, but the bigger issue will be resolved by generating API docs from the source code, and then using the wiki for examples, tutorials, and FAQs (issue #82).

Since the teach editor already displays a pop-up with function signatures, would it be crazy to include the function docstring below? @vincentvankrunkelsven

machow commented 8 years ago

Oops, reopening because it's not clear the original SCTs should have ever failed in the first place!

machow commented 7 years ago

Just paired with Filip on this. The Python backend doesn't re-run the pre-exercise code in the solution process (since it might load a bunch of datasets, do a fair amount of computation, etc.). Because re-submitting does re-run the pre-exercise code for the submission, the two generators will get out of sync whenever an SCT generates random values.

The pre_code solution is a quick fix. It may be useful to let instructors tell pythonwhat to set the seed before every SCT.
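
To make the failure mode concrete, here is a hedged sketch that models the two processes with plain numpy (the backend behavior is as described above; the rest is my own illustration): the solution-side generator is seeded once and keeps advancing across submissions, while the submission side is reseeded on every submit.

import numpy as np

# Solution process: pre-exercise code ran once, so each SCT evaluation
# draws from the same, ever-advancing generator.
np.random.seed(42)
solution_draw_1 = np.random.exponential(2, size=1)  # evaluated on submit 1
solution_draw_2 = np.random.exponential(2, size=1)  # evaluated on submit 2

# Submission process: pre-exercise code reruns on every submit, so the
# generator is reseeded and always yields the first value.
np.random.seed(42)
submission_draw = np.random.exponential(2, size=1)

assert np.allclose(solution_draw_1, submission_draw)      # submit 1 passes
assert not np.allclose(solution_draw_2, submission_draw)  # submit 2 fails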

hugobowne commented 7 years ago

@machow Very interesting.

I don't explicitly generate random values in the SCT. Does test_function_definition() implicitly do this?

Regarding:

The python backend doesn't re-run the pre-exercise-code in the solution process

I can't see why I don't experience the same problem with this example (which works fine):

*** =pre_exercise_code

import numpy as np
np.random.seed(42)

*** =sample_code

x = np.random.rand(1000)

*** =solution

x = np.random.rand(1000)

*** =sct


test_object("x")

success_msg("Great work! We'll put the function to use in the next exercise.")
machow commented 7 years ago

Ah, good point! I left out a critical detail: AFAIK the solution code is only run on the initial submission, so resubmitting only re-runs the submission code and the SCTs. That SCT works because it doesn't generate anything random; it only compares objects that already exist in the solution and submission environments.

The problem with test_function_definition in the initial post is that the SCT itself is generating something random in the solution environment each time, but the seed is never reset.
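
The distinction can be sketched in plain numpy (a hedged illustration of the behavior described above, not of pythonwhat internals): a state comparison like test_object never advances either generator, while a check that re-executes an expression keeps drawing fresh, diverging values from the un-reseeded solution generator.

import numpy as np

# State-only check (like test_object): both processes computed x once
# under the seed, and the SCT merely compares the stored values.
np.random.seed(42)
x_solution = np.random.rand(1000)
np.random.seed(42)
x_submission = np.random.rand(1000)
assert np.allclose(x_solution, x_submission)  # consistent on every submit

# Re-executing check (like test_object_after_expression without pre_code):
# each evaluation draws new values from a generator that is never reseeded.
t1_first_eval = np.random.exponential(2, size=1)
t1_second_eval = np.random.exponential(2, size=1)
assert not np.allclose(t1_first_eval, t1_second_eval)  # values diverge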


machow commented 7 years ago

Documented: http://pythonwhat.readthedocs.io/en/latest/expression_tests.html#pre-code-fixing-mutations