lcompilers / lpython

Python compiler
https://lpython.org/

test random failed #2542

Closed certik closed 7 months ago

certik commented 7 months ago
167/325 Test #174: test_random_02 ...................***Failed    0.12 sec
0.7547386333122814 0.7539588431983508
Traceback (most recent call last):
  File "/home/runner/work/lpython/lpython/integration_tests/test_random_02.py", line 11, in <module>
    test_seed()
  File "/home/runner/work/lpython/lpython/integration_tests/test_random_02.py", line 9, in test_seed
    assert abs(t1 - t2) > 1e-3
           ^^^^^^^^^^^^^^^^^^^
AssertionError

https://github.com/lcompilers/lpython/actions/runs/7939986580/job/21680720142?pr=2540

certik commented 7 months ago

The test is flaky: https://github.com/lcompilers/lpython/blob/496f29efc65cddd0f47721ebfed356f86b5f2389/integration_tests/test_random_02.py#L9, one can't test random numbers like that.

I thought we already tackled this issue with @Shaikh-Ubaid in LFortran. One must test random numbers using tests designed for random numbers: tests that pass with very high probability (= every time in practice). The above test has a failure probability high enough that it can actually trigger a failure on CI. So it's a bad test. We need to improve it.
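A minimal sketch of what such a high-probability test could look like (this is an illustration, not the actual LPython or LFortran test): instead of asserting that two individual draws differ, draw many samples and check a statistic whose failure probability is astronomically small.

```python
# Hypothetical example of a robust random-number test: the sample mean
# of many Uniform(0, 1) draws concentrates tightly around 0.5, so a
# generous tolerance makes spurious CI failures essentially impossible.
import random

def test_random_mean():
    n = 100_000
    samples = [random.random() for _ in range(n)]
    mean = sum(samples) / n
    # The sample mean has standard deviation sqrt(1/(12*n)) ~= 0.0009,
    # so a 0.02 tolerance is more than 20 sigma away from the expected
    # value; this assertion passes every time in practice.
    assert abs(mean - 0.5) < 0.02

test_random_mean()
print("ok")
```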

Shaikh-Ubaid commented 7 months ago

A good test exists in LFortran. The test in LPython was more for completeness, or as a measure of extra safety.

Since tests for this already exist in LFortran, do we still need to add a similar test for LPython?

certik commented 7 months ago

Just test it in some way that is robust in LPython as well, to ensure that everything works.

syheliel commented 7 months ago

I have added a PR for it. The new test case aims to check whether the distribution is uniform. I think it is a more valuable test than the original version.
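A uniformity check along these lines could look like the following sketch (hedged: the actual test case in the PR may differ). It buckets samples into bins and verifies each bin's count is close to the expected value, with a tolerance wide enough that false failures are vanishingly rare.

```python
# Illustrative uniformity test: bucket n Uniform(0, 1) samples into k
# bins and check each count against the expected n/k.
import random

def test_uniform_distribution():
    n, k = 100_000, 10
    counts = [0] * k
    for _ in range(n):
        # min(...) guards against the (theoretical) edge case of 1.0.
        counts[min(int(random.random() * k), k - 1)] += 1
    expected = n / k
    # Each bin count has standard deviation ~sqrt(n * 0.1 * 0.9) ~= 95,
    # so a 10% tolerance (1000) is more than 10 sigma; failures are
    # vanishingly rare while real non-uniformity is still caught.
    for c in counts:
        assert abs(c - expected) < 0.10 * expected

test_uniform_distribution()
print("distribution looks uniform")
```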

certik commented 7 months ago

I think this is fixed now.