Describe the bug
The tests pass or fail nondeterministically due to randomness in some of the function calls.
To Reproduce
Steps to reproduce the behavior:
Run the fit unit tests multiple times in a row: tests/unit_tests/lcls_tools/common/data_analysis/test_fit_gauss.py
Example errors shown below.
Expected behavior
The test should consistently either pass or fail, depending only on code edits. There should be no random element that lets the test alternate between passing and failing when run multiple times without code changes.
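A minimal sketch of one way to make such a test deterministic: seed the random number generator that produces the test data. The function and parameter names below are hypothetical illustrations, not the actual lcls-tools test code.

```python
import numpy as np

def make_noisy_gaussian(seed=42):
    # A seeded Generator yields identical "random" noise on every run,
    # so any RMSE threshold assertion behaves the same way each time.
    rng = np.random.default_rng(seed)
    x = np.linspace(-5, 5, 100)
    y = np.exp(-x ** 2 / 2) + rng.normal(0, 0.05, x.size)
    return x, y

x1, y1 = make_noisy_gaussian()
x2, y2 = make_noisy_gaussian()
assert np.array_equal(y1, y2)  # same seed, same data, every run
```

With seeded data, a failing assertion points at a real regression in the fitting code rather than an unlucky draw.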
Additional context
Skipped these tests for now until the fitting class is updated to prevent this.
Output of the fit test tests/unit_tests/lcls_tools/common/data_analysis/test_fit_gauss.py seems to vary with the number of runs. I'm looking into adjusting this test to be more consistent.
Output from running the test several times back to back:
(tools) PC101046:lcls-tools nneveu$ python -m unittest tests/unit_tests/lcls_tools/common/data_analysis/test_fit_gauss.py
/Users/nneveu/miniconda3/envs/tools/lib/python3.11/site-packages/scipy/optimize/_minpack_py.py:1010: OptimizeWarning: Covariance of the parameters could not be estimated
warnings.warn('Covariance of the parameters could not be estimated',
...
Traceback (most recent call last):
File "/Users/nneveu/github/lcls-tools/tests/unit_tests/lcls_tools/common/data_analysis/test_fit_gauss.py", line 37, in test_fit_tool_gaussian
self.assertLessEqual(val["rmse"], 0.4)
AssertionError: 0.8434927791146087 not less than or equal to 0.4
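The "Covariance of the parameters could not be estimated" warning above often indicates that `curve_fit` started from a poor initial guess and stalled in a flat region. A hedged sketch of one mitigation, seeding the noise and supplying a data-driven `p0` (the model and names here are illustrative, not the actual fitting class):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

rng = np.random.default_rng(0)  # seeded so the fit is reproducible
x = np.linspace(-5, 5, 200)
y = gaussian(x, 1.0, 0.0, 1.0) + rng.normal(0, 0.05, x.size)

# Data-driven initial guess: peak height, peak location, and a rough
# width keep the optimizer away from degenerate starting points.
p0 = [y.max(), x[np.argmax(y)], np.std(x) / 2]
popt, pcov = curve_fit(gaussian, x, y, p0=p0)

rmse = np.sqrt(np.mean((gaussian(x, *popt) - y) ** 2))
```

Together, a fixed seed and a sensible `p0` should make an RMSE bound like the 0.4 threshold above hold (or fail) identically on every run.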
Output from running the test several times back to back:
OK
OK
OK
OK
Ran 3 tests in 0.016s
FAILED (failures=1)
Originally posted by @nneveu in https://github.com/slaclab/lcls-tools/issues/126#issuecomment-1915818130