Closed by ta-oliver 3 years ago
We can fix the random seeds and run them 5 times? That will give the same result each time but check a variety of scenarios.
The probability of these tests failing is around 0.1, i.e. roughly once in 10 runs, or maybe even less. So I'm not sure that running them 5 times will catch the bug that produces the failure; it would still come down to chance.
We can do more than 5 runs as long as the tests are fast.
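If we go the fixed-seed route, something like the following could work. This is only a minimal sketch assuming pytest and NumPy-generated simulated data; `simulate_prices` and `test_strategy_allocation` are hypothetical placeholders for the real data generator and test:

```python
import numpy as np
import pandas as pd
import pytest

SEEDS = [0, 1, 2, 3, 4]  # fixed seeds -> deterministic, repeatable runs

def simulate_prices(seed, periods=250):
    """Hypothetical stand-in for the simulated test data generator."""
    rng = np.random.default_rng(seed)
    prices = 100 + rng.normal(0, 1, periods).cumsum()
    return pd.DataFrame({"close": prices})

@pytest.mark.parametrize("seed", SEEDS)
def test_strategy_allocation(seed):
    df = simulate_prices(seed)
    # ... call the strategy / vortex_indicator on df here ...
    assert not df["close"].isna().any()
```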
What makes certain outcomes fail?
In my case, the "signal" column is filled with NaN values after "vortex_indicator" is called; the NaN values come from "trn":
trn = true_range.rolling(window).sum()
And when we compare the "signal" column to determine bearish and bullish movements, we end up comparing NaN values to a number:
bullish = df_with_signals["signal"] >= 0
bearish = df_with_signals["signal"] < 0
Since comparisons against NaN always evaluate to False, this results in different values in the "allocation" columns.
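To make the failure mode concrete, here is a small reproduction sketch (the values and column names are made up for illustration): the rolling sum leaves NaN in the first `window - 1` rows, and any comparison against NaN evaluates to False, so those rows count as neither bullish nor bearish.

```python
import numpy as np
import pandas as pd

true_range = pd.Series([1.0, 2.0, 1.5, 3.0, 2.5])
window = 3

# Rolling sum is NaN until a full window is available:
trn = true_range.rolling(window).sum()
# -> [NaN, NaN, 4.5, 6.5, 7.0]

# Comparisons with NaN are False on both sides:
signal = pd.Series([np.nan, np.nan, 0.2, -0.1, 0.3])
bullish = signal >= 0   # NaN rows -> False
bearish = signal < 0    # NaN rows -> False
print(bullish.iloc[0], bearish.iloc[0])  # False False
```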
Thanks, @NikolaR01.
Looks good. Thanks @bi-kash and @NikolaR01.
@ta-oliver The tests are failing randomly; the randomness comes from their dependency on randomly seeded simulated test data. Yeah, we have to investigate it. We need to save the data_frame that causes the test failures, for example as sketched below.
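One possible way to capture the failing scenario. This is a minimal sketch with a hypothetical helper and column name; the idea is simply to dump the simulated frame to disk whenever the assertion fails, so the exact case can be reloaded and replayed:

```python
import pandas as pd

def assert_allocations_valid(df_with_signals, dump_path="failed_case.csv"):
    try:
        assert not df_with_signals["allocation"].isna().any()
    except AssertionError:
        # Keep the exact simulated frame that triggered the failure.
        df_with_signals.to_csv(dump_path, index=False)
        raise
```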