TrueDoctor opened this issue 4 years ago
Demanding that a bunch of numbers only differ by up to 1 ulp is a pretty harsh condition.
So the question is not whether the unit tests are failing because the result is wrong (although it might very well be wrong!). The question is whether the result is within the error bounds of the algorithm.
If you want to investigate the result, could you look at the raw output and also compare to what scipy's `expm` gives you?
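For context, a 1 ulp tolerance means the two values must be adjacent (or identical) in the floating-point representation. A minimal sketch of measuring that distance, assuming finite `f64` values of the same sign (the function name and the simplification are illustrative, not part of the crate):

```rust
/// Distance in units of least precision between two finite f64 values.
/// This simplified version assumes both values have the same sign.
fn ulp_distance(a: f64, b: f64) -> u64 {
    (a.to_bits() as i64).abs_diff(b.to_bits() as i64)
}

fn main() {
    let x = 1.0_f64;
    let y = f64::from_bits(x.to_bits() + 1); // next representable value above 1.0
    assert_eq!(ulp_distance(x, y), 1);
    println!("neighbouring f64s around 1.0 differ by {:e}", y - x);
}
```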
> Demanding that a bunch of numbers only differ by up to 1 ulp is a pretty harsh condition.
Both tests fail due to the assertion.
> So the question is not whether the unit tests are failing because the result is wrong (although it might very well be wrong!). The question is whether the result is within the error bounds of the algorithm.
When the assertion is disabled (and reasonable error bounds are chosen), both tests pass.
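For illustration, here is one way a looser check could look with the `approx` crate the tests already use. Comparing `b1` and `b2` directly via `max_relative`, rather than comparing `(b1 - b2).abs()` against `0.0` (where a relative tolerance degenerates into an absolute one), is a suggestion on my part, not what the branch currently does:

```rust
use approx::assert_relative_eq;

fn main() {
    let (b1, b2) = (100.000001_f64, 100.000002_f64);
    // Passes when |b1 - b2| <= max_relative * max(|b1|, |b2|),
    // i.e. the values agree to about one part in 10^6 here.
    assert_relative_eq!(b1, b2, max_relative = 1e-6);
}
```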
> If you want to investigate the result, could you look at the raw output and also compare to what scipy's `expm` gives you?
The results I've used are from Wolfram Alpha, but I could add some scipy tests as well. It might be worthwhile to automate test generation:
```rust
python_testing!(
    simple, f64, vec![1.0, 0.0, 1.0, 0.0],
    double, f64, vec![2.0, 0.0, 2.0, 0.0],
    random, f64, vec![1.02, -3.2, 4.2, 100.0]
);
```
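For readers unfamiliar with the macro, here is a rough sketch of what a test-generating macro of this shape could expand to; the body and the flattened 2x2 layout are my assumptions, not the actual implementation:

```rust
// Hypothetical expansion target for a macro like `python_testing!`.
macro_rules! python_testing {
    ($($name:ident, $t:ty, $input:expr),+ $(,)?) => {
        $(
            #[test]
            fn $name() {
                // Interpret the four values as a flattened 2x2 matrix.
                let a: Vec<$t> = $input;
                assert_eq!(a.len(), 4, "expected a flattened 2x2 matrix");
                // ...call expm on `a` here and compare the result against
                // a reference value computed externally (e.g. with SciPy).
            }
        )+
    };
}
```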
I have now added some complex tests on the `generic-scalar` branch.
Some of the tests are failing:
```
running 15 tests
test tests::complex_exp ... ok
test tests::complex_exp_py ... ok
test tests::complex_random_py ... FAILED
test tests::double_py_f32 ... FAILED
test tests::double_py ... ok
test tests::random_py ... FAILED
test tests::simple_py ... ok
test tests::verify_pade_13 ... ok
test tests::simple_py_f32 ... FAILED
test tests::verify_pade_3 ... ok
test tests::verify_pade_5 ... ok
test tests::verify_pade_7 ... ok
test tests::verify_pade_9 ... ok
test tests::exp_of_unit ... ok
test tests::exp_of_doubled_unit ... ok

failures:

---- tests::complex_random_py stdout ----
thread 'tests::complex_random_py' panicked at 'assert_relative_eq!((b1 - b2).abs(), 0.0, epsilon = 0.00000001)
left = 0.0008432883514622831
right = 0.0
', src/lib.rs:735:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

---- tests::double_py_f32 stdout ----
thread 'tests::double_py_f32' panicked at 'assert_relative_eq!((b1 - b2).abs(), 0.0, epsilon = 0.00000001)
left = 0.000011444092
right = 0.0
', src/lib.rs:735:5

---- tests::random_py stdout ----
thread 'tests::random_py' panicked at 'assert_relative_eq!((b1 - b2).abs(), 0.0, epsilon = 0.00000001)
left = 295408072890019770000000000000000000000.0
right = 0.0
', src/lib.rs:735:5

---- tests::simple_py_f32 stdout ----
thread 'tests::simple_py_f32' panicked at 'assert_relative_eq!((b1 - b2).abs(), 0.0, epsilon = 0.00000001)
left = 0.00000047683716
right = 0.0
', src/lib.rs:735:5

failures:
    tests::complex_random_py
    tests::double_py_f32
    tests::random_py
    tests::simple_py_f32
```
I don't have case-specific handling for f32 yet; I should use a different epsilon there. But `random_py`, for example, seems to be way off, so the assert might be necessary after all :sweat_smile: As I'm just a second-year computer science student, understanding the whole paper is a bit much, but it would be greatly appreciated if you have a bit of time/motivation to look into that.
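One way to avoid a hard-coded 1e-8 would be to derive the tolerance from the scalar type's machine epsilon. A minimal sketch assuming the `num-traits` crate; the x100 safety factor is an arbitrary placeholder, not a value justified by the algorithm's error analysis:

```rust
use num_traits::Float;

/// Tolerance scaled to the precision of the scalar type:
/// roughly 1.2e-5 for f32 and 2.2e-14 for f64.
fn tolerance<T: Float>() -> T {
    T::epsilon() * T::from(100.0).unwrap()
}

fn main() {
    println!("f32 tolerance: {:e}", tolerance::<f32>());
    println!("f64 tolerance: {:e}", tolerance::<f64>());
}
```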
I just added some random tests and `expm` is already failing:
It might be beneficial to extend the test coverage further; one idea is sketched below.
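One reference-free check that could extend coverage is the identity exp(A) * exp(-A) = I, which holds exactly because A and -A commute, so no externally computed values are needed. A sketch with a naive Taylor-series `expm` standing in for the crate's real implementation (everything below is illustrative, not the crate's code):

```rust
type M2 = [[f64; 2]; 2];

fn matmul(a: &M2, b: &M2) -> M2 {
    let mut c = [[0.0; 2]; 2];
    for i in 0..2 {
        for j in 0..2 {
            for k in 0..2 {
                c[i][j] += a[i][k] * b[k][j];
            }
        }
    }
    c
}

/// Naive truncated Taylor series sum_n A^n / n! -- adequate only for
/// matrices with small norm; a stand-in for the real expm.
fn naive_expm(a: &M2) -> M2 {
    let mut result = [[1.0, 0.0], [0.0, 1.0]]; // identity
    let mut term = [[1.0, 0.0], [0.0, 1.0]];
    for n in 1..30 {
        // term becomes A^n / n! after this iteration.
        term = matmul(&term, a);
        for row in term.iter_mut() {
            for x in row.iter_mut() {
                *x /= n as f64;
            }
        }
        for i in 0..2 {
            for j in 0..2 {
                result[i][j] += term[i][j];
            }
        }
    }
    result
}

fn main() {
    let a: M2 = [[1.02, -3.2], [4.2, 0.5]]; // modest entries so the series converges
    let neg_a: M2 = a.map(|row| row.map(|x| -x));
    let p = matmul(&naive_expm(&a), &naive_expm(&neg_a));
    // exp(A) * exp(-A) should be the identity up to roundoff and truncation.
    for i in 0..2 {
        for j in 0..2 {
            let expected = if i == j { 1.0 } else { 0.0 };
            assert!((p[i][j] - expected).abs() < 1e-9);
        }
    }
    println!("exp(A) * exp(-A) is the identity up to roundoff");
}
```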