Closed claireratigan closed 4 months ago
just getting back to this -- I rebased my changes over the recent commits and am planning to address the comments, but I guess my first question is: is #13 going to obviate the need for this PR? or is this still useful?
Hey @claireratigan,
I would like to move forward with #13. In my opinion, this implementation is necessary for bs in Rust, and your approach is the one I would like to pursue in the next steps.
Adding multiple options for calculating values using f32/f64, and making the trait flexible enough to support U256, is essential.
Please allow some time for @hayden4r4 to review and accept it :D
Hey @claireratigan, we are almost there. If you are still interested in introducing multi-type support, we are open to that :)
The best solution would be generic functions, if that is possible of course.
@day01 I'm not sure I understood your last comment exactly, but I think if you look at my latest commit it contains what you're looking for :)
@claireratigan yes, it contains everything I was looking for :) Can you add some tests for the affected functions?
I think this is ready for another look
We still have an error in benches/implied_volatility.rs, at:
const INPUTS: Inputs = Inputs {
And that will be all :)
Now that the other PR is closed I want to make the lets_be_rational
functions generic as well, which is going to take a bit more work
Also, for what it's worth, after I rebased over the new changes my benchmark time increased by ~20%:
Running benches/implied_volatility.rs (target/release/deps/implied_volatility-12094a8984b9c1f1)
Gnuplot not found, using plotters backend
calc_rational_iv time: [310.61 ns 311.32 ns 312.21 ns]
change: [+21.417% +21.736% +22.030%] (p = 0.00 < 0.05)
Performance has regressed.
Templating the lets_be_rational functions improved the benchmarks, though not quite enough to overcome the total regression:
calc_rational_iv time: [267.38 ns 267.55 ns 267.82 ns]
change: [-13.822% -13.669% -13.508%] (p = 0.00 < 0.05)
Performance has improved.
Verified: the tests are correct, and f32 is correct.
@claireratigan Unfortunately I have to revert your solution to f64-only support. Based on the implementation of Let's Be Rational, we have a Taylor series and Newton-Raphson method adjusted to f64 precision (17th-order NR and 12th-order Taylor), and I forgot about it...
To prepare a correct generic Float operation we would have to change the underlying algorithm, for example to add f128 support. Of course, all your work adjusting to f64 will stay in the repo.
And of course, it was a bug in the lib that the precision was set for f32; all calculation was adjusted to f64.
I don't think it was correct to revert this PR. At the very least, I see no drawback to having it in the repository; it makes f64 computations work end-to-end and in my benchmarking it's at least as performant as the existing code.
Secondly, after reverting, the repository still has a lot of f32 junk floating around that needs fixing. I can make a new PR that just cuts all of that out and makes the repository entirely f64-based, but why do that when this PR solves that problem and is more flexible?
Also, just because Jackel targets f64 infrastructure doesn't mean it's wrong to provide this for f32, just that it's less efficient than it could be. Keeping the generic function signatures makes room for implementations that target f32 and f128 precision. Indeed, you could use something like num_traits::min_positive_value() to determine the precision you are trying to achieve from the float type if you wanted to tailor performance to the different types.
And just to expound on that: I think you could, for example, set the maximum number of iterations to 1 for f32, 2 for f64, and 4 for f128 in implied_volatility_from_a_transformed_rational_guess_with_limited_iterations, and that would probably suffice?
It's not that case :) It is here: https://github.com/hayden4r4/blackscholes-rust/blob/master/src/lets_be_rational/black.rs#L105 https://github.com/hayden4r4/blackscholes-rust/blob/master/src/lets_be_rational/black.rs#L52
And it is not only a matter of adding an implementation of the next NR algorithm and Taylor series. So it means changing the whole algorithm, based on https://sin.put.poznan.pl/files/download/37938
The problem lies in the whole algorithm: the correct algorithm is only for f64 (link above). Any other adjustment is like writing
result as f128
Actually... in the code we currently have result as f32
even though the whole calculation is done in f64.
We have to move all the code to f64, as you said. Any other adjustment is wrong for now, even the f32 we have at the moment. The idea we spoke about of a flexible (generic) solution is just wrong. All of it should be f64, and performance, as you said, will increase.
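A minimal standalone demonstration of the truncation described above (pi stands in for any full-precision f64 result):

```rust
// Computing in f64 but returning `result as f32` discards roughly half
// of the significant digits, even though the math itself ran in f64.
fn main() {
    let result: f64 = std::f64::consts::PI; // stand-in for an f64 computation
    let truncated = result as f32;          // the cast described above
    let round_trip = truncated as f64;
    println!("f64 value: {:.17}", result);
    println!("via f32:   {:.17}", round_trip);
    println!("abs error: {:e}", (result - round_trip).abs());
}
```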
I think it should be self-explanatory what is happening here, but let me know if you have any questions.
I made some design choices around templating in some places and using macros for the implementation in others, but I could go either way on which makes more sense.
This may break some public API (most notably, Inputs is now templated on the float type, which wasn't true before), but I expect most existing code would be able to infer the old f32 type without code changes.
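A minimal sketch of how a type-parameterized Inputs with a default parameter could keep old code compiling; the field names here are illustrative, not the crate's actual fields:

```rust
// Hypothetical sketch: defaulting the type parameter to f32 lets code that
// wrote `Inputs` (with no parameter) keep compiling unchanged. The real
// struct would also bound T (e.g. T: num_traits::Float) and carry more fields.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Inputs<T = f32> {
    s: T, // spot price
    k: T, // strike price
    t: T, // time to maturity in years
}

fn main() {
    let legacy: Inputs = Inputs { s: 100.0, k: 95.0, t: 0.5 };       // T = f32 by default
    let precise: Inputs<f64> = Inputs { s: 100.0, k: 95.0, t: 0.5 }; // explicit f64
    println!("{:?} {:?}", legacy, precise);
}
```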