Closed — safinaskar closed this issue 1 year ago
I can confirm this behavior:
```rust
use bigdecimal::{BigDecimal, One};
use std::str::FromStr;

fn main() {
    let mut a = BigDecimal::from_str("0.5").unwrap() + BigDecimal::from_str("0.5").unwrap();
    assert!(a.is_one());
    for i in 0..26 {
        assert!(a.is_one());
        let b = &a * &a;
        println!("{i}");
        a = b;
    }
}
```
26 iterations take 21 seconds:

```console
$ time cargo run --release
   Compiling bigd v0.1.0 (/home/wpwoodjr/MandelRust/bigd)
    Finished release [optimized] target(s) in 2.96s
     Running `target/release/bigd`
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25

real    0m21.003s
user    0m19.610s
sys     0m0.261s
```
27 iterations take 51 seconds:

```console
$ time cargo run --release
   Compiling bigd v0.1.0 (/home/wpwoodjr/MandelRust/bigd)
    Finished release [optimized] target(s) in 2.94s
     Running `target/release/bigd`
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26

real    0m51.559s
user    0m50.038s
sys     0m0.472s
```
The issue is that the current implementation does very simple multiplication:

(A × 10ᵃ) · (B × 10ᵇ) = (A·B) × 10ᵃ⁺ᵇ

After doing 0.5 + 0.5 = 5×10⁻¹ + 5×10⁻¹ = 10×10⁻¹, we get the issue where, in successive multiplications, A·B increases exponentially:

(10×10⁻¹)·(10×10⁻¹) = 100×10⁻²
(100×10⁻²)·(100×10⁻²) = 10000×10⁻⁴
(10000×10⁻⁴)·(10000×10⁻⁴) = 100000000×10⁻⁸

As you can see, after i squarings the stored value of "one" contains a 1 followed by 2ⁱ zeros.
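The doubling is easy to reproduce with a toy model. The sketch below uses a plain `(u128, u32)` pair as a stand-in for the real `(BigInt, scale)` representation (this is an illustration, not bigdecimal's actual types):

```rust
// Toy model of the naive rule (A × 10^-a)·(B × 10^-b) = (A·B) × 10^-(a+b),
// with a (digits, scale) tuple standing in for (BigInt, i64).
fn naive_mul(a: (u128, u32), b: (u128, u32)) -> (u128, u32) {
    (a.0 * b.0, a.1 + b.1)
}

fn main() {
    // 0.5 + 0.5 stored unnormalized as 10 × 10^-1
    let mut one = (10u128, 1u32);
    for i in 0..5 {
        // the mantissa roughly doubles in digit count every step
        println!("step {i}: {} x 10^-{}", one.0, one.1);
        one = naive_mul(one, one);
    }
}
```

After i squarings the mantissa is 1 followed by 2ⁱ zeros, which is why the runtime doubles per iteration in the repro above.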
The new implementation that's being worked on will not be susceptible to this.
I think the precaution the user is expected to take is to manually set the scale:
```rust
loop {
    let prod = &x * &x;
    x = prod.with_scale(100); // keep 100 digits after the decimal point
}
```
In the meantime, one optimization I can make is to add a check: if either `self.is_one()` or `rhs.is_one()` is true, we return the other operand.
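That short-circuit can be sketched with a `(digits, scale)` tuple standing in for `BigDecimal` (the function names here are illustrative, not bigdecimal's API):

```rust
// Sketch of the proposed is_one short-circuit. If either operand equals
// one, the expensive multiplication is skipped entirely and the other
// operand is returned unchanged.
fn is_one(x: &(u128, u32)) -> bool {
    // 1 × 10^0, 10 × 10^-1, 100 × 10^-2, ... all represent one
    x.0 == 10u128.pow(x.1)
}

fn mul_with_shortcut(a: (u128, u32), b: (u128, u32)) -> (u128, u32) {
    if is_one(&a) {
        return b;
    }
    if is_one(&b) {
        return a;
    }
    (a.0 * b.0, a.1 + b.1)
}

fn main() {
    // squaring an (unnormalized) one no longer grows the mantissa
    let one = (10u128, 1u32);
    assert_eq!(mul_with_shortcut(one, one), one);
    println!("ok");
}
```

With this check, the pathological `1 * 1` loop from the report never grows, though products of non-one values would still accumulate digits.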
More generally, we could always attempt to remove trailing zeros after any math operation, but we are limited to the interface of `num::BigInt`. It has a `trailing_zeros` method, but that counts binary digits, not decimal digits. `Rem` is implemented, though, so we can compute `product % 1000000`, count the trailing zeros in that small value, and repeat until there are none left.
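A rough sketch of that chunked approach, using `i128` as a stand-in for `BigInt` (`strip_trailing_zeros` is a hypothetical helper, not an existing bigdecimal function):

```rust
// Remove decimal trailing zeros by inspecting the value mod 10^6.
// With a real BigInt, the `% 1_000_000` step is what keeps this cheap:
// only the small remainder needs to be examined per round.
fn strip_trailing_zeros(mut digits: i128, mut scale: i64) -> (i128, i64) {
    if digits == 0 {
        return (0, 0);
    }
    loop {
        let chunk = digits % 1_000_000;
        if chunk == 0 {
            // the whole chunk is zeros: drop six digits at once
            digits /= 1_000_000;
            scale -= 6;
        } else {
            // strip any remaining zeros one digit at a time, then stop
            let mut c = chunk;
            while c % 10 == 0 {
                digits /= 10;
                scale -= 1;
                c /= 10;
            }
            return (digits, scale);
        }
    }
}

fn main() {
    // 100000000 × 10^-8 normalizes back to 1 × 10^0
    assert_eq!(strip_trailing_zeros(100_000_000, 8), (1, 0));
    println!("ok");
}
```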
Sorry the new version is taking much longer than I anticipated.
Oh yeah. There is a `normalized` function:
```rust
use bigdecimal::{BigDecimal, One};
use std::str::FromStr;

fn main() {
    let mut a = BigDecimal::from_str("0.5").unwrap() + BigDecimal::from_str("0.5").unwrap();
    assert!(a.is_one());
    for i in 0..26 {
        assert!(a.is_one());
        let b = &a * &a;
        println!("{i}");
        a = b.normalized();
    }
}
```
That'll work as expected.
I think the idea was not to do extra work when we don't have to. So multiplication isn't going to remove trailing zeros by default, as that could actually hurt performance if the scales were already aligned and the user knew the digits would be used to add/subtract values.
Obviously, that's asking too much knowledge of the library/interface from a standard user. I wonder if there's a way to solve this with the type system...
Is there any kind of "assign" function, like in rug, which would avoid allocations?
Maybe you can help me... the reason I came over here to comment: this code is absurdly slow, even with the `normalized()` function applied:
```rust
use bigdecimal::{BigDecimal, FromPrimitive, Zero};
use std::str::FromStr;

fn main() {
    let xmin = BigDecimal::from_str("-1.86046051629867317223033980199826782").unwrap();
    let xmax = BigDecimal::from_str("-1.86046051629867317223033978309405982").unwrap();
    let ymin = BigDecimal::from_str("0.00012711744988464921042888420168070").unwrap();
    let ymax = BigDecimal::from_str("0.00012711744988464921042889837983670").unwrap();
    let columns = 800;
    let rows = 600;
    let max_iterations = 15;
    let eight = &BigDecimal::from_i32(8).unwrap();
    let rows_decimal = BigDecimal::from_usize(rows).unwrap();
    let columns_decimal = BigDecimal::from_usize(columns).unwrap();
    let dy_neg = (&ymin - &ymax) / &rows_decimal;
    let mut y_vals = vec![BigDecimal::zero(); rows];
    y_vals[0] = ymax;
    for i in 1..rows {
        y_vals[i] = y_vals[i - 1].clone();
        y_vals[i] += &dy_neg;
    }
    let dx = (&xmax - &xmin) / &columns_decimal;
    let mut iteration_counts = vec![vec![0; columns]; rows];
    for i in 0..rows {
        let mut x_val = xmin.clone();
        for j in 0..columns {
            iteration_counts[i][j] = count_iterations(&x_val, &y_vals[i], max_iterations, eight);
            x_val += &dx;
            println!("{:4} ", iteration_counts[i][j]);
        }
        println!();
    }
}

pub fn count_iterations(x: &BigDecimal, y: &BigDecimal, max_iterations: i32, eight: &BigDecimal) -> i32 {
    let mut count = 0;
    let mut zx = x.normalized();
    let mut zy = y.normalized();
    while count < max_iterations {
        let zxsq = &zx * &zx;
        let zysq = &zy * &zy;
        if &(&zxsq + &zysq) >= eight {
            break;
        }
        let new_zx = &zxsq - &zysq + x;
        let zxzy = &zx * &zy;
        zy = &zxzy + &zxzy + y;
        zy = zy.normalized();
        zx = new_zx.normalized();
        count += 1;
    }
    if count < max_iterations {
        count
    } else {
        -1
    }
}
```
Using rug, this is very fast. Any help appreciated!
`normalized` will only help with removing trailing zeros. Using `zx.with_scale()` or `zx.round()` would keep things manageable.
I see that after running three iterations of count_iterations things get ridiculous:
[examples/grid.rs:53] &zxsq = BigDecimal("3.4613133327063255443352353815722045816611958409944513040035462804475524")
[examples/grid.rs:53] &zxsq = BigDecimal("2.56272968806444454627049644186722661531919876457665443954579667467451951588337068024526573246095339057028341773028316477792539953632658277376")
[examples/grid.rs:53] &zxsq = BigDecimal("0.4931818215868559286582816047212496774713011241642566851138704778286142033069463413446902428418186301160113491353689686311940726508120744104032098931748511424643702460201334005887263580534430240844875754451068376581797469950604929061850286842185036676587467070256089308108836376576")
[examples/grid.rs:53] &zxsq = BigDecimal("1.86945365676977117898512550474963215293367304024686714611377019288518384217377865093122365405503823932834774945590482959987961919972324712766486187139661630494174983880622227409996489585105624493076859560717035578916592931604915634217579727350609660677748496177176382557017044084283889824858094114194347269981899803536073014308740921350140061628337670986734254794280158659109822510618951382010570826107542552019923223402008996728593311342959358807838407520591506772522600822780182132979448013407419041428510600136358302181276730642438289845705577054233560088576")
[examples/grid.rs:53] &zxsq = BigDecimal("0.0000808484870181644157073008612917293845070161198111348197589589704077845098929703042292776699788874985610927999880740549534729213617329956450823025437435781697161732371128446128392603015421836028984300111472970002987918821951694278991439924532465423851524063805676246396875869960976191796010854125639268132769628730644006647179174298233766091905404439404813439235389904121384345643970696507885845736818384156003107436435116438177733016626847356373186854929779586485799068753108071849231912178224759252829874960021836562874237445510913704251146682274598532638761296606935665416101481231214659066087630978315641773892226636956036728781878855357993860770168496977326683809208800760844177026156976444781140050289514016519954654822895325948165362470769143816973002526310779600801979727956129887872880173648626597736981604059882507694379935839729864262008612627069971277915187780186386041557651782483915181286947462693023833500977365055350702275954445276187548931030966677127554785449768201169991050137394349384748410961460990123186609918502059262310715939788896857828144656951783758244605823998549739767847189234757429886976")
Changing to:

```rust
let zxsq = (&zx * &zx).round(70);
let zysq = (&zy * &zy).round(70);
dbg!(&zxsq);
```
yields more manageable output:

```
[examples/grid.rs:53] &zxsq = BigDecimal("3.4613133327063255443352353815722045816611958409944513040035462804475524")
[examples/grid.rs:53] &zxsq = BigDecimal("2.5627296880644445462704964418672266153191987645766544395457966746745195")
[examples/grid.rs:53] &zxsq = BigDecimal("0.4931818215868559286582816047212496774713011241642566851138704778286141")
[examples/grid.rs:53] &zxsq = BigDecimal("1.8694536567697711789851255047496321529336730402468671461137701928851841")
[examples/grid.rs:53] &zxsq = BigDecimal("0.0000808484870181644157073008612917293845070161198111348197589589704078")
```
I'm aware this shouldn't be necessary.
I have not looked at Rug, but will check it out.
I tried rounding, but am getting an exception:
```rust
use bigdecimal::BigDecimal;
use std::str::FromStr;

fn main() {
    let z = BigDecimal::from_str("3.4613133327063255443352353815722045816611958409944513040035462804475524").unwrap();
    let zsq = &z * &z;
    dbg!(&zsq);
    let zsq = zsq.round(70);
    dbg!(&zsq);
}
```
```console
$ cargo run
   Compiling bigd v0.1.0 (/home/wpwoodjr/MandelRust/bigd)
    Finished dev [unoptimized + debuginfo] target(s) in 0.38s
     Running `target/debug/bigd`
[src/main.rs:8] &zsq = BigDecimal("11.98068998717057027117831038176842424089721245689422762852009735276472125002445815376381273406650291482823239255488095168167189245044715074576")
thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', /home/wpwoodjr/.cargo/registry/src/github.com-1ecc6299db9ec823/bigdecimal-0.3.0/src/lib.rs:594:43
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
I fixed `BigDecimal::round` so it won't crash. It also now correctly rounds half-to-even ("ROUND_HALF_EVEN") by default. That will have to do until a proper math context is developed.
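For reference, half-to-even ("banker's") rounding breaks ties by the parity of the last kept digit. A minimal sketch of the rule (`round_half_even` is illustrative, not the crate's API, and assumes no nonzero digits follow the dropped one):

```rust
// Half-to-even rounding of the last digit: `kept` is the number formed
// by the retained digits, `dropped` is the single digit being discarded.
fn round_half_even(kept: i64, dropped: u8) -> i64 {
    match dropped {
        0..=4 => kept,     // below the midpoint: round down
        6..=9 => kept + 1, // above the midpoint: round up
        5 => {
            // exact tie: round toward the even neighbor
            if kept % 2 == 0 { kept } else { kept + 1 }
        }
        _ => unreachable!("dropped must be a single digit"),
    }
}

fn main() {
    assert_eq!(round_half_even(12, 5), 12); // tie, 12 is even: stays
    assert_eq!(round_half_even(13, 5), 14); // tie, 13 is odd: goes up
    println!("ok");
}
```

This avoids the upward bias of always rounding ties up, which matters when many rounded results are accumulated.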
Try with this branch to make sure it's fixed:

```toml
[dependencies]
bigdecimal = { git = "https://github.com/akubera/bigdecimal-rs", branch = "bugfix/UnwrapExceptionInRound" }
```
> Try with this branch to make sure it's fixed.

Yes, the original bug is fixed.
"optimized" multiplication implementation was released in version v0.3.1.
Yes, the original bug is fixed in the new version.
Consider this code (I use bigdecimal 0.3.0):

At every loop iteration we simply compute 1 * 1, so every iteration should be very fast. Instead, every iteration seems to be 2× slower than the previous one. When I run this code, the first 20 dots appear immediately, but then the dots start to appear very slowly.