JuliaDiff / ForwardDiff.jl

Forward Mode Automatic Differentiation for Julia

missing method big(::Dual) #537

Open · stevengj opened this issue 3 years ago

stevengj commented 3 years ago

It seems like this should work. All that is needed is:

```julia
Base.big(x::Dual{T}) where {T} = Dual{T}(big(x.value), big(x.partials))
Base.big(p::Partials) = Partials(big.(p.values))
```

(Mentioned on discourse.)
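
For illustration, a minimal usage sketch with these two methods defined (the `:tag` symbol and the numeric values here are placeholders, not from the issue):

```julia
using ForwardDiff: Dual, Partials

# Proposed methods from this thread (not yet in ForwardDiff):
Base.big(x::Dual{T}) where {T} = Dual{T}(big(x.value), big(x.partials))
Base.big(p::Partials) = Partials(big.(p.values))

d = Dual{:tag}(1.5, 2.0, 3.0)            # value 1.5 with partials (2.0, 3.0)
b = big(d)                               # value and partials widened elementwise
b.value isa BigFloat                     # true
all(p isa BigFloat for p in b.partials)  # true
```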

Pramodh-G commented 3 years ago

Hi, I'd like to take this up. I have a question, though: Base.float(x::Dual) is currently defined as

```julia
Base.float(d::Dual{T,V,N}) where {T,V,N} = convert(Dual{T,promote_type(V, Float16),N}, d)
```

Should I follow the same pattern for Base.big(x::Dual)? Thanks :)

stevengj commented 3 years ago

I think the float function is wrong (see #535), so I wouldn't follow that model.

Calling big on the values seems like the best way to make it consistent with Base.
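
To make "consistent with Base" concrete: in Base, big preserves the integer/float distinction, so a componentwise definition would carry that over to Dual, whereas a float-style definition always forces floating-point promotion. A hedged sketch (the tag and values are placeholders):

```julia
big(2)    # BigInt(2): integers stay integers
big(2.5)  # BigFloat(2.5)

# With the componentwise definition, an integer-valued Dual would map to a
# BigInt-valued Dual, rather than being promoted through
# promote_type(V, Float16) as the current float method does:
using ForwardDiff: Dual
d = Dual{:tag}(2, 1)
big(d)    # would give Dual{:tag}(big(2), big(1)), i.e. BigInt value and partials
```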

dlfivefifty commented 3 years ago

I ran into this in code that tried to convert to BigFloat. So I'm wondering whether to do big(float(x)) or float(big(x))?

stevengj commented 3 years ago

big(float(x)) should generally be faster, assuming float(x) doesn't lose precision (e.g., when x is an integer ≤ maxintfloat).
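
A quick check of that precision caveat (values chosen for illustration):

```julia
x = 2^60 + 1                # an Int that Float64 cannot represent exactly
x <= maxintfloat(Float64)   # false, so float(x) must round
big(float(x)) == x          # false: precision is lost before widening
float(big(x)) == x          # true: widening to BigInt happens first, and
                            # BigFloat's default 256-bit precision holds all 61 bits
```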

markmbaum commented 3 years ago

@stevengj is there a reason that floatmin and floatmax are not implemented for Dual numbers? I just ran into that problem and then found this issue.
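
For what it's worth, a minimal sketch of what such methods might look like, following the componentwise approach above. This is not in ForwardDiff; the definitions and the availability of zero(Partials{N,V}) are assumptions for illustration:

```julia
using ForwardDiff: Dual, Partials

# Sketch only: forward floatmin/floatmax to the value type V and attach
# zero partials (a constant has zero derivative). Errors if V is not an
# AbstractFloat, matching Base's behavior for floatmin/floatmax.
Base.floatmin(::Type{Dual{T,V,N}}) where {T,V,N} = Dual{T}(floatmin(V), zero(Partials{N,V}))
Base.floatmax(::Type{Dual{T,V,N}}) where {T,V,N} = Dual{T}(floatmax(V), zero(Partials{N,V}))
```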