Open liushang0322 opened 2 years ago
atanh(-3 + 0im) returns -0.34657359027997264 + 1.5707963267948966im,
but the result should be -0.34657359027997264 - 1.5707963267948966im.
Maybe something is wrong?
What's happening here is that asech has a branch cut at im = 0, so the formula Julia is using (asech(x) = acosh(inv(x))) is getting tripped up: inv(-3 + 0im) is -1/3 - 0.0im (because floating point is weird), which causes this to take the other half of the branch.
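The same mechanism can be reproduced with Python's stdlib cmath, whose branch-cut conventions match Julia's here (a small sketch, using 1/x in place of Julia's inv):

```python
import cmath
import math

# Division flips the sign of the zero imaginary part:
# 1/(-3+0j) lands on the *lower* side of the real axis.
w = 1 / (-3 + 0j)
print(math.copysign(1.0, w.imag))  # -1.0, i.e. the imaginary part is -0.0

# acosh has a branch cut on the real axis for x < 1, so the two
# signed zeros select opposite sides of the cut:
upper = cmath.acosh(complex(-1/3, 0.0))   # imag ~ +1.9106 (limit from above)
lower = cmath.acosh(complex(-1/3, -0.0))  # imag ~ -1.9106 (limit from below)
print(upper, lower)
```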
So is this result right?
acosh(-3 + 0im) returns -1.762747174039086 + 3.141592653589793im,
but the result should be 1.762747174039086 + 3.141592653589793im?
This does appear to be a bug. We appear to be taking the wrong side of some branch cuts. That said, you probably shouldn't rely on finite precision math for this type of thing since any rounding errors in your inputs will lead to different results (since the functions aren't continuous).
So may I ask whether these issues will be fixed in the future? Thank you very much for your reply.
yes.
FYI, the results of the Python stdlib cmath are the same as Julia's.
import cmath
cmath.atanh(-3 + 0j)
# (-0.34657359027997264+1.5707963267948966j)
def asech(x):
    return cmath.acosh(1/x)
asech(-3 + 0j)
# -1.9106332362490186j
Is the current behavior of asech really wrong? It seems to be in line with what I thought was expected for a complex function $f(x + \mathrm{i} \varepsilon)$ near the branch cut when using IEEE floating-point arithmetic, i.e. it matches the limit $\varepsilon \to 0^+$ from above for an imaginary part with a positively signed zero, and the limit from below for a negatively signed zero:
julia> asech(-3.0 + 1e-10im)
1.1785113019775792e-11 - 1.9106332362490184im
julia> asech(-3.0 + 0.0im)
0.0 - 1.9106332362490184im
julia> asech(-3.0 - 1e-10im)
1.1785113019775792e-11 + 1.9106332362490184im
julia> asech(-3.0 - 0.0im)
0.0 + 1.9106332362490184im
Similarly, the atanh result seems to be consistent in this regard as well. See also https://github.com/JuliaLang/julia/issues/12916 about atanh for arguments $x>1$ on the positive real axis.
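For what it's worth, the same limit-matching behavior can be checked with Python's cmath, which follows the same signed-zero conventions:

```python
import cmath

# Approaching the atanh branch cut (real axis, x < -1) from above and
# from below; the signed zeros reproduce the two one-sided limits.
above  = cmath.atanh(complex(-3.0,  1e-10))  # imag ~ +pi/2
at_pos = cmath.atanh(complex(-3.0,  0.0))    # imag = +pi/2
below  = cmath.atanh(complex(-3.0, -1e-10))  # imag ~ -pi/2
at_neg = cmath.atanh(complex(-3.0, -0.0))    # imag = -pi/2
print(above, at_pos, below, at_neg)
```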
Now granted, this does not match the Wolfram Alpha (or Mathematica) result for asech(-3)
: As mentioned in the issue, because Wolfram Alpha does not have the concept of a signed zero, one of the two possible limits will be chosen if evaluating the function directly on the cut. (I used to think the convention was to use the limit coming from the counter-clockwise direction, however for the inverse hyperbolic secant, it appears the limit from below is chosen.)
The only thing that is not consistent with this logic is the acosh(-3 + 0im) example. However, I think this is due to the fact that -3 + 0im will be of type Complex{Int64}, and then the evaluation of conj in https://github.com/JuliaLang/julia/blob/e1fa6a51e4142fbf25019b1c95ebc3b5b7f4e8a1/base/complex.jl#L1009 will not flip the sign of the imaginary part, because it is an integer 0. (Incidentally, this makes conj(1 + 0im) not equal to inv(1 + 0im), since the latter returns a Complex{Float64} object.)
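For contrast, in Python every complex component is a float, so conjugating -3.0 + 0.0j does flip the sign of the zero imaginary part, and acosh then lands on the expected side of the cut (a small sketch):

```python
import cmath
import math

z = complex(-3.0, 0.0)
zc = z.conjugate()  # float conjugation flips the zero's sign: imag is -0.0
sign = math.copysign(1.0, zc.imag)  # -1.0

# Hence acosh(z) and acosh(conj(z)) land on opposite sides of the cut:
a = cmath.acosh(z)   # 1.7627... + pi*1j
b = cmath.acosh(zc)  # 1.7627... - pi*1j
print(sign, a, b)
```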
In any case, passing an argument of type Complex{Float64} to acosh instead gives the expected result:
julia> acosh(-3 + 0im)
-1.762747174039086 + 3.141592653589793im
julia> acosh(-3.0 + 0.0im)
1.762747174039086 + 3.141592653589793im
Interestingly, the complex atanh function always converts its argument to float in the very first line of the implementation, while acosh does not, which seems a little inconsistent.
asech(-3 + 0im) returns 0.0 - 1.9106332362490184im,
but the result should be 0.0 + 1.9106332362490184im.
Is something wrong?