quaeritis opened this issue 3 years ago (status: Open)
Hi @quaeritis,
Thanks for reporting!
The C bindings are generated via https://github.com/basnijholt/pfapack/blob/master/pfapack/ctypes.py, and the C library itself is downloaded from @michaelwimmer's website, https://michaelwimmer.org/pfapack.tgz (see the conda-forge recipe).
I am no C expert; perhaps @michaelwimmer knows what is going on?
This isn't a bug: a 110 MB matrix is very unlikely to have a Pfaffian that fits into a double-precision float. For comparison, the determinant overflows as well.
@quaeritis, if you want to compute something like a topological invariant, you are better off using alternative approaches that work with sparse matrices.
The error happens in pfapack.py, i.e. in the Python part. The problem is that for a large matrix the Pfaffian becomes an exponentially large number: you are multiplying many doubles together, and eventually you get an overflow. That is expected (the same happens for determinants).
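A minimal NumPy illustration of the same effect with determinants (not pfapack-specific, and the matrix here is a generic random one, not the reporter's `skew_h_pi.npy` file): the plain determinant of a large matrix overflows a double, while the sign/log form stays finite.

```python
import numpy as np

# For a large random matrix, log|det| grows roughly like (n/2) * log(n),
# which exceeds the double-precision range (|x| <~ 1.8e308) long before
# n = 2000.
rng = np.random.default_rng(0)
n = 2000
a = rng.normal(size=(n, n))

det = np.linalg.det(a)               # overflows to +/-inf
sign, logdet = np.linalg.slogdet(a)  # sign and log|det| stay finite

print(np.isfinite(det), np.isfinite(logdet))  # False True
```

This is exactly the trade-off the sign/logarithm output of ZSKPF10 addresses for Pfaffians.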
For a very large matrix it is better to compute the sign and the logarithm (or exponent) of the Pfaffian. The Fortran routine ZSKPF10 does this, for example; its output is documented as:
```
*  PFAFF    (output) DOUBLE COMPLEX array, dimension 2
*           The value of the Pfaffian in the form
*           PFAFF(1)*10**PFAFF(2) with PFAFF(2) purely real.
```
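To show how such a pair can be consumed without ever forming the (possibly overflowing) product, here is a small sketch. The function name `pfaffian_log10` and the tuple layout are illustrative, following the `(PFAFF(1), PFAFF(2))` convention quoted above; this is not part of the pfapack API.

```python
import numpy as np

def pfaffian_log10(pfaff):
    """Given a (mantissa, exponent) pair such that the Pfaffian equals
    pfaff[0] * 10**pfaff[1], return (phase, log10 of absolute value)
    without ever forming the possibly-overflowing product."""
    mantissa, exponent = pfaff
    phase = mantissa / abs(mantissa)              # unit-modulus complex phase
    log10_abs = np.log10(abs(mantissa)) + np.real(exponent)
    return phase, log10_abs

# Example pair: Pfaffian = 2 * 10**3500, far outside double range.
phase, log10_abs = pfaffian_log10((2.0 + 0j, 3500.0 + 0j))
print(phase, log10_abs)  # (1+0j) 3500.3010299956639
```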
Just looked into `ctypes.py`: if `avoid_overflow=True`, the ZSKPF10 code is indeed called, but the result is converted back to an ordinary scalar, which in case of overflow becomes `np.inf` (times the prefactor in front of the `10^...`). Maybe it would be better, for `avoid_overflow=True`, to return the prefactor `a` and the exponent `b` of the number `a*10^b` directly?
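To make the suggestion concrete, here is a plain-NumPy sketch (not the actual `ctypes.py` code) of what is lost when the `(a, b)` pair is collapsed into one scalar:

```python
import numpy as np

# Pretend ZSKPF10 returned this (prefactor, exponent) pair, i.e. the
# Pfaffian equals a * 10**b.  The exponent is far beyond double range.
a, b = 1.5, 400.0

# Collapsing the pair into a single double overflows to inf:
with np.errstate(over="ignore"):
    collapsed = a * np.float64(10.0) ** b
print(collapsed)       # inf

# Returning the pair (or its logarithm) keeps the value usable:
log10_value = np.log10(a) + b
print(log10_value)     # ~400.176
```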
Hello,
I have a problem where an overflow occurs in pfapack when the matrix exceeds a certain size.
Code and file to reproduce the error: skew_h_pi.npy.gz
Error message:
Output:
Some questions:
`libcpfapack.so`?

Best regards,
quaeritis