wbhart closed this issue 3 years ago.
Any reason to use that rather than directly using the strong echelon form? It looks like the Howell form just moves the zero rows of the latter to the end (which confuses me a bit, because I thought the strong echelon form required zero rows to be at the end anyway...)
I have completely forgotten why I added this ticket. Perhaps @thofma or @fieker has a comment.
I understand what this ticket is about now.
Here is my understanding. Over a field you have the Gauss-Jordan form, which is canonical. We currently compute an LU decomposition, which presumably also only works over a field, as mentioned in the ticket.
Over a PID you can use Hermite Normal Form. This is not particularly relevant here.
Over a PIR, the Hermite Normal Form is not unique, so there is something called the Howell Form, which is again unique. This is a logical canonical form to use to compute the determinant in cases such as Z/nZ. It's still O(n^3) like the current code, but works over PIRs, not just fields.
What Strong Echelon Form is, I don't know. It's not terminology I am familiar with.
I hadn't heard of it either until I looked at the code and saw that the Howell form called it. I found this paper by Fieker and Hofmann (presumably, @thofma and @fieker), which describes it:
https://arxiv.org/pdf/1612.09176.pdf
I tweaked the algorithm a little bit for sparse matrices...I think I should end up with the same result, though more testing is needed. I don't entirely understand the difference between it and the Howell form (I think something about ordering and extra zero rows), but for computing the determinant, the former should be sufficient.
K
Yes, for the determinant it does not matter which one you use. When trying to compute the Hermite normal form over Z by lifting a Howell form over Z/nZ, it is easier to work with the strong echelon form.
Regarding the ticket, I don't know if we want to do this in flint itself. The Howell form won't be as fast as the current implementation in the case that n is prime and surely we don't want to test if the modulus is a prime.
So my understanding is that, if n is prime, the strong echelon form just does standard row elimination, using the xgcd between the pivot and any other entry rather than inverting the pivot and multiplying, both of which should be O(log n) operations. There is the extra "reducing upwards" phase, which maybe could be excluded for something like a "weak echelon form". That is, just Gaussian reduce to an upper triangular form, which is all that is needed for the determinant.
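For concreteness, a minimal sketch of that elimination step in plain C (the helpers are mine, not FLINT's, and overflow handling for full word-size n is omitted):

```c
#include <stdint.h>

/* extended gcd: returns g = gcd(a, b) and sets *s, *t with s*a + t*b = g */
static int64_t xgcd(int64_t a, int64_t b, int64_t *s, int64_t *t)
{
    int64_t s0 = 1, s1 = 0, t0 = 0, t1 = 1;
    while (b != 0)
    {
        int64_t q = a / b, r = a % b;
        a = b; b = r;
        r = s0 - q * s1; s0 = s1; s1 = r;
        r = t0 - q * t1; t0 = t1; t1 = r;
    }
    *s = s0; *t = t0;
    return a;
}

/* Clear row_j[col] against the pivot row_i[col] using the 2x2 transform
   [s t; -b/g a/g], which has determinant 1, hence is unimodular and
   preserves the determinant of the matrix. All arithmetic is mod n.
   Assumes the pivot a = row_i[col] is nonzero, so g > 0. */
static void eliminate(uint64_t *row_i, uint64_t *row_j,
                      int64_t ncols, int64_t col, uint64_t n)
{
    int64_t s, t;
    const int64_t a = (int64_t) row_i[col], b = (int64_t) row_j[col];
    const int64_t g = xgcd(a, b, &s, &t);
    const int64_t m = (int64_t) n;

    for (int64_t k = col; k < ncols; k++)
    {
        int64_t x = (int64_t) row_i[k], y = (int64_t) row_j[k];
        row_i[k] = (uint64_t)((((s * x + t * y) % m) + m) % m);
        row_j[k] = (uint64_t)(((((a / g) * y - (b / g) * x) % m) + m) % m);
    }
    /* afterwards row_i[col] = g and row_j[col] = 0 */
}
```

Each such step costs one xgcd, i.e. O(log n) word operations, the same as an inversion, so for prime n this has the same big-O as ordinary Gaussian elimination.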
Also, I assume primality testing for word size integers is extremely fast, but alternatively one could have a separate set of _prime functions that required known primality. Or _composite, if you prefer to assume the opposite.
Yes, the strong echelon form over a field is just Gaussian elimination, but with a lot of overhead. Also it has cubic complexity, while the LU decomposition that is currently implemented should be subcubic (I think).
We have already seen with nmod_polys that while primality testing for word-size integers is fast, it is not negligible.
I'm not convinced the current LU is subcubic. I think it is currently cubic.
Testing primality of n is going to be bad in two cases; when n is larger than a word and when the dimension of the matrix is small.
We could implement a Gaussian elimination which silently fails if it hits an impossible inverse. Then the det algorithm could switch over to using the Howell form. When the dimension is sufficiently large, we could also do a probable prime test on n to decide which algorithm to start with.
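Something like this, say (all the det helpers here are hypothetical; n_is_probabprime is the existing probable-prime test in ulong_extras):

```c
/* Hypothetical dispatch: try the fast LU path first, and fall back to a
   Howell-style elimination only when an impossible inverse shows up. */
mp_limb_t nmod_mat_det_dispatch(const nmod_mat_t A)
{
    mp_limb_t det;

    /* for large dimensions a probable-prime test is negligible next to
       the O(dim^3) elimination, so use it to pick the starting point */
    if (A->r >= DET_PRIME_TEST_CUTOFF && !n_is_probabprime(A->mod.n))
        return nmod_mat_det_howell(A);          /* hypothetical */

    if (nmod_mat_det_lu_or_fail(A, &det))       /* hypothetical: returns 0
                                                   on an impossible inverse */
        return det;

    return nmod_mat_det_howell(A);
}
```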
I'd like to see any evidence that for large n being prime or not makes any difference... After the 1st xgcd, in general the pivot element will be 1 and the xgcds revert to divide only... It is possible that primes are faster, but I doubt it is worth the effort...
Actually, looking at the code for the strong echelon form again, the fact that the rows get approximately cleared by a unit means that the final pivots are divisors of the modulus, right? (lines 183-187 of nmod_mat.h, which don't seem to occur in Algorithm 3). That is, over a field, the strong echelon form should be the same as the rref, so that the determinant is not preserved (?).
Am I missing something?
(which is to say, one could totally modify the strong echelon form to make something akin to a fraction-free LU, which would compute the determinant in roughly the same time as the existing LU decomposition)
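Roughly, building on the eliminate sketch above (again hypothetical code, not the existing FLINT routine):

```c
/* Sketch: determinant over Z/nZ without ever inverting a pivot. The xgcd
   transforms have determinant 1 and row swaps flip the sign, so at the
   end det = +-(product of the diagonal entries) mod n. */
uint64_t det_mod_n(uint64_t **row, int64_t dim, uint64_t n)
{
    int sign = 1;

    for (int64_t col = 0; col < dim; col++)
    {
        int64_t piv = col;
        while (piv < dim && row[piv][col] == 0)   /* find a nonzero pivot */
            piv++;
        if (piv == dim)
            return 0;     /* the triangular form gets a zero diagonal entry */
        if (piv != col)
        {
            uint64_t *tmp = row[piv]; row[piv] = row[col]; row[col] = tmp;
            sign = -sign; /* a swap multiplies the determinant by -1 */
        }
        for (int64_t j = col + 1; j < dim; j++)
            if (row[j][col] != 0)
                eliminate(row[col], row[j], dim, col, n);
    }

    uint64_t det = 1;
    for (int64_t i = 0; i < dim; i++)   /* __uint128_t: GCC extension */
        det = (uint64_t)(((__uint128_t) det * row[i][i]) % n);
    return (sign == 1 || det == 0) ? det : n - det;
}
```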
Yes, you are right. The strong echelon form cannot be computed using only unimodular transformations, so the determinant won't be preserved.
Ok, how about this: replace the existing ticket with a new ticket (not for this release) for FFLU decomposition/determinant/solving for non-prime modulus? Once I get through the current Z stuff, I can try to put that together, and do some timing tests to see how much worse it is in practice than the prime version.
Sounds like a good idea. One could also add the computation of the kernel and solving a la Storjohann & Mulders: https://link.springer.com/chapter/10.1007/3-540-68530-8_12
I'm in favour. If you open the other ticket, could you repeat your idea there so we don't forget?
@kartikv Are you working on this? Or should I push it out to the next release?
I was going to wait until the sparse matrix stuff was resolved, but if you want it in sooner I can do that.
I don't mind, you are the volunteer, so it's your call what you want to contribute.
I just wanted to point out that this ticket was assigned to the flint-2.6 release which we anticipate going into beta in about 8-9 days. The sparse stuff will go into the next release, simply because it is too new to be considered stable for this release.
If you would prefer to wait, I can assign this ticket to the next release. It isn't urgent in any way.
That sounds good to me, there's a bunch of other LA stuff to group together with it (e.g., fmpz_nmod_mat stuff). Also, would it be reasonable to have some sort of consistent nomenclature for which functions assume the modulus is prime (implicitly or explicitly) and which do not? This is a bit tricky when you want different behavior for prime and non-prime moduli, and I'm not sure what the best way is to handle it.
We can't really change our function names, so there's not much we can do about this.
However, we eventually want everything that can possibly handle non-prime modulus to do so. Apart from factorisation and gcd, that's just about everything, though note that the mathematics isn't even known in some cases.
But note that this is an absolutely gargantuan job. We will only get there in small increments over a very long period of time.
My thought was to not change any existing functions, but instead to create a new pmod module (well, family of modules) which is like nmod but assumes n is prime. For all the functions that don't care about primality, the pmod functions just wrap the corresponding nmod ones; the functions in nmod_* that require primality conversely wrap the pmod ones. We only get two different implementations when we have distinct composite and prime algorithms, in which case the pmod ones may just be faster. This should leave all existing interfaces in place with roughly the same runtimes (assuming inline wrapper functions have no overhead), with slightly more certainty for new users as to what to expect from a given function.
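Something like the following sketch (everything pmod_* here is hypothetical, say with pmod_mat_t an alias of nmod_mat_t; nmod_mat_add is the existing function):

```c
/* pmod_mat.h (hypothetical): prime-modulus layer over the existing nmod_mat */

/* Functions that never care about primality are pure forwarding wrappers,
   so an inlining compiler produces the same code as calling nmod directly. */
static inline void
pmod_mat_add(pmod_mat_t C, const pmod_mat_t A, const pmod_mat_t B)
{
    nmod_mat_add(C, A, B);
}

/* Functions with a genuinely faster prime-only algorithm get a real pmod
   implementation; the corresponding nmod function dispatches to it after
   checking (or being told) that the modulus is prime. */
slong pmod_mat_rref(pmod_mat_t A);  /* plain Gauss-Jordan: every nonzero
                                       pivot is invertible, no xgcd tricks */
```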
(also, I've been referring to the individual components of flint as modules, do you have a term for them?)
That would need a lot of discussion before we decided to go down that route. I'd certainly hope that most of the existing functions can remain as fast as they are in the prime case and yet deal with the non-prime case. Most of the time it boils down to detecting impossible inverses (which we get for free), then using the now-known factorisation of n to split the original problem into two, then doing some Chinese remaindering. This shouldn't impact the timings in the prime case at all.
Duplicating all the code into a new module is not really a favoured idea at present. Flint is already quite large. There has to be a really compelling reason to make it much larger (sparse stuff has always been desirable of course).
Yes, we call them modules as well.
Do you have an example where you get (and expect) different answers for prime/non-prime? Especially in the linear algebra I don't see this, barring situations where you ask for explicit algorithms (which may not make sense). But it is easy to define a rref in such a way that the results for prime/non-prime "agree", i.e. the non-prime algo will return the identical result to the prime algo if the modulus happens to be prime.
Apologies, when I said different behavior, I meant a different algorithm, not different results for the same input: for instance, for composite n, one might want to first pull off small prime factors, do computation modulo those, then proceed as if the rest of the modulus was prime, possibly obtaining additional factors of the modulus when inverses fail. Of course, for things like rref, the form of the output is different for composite and prime moduli (for the former, it looks more like the Howell form).
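For the determinant, the splitting shape would be roughly the following (matrix_t and det_mod are hypothetical placeholders for any of the algorithms above; reduction mod each coprime factor is a ring homomorphism, so det commutes with it):

```c
#include <stdint.h>

/* CRT: combine d1 mod n1 and d2 mod n2, gcd(n1, n2) = 1, into d mod n1*n2.
   Reuses the xgcd helper from the earlier sketch; overflow caveats apply. */
static uint64_t crt_pair(uint64_t d1, uint64_t n1, uint64_t d2, uint64_t n2)
{
    int64_t s, t;
    xgcd((int64_t) n1, (int64_t) n2, &s, &t);   /* s*n1 + t*n2 = 1 */
    uint64_t inv = (uint64_t)(((s % (int64_t) n2) + (int64_t) n2)
                              % (int64_t) n2);  /* n1^-1 mod n2 */
    uint64_t diff = (d2 + n2 - (d1 % n2)) % n2; /* (d2 - d1) mod n2 */
    return d1 + n1 * ((diff * inv) % n2);
}

/* det(A) mod n1*n2 from the two smaller problems */
uint64_t det_by_splitting(const matrix_t A, uint64_t n1, uint64_t n2)
{
    return crt_pair(det_mod(A, n1), n1,
                    det_mod(A, n2), n2);
}
```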
Also, Bill, my hope was that the library wouldn't grow too large if we simply had inline wrappers connecting the two modules... though maybe you meant documentation size rather than library file size? But all this should wait until one can do a full accounting of the existing functions and see what might want to have separate composite versions.
K
No, it does not. For prime modulus, there is no reason for the Howell form not to be identical to the rref. In fact, the only parts of the algorithm executed are the ones used for Gaussian elimination. Depending on the actual implementation, using canonical units you even "divide" by the pivot element in this case to get 1s on the diagonal. For composite modulus you have a choice: (2 1) mod 8 is a rref, but the Howell form would be

    (2 1)
    (0 4)

which is also a rref, describing the "same" module. Note that this behaviour comes from annihilators, which do not exist for prime modulus.
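This example can be reproduced with FLINT's existing nmod_mat functions (signatures as I recall them; howell_form, I believe, wants at least as many rows as columns, hence the zero padding row):

```c
#include "nmod_mat.h"

int main(void)
{
    nmod_mat_t A;

    nmod_mat_init(A, 2, 2, 8);     /* 2 x 2 over Z/8Z, zero-initialized */
    nmod_mat_entry(A, 0, 0) = 2;   /* first row (2 1), second row left zero */
    nmod_mat_entry(A, 0, 1) = 1;

    nmod_mat_howell_form(A);       /* expect rows (2 1) and (0 4) */
    nmod_mat_print_pretty(A);

    nmod_mat_clear(A);
    return 0;
}
```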
Sorry, I meant actually composite modulus, not a "composite" modulus that is secretly prime. Let me try my previous post again in more detail, with rref as an example.
For p prime, the putative pmod_mat_rref(M, p) and existing nmod_mat_rref(M, p) will, indeed, return the exact same result, namely a matrix N such that N = AM for A an invertible matrix, where
1) only the first r rows of N are nonzero (where r is the rank of M),
2) the leading entry of row i is (j_i, 1) for i = 0, ..., r-1, and j_0 < ... < j_{r-1},
3) the entries of column j_i are zero save for the 1 in row i.
In pmod_mat_rref, this is done by finding, for each column, some nonzero pivot row, moving it to follow the previous pivot row, multiplying by the inverse of the pivot element, and using said row to eliminate all other entries on that column. In nmod_mat_rref, this is done in essentially the same fashion, save that one uses the extended gcd (via the canonical unit function) to compute the inverse.
Now, if n is composite (not prime), the putative pmod_mat_rref(M, n) may still "accidentally" succeed (if all the pivot elements it finds are invertible). However, if it fails to find an inverse, it will simply fail, saying "p not a prime, x not invertible modulo p" for whatever x it finds. nmod_mat_rref(M, n) will never fail; instead it does all the gcd tricks and sundry from the strong echelon form to produce a matrix N = AM, for A invertible, such that
1) only the first r rows of N are nonzero,
2) the leading entry of row i is (j_i, a_i) for i = 0, ..., r-1, with a_i | n and j_0 < ... < j_{r-1},
3) the entries of column j_i are zero below row i and < a_i above row i,
4) the rows of N span the rowspace of M.
In this case, the advantage of having separate pmod and nmod functions is only in code simplicity/readability for the former, as any slight benefit from (a) having guaranteed inverses, (b) always doing normal Gaussian elimination, and (c) not checking for additional vectors in the row space is pretty small. If all nmod_* functions can be tweaked to do essentially the optimal thing for n prime and do something sensible for n composite, then I completely agree that a new module would be excessive.
Does this extend to the characteristic polynomial as well? Our current nmod_mat_charpoly assumes prime n, right?
I believe so, but it should also work for most composite moduli, and it shouldn't (?) be too hard to make it work in general.
I can't find a simple description of the Howell form over Z/nZ (e.g. Howell's original paper) that isn't behind a greed wall, so I'm giving up on this one and will remove it from the milestone. There are descriptions of algorithms, but these seem to assume you already read Howell's paper.
I think Howell's original algorithm is O(n^3). I think we base det on an implementation of that, rather than try to keep track of the changes to the determinant all the way through the strong echelon form computation and then through the final adjustments to get the Howell form.
Implemented in trunk.
Now that we have the Howell form, we can unconditionally compute the determinant in time O(n^3) by adapting the Howell form (keeping track of multiplications by units and of permutations as the algorithm proceeds) and multiplying the diagonal elements to get the determinant.
Currently, over Z/nZ we assume that we have a field, as n_invmod is called (which raises an exception for an impossible inverse).
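A fail-soft version is easy to sketch: replace the n_invmod call with n_gcdinv (the existing ulong_extras function that returns the gcd and, I believe, writes the inverse when the gcd is 1; exact behaviour should be checked against the docs). For example:

```c
mp_limb_t g, pivinv;

/* instead of:  pivinv = n_invmod(piv, mod.n);   -- aborts when gcd != 1 */
g = n_gcdinv(&pivinv, piv, mod.n);
if (g != 1)
{
    /* impossible inverse: since the pivot is nonzero mod n, g is a proper
       factor of n. Either fail softly and switch to the Howell-form path,
       or split n = g * (n/g) and finish by Chinese remaindering. */
    return DET_FAIL;   /* hypothetical status code */
}
```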