The method `singularLocus` in M2 is not very efficient. I recently looked at the file `bir.m2` written by Giovanni Staglianò for his article, where one gets the option to use Singular to carry out this computation. For example, here is a test case. Is it possible to make this an official feature, which would make `singularLocus` much more usable?

Update: The above example is a bit degenerate. Here is a better one.
No, we can't require Singular as a prerequisite for Macaulay2, but we could try to devise a faster algorithm.
That's very reasonable.
Actually I looked at the code for `singularLocus`; the most compute-heavy parts are the call of `minors` and the construction of a quotient ring. Both seem to be rather basic methods... Also I noticed that changing the strategy of `minors` to `Cofactor` seems to improve performance for this particular case (and several others), though I'm not sure that this is always the case.
```
singularLocus(Ring) := QuotientRing => (R) -> (
    f := presentation R;
    A := ring f;
    c := codim(R, Generic => true);
    A / trim(ideal f + minors(c, jacobian f, Strategy => Cofactor)))
```
Notice also that forming a quotient ring involves computing a Gröbner basis, which is perhaps not needed if all you want to do is determine whether the Jacobian ideal is the unit ideal, especially in the homogeneous case. I'm not sure whether the code for `I == 1` is optimized in that sense yet.
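For instance, here is a minimal sketch of that idea (not the current implementation; `isSmoothQuick` is a hypothetical name): test smoothness by reducing 1 against the Jacobian ideal, never forming the quotient ring.

```
-- hypothetical sketch: test smoothness without constructing the quotient ring;
-- this still computes a Groebner basis of J, but skips trim and the quotient
isSmoothQuick = R -> (
    f := presentation R;
    c := codim(R, Generic => true);
    J := ideal f + minors(c, jacobian f);
    1_(ring f) % J == 0  -- true exactly when J is the unit ideal
    )
```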
Anyway, if you come up with changes that are improvements, feel free to submit a pull request.
Yes, I'm aware that forming a quotient ring computes a Gröbner basis. In fact this seems to be the bottleneck: quotient rings, `dim`, `degree`, etc., almost all the important methods use a Gröbner basis computation, and Singular appears to be far more efficient in doing it. Maybe the Gröbner basis engine needs to be upgraded entirely...
We are working on it :). The default GB being used here is not the best possible, unfortunately.
The routine is still often much less efficient than it needs to be. In particular, I believe it is computing the saturation of the singular locus, which is a computation-heavy step compared to the GB itself. On the other hand, @kschwede (and co-authors) has a Macaulay2 package called `FastLinAlg` which is designed to be smarter about singular loci. I don't think it is (yet) as good as the Singular code for smoothness, but it is a lot better (I believe) than the default singular locus routine.
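For example, a minimal usage sketch (assuming the package is installed; the cone below is just an illustration, not from this thread):

```
needsPackage "FastLinAlg"
R = ZZ/101[x,y,z]/(x^2 + y^2 + z^2)  -- a cone, singular only at the origin
regularInCodimension(1, R)           -- should verify R is regular in codimension 1
```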
That would be great, thanks! I will take a look at `FastLinAlg`.
First a quick comment. That sort of example is zero-dimensional (10 random equations in 4 variables), i.e., projectively it's the empty scheme. If you increase the number of variables then Macaulay2 hangs on dimension computations (Singular also doesn't handle it quickly). It might be just handling some edge cases better, I'm not sure.
However, for a quick comparison, here's an abelian surface in P^8 (actually a product of elliptic curves). There are 31646160 potential minors to compute...
```
S = ZZ/101[x_0..x_8]
J = ideal(x_5*x_7-x_4*x_8, x_2*x_7-x_1*x_8, x_5*x_6-x_3*x_8, x_4*x_6-x_3*x_7,
    x_2*x_6-x_0*x_8, x_1*x_6-x_0*x_7, x_2*x_4-x_1*x_5, x_2*x_3-x_0*x_5,
    x_1*x_3-x_0*x_4, x_6^3+x_7^3+x_8^3, x_3*x_6^2+x_4*x_7^2+x_5*x_8^2,
    x_0*x_6^2+x_1*x_7^2+x_2*x_8^2, x_3^2*x_6+x_4^2*x_7+x_5^2*x_8,
    x_0*x_3*x_6+x_1*x_4*x_7+x_2*x_5*x_8, x_0^2*x_6+x_1^2*x_7+x_2^2*x_8,
    x_3^3+x_4^3+x_5^3, x_0*x_3^2+x_1*x_4^2+x_2*x_5^2,
    x_0^2*x_3+x_1^2*x_4+x_2^2*x_5, x_2^3+x_5^3+x_8^3,
    x_1*x_2^2+x_4*x_5^2+x_7*x_8^2, x_0*x_2^2+x_3*x_5^2+x_6*x_8^2,
    x_1^2*x_2+x_4^2*x_5+x_7^2*x_8, x_0*x_1*x_2+x_3*x_4*x_5+x_6*x_7*x_8,
    x_0^2*x_2+x_3^2*x_5+x_6^2*x_8, x_1^3+x_4^3+x_7^3,
    x_0*x_1^2+x_3*x_4^2+x_6*x_7^2, x_0^2*x_1+x_3^2*x_4+x_6^2*x_7,
    x_0^3-x_4^3-x_5^3-x_7^3-x_8^3);
```
```
i61 : elapsedTime regularInCodimension(2, S/J)
 -- 6.13749 seconds elapsed

o61 = true

i69 : elapsedTime regularInCodimension(2, S/J, Strategy => StrategyPoints)
 -- 7.3259 seconds elapsed

o69 = true

i82 : elapsedTime regularInCodimension(2, S/J, Strategy => StrategyDefaultWithPoints)
 -- 1.61807 seconds elapsed

o82 = true
```
In this example, `elapsedTime isSmooth(J, Use=>Singular)` does not appear to terminate in a finite amount of time (well, ~10 minutes). I'll run it a while and respond again if it ever terminates.
A couple quick comments: the default setting in `regularInCodimension` seems to look at around 300 minors before deciding it's nonsingular. If you use `StrategyPoints`, it looks at fewer than a dozen (but figuring those out requires finding rational/geometric points, so on average it takes about the same time). On the other hand, the default settings don't always verify that this thing is nonsingular. You can increase the number of minors to compute, or change the strategy. In this particular example, `StrategyDefaultWithPoints` usually takes less than 2 seconds. It's a hybrid approach, finding a point for about a third of the minors and using heuristics the rest of the time. It definitely gives the best performance on this example (maybe it should be the default).
EDIT: It terminated with an out-of-memory error (the machine has 64GB of memory):

```
Singular error: no more memory
System 63173692k:63173692k Appl 62181052k/987114k Malloc -3652k/0k Valloc 63171820k/987114k Pages 15792955/0 Regions 31160:31160 Internal: 3740408
```
Thanks for the comments! That's definitely an interesting example! Is this algorithm always faster than directly computing the singular locus, or are there certain use cases where it works better?
Also I figured out that for the above example (or when we have a bunch of fat points) we can improve the performance of `minors` by using the `Cofactor` strategy. Even after cutting out this factor, though, Singular is still faster when computing the dimension of the singular locus (or computing dimension and degree in general).
In my experience, Bareiss is better when working with single-variable rings, or homogeneous things in 2 variables. Otherwise Cofactor is typically better, unless the matrix has a lot of symmetry or is sparse.
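For instance, the two strategies can be compared directly (an illustrative sketch only; the example matrix is made up here, and the timings will vary with the sparsity and symmetry of the matrix):

```
-- compare the two determinant strategies of minors on a dense example
S = ZZ/101[x_0..x_5];
M = matrix table(4, 4, (i,j) -> random(2, S));  -- 4x4 matrix of random quadrics
elapsedTime minors(3, M, Strategy => Bareiss);
elapsedTime minors(3, M, Strategy => Cofactor);
```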
Anyways, this algorithm does not compute the singular locus, but it finds an ideal inside the Jacobian ideal. Many times that's good enough (it certainly is to verify that something is nonsingular).
So what this algorithm does is try to compute only some of the minors of the Jacobian, check whether the partially computed singular locus satisfies the desired condition (has codimension > N, for instance), and if not, repeat (typically computing more and more minors between each codim check). How it chooses which minors depends on the `Strategy` option.
The default is some heuristics / random choices. Another strategy (`StrategyPoints`) is to tell it to find some rational/geometric points in the locus where the partial minors (plus the original ideal) vanish, evaluate the matrix at those points, and then find some interesting minors of the evaluated matrix. In some cases finding points is slow, but in this example, mixing in some points with the heuristics seems to work quite well.
The algorithm also typically gives up long before all minors have been computed (in which case the function returns `null`), but you can also tell it not to give up if you'd like (make sure your strategy has some random component, as the heuristics can enter a loop). It does not recompute minors it's already computed.
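In outline, the loop looks something like this (a minimal runnable sketch with purely random choices, not the actual `FastLinAlg` code; `regularInCodimSketch` is a hypothetical name):

```
-- sketch: add random c x c minors of the Jacobian one at a time, checking
-- whether the partial singular locus already has codimension > c + N;
-- for simplicity this checks codim after every minor, which recomputes a GB
-- each time -- the real code batches many minors between codim checks
regularInCodimSketch = (N, I, maxAttempts) -> (
    c := codim I;       -- minors of this size cut out the singular locus
    M := jacobian I;
    partial := I;
    for k from 1 to maxAttempts do (
        rows := take(random toList(0 .. numrows M - 1), c);
        cols := take(random toList(0 .. numcols M - 1), c);
        partial = partial + ideal det submatrix(M, rows, cols);
        if codim partial > c + N then return true);
    null)               -- gave up before verifying the condition
```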
In particular, this algorithm is massively (many orders of magnitude) faster than the default when affirmatively checking something is smooth (or verifying something about the codim of the singular locus affirmatively) but will be typically worse (although not hugely worse) when computing something negatively.
At some point in the future, this should be multi-threaded to get the best of all worlds, but multi-threading was too unstable when I wrote it (I haven't played around in 1.17 yet).
Wow, thanks a lot, this is really a great explanation!
@kschwede -- Which algorithm are you referring to when you say "this algorithm"?
Does it do something like this: sort the rows and columns so that an entry with minimal lead polynomial is in the upper left-hand corner, then do the same with the remainder of the matrix to put something small in position (2,2), and so on?
@DanGrayson
I mean the heuristic algorithms. Yes, that is roughly what it does.

1. First, it chooses a new random Lex or GRevLex order.
2. Then it chooses the smallest matrix entry (or the entry with the smallest term, depending on which strategy is being used) with respect to that order, eliminates that row and column, and repeats this step until a submatrix of the right size has been selected. For the GRevLex order, it also periodically multiplies an entry (in a dummy matrix) that was previously chosen by something of high degree, to guarantee that it is not chosen in the next round.
3. Finally, it computes the determinant of the minor we just picked and adds it to the running ideal.
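As a rough illustration of the greedy selection step (a sketch only; `greedyMinor` is a hypothetical name, and the random choice of monomial order in step 1 is omitted here):

```
-- sketch: greedily select a c x c submatrix by repeatedly picking the nonzero
-- entry with the smallest lead monomial, then retiring its row and column
greedyMinor = (M, c) -> (
    rows := toList(0 .. numrows M - 1);
    cols := toList(0 .. numcols M - 1);
    pickedRows := {}; pickedCols := {};
    for k from 1 to c do (
        best := null;  -- (row, column, entry) of the smallest entry seen so far
        for i in rows do for j in cols do (
            e := M_(i,j);
            if e != 0 and (best === null or leadMonomial e < leadMonomial best#2)
            then best = (i, j, e));
        pickedRows = append(pickedRows, best#0);
        pickedCols = append(pickedCols, best#1);
        rows = delete(best#0, rows);
        cols = delete(best#1, cols));
    det submatrix(M, pickedRows, pickedCols))
```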
Eh, here is something that must be a bug.

```
I = Grassmannian(1,4,CoefficientRing=>ZZ/3);
R = ring I;
J = I + ideal(random_1 R, random_2 R);
elapsedTime(S = singularLocus J; dim S;) -- 3.39217 seconds elapsed
elapsedTime dim singularLocus J; -- 26.4575 seconds elapsed
```
That is really strange.
That is strange! But it would be easy to find out what's going on: just set `errorDepth` to 0 and interrupt it while it's busy with all that junk to see (in the debugger) what it's doing.
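Concretely, the recipe would be something like this (a sketch, using the example above):

```
errorDepth = 0       -- enter the debugger at the innermost frame on error
dim singularLocus J  -- while this runs, interrupt it (Ctrl-C) and inspect
                     -- in the debugger where the time is being spent
```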
Well, the bug is really buried very deep and has nothing to do with the singular locus... The problem is with `try`: it gets slow when the thing involved has no name to refer to.
```
I = Grassmannian(1,4,CoefficientRing=>ZZ/3);
R = ring I; J = I + ideal(random_2 R);
time try (singularLocus J+1) -- used 11.6103 seconds
time try (S = singularLocus J; S+1) -- used 1.10086 seconds
```
When computing the dimension, at some point there is a check using `try`, so it slowed down the entire computation...
Edit: I removed the lines in `ringmap.m2`, so `dim singularLocus J` doesn't run so slow now. But I don't know how to fix the problem with `try`...