mahrud opened 7 months ago
Yikes! I'll revert the changes from #3013.
I don't think we should revert either #2766 or #3013, or remove `reduce` from more places (at least not yet); rather, I think we need to discuss a convention on whether matrices should be reduced or not.
I can think of several options. One is exposing `reduce` as a function that one can use to get a normal form for matrices. Personally, I'd say reduce only before checking equality and expose a `reduce` function. Obviously, in that circumstance the reduction should be cached.
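For illustration, here's a rough sketch of what the cached reduction might look like, using the internal `reduce(Module, RawMatrix)` from `Core` that appears later in this thread; the helper name `reducedForm` is hypothetical, not an existing API:

```m2
debug Core  -- reduce(Module, RawMatrix) is internal to Core

-- hypothetical helper: compute the reduced form of f once and cache it
reducedForm = f -> (
    c := f.cache;
    if c.?ReducedForm then c.ReducedForm
    else c.ReducedForm = map(target f, source f, reduce(target f, raw f)))
```

This way, equality tests could call `reducedForm` and pay for the gb at most once per matrix.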
As another plausible proposal (although not necessarily a good one): rather than doing a reduce every time, sometimes it might be more efficient to compute `Hom(M,N)` and do the reduction there, since that only involves a single gb per pair of modules, right? It would still be slower than not reducing, since you still have to do a reduction step, just without a gb. Also, I would expect that the gb for `Hom(M,N)` would be more expensive than any one reduction for `map(N,M,...)`, so it's not clear to me whether this would help overall.
I'm not sure if I understood your suggestion. What are M and N in `Hom(M,N)`? Assuming that N is the target of the matrix being reduced, then in the example above, wouldn't computing a gb to compute `Hom(M, module I)` for each M be slower than computing just one gb for `module I`?
Incidentally, I discovered that `Hom(R^1, module I)` also attempts to compute a gb for `I`, but if you remove the reduce hook (e.g. using the `addHook` line above) then it's instantaneous, as it should be 🙃
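A minimal way to observe this (with an arbitrary illustrative ideal, not the original example):

```m2
R = QQ[x,y,z];
I = ideal(x^2, x*y, y^2);
-- in the affected versions, this call triggers a gb computation for I
elapsedTime Hom(R^1, module I)
```

With the hook removed, the same call should return immediately.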
I think I'm also leaning towards exposing `reduce` and not reducing unless a gb for the target is already cached or is fast to compute (e.g. over `ZZ`), and certainly never reducing if there are no relations that would affect anything!
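The no-relations check is cheap; a sketch of such a guard, assuming we only look at the presentation of the target (the helper name is hypothetical):

```m2
-- hypothetical guard: reduction can only change anything
-- when the target module actually has relations
needsReduction = f -> relations target f != 0
```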
If you have many maps between a pair of modules M,N, then plausibly computing the one gb would be faster, in particular for the sort of arithmetic that was being done in #3012 but with more complex modules and more maps. But as you noted this is likely slower in the case where you only care about one map, and I'd suspect that's the more common case.
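The tradeoff could be measured directly; a hypothetical benchmark sketch (module chosen for illustration only):

```m2
R = QQ[x,y,z];
M = R^1/ideal(x^2, x*y, y^2);
elapsedTime gb relations M;  -- the one gb needed to reduce any map into M
elapsedTime Hom(R^1, M);     -- versus the gb needed for the Hom module
```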
@mikestillman What is your opinion on this? I think we need a solution before the next release.
Here is what I recall; I'm not completely sure it matches what the issue is now. This is something @DanGrayson and I agonized about years ago, and we decided that reducing for multiplication is good, but for addition, the ideal had to be pretty trivial for it to matter, and it slowed things way down. So we opted to not reduce for sums. Is that where the problem is now? @d-torrance @mahrud ?
In any case, for the next release we don't want a monumental slowdown!
Yes, the problem is for addition. A simple example:
```
i1 : M = ZZ^1/ideal 2

o1 = cokernel | 2 |

                              1
o1 : ZZ-module, quotient of ZZ

i2 : v = map(M, ZZ^1, 1)

o2 = | 1 |

                    1
o2 : Matrix M <-- ZZ
```
What should `v + v` return? In 1.22, it returns `| 2 |`, which seems wrong. On the development branch, where addition now uses `reduce`, it returns `0`, which seems correct, but at the cost of slowing everything else down. :)
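Note that equality testing has to reduce anyway to be mathematically correct, so under the "reduce only before checking equality" proposal one would still expect the comparison itself to come out right even when the sum is stored unreduced:

```m2
-- continuing the session above: 2 is a relation in M,
-- so this should be true even if the entries print as | 2 |
v + v == 0
```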
It's not just addition, but also `submatrix`, and more generally whenever `map(Module, Module, Matrix)` is used.
I feel like reducing for multiplication but not for addition is a pretty arbitrary choice (e.g. maybe for certain applications this is fine, but for some others this would be just as much of a pain).
I found an example where, despite there being no relations in the module, the matrix looks incorrect without calling `reduce`:
```
restart
debug Core
needsPackage "Truncations"
R = QQ[a,b,c];
D = res ideal(a*b, a*c, b*c, a^2-b^2)
f = truncate(3, D.dd_1)
g = truncate(3, D.dd_2)
r = map(target f, source g, raw f * raw g)
0 == r                               -- false
0 == map(R, reduce(target r, raw r)) -- true
```
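For context, the untruncated composite vanishes because D is a resolution (consecutive differentials compose to zero), so the truncated composite should be the zero map as well:

```m2
-- continuing the session above: D is a complex, so this is true
D.dd_1 * D.dd_2 == 0
```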
Here are `f`, `g`, and `r = f * g`, and what the target module is:
```
i50 : f, g, map(R, raw f * raw g)

o50 = ({3} | 0 0 0 0 0 0 0 0 0 0 0 0 |, {3} | -1 0 0 0 |, {3} | 0 0 0 0 |)
       {3} | 0 0 0 0 0 0 0 0 0 1 0 0 |   {3} | 0 0 0 0 |   {3} | 0 0 0 0 |
       {3} | 0 0 0 0 0 0 1 0 0 0 0 0 |   {3} | 0 0 0 -b |  {3} | 0 0 0 0 |
       {3} | -1 0 0 0 0 0 0 0 0 0 1 0 |  {3} | 0 -1 0 0 |  {3} | 0 0 0 0 |
       {3} | 0 0 0 1 0 0 0 1 0 0 0 1 |   {3} | 0 0 0 -b |  {3} | 0 0 0 0 |
       {3} | 1 0 0 0 0 0 0 0 1 0 0 0 |   {3} | 0 0 0 a |   {3} | 0 0 0 0 |
       {3} | 0 -1 0 0 0 0 0 0 0 0 0 0 |  {3} | 0 0 0 0 |   {3} | 0 0 0 0 |
       {3} | 0 0 -1 0 1 0 0 0 0 0 0 0 |  {3} | 0 1 -1 0 |  {3} | 0 0 0 0 |
       {3} | 0 1 0 0 0 1 0 0 0 0 0 0 |   {3} | 1 0 0 0 |   {3} | 0 0 0 a |
       {3} | 0 0 1 0 0 0 0 0 0 0 0 0 |   {3} | 0 0 0 0 |   {3} | 0 0 0 -b |
                                          {3} | -1 0 0 0 |
                                          {3} | 0 0 1 0 |

i51 : target r

o51 = image | c3 bc2 ac2 b2c abc a2c b3 ab2 a2b a3 |
```
Since `a*(a^2*b) - b*(a^3) = 0`, it makes sense why we want `r` to be the zero map...
Since #3013 was merged, several pieces of code I use have all ground to a halt. Previously, a change I introduced in #2766 caused `submatrix` to call `reduce`, which slowed down `submatrix` when the target of a matrix needed a gb computation, even if the computation was trivial (see #2898). Here's an example:

This used to be instantaneous in 1.21.
Now, this happens in many more cases, even when there are no relations involved (which was the initial reasoning in #3012). These simple operations should all be O(1), but they each induce a gb computation:
To see that these are all caused by calls to `reduce`, try running this and then repeating the commands above:

I've had to add the lines above to my `init.m2`, otherwise I can't do anything.

Tangentially related to this, here's another method that should be trivial, but actually runs an unnecessary Groebner basis computation on the target: