Closed bhutz closed 5 years ago
Author: paulfili, atowsley
Branch: u/bhutz/affine_patch_23807
I'm not sure that your solution is the best fix for the underlying problem. This discrepancy seems strange:
sage: AffineSpace(QQ,2) is AffineSpace(QQ,2)
True
sage: ProjectiveSpace(QQ,2) is ProjectiveSpace(QQ,2)
False
Maybe AffineSpace shouldn't be unique, for the same reasons ProjectiveSpace isn't?
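For readers unfamiliar with the mechanism being discussed, here is a minimal plain-Python sketch of what "unique representation" means (a toy model, not Sage's actual implementation; `MockAffineSpace` is a made-up name): constructing with equal arguments returns the identical object.

```python
import weakref

class UniqueRep:
    """Toy stand-in for Sage's UniqueRepresentation: equal
    construction arguments yield the identical object."""
    _cache = weakref.WeakValueDictionary()

    def __new__(cls, *args):
        key = (cls,) + args
        obj = cls._cache.get(key)
        if obj is None:
            obj = super().__new__(cls)
            cls._cache[key] = obj
        return obj

class MockAffineSpace(UniqueRep):
    def __init__(self, base, dim):
        self.base, self.dim = base, dim

A = MockAffineSpace("QQ", 2)
B = MockAffineSpace("QQ", 2)
print(A is B)  # True: same construction key, same object
```

With this pattern, `is` and `==` coincide for objects built from equal data, which is exactly what makes later mutation of such objects dangerous.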
Changing AffineScheme (yes, it's that high up) to UniqueRepresentation happened on:
https://github.com/sagemath/sage-prod/issues/17008
It was already predicted we'd be having unintended consequences from this and indeed here we are.
If we're keeping UniqueRepresentation, then we must make AffineSpace immutable. In affine_patch we have this line:
AA._default_embedding_index = i
(where AA is either a freshly constructed affine space or an affine space that was passed in). We can't make that change, so either affine_patch must be removed, or _default_embedding_index must be a construction parameter to AffineSpace (if the default is "None" and the projective closure of the space is asked, a projective space should be created and the right affine patch should be forced into the projective space -- clearly projective spaces are caching their affine patches, not the other way around). If AffineSpace stays UniqueRep, then it should probably cache projective space rather than the other way around. We already have a problem there now:
sage: P2.<x,y,z>=ProjectiveSpace(QQ,2);
sage: A=P2.affine_patch(2)
sage: A.projective_embedding() == P2
False
(the variable names get lost)
One could use
names = [repr(a) for a in self.gens()]
names.pop(i)
AA = AffineSpace(n, self.base_ring(), names=names, default_embedding_index=i)
to get generator names. Given that we cannot make assumptions on what the variable names of the projective space are, I don't think we can do much else than just dropping the variable name whose hyperplane we're removing. If you do want to have a modification, perhaps something like
[repr(a)+"_aff" for a in self.gens()]
or so.
EDIT: I think we're forced to choose between the following:
1. Undo UniqueRepresentation on AffineSpace.
2. Remove default_embedding entirely and accept that one cannot get back projective closures from affine patches (in fact that's already not really possible anyway).
3. Make ProjectiveSpace also into UniqueRepresentation.
The reason is that with unique rep affine spaces, we're always going to have an issue with:
sage: P2a.<x,y,z>=ProjectiveSpace(QQ,2)
sage: Aa=P2a.affine_patch(0)
sage: P2b.<x,y,z>=ProjectiveSpace(QQ,2)
sage: Ab=P2b.affine_patch(0)
With UniqueRepresentation, we're going to have that Aa is identical to Ab, because there's no info to distinguish the two, so there's no room for Aa and Ab to lead back to P2a and P2b respectively.
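The point can be made concrete with a toy model (plain Python, all names hypothetical): if the patch cache is keyed only on data that P2a and P2b share, the two patches collapse into one object, leaving no room for a back-pointer to either parent.

```python
import weakref

# Shared cache keyed only on (base, dim, index) -- nothing that
# distinguishes one parent projective space from another.
_patch_cache = weakref.WeakValueDictionary()

class MockPatch:
    pass

class MockProjSpace:
    """Two instances are equal-looking but non-identical parents."""
    def affine_patch(self, i):
        key = ("QQ", 2, i)        # key cannot see which parent asked
        patch = _patch_cache.get(key)
        if patch is None:
            patch = MockPatch()
            _patch_cache[key] = patch
        return patch

P2a, P2b = MockProjSpace(), MockProjSpace()
Aa, Ab = P2a.affine_patch(0), P2b.affine_patch(0)
print(Aa is Ab)  # True: one shared patch for two distinct parents
```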
Description changed:
---
+++
@@ -8,3 +8,5 @@
 True
 These patches are isomorphic, but should not be the same object.
+
+Finally, since this ticket is discovering modification of global immutable objects, I think it deserves status "major" at the very least.
Changed author from paulfili, atowsley to Ben Hutz
One correction as it affects choice (2): .projective_embedding is just the map, you are trying to compare to the codomain, and that works.
sage: P2.<x,y,z>=ProjectiveSpace(QQ,2);
sage: A=P2.affine_patch(2)
sage: A.projective_embedding().codomain() == P2
True
Replying to @bhutz:
One correction as it affects choice (2): .projective_embedding is just the map, you are trying to compare to the codomain, and that works.
sage: P2.<x,y,z>=ProjectiveSpace(QQ,2);
sage: A=P2.affine_patch(2)
sage: A.projective_embedding().codomain() == P2
True
Thanks! In fact:
sage: A.projective_embedding().codomain() is P2
True
That's what I originally expected to hold. It shows that A caches P2 somewhere, which makes the global modification stuff even worse.
The affine_patch function and the projective_embedding function update a dictionary object associated to the affine or projective space for caching of these associations. So making those spaces immutable would totally break that functionality.
I personally don't like using unique representation here, but was not involved in the changes you cited that added that. Is it even reasonable to consider undoing that?
Alternatively, it seems perhaps going unique everywhere is the next alternative. But I'm not sure how hard it would be to preserve the connections between embeddings and patches.
Replying to @bhutz:
I personally don't like using unique representation here, but was not involved in the changes you cited that added that. Is it even reasonable to consider undoing that?
Alternatively, it seems perhaps going unique everywhere is the next alternative. But I'm not sure how hard it would be to preserve the connections between embeddings and patches.
I think both are reasonable solutions. I don't like UniqueRepresentation either, so I'd have a slight preference for the former.
Here is another unfortunate current behavior
sage: A.<x,y>=AffineSpace(QQ,2)
sage: B.<x,y>=AffineSpace(QQ,2)
sage: A is B
True
sage: X = A.subscheme(x-y)
sage: Y = B.subscheme(x-y)
sage: X is Y
False
Similar example, but more complicated in construction
sage: P.<x,y,z>=ProjectiveSpace(QQ,2)
sage: P2.<x,y,z> = ProjectiveSpace(QQ,2)
sage: X = P.subscheme(x-y)
sage: Y = P2.subscheme(x-y)
sage: X == Y, X is Y
(True, False)
sage: A = X.affine_patch(0)
sage: B = Y.affine_patch(0)
sage: A == B, A is B, A.ambient_space() is B.ambient_space()
(True, False, True)
sage: B.projective_embedding().codomain() == Y
True
sage: B.projective_embedding().codomain() is Y
False
sage: B.projective_embedding().codomain().ambient_space() is P2
True
Adding Peter Bruin, since the effects seen here are a direct result of his modifications on #17008. He might have an idea on how to fix this.
Options I see:
1. Undo UniqueRepresentation on (some) schemes and deal with ensuing problems in another way.
2. Go all-the-way on making schemes UniqueRepresentation, which is going to take a lot of design consideration because it means they really need to be immutable.
I'd expect that anything in between is going to be very hard to reason about (as it turns out: anything involving UniqueRepresentation is hard to reason/program with in practice, because of Python's imperative streak: people get used to modifying objects).
EDIT: The real problem I see on #17008 is that consistent production of point sets relies on the fact that schemes.generic.scheme.point_homset is cached. So, in SchemeHomset_generic.__reduce__ we need to do something along the lines of:
if <self> is the result of self.codomain().pointset(...):
return PointSet,(self.codomain(),self.domain().ring_this_is_spec_of())
else:
return what we did before
in order to give the cache a shot at kicking in. Once we do that, the UniqueRepresentation from #17008 shouldn't be necessary for the problem encountered there.
Note that this applies to all cached routines: once you cache something, you should rewrite the pickler to allow a chance for the cache to do the lookup for you.
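A minimal plain-Python sketch of this pickling idea (hypothetical names, not the actual Sage code): the object's __reduce__ routes reconstruction through the cached factory, so unpickling lands on the already-cached instance rather than building a fresh copy.

```python
import pickle

_cache = {}

def point_homset(codomain, ring):
    """Hypothetical cached factory, standing in for the cached
    point_homset method mentioned above."""
    key = (codomain, ring)
    if key not in _cache:
        _cache[key] = Homset(codomain, ring)
    return _cache[key]

class Homset:
    def __init__(self, codomain, ring):
        self.codomain, self.ring = codomain, ring

    def __reduce__(self):
        # Reconstruct via the factory, not via __init__ directly,
        # so the cache gets a chance to return the existing object.
        return (point_homset, (self.codomain, self.ring))

H = point_homset("P2", "QQ")
H2 = pickle.loads(pickle.dumps(H))
print(H2 is H)  # the cache kicked in on unpickling
```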
An argument FOR some UniqueRepresentation: the construct AffineScheme is really just Spec. Every ring HAS a spec associated with it, so it would be defensible to make sure that the Spec of a ring can be obtained from the ring reliably. This could be done by caching the thing on the ring and obtaining it from there, but since it's possible to have a ring without a spec but not its spec without the ring, the spec should probably be allowed to be GCed even when the ring is still around. A UniqueRepresentation construction does that.
If we go this route, then obviously we need a unique polynomial ring for each affine patch for each projective space that we construct. I doubt we can key polynomial rings with projective spaces ... (that's a memory leak waiting to happen). The possibility for registering a new default embedding on a spec would need to go away.
Sure, I see that line of reasoning, and that is essentially what the branch on this ticket is doing (fixing the naming of the affine patches).
But if we go this route (which I'm not yet convinced of) then we'd still need to think about what things should be immutable and how that affects the connection between patches and embeddings. Not to mention fixing some of those examples involving subschemes which should also be unique by the same argument.
I'd still like to see peter's response concerning adding uniquerepresentation before making a plan.
Replying to @bhutz:
Sure, I see that line of reasoning, and that is essentially what the branch on this ticket is doing (fixing the naming of the affine patches).
Yes, but see the example in comment:3. Just keying on variable names makes things slightly better, but not sufficient. Even putting the ProjectiveSpace as part of the construction keys for the affine patches wouldn't save us because
sage: ProjectiveSpace(QQ,1)==ProjectiveSpace(QQ,1)
True
UniqueRepresentation is like a virus ...
But if we go this route (which I'm not yet convinced of) then we'd still need to think about what things should be immutable and how that affects the connection between patches and embeddings. Not to mention fixing some of those examples involving subschemes which should also be unique by the same argument.
I'm not so sure that the same argument applies: While subschemes of affine schemes may be affine again (and hence a spec of something), that's not how they are constructed in your examples. Since they are not primarily "the spec" of a ring to begin with, I don't think there's the same urgency. Obviously, spec of the coordinate ring would hopefully give the subscheme back, but I'm not sure we're supporting that, and I think we could live without that (for now).
That being said, it seems that having some schemes UR and others not is going to be incredibly painful.
A useful way to store the patches/embeddings, which allows us to keep UniqueRepresentation for affine schemes (I am strongly in favour of keeping unique representation for the reason Nils mentioned in comment:10), would be the "structure" construction that is currently used for number fields (introduced by Julian Rüth in #11670).
The idea of "structure" is the following: number fields have unique representation, but the defining data of a number field includes an optional description of how it relates to another number field. In this way, you can create a number field either on its own or (for example) as a subfield of a different number field. See sage/rings/number_field/structure.py for details.
Analogously, we could use this construction to create, say, both an affine space on its own and a different affine space that is abstractly the same as the first one but is equipped with the extra structure of being an affine patch of a given projective space, or a closed subscheme of another scheme.
One advantage is that this would make it easier to reason about our objects, as well as to pickle them, because it is specified exactly what the defining data of a given scheme is. We do probably have to think about where/how to store the "structure" in such a way that we avoid memory leaks.
Replying to @pjbruin:
A useful way to store the patches/embeddings, which allows us to keep UniqueRepresentation for affine schemes (I am strongly in favour of keeping unique representation for the reason Nils mentioned in comment:10), would be the "structure" construction that is currently used for number fields (introduced by Julian Rüth in #11670).
That construction is a complete memory leak. For instance:
sage: u=id(NumberField(x^2+1,name="a").absolute_field('b'))
sage: import gc
sage: gc.collect()
65
sage: [type(v) for v in gc.get_objects() if id(v)==u]
[<class 'sage.rings.number_field.number_field.NumberField_quadratic_with_category'>]
This leaks for the following reason. If we do:
sage: K.<a>=NumberField(x^2+1)
sage: L.<b>=K.absolute_field()
sage: L._structure._reduction[1][0] is K
True
We see that L has a strong link to K. So L will now keep K alive. What's more, via sage.rings.number_field.number_field.NumberField_version2._cache there's now a strong global link to K as a key in a WeakValueDictionary with value L, so as long as L is alive, K will be reachable (and not just part of a cycle).
So, if K somehow references L then K and L will not be collected. And here it is:
sage: K._cache__absolute_field.values()[0] is L
True
Note that in principle it's fine for L and K to reference each other: the cyclic garbage collector will find those dead cycles just fine. The problem is that by using any of L and K as part of a key in a UniqueFactory or similar, you're getting a strong reference to the cycle. So the key now keeps the value in the WeakValueDictionary alive.
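The mechanism is easy to reproduce in plain Python (a toy model, not the number field code itself): a WeakValueDictionary whose key strongly references its own value pins that value forever, because the entry is only evicted when the value dies, and the value cannot die while the key holds it.

```python
import gc
import weakref

class Node:
    pass

cache = weakref.WeakValueDictionary()

k, v = Node(), Node()
k.partner = v     # key reaches value (like L referencing K above)
v.partner = k     # ordinary cycle the other way; gc could handle this
cache[k] = v      # but the cache holds k strongly, k holds v...

ref = weakref.ref(v)
del k, v
gc.collect()
print(ref() is not None)  # True: v is pinned by the cache entry
```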
An important part of the problem is a double caching strategy: UniqueFactory already provides caching; absolute_field implements its own cache. With the introduction of structure, the first caching has been made a little better, but that now clashes with the second caching.
The situation might be slightly better if absolute_field were non-caching, but it could well be that the required computations to arrive at the key under which the other cache operates are now too expensive. One of the reasons why UniqueRepresentation etc. is so insidious is because it forces a caching strategy in one place, which precludes caching from being used elsewhere.
In this case a solution could be to implement absolute_field as:
@cached_method
def compute_absolute_field_data(...):
    ...

def absolute_field(...):
    data = self.compute_absolute_field_data(...)
    L = NumberField(data,...)
    return L
so that the expensive data gets stored without creating or referencing the field that can be constructed with it.
Replying to @nbruin:
Replying to @pjbruin:
A useful way to store the patches/embeddings, which allows us to keep UniqueRepresentation for affine schemes (I am strongly in favour of keeping unique representation for the reason Nils mentioned in comment:10), would be the "structure" construction that is currently used for number fields (introduced by Julian Rüth in #11670).
That construction is a complete memory leak.
Ouch. It seems that neither the author nor the reviewers of #11670 (including me) paid attention to memory leaks.
In this case a solution could be to implement absolute_field as:
@cached_method
def compute_absolute_field_data(...):
    ...
def absolute_field(...):
    data = self.compute_absolute_field_data(...)
    L = NumberField(data,...)
    return L
so that the expensive data gets stored without creating or referencing the field that can be constructed with it.
This is already how absolute_field() is implemented; data is the absolute defining polynomial, which is computed and cached by the method absolute_polynomial(). Removing the @cached_method decorator from NumberField_{generic,relative}.absolute_field() solves the memory leak in your example:
sage: u=id(NumberField(x^2+1,name="a").absolute_field('b'))
sage: import gc
sage: gc.collect()
53
sage: [type(v) for v in gc.get_objects() if id(v)==u]
[]
Replying to @pjbruin:
Replying to @nbruin:
Replying to @pjbruin:
A useful way to store the patches/embeddings, which allows us to keep UniqueRepresentation for affine schemes (I am strongly in favour of keeping unique representation for the reason Nils mentioned in comment:10), would be the "structure" construction that is currently used for number fields (introduced by Julian Rüth in #11670).
That construction is a complete memory leak.
Ouch. It seems that neither the author nor the reviewers of #11670 (including me) paid attention to memory leaks.
Whenever you see a CachedRepresentation or a UniqueFactory that has included in its input parameters objects that are similar to what is returned, you can expect memory leaks (and I really mean, you should expect to be able to easily create an example as above). That is one of my main reasons to not like it. It's very difficult to use it in non-trivial examples and not create memory leaks. Basically, input to these constructors should consist of just strings and integers. If you can't do that, it's a good sign you should perhaps not use it, because someone will create a memory leak with it in the near future.
Fix suggested above now at #23851.
Hi, I am working on this ticket as a part of Google Summer of Code '18. Can we start the discussion again? I need some details to get started. I suggest using sage-devel in parallel to trac for the discussion.
Ideally, I'd still like to see UniqueRepresentation removed from affine schemes. Nils sketched a possible alternative to the issue in #17008 in comment 9. Peter, does that seem like the details might be workable for the #17008 issue eliminating the need for UR for affine space?
Replying to @bhutz:
Ideally, I'd still like to see UniqueRepresentation removed from affine schemes. Nils sketched a possible alternative to the issue in #17008 in comment 9. Peter, does that seem like the details might be workable for the #17008 issue eliminating the need for UR for affine space?
I find that hard to predict; the only way to find that out is by trying. Personally I don't trust the idea of modifying mathematical objects after construction (except for caching purposes), since this is very likely to introduce bugs. An example I just found:
sage: X.<x,y> = toric_varieties.A2_Z2()
sage: Y = X.subscheme([x*y, x^2])
sage: Y.is_smooth([0, 0])
True
sage: Y.embedding_center()
[0 : 0]
sage: Y.is_smooth([0, 1]) # this test changes properties of Y!
True
sage: Y.embedding_center()
[0 : 1]
My ideal world would be one where it is made completely explicit what the defining data of a mathematical object are, where all properties of an object can be determined from these defining data, and where these defining data are immutable. From that perspective, using UniqueRepresentation or UniqueFactory is a way to force oneself to be careful and consistent about these defining data, and undoing it would be a step backwards.
As a more constructive contribution, I made a branch u/pbruin/23807-proposal to sketch the kind of solution that I have in mind.
Thanks Peter. I agree that it is possible to get the patching/embedding to work (maybe even without memory leaks) while keeping UR. I'm just not familiar enough with the kind of issue in #17008 to know if that fix is reasonable. From your response it sounds like for this ticket perhaps both fixes should be attempted and see what issues crop up and proceed from there.
Changed branch from u/bhutz/affine_patch_23807 to u/raghukul01/affine_patch_23807
New commits:
88f82ca | Upgrade cysignals to 1.7.2 |
885f1f9 | adjust for minor system-dependent floating point variation on this test; #25815 |
208364a | Updated [SageMath](../wiki/SageMath) version to 8.3.rc2 |
c6cfd94 | Trac 23807: proposal for making projective embedding part of defining data |
b08f802 | Trac 23807: get projective embedding of affine subscheme from ambient affine scheme |
812722b | 23807: Some modifications |
Changed keywords from none to sagedays@icerm
Branch pushed to git repo; I updated commit sha1. New commits:
91862d6 | 23807: UniqueRepresentation added for ProjectiveSpaces |
Branch pushed to git repo; I updated commit sha1. New commits:
5e70658 | 23807: Modified generator names for affine_patch |
Changed keywords from sagedays@icerm to sagedays@icerm, gsoc2018
Apart from documentation updates this branch is ready for testing.
Doctest failures
sage -t --long src/sage/dynamics/arithmetic_dynamics/projective_ds.py # 5 doctests failed
sage -t --long src/sage/categories/homset.py # 2 doctests failed
See also the patchbot.
A couple of other comments:
In the algebraic subscheme:
def __init__(self, A, polynomials, ambient_projective_space=None,
default_embedding_index=None):
Do we want to set up a ProjectiveScheme class like the AffineScheme class?
The default should be immutable:
P.<x,y,z>=ProjectiveSpace(QQ,2)
A.<u,v> = AffineSpace(2,QQ,ambient_projective_space=P,default_embedding_index=0)
A._default_embedding_index=4
A.projective_embedding()
Yes exactly, why do we only have an AffineScheme class?
As for immutable objects, I think that issue is present in most of the classes as well; consider:
sage: P = ProjectiveSpace(QQ,3)
sage: P.coordinate_ring()
sage: P._coordinate_ring = QQ
sage: P.coordinate_ring()
I don't really know how to make variables immutable in Sage; there is a set_immutable() function for Sequences, but I don't think it would work here.
ambient_projective_space=None, default_embedding_index=None added in Subscheme.
There is one issue with the change in generator names; consider:
sage: P.<x,y> = ProjectiveSpace(QQ, 1)
sage: f = DynamicalSystem_projective([y^2, x^2])
sage: f.dehomogenize(tuple([0,1]))
Replying to @raghukul01:
- As of immutable objects, I think that issue is in most of the classes as well, consider
sage: P = ProjectiveSpace(QQ,3)
sage: P.coordinate_ring()
sage: P._coordinate_ring = QQ
sage: P.coordinate_ring()
Most objects are semantically immutable, not syntactically (i.e., it does not raise an error when you mutate them). The user is not supposed to access _coordinate_ring because it starts with a leading underscore. So as far as the user is concerned, the object is immutable, and you should not be changing these things in code (which is part of the problem here).
I don't really know how to make variables immutable in Sage; there is a set_immutable() function for Sequences, but I don't think it would work here.
As mentioned above, this is basically a red herring and irrelevant.
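For the record, syntactic immutability can be enforced in plain Python if one ever wanted it (a sketch only, not a proposal for Sage's class hierarchy; FrozenSpace is a made-up name): freeze the instance at the end of __init__ and reject attribute writes afterwards.

```python
class FrozenSpace:
    def __init__(self, base, dim):
        self.base = base
        self.dim = dim
        self._frozen = True   # writes allowed only before this point

    def __setattr__(self, name, value):
        if getattr(self, "_frozen", False):
            raise AttributeError(f"{type(self).__name__} is immutable")
        super().__setattr__(name, value)

P = FrozenSpace("QQ", 3)
try:
    P._coordinate_ring = "QQ"   # the kind of mutation shown above
except AttributeError as e:
    print("blocked:", e)
```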
Branch pushed to git repo; I updated commit sha1. New commits:
da9465f | Merge branch 'u/raghukul01/affine_patch_23807' of git://trac.sagemath.org/sage into affine_patch_23807 |
b94b7b7 | 23807: Corrected some doctests |
da804ce | Merge branch 'affine_patch_23807' into affine_patch_23807_rc3 |
34ed3e3 | 23807: Corrected dehomogenize in projective dynamics |
Corrected doctests
Reviewer: Ben Hutz, Travis Scrimshaw
Changed author from Ben Hutz to Ben Hutz, Peter Bruin, Raghukul Raman
I did a look through the code as well as some notebook testing. I didn't come up with anything significant from a usage perspective. Here are a couple minor comments. Someone more familiar with the memory management issues of UR should take a look at this as well.
The doc string
If the dehomogenized morphism is an endomorphism then a
:class:`DynamicalSystem_affine` given by dehomogenizing the
source and target of `self` with respect to the given indices.
If it is not an endomorphism then the morphism is returned.
could be simpler. It is totally dependent on the indices, so something more like: "If the dehomogenizing indices are the same for the domain and codomain, then a dynamical system is returned. If the dehomogenizing indices for the domain and codomain are different, then the resulting affine patches are different and a scheme morphism is returned."
Also I'd rather have a specific check than a blanket try/except
try:
return F.as_dynamical_system()
except:
return F
AffineSpace(n, R=None, names='x', ambient_projective_space=None, default_embedding_index=None)
so now UR is keyed off of not just the base ring and the variable names, but also the embedding index and the embedding codomain. Does this now violate the discussion above where UR was justified by each coordinate ring giving a unique space? I can essentially now make as many different Spec(QQ[x,y]) as I'd like all different.
Something is wrong here (~line 710 of affine_space.py):
if PP is None:
PP = self._ambient_projective_space
if PP is None:
Quick comment: do not have a bare except: statement, as it can also catch things like keyboard interrupts. You should instead do except Exception:.
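A small plain-Python demonstration of the difference: a bare `except:` also swallows BaseException subclasses such as KeyboardInterrupt and SystemExit, whereas `except Exception:` lets them propagate.

```python
def swallow_everything():
    try:
        raise KeyboardInterrupt   # simulate Ctrl-C mid-call
    except:                       # bare except: the interrupt vanishes
        return "swallowed"

def swallow_errors_only():
    try:
        raise KeyboardInterrupt
    except Exception:             # KeyboardInterrupt is NOT an Exception
        return "swallowed"

print(swallow_everything())       # swallowed

try:
    swallow_errors_only()
    propagated = False
except KeyboardInterrupt:
    propagated = True
print(propagated)                 # True: the interrupt got through
```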
I don't see anything outright that would cause a memory leak. Is there some infinite sequence that would be a reasonable test?
I agree with Ben's comment:36 overall.
so now UR is keyed off of not just the base ring and the variable names, but also the embedding index and the embedding codomain. Does this now violate the discussion above where UR was justified by each coordinate ring giving a unique space? I can essentially now make as many different Spec(QQ[x,y]) as I'd like all different.
In principle, I don't think you can create different Specs from the construction parameters (unless you did something evil, probably involving some monkey-patching too). This is because the two extra parameters default to None
and do not change. Now you will get unequal (because they are non-identical) affine patches if they have different embeddings/projective spaces, but from the ticket description, I am inferring they are only isomorphic, which is okay. However, I don't know the math, so I might need a little more detail about what sort of behavior you expect.
I do have some other comments on the branch:
- You should not change what the test in homset.py is actually testing. So you need to find a new parent that is not unique and construct its End. This is something I might be able to do, but hopefully someone following this ticket knows a good example.
- In affine_subscheme.py, your added __init__ is missing doctests.
- A while-we-are-at-it: can you change the doc of specialization in generic/morphism.py to be properly formatted (by de-indenting the ::)?
- Your __classcall__ is missing doctests (create two objects with not-identical but equivalent entries and show they are the same by using is).
This change is suspicious to me:
@@ -276,11 +276,8 @@ class AlgebraicScheme_subscheme_projective(AlgebraicScheme_subscheme):
phi = AA.projective_embedding(i, PP)
polys = self.defining_polynomials()
xi = phi.defining_polynomials()
- U = AA.subscheme([ f(xi) for f in polys ])
- U._default_embedding_index = i
- phi = U.projective_embedding(i, PP)
+ U = AA.subscheme([f(xi) for f in polys])
self.__affine_patches[i] = U
- U._embedding_morphism = phi
return U
def _best_affine_patch(self, point):
Is the default embedding index and morphism of U set automatically? Before it didn't seem like it did, and I am not sure your changes make it so.
You can simplify this:
-gens = [g[j] for j in range(i)] + [g[j] for j in range(i+1, n+1)]
+gens = g[:i] + g[i+1:]
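A quick plain-Python sanity check that the suggested slicing form is equivalent to the original double comprehension for dropping generator i:

```python
g = ["x0", "x1", "x2", "x3"]
n = len(g) - 1
for i in range(n + 1):
    old = [g[j] for j in range(i)] + [g[j] for j in range(i + 1, n + 1)]
    new = g[:i] + g[i + 1:]
    assert old == new
print("equivalent for every i")
```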
With passing in AA to affine_patch, what if AA._default_embedding_index is not equal to i? This feels like it will put things into an inconsistent state, as that corresponding AA is stored and used again. (Side note: this lazily constructed __affine_patches = {} makes the code way more complicated than it needs to be for no gain IMO.)
Don't you need to make some changes in affine_space.py for projective_embedding to take into account the _default_embedding_index and _ambient_projective_space?
I don't understand why you added this:
+ i = default_embedding_index
+ if i is not None:
+ i = int(i)
Such a check should be done in the AffineSpace_generic.__init__ method because that is where it is "used" (basically a code-locality reason).
Instead of having a check like this:
@@ -185,7 +190,7 @@ class AlgebraicScheme_subscheme_affine(AlgebraicScheme_subscheme):
n = AA.dimension_relative()
if i is None:
try:
- i = self._default_embedding_index
+ return self._embedding_morphism
except AttributeError:
i = int(n)
else:
It would be better to set self._embedding_morphism = None in the __init__ and then check:
if self._embedding_morphism is not None:
return self._embedding_morphism
else:
i = int(n)
It is easier to understand the program flow and makes it easier to debug (mainly when people make typos).
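The trade-off can be seen in a plain-Python toy model (hypothetical class names): the try/except-AttributeError version silently masks typos in the attribute name, while the explicit None default keeps the control flow honest.

```python
class WithTryExcept:
    def embedding(self):
        try:
            return self._embeding_morphism   # note the typo!
        except AttributeError:
            return "recomputed"              # typo silently lands here

class WithNoneDefault:
    def __init__(self):
        self._embedding_morphism = None

    def embedding(self):
        if self._embedding_morphism is not None:
            return self._embedding_morphism
        return "recomputed"

a = WithTryExcept()
a._embedding_morphism = "cached"   # set under the *correct* name
print(a.embedding())               # recomputed: the cache is ignored

b = WithNoneDefault()
b._embedding_morphism = "cached"
print(b.embedding())               # cached
```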
Replying to @tscrim:
This change is suspicious to me:
@@ -276,11 +276,8 @@ class AlgebraicScheme_subscheme_projective(AlgebraicScheme_subscheme):
         phi = AA.projective_embedding(i, PP)
         polys = self.defining_polynomials()
         xi = phi.defining_polynomials()
-        U = AA.subscheme([ f(xi) for f in polys ])
-        U._default_embedding_index = i
-        phi = U.projective_embedding(i, PP)
+        U = AA.subscheme([f(xi) for f in polys])
         self.__affine_patches[i] = U
-        U._embedding_morphism = phi
         return U
     def _best_affine_patch(self, point):
Is the default embedding index and morphism of U set automatically? Before it didn't seem like it did, and I am not sure your changes make it so.
The affine_patch function takes the embedding index as input; unlike Affine_Space, in Affine_Subschemes this embedding index is a necessary input, i.e. it cannot be None, so there is no need for _default_embedding_index.
With passing in AA to affine_patch, what if AA._default_embedding_index is not equal to i? This feels like it will put things into an inconsistent state as that corresponding AA is stored and used again.
I don't really know how to deal with that; someone above on the ticket suggested that AA as a parameter should be removed, to which I agree.
(Side note: this lazily constructed __affine_patches = {} makes the code way more complicated than it needs to be for no gain IMO.)
The construction of most of these variables is done in a similar way; is there a better method?
Don't you need to do some changes in affine_space.py for projective_embedding to take into account the _default_embedding_index and _ambient_projective_space?
I have taken them into account; is there any specific change which I am missing?
Currently
These patches are isomorphic, but should not be the same object.
Finally, since this ticket is discovering modification of global immutable objects, I think it deserves status "major" at the very least.
CC: @pfili @sagetrac-atowsley @pjbruin @tscrim
Component: algebraic geometry
Keywords: sagedays@icerm, gsoc2018
Author: Ben Hutz, Peter Bruin, Raghukul Raman
Branch/Commit:
e67687d
Reviewer: Ben Hutz, Travis Scrimshaw
Issue created by migration from https://trac.sagemath.org/ticket/23807