Closed pjbruin closed 10 years ago
Branch pushed to git repo; I updated commit sha1. This was a forced push. Recent commits:

| Commit | Description |
|---|---|
| 4265b4f | do not check twice for irreducibility |
| 555055b | matrix2.pyx: no special case for finite fields in F.algebraic_closure() |
| 7f7ed5b | rename/fix change_level(); new doctests |
| 2de16f2 | add a _latex_ method to the elements |
| 4455c84 | fix comparison of pseudo-Conway polynomial trees |
Replying to @defeo:
Replying to @pjbruin:
Good idea, I'll rebase this ticket and then disable the check.
Why "rebase"? Maybe you meant "merge master"?
I saw your comment only after the above Git operation; I may just have committed the sin of rewriting history and/or have messed things up for you. I'm still more used to Mercurial, which is why rebasing sounded like the obvious thing to do. Besides, it turned out to remove some duplicate commits which were in the previous version of the branch (I noticed this before).
Replying to @pjbruin:
it turned out to remove some duplicate commits which were in the previous version of the branch (I noticed this before).
Although, if I read the `git-rebase` man page correctly, merging our branches will now create even more duplicates because of my rebase. What is the best solution for this?
Although, if I read the `git-rebase` man page correctly, merging our branches will now create even more duplicates because of my rebase. What is the best solution for this?
What do you mean by "duplicate commits"?
It's not a big deal for me, I can easily rebase my branch on yours. It's not meant to enter Sage anyway: it's just some experiments.
But it might be a problem for Vincent, if you still want to share work. If you want to go back in time, you can do something like this (make sure you have no uncommitted work):
git reset --hard 5fe4189
git cherry-pick 4265b4f
git merge master
git push -f
assuming the only commit that really did something was the last one, 4265b4f.
Anyway, as I said, I don't care. If this history is better for you, it's ok for me. I'll just rebase on top of your new history. Just try this if Vincent needs it.
Replying to @defeo:
What do you mean by "duplicate commits"?
The previous branch looked (on the Trac "commits" page) like two parallel and unconnected linear "chains" of commits, with all commits in the first also appearing in the other. When I was rebasing this, Git spit out several messages saying something to the effect that the patch had already been applied (in some cases I had to resolve a conflict first). After the rebasing, there was only one chain of commits left.
It's not a big deal for me, I can easily rebase my branch on yours. It's not meant to enter Sage anyway: it's just some experiments.
But it might be a problem for Vincent, if you still want to share work.
OK, let's leave it as is unless it creates trouble for Vincent (or someone else who might have been using this branch).
Replying to @pjbruin:
The previous branch looked (on the Trac "commits" page) like two parallel and unconnected linear "chains" of commits, with all commits in the first also appearing in the other. When I was rebasing this, Git spit out several messages saying something to the effect that the patch had already been applied (in some cases I had to resolve a conflict first). After the rebasing, there was only one chain of commits left.
Yes, you are right. There was a bit of a mess before the merge commit a197d0d79. Probably the Mercurial patch was imported once by you and once by Vincent. No big deal, but since you have now rebased, there's no point in going back (unless someone was depending on the old commits).
Actually, there is #15390 which depends on the old branch; I'm not sure whether it is better for me to go back or for Vincent to rebase his branch at #15390 onto this one.
Just to get an idea of what would be good to do before setting this to "needs-review", here is the status of the items from Vincent's list (comment:9):
- .some_elements() returns a very stupid list

This can be made less stupid by implementing `_an_element_()` to return `self.gen(2)`.
- .gens() is not happy. I guess we can return a `Family`. The problem is that in the specification it is written that the method should return a tuple.

I think returning a `Family` is acceptable given that there is no finite set of generators.
- many methods can be implemented in two lines as
  def my_method(self): return self._value.my_method()
  this concerns `minimal_polynomial`, `trace`, `norm`, `multiplicative_order`, ...

I agree for `minimal_polynomial` and `multiplicative_order`; the latter was implemented by Vincent. The trace and norm, on the other hand, are not well-defined, since the definition requires a finite extension, for which there is no canonical choice. It remains to implement `minimal_polynomial`, which can of course be done in two lines as above.
- in a similar way (calling the appropriate method of `_value` and applying a morphism) it is straightforward to implement `frobenius`, `pth_power`, `pth_root`, ...

We now have `pth_power` and `pth_root`, again thanks to Vincent, and the field itself has `frobenius`. I don't think an additional `frobenius` on elements would be useful.
Something else: I would prefer it if the implementation of `nth_root()` could be improved before getting this ticket merged. The basic difficulty is figuring out in which subfield we have to look. Rather than trying extensions of degrees dividing n (is it true/clear that the degree has to divide n?), I think we should either factor x^n - a (where a is the element whose n-th root we want) and look at the smallest degree of a factor, or we should compute the multiplicative order of a. Also, before doing anything else, we should take all factors of p out of n and use `pth_root()`.
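The last step of that suggestion, splitting the p-part out of n so that `pth_root()` can handle it, can be sketched in plain Python (a toy helper; the name `split_p_part` is made up and this is not the Sage implementation):

```python
def split_p_part(n, p):
    """Write n = p^k * m with gcd(m, p) = 1 and return (k, m).

    Idea from the comment above: to take an n-th root in characteristic p,
    first take k successive p-th roots, after which only an m-th root with
    m prime to p remains to be computed.
    """
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k, n
```

For example, for n = 50 and p = 5 this gives (2, 2): two applications of `pth_root()` followed by a square root.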
I hope to have an update soon, at least for the remaining things mentioned in the previous comment.
Branch pushed to git repo; I updated commit sha1. New commits:
[70f07ca](https://github.com/sagemath/sagetrac-mirror/commit/70f07ca) | implement three more methods |
Branch pushed to git repo; I updated commit sha1. New commits:
7b3d68a | cosmetic improvement to nth_root() |
I thought a bit about `nth_root()` but couldn't easily make it faster. Hence I just made two very minor fixes: raise an `AssertionError` instead of a `ValueError` if we don't find the root (since this should never happen) and move the TODO to the docstring.
Since the basic functionality of the ticket is usable, I'm setting it to needs_review.
Branch pushed to git repo; I updated commit sha1. New commits:
ff35daa | Merge branch 'develop' into ticket/14990 |
Hello,
Sorry, the story started a long time ago already. I would be happy to finish the review of this ticket and work again on #15390 (no trouble about the possible rebase, I will try the cherry-pick proposed by Luca).
In case you do not want to recompile your Sage over 6.2.rc0, I put a version that merges the develop release at u/vdelecroix/14990 (with some extras, see below).
I am in trouble because of the `UniqueFactory`. Firstly, in some doctests there is a direct call to `AlgebraicClosureFiniteField_pseudo_conway`, but in practice this should be avoided, as we have
sage: F = AlgebraicClosureFiniteField_pseudo_conway(GF(5),'z')
sage: z = F.an_element()
sage: z == loads(dumps(z))
False
I had no idea how to make it clear. On the other hand, what is the purpose of a factory if there is only one implementation? Note that there will be a problem with the generic `__reduce__` when there are several implementations, and I am pretty sure that many other troubles will show up.
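The failing round-trip above can be illustrated with a plain-Python toy model (the names `Closure` and `make_closure` are invented, not the Sage code): when `__reduce__` always routes reconstruction through a factory, an instance created by calling the class directly unpickles to a different object.

```python
import pickle

_cache = {}

class Closure:
    def __init__(self, p):
        self.p = p

    def __reduce__(self):
        # generic reduce: always rebuild via the factory, never the class
        return (make_closure, (self.p,))

def make_closure(p):
    # toy factory: one shared instance per key
    if p not in _cache:
        _cache[p] = Closure(p)
    return _cache[p]

direct = Closure(5)                               # bypasses the factory
assert pickle.loads(pickle.dumps(direct)) is not direct

via_factory = make_closure(5)                     # goes through the factory
assert pickle.loads(pickle.dumps(via_factory)) is via_factory
```

With identity-based (or parent-sensitive) equality, the first case is exactly the `z == loads(dumps(z))` returning `False` shown above.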
Minor issues that I solved in a commit on my branch:
- the doctests must be in an EXAMPLES block and not an EXAMPLE block.
- I was wondering what was the point of `__getstate__` (line 473) and `__setstate__` (line 487) in the base class. It becomes clear after reading the code for the pseudo_conway class. Please put some specification (better than "used for pickling") in `__getstate__` and `__setstate__`; that would be useful to anybody who wants to implement their own algebraic closure (all right, there should not be a lot of them, but at least it will help people, like me, who are reading the code).
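For illustration, here is the kind of contract such a specification could spell out, in a plain-Python sketch (the class name and attributes are invented, not the Sage code):

```python
import pickle

class ClosureBase:
    def __init__(self, char):
        self.char = char
        self._subfields = {}   # derived data, rebuilt lazily on demand

    def __getstate__(self):
        # Contract for subclasses: return the *defining* data only,
        # i.e. exactly what __setstate__ needs to rebuild the object.
        return {"char": self.char}

    def __setstate__(self, state):
        self.char = state["char"]
        self._subfields = {}   # caches are intentionally not pickled

c = pickle.loads(pickle.dumps(ClosureBase(5)))
assert c.char == 5 and c._subfields == {}
```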
- Add optional keywords to `FiniteField.algebraic_closure` that are sent to the factory, in order to be able to do
  sage: GF(5).algebraic_closure(implementation='pseudo_conway')
- for `_get_polynomial` and `_get_im_gen` we could use the decorator `@abstract_method` (from `sage.misc.abstract_method`). That way, their existence becomes part of the `TestSuite` and there is no need for the `NotImplementedError`. There are many examples of its usage in `sage/categories`.
- It would be better to move the test at lines 41-44 inside the constructor of `AlgebraicClosure` and rather use `AlgebraicClosureFiniteField_pseudo_conway` directly. That way we are sure that the right class is tested.
- Implement `some_elements`: this is important for having better automatic tests.
Feel free to reuse the merge commit or the extra one from my branch. As soon as the situation of the factory is clear to me, the branch is good to go.
Thanks Vincent
Hello Vincent,
Sorry, the story started a long time ago already. I would be happy to finish the review of this ticket and work again on #15390 (no trouble about the possible rebase, I will try the cherry-pick proposed by Luca).
Great!
In case you do not want to recompile your Sage over 6.2.rc0, I put a version that merges the develop release at u/vdelecroix/14990 (with some extras, see below).
OK, we should probably switch to that branch (although merging the latest development branch is only necessary in case of conflicts, and it clutters the history).
I am in trouble because of the `UniqueFactory`. Firstly, in some doctests there is a direct call to `AlgebraicClosureFiniteField_pseudo_conway`, but in practice this should be avoided, as we have
sage: F = AlgebraicClosureFiniteField_pseudo_conway(GF(5),'z')
sage: z = F.an_element()
sage: z == loads(dumps(z))
False
I had no idea how to make it clear.
It is hopefully clear from the general structure, and the fact that these classes are not in the global namespace, that they are not meant to be called directly. I do not find it too important, but we can add a remark if you think it is useful. The above behaviour is quite bad, though...
On the other hand, what is the purpose of a factory if there is only one implementation? Note that there will be a problem with the generic `__reduce__` when there are several implementations, and I am pretty sure that many other troubles will show up.
There are indeed problems here. First, looking at the code again, I think that in the current implementation, the interaction between the factory and the pickling code is flawed. Second, I have been thinking about unique representation in other contexts (e.g. number fields and elliptic curves), and there seems to be something more fundamentally problematic.
From my perspective, the guiding philosophy should be that you can use `UniqueRepresentation` or `UniqueFactory` for objects that are defined up to unique isomorphism by a certain amount of "defining data", which corresponds to the arguments of the `__init__()` method of an object derived from `UniqueRepresentation`, resp. to the `key` used by a `UniqueFactory`.
The problem with algebraic closures of finite fields is that in principle they are not defined up to unique isomorphism by a finite amount of defining data. This is exactly the reason why Conway polynomials were invented, and we could have unique representation if we only used Conway polynomials. However, the idea of this implementation was to use pseudo-Conway polynomials, which do not have the same uniqueness property.
It is not clear to me at the moment that trying to get unique representation at all in this case is the right thing to do. Especially with the current implementation, given that pseudo-Conway polynomials are not unique, it may be better to just (1) ensure that pickling works and (2) put some warnings in place that algebraic closures are not globally unique and that users should be careful if they want to ensure all elements live in the same algebraic closure.
Branch pushed to git repo; I updated commit sha1. New commits:
02d1520 | Merge branch 'develop' into 14990-closure_finite_field |
c14a903 | Minor improvements to algebraic closure of finite fields |
96bf62d | Trac 14990: do not try to get unique representation |
14e18af | Trac 14990: remove pickling code |
48e0ffe | Trac 14990: fix comparison of elements |
I decided to remove both the `UniqueRepresentation` code and the pickling methods. The code is much clearer now. It was surprisingly easy to get everything (mainly comparison of elements) to work again; one just has to be careful in the case where two parents are equal but not identical.
(The branch is based on `u/vdelecroix/14990`.)
Branch pushed to git repo; I updated commit sha1. New commits:
33f982f | Trac 14990: small documentation improvements |
Hello Peter,
I do not like
sage: GF(5).algebraic_closure() is GF(5).algebraic_closure()
False
As far as I understand, a pseudo-Conway polynomial is not uniquely defined. But nevertheless, the implementation of `PseudoConwayLattice.polynomial` is, no? The only thing that today prevents the uniqueness is the use of the database, as shown in the following example:
sage: P1 = PseudoConwayLattice(5,use_database=True)
sage: P2 = PseudoConwayLattice(5,use_database=False)
sage: P1.polynomial(2)
x^2 + 4*x + 2
sage: P2.polynomial(2)
x^2 + x + 2
Maybe I am wrong and there is more than that.
So, I propose to restore some kind of unique representation based on the lattice. More precisely (if you agree):
- make `AlgebraicClosureFiniteField_pseudo_conway` inherit from `UniqueRepresentation`;
- accept a `lattice` argument in the constructor of `AlgebraicClosureFiniteField_pseudo_conway` that can also be used from something like `GF(5).algebraic_closure(implementation='pseudo_conway', lattice=MyFunnyLattice())` (that way we would have two different implementations of the algebraic closure, depending on whether or not we use the database);
- implement `__reduce__` in `AlgebraicClosureFiniteField_pseudo_conway` so that it avoids the factory (but I guess that this should be taken care of by the `UniqueRepresentation`).

And then it remains to find a nice way to avoid rebuilding new lattices each time a user asks for
sage: GF(5).algebraic_closure()
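For reference, the instance caching that `UniqueRepresentation` provides can be modelled in a few lines of plain Python (a toy; `UniqueRepr` and `Closure` are invented names, not the Sage classes):

```python
class UniqueRepr:
    _instances = {}

    def __new__(cls, *args):
        # one shared instance per (class, constructor arguments)
        key = (cls, args)
        if key not in cls._instances:
            cls._instances[key] = super().__new__(cls)
        return cls._instances[key]

class Closure(UniqueRepr):
    def __init__(self, p):
        self.p = p

assert Closure(5) is Closure(5)       # same arguments -> same object
assert Closure(5) is not Closure(7)
```

The catch, discussed above, is that this only makes sense when the constructor arguments determine the object up to unique isomorphism, which a partially built pseudo-Conway lattice does not.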
What do you think?
Vincent
Hello Vincent,
I do not like
sage: GF(5).algebraic_closure() is GF(5).algebraic_closure()
False
Would you be happier if `FiniteField.algebraic_closure()` were a `cached_method`?
I see your point, but I really don't like the idea that constructing two algebraic closures of the same field can be expected to give identical objects. (Except if there are "trivial" reasons why it should, like caching the result in the above example, or if you use a mathematically well-defined construction like (non-pseudo) Conway polynomials.)
Hi,
As far as I understand, a pseudo-Conway polynomial is not uniquely defined. But nevertheless, the implementation of `PseudoConwayLattice.polynomial` is, no? The only thing that today prevents the uniqueness is the use of the database
I think there are some random choices of roots in the pseudo-Conway algorithm. Am I wrong? And even then, in practice a pseudo-Conway lattice can never be computed completely, and when you are given a partial pseudo-Conway lattice, there are many different ways of completing it.
Anyway, I agree with Peter. This ticket is not only about pseudo-Conway polynomials. There are plenty of algorithmic ways of constructing the algebraic closure of GF(p), each with its pros and cons. None of them is canonical: even the famous "canonically embedded lattices" of Lenstra and De Smit use an arbitrary lexicographic order at some point to make things canonical. So my opinion is that even for those "canonical" constructions, it is arguable whether they should be unique representations.
Hello,
Thanks Peter and Luca for the insights.
1) For the comparison of `AlgebraicClosureFiniteField` you rely on the equality in `PseudoConwayLattice`... which is not exactly what you want. It compares the nodes, which is something dynamical by definition. We have for example
sage: P1 = PseudoConwayLattice(5, use_database=False)
sage: P2 = PseudoConwayLattice(5, use_database=False)
sage: P1 == P2
True
sage: _ = P1.polynomial(1)
sage: P1 == P2 # hum!?
False
sage: _ = P2.polynomial(1)
sage: P1 == P2 # hum hum!!??
True
It is fine for the lattices but not for the fields. The simplest fix would be
class AlgebraicClosureFiniteField_pseudo_conway:
    ...
    def __cmp__(self, other):
        ...
        return self._pseudo_conway_lattice is other._pseudo_conway_lattice
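The "once equal, then different, then equal again" behaviour can be reproduced with a plain-Python toy model (`ToyLattice` is invented and only mimics the node-based comparison, not the Sage class):

```python
class ToyLattice:
    def __init__(self):
        self._nodes = {}                     # degree -> polynomial stand-in

    def polynomial(self, n):
        # computing a polynomial mutates the lattice, as in Sage
        return self._nodes.setdefault(n, ("poly", n))

    def __eq__(self, other):
        # current behaviour: compare only the nodes computed so far
        return self._nodes == other._nodes

P1, P2 = ToyLattice(), ToyLattice()
assert P1 == P2                              # both empty
P1.polynomial(1)
assert not (P1 == P2)                        # flips after one computation
P2.polynomial(1)
assert P1 == P2                              # and flips back
```

Comparing by identity of the lattices, as in the sketch above, makes the answer stable over time.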
2) It makes sense to have something more flexible like
sage: AC = AlgebraicClosureFiniteField_pseudo_conway
sage: AC(GF(3), lattice=my_pc_lattice)
where "my_pc_lattice" is an instance of a subclass of `PseudoConwayLattice`, or something with the same specifications (i.e. at least a method `polynomial`). That way we can already have two implementations of the algebraic closure (calling `PseudoConwayLattice` with the option `use_database=True` or `use_database=False`).
3) In the example above, there is some redundancy, as the pseudo-Conway lattice already knows the finite field... so it would be nice if the pseudo-Conway lattice implemented a method `base_ring` that returns the finite field it is based on.
4) It would also make sense in `PseudoConwayLattice` to have a method `associated_finite_field_algebraic_closure` (with a better name if possible).
Vincent
PS: will see later for the unique representation problems.
Replying to @videlec:
Hello,
Thanks Peter and Luca for the lights.
1) For the comparison of `AlgebraicClosureFiniteField` you rely on the equality in `PseudoConwayLattice`... which is not exactly what you want. It compares the nodes, which is something dynamical by definition. We have for example
sage: P1 = PseudoConwayLattice(5, use_database=False)
sage: P2 = PseudoConwayLattice(5, use_database=False)
sage: P1 == P2
True
sage: _ = P1.polynomial(1)
sage: P1 == P2  # hum!?
False
sage: _ = P2.polynomial(1)
sage: P1 == P2  # hum hum!!??
True
It is fine for the lattices but not for the fields. The simplest fix would be
Is it really fine for lattices? I would say lattices with different polynomials shouldn't evaluate equal.
class AlgebraicClosureFiniteField_pseudo_conway:
    ...
    def __cmp__(self, other):
        ...
        return self._pseudo_conway_lattice is other._pseudo_conway_lattice
2) It makes sense to have something more flexible like
sage: AC = AlgebraicClosureFiniteField_pseudo_conway
sage: AC(GF(3), lattice=my_pc_lattice)
where "my_pc_lattice" is an instance of a subclass of `PseudoConwayLattice`, or something with the same specifications (i.e. at least a method `polynomial`). That way we can already have two implementations of the algebraic closure (calling `PseudoConwayLattice` with the option `use_database=True` or `use_database=False`).
That sounds like a good idea.
3) In the example above, there is some redundancy, as the pseudo-Conway lattice already knows the finite field... so it would be nice if the pseudo-Conway lattice implemented a method `base_ring` that returns the finite field it is based on.
+1
4) It would also make sense in `PseudoConwayLattice` to have a method `associated_finite_field_algebraic_closure` (with a better name if possible).
Would that be really useful? That does not really make sense, but one might want to create a lattice without the corresponding algebraic closure :)
Replying to @jpflori:
Replying to @videlec:
Hello,
Thanks Peter and Luca for the lights.
1) For the comparison of `AlgebraicClosureFiniteField` you rely on the equality in `PseudoConwayLattice`... which is not exactly what you want. It compares the nodes, which is something dynamical by definition. We have for example
sage: P1 = PseudoConwayLattice(5, use_database=False)
sage: P2 = PseudoConwayLattice(5, use_database=False)
sage: P1 == P2
True
sage: _ = P1.polynomial(1)
sage: P1 == P2  # hum!?
False
sage: _ = P2.polynomial(1)
sage: P1 == P2  # hum hum!!??
True
It is fine for the lattices but not for the fields. The simplest fix would be
Is it really fine for lattices? I would say lattices with different polynomials shouldn't evaluate equal.
Ok, I answered too quickly here. The problem is that polynomials are added to the lattices, but not at the same time.
In this case, the fact that the polynomials of the same degree added to the two lattices are actually the same is pure luck. As Luca remarked, there is some randomness (ok, maybe not so random if you look at what is actually happening, but it is designed in a way that it should be random), and the polynomials may be different.
Anyway, I do agree that it is odd that the lattices are once equal, then different, and then equal again.
The only sensible thing to do to compare the lattices is to check that they have the same polynomials at every degree.
Maybe in the case where `database=True` or Conway polynomials are used, we could make the lattice unique, or consider two lattices with different degrees computed equal (but then we should also forbid lattices with `database=True` from overflowing the database limits); in any case, that's not really the issue here.
In the case of algebraic closure, I feel the same is enough. We don't really need identity of the underlying lattices.
It's dynamical indeed, but there's no other way to do it (except for constructions where you have something canonical or at least unique, let's say with Conway polynomials).
But if we have identity, that's even better (and it would be easier to test: only a pointer comparison).
I agree with Jean-Pierre's analysis. Let me just add that one shouldn't read too much mathematical meaning into equality (`==`) of parents. The main use for it is in deciding whether to allow coercion from one `AlgebraicClosureFiniteField` to another. For that, the only thing that makes sense is to check whether the PCLs are the same. Here we definitely do want to check "only" equality, not identity. If we did anything less than checking equality, say only checking whether one lattice is equal to a sublattice of the other, then we would be asking for trouble. For example, if we had an element x in the subfield of degree n in one of the two, then we would have no guarantee that the subfield of degree n that would have to be constructed in the other instance (where we want to coerce x) would be the same.
Hi Peter,
Either I badly explained something or there is something contradictory in your argument. First of all you said
The main use for it [the equality] is in deciding whether to allow coercion from one `AlgebraicClosureFiniteField` to another.
Fine. But that was my point, equality is broken as we currently have with your branch applied
sage: p = next_prime(100000)
sage: K = GF(p)
sage: F1 = K.algebraic_closure()
sage: F2 = K.algebraic_closure()
sage: F1 == F2
True
sage: _ = F1.gen(3) # force computation in PCL 1
sage: F1 == F2
False
sage: _ = F2.gen(3) # same computation in PCL 2
sage: F1 == F2
True
So depending on which stage of the computation you are in, there will or will not be a coercion between your fields! Do you agree that there is a problem here?
we do definitely want to check "only" equality, not identity [of PCL].
Here it depends on what you mean by "equality". If it is equality as mathematical objects, I do agree; if it is equality as Python comparison, I strongly disagree. The Python equality of PCLs currently is: "two PCLs are equal if they agree on their already made computations". And, as Jean-Pierre mentioned, it is not the purpose of this ticket to improve that.
Vincent
Hi Vincent,
2) It makes sense to have something more flexible like
sage: AC = AlgebraicClosureFiniteField_pseudo_conway
sage: AC(GF(3), lattice=my_pc_lattice)
where "my_pc_lattice" is an instance of a subclass of `PseudoConwayLattice`, or something with the same specifications (i.e. at least a method `polynomial`). That way we can already have two implementations of the algebraic closure (calling `PseudoConwayLattice` with the option `use_database=True` or `use_database=False`).
This is implemented in one of the two new commits; one can now pass a `lattice` argument, and the `use_database` flag is now accepted here as well.
3) In the example above, there is some redundancy, as the pseudo-Conway lattice already knows the finite field... so it would be nice if the pseudo-Conway lattice implemented a method `base_ring` that returns the finite field it is based on.
It already has a public attribute `p` for the characteristic; since the PCL is not really meant to be used directly anyway, I think it is redundant to also add a `base_ring()` method.
4) It would also make sense in `PseudoConwayLattice` to have a method `associated_finite_field_algebraic_closure` (with a better name if possible).
I agree with Jean-Pierre here; this doesn't seem to be useful. To compare: we don't have (and don't need) a method `associated_finite_field()` for polynomials over F_p either.
Hi Vincent,
Either I badly explained something or there is something contradictory in your argument.
Or I badly explained something...
First of all you said
The main use for it [the equality] is in deciding whether to allow coercion from one `AlgebraicClosureFiniteField` to another.
Fine. But that was my point, equality is broken
I don't see in what sense you can really say that equality is broken, since it is not designed to be a mathematically meaningful property in this case. I think we should use the most restrictive notion possible that still allows non-identical fields to compare equal. (In this way we stay as close to unique representation as possible, in the sense that two non-identical objects only compare equal under very precise circumstances.)
So depending in which stage of the computation you are, there will or will not be a coercion between your fields! Do you agree that there is a problem here?
No, I don't think there is a problem. Nobody forces us to have all kinds of coercions that could possibly make sense. We should encourage the user to do all computations in the same field; making `algebraic_closure()` a cached method (as in one of the two new commits) is meant to help with this. As far as I'm concerned, coercion between equal but non-identical fields is only useful to make the "sanity check" `loads(dumps(x)) == x` work. I think it would be a mistake to make efforts to keep different instances "synchronized" in some way, e.g. by keeping coercion once the underlying pseudo-Conway lattices of two instances start to diverge.
we do definitely want to check "only" equality, not identity [of PCL].
Here it depends on what you mean by "equality". If it is equality as mathematical objects, I do agree; if it is equality as Python comparison, I strongly disagree.
In this context I really think that the only sensible notion of equality is to compare the finite sublattices that have already been computed. I understand that this does not fit the idea of "equality as mathematical object", but this is out of necessity, because an `AlgebraicClosureFiniteField_pseudo_conway` object simply doesn't define a unique mathematical object. There is no guarantee that two partially computed PCLs will evolve in the same way. Think of a situation where the user pickles one instance, loads it in a newer Sage version where the algorithm for computing pseudo-Conway polynomials has changed, and computes another instance of "the same" field in that Sage version. Then, as a consequence of the non-uniqueness of pseudo-Conway polynomials, the results may disagree even if the commands used to create them were exactly the same.
The Python equality of PCL currently is: "two PCL are equal if they agree on their already made computations". And, as Jean-Pierre mentioned, it is not the purpose of that ticket to improve that.
I think it is not even desirable in principle to improve this, because PCL's are non-unique by design. If you want a mathematically well-defined notion of equality (that is preserved under future extension of the lattices), you have to stick to (non-pseudo) Conway polynomials or use other canonical lattices of finite fields.
Replying to @pjbruin:
Hi Vincent,
2) It makes sense to have something more flexible like
sage: AC = AlgebraicClosureFiniteField_pseudo_conway
sage: AC(GF(3), lattice=my_pc_lattice)
where "my_pc_lattice" is an instance of a subclass of `PseudoConwayLattice`, or something with the same specifications (i.e. at least a method `polynomial`). That way we can already have two implementations of the algebraic closure (calling `PseudoConwayLattice` with the option `use_database=True` or `use_database=False`).
This is implemented in one of the two new commits; one can now pass a `lattice` argument, and the `use_database` flag is now accepted here as well.
Great. Thanks.
3) In the example above, there is some redundancy, as the pseudo-Conway lattice already knows the finite field... so it would be nice if the pseudo-Conway lattice implemented a method `base_ring` that returns the finite field it is based on.
It already has a public attribute `p` for the characteristic; since the PCL is not really meant to be used directly anyway, I think it is redundant to also add a `base_ring()` method.
Here I am not sure I agree. But it would be better not to touch `PseudoConwayLattice` anyway.
4) It would also make sense in `PseudoConwayLattice` to have a method `associated_finite_field_algebraic_closure` (with a better name if possible).
I agree with Jean-Pierre here; this doesn't seem to be useful. To compare: we don't have (and don't need) a method `associated_finite_field()` for polynomials over F_p either.
Agreed.
Hi Peter,
I still think that the equality of algebraic closures is broken, for several reasons (please correct me if you do not agree). Within your branch:
sage: p = next_prime(100000)
sage: K = GF(p).algebraic_closure()
sage: x = K.gen(2)
sage: y = loads(dumps(x))
sage: x == y # sanity check
True
sage: _ = K.gen(5) # force computation
sage: x.parent() == y.parent()
False
sage: x == y
Traceback (most recent call last):
...
RuntimeError: BUG in map ...
In principle we could hope for a `False` above, but the coercion system has a cache. This is the reason for the `RuntimeError`.
This is not an easy problem. First of all, I have to say I don't understand precisely what it means for an object to be immutable. Roughly speaking it is supposed to mean that "its value cannot change", but that doesn't seem to be a completely well-defined notion either.
It seems to me that in the case of `AlgebraicClosureFiniteField_pseudo_conway`, the indeterminacy of pseudo-Conway lattices forces us to say that (1) the "value" of an instance has to include the set of subfields that have been computed, and (2) the only way to guarantee immutability is to define equality using identity of the lattices.
If this reasoning is correct, and we accept that parents should be immutable (which does sound like a reasonable condition, although I'm not immediately convinced that it should hold in this situation), then we have to accept Vincent's opinion that for two instances to compare equal it is a necessary condition that the lattices be identical. In that case I guess we may as well go all the way back to comparing by identity of the fields themselves.
It is a bit annoying, and I do not see how to easily avoid breaking the sanity check `loads(dumps(x)) == x` if we take this approach. Maybe we should just capitulate on this point and disable this check in the `TestSuite`, or explicitly give the error as the expected result in the `TestSuite`.
Hi,
Given that `PseudoConwayLattice` is what it is, I agree with (1) and (2) in your comment:75.
Coercions are cached, and I guess that forbids having a mutable `Parent` (unless it coerces with nothing).
And you are right, we are in big trouble, as we cannot load/dump a `PseudoConwayLattice` in the way we would like. It seems reasonable to me to keep the error on `x == loads(dumps(x))` and document it.
Nevertheless, we can fix it for most use cases by avoiding calling loads/dumps on the lattice. More precisely, when the user does not provide a lattice argument directly to the algebraic closure, we might cache the pseudo-Conway lattice used. In other words, do something along the lines of
```python
@cached_function
def cached_pseudo_conway_lattice(p, use_database):
    return PseudoConwayLattice(p, use_database)

def AlgebraicClosureFiniteField(p, implementation='pseudo_conway', **kwds):
    if implementation == 'pseudo_conway':
        lattice = kwds.get('lattice')
        use_database = kwds.get('use_database', True)
        if lattice is None:
            lattice = cached_pseudo_conway_lattice(p, use_database)
    ...
```
The pickling must then be adapted to go through `AlgebraicClosureFiniteField`. I am sure that it can work, but I am not at all sure it is reasonable.
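As a plain-Python sketch of this caching idea (not the actual Sage code; all class and function names here are illustrative stand-ins, and `use_database` is assumed to take its default value when unpickling), pickling only the defining parameters and routing unpickling through a cached factory makes the round trip succeed within one session:

```python
import pickle
from functools import lru_cache

class PseudoConwayLattice:
    """Toy stand-in: two instances with the same parameters are NOT equal."""
    def __init__(self, p, use_database):
        self.p = p
        self.use_database = use_database

@lru_cache(maxsize=None)
def cached_pseudo_conway_lattice(p, use_database):
    # Within one session, the same (p, use_database) always yields
    # the *identical* lattice object.
    return PseudoConwayLattice(p, use_database)

class AlgebraicClosureFiniteField:
    def __init__(self, p, lattice=None, use_database=True):
        if lattice is None:
            lattice = cached_pseudo_conway_lattice(p, use_database)
        self.p = p
        self.lattice = lattice

    def __eq__(self, other):
        return (isinstance(other, AlgebraicClosureFiniteField)
                and self.lattice is other.lattice)

    def __reduce__(self):
        # Pickle only the defining parameter p, never the lattice itself;
        # unpickling goes back through the cached factory.
        return (AlgebraicClosureFiniteField, (self.p,))

F = AlgebraicClosureFiniteField(5)
G = pickle.loads(pickle.dumps(F))
print(F == G)  # True: both parents got the identical cached lattice
```

Across sessions, of course, the cache starts empty, so the reconstructed lattice is a different object; this only fixes the common single-session case.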
ping
Do you want me to give it a try?
Vincent
Hi Vincent,
And you are right, we are in big trouble, as we cannot loads/dumps a `PseudoConwayLattice` in the way we would like. It seems reasonable to me to keep the error on `x == loads(dumps(x))` and document it.
I think this is the best solution. Caching the pseudo-Conway lattice just moves the problem but does not fundamentally solve it, since PCLs have the same property of not being determined up to unique isomorphism by their defining parameters.
If you want to implement this approach, go ahead. You can try to simply do `git revert 48e0ffed` and fix the ensuing doctest failures.
Someone asked me about this today, so I tried my suggestion from comment:78. The only thing I'm not too sure about at the moment is whether non-identical parents should be allowed to compare equal. At present, they compare equal if and only if their PCLs compare equal. There is something to be said for the convention that two instances should be equal if and only if they are identical, but the current state of affairs does not seem to break anything, so we can also decide to leave it as it is.
Hi Peter,
Thank you.
1) I found your explanation rather vague: it is not clear whether you are speaking about the mathematics or the Sage implementation. Moreover, this weirdness only concerns the pseudo-Conway implementation (which is right now the only one). In the future, we might implement Jean-Pierre's idea: deal with Conway polynomials, or have a certified non-random version of pseudo-Conway.
2) It must absolutely be clear in the documentation of `.algebraic_closure` that the pickling is broken! This is the main entry point for users.
3) Do you agree to add to the documentation the different weirdnesses I described in my comments? (possibly in a TODO section)
Best Vincent
Hi Vincent,
1) I found that your explanation is rather vague: it is not clear if you speak about mathematics or the Sage implementation.
Both: mathematically speaking, algebraic closures are not unique up to unique isomorphism unless you fix some standard model, and therefore we necessarily have a similar non-uniqueness in Sage reflecting this mathematical fact. I tried to make clear how the mathematical fact influences the Sage implementation, but let me know if you have a concrete idea for improving this paragraph.
Moreover, this weirdness only concerns the pseudo-Conway implementation (which is right now the only one).
Certainly, that is why the new paragraph starts with "In the current implementation".
In the future, we might implement Jean-Pierre's idea: deal with Conway polynomials, or have a certified non-random version of pseudo-Conway.
Of course, but we are still in the present... 8-)
(Actually, I'm sceptical about the possibility of defining a "certified non-random version of pseudo-Conway" in a way that would improve on the original Conway polynomials. Besides, I would say that the "idea" of using Conway polynomials should be attributed to Conway!)
2) It must absolutely be clear in the documentation of `.algebraic_closure` that the pickling is broken! This is the main entry point for users.
OK, I'll add some explanation there too. But I am against phrasing it as "pickling is broken"; we should really say that this is an inherent "feature" of the non-uniqueness of algebraic closures, at least until we support some standard model.
3) Do you agree to add to the documentation the different weirdnesses I described in my comments? (possibly in a TODO section)
I don't immediately see which "weirdnesses" still remain; could you give me a list?
Branch pushed to git repo; I updated commit sha1. New commits:
b1d4424 | Trac 14990: expand documentation of FiniteField_base.algebraic_closure() |
The goal of this ticket is a basic implementation of algebraic closures of finite fields. Most importantly, it provides the following:

- `AlgebraicClosureFiniteField_generic` (abstract base class), with a method `subfield(n)` returning a tuple consisting of the subfield of order p^n and a `RingHomomorphism_im_gens` giving the canonical embedding into the algebraic closure
- `AlgebraicClosureFiniteField_pseudo_conway` (implements the specific defining polynomials of the finite subfields and the relations between the generators of these subfields)
- `AlgebraicClosureFiniteFieldElement` (mostly a wrapper around `FiniteFieldElement`, so actually an element of a finite subfield, but having the algebraic closure as its parent and taking care of coercion into larger subfields)
- `AlgebraicClosureFiniteField` (to get unique parents)
- `FiniteField.algebraic_closure()` (invokes the factory class)

An example using the new functionality is the following analogue of the example from #8335:

To add, we first embed into `Fbar`:

One can also do this without explicitly invoking the embeddings; as a shortcut, `Fbar` has a method `gen(n)` returning the fixed generator of the subfield of degree n, but as an element of `Fbar`:

It is conceivable that there will be different coexisting implementations (deriving from `AlgebraicClosureFiniteField_generic`). The current implementation uses Conway polynomials and the pseudo-Conway polynomials from #14958, as well as the functionality for finite field homomorphisms provided by #13214.

Depends on #14958
Depends on #13214
CC: @roed314 @jpflori @sagetrac-JCooley @sagetrac-dfesti @defeo @videlec @sagetrac-erousseau
Component: algebra
Keywords: finite field algebraic closure
Author: Peter Bruin
Branch: `b0e1539`
Reviewer: Vincent Delecroix
Issue created by migration from https://trac.sagemath.org/ticket/14990