sagemath / sage

Main repository of SageMath
https://www.sagemath.org

Base sage.geometry.cone on the Parma Polyhedra Library (PPL) #10140

Closed vbraun closed 13 years ago

vbraun commented 13 years ago

As a first useful application of the PPL Cython library interface I have changed sage.geometry.cone.Cone to use the PPL wrapper instead of cddlib. Here is a quick benchmark with a fan that was somewhat challenging:

sage: from sage.schemes.generic.toric_variety_library import toric_varieties_rays_cones
sage: rays, cones = toric_varieties_rays_cones['BCdlOG']
sage: timeit('Fan(cones,rays)')
5 loops, best of 3: 1.95 s per loop

With the old Polyhedron/cddlib interface, I got instead

5 loops, best of 3: 42.1 s per loop

Apply

  1. attachment: trac_10140_sublattice_intersection.patch
  2. attachment: trac_10140_base_cone_on_ppl_original.patch
  3. attachment: trac_10140_reviewer.patch
  4. attachment: trac_10140_switch_point_containment_to_PPL.patch

Apply trac_10140_sublattice_intersection.patch, trac_10140_base_cone_on_ppl_original.patch, trac_10140_reviewer.patch, trac_10140_switch_point_containment_to_PPL.patch

Depends on #10039

CC: @novoselt @nthiery

Component: geometry

Keywords: ppl

Author: Volker Braun, Andrey Novoseltsev

Reviewer: Andrey Novoseltsev, Volker Braun

Merged: sage-4.7.1.alpha1

Issue created by migration from https://trac.sagemath.org/ticket/10140

vbraun commented 13 years ago
comment:1

In the current patch I fixed all doctest errors in cone.py and fan.py. There are some more broken doctests in other modules due to a different output ray ordering, and I will fix these once the ppl spkg and wrapper are reviewed.

novoselt commented 13 years ago
comment:2

Wonderful speed gain! However, when do you expect PPL to become a standard package? My impression is that it is currently difficult to add new standard packages, so PPL is more likely to be an optional one for a while. In that case there should be an option to use PPL, but its presence should not affect the working of the toric geometry code.

Small nitpicks about the patch:

  1. I don't see why it is necessary to remove the possibility of Cone construction from a Polyhedron.
  2. I think that the new possible input C_Polyhedron should be documented in the INPUT section rather than in a note at the end.
  3. Is it possible to make C_Polyhedron objects immutable (on demand)?
  4. I prefer all line generators to be put at the end of the ray list for non-strictly convex cones, if they were determined automatically. Because if one works with faces of such a cone, then all these generators are the same and, therefore, the others are more "important". That's very minor, of course.
  5. I am trying to follow the PEP8 style guide http://www.python.org/dev/peps/pep-0008/ which says: when raising an exception, use "raise ValueError('message')" instead of the older form "raise ValueError, 'message'". New line 1167 tries to change this recommended form to the old one ;-) (see the sketch after this list)
  6. Looking at what you have done in facet normals, maybe we should change the cone.dual_lattice() method to always return ZZ^n if there is no dual method for the lattice? Thinking of the class group situation, it seems like a more sensible default.
  7. I think cones should not be printed explicitly (i.e. using "print") when ValueError is raised due to attempt to intersect cones in lattices of different dimension. The reason is that maybe some users want to intercept this exception and deal with it in their own way without showing any output. Also, I think that the check should be done not for equality of lattice dimensions, but for equality of lattices themselves, because cones of different lattices should not be intersected.
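For reference, a minimal illustration of the two raise forms mentioned in point 5 (plain Python, not code from the patch; the function name is made up):

    def check_rays(rays):
        # Recommended PEP8 form; the old "raise ValueError, 'message'"
        # syntax is discouraged (and invalid in Python 3).
        if not rays:
            raise ValueError('the list of rays must not be empty')
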
vbraun commented 13 years ago
comment:3
  1. Well, you can always write Cone(Polyhedron.rays()) (see the sketch after this list). I took out the possibility to directly pass a Polyhedron to make sure that I caught all instances of old code.

  2. PPL stuff should not be visible to the user, I think. If anything, I'll probably move the note to a comment in the code.

  3. In the C++ library there is no particular support for that. Of course, in C++ you just declare things const if you don't want them mutable. So I guess we should emulate that by get/set_immutable() methods.

4,5,6. Sounds good.

  7. Oops, a left-over print() from debugging :-)
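A hedged sketch of the workaround from point 1 (assuming a cone-like polyhedron, i.e. one whose only vertex is the origin; illustrative code, not part of the patch):

    P = Polyhedron(rays=[(1, 0), (0, 1)])
    C = Cone(P.rays())   # pass the rays instead of the Polyhedron itself
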
novoselt commented 13 years ago
comment:4
  1. My concern was that it removes a publicly exposed feature that has been in Sage for several months (and is likely to stay there a few months more). So I vote for resurrecting this possibility, which is actually very easy when things are PPL based - you just replace Polyhedron with Polyhedron.rays(), as you said. More importantly, if PPL will become something that users have to install separately, then ALL of the old code related to polyhedron -> cone should remain untouched, but there should be some new path to go around it. By the way, I am not quite familiar with packages - is it possible to change the Sage library during package installation? If not, and all PPL-related code has to be in the library no matter whether PPL is installed or not, then "doctest-fixing" may be a bad approach here. Maybe we should change doctests to make them deterministic. Although I am not quite sure how.

  2. I see your point, but having non-documented input rubs me the wrong way. If you don't want to mention PPL in the documentation (which is quite reasonable), maybe we can add a function like _Cone_from_PPL for such input? For the actual class there is no need for this - it is already not supposed to be called directly by users.

  3. Yes, that's what I had in mind. My concern is that when you use the PPL representation, you may need to actually change it somehow to get the result. So far in your code you were constructing a new object based on the existing one, so that the original is intact, but it seems to me that this step is quite easy to forget and it would be nice to have something forcing you to do so (a sketch of such a guard follows this list).

  7. That explains it ;-)
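A hedged sketch of the "forcing" mechanism mentioned in point 3 (an illustrative wrapper of my own, not the actual PPL interface code):

    class MutableGuard(object):
        # Wrap an object and refuse modifications once set_immutable() is called.
        def __init__(self, polyhedron):
            self._polyhedron = polyhedron
            self._immutable = False

        def set_immutable(self):
            self._immutable = True

        def is_immutable(self):
            return self._immutable

        def add_generator(self, generator):
            if self._immutable:
                raise ValueError('this polyhedron is immutable')
            self._polyhedron.add_generator(generator)
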

jdemeyer commented 13 years ago
comment:5

This is quite possibly a stupid question, but does it mean we can get rid of cddlib if this gets merged?

5d2aaf09-c963-473a-bf79-1f742a72700f commented 13 years ago
comment:6

No! Currently cddlib is required by Gfan. It may be possible to convince Anders Jensen, the author of Gfan, to switch to PPL as well.

vbraun commented 13 years ago
comment:7

The updated version now cleans up all doctests.

vbraun commented 13 years ago
comment:8

For the patch bot:

Depends on #10233, #10039

Apply trac_10140_base_cone_on_ppl.patch, trac_10140_fix_toric_variety_doctests.patch

vbraun commented 13 years ago
comment:9

For the patch bot:

Depends on #9972, #10233, #10039.

Apply trac_10140_base_cone_on_ppl.patch, trac_10140_fix_toric_variety_doctests.patch

kiwifb commented 13 years ago
comment:10

I have trouble applying trac_10140_base_cone_on_ppl.patch. The following section doesn't apply at least:

@@ -1578,7 +1667,7 @@

         Let's take a 3-d cone on 4 rays::

-            sage: c = Cone([(1,0,1), (0,1,1), (-1,0,1), (0,-1,1)])
+            sage: c = Cone([(1,0,1), (0,1,1), (-1,0,1), (0,-1,1)], check=False)

         Then any ray generates a 1-d face of this cone, but if you construct
         such a face directly, it will not "sit" inside the cone::

Is it due to #9972? I can see that particular section in #9972 but slightly shifted in line number.

novoselt commented 13 years ago
comment:11

It is quite likely, since #9972 was invasive and written mostly by me, i.e. it was not coordinated with this one. What exactly do you mean by "have trouble"? I just tried to apply it on top of my queue and it succeeded with two fuzzy hunks.

For the record: I am going to review this ticket but I am waiting for PPL package to get approved ;-)

kiwifb commented 13 years ago
comment:12

Patching failed miserably, but I hadn't applied any of #9972. Since #9972 seems to be going into 4.6.2, I will only apply the present patches on top of 4.6.2.

novoselt commented 13 years ago

Work Issues: rebasement

novoselt commented 13 years ago
comment:13

This is on a fresh installation of 4.6.2.alpha2 without any patches applied:

applying trac_10140_base_cone_on_ppl.patch
patching file sage/geometry/cone.py
Hunk #30 succeeded at 1981 with fuzz 1 (offset 10 lines).
Hunk #39 succeeded at 2332 with fuzz 2 (offset 29 lines).
now at: trac_10140_base_cone_on_ppl.patch
vbraun commented 13 years ago

Attachment: trac_10140_fix_toric_variety_doctests.patch.gz

Rebased patch

vbraun commented 13 years ago
comment:14

I've uploaded my rebased patch.

novoselt commented 13 years ago
comment:15

François, does the new version work for you? (I no longer get any fuzz for either patch.)

novoselt commented 13 years ago

Changed work issues from rebasement to none

kiwifb commented 13 years ago
comment:16

It applies if I start from 4.6.2.alpha2, but I have fuzz:

patching file sage/geometry/cone.py
Hunk #29 succeeded at 1969 (offset 10 lines).
Hunk #30 succeeded at 1980 with fuzz 1 (offset 10 lines).
Hunk #31 succeeded at 2076 (offset 30 lines).
Hunk #32 succeeded at 2110 (offset 30 lines).
Hunk #33 succeeded at 2129 (offset 30 lines).
Hunk #34 succeeded at 2217 (offset 30 lines).
Hunk #35 succeeded at 2237 (offset 30 lines).
Hunk #36 succeeded at 2271 (offset 30 lines).
Hunk #37 succeeded at 2289 (offset 30 lines).
Hunk #38 succeeded at 2314 (offset 30 lines).
Hunk #39 succeeded at 2330 with fuzz 2 (offset 29 lines).
Hunk #40 succeeded at 2464 (offset 29 lines).
Hunk #41 succeeded at 2492 (offset 29 lines).
Hunk #42 succeeded at 2523 (offset 29 lines).
Hunk #43 succeeded at 2608 (offset 29 lines).
Hunk #44 succeeded at 2874 (offset 38 lines).
Hunk #45 succeeded at 3100 (offset 38 lines).
Hunk #46 succeeded at 3119 (offset 38 lines).
Hunk #47 succeeded at 3189 (offset 38 lines).
Hunk #48 succeeded at 3204 (offset 38 lines).
patching file sage/geometry/fan.py
Hunk #3 succeeded at 1824 (offset 13 lines).
patching file sage/geometry/fan_morphism.py

Presumably the same fuzz you had before. I wonder if it is because of #10336 which was merged in alpha2? But it applies and that's the main thing.

kiwifb commented 13 years ago
comment:17

On a positive note, not only does it apply, but the tests pass without problems on a variety of platforms (linux-x86, linux-amd64, OS X 10.5 (x86)).

novoselt commented 13 years ago
comment:18

Now that the PPL package has received a positive review, I will go over this one shortly.

novoselt commented 13 years ago

Reviewer: Andrey Novoseltsev

5d2aaf09-c963-473a-bf79-1f742a72700f commented 13 years ago
comment:19

This looks good to me, and #10039 has positive review now. It seems that Andrey's request to let Polyhedron objects be allowed as Cone input has been addressed, as well as the other issues he raised.

All tests in the geometry module pass with these patches on linux (64-bit), solaris (mark on skynet), and OS X 10.6.

I would give a positive review, but I will defer to Andrey in case of design disagreements.

novoselt commented 13 years ago
comment:20

Thank you, I definitely want to look over it again.

In particular, I don't like that the order of rays has changed in doctests in fan_morphism (the very end of the first patch). So I want to figure out why it happens there and how to fix it, if possible.

novoselt commented 13 years ago
comment:21

Volker, this patch conflicts with mine on #10809. That ticket is ready to go if you are fine with the last set of changes. If you are, could you please set #10809 to positive review and rebase this one on top of it?

novoselt commented 13 years ago
comment:22

So - am I right that PPL does not preserve the order of rays when it computes the minimal set of generators?

I am a strong believer in preserving this order (i.e. if the given set is already minimal, the output should be the same, and if there are "extras" then they should be removed, but the rest should still be in order). If PPL does not preserve it, I propose adding such "sorting" to the cone constructor. It should also eliminate the need for many doctest fixes.

The only argument against, I think, is performance, but I doubt that it will be noticeable in toric geometry computations, where one usually wants to work with simplicial cones that don't have many rays.
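A minimal sketch of the reordering I have in mind (my own illustration, not the eventual patch; it assumes the input has no repeated rays and that PPL returns the minimal generators exactly as they appear in the input, up to order):

    def restore_user_order(user_rays, minimal_rays):
        # Keep only the rays that PPL reports as minimal generators,
        # but list them in the order in which the user gave them.
        minimal = set(tuple(r) for r in minimal_rays)
        ordered = [r for r in user_rays if tuple(r) in minimal]
        assert len(ordered) == len(minimal)   # every generator must come from the input
        return ordered
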

novoselt commented 13 years ago
comment:23

One more suggestion: can we please fully restore my code for constructing a cone from a Polyhedron (modulo parameter names, of course)? Because:

vbraun commented 13 years ago
comment:24

Neither PPL nor cddlib preserves the ray order. PPL does return the rays in a canonical (internal) order, regardless of the original order.

vbraun commented 13 years ago

Updated patch

vbraun commented 13 years ago
comment:25

Attachment: trac_10140_base_cone_on_ppl.patch.gz

I think it's easier to just go with PPL's ordering. It also matches the behaviour of the Polyhedron class, which will reorder generators. We already have Cone_of_fan to keep track of ray orderings with respect to some particular enumeration of rays.

I can add back in a check that the polyhedron is really a cone, perhaps tomorrow. Generating a Cone from a Polyhedron is only for convenience; I don't see any need to make it fast. All actual computations are now done with PPL.

novoselt commented 13 years ago
comment:26

Replying to @vbraun:

> I think it's easier to just go with PPL's ordering. It also matches the behaviour of the Polyhedron class, which will reorder generators. We already have Cone_of_fan to keep track of ray orderings with respect to some particular enumeration of rays.

It is very annoying to deal with several objects that are "the same" but are slightly different, e.g. have different orderings of rays. Also, users may have some ideas about a "convenient order" and provide rays in it: if I create the first quadrant as Cone([(1,0), (0,1)]), I expect that the first ray will be the first basis vector and the second the second, as given. Remembering to add check=False is annoying. Making check=False the default is very dangerous, as I have learned the hard way. I really really want to preserve the order whenever possible. I even plan to add the possibility to manually provide the order of facets and points for lattice polytopes, since it will allow convenient work with Cayley cones and polytopes of nef-partitions - they are easy to create with current code, but keeping track of face duality is very ugly.

I will be very happy to add the sorting myself on top of your patch, fixing all affected doctests, so pleeease let me do it ;-)

> I can add back in a check that the polyhedron is really a cone, perhaps tomorrow. Generating a Cone from a Polyhedron is only for convenience; I don't see any need to make it fast. All actual computations are now done with PPL.

I am more concerned about validity checks than performance here. And I am also happy to do the necessary copy-pasting here. So you don't really have to do anything, just say that you are OK with it!

vbraun commented 13 years ago
comment:27

I also want to remind you of the case of non-strict cones, where the ray representation is not unique; there need not be a subset of the initial rays that is a minimal generating set.

I understand that it's sometimes convenient to keep rays ordered in some particular way, but then you are really working with decorated rays and not just with cones. Instead of enforcing some particular order, maybe we can have a DecoratedCone (though how is that different from IntegralRayCollection?) / DecoratedLatticePolytope?

Having said that, I'm not totally opposed to it. But we should at least think a little bit about the points I mentioned...

novoselt commented 13 years ago
comment:28

That was indeed useful to think about it! But I didn't change my mind ;-)

Regarding non-strictly convex cones, I think that there is no point in trying to preserve the user's order, but it would be nice to state it in the documentation: if users care about the order in these cases, they must use the check=False option. Otherwise I think the output should be as it is now:

strict_ray_0, ..., strict_ray_k, line_1, -line_1, ..., line_m, -line_m
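For illustration, a hedged example (assuming the current Cone interface; the exact ray order may differ between versions):

    halfplane = Cone([(1, 0), (0, 1), (0, -1)])   # contains the whole y-axis
    halfplane.is_strictly_convex()                # False
    halfplane.rays()   # one "strict" ray plus both directions of the line
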

For strictly convex cones I indeed want to work with decorated cones, but I'd rather not call them that way explicitly everywhere. The reason is that these cones are designed for toric geometry and so their rays (or ray generators) correspond to homogeneous coordinates. If I want to have rays r1, r2, r3 and I want to associate to them variables x, y, z, then I would probably create a cone with rays r1, r2, r3, and then create a toric variety based on this cone with variable names x, y, z. If the order of rays may change during construction, it means that I need to construct the cone, look at the order of rays, and then rearrange my variable names accordingly, if necessary. This is inconvenient, so while mathematically the cone is determined by the set {r1, r2, r3} and the associated variables are not x, y, z or even x1, x2, x3, but x_r1, x_r2, x_r3, and one should refer to them by specifying the ray, in practice it is convenient to have some fixed order. Since there is no natural order on rays, the best one seems to be the one given by the user. I think that in most cases users will actually provide the generating set, so computing it is almost redundant, but sometimes users (if they are like me ;-)) THINK that they provide the generating set, but it is not - that's why it is important to have check=True by default. Regarding this option, it also seems natural that if it is acceptable to give check=False for certain input, then with check=True the output will be the same, only slower.

I am not sure if there is any sense in having separate classes for cones and decorated cones. It is also not obvious which one should be the base class. On the one hand, decoration is extra structure. On the other hand, storing stuff as tuples gives this structure "for free". Personally, I never wished to have cones without any order. That does not mean that others don't need them, but right now one can use current cones with is_equivalent instead of == and ray_set instead of rays to disregard the order.

Some more personal experience:

  1. The first time I had problems due to order change was with getting nef-partitions from nef.x. In general PALP preserved the vertices as vertices, but nef.x was reordering them when computing nef-partitions (I think this is no longer true in the latest version of PALP, but I started using it a little earlier). The solution was to add sorting to some function in the guts of the lattice polytope code, so that this reordering is not exposed to the user.
  2. The second annoying issue with PALP was that the i-th face of dimension 0 could be generated by the j-th vertex for i != j. While in principle vertices as points of the ambient space and vertices as elements of the face lattice are different, this discrepancy can be quite inconvenient: you need to remember that it exists and you need to insert extra code to translate from one to the other. So eventually I added sorting for 0-dimensional faces.
  3. Faces of reflexive polytopes and their polars are in bijection. I find it very convenient to write things like

    for y, yv in zip(p.faces(dim=2), p.polar().faces(codim=3))

    This eliminates the need for each face to be able to compute its dual. While that is not terribly difficult, it is certainly longer than having no need to do any computations at all. In order to accomplish it, there is some twisted logic in lattice polytopes that ensures that only one polytope in the polar pair computes its faces and the other one then "copies it with adjustments". I hope to redo it in a better way, but at least it is isolated from users, for whom the dual of the i-th face of dimension d is the i-th face of codimension d.

Somewhat related behaviour of other parts of Sage:

  1. Submodules compute their own internal basis by default, but there is an option to give a user basis. In the latter case the internal basis is still computed and one has access to either basis. One can also get coordinates of elements in terms of the user basis or in terms of the standard basis, using coordinates for one and coordinate_vector for the other, and I never remember which one is which. Personally, I think that this design can be improved. But in any case they do have some order on the generators, as do other Parents, without a strict distinction between decorated and "plain" objects.
  2. QQ["x, y"] == QQ["y, x"] evaluates to False, so while these rings are generated by the same variables the order matters. (To my surprise, one can also define QQ["x, x"] and it will be a ring in two variables with the same name...)

By the way - currently cones DON'T try to preserve the given order of rays, they just happen to do it often and I didn't really notice it in the beginning (or maybe I assumed that cddlib preserves the order and there is no need to do anything). But I consider it a bug.

As I understand, some people need to work with polyhedra that have a lot of vertices, maybe thousands. In this case sorting according to the "original order" may be a significant performance penalty, so it makes sense that PPL does not preserve it. It may also be difficult for general polyhedra with generators of different type and non-uniqueness of minimal generating set (as it is the case for non-strictly convex cones). But cones for toric geometry in practice tend to be relatively simple and strictly convex (maybe with non-strictly convex dual, but users are less likely to construct those directly), so I think that this is not an issue. We can add an option to turn sorting on or off, but, of course, I'd like to have it on by default.

As I said, I will be very happy to write the sorting and fix the doctests myself!-)

novoselt commented 13 years ago
comment:29

Marshall, what's your opinion on the order topic?

5d2aaf09-c963-473a-bf79-1f742a72700f commented 13 years ago
comment:30

Replying to @novoselt:

> Marshall, what's your opinion on the order topic?

I don't have strong feelings on it. I think it might be worth sorting things to some canonical order - sorting rays or vertices or whatever other input should be fast, even in python, compared to the actual polyhedral work. But I know with cddlib that was a bit of a hassle since the internal ordering might be different.

vbraun commented 13 years ago
comment:31
> for y, yv in zip(p.faces(dim=2), p.polar().faces(codim=3))

This is absolutely terrible and the prime reason why the face lattice shouldn't make any promises with regard to ordering, just so that nobody can use a construct like this. For anybody but the author it is completely illegible and utterly mysterious that the faces end up being dual. At the same time, it would be so much easier to write

for face in p.faces(dim=3):
  print face, face.dual()

Also I'm totally opposed to requiring rays in cones to always be decorated. The cones of a fan are not the only place that uses Cone. In fact, the main reason for this ticket is that the current (PALP-based) implementation of cones is unsuitable for Kahler cones because it is constrained to low dimension.

We could have an OrderedCone() function that constructs the cone while keeping the ray order intact if possible (at a performance penalty). This function could then even raise a ValueError if there is no unique minimal representation. But I still think that it's a bad idea to implicitly have some particular order.

As far as associating rays with homogeneous variables, I think we really should subclass the multivariate polynomial ring. Then we can convert rays to homogeneous variables and properly check homogeneity of polynomials.

Finally, about `QQ['x, x']`: the variable name is just a string and can be pretty much anything. The issue was raised during the recent bug days and the real problem is that conversion to e.g. maxima is broken in such a case. Sage generally only cares about the order of the variables.

novoselt commented 13 years ago
comment:32

Replying to @vbraun:

> for y, yv in zip(p.faces(dim=2), p.polar().faces(codim=3))

> This is absolutely terrible and the prime reason why the face lattice shouldn't make any promises with regard to ordering, just so that nobody can use a construct like this. For anybody but the author it is completely illegible and utterly mysterious that the faces end up being dual. At the same time, it would be so much easier to write

> for face in p.faces(dim=3):
>   print face, face.dual()

I agree that the second form is better (and I think that it is implemented). But in order to implement it efficiently, one still has to copy the face lattice of the original polytope.

Now let's consider faces of a polytope P and of the polytope 2*P. It is also more natural to write

for y in P.faces(dim=3):
    print y, 2*y

but if you would like 2y to be a face of 2P, not just a polytope obtained from y, how should it be implemented? If they are stored in the same order, you can easily and very efficiently iterate over faces and their multiples.

Moving on to cones: let tau be a face of sigma. What is tau.dual()? Is it the cone dual to the cone tau, or the face of the cone dual to the cone sigma which is dual to the face tau? So we need to have a different method name then, tau.dual_face() perhaps.

Now let P, P*, C, and C* be Cayley polytopes and cones together with their duals. Faces of P and C are in direct relation; let F be a face of P. How should I get the face of C corresponding to it? Perhaps C(F) is a natural notation, but how is it going to be implemented? Finding a face generated by rays coinciding with vertices of F is quite inefficient, and no matter how it is done the result probably should be cached somewhere. Where? Maybe in P or C, and perhaps it is easy to achieve with a special class for supporting polytopes and supported cones. It is probably a good idea to implement this in the long run. But there are very many good ideas that have to be implemented, and if faces are ordered in a consistent way, then those ideas can wait a little, because there is a satisfactory way to access related faces now. And even when they are implemented, I think that the most efficient implementation will still rely on internal consistency of orders.

Maybe it is mysterious currently that the face order represents duality. Then it is my fault for not documenting it correctly and I will try to fix it. If it was clearly stated in the documentation, with an example how to use it, there would be nothing mysterious about it.

Note also, that I don't want to SORT faces in any particular way. I just want to have related faces in the same order as the "initial ones" happened to be. If nothing else, it leads to efficient implementation, so it is good to do it even if end-users will use some other interface.

> Also I'm totally opposed to requiring rays in cones to always be decorated. The cones of a fan are not the only place that uses Cone. In fact, the main reason for this ticket is that the current (PALP-based) implementation of cones is unsuitable for Kahler cones because it is constrained to low dimension.

What I was trying to say is that some kind of decoration occurs on its own just because rays are internally stored as tuples. If you are really against it and want to store them as sets, that means that anything using the rays of that cone should refer to complete rays, and instead of lists and tuples whose order matches the order of rays you need to use dictionaries with rays as keys. That can certainly be convenient, but you can do it even if cones have some internal order of rays, so I don't see a reason for having extra classes.

> We could have an OrderedCone() function that constructs the cone while keeping the ray order intact if possible (at a performance penalty). This function could then even raise a ValueError if there is no unique minimal representation. But I still think that it's a bad idea to implicitly have some particular order.

I'd rather go with UnorderedCone() ;-) I suggested adding a parameter that will turn off sorting for performance gain. I also don't want to have IMPLICIT order, I want to completely and clearly document it in the description of the Cone constructor and its face methods when/if there is any particular order on anything!

> As far as associating rays with homogeneous variables, I think we really should subclass the multivariate polynomial ring. Then we can convert rays to homogeneous variables and properly check homogeneity of polynomials.

Another good idea, but even after this conversion do you really want to use names like x_(1,0,0,0,0), x_(0,1,0,0,0), x_(0,0,1,0,0), x_(0,0,0,1,0), x_(0,0,0,0,1), x_(-1,-1,-1,-1,-1) for coordinates on P^5?! And for more complicated varieties expressions in such variables will be completely unreadable! So it is very natural to fix some order of rays and create variables whose index refers to the index of the corresponding ray. Even better is to have both "order matching" and "direct conversion".

> Finally, about `QQ['x, x']`: the variable name is just a string and can be pretty much anything. The issue was raised during the recent bug days and the real problem is that conversion to e.g. maxima is broken in such a case. Sage generally only cares about the order of the variables.

Let's leave QQ["x, x"] and come back to QQ["x, y"]. This notation asks for a polynomial ring over QQ with variables named x and y, correct? Well, QQ["y, x"] asks for exactly the same, yet the rings are different. So, really, QQ["x, y"] asks for a polynomial ring over QQ with the first variable named "x" and the second one "y". In the same way I think that Cone([(1,0), (0,1)]) asks for a cone with the first ray (1,0) and the second ray (0,1).

If we do have

def Cone(..., keep_ray_order=True)

then interactive usage will be pretty, yet there will be no performance penalty for internal computations. Users don't construct the Kaehler cone explicitly; they call Kaehler_cone, and somewhere inside there is a call to Cone, which is written once and may have as many extra parameters as necessary.

People refer to the first/second/third equation of the system, generator of the ideal, etc. all the time, even without assigning labels or indexed names to them. It is natural to assume that you have specified the order of something when you have written that something in a certain order. Perhaps the optimal solution is to let Cone([(1,0), (1,1), (0,1)]) be the cone generated by three (ordered) rays - all of the given ones. And then have a function like minimal_ray_generators. I guess that's kind of what ideals are doing. I didn't think about this option before. And now I am a bit hesitant to implement it since I am not sure how many places in the code rely on the minimality of the set of generators for the strictly convex case...

I do realize that all your points are valid and my arguments largely reflect my personal taste; however, I also use the toric code in addition to writing it, and if I find something convenient it is likely that some other people will also find it convenient. Unfortunately, it is a bit hard to say how many (or how few) of these other people would like my way. Although I suspect that not very many will strongly dislike it, since it is harmless (except for some potential slowdown, which should not be a noticeable issue, and I suggest having a way around it when it is). I'll ask on sage-combinat for some input.

At the very least I would like to be able to call a function sage.geometry.cone.keep_ray_order(True) that will turn on user-order preservation for the current session, even if the default is not to keep the order. But I still strongly believe in trying to preserve the order whenever possible, i.e. in the strictly convex case. (Overall, this issue is terribly similar to ambient_ray_indices discussed some time ago on #9972.)
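A hypothetical sketch of such a session-wide switch (the name keep_ray_order and this mechanism are a proposal, not existing Sage API):

    # in sage/geometry/cone.py
    _keep_ray_order_default = False

    def keep_ray_order(value=True):
        # Change the default ray-order behaviour for the current session;
        # Cone() would consult this flag when keep_order is not given explicitly.
        global _keep_ray_order_default
        _keep_ray_order_default = bool(value)
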

novoselt commented 13 years ago
comment:33

I have started http://groups.google.com/group/sage-combinat-devel/browse_thread/thread/f3f3e9ceb93fe7bd

vbraun commented 13 years ago
comment:34

Since the faces of P and (2*P).polar() have no particular relationship I see it as a feature that we don't pretend otherwise. In the worst case you have to separately construct the face lattice of 2*P. Is it really that time-critical? Why can't you just work with P alone in that case?

The underlying implementation may very well rely on some ray order, this is why we have the check=False option. Just make sure that it is exposed by some reasonable class/method hierarchy, and make sure to spell out that the underlying implementation is subject to change and everyone has to go through the dual_face() method or whatever it is called.

It's funny that you mention the ideals, because they make no promise about the generators. The MPolynomialIdeal class tries to abstract the mathematical notion of ideal; there is no particular choice of generators implied (and any particular choice is subject to change without notice). This is why methods like ideal.groebner_basis() return a PolynomialSequence and not an MPolynomialIdeal.

novoselt commented 13 years ago
comment:35

Replying to @vbraun:

> Since the faces of P and (2*P).polar() have no particular relationship I see it as a feature that we don't pretend otherwise. In the worst case you have to separately construct the face lattice of 2*P. Is it really that time-critical? Why can't you just work with P alone in that case?

I was talking about P and 2P, whose faces are in as direct a relationship as it gets. But it also implies that faces of P and (2P).polar() are in a canonical inclusion-reversing bijection. If the order of their faces "matches", one can iterate over related pairs without any extra effort/code, and I found it to be extremely convenient and natural (having said that, I admit that I am biased and there can be a cleaner interface). The necessity to do such iterations arises when working with Hodge numbers and generating functions corresponding to nef-partitions. Regarding time: my first straightforward implementation, using 2y as just a polytope obtained from y, was taking about an hour on a certain (simple) example. Some optimization, in particular enforcing face order on P and 2P and using 2y as a face of 2P, brought it down to less than a minute. Of course, part of the problem is that PALP is used via system calls, so combining many of them into a single one helps a lot, but in any case nothing can beat zip(P.faces(2), twoP.faces(2)) ;-) Anyway, I plan to redo lattice polytopes once I'm done with my thesis, and I'll try to both explain this efficient method in the documentation and provide a more natural interface whenever possible.

> It's funny that you mention the ideals, because they make no promise about the generators.

I meant that by default they don't even compute a minimal set of generators. But I don't recall any use for a non-minimal representation of cones, so I don't think there is any point in doing the same for cones.

We got a reply from Nicolas on sage-combinat, which I copy below:


> Just two cents from an outsider (I certainly will have a need for Cone's at some point, but don't have practical experience).
>
> When there is no clear cut answer for a design decision, I tend, whenever possible, to just postpone it; more often than not, the answer will become clear by itself after accumulating practical experience. In that case, there could be an option like:
>
>    Cone(rays=[(1,0),(0,1)], keep_order = True)
>
> and the documentation could explicitly specify that the default value is currently undefined, and will be chosen later on. I guess for the moment I would unofficially set it to False, since that's the cheapest while True is somehow "adding a feature"; so that's less likely to break code in case of change later on.


So, I guess, I am currently losing 1:2, unless you have changed your mind ;-)

If not, I propose the following:

  1. State in the documentation of Cone that by default the order of rays is going to be "fixed but random" and in particular may change in future versions of Sage. (Which may very well happen due to an upgrade of PPL. Personally, I don't like when these "random orders" change and it is yet another reason to stick with the user ordering ;-))
  2. Add keep_order=False to the list of parameters. If keep_order=True, well, keep the order as much as possible in the strictly convex case, i.e. throw away extra generators from the original ordered list. If keep_order=True and the cone is not strictly convex, perhaps give a UserWarning like "keep_order=True does not affect non-strictly convex cones, see check=False instead!"
  3. Add a function sage.geometry.cone.keep_order(True/False), without importing it into the global name space, that will switch the default behaviour for the current session, so that if users always want keep_order=True, they don't have to repeat it all the time. Mention this function in the description of keep_order parameter in Cone.
  4. Perhaps in the documentation of that function we may mention that if users feel strongly that it should be the default always, they can explain it on the above sage-combinat discussion. (I suppose it is OK to include such links in docstrings.)
  5. In the documentation examples where the ray order is important, use keep_order=True instead of check=False (there are some examples in the patch where you have added this option).
  6. Maybe keep_ray_order is better than just keep_order.

Does it sound like a good compromise? (I.e. the one that leaves everyone mad (c) Calvin ;-))

vbraun commented 13 years ago
comment:36

How about ordered=[True|False], just to throw out yet another possibility. Though either name is fine with me. I agree that passing this option is preferable to check=False.

I don't see any point in documenting a way to change the default behavior. If anybody wants that, they can just overwrite Cone() in the global namespace with their own version. At least then it's expected that no doctests work anymore. I don't remember seeing a similar function anywhere else in the Sage library.

novoselt commented 13 years ago
comment:37

ordered seems unintuitive to me, let's stick with keep_order as the middle-length option.

I think that functions adjusting default behaviour are awesome and convenient (for those who use them). automatic_names comes to mind and there is something else similar in spirit. But if you are totally against it, scratch 3&4.

vbraun commented 13 years ago
comment:38

I don't see automatic_names as anything in that spirit. It only changes the preparser. Which is special since you don't call it yourself usually, so you can't pass options to it.

novoselt commented 13 years ago
comment:39

Hi Volker, I have started work on the reviewer patch, so please don't change the attached ones. (They don't seem to apply cleanly anymore, but I will take care of it.)

Hi Nicolas, I am adding you to the cc field in case you have further comments on the ray order ;-)

I have realized that

So instead of keep_order I would like to add the above check. Advantages:

  1. It is convenient if the user cares about the ray order, e.g. for constructing an affine toric variety straight from a cone, when one may want to associate particular variable names with particular rays.
  2. It allows easy writing of doctests which explicitly show rays without worrying that the order will change in future releases and a bunch of trivial adjustments will be required.
  3. It constructs the same cones with check=True and check=False options in cases when it is indeed OK to use check=False option.
  4. It is a bit more intuitive to the user and in particular nice for tutorials and live demonstrations.
  5. It makes me happy ;-)

Frankly, I cannot really think of any disadvantages...

So - can I proceed and implement this?

novoselt commented 13 years ago
comment:40

Why

lattice = rays[0].parent() 

was replaced with

lattice = rays[0].parent().ambient_module()

on the new line 378? I think if the input rays live in some sublattice, then the constructed cone also should live in the same sublattice. This distinction is important e.g. for constructing dual cones (I don't think that such duals will currently work, but it is not a reason to prohibit using sublattices ;-))

There are also several doctests where you have replaced things like Cone([(0,0)]) with Cone([], lattice=ToricLattice(2)) - what is your objection against the first form? I think it is a convenient and natural way to construct the origin cone, if you don't care about lattices.
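A hedged illustration of the distinction (assuming ToricLattice sublattices expose the generic free-module submodule API):

    N = ToricLattice(3)
    Ns = N.submodule([N([1, 1, 0]), N([0, 1, 1])])   # a rank-2 sublattice of N
    ray = Ns.gen(0)
    ray.parent()                    # the sublattice Ns
    ray.parent().ambient_module()   # the full lattice N
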

novoselt commented 13 years ago
comment:41

In the intersection method of cones there is now

        if self.lattice_dim() != other.lattice_dim():
            raise ValueError('The cones must be in same-dimensional lattices.')

Why? And why in this form? I think that we probably should check that lattices are compatible, but not by dimension equality. We probably don't want to allow intersecting a cone in N with a cone in M. On the other hand, it is reasonable to allow intersection of cones living in sublattices of the same lattice. The lattice of the intersection should be the intersection of sublattices. Any objections?
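A hedged sketch of the check I have in mind (illustrative names, not the actual implementation; it assumes the lattices are free modules supporting ambient_module() and intersection()):

    def intersection_lattice(L1, L2):
        if L1 is L2:
            return L1
        # Only cones whose lattices sit inside the same ambient lattice
        # should be intersected; the result lives in the lattice intersection.
        if L1.ambient_module() is not L2.ambient_module():
            raise ValueError('cones must be in compatible lattices')
        return L1.intersection(L2)
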

novoselt commented 13 years ago

Work Issues: intersect sublattices

novoselt commented 13 years ago
comment:42

First of all, what I have done with Volker's patches:

  1. Folded and rebased them (on top of the 3 FanMorphism patches, as I wanted to check that nothing breaks; without them there is one fuzzy hunk but it still applies).
  2. Removed doctest fixes related to the ray order as it is restored by the reviewer patch.
  3. Removed the modification to cone-from-polyhedron code since the old one still works fine and makes all the checks.
  4. Left the original version of the line mentioned in #10140 comment:40, since it seems to be the correct one to me.

The reviewer patch does the following:

  1. Preserves ray order for strictly convex cones given by the minimal generating set of rays, without performance penalty.
  2. Removes the "self.dual_lattice() is self.lattice()" check, as dual_lattice now returns ZZ^n if there is no "honest dual". (This was not the case when this ticket was created; see the illustration below.)
  3. Modifies the intersection code (discussed above) slightly. This is not its final version, I need to play a bit with lattice intersection but I think it is the way to go.

While I was writing it, I ran tests on top of clean sage-4.7.alpha4 and there are breaks which seem to be related to rays[0].parent() without .ambient_module(). I believe that they are fixed by #10882.
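A hedged illustration of the dual_lattice() behaviour mentioned in point 2 (assuming the ZZ^n fallback described there; output omitted):

    N = ToricLattice(3)
    Cone([(1, 0, 0), (0, 1, 0)], lattice=N).dual_lattice()   # the dual lattice M
    Cone([(1, 0), (0, 1)], lattice=ZZ^2).dual_lattice()      # falls back to ZZ^2
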

novoselt commented 13 years ago

Description changed:

--- 
+++ 
@@ -12,6 +12,8 @@
 5 loops, best of 3: 42.1 s per loop

+Depends on #10039: Make Parma Polyhedra Library a standard library

-Depends on #10039: Make Parma Polyhedra Library a standard library
+Apply
+1. attachment: trac_10140_base_cone_on_ppl_original.patch
+2. attachment: trac_10140_reviewer.patch

vbraun commented 13 years ago
comment:43

Sounds good. I don't remember why I added rays[0].parent().ambient_module() but I'm pretty sure you'll find out when you run the doctests.

I don't know any place where we actually use intersections of cones in different sublattices. I would be fine with leaving this NotImplemented.