Closed f29946bc-ee7b-48cd-9abc-3445948c551d closed 8 years ago
Commit: 0d472a6
Sounds good, but don't you think it may be useful to know where the poset splits? Also, why is it only defined for lattices? The algorithm works in all cases.
I did not test it, but from the look of the code I am not sure that it works for the chain of length 2, as the docstring indicates. Could you add a doctest for that?
Nathann
Replying to @nathanncohen:
Sounds good, but don't you think it may be useful to know where the poset splits?
Yes, I think that would be useful. For posets we have `is_connected()`, `connected_components()` and `disjoint_union()`. I guess we should have `is_vertically_decomposable()`, `vertically_indecomposable_parts()` and `vertical_sum()` for lattices.
There are of course other options, like having a function (this one, with an argument?) returning a list of "decomposition elements". The user could then run `interval()` on them to get the parts.
Also, why is it only defined for lattices? The algorithm works in all cases.
How should it be defined on non-connected posets? And I am not sure if this works with non-bounded posets; I thought about bounded ones when writing this.
I did not test it, but from the look of the code I am not sure that it works for the chain of length 2, as the docstring indicates. Could you add a doctest for that?
Arghs! You are right, of course. I forgot the special case when writing the code. I'll correct it.
(Btw, this would be a nice exercise of (totally unneeded) optimization. One should not need to look at all edges of the Hasse diagram to see that a poset is indecomposable.)
There are of course other options, like having a function (this one, with an argument?) returning a list of "decomposition elements".
+1 to that.
How should it be defined on non-connected posets? And I am not sure if this works with non-bounded posets; I thought about bounded ones when writing this.
Hmmm, okay okay... I attempted to write a definition, but indeed for non-lattices you have 1000 different corner cases, and the definition would be a mess.
(Btw, this would be a nice exercise of (totally unneeded) optimization. One should not need to look at all edges of the Hasse diagram to see that a poset is indecomposable.)
What do you mean? Your algorithm looks very reliable. I do not see it wasting much.
Nathann
Replying to @nathanncohen:
There are of course other options, like having a function (this one, with an argument?) returning a list of "decomposition elements".
+1 to that.
OK. What should be the name of the argument? `certificate`? `give_me_the_list=True`?
How should it be defined on non-connected posets? And I am not sure if this works with non-bounded posets; I thought about bounded ones when writing this.
Hmmm, okay okay... I attempted to write a definition, but indeed for non-lattices you have 1000 different corner cases, and the definition would be a mess.
Except for the 2-element lattice there is one simple definition that generalizes this:
any(P.cover_relations_graph().is_cut_vertex(e) for e in P)
But in any case, it is easy to move this to posets later if we want so.
(Btw, this would be a nice exercise of (totally unneeded) optimization. One should not need to look at all edges of the Hasse diagram to see that a poset is indecomposable.)
What do you mean? Your algorithm looks very reliable. I do not see it wasting much.
If the poset has coverings `2 -> 6` and `4 -> 9`, then no element in `3..8` can be a decomposition element. After finding, say, `2 -> 6` we could check `5 ->`, `4 ->` and so on. But after finding `4 -> 9` we would need a somewhat complicated stack to skip re-checking the biggest covers of `4` and `5`. I guess that the algorithm would be slower in practice, but I am quite sure that it would be better in some theoretical sense.
OK. What should be the name of the argument? `certificate`? `give_me_the_list=True`?
Isn't there a terminology for those points? If it is only for lattices, maybe you could have `return_cutvertices=True` or something?
Except for the 2-element lattice there is one simple definition that generalizes this:
any(P.cover_relations_graph().is_cut_vertex(e) for e in P)
Wouldn't work for a poset on three elements, one being greater than the two others (which are incomparable).
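To make the failure concrete, here is a small plain-Python check (the helper `is_cut_vertex` is a hypothetical stand-in for the graph method, written out so the example is self-contained): in the poset on `{0, 1, 2}` with `0 < 2` and `1 < 2`, the cover graph is the path `0 - 2 - 1`, so the top element `2` is a cut vertex even though the poset is not a vertical sum of two nonempty parts.

```python
def is_cut_vertex(edges, n, v):
    """True if deleting vertex v disconnects the graph on vertices 0..n-1.
    Hypothetical helper for illustration, not the Sage method."""
    adj = {u: set() for u in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    rest = [u for u in range(n) if u != v]
    seen, stack = {rest[0]}, [rest[0]]
    while stack:  # depth-first search avoiding v
        for w in adj[stack.pop()]:
            if w != v and w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) < len(rest)

# Cover graph of the poset 0 < 2, 1 < 2 is the path 0 - 2 - 1:
print(is_cut_vertex([(0, 2), (1, 2)], 3, 2))  # True, yet no vertical sum
```

For comparison, no vertex of the diamond (the Boolean lattice on two atoms) is a cut vertex, agreeing with the diamond being vertically indecomposable.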
If the poset has coverings `2 -> 6` and `4 -> 9`, then no element in `3..8` can be a decomposition element. After finding, say, `2 -> 6` we could check `5 ->`, `4 ->` and so on. But after finding `4 -> 9` we would need a somewhat complicated stack to skip re-checking the biggest covers of `4` and `5`. I guess that the algorithm would be slower in practice, but I am quite sure that it would be better in some theoretical sense.
Hmmm... Skipping some edges without an additional assumption on the order in which they are returned? I do not know... This is not so bad, for the moment :-)
Nathann
Branch pushed to git repo; I updated commit sha1. New commits:
893ecc1 | Added an option to get "decomposing elements". |
You don't have to write this algorithm twice to make it work in all situations. Once is enough. And if you are worried about the cost of an 'if' inside the loop, then you should not be writing Python code.
Furthermore, be careful with '::' as they are not needed after an INPUT block. Build the doc to check it.
Nathann
Now it should work with the empty lattice, the 1-element lattice and the 2-element lattice. There is a backend ready for extending the function in `lattices.py`. I may modify it as suggested by Nathann in comment 9. But the more important question:
How exactly should we define "decomposing elements"? Let's start with `Posets.ChainPoset(2).ordinal_sum(Posets.BooleanLattice(3), labels='integers')`. Is `0` a decomposing element? What are the "components" of the lattice? Maybe `0-1`, `1-2` and `2-9`. But then, what are the components of the 2-element lattice?
Replying to @nathanncohen:
If the poset has coverings `2 -> 6` and `4 -> 9`, then no element in `3..8` can be a decomposition element. After finding, say, `2 -> 6` we could check `5 ->`, `4 ->` and so on. But after finding `4 -> 9` we would need a somewhat complicated stack to skip re-checking the biggest covers of `4` and `5`. I guess that the algorithm would be slower in practice, but I am quite sure that it would be better in some theoretical sense.
Hmmm... Skipping some edges without an additional assumption on the order in which they are returned?
I don't mean that. If the lattice has `100` elements, then `0` is the bottom and `99` is the top. If the lattice has coverings `0 -> 37`, `34 -> 88` and `77 -> 99`, then it is not vertically decomposable. There might be a faster way to find those coverings than going through all elements. But the code would be much more complicated.
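For reference, the straightforward linear scan being discussed can be sketched in plain Python (an illustrative sketch only, not the Sage implementation; the function name and the input convention are my own): elements `0..n-1` are assumed to be listed along a linear extension with `0` the bottom and `n-1` the top, and an element is a decomposition element exactly when no covering relation jumps over it.

```python
def vertical_decomposition_elements(covers, n):
    """Illustrative sketch: return the elements e (0 < e < n-1) that no
    covering relation (a, b) with a < e < b jumps over.  Elements 0..n-1
    are assumed to be a linear extension with 0 = bottom, n-1 = top."""
    # reach[a] = largest upper end among covers starting at a
    reach = [0] * n
    for a, b in covers:
        reach[a] = max(reach[a], b)
    elems, highest = [], 0
    for e in range(1, n - 1):
        # highest = furthest point reached by any cover starting below e
        highest = max(highest, reach[e - 1])
        if highest <= e:  # no cover jumps over e
            elems.append(e)
    return elems

# Chain 0 < 1 < 2: the middle element is a decomposition element.
print(vertical_decomposition_elements([(0, 1), (1, 2)], 3))  # [1]
# Diamond 0 < 1, 2 < 3: no decomposition elements.
print(vertical_decomposition_elements([(0, 1), (0, 2), (1, 3), (2, 3)], 4))  # []
```

This touches every covering relation once, matching the "linear in the number of covering relations" complexity mentioned in the ticket; the skipping idea above would try to avoid inspecting some covers at all.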
Could you also add to your docstring a reference toward a textbook that defines this notion?
Branch pushed to git repo; I updated commit sha1. New commits:
ca909a0 | Indentation of INPUT block. |
Replying to @nathanncohen:
Could you also add to your docstring a reference toward a textbook that defines this notion?
Duh. "Counting Finite Lattices" by Heitzig and Reinhold defines it as "... contains an element which is neither the greatest nor the least element of L but comparable to every element of L." On the other hand, "On the number of distributive lattices" by Erné and (the same) Heitzig and Reinhold says "... if it is either a singleton or the vertical sum of two nonempty posets ...", and the vertical sum of two one-element lattices by their definition is the two-element lattice.
I select tscrim as another random victim. Travis, should we define the two-element lattice to be vertically decomposable or indecomposable?
(Or raise `OtherError("developers don't know how to define this")`? :=) )
Description changed:
---
+++
@@ -1 +1,10 @@
This patch adds a function `is_vertically_decomposable` to finite lattices.
+
+For testing see https://oeis.org/A058800 ; for example
+
+```
+sum([1 for L in Posets(6) if L.is_lattice() and
+ not LatticePoset(L).is_vertically_decomposable()])
+```
+
+returns 7 as it should.
OEIS uses the definition where the two-element lattice is vertically indecomposable: https://oeis.org/A058800. Does this suffice as a base for the definition?
Helloooooooo !
Yeah yeah I guess. Could you just add a link in the doc toward a textbook/paper that defines it the way you use it?
(Or raise OtherError("developers don't know how to define this")?
We have had very non-enlightening debates here about whether the empty graph is connected or not. In such a situation, I would add such a warning rather than have those stupid conversations :-P
Nathann
Branch pushed to git repo; I updated commit sha1. New commits:
667173c | Options to is_vertically_decomposable(). |
This code should now work.
I still don't know how to design the user interface... For posets we have the boolean-valued `is_connected()` and the subposet-valued `connected_components()`. But what should the function returning only the "decomposing elements" be? I will ask on sage-devel.
Comments on documentation are welcome.
Description changed:
---
+++
@@ -8,3 +8,5 @@
returns 7 as it should.
+
+There is a place for possible optimization: if there are, say, covering relations `bottom -> 5`, `3 -> 8` and `7 -> top`, that suffices to show that the lattice is not vertically decomposable. This might be faster on average. Now the complexity is linear in the number of covering relations.
`return_type` vs. `return_elements`.
Branch pushed to git repo; I updated commit sha1. New commits:
5e7224a | Check for 0- and 1-element lattices. |
Replying to @nathanncohen:
`return_type` vs. `return_elements`.
But they are different things. The "internal" function in `hasse_diagram.py` has only one yes/no argument, so a Boolean seems right. The "interface" function in `lattices.py` has three possible inputs.
But they are different things. The "internal" function in `hasse_diagram.py` has only one yes/no argument, so a Boolean seems right. The "interface" function in `lattices.py` has three possible inputs.
Sorry, I removed my comment on the argument's type right after I posted it, it was a mistake. The one I left, however, is about the fact that the argument that appears in the doc is not the one that appears in the function's definition.
Nathann
Branch pushed to git repo; I updated commit sha1. New commits:
d42f0ab | Splitted function, added examples. Also categorized the index of function. |
I will wait until #17226 gets to beta, and then rebase this. (Forgot that it was not there yet.) This will also categorize the index of functions.
I split the function into two parts. I will also wait to see if somebody comments on this on sage-devel. I don't know what the name of the argument for `vertical_decomposition()` should be.
So this is open for comments and de facto ready for review, but not de jure in the needs_review phase.
Changed commit from d42f0ab to none
Changed branch from u/jmantysalo/vertically_decomposable to none
Branch pushed to git repo; I updated commit sha1. New commits:
c0455d6 | Added vertical decomposition of the lattice. Also change index of function. |
Commit: c0455d6
OK, here is one possible way to split the functionality. Ready for review.
Branch pushed to git repo; I updated commit sha1. New commits:
abe1642 | Simplification as suggested by ncohen. |
Replying to @nathanncohen:
You don't have to write this algorithm twice to make it work in all situations. Once is enough.
Done. Now compiling; I will change this to needs_review if it seems to work.
Branch pushed to git repo; I updated commit sha1. New commits:
0abc938 | Special cases for empty, one- and two-element lattices. |
And yes, there was a bug. Now at last this should be ready for review.
(A thematic index of functions might look unnecessary for now. However, see #19197 etc.)
Nathann, what about this: you have already read the code, and the 2-element lattice case is now documented. Hence there are two questions left:
Jori,
I thought a bit before answering your email, because the reason I had not done anything on the ticket during the last 6 days is that I had chosen to not work on it anymore. I do not often "forget" things like tickets in needs_review: my mail inbox contains all the things I must attend to, and they all remain there until I do what I think I should do with them.
Among the reasons that led me there is that nothing specific makes your code invalid, and I have no reason to ask you to change it just because it does not suit my taste. I like things short, simple, concise. Three functions for only one feature is beyond me, it angers me by itself.
If I were to write it, you would have one Lattice method which would directly work on the Hasse diagram, with a `return_recomposition` boolean flag to return lists instead of boolean answers. One function, 20 lines, end of story.
Right now, the Lattice method `vertical_decomposition` contains around 20 lines of code, none of which has the slightest interest to me. It's just wrapping things into other things, and testing things that are already tested elsewhere.
What I know, however, is that it is impossible for you to get any kind of code into Sage and to work with it unless you have somebody to review your code. I surely know that. Depending on what I work on, depending on the times, it is either easy or hard to get anything in there, and from time to time I think that it would be better if you were allowed to put any code that you like into Sage without needing reviewers like me who drag their feet at every occasion.
Also, I admit that I do not have the energy to discuss the implementation details endlessly, and I also hate that this process may require you to implement code only because the only reviewer you have has a different taste.
Truth is, I don't want to be the reason why you cannot work properly on Sage's code, and I don't have a lot of ways out as not many would do the reviewing job otherwise. So I will try this: I will implement this as is the most natural to me, and you can feel free to not use it if you do not like it. Let's see how it works.
Sorry for the painful reviews.
Nathann
P.S.: the code is at u/ncohen/19123
Hmm... I continue to think. Or hope that somebody else reviews this.
I hope that having this in `hasse_diagram.py` is useful later; it can be a quick optimization before `frattini_sublattice`.
does not apply, needs rebase
Branch pushed to git repo; I updated commit sha1. New commits:
1356d17 | Rebasing. |
Err. Terminology nazi here: what you did is a 'merge'. A rebase is a different operation, which moves the commits around.
Nathann
This patch adds a function `is_vertically_decomposable` to finite lattices.

For testing see https://oeis.org/A058800 ; for example `sum([1 for L in Posets(6) if L.is_lattice() and not LatticePoset(L).is_vertically_decomposable()])` returns 7 as it should.

There is a place for possible optimization: if there are, say, covering relations `bottom -> 5`, `3 -> 8` and `7 -> top`, that suffices to show that the lattice is not vertically decomposable. This might be faster on average. Now the complexity is linear in the number of covering relations.

CC: @nathanncohen @tscrim @kevindilks
Component: combinatorics
Author: Jori Mäntysalo
Branch/Commit: bf4d108
Reviewer: Kevin Dilks
Issue created by migration from https://trac.sagemath.org/ticket/19123