mantepse closed this issue 2 years ago
Unfortunately, there seems to be an inaccuracy in `Stream_plethysm`:
sage: h = SymmetricFunctions(QQ).h()
sage: p = SymmetricFunctions(QQ).p()
sage: L = LazySymmetricFunctions(p)
sage: X = L(p[1])
sage: e = L(lambda n: h[n]) - 1 - X
sage: g = L(None, valuation=1)
sage: g.define(X - e(g))
sage: g[0]
0
sage: g[1]
p[1]
sage: g[2]
...booooom... (max recursion error)
In the example, `e` has valuation 2, so to compute the degree 2 piece of `e(g)` we should only be accessing the degree 1 piece of `g`. But in `Stream_plethysm.get_coefficient`, we compute `self._compute_product(2, [1, 1], 1/2)`, which in turn accesses `self._right[2]`.
An easy fix is as follows:
diff --git a/src/sage/data_structures/stream.py b/src/sage/data_structures/stream.py
index e54a90b8b88..b61d29544c2 100644
--- a/src/sage/data_structures/stream.py
+++ b/src/sage/data_structures/stream.py
@@ -1763,6 +1763,14 @@ class Stream_plethysm(Stream_binary):
         p = self._p
         ret = p.zero()
         for mu in wt_int_vec_iter(n, la):
+            if any(j < self._gv for j in mu):
+                continue
             temp = c
             for i, j in zip(la, mu):
                 gs = self._right[j]
                 if not gs:
                     temp = p.zero()
                     break
                 temp *= p[i](gs)
             ret += temp
         return ret
However, it would probably be better not to generate these integer vectors in the first place. Moreover, it is possibly a waste to compute `p[i](gs)` for each `i` in `la` separately, if `la` has many repeated parts.
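For illustration, here is a rough sketch of that caching idea. The standalone helper and its signature are hypothetical (this is not the actual `Stream_plethysm._compute_product`), but it follows the same loop structure as the diff above and remembers `p[i](gs)` per pair of part and degree piece:

from sage.combinat.integer_vector_weighted import iterator_fast as wt_int_vec_iter

def compute_product_cached(p, n, la, c, right):
    # right[j] is assumed to be the homogeneous degree-j piece of the inner series
    cache = {}
    ret = p.zero()
    for mu in wt_int_vec_iter(n, la):
        temp = c
        for i, j in zip(la, mu):
            if (i, j) not in cache:
                gs = right[j]
                # the plethysm p[i](gs), computed once per (part, degree) pair
                cache[i, j] = p[i](gs) if gs else p.zero()
            if not cache[i, j]:
                temp = p.zero()
                break
            temp *= cache[i, j]
        ret += temp
    return ret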
I have to leave now, but I just saw that possibly the implementation of plethysm in the species directory is more efficient.
I slightly improved the implementation. In particular, we can now specify degree one elements in the same way as for plethysm, and it is (slightly :-) faster now:
sage: from sage.data_structures.stream import Stream_function, Stream_plethysm, Stream_plethysm_old
sage: s = SymmetricFunctions(QQ).s()
sage: p = SymmetricFunctions(QQ).p()
sage: f = Stream_function(lambda n: s[n], s, True, 1)
sage: g = Stream_function(lambda n: s[[1]*n], s, True, 1)
sage: h = Stream_plethysm(f, g, p)
sage: %time _ = h[10]
CPU times: user 122 ms, sys: 6 µs, total: 122 ms
Wall time: 122 ms
sage: h2 = Stream_plethysm_old(f, g, p)
sage: %time _ = h2[10]
CPU times: user 2.13 s, sys: 0 ns, total: 2.13 s
Wall time: 2.13 s
Last 10 new commits:
9d6579b | improve documentation, move zero, one, characteristic, etc. to ABC |
feba6b8 | Working more on `__call__` for LazySymFunc. |
3f3e0f2 | Merge branch 'public/rings/lazy_talyor_series-32324' of https://github.com/sagemath/sagetrac-mirror into public/rings/lazy_talyor_series-32324 |
6727228 | Merge branch 'public/rings/lazy_talyor_series-32324' of trac.sagemath.org:sage into t/32324/public/rings/lazy_talyor_series-32324 |
028796d | Fixing numerous issues with `__call__` and expanding its functionality. Moving plethysm to a Stream_plethysm. |
9fb155f | Removing unused code from previous version. |
7f9dbb1 | Some last doc fixes and tweaks. |
4e03fee | remove unused local variable |
e780472 | Addressing the linter complaint. |
d5b86a8 | implement revert, improve plethysm |
Changed keywords from none to LazyPowerSeries
Author: Martin Rubey
Next I'd like to implement `revert` for Taylor series, `derivative`, and `derivative_with_respect_to_p1`.
Still missing: `functorial_composition`, `arithmetic_product`, and `logarithm` for symmetric functions.
All of these are needed for #32367.
Should we `def plethysm` and make `__call__` an alias instead? This is the way it is done in `combinat/sf/sfa.py`.
I agree that we should throw out integer vectors that are asking for things less than the valuation. Actually, this is simple to modify with the current code. Just subtract `valuation * sum(la)` from `n`. Then you just add the valuation back to each component of the integer vector.
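A minimal sketch of that shift (the helper below is hypothetical; `val` stands for the valuation of the inner series): generate the vectors for the reduced total weight and then shift every component back up by `val`.

from sage.combinat.integer_vector_weighted import iterator_fast as wt_int_vec_iter

def shifted_weighted_vectors(n, la, val):
    # vectors mu with sum(la[k] * mu[k]) == n and every mu[k] >= val,
    # obtained by shifting vectors nu of total weight n - val * sum(la)
    shift = val * sum(la)
    if shift > n:
        return
    for nu in wt_int_vec_iter(n - shift, la):
        yield [j + val for j in nu]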
It sure looks like you have just essentially duplicated the plethysm code, which I don't think we should do, especially for marginal speed gains. I think you are better off improving the symmetric functions code directly.
Another micro-optimization that can be done is to store `len(l)` in the `integer_vector_weighted.iterator_fast` code since this never changes (it can be surprising how much this extra little function call can affect speed).
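Generic illustration only (this is not the actual `iterator_fast` code): hoist the unchanging `len(l)` out of the loop condition.

def first_index_reaching(l, bound):
    # before: len(l) is re-evaluated on every iteration of the while loop
    k = 0
    while k < len(l) and l[k] < bound:
        k += 1
    return k

def first_index_reaching_hoisted(l, bound):
    # after: store len(l) once, since l does not change inside the loop
    m = len(l)
    k = 0
    while k < m and l[k] < bound:
        k += 1
    return k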
Replying to @mantepse:
Should we `def plethysm` and make `__call__` an alias instead? This is the way it is done in `combinat/sf/sfa.py`.
It doesn't make any difference to me.
I don't understand. My code is quite different, and I gain a factor 20 on the original example.
Sorry, I just read what you wrote and took it at face value rather than actually reading the example. Indeed, that is quite an impressive speedup. I think my comment still holds about integrating it directly into the symmetric functions code. Superficially it still looks generally like the symmetric functions code. What have you changed to get that improvement?
Oh, I am sorry, that teaches me a lesson! Could we perhaps do a Zoom meeting today? Maybe at 5pm Japan time?
Sounds good. I responded via email as well.
Branch pushed to git repo; I updated commit sha1. New commits:
6eebb35 | do not assume that the approximate valuation will not change over time |
Dependencies: #32324
NB: the new implementation of plethysm appears to be faster than the one in sage.combinat.sf.sfa!
Some things that should be fixed:

- `_scale_part` and `_scale_c` need doctests. I might just inline the code however, as they are such small and simple functions (they are needed for the symmetric functions implementation, but I think that could be improved).
- Implement `revert` in terms of (lazy) symmetric functions rather than species.
- Instead of `sum_of_terms()`, I would just directly create a dictionary and go through `p._from_dict()` to avoid the redirection. Even though it's cached, I would make this change:
-        if power[d]:
-            terms = [(self._scale_part(m, i), self._raise_c(c, i)) for m, c in power[d]]
+        val = power[d]
+        if val:
+            terms = {self._scale_part(m, i): coeff for m, c in val if (coeff := self._raise_c(c, i))}
         else:
-            terms = []
+            return self._p.zero()
-        return self._p.sum_of_terms(terms, distinct = True)
+        return self._p._from_dict(terms, remove_zeros=False)
(I only very recently learned of the `:=` syntax. It is so great to have a way to do an assignment inside an expression in Python. It makes writing code like the above so much easier.)
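A tiny generic illustration of the `:=` pattern used above (plain Python, unrelated to the Sage code): the value is assigned inside the comprehension's condition and reused in the resulting entry, so zero values are filtered out in the same pass.

data = {"a": 2, "b": 0, "c": -3}
doubled_nonzero = {k: v2 for k, v in data.items() if (v2 := 2 * v)}
# doubled_nonzero == {"a": 4, "c": -6}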
However, there is also a decent part of me that would like to see the code duplication with `sfa.plethysm()` reduced, in particular with the logic around the include/exclude. Although the same could be said for `_scale_part` and `_scale_c`, which are (essentially) the same as in the symmetric functions.
Lastly, shouldn't the compositional inverse also be implemented for lazy Laurent and Taylor series?
Branch pushed to git repo; I updated commit sha1. New commits:
eabb0cb | add some (currently failing) doctests |
Branch pushed to git repo; I updated commit sha1. New commits:
84547ca | more doctests |
Branch pushed to git repo; I updated commit sha1. New commits:
634edfa | fix bug in LazyCauchyProductSeries._mul_, be more exact in LazyLaurentSeries.revert |
revert for Taylor is still missing (possibly I can simply redirect to Laurent), and revert for SymmetricFunctions still needs some care.
Branch pushed to git repo; I updated commit sha1. New commits:
74841c0 | improve revert of LazySymmetricFunction |
Branch pushed to git repo; I updated commit sha1. New commits:
a8e663a | final fixes for LazySymmetricFunction.revert |
Branch pushed to git repo; I updated commit sha1. New commits:
5436995 | final fixes for LazySymmetricFunction.revert, part 2 |
Concerning code duplication:

- `scale_part` might be a useful action on partitions. Can this be made fast?
- the handling of degree one elements could go into a separate top-level function, but I have no idea where to place it. Possibly `sfa.py`? Alternatively it could be a method on rings. It might look as follows:
def _degree_one_elements(R, include=None, exclude=None):
    """
    Return variables in the ring `R`.

    INPUT:

    - ``R`` -- a :class:`Ring`

    - ``include``, ``exclude`` (optional, default ``None``) --
      iterables of variables in ``R``

    OUTPUT:

    - If ``include`` is specified, only these variables are returned
      as elements of ``R``. Otherwise, all variables in ``R``
      (recursively) with the exception of those in ``exclude`` are
      returned.
    """
    if include is not None and exclude is not None:
        raise RuntimeError("include and exclude cannot both be specified")
    if include is not None:
        degree_one = [R(g) for g in include]
    else:
        try:
            degree_one = [R(g) for g in R.variable_names_recursive()]
        except AttributeError:
            try:
                degree_one = R.gens()
            except NotImplementedError:
                degree_one = []
    if exclude is not None:
        degree_one = [g for g in degree_one if g not in exclude]
    return [g for g in degree_one if g != R.one()]
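For illustration, a hypothetical (untested) usage of this helper on a polynomial ring, assuming it is defined as above:

sage: R.<x, y> = QQ[]
sage: _degree_one_elements(R)
[x, y]
sage: _degree_one_elements(R, exclude=[y])
[x]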
Besides, is the following a bug?
sage: ZZ.variable_names()
('x',)
sage: ZZ
Integer Ring
I hope to implement multivariate plethysm and reversion for Taylor today, and fix the remaining (few) issues in #32367. Wish me good luck.
Replying to @mantepse:
- scale_part might be a useful action on partitions. Can this be made fast?
No faster than the current implementation I think. However, +1 for making it a method of partitions. (Note that it should return an element of `_Partitions`, not the parent of itself.)
- the handling of degree one elements could go into a separate top-level function, but I have no idea where to place it. Possibly sfa.py? Alternatively it could be a method on rings. It might look as follows:
[the proposed `_degree_one_elements` helper is quoted here in full; see above]
+1 as a top-level function in `sfa.py`. I would call it `_parse_degree_one_elements` and maybe tweak the one-line doc to be a little more precise about what it does. However, those are fairly trivial comments.
Besides, is the following a bug?
sage: ZZ.variable_names()
('x',)
sage: ZZ
Integer Ring
I think it is the result of very legacy code, with it being a subclass of `sage.structure.parent_gens.ParentWithGens` and likely a requirement of the names being non-empty. See also
sage: QQ.variable_names()
('x',)
I would like to call it a bug, but that is perhaps slightly unfair.
Good luck!
Replying to @tscrim:
Replying to @mantepse:
- scale_part might be a useful action on partitions. Can this be made fast?
No faster than the current implementation I think. However, +1 for making it a method of partitions. (Note that it should return an element of `_Partitions`, not the parent of itself.)
Thus, it should not be `_acted_upon_`, but rather `scale`, right?
It could be an action of `NN` on partitions, but I think that is a much broader change than what is needed. If we wanted to do that, it would definitely need to be another ticket.

So yes, I would call it something like `stretch()`.
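A minimal sketch of such a method (the name `stretch` and its placement as a `Partition` method are just the suggestion above, not existing Sage API), returning an element of `_Partitions` as noted earlier:

from sage.combinat.partition import Partition, _Partitions

def stretch(self, k):
    # multiply every part of the partition by the positive integer k;
    # the parts stay weakly decreasing, so the result is again a partition
    return _Partitions([k * part for part in self])

# e.g. stretch(Partition([3, 1, 1]), 2) -> [6, 2, 2]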
Concerning the naming of the new function in `sfa.py`: it seems to me that this might be useful in other contexts, too.

It has actually very little to do with `degree_one_elements`. Rather, it provides a safe way to get all variables in a ring, possibly excluding some (and, rarely, only including some). (With the minor annoyance that it doesn't always work. For example, fraction fields over iterated polynomial rings will not give the correct answer.)

I'd thus like to call it for what it does, rather than what it is used for, e.g. `_variable_names_recursive`.
Branch pushed to git repo; I updated commit sha1. New commits:
af347b2 | refactor and doctest |
Replying to @mantepse:
I'd thus like to call it for what it does, rather than what it is used for, e.g. `_variable_names_recursive`.
That's fine by me.
Branch pushed to git repo; I updated commit sha1. New commits:
0d9f02f | leftover |
Branch pushed to git repo; I updated commit sha1. New commits:
ae99175 | plethysm with tensor products, part 1 |
Same for `LazySymmetricFunctions`. `compositional_inverse` might be a good alias.

Depends on #32324
Depends on #34453
Depends on #34494
CC: @tscrim
Component: combinatorics
Keywords: LazyPowerSeries
Author: Martin Rubey
Branch/Commit: f3f011f
Reviewer: Travis Scrimshaw
Issue created by migration from https://trac.sagemath.org/ticket/34383