sympy / sympy

A computer algebra system written in pure Python
https://sympy.org/

Finding rational solutions of polynomial inequalities #26177

Open oscarbenjamin opened 10 months ago

oscarbenjamin commented 10 months ago

This is a simple implementation of the operation described in https://github.com/sympy/sympy/issues/26162#issuecomment-1925752873.

This is essentially a simplified version of a cylindrical algebraic decomposition, but with one major restriction: it handles only strict inequalities, not equations or non-strict inequalities. This restriction simplifies the problem massively because it allows us to work almost entirely with rational numbers. Despite the limitation I still think that a function that can do this would be a very useful addition to SymPy.

Given a set of polynomials the method below finds example points giving rise to all combinations of the polynomials having different signs:

In [6]: for c, p in find_poly_sign([x**2 + y**2 - 1, x - y], [x, y]): print(p, And(*c))
{x: -3, y: -2} (x - y < 0) & (x**2 + y**2 - 1 > 0)
{x: -1, y: -2} (x - y > 0) & (x**2 + y**2 - 1 > 0)
{x: 0, y: -3/4} (x - y > 0) & (x**2 + y**2 - 1 < 0)
{x: -1/2, y: 0} (x - y < 0) & (x**2 + y**2 - 1 < 0)

In [7]: for c, p in find_poly_sign([x**2 + y**2 - 1, x*y], [x, y]): print(p, And(*c))
{x: -1, y: -2} (x*y > 0) & (x**2 + y**2 - 1 > 0)
{x: 1, y: -2} (x*y < 0) & (x**2 + y**2 - 1 > 0)
{x: -1/2, y: -1/2} (x*y > 0) & (x**2 + y**2 - 1 < 0)
{x: 1/2, y: -1/2} (x*y < 0) & (x**2 + y**2 - 1 < 0)

In [8]: for c, p in find_poly_sign([x - 1, x - 2], [x, y]): print(p, And(*c))
{x: 0, y: 0} (x - 2 < 0) & (x - 1 < 0)
{x: 3/2, y: 0} (x - 1 > 0) & (x - 2 < 0)
{x: 3, y: 0} (x - 2 > 0) & (x - 1 > 0)

This is useful for several reasons. One reason is that we can use the numerical values to check some condition that is known to depend only on the signs of the polynomials. Another reason is that this method already proves whether or not the inequalities are satisfiable. The output can also be viewed as a truth table for determining whether some inequalities imply others, so you could use this to simplify a system of inequalities or to answer something like ask(x**2+y**2 > 7, [x+y>2, x**2+y>10]) e.g.:

In [10]: for c, p in find_poly_sign([x**2 + y**2 - 7, x + y - 2, x**2 + y - 10], [x, y]): print(p, And(*c))
{x: -4, y: -3} (x + y - 2 < 0) & (x**2 + y - 10 > 0) & (x**2 + y**2 - 7 > 0)
{x: 0, y: -3} (x + y - 2 < 0) & (x**2 + y - 10 < 0) & (x**2 + y**2 - 7 > 0)
{x: 6, y: -3} (x + y - 2 > 0) & (x**2 + y - 10 > 0) & (x**2 + y**2 - 7 > 0)
{x: 0, y: -2} (x + y - 2 < 0) & (x**2 + y - 10 < 0) & (x**2 + y**2 - 7 < 0)
{x: 13/4, y: -1} (x + y - 2 > 0) & (x**2 + y - 10 < 0) & (x**2 + y**2 - 7 > 0)
{x: 2, y: 1} (x + y - 2 > 0) & (x**2 + y - 10 < 0) & (x**2 + y**2 - 7 < 0)

This shows that (x + y - 2 > 0) & (x**2 + y - 10 > 0) --> (x**2 + y**2 - 7 > 0).
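
To make the truth-table reading concrete, here is a minimal sketch of an implication check built on top of the find_poly_sign prototype below (the helper name implies is made up for illustration; it assumes all inequalities are strict and every polynomial involved is listed in polys):

```python
def implies(assumptions, conclusion, polys, syms):
    """True if the strict inequalities in assumptions force conclusion,
    given that all the polynomials involved are listed in polys."""
    for _, point in find_poly_sign(polys, syms):
        # A sample point where the assumptions hold but the conclusion fails
        # is a counterexample to the implication.
        if all(a.subs(point) for a in assumptions) and not conclusion.subs(point):
            return False
    return True

# Reproduces the implication above:
# implies([x + y - 2 > 0, x**2 + y - 10 > 0], x**2 + y**2 - 7 > 0,
#         [x**2 + y**2 - 7, x + y - 2, x**2 + y - 10], [x, y])  ->  True
```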

This is the code. It likely has bugs and can certainly be made more efficient. I just wanted to put this here for discussion for now so that the code can be improved later:

from sympy import QQ, S, Expr, Poly, ceiling, floor, sign
from sympy.polys.polytools import parallel_poly_from_expr

def find_poly_sign(exprs: list[Expr], syms: list[Expr]) -> list[tuple[list, dict[Expr, Expr]]]:
    """Find candidate points giving different signs to a set of polynomials.

    >>> from sympy.abc import x, y
    >>> find_poly_sign([x**2 + y**2, x - y], [x, y])
    [([x**2 + y**2 > 0, x - y < 0], {x: -2, y: -1}),
     ([x**2 + y**2 > 0, x - y > 0], {x: 0, y: -1})]
    """
    if not exprs:
        return [([], {s: S.Zero for s in syms})]
    polys, _ = parallel_poly_from_expr(exprs, syms, domain=QQ)
    candidates = _find_poly_sign(polys, syms)
    seen = set()
    results = []
    for candidate in candidates:
        key = tuple(sign(p.subs(candidate)) for p in polys)
        if key in seen:
            continue
        seen.add(key)
        polyvals = [e > 0 if k > 0 else e < 0 for e, k in zip(exprs, key)]
        polyvals = [i.canonical for i in polyvals]
        results.append((polyvals, candidate))
    return results

def _find_poly_sign(polys: list[Poly], syms: list[Expr]) -> list[dict[Expr, Expr]]:
    """Find candidate points giving different signs to a set of polynomials.
    """
    # No conditions to satisfy. Return (0, 0, ...)
    if not polys:
        return [{s: S.Zero for s in syms}]

    sym, rest = syms[0], syms[1:]

    # Base case: univariate polynomial. Find points numerically
    if not rest:
        [sym] = syms
        vals = _find_poly_sign_univariate(polys, sym)
        return [{sym: val} for val in vals]

    # Project out this variable and solve recursively for the others
    polys_proj = []
    for poly in polys:
        if poly.degree() > 0:
            polys_proj.append(poly.resultant(poly.diff()))
        else:
            polys_proj.append(poly(1))
    for n, poly1 in enumerate(polys):
        for poly2 in polys[n + 1:]:
            if poly1.degree() > 0 and poly2.degree() > 0:
                polys_proj.append(poly1.resultant(poly2))

    #polys_proj = [p for p in polys_proj if p.degree() > 0]

    instances_proj = _find_poly_sign(polys_proj, syms[1:])

    # For each candidate we can solve numerically for sym
    instances = []
    for inst_proj in instances_proj:
        polys_lift = [p.subs(inst_proj) for p in polys]
        #polys_lift = [p for p in polys_lift if p.degree() > 0]
        vals = _find_poly_sign_univariate(polys_lift, syms[0])
        for val in vals:
            instances.append({sym: val, **inst_proj})

    return instances

def _find_poly_sign_univariate(polys: list[Poly], sym: Expr):
    """Find numbers between which the polys change sign."""
    zeros = sorted(set().union(*[p.real_roots() for p in polys]))
    if not zeros:
        return [S.Zero]
    points = []
    points.append(_find_num_less(zeros[0]))
    for z1, z2 in zip(zeros[:-1], zeros[1:]):
        points.append(_find_num_between(z1, z2))
    points.append(_find_num_greater(zeros[-1]))
    return points

def _find_num_between(a: Expr, b: Expr) -> Expr:
    """Find a number between a and b where a and b are Rational or RootOf."""
    inf = floor(a)
    sup = ceiling(b)
    assert inf < sup
    diff = sup - inf
    if diff > 1:
        return (inf + sup) // 2
    mid = (inf + sup) / 2
    while not (a < mid < b):
        if mid <= a:
            inf = mid
        else:
            sup = mid
        mid = (inf + sup) / 2
    return mid

def _find_num_less(a: Expr) -> Expr:
    """Find a number less than a where a is Rational or RootOf."""
    inf = floor(a)
    if inf == a:
        inf -= 1
    return inf

def _find_num_greater(a: Expr) -> Expr:
    """Find a number greater than a where a is Rational or RootOf."""
    sup = ceiling(a)
    if sup == a:
        sup += 1
    return sup
oscarbenjamin commented 10 months ago

Another reason is that this method already proves that the inequalities are or are not satisfiable

Note that for checking satisfiability we know what signs each polynomial is required to have, which can be used to prune cases and make the whole operation much more efficient. It is probably possible to do some pruning in the general case too, but I am not sure what the exact conditions would be or where they could be checked. Certainly some common cases could be handled more efficiently.
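
As a minimal sketch of that idea (no pruning yet, just layering a satisfiability check on top of the unpruned find_poly_sign; the name strict_ineqs_satisfiable is a placeholder):

```python
def strict_ineqs_satisfiable(ineqs, syms):
    """Return a rational witness point if all the strict inequalities can
    hold simultaneously, otherwise None."""
    # The polynomials whose signs matter are taken directly from the inequalities.
    polys = [ineq.lhs - ineq.rhs for ineq in ineqs]
    for _, point in find_poly_sign(polys, syms):
        if all(ineq.subs(point) for ineq in ineqs):
            return point
    return None

# Unsatisfiable, by the implication shown earlier:
# strict_ineqs_satisfiable([x + y > 2, x**2 + y > 10, x**2 + y**2 < 7], [x, y])  ->  None
```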

oscarbenjamin commented 10 months ago

This is essentially a simplified version of a cylindrical algebraic decomposition but with one major restriction which is that it is only for strict inequalities and not equations or non-strict inequalities.

Note that even if sympy had CAD it would be useful to have a function like this because there are some cases where a full CAD would be far too expensive to compute but this more restricted version could still be computed. In many practical applications only full dimensional components are of interest.

oscarbenjamin commented 10 months ago

The find_poly_sign function above prunes cases that have the same signs for the polynomials. This assumes that we only care about finding cases where the polynomials have a given sign. That is different from enumerating all components within which the signs of the polynomials are unchanging, which is what the _find_poly_sign function computes. Both cases are useful in different situations. For the truth table we want the former, but in other cases we want the latter. You can see the difference in a simple case like:

In [11]: find_poly_sign([x**2], [x])
Out[11]: 
⎡⎛⎡ 2    ⎤         ⎞⎤
⎣⎝⎣x  > 0⎦, {x: -1}⎠⎦

In [12]: _find_poly_sign([Poly(x**2)], [x])
Out[12]: [{x: -1}, {x: 1}]

In more complicated cases the difference is less obvious but the number of cases can be much larger without this pruning e.g.:

In [19]: for c, p in find_poly_sign([x**2+y**2-1, x+2*y], [x, y]): print(p, And(*c))
{x: 3, y: -2} (x + 2*y < 0) & (x**2 + y**2 - 1 > 0)
{x: 5, y: -2} (x + 2*y > 0) & (x**2 + y**2 - 1 > 0)
{x: 0, y: -1/2} (x + 2*y < 0) & (x**2 + y**2 - 1 < 0)
{x: 1/2, y: 0} (x + 2*y > 0) & (x**2 + y**2 - 1 < 0)

In [20]: for p in _find_poly_sign([Poly(x**2+y**2-1, [x, y]), Poly(x+2*y, [x, y])], [x, y]): print(p)
{x: 3, y: -2}
{x: 5, y: -2}
{x: -1, y: -1/2}
{x: 0, y: -1/2}
{x: 7/8, y: -1/2}
{x: 2, y: -1/2}
{x: -2, y: 0}
{x: -1/2, y: 0}
{x: 1/2, y: 0}
{x: 2, y: 0}
{x: -2, y: 1/2}
{x: -7/8, y: 1/2}
{x: 0, y: 1/2}
{x: 1, y: 1/2}
{x: -5, y: 2}
{x: -3, y: 2}
oscarbenjamin commented 10 months ago

I can see several different cases:

  1. You want to find an example where the polynomials have a given sign (like checking satisfiability of inequalities).
  2. You want to find an example for each combination of signs (like the truth table).
  3. You want to find points that enumerate the regions where the polynomials have constant sign (like _find_poly_sign does).

Certainly the first case, and possibly the second, can be handled much more efficiently than the code shown by pruning cases in different ways inside _find_poly_sign. I suppose that a general function for this could allow an argument like list[Expr | Gt | Lt], and then if an Expr is given it is assumed that all combinations of sign are wanted. Different function names could be used to distinguish the constant-sign and sign cases, like find_poly_sign and find_poly_constant_sign.

I'm not sure what a good API is...
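
As a sketch of the mixed-argument handling suggested above (only strict relations, in keeping with the restriction to strict inequalities; the helper name _normalize_targets is made up):

```python
from sympy import Expr, Gt, Lt

def _normalize_targets(args):
    """Map each argument to (polynomial, required_sign): +1 for Gt, -1 for Lt,
    and None for a bare Expr meaning all sign combinations are wanted."""
    targets = []
    for a in args:
        if isinstance(a, Gt):
            targets.append((a.lhs - a.rhs, +1))
        elif isinstance(a, Lt):
            targets.append((a.lhs - a.rhs, -1))
        elif isinstance(a, Expr):
            targets.append((a, None))
        else:
            raise TypeError(f"expected Expr, Gt or Lt, got {type(a).__name__}")
    return targets
```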

oscarbenjamin commented 10 months ago

For comparison Mathematica has the functions SemialgebraicComponentInstances and GenericCylindricalDecomposition. Also Mathematica's CylindricalDecomposition function takes an option "op" to specify what kind of decomposition is wanted.

These functions all take inequalities as arguments though, so none of them quite produces the set of all points giving all signs, meaning that they are not directly comparable to find_poly_sign or find_poly_constant_sign (they are all case 1 above). It would still be good to add functions similar to those Mathematica functions.

https://reference.wolfram.com/language/ref/SemialgebraicComponentInstances.html https://reference.wolfram.com/language/ref/GenericCylindricalDecomposition.html https://reference.wolfram.com/language/ref/CylindricalDecomposition.html

oscarbenjamin commented 10 months ago

find_poly_constant_sign

Maybe this should be something like find_nonzero_instances. It is a function that finds rational points that (redundantly) enumerate all geometric components in which the polynomials are nonzero.

sylee957 commented 10 months ago

I think that there are only two articles so far relevant to the discussion.

https://academic.oup.com/comjnl/article/36/5/432/392361 https://core.ac.uk/download/pdf/82649664.pdf

However, I'm not really sure whether LC + discriminants + pairwise resultants is the right projection, and we should read the papers more meticulously about how the projection operators are defined. For example, McCallum's projection often has more preconditions, such as that you should take a squarefree decomposition first, or it may not always work. Also, the original McCallum projection, I believe, expects you to add all coefficients rather than just the leading coefficients, while it looks slightly different in Strzebonski's article.

We should also gather the knowledge needed to show that for strict inequalities it is no problem to use that projection operator.
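
To make the operator under discussion concrete, here is a minimal Expr-level sketch of the LC + discriminants + pairwise resultants projection (the function name and the constant-dropping step are illustrative additions; this is not a verified McCallum/Strzebonski projection):

```python
from itertools import combinations
from sympy import LC, discriminant, resultant

def projection(polys, x):
    """Project a set of polynomials (as Expr) with respect to the variable x."""
    out = set()
    out.update(LC(p, x) for p in polys)                                # leading coefficients
    out.update(discriminant(p, x) for p in polys)                      # discriminants
    out.update(resultant(p, q, x) for p, q in combinations(polys, 2))  # pairwise resultants
    return {p for p in out if p.free_symbols}  # drop constant projection polynomials
```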

oscarbenjamin commented 10 months ago
  • One major bottleneck is that computing the roots

I'm pretty sure that RootOf can be made faster. Also there is an implementation of this in Flint but I am not sure how well exposed it is by python-flint:

In [1]: import flint

In [2]: p = flint.fmpz_poly([2, 0, 1])

In [3]: p
Out[3]: x^2 + 2

In [4]: p.complex_roots()
Out[4]: [([1.41421356237310 +/- 4.96e-15]j, 1), ([-1.41421356237310 +/- 4.96e-15]j, 1)]
  • So even if I have written everything with PolyElement, I can't avoid some overhead of converting to Expr and such things.

You can use Poly.real_roots rather than Expr.

  • The other major bottleneck comes from representing the results of solution as algebraic functions

Note that the code shown above deliberately does not do this. Every full-dimensional component is represented but only by rational points.

However, I'm not really sure if LC + Discriminants + Pairwise resultants can be right projection,

Maybe the projection that is needed depends on what you are trying to do. Our limited goal here maybe does not need as much as full CAD.

Can you think of a counter example where these are not sufficient to separate the full dimensional components?

I can see why squarefree would be needed, e.g. to stop the discriminant from just being zero everywhere. For the resultant I guess it is also necessary to take the gcd and then the resultant of the cofactors. A full factorisation of everything would handle these cases but I'm not sure if that is a good idea: in this context the factorisation is relatively cheap, but the algorithm scales badly with the number of distinct polynomials, so we want to avoid increasing that unnecessarily.

I can't see why all coefficients rather than just LC would need to be added though. The roots are continuous functions of the coefficients provided the LC is nonzero.
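
A minimal sketch of that preprocessing using SymPy's Poly methods (the helper name and return shape are just for illustration):

```python
from sympy import Poly

def preprocess_pair(p1, p2, *gens):
    """Squarefree parts plus gcd and cofactors, so that the discriminants and
    the resultant of the cofactors are not identically zero."""
    p1, p2 = Poly(p1, *gens).sqf_part(), Poly(p2, *gens).sqf_part()
    g = p1.gcd(p2)
    return p1, p2, g, p1.quo(g), p2.quo(g)  # cofactors p1/g and p2/g are coprime
```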

sylee957 commented 10 months ago

Our limited goal here maybe does not need as much as full CAD.

I think that this applies to the limited CAD here as well; I think the problem is often called 'open CAD'. I can't give a counterexample yet, but I think it is better to ask whoever writes the code to provide a proof. It is not easy to see how it could fail in very complicated cases, because the articles give only a limited number of examples, and the visualization is not intuitive once we think about 4D+ cases. I think that Strzebonski's article may contain the proof.

sylee957 commented 10 months ago

You can use Poly.real_roots rather than Expr.

I thought that we should avoid Poly as well, because it uses a dense representation. Maybe we only need a dense univariate polynomial just before calculating the roots.

sylee957 commented 10 months ago

A full factorisation of everything would handle these cases but I'm not sure if that is a good idea:

I think that Strzebonski's article does not worry about using full factorization.

In our implementation for polynomials with rational number coefficients we use the set of irreducible factors of F, and our experience is that polynomial factorization is not a significant part of the execution time of the whole algorithm

sylee957 commented 10 months ago

I think that Strzebonski's article is good because it can be implemented with a proof behind it, and it contains a concrete algorithm that we can build on when working out the design/API issues.

oscarbenjamin commented 10 months ago

I think that Strzebonski's article is good

Yes, I agree. It seems to describe exactly the algorithm that I was thinking of.

I am still unsure about what is good API though because different things can be useful for different situations.

oscarbenjamin commented 10 months ago

I thought that we should also avoid Poly as well, because that uses dense representation.

In this particular situation I am not sure that the dense representation is bad. I have come to think that the Poly representation is perhaps "semi-dense" for multivariate polynomials. Perhaps a better description is that it is a "recursive" representation whereas PolyElement is a "flat" representation. An algorithm like this, which naturally recurses through the variables, seems to be almost precisely the situation that the DMP representation is designed for, although it might still be the case that PolyElement is faster in many cases.
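
For anyone unfamiliar with the two representations being contrasted, here is a small illustration with standard SymPy objects (nothing specific to this issue):

```python
from sympy import Poly, QQ, symbols
from sympy.polys.rings import ring

x, y = symbols('x y')
p = Poly(x**2*y + 3*y**2 + 1, x, y, domain=QQ)
print(p.rep)    # recursive DMP: nested lists, coefficients in x are polynomials in y
R, X, Y = ring('x, y', QQ)
q = X**2*Y + 3*Y**2 + 1
print(dict(q))  # flat sparse dict: {(deg_x, deg_y): coefficient}
```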

oscarbenjamin commented 10 months ago

A full factorisation of everything would handle these cases but I'm not sure if that is a good idea:

I think that Strzebonski's article does not worry about using full factorization.

I don't think that the cost of calling factor is significant in this context. Rather the cost is to do with having more polynomials after factorisation. For example if there are 2 polynomials and we factorise them each into 4 factors then we have 8 polynomials. Now instead of 2 discriminants and 1 resultant we need 8 discriminants and 28 resultants. Of course all of these will be lower degree though and maybe it is generally better to exchange a small number of large polynomials for a large number of small ones if the total degrees are the same (this is the case in other contexts).

We want to control the growth in the number of cells though and it is possible that we can end up having resultants between factors of the same polynomial that would lead to redundant cells. Presumably though the same polynomials will either end up appearing as discriminant factors or as resultant factors in the end so maybe we always end up having the same number of cells either way.
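
For concreteness: a set of k projection-relevant polynomials contributes k discriminants and k*(k-1)/2 pairwise resultants, which is where the counts above come from (2 + 1 = 3 projection polynomials versus 8 + 28 = 36, before any deduplication).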

sylee957 commented 10 months ago

QEPCAD also contains test sets, which could be helpful for verifying the implementation at some scale:

https://github.com/chriswestbrown/qepcad/tree/master/regressiontests

oscarbenjamin commented 10 months ago

It doesn't look like any of the QEPCAD test suite covers the case of only strict inequalities.

sylee957 commented 9 months ago

I have tried to implement this with PolyElement, using the projection as described above. I think that one of the difficulties is dealing with some missing features of the sparse polynomials (for example, resultant still dispatches to the dense version, whereas we should have it implemented for the sparse representation).

The other major problem is the fuzzy/surprising output types of some functions like resultant and discriminant, which mix PolyElement and ground-domain outputs and seem to cause errors; however, that can be improved.

I think that the low-level API can just be simplified to 'points', and the problem is well defined: given a list of polynomials, regardless of the logical formulation of the inequalities (like and-or combinations), return a list of 'interesting' sample points. More high-level work can then begin from there.

```python3
from typing import cast, Any
from itertools import combinations
from collections.abc import Sequence, Collection, Iterator

from sympy.polys.rings import PolyElement
from sympy.polys import real_roots
from sympy.core import Expr, Rational, S
from sympy.functions import floor, ceiling


# Begin polynomial utilities

def lc_wrt(p: PolyElement, x: PolyElement) -> PolyElement:
    """Leading coefficient of the polynomial with respect to the variable x.

    TODO This may be missing utility feature for sparse polynomials.
    """
    return p.coeff_wrt(x, p.degree(x))


def disc_wrt(p: PolyElement, x: PolyElement) -> PolyElement:
    """Discriminant of the polynomial with respect to the variable x.

    This computation of sign is redundant but just keeps the precise
    mathematical definition, and it may not be that much harmful if this
    is not major performance bottleneck.
    """
    d = p.degree(x)
    s = (-1) ** ((d * (d - 1)) // 2)
    c = lc_wrt(p, x)
    res = res_wrt(p, p.diff(x), x)
    return s * res // c


def res_wrt(p: PolyElement, q: PolyElement, x: PolyElement) -> PolyElement:
    """Resultant of the polynomials with respect to the variable x.

    TODO The sparse polynomial has missing utility feature for resultant.
    However, I assume last subresultants is always correct for the resultant.
    """
    return cast(PolyElement, p.subresultants(q, x)[-1])


def factor_set(p: PolyElement) -> set[PolyElement]:
    """Set of factors of the polynomial."""
    _, factors = p.factor_list()
    return {p for p, _ in factors}

# End polynomial utilities


# Begin interval utilities

def rational_between(a: Expr, b: Expr) -> Rational:
    """Find a number between a and b where a and b are floats.

    TODO This is a simple algorithm to find a rational number between two
    floats. It may not be that efficient, however, rational numbers with huge
    denominators may not be readable for users, so we keep the denominator
    smallest power of 2 to keep the number readable.

    TODO SymPy Rationals are used here because floor, ceil with sympy
    expressions have some nonstandard behavior (not to return int), and QQ
    is not very happy about it.
    """
    inf = floor(a)
    sup = ceiling(b)
    assert inf < sup
    diff = sup - inf
    if diff > 1:
        return (inf + sup) // 2
    mid = (inf + sup) / 2
    while not (a < mid < b):
        if mid <= a:
            inf = mid
        else:
            sup = mid
        mid = (inf + sup) / 2
    return mid


def rational_less(a: Expr) -> Rational:
    inf = floor(a)
    if inf == a:
        inf -= 1
    return inf


def rational_greater(a: Expr) -> Rational:
    sup = ceiling(a)
    if sup == a:
        sup += 1
    return sup

# End interval utilities


# Begin substitution utilities

def poly_subs(
    p: PolyElement, vars: Sequence[PolyElement], vals: Sequence[PolyElement]
) -> PolyElement:
    return p.subs(list(zip(vars, vals)))


def polys_subs(
    polys: Collection[PolyElement],
    vars: Sequence[PolyElement],
    vals: Sequence[PolyElement],
):
    return {poly_subs(p, vars, vals) for p in polys}

# End substitution utilities


# Begin roots utilities

# polys here are assumed to be univariate

def all_real_roots(polys: Collection[PolyElement]) -> list[Expr]:
    """Return all real roots of the given polynomials.

    TODO Using as_expr with real_roots may not be efficient.
    However, it seems like it is the only way to get roots in SymPy.
    """
    # Check if some polynomials are always zero.
    # If polynomial is zero, everything should be roots or it may be a bug in the algorithm.
    assert all(not p.is_zero for p in polys)
    # We have to filter out the nonzero constant polynomials.
    # Nonzero constant polynomials have no roots, however, real_roots does not like that.
    exprs: list[Expr] = [p.as_expr() for p in polys if not p.is_ground]
    return sorted({r for p in exprs for r in real_roots(p)})


def between_real_roots(polys: Collection[PolyElement]) -> Iterator[Rational]:
    roots = all_real_roots(polys)
    if not roots:
        yield S.Zero
        return
    yield rational_less(roots[0])
    for x, y in zip(roots[:-1], roots[1:]):
        yield rational_between(x, y)
    yield rational_greater(roots[-1])

# End roots utilities


def sfrp(fs: Collection[PolyElement]):
    """Square-free coprime multiplicative basis of the polynomials.

    TODO sqf_list can be used, but it had some bugs in the past for wrong
    results, and also the author of the paper also experimented with full
    factorization, and notes that it is not very harmful, however, we may need
    to experiment performance with more coarser/finer factorizations.
    """
    return {p for f in fs for p in factor_set(f)}


def proj(gs: Collection[PolyElement], x: PolyElement):
    """Projection of the polynomials to the variable x."""
    c = {lc_wrt(p, x) for p in gs}
    d = {disc_wrt(p, x) for p in gs}
    r = {res_wrt(p, q, x) for p, q in combinations(gs, 2)}
    return c.union(d, r)


def recursive(polys: Collection[PolyElement], vars: Sequence[PolyElement]):
    """Recursive algorithm to project and extend the sample points."""
    if len(vars) == 1:
        for root in between_real_roots(polys):
            yield (root,)
        return
    ps = proj(sfrp(polys), vars[-1])
    for roots in recursive(ps, vars[:-1]):
        for root in between_real_roots(polys_subs(polys, vars[:-1], roots)):
            yield roots + (root,)
```

However, I get different results for the sample points

R, x, y = ring(['x', 'y'], QQ)
for point in recursive([x**2 + y**2 - 1, x - y], [x, y]):
    print(point)
(-2, -3)
(-2, -1)
(-3/4, -1)
(-3/4, -11/16)
(-3/4, 0)
(-3/4, 1)
(0, -2)
(0, -1/2)
(0, 1/2)
(0, 2)
(3/4, -1)
(3/4, 0)
(3/4, 11/16)
(3/4, 1)
(2, 1)
(2, 3)

https://www.desmos.com/calculator/dxi1kqm9j5

for point in recursive([x**2 + y**2 - 7, x + y - 2, x**2 + y - 10], [x, y]):
    print(point)
(-3, 0)
(-3, 3)
(-3, 6)
(-5/2, -1)
(-5/2, 0)
(-5/2, 2)
(-5/2, 4)
(-5/2, 5)
(-2, -2)
(-2, 0)
(-2, 2)
(-2, 5)
(-2, 7)
(1, -3)
(1, -1)
(1, 2)
(1, 5)
(1, 10)
(21/8, -1)
(21/8, -1/2)
(21/8, 0)
(21/8, 2)
(21/8, 4)
(3, -2)
(3, 0)
(3, 2)
(4, -7)
(4, -4)
(4, -1)

https://www.desmos.com/calculator/qacyfqomkj

oscarbenjamin commented 9 months ago

Looks good.

We could just use the discriminant rather than the resultant of p and p.diff(); I only used the resultant because it implicitly includes the LC.
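
For reference, the identity behind "implicitly includes the LC" is res(p, p') == (-1)**(d*(d-1)/2) * lc(p) * disc(p). A quick check (my own snippet, not from the thread):

```python
from sympy import symbols, diff, discriminant, resultant

x, a, b, c = symbols('x a b c')
p = a*x**2 + b*x + c                           # degree d = 2
lhs = resultant(p, diff(p, x), x)
rhs = (-1)**(2*1//2) * a * discriminant(p, x)  # (-1)**(d*(d-1)/2) * lc * disc
assert (lhs - rhs).expand() == 0
```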

There seems to be a problem with the types returned somewhere:

In [1]: a, b, c, d, e = symbols('a, b, c, d, e')

In [2]: p = Poly([1, a, b, c, d, e], x).as_expr()

In [3]: p1, p2 = p.subs(x, I*x).as_poly(I).all_coeffs()

In [4]: eqs = [LC(p2, x), resultant(p1, p2, x)]

In [5]: R, *symsp = ring('a, b, c, d, e', QQ)

In [6]: eqsp = [R(eq) for eq in eqs]

In [7]: edit find2.py
Editing... done. Executing edited code...

In [8]: for point in recursive(eqsp, symsp): print(point)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[8], line 1
----> 1 for point in recursive(eqsp, symsp): print(point)

File ~/current/active/sympy/find2.py:167, in recursive(polys, vars)
    165         yield (root,)
    166     return
--> 167 ps = proj(sfrp(polys), vars[-1])
    168 for roots in recursive(ps, vars[:-1]):
    169     for root in between_real_roots(polys_subs(polys, vars[:-1], roots)):

File ~/current/active/sympy/find2.py:156, in proj(gs, x)
    154 """Projection of the polynomials to the variable x."""
    155 c = {lc_wrt(p, x) for p in gs}
--> 156 d = {disc_wrt(p, x) for p in gs}
    157 r = {res_wrt(p, q, x) for p, q in combinations(gs, 2)}
    158 return c.union(d, r)

File ~/current/active/sympy/find2.py:156, in <setcomp>(.0)
    154 """Projection of the polynomials to the variable x."""
    155 c = {lc_wrt(p, x) for p in gs}
--> 156 d = {disc_wrt(p, x) for p in gs}
    157 r = {res_wrt(p, q, x) for p, q in combinations(gs, 2)}
    158 return c.union(d, r)

File ~/current/active/sympy/find2.py:30, in disc_wrt(p, x)
     28 c = lc_wrt(p, x)
     29 res = res_wrt(p, p.diff(x), x)
---> 30 return s * res // c

TypeError: unsupported operand type(s) for //: 'int' and 'PolyElement'