chan-y-park / loom

Python program to generate, draw, and analyze spectral networks of class S theories

get_ramification_points() misses some ramification points #2

Closed neitzke closed 9 years ago

neitzke commented 9 years ago

If I use the Seiberg-Witten curve

x**4 - 10*z*x**2 + 4*x + 9*z**2 = 0

and all other parameters as in default.ini, I expect to get 6 branch points (as in Figure 16 of 1209.0866) but I only get 4. It seems that some of them are being discarded by the check "if mx > 1" in get_ramification_points().

chan-y-park commented 9 years ago

I was able to reproduce the issue and identify where it went wrong.

As you pointed out, all the solutions we get by solving {f(x, z) = 0, df(x, z) = 0} are ramification points. The reason we calculate mx is not to discard some ramification points but to identify their ramification indices, which are calculated by finding the multiplicity of the roots of f(x, z_0) = 0, where z_0 is the z-coordinate of the ramification point.

The problem here was that the accuracy we require (1e-6) was too strict compared to the separation of the seemingly multiple roots (~1e-5). A makeshift solution would be either loosening the accuracy threshold or just raising an error in such a case, but I will think about it and try to figure out a better solution.
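
As a toy illustration of the failure mode (a sketch with a hypothetical helper, not loom's actual code): if the clustering threshold is tighter than the numerical separation of nearly coincident roots, the multiplicity comes out wrong.

def count_nearby_roots(roots, x0, accuracy):
    # count how many numerically found roots lie within `accuracy` of x0
    return sum(1 for r in roots if abs(r - x0) < accuracy)

roots = [0.5, 0.500012, 2.0]                 # a double root split by ~1e-5
print(count_nearby_roots(roots, 0.5, 1e-6))  # 1: the double root is missed
print(count_nearby_roots(roots, 0.5, 1e-4))  # 2: detected correctly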

neitzke commented 9 years ago

It seems strange that the numerical error is so large, doesn't it?

chan-y-park commented 9 years ago

Indeed it is; actually it's even worse, the error is almost 1e-4.

The biggest error comes from Sage's solve, which is used to solve {f(x, z) = 0, df(x, z) = 0}. By itself it's not that bad, about 1e-8, but it propagates and grows as we substitute numerical values into the original equation to find the roots of f(x, z_0).

Using, for example, sympy.solve_poly_system gives a better numerical result, with a final error of ~1e-8 rather than ~1e-4, but the routine quite often fails to solve the system of equations. That is, it's more precise when it works, but it doesn't always work.

Probably the way out will be to find a ramification index by some means other than numerically finding the multiplicities of the roots of f(x, z_0) = 0.

chan-y-park commented 9 years ago

Use monodromy analysis to find out a ramification index instead of solving f(x,z_0)=0 and finding root multiplicities numerically.

neitzke commented 9 years ago

Hi, I can work on this (I am highly motivated since this bug prevents me from doing some of the things I wanted to do!) There must be a way to improve the numerical accuracy enough to make it a non-issue. But Chan's last comment makes me worry that the code is about to be changed in some substantial way, in which case I shouldn't work on the "old" version. What do you have in mind?

chan-y-park commented 9 years ago

Hi Andy, feel free to work on it! It would be really nice if we can improve the numerical accuracy of the current code, as it will solve the problem without any major overhaul.

What I had in mind was using the result of the monodromy analyses around branch points to figure out the ramification index. That is, currently we are calculating the ramification index of z_0 by finding out the multiplicities of the roots of f(x, z_0), and that's where the accuracy matters. But the other approach is using the trivialization to track the sheets around z_0 to figure out its ramification index.

We already have all the tools, so probably it's a matter of assembling them in the right way, and I will probably implement it in the very near future, but it will take some time to implement and test the newer algorithm. And it's always nice to have two separate routines doing the same work for debugging purposes.

neitzke commented 9 years ago

One naive remark about your proposal is that the ramification index is not really determined by the monodromy: think of a covering like x^2 = z^2, which is ramified at 0 but has trivial monodromy around that point. Of course this is a sort of singular, non-generic situation; but I guess that you are trying to cover even such situations?

chan-y-park commented 9 years ago

Thanks for pointing that out, Andy, and sorry for my lack of mathematical rigor. ;) Probably I should have said that we need to find the ramification index of each branch point. (x, z) = (0, 0) in that case is not a branch point and therefore there will not be an S-wall emanating from it, and that's why we don't need to keep track of such points. The monodromy analysis will naturally say that such a point doesn't have any monodromy so it's good in principle. I don't know yet if there is any pitfall when we try to implement it numerically.

The current algorithm handles such a point by checking mz, the multiplicity of the roots of f(x_0, z), and discarding (x_0, z_0) when mz > 1. But now I realize that it will also discard a point where the local curve is like x^3 = z^2, which doesn't sound correct. Maybe we should keep points whose mx and mz are relatively prime.

Maybe I've been too naive, because previously I've been plugging in nice curves whose ramification structures were analyzed by hand. Does my comment make sense, or do you find another hole in it?

neitzke commented 9 years ago

One way of getting a local structure like x^2 = z^2 would be to have two branch points colliding in the A_1 theory. In that case it would not be correct to ignore the point z = 0 -- it still emits 4 S-walls, despite the fact that there is no monodromy around it.

plonghi commented 9 years ago

At the moment we are nominally restricting to 'square root' type branch points (see the note on line 204 of trivialization.py, in the include_trivialization branch). But I'm sure in the long run we will want to study AD (Argyres-Douglas) points themselves, rather than their perturbations, so it would be essential to deal correctly with such situations.

But, do we actually need to get the positions of ramification points in this way? It seems that right now the only use of ramification points is to get the underlying branch-points, which we could perhaps get in other ways (sage seems able to compute discriminants of univariate polynomials). Even if we still want to know the fiber coordinates of the ramification points, we can still get this info by parallel transport. In fact, this is precisely how we analyze branch-points: we track sheets from the basepoint to a branch point, look at the 'clusters of sheets' that collide, and extract the root info in that way (see analyze_branch_point and get_positive_roots_of_branch_point in the trivialization module). The same sheet-tracking function can give us the positions of the ramification points of course (note: it's essential to specify the kwarg is_path_to_bp).

So, an alternative could be to get the branch points in a different way (e.g. from the discriminant of the curve), and later get ramification points via parallel transport (if we even need them at all).

chan-y-park commented 9 years ago

To Andy:

That's a very interesting example. I agree with you that if such a singularity comes from colliding two A_1 branch points, then we cannot disregard it, although I don't have a good idea how to study such a configuration without including infinitesimal Coulomb branch parameters. (BTW, is it a deformation of the singularity or a resolution of the singularity? I always get confused between the two...)

However, that raises the following question: when a Seiberg-Witten curve factorizes into multiple components, will there be S-walls from the point where the components "touch" one another? Pietro and I have assumed that there will be no S-wall in such a case, but that caused much difficulty in understanding a Seiberg-Witten curve in a non-minuscule representation. Your example provides a nice counterexample to the assumption.

chan-y-park commented 9 years ago

To Pietro: I think what you suggested is similar to what I described as an algorithm using monodromy, am I correct, or am I missing some of your point?

Regarding using the discriminant of the curve: a given Seiberg-Witten curve is not a univariate polynomial, so we need to reduce it to a univariate polynomial in z to find the locations of branch points using {f(x, z) = 0, df(x, z) = 0}.

Actually, when I used Mathematica that's exactly what I did using Eliminate[{f(x, z) = 0, df(x, z) = 0}, x], but I don't know if we have the same functionality either in SymPy or in Sage. Hopefully there will be. :)

One subtle point is that, when I used Mathematica, I usually did a fair amount of analysis by hand and then proceeded to the generation of a spectral network after resolving any issue in the given curve, but here we probably want to do it automatically. Not a big hurdle, but I expect some time and effort to be invested in implementing it.

neitzke commented 9 years ago

For any fixed value of z, the SW curve f(z, x) = 0 is a polynomial in the variable x, and we are interested in knowing whether it has multiple roots or not: that's the same as asking whether the discriminant of this polynomial, taken with respect to x, is zero or not. The discriminant will still be a function of z, call it D(z). So the points where multiple roots occur are exactly the zeroes of D(z).
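
For the curve in this issue, a minimal sketch with sympy (assuming sympy.discriminant computes the discriminant with respect to the given variable) makes D(z) explicit:

import sympy

x, z = sympy.symbols('x z')
f = x**4 - 10*z*x**2 + 4*x + 9*z**2
D = sympy.discriminant(f, x)
print(D)  # 589824*z**6 - 143360*z**3 - 6912

Its six zeroes are the six expected branch points; note that these are exactly the coefficients fed to mpmath.polyroots later in this thread.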

plonghi commented 9 years ago

In reply to Chan: ''I think what you suggested is similar to what I described as an algorithm using monodromy, am I correct, or am I missing some of your point?'' I think the point is quite different: the algorithm that computes the monodromy tracks sheets along a path (based at the trivialization basepoint) that goes around a branch point; in Andy's example this will return a trivial answer. The algorithm I'm referring to instead tracks sheets along a path (based at the trivialization basepoint) that ends at the branch point, and therefore it will detect that sheets actually collide there.

Regarding the univariate-ness, sorry for the confusion; I think Andy already cleared it up, but let me try and elaborate a bit. Consider the SW curve as a polynomial in x whose coefficients are functions of z: F(x, z) = \sum_i x^i f_i(z). Then, regardless of what the coefficients are (functions or constants...), there should be an algorithm implemented in sage that gives you the discriminant of F(x, z) as a polynomial in z. The discriminant will be a function D(z), and its zeroes will be branch points.

One more note on Andy's example: in that case, we will see that there is a branch point, and if it's an A_1 theory, we know that S-walls will be just of the obvious root type. The study of clusters of colliding sheets will not tell us much either: there are two sheets colliding. So how to tell that there are 4 walls, instead of 3? The only way to tell that there should be 4 S-walls (as far as I can tell) is to study the degree of the zero of D(z) which should have twice the order of a square-root branch point.
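
A quick sympy check of this point on the two local models mentioned above (a sketch):

import sympy

x, z = sympy.symbols('x z')
print(sympy.discriminant(x**2 - z, x))     # 4*z: a simple zero, one square-root branch point
print(sympy.discriminant(x**2 - z**2, x))  # 4*z**2: a double zero, two colliding branch points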

chan-y-park commented 9 years ago

Thanks for the clarifications!

But the original question was about getting the multiplicities of the roots of f(x, z_0), and a discriminant doesn't help us find those multiplicities, am I correct? We had no problem finding (x_0, z_0) so far, which is what a discriminant helps us do.

The collisions of the sheets can definitely be detected in the way Pietro suggested in principle, but it has the same numerical issue as the current algorithm; that is, we need to detect the collisions numerically.

The reason that I only considered higher-order branch points with nontrivial monodromy is that those are the only branch points whose emanating S-walls I know how to draw. For example, I didn't think about drawing spectral networks of x^2 = z^2; I don't mean it's not important, I'm just confessing that I was not careful enough. ;)

As Pietro pointed out, considering the discriminant will be helpful in detecting collisions of the branch points; previously I've been using the discriminant to detect collisions of branch points, which is helpful in finding out interesting parts of the Coulomb branch moduli space. But again we need to find the multiplicities of the roots of D(z) numerically, and we come back to the same numerical issue.

Probably the best method will be the one that propagates the least amount of numerical error. If one method enables us to proceed further using only symbolic manipulations than the others, it will be the best one to employ.

Any suggestion/comment?

plonghi commented 9 years ago

Regarding propagation of the error, here are some thoughts on the two methods:

With the current approach, we first compute z_0, x_0 (1st numerical approximation), then we plug z_0 into F to get the function f_x_at_z_0, and then we use 'get_root_multiplicity()' to get mx, which involves further numerical root finding (2nd numerical approximation). In this way we make two 'serial' numerical approximations, where the results (and errors) of the first one are plugged into the second calculation. If I understand correctly, that's how the error propagated and grew large.

With the discriminant approach we would instead find the location z_0 of the branch point numerically, then expand D(z) analytically around z_0 and check the degree: D(z) ~ (z - z_0)^mz. In this way we get mz with only one numerical approximation (finding z_0). To find the various x_0, as well as the mx's, we would instead use the sheet-tracking, avoiding the use of the approximated f_x_at_z_0. However, sheet tracking of course involves finding the roots of F(x, z) at a given z, for z along a path, and comparing the new roots at each step with the previous ones. This is morally analogous to computing the roots of f_x_at_z_0, and finding mx still involves two levels of numerical approximation.

The bottom line for me is that the current method should be numerically equivalent to the method of discriminant + root-tracking (except for a small improvement in the numerics for getting mz). I think Chan is right that we don't seem to gain much from switching to the discriminant method.

In addition, since the other method which came up (the monodromy method) would fail to handle the most general case, we might as well stick to the current one? Assuming so, the issue seems to be to find a suitable tuning of the 'accuracy' parameter within the get_root_multiplicity function. Presumably, this should be handled dynamically, rather than imposing a fixed value.
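
One possible dynamic choice (purely a sketch; the heuristic and its parameter are hypothetical, not loom's code) is to tie the clustering threshold to the typical spacing of the numerically found roots instead of a fixed constant:

def dynamic_accuracy(roots, fraction=1e-2):
    # use a fraction of the median pairwise gap between roots as the
    # clustering threshold; a genuinely multiple root contributes one
    # tiny gap, which barely moves the median
    gaps = sorted(abs(a - b) for i, a in enumerate(roots) for b in roots[i + 1:])
    return fraction * gaps[len(gaps) // 2]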

neitzke commented 9 years ago

There is no in-principle reason why one cannot compute everything to 100 digits of precision here: it's not like the computation is too expensive or anything like that; it's just that we don't know a routine which does the job in a convenient way.

With the present code, it looks to me as if the problems with precision are coming in part from the fact that sage's "solve" function calls Maxima, and there seems to be no way to tell sage to ask Maxima for higher precision. There is some discussion of related problems at http://trac.sagemath.org/ticket/11643

neitzke commented 9 years ago

Maybe it's just as easy to use maxima directly: there, the commands

eqns: [x^4 - 10*z*x^2 + 4*x + 9*z^2 = 0, 4*x^3 - 20*z*x + 4 = 0]$
numer: true;
fpprec: 100;
bfloat(solve(eqns, [x,z]));

produce promising-looking output

[[x = 4.36982484401431670040238941510324366390705108642578125b-1 %i
 + 2.52291955000318035562401064453297294676303863525390625b-1, 
z = 1.72722600852236507495973683035117574036121368408203125b-1
 - 2.99164320291513019522966487784287892282009124755859375b-1 %i], 
[x = 2.52291955000318035562401064453297294676303863525390625b-1
 - 4.36982484401431670040238941510324366390705108642578125b-1 %i, 
z = 2.99164320291513019522966487784287892282009124755859375b-1 %i
 + 1.72722600852236507495973683035117574036121368408203125b-1], 
[x = - 5.0458392101551485797727991666761226952075958251953125b-1, 
z = - 3.45445196674950716353436064309789799153804779052734375b-1], 
[x = 1.4167875859275838035244987622718326747417449951171875b0 %i
 - 8.1798269411981061605132481417967937886714935302734375b-1, 
z = - 5.694364067613910673770760695333592593669891357421875b-1 %i
 - 3.28764262730062373218942184394109062850475311279296875b-1], 
[x = - 1.4167875859275838035244987622718326747417449951171875b0 %i
 - 8.1798269411981061605132481417967937886714935302734375b-1, 
z = 5.694364067613910673770760695333592593669891357421875b-1 %i
 - 3.28764262730062373218942184394109062850475311279296875b-1], 
[x = 1.6359653926185522276881556535954587161540985107421875b0, 
z = 6.57528469000421722512328415177762508392333984375b-1]]

neitzke commented 9 years ago

Sorry, my mistake: that output was indeed "promising-looking" but in fact most of those digits are garbage, at least according to Mathematica. To see the difference you can run in Mathematica

eqns = {x^4 - 10 z x^2 + 4 x + 9 z^2 == 0, 4 x^3 - 20 z x + 4 == 0}
NSolve[eqns, WorkingPrecision -> 100]

Plugging back in, Mathematica claims that its solution is accurate to an error around 10^(-100), while Maxima's is much worse.

neitzke commented 9 years ago

It appears that Maxima will not give us higher precision unless we use special commands which request the use of "bigfloat" numbers internally. This is a problem, because there is no "bigfloat" version of the "solve" command. There is a "bigfloat" version of the command "allroots" ("bfallroots"), which finds roots of a single polynomial, and I checked that this indeed works, but this would only help us in the cases where the discriminant is a polynomial (like Argyres-Douglas).

So at the moment it seems that finding the desired points to very high precision is not computationally expensive (Mathematica does it very fast) but we don't know any free software which does it.

neitzke commented 9 years ago

For polynomials without repeated zeroes, the Python command mpmath.polyroots seems to do the job, e.g.

mpmath.mp.dps = 100
mpmath.polyroots([589824,0,0,-143360,0,0,-6912])

finds zeroes of the discriminant in the example above.

Maybe this method is OK for us actually: at least in many examples, the discriminant will be a rational function, and we could just use the above to find roots of the numerator. It saves the trouble of shelling out to sage/maxima/whatever and seems to work to arbitrary precision.
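
A sketch of that step (the rational function D below is a hypothetical example): extract the numerator with sympy and feed its coefficients, leading coefficient first, to mpmath.polyroots.

import sympy
import mpmath

z = sympy.symbols('z')
D = (z**3 - 1) / z**2                           # hypothetical rational discriminant
num, den = sympy.fraction(sympy.together(D))
coeffs = [float(c) for c in sympy.Poly(num, z).all_coeffs()]
print(mpmath.polyroots(coeffs))                 # the three cube roots of unity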

chan-y-park commented 9 years ago

Thanks for trying various ideas, Andy! If nothing works we can think about writing a short script to communicate with Mathematica, but I hope we can avoid it if possible...

I wonder if the following two-tiered strategy will work: first we find "approximate" solutions as we currently do, then we improve the solutions one by one up to high precision.
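
A minimal sketch of the second tier (the starting value x0 is hypothetical, standing in for a low-precision solution from sage's solve): mpmath.findroot runs Newton's method at the working precision.

import mpmath

mpmath.mp.dps = 100
f = lambda x: x**3 - 2
x0 = 1.26                      # crude approximation from the first tier
print(mpmath.findroot(f, x0))  # cube root of 2, now to 100 digits

One caveat to keep in mind: Newton's method converges only linearly at a multiple root, so this polishing step is reliable only for simple roots.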

plonghi commented 9 years ago

Chan, you beat me to it :)

Python standard libraries seem to have functions which find roots up to high precision, although with an important caveat. For example, sympy's 'findroot' (http://docs.sympy.org/0.7.0/modules/mpmath/calculus/optimization.html) does have a 'tolerance' parameter. The caveat with this kind of algorithm is that it seems to require an x0, which should be a starting point, or an interval in which to look for roots.

So, elaborating on Chan's suggestion, a (somewhat crude) possibility would be to use something like Maxima to look for a set of x0's, and then refine the precision of each x0-value using sympy.

neitzke commented 9 years ago

mpmath.polyroots does a good job of finding all the roots, at least as far as I can see.

chan-y-park commented 9 years ago

Did some experiments with mpmath.polyroots and got the following result: https://github.com/chan-y-park/loom/blob/high_precision/documentation/finding_roots.ipynb

It's better than naive numerical root-finding, but unfortunately it doesn't find roots with arbitrary precision, unless I made a mistake somewhere; increasing the precision of both mpmath and sympy does not improve the result. But I hope this will be enough precision for us. What do you think about the result?

chan-y-park commented 9 years ago

mpmath root-finding is implemented in the high_precision branch and is currently under test. After going through various configurations it will be merged into the master branch soon. However, it turns out that, after all the effort, the precision is not spectacular. For example, when running config/AD_SO_4.ini the errors in x-values are of order 1e-5.

Probably this not-so-small error stems from the fact that we are trying to find roots with multiplicity greater than one, which is exactly the case in which a general numerical root-finding algorithm does not perform well.

chan-y-park commented 9 years ago

Another issue: sympy.discriminant fails in many cases where Mathematica has no problem. Maybe Sage will be a bit better? Frustrating...

chan-y-park commented 9 years ago

Got some feedback from sympy git page: https://github.com/sympy/sympy/issues/9920

For now I will keep both the old and the new get_ramification_points() and let the user choose between the two in the configuration.

chan-y-park commented 9 years ago

After a small improvement, even the old ramification-finding routine works fine with the curve; check config/issue_2.ini. Closing this issue until a further precision-related problem is found.

neitzke commented 9 years ago

I'm surprised that the precision with mpmath is not better -- in the example I tried above it really did compute "z" to 100 digits accuracy, or at least it produced exactly the same 100 digits as Mathematica did.

neitzke commented 9 years ago

To be more precise: running

import mpmath
mpmath.mp.dps = 100
mpmath.polyroots([589824,0,0,-143360,0,0,-6912], error=True)

I get the output

([mpf('-0.3454452017044728957865412179751493544196585390393800847896897645941927366862162027487722996146377487948'),
  mpf('0.6575285254601248114263218533039252044454928899968739985047152778710034452413225544471510988743603934762'),
  mpc(real='0.1727226008522364478932706089875746772098292695196900423948448822970963683431081013743861498073188743974', imag='0.2991643202915129930321944755546606966171432326661004541916739245327038418153982104884004226450025421703'),
  mpc(real='0.1727226008522364478932706089875746772098292695196900423948448822970963683431081013743861498073188743974', imag='-0.2991643202915129930321944755546606966171432326661004541916739245327038418153982104884004226450025421703'),
  mpc(real='-0.3287642627300624057131609266519626022227464449984369992523576389355017226206612772235755494371801967381', imag='0.5694364067613911369662659922064171980893952010851505361028731742092422504367826433965562185474668067982'),
  mpc(real='-0.3287642627300624057131609266519626022227464449984369992523576389355017226206612772235755494371801967381', imag='-0.5694364067613911369662659922064171980893952010851505361028731742092422504367826433965562185474668067982')],
 mpf('1.428734239102843727769729435381676199314887493528498326518907304034540794883422744594310027325215037061e-101'))

This seems different from what you say above, that you can't get arbitrary precision. Or have I misunderstood?

chan-y-park commented 9 years ago

Hi Andy, thanks for the information.

The problem we have here is that we are trying to find multiple roots of a polynomial equation with floating-point coefficients that have a finite precision.

In our case, finding roots of the discriminant D(z), when all the roots are distinct or when its coefficients are exact, can indeed be done with arbitrarily high precision. Actually the old routine also had no problem with that, even without using mpmath.

The difficulty is that, after finding a root of D(z), z = z_0, then we want to find multiple roots of f(x, z_0) = 0 for a fixed numerical value of z_0. There are two issues here:

1) As I mentioned previously in this thread, finding multiple roots numerically is always challenging, even if the numerical coefficients are exact. To illustrate this, let's try finding the roots of (x-1)^2 (x-2)(x-3)(x-4)(x-5) = 120 - 394 x + 499 x^2 - 310 x^3 + 100 x^4 - 16 x^5 + x^6. (Note that mpmath.polyroots expects the leading coefficient first, so the ascending list cs below actually describes the reversed polynomial, whose roots are the reciprocals 1, 1, 1/2, 1/3, 1/4, 1/5; the double root at 1 survives, so the illustration is unaffected.)

>>> from sympy import mpmath
>>> mpmath.mp.dps = 100
>>> mpmath.polyroots([1, -2, 1], maxsteps=200, extraprec=100)
[mpf('1.0'), mpf('1.0')]
>>> cs = [120, -394, 499, -310, 100, -16, 1]
>>> mpmath.polyroots(cs, maxsteps=200, extraprec=100)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/sympy/mpmath/calculus/polynomials.py", line 188, in polyroots
    % maxsteps)
sympy.mpmath.libmp.libhyper.NoConvergence: Didn't converge in maxsteps=200 steps.
>>>

2) Each coefficient of f(x, z_0) is in general a floating-point number of finite precision, so we furnish mpmath.polyroots with numerical coefficients of finite precision, and the error propagates; or rather, the algorithm cannot converge to high precision within a moderate number of iterations. And in general the Coulomb branch parameters we input are also floats of finite precision.

To illustrate this, if I try to get the numerical roots of (x-1.0)^2 (x-2.0)(x-3.0)(x-4.0)(x-5.0),

>>> from sympy import mpmath
>>> mpmath.mp.dps = 100
>>> from sympy.mpmath import mpf
>>> cs = ['120.', '-394.', '499.', '-310.', '100.', '-16.', '1']
>>> pcs = [mpmath.mpf(c) for c in cs]
>>> mpmath.polyroots(pcs)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/sympy/mpmath/calculus/polynomials.py", line 188, in polyroots
    % maxsteps)
sympy.mpmath.libmp.libhyper.NoConvergence: Didn't converge in maxsteps=50 steps.
>>>

It fails to find the roots. If the polynomial has a slightly different coefficient so that the multiple roots split a bit,

>>> cs = ['120', '-394', '499', '-310', '100', '-16.000001', '1']
>>> pcs = [mpmath.mpf(c) for c in cs]
>>> mpmath.polyroots(pcs)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/sympy/mpmath/calculus/polynomials.py", line 188, in polyroots
    % maxsteps)
sympy.mpmath.libmp.libhyper.NoConvergence: Didn't converge in maxsteps=50 steps.
>>> mpmath.polyroots(pcs, maxsteps=200, extraprec=200)
[mpf('0.1999986979816524482083209492160576922489423819621060140422173190249473385931644598255211045767802926188'),
mpf('0.2500035555892802754229854190690512526297101692105172606801106845297020491540191540192007728591362749616'), 
mpf('0.3333299582478959468713743141201961685837808503690520521182371464624579338062501867926342657329280973462'), 
mpf('0.5000013333202965670385243536977039123264171130977851270543719133604085254357592732325601616182516434862'), 
mpf('0.9997957698511905030064766976160934315168473326143457792188636724071587112823257576055119468279561445208'), 
mpf('1.000204018343017592785651599614230876027635486079527100219532597548658775061814501857905081718280880394')]

it returns, but as you can see the error is huge.

Another complication is that we are mixing sympy and mpmath here because we want to substitute numerical Coulomb branch parameters into the symbolic equations. We can furnish rational numbers for the parameters, but that makes the whole analysis really slow in the sympy part.

Having said all this, do we really need arbitrary precision? It would be good to have, but on the other hand it's always a trade-off between accuracy/precision and speed of execution when we do a numerical analysis, and I think the current code strikes an adequate balance for now.

Everything is now in the master branch, and if you have time please take a look at get_ramification_points() in geometry.py. I would be more than excited if you can let me know where to improve! :)

chan-y-park commented 9 years ago

One idea: how about finding multiple roots by using df(x, z_0)/dx = 0? Well, if there is a way to make it work with numerically approximate multiple roots...

neitzke commented 9 years ago

I would be surprised if your item 2) is really a problem, since you can have z_0 to arbitrary precision (say 100 digits) and so you have the coefficients of f(x, z_0) to arbitrary precision.

As for 1), I agree the most obvious thing to do would be to instead find the roots of the derivative, and then just look for the ones at which f also vanishes.
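
A minimal sketch of that suggestion on a toy polynomial (the tolerance is hypothetical): find the roots of f', which are simple here and thus easy to compute precisely, and keep those where f also vanishes.

import mpmath

mpmath.mp.dps = 50
f_coeffs = [1, -4, 5, -2]  # f(x) = (x-1)^2 (x-2), leading coefficient first
df_coeffs = [3, -8, 5]     # f'(x) = 3x^2 - 8x + 5, with only simple roots

tol = mpmath.mpf(10) ** (-30)
critical = mpmath.polyroots(df_coeffs)  # roots of f': 1 and 5/3
multiple_roots = [r for r in critical
                  if abs(mpmath.polyval(f_coeffs, r)) < tol]
print(multiple_roots)  # [1.0]: the double root, found to full precision

(For a root of multiplicity greater than two the derivative itself has a multiple root, so one would have to recurse.)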

neitzke commented 9 years ago

But maybe you are right that the current precision is good enough, anyway - at least, the root-finding now seems to work for the example that originally motivated this question - thanks a lot!

chan-y-park commented 9 years ago

Indeed item 2) is less of a problem than item 1), which is the fundamental difficulty here. To elaborate the idea of using derivatives a bit more: probably we first solve {f(x, z) = 0, df(x, z)/dx = 0} to find solutions {(x_i, z_i)}, then find the multiplicity of x_i by evaluating d^n f(x, z_i)/dx^n at x = x_i in a loop over n and seeing at which n we get a derivative significantly different from zero. This, I think, can be implemented numerically.
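
A sketch of that loop in mpmath (the helper, its tolerance, and the toy function are hypothetical, not the implemented routine):

import mpmath

mpmath.mp.dps = 50

def multiplicity(f, x0, tol=mpmath.mpf(10) ** (-20), max_order=10):
    # the multiplicity of a root x0 of f is the smallest n for which the
    # n-th derivative of f at x0 is significantly different from zero
    for n in range(1, max_order + 1):
        if abs(mpmath.diff(f, x0, n)) > tol:
            return n
    return max_order

f = lambda x: (x - 1)**2 * (x - 2)     # stands in for f(x, z_i) at fixed z_i
print(multiplicity(f, mpmath.mpf(1)))  # 2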

chan-y-park commented 9 years ago

From Andy: https://github.com/chan-y-park/loom/issues/18

With the current master, choosing differentials

{4: 9*z^2, 3: 4, 2: -10*z} I get the error

23954: Getting aligned x's at the base point z = (-6.0611049247490908971e-9 - 1.1388728074616771985j).
23954: Analyzing a branch point at z = (0.33367444189398689285 + 0.089407797240399354414j).
23954: Analyzing a branch point at z = (-0.24426664109717826733 + 0.24426664109717822897j).
23954: Analyzing a branch point at z = (-0.6351237842637036409 - 0.17018090508725803687j).
23954: Analyzing a branch point at z = (0.17018090508725793715 + 0.63512378426370366762j).
23954: Analyzing a branch point at z = (0.46494283925340683174 - 0.46494283925340675874j).
23954: Analyzing a branch point at z = (-0.089407797240399302024 - 0.33367444189398690689j).
23954: start cpu time: 1442779826.52
23954: Generate a single spectral network at theta = 1.0.
23954: Start growing a new spectral network...
23954: Seed S-walls at branch points...
Exception in Tkinter callback
Traceback (most recent call last):
  File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 1535, in __call__
    return self.func(*args)
  File "/home/andy/loom/loom/gui.py", line 262, in button_generate_action
    phase=eval(self.entry_phase.get()),
  File "/home/andy/loom/loom/api.py", line 78, in generate_spectral_network
    spectral_network.grow(config, sw)
  File "/home/andy/loom/loom/spectral_network.py", line 51, in grow
    bp, config)
  File "/home/andy/loom/loom/s_wall.py", line 641, in get_s_wall_seeds
    omega_1 = exp(2*pi*1j/rp.i)
ZeroDivisionError: complex division by zero

chan-y-park commented 9 years ago

Implemented a new method using derivatives; it seems to work well. Closing the issue until a new problem appears.