Closed: jeanluct closed this issue 9 years ago
This is the output of `closure_tests` for n = 6 punctures. Basically, it tries all n! ways of closing the braid and computes the entropy. It's comforting that `mindist` ends up at the low end of the range, and that `Xproj` and `Yproj` are fairly close to each other. The spread in entropy is fairly large, probably because the braid is not that long.
I think it's best to close this issue and put it on hold for now. We've been downplaying "iterative" approaches when dealing with an open braid, and rightly so.
I don't understand the second paragraph of your original comment, about the target entropy and taking a subset of a very long braid. Are you assuming you have a very long, exactly closed braid, so that a truncation of it is a (shorter) open braid? And then you try to close that short, open braid such that the entropy estimate matches that of the long (closed) braid?
By "target entropy" I meant an entropy value we were confident in, which we then try to reproduce by closing shorter braids. You can consider that part of the original comment obsolete, though, I think.
OK, then I get everything else. I do think, however, that the matter of closing might be too system-specific to give an across-the-board answer, just going by that discussion about circular orbits.
Yes. But if you have no choice, then the figure above suggests `mindist` is the better way to go.
The function `braid.closure` closes a braid, by default using the most direct method (no new crossings in the projection), but it can also minimize the L^2 distance. Are these the best choices? Why not, say, the closure with the lowest entropy?
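As a sketch of what comparing the choices might look like on trajectory data, assuming a closure function in the braidlab namespace that accepts a closure-type string as the options discussed above suggest (the exact name and signature are an assumption, not confirmed API):

```matlab
% Compare closure choices on trajectory data XY (timesteps x 2 x n).
% The option strings follow the discussion above; the closure
% signature itself is an assumption.
for ctype = {'Xproj', 'Yproj', 'MinDist'}
  XYc = braidlab.closure(XY, ctype{1});   % close the trajectories
  fprintf('%7s: entropy = %g\n', ctype{1}, entropy(braidlab.braid(XYc)));
end
```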
Maybe find some cases where we have a target entropy; that is, we can generate a very long braid so we can compute the entropy, but we'd like to estimate the entropy from a subset of it. (Use the Duffing oscillator in the chaotic sea?)
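Something like the following sketch, where `b` is assumed to be a long braid built from trajectory data (e.g. the Duffing oscillator in the chaotic sea), and the closure step for each truncation is elided:

```matlab
% Target-entropy test (sketch): treat the entropy of a long braid as
% ground truth, then check how well truncations reproduce it.
target = entropy(b);
for frac = [0.25 0.5 0.75]
  k = round(frac*length(b.word));
  bshort = braidlab.braid(b.word(1:k), b.n);   % truncated open braid
  % ... close bshort by the method under test, then compare:
  fprintf('frac %.2f: entropy = %g (target = %g)\n', ...
          frac, entropy(bshort), target);
end
```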
How do we avoid the O(n!) scaling of such methods?
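For the minimum-distance criterion at least, the n! scan is avoidable: minimizing the total squared distance between final and initial endpoints over all pairings is a linear assignment problem, solvable in polynomial time (Hungarian algorithm, O(n^3)). A sketch using MATLAB's `matchpairs` (R2019a+), where `Pend` and `P0` are hypothetical n-by-2 arrays of endpoint coordinates:

```matlab
% MinDist-style closure without the n! scan: build the n-by-n matrix of
% squared distances between final points Pend and initial points P0,
% then solve the linear assignment problem in polynomial time.
Cost = zeros(n);
for i = 1:n
  for j = 1:n
    Cost(i,j) = sum((Pend(i,:) - P0(j,:)).^2);
  end
end
M = matchpairs(Cost, sum(Cost(:)) + 1);  % large cost forces a full matching
sigma = zeros(n,1);
sigma(M(:,1)) = M(:,2);   % sigma(i): initial strand that endpoint i closes onto
```

The lowest-entropy closure, by contrast, has no such obvious shortcut, which may be an argument for sticking with `mindist`.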