Thanks for starting a discussion of this point. I am scratching my head a bit about how our outermost contour-search problem is a possible use case for the bisection method / a root-finding algorithm. Here's what I get tripped up on: 1) I don't see how to use the bisection method to find a contour of a given constant LAVD in our image, and 2) I don't see how using the bisection method would obviate the need to increment this contour constant to find the outermost contour satisfying the CD condition. In other words, I am confused about the basic idea here. @rabernat, would you please help me out and clarify this?
I worked on this over the weekend and have a PR I will submit soon.
The idea is that we are hunting for the contour with a certain convexity deficiency (CD). In the image below, the x-axis represents the LAVD value (relative to the local maximum) and the y-axis represents the CD. The red dot is the target CD.
The bisection method provides an algorithm to search for this contour systematically. Right now, we just naively start at the center (x=0) and move outward until we hit the target CD. The problem with this is that it is very sensitive to the choice of increment in LAVD (i.e. delta x). Using a constant value of this increment (i.e. a fixed step size) is probably the wrong choice, since it will be computationally wasteful for some large eddies yet not precise enough for small eddies. A bisection method should be both more accurate and more computationally efficient.
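To make the idea concrete (this is only a minimal sketch, not the implementation in `rclv.py`), here is roughly what the bisection step would look like. `cd_of_level` is a hypothetical helper that extracts the contour at a given level x (the LAVD drop from the local maximum, as on the x-axis of the plot above) and returns its CD; the function assumes CD increases with x across the bracket.

```python
def bisect_contour(cd_of_level, x_inner, x_outer, cd_target,
                   cd_tol=1e-3, max_iter=50):
    """Bisect on the contour level x until the convexity deficiency (CD)
    is within cd_tol of cd_target.

    Assumes cd_of_level(x_inner) < cd_target < cd_of_level(x_outer),
    i.e. the target is bracketed and CD grows with x on this interval.
    """
    for _ in range(max_iter):
        x_mid = 0.5 * (x_inner + x_outer)
        cd_mid = cd_of_level(x_mid)
        if abs(cd_mid - cd_target) < cd_tol:
            return x_mid
        if cd_mid < cd_target:
            x_inner = x_mid  # contour still below the target CD; move outward
        else:
            x_outer = x_mid  # overshot the target CD; move inward
    return x_mid
```

The stopping criterion here is a CD tolerance rather than a step size in LAVD, which is the key difference from the current fixed-increment search.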
Thank you for the clarification, the above explanation makes sense to me now. So in the new bisection method, we would just specify a CD error tolerance (instead of an LAVD step size)? And perhaps we would also have to specify the analogue of b_1 in the above plot (perhaps via `min_distance`)?
@rabernat Would you recommend updating the revision to include the new bisection material? (In the paper, I pick a very small increment, 10^-8, to make sure the boundary is resolved; see e.g. Figure 7.)
In our experience (with @anirban89), a very small increment is not necessarily ideal either, because the CD can be noisy very close to the local maximum. So yes, I think the bisection should be more robust overall. But of course, it would have to be tested a bit before we rely on it.
I will try to submit this PR within a day or two so we can see it in detail.
> So in the new bisection method, we would just specify a CD error tolerance (instead of an LAVD step size)?
Yes exactly.
> And perhaps we would have to also specify the analogue of b_1 in the above plot (perhaps via `min_distance`)?
Correct. The initial stage of the algorithm needs to be a bit different, since we first have to exceed the target CD before we can start bisecting.
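A sketch of that initial bracketing stage, under the same assumptions as the snippet above (hypothetical `cd_of_level` helper; `x_start` playing the role of b_1 / `min_distance`): grow the contour level geometrically until the CD first exceeds the target, then hand the resulting bracket to the bisection.

```python
def bracket_target_cd(cd_of_level, cd_target, x_start, growth=2.0, max_steps=40):
    """Grow the contour level outward from x_start, multiplying by `growth`
    each step, until the CD first exceeds cd_target.

    Returns (x_inner, x_outer), a bracket suitable for bisect_contour above,
    or None if no contour exceeding the target CD is found.
    """
    x_inner, x_outer = 0.0, x_start
    for _ in range(max_steps):
        if cd_of_level(x_outer) > cd_target:
            return x_inner, x_outer
        x_inner, x_outer = x_outer, x_outer * growth
    return None  # target CD never exceeded within max_steps
```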
The algorithm in #67 is working pretty well. However, I am realizing some of our assumptions about convexity deficiency may not be correct. Have a look at some of these figures, which show convexity deficiency as a function of contour value. The relationship is clearly not monotonic.
This is closed by #67
A reviewer of @nathanieltarshish's recent paper rightly criticized our contour-search method for using a fixed step size. I have always been a bit worried about the sensitivity to this parameter. We make it small in order to avoid false negatives, but that makes the method very expensive.
Instead, we should use a more intelligent bisection method.
The relevant section of the code is here: https://github.com/rabernat/floater/blob/master/floater/rclv.py#L322-L371
cc @anirban89, @geosciz