Reviewer 1:
[x] Page 3 - "sequence positive" -> "sequence of positive"
[x] Distinction between optimal and execution optimal
[x] Experiments: how many queries were tested for each graph? (Clearly more than one, since a success rate is reported, but it is not clear from the text.)
[x] References should be checked:
there are no page numbers;
[3] prm -> PRM
[16] has a conference version (ICRA 16)
[22] was in NIPS 03 not Science 04
[ ] Check references in general
Reviewer 5:
[x] In Eq. (6), the f-value is g + \epsilon h_x. In the algorithm, though, on line 12, the f-value is g + h_{SD}, where h_{SD} = h_x + h_p. Where does Eq. (6) come from? (See the note below.)
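For reference while addressing this, the two expressions as transcribed from the reviewer's reading of Eq. (6) and Algorithm line 12 (notation assumed; verify against the paper itself):

```latex
% Eq. (6), as the reviewer reads it:
%   f(v) = g(v) + \epsilon \, h_x(v)
% Algorithm, line 12:
%   f(v) = g(v) + h_{SD}(v), \qquad h_{SD}(v) = h_x(v) + h_p(v)
% The revision should state explicitly how these two definitions relate,
% e.g. whether h_p is meant to absorb the \epsilon-inflation of h_x
% (an assumption to confirm against the paper, not a claim).
```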
[x] In the 2nd paragraph of the proofs for the same theorem, the authors say that f(v_{j+1}) \leq f(v_j) whenever v_{j+1} is expanded before v_j. This holds for A* with consistent heuristics. When using inconsistent heuristics, which is the case in the paper, this no longer holds true. So, unless I'm missing something, this proof needs to be corrected. (A possible direction is sketched below.)
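One standard route for weighted A* with an inflated (and hence possibly inconsistent) heuristic, noted only as a possible direction for the corrected argument, not as the paper's actual proof:

```latex
% Instead of claiming f(v_{j+1}) \le f(v_j) over the expansion order,
% the usual argument bounds the goal's cost at the moment it is
% selected for expansion:
%   g(v_{goal}) \le \epsilon \, c^*(v_{start}, v_{goal}),
% where c^* is the optimal path cost. This yields \epsilon-suboptimality
% without assuming any monotonicity of f along the expansion order
% (cf. Pohl's weighted A*; Likhachev et al., ARA*).
```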
[x] Small point: at the end of the same proof, the authors say f \leq f(v_s^i). I believe they forgot the argument of the first f.
[ ] The second concern I have is about the experimental analysis. I would have also wanted to see a comparison of the proposed approach against a multi-resolution planner such as the one in “Framed-quadtree path planning for mobile robots operating in sparse environments”, even though it is mostly for navigation planning as opposed to high-dimensional planning. I would have also wanted to see a comparison against some methods that optimize the PRM's density, such as Sparse Roadmaps by Bekris's group.
[x] Finally, what was the heuristic function used by A* and weighted A* on regular PRMs during the experiments? I don't think the paper mentions this. The efficiency of these algorithms clearly depends dramatically on the heuristic functions they use.
Reviewer 11:
[x] "However even in a single query setting using a
precomputed roadmap offers two distinct advantages over a
RRT: determinism and the ability to precompute properties
(such as swept volume) of edges." Related Work, Section A, Last
Paragraph.
If it is really a single-query application, none of this computation
can be
resaved, therefore negating the benefit of "precomputation". Also,
determinism
is only guaranteed with the same random seed or with quasi-random
sequences
with the same bases. (which this paper does employ).
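A minimal Python sketch (hypothetical helper names, not the paper's code) of why quasi-random Halton sampling with fixed bases is deterministic: the same index and bases always produce the same sample, so rebuilding the graph requires no shared random seed.

```python
def halton(index: int, base: int) -> float:
    """Radical inverse of `index` in `base`: the 1-D Halton value in [0, 1)."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result


def halton_point(index: int, bases=(2, 3, 5, 7, 11, 13, 17)) -> tuple:
    """One 7-D Halton sample (e.g. a 7-DOF configuration before scaling to joint limits)."""
    return tuple(halton(index, b) for b in bases)


# Deterministic: identical calls give identical points, so two runs
# (or two machines) construct the same roadmap vertices with no seed.
assert halton_point(42) == halton_point(42)
```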
[ ] "As the roadmaps are
dependent on the obstacles these approaches do not benefit
from precomputation or multiple queries with non-static
obstacles." Related Work, Section B, First Paragraph
These methods are still reasonable alternatives to compare against in
the
experimental setups you provided since they are static environments. It
would
also put this method in context with other roadmap based approaches.
[ ] "These approaches rely on knowledge of the task to choose the high
density
regions and so are only appropriate if the search can be preemptively
categorized into “easy” and “hard” regions." Related Work, Section B,
Second
Paragraph
What examples can you provide where this is cumbersome? Providing
justification for this point is crucial to strengthening the argument
for
using the strategy of this paper. In a sense, the obstacle-aware
algorithms
from the previous paragraph are able to determine these easy and hard
regions
algorithmically.
[x] "Experiments were performed comparing Selective Densi-
fication with many methods of searching the same graph,
and also with bidirectional RRT-Connect from OMPL [26]
on three scenarios for a 7-DOF robotics arm." Experiments, First
Paragraph. Why are none of the other OMPL algorithms compared here? Providing
results for any of the off the shelf roadmap methods would strengthen the paper
signficantly. In addition, it appears that two different things are
being evaluated: one is the motion planning results, in terms of finding a
path and what the cost of the smoothed path is, and the second comparison is on
the search method used in the Selective Densification graph. In this sense,
the search comparisons are plentiful, but the motion planning comparisons
are minimal.
Regarding the plots in Figure 4:
[x] Why is there a success fraction for all the methods using the Selective Densification graph?
[x] Are there different graphs being generated with different Halton sequences?
[x] For the bidirectional RRT method, this makes sense since you would need to run with multiple different random seeds, but for the same graph and environment, the solution would be the same, correct?
[x] Clarification on this point is needed.
[x] Also, how many attempts are being considered in these cases?
[x] Are the start and goal points the same in each run?
Other questions not answered in the paper:
[x] How much precomputation time is needed to generate the 18-layer graph used in the experiments? Stating this would alleviate concerns about time or memory needed to build this baseline graph used for the search.
[ ] What was the proportion of edges at each depth needed to solve each of the given problems? How was the 18-layer threshold selected? What clearance of path does that guarantee being able to find (per Theorem V.3)? Providing this type of analysis would bolster the paper by showing results in practice where the search method is effective at staying at higher layers, while also showing that it can leverage the smaller edges at deeper layers.
[x] I added stuff to acknowledgements for funding support. I think it pushed the paper over the size limit.
[x] There are lots of latex errors, please fix.
[x] "probabilistically optimal" is not a thing, it should be "asymptotically optimal"
[x] "balances time of edge checks with expansion" -> "time spent on edge checks vs. expansion"
[x] "We employ both forms of precomputation to front-load computation into an offline process that is not time-sensitive and also reduce the computation by using the same graph in multiple environments." -> This is really vague and sounds incorrect to use the same roadmap in multiple envs. Please make it clear you're just talking about the robot's swept volume for each edge being reused. Also update the response to reviewers for this point.
[ ] Please double-check the proof with someone else in the lab. I don't have time to go through it.
[x] Be consistent about "I" vs. "We" in the response to reviewers.
[x] Try not to be too terse in response to reviewers, e.g. "Fixed". It sounds dismissive. You could write something like "This has been fixed in the introduction" instead of "Fixed".
[x] This sounds very dismissive: "Page numbers are intentionally omitted, as it saves a few lines and I am running low on space". Be more diplomatic: "We would be happy to include page numbers but because of the tight space constraints there is no way to put them in without exceeding 8 pages."