editorialbot opened this issue 3 months ago (status: Open)
Thanks @logological for your message. The last month has been busy; I will now resume the review.
Hi @haider8645, With your revision, the paper really improved in my opinion. Some other updates from my side:
- I’ve also checked the authorship box now.
- One open question was about the ROS2 integration. Do you plan to include that in the published version or is it future work?
- I got the code running but still have to verify whether the Qt 5.15 bug is now solved.
In the meantime, I’ve moved on to the documentation (i.e., the readme file and auto-generated doc) and I have some remarks. Please find them below in order of the checklist/appearance in the docs:
- The statement of need is missing from the docs
- The figures in the Doxygen docs are broken (they live in ugv_nav4d/doc/figures/, while the file points to ugv_nav4d/build/doc/figures/)
- In the readme, in the Planning section, there is an open TODO.
- The example usage is based on the (debug) GUI with a provided map. Although that is great for quickly getting a feel for what ugv_nav4d can do, I’m still unsure how or where to start if I wanted to use your software on my robot. I think such a real-world example could greatly help new users get started. Suppose I have a robot and a setup in the lab: what would I need to do to get started? What information would go in, and what would come out? Both in terms of GUI usage and API calls on the real robot. The review checklist asks for an example “ideally to solve real-world analysis problems”, so technically you may not strictly need it, but I would highly recommend adding such an example.
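On the broken Doxygen figure paths mentioned above, the usual knob for this is the `IMAGE_PATH` option. A sketch of a likely fix follows, assuming the docs are built from a Doxyfile in the repo and that the figures should be resolved from the source tree rather than the build tree (the exact paths in the repo may differ):

```
# Doxyfile (sketch; verify against the actual config in the repo)
# Point Doxygen at the figures in the source tree, not the build tree:
IMAGE_PATH = doc/figures
```

A relative `IMAGE_PATH` is resolved from the directory Doxygen is run in, so if the docs are generated from `build/`, the path may need to be `../doc/figures` instead.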
I plan to dive a bit deeper into the functionality next week.
ROS2 integration is looking good. I am working on it in parallel with the review here.
I will work on the other points you mentioned, the missing statement of need and the real-world example. I guess this is where the ROS2 wrapper would be a good starting point. The open TODO in the readme is an old artifact; I can remove it.
Just something extra (about real-world usage): there is a known challenge for mapping in dynamic environments (which is out of scope for the planner and more of a mapping problem). The map (MLS) is generated from a point cloud, and that point cloud should not contain any dynamic objects, because they would appear in the map as ghost artifacts. So for the planner to work in dynamic environments, one would need to pre-process the point cloud and remove any points belonging to moving objects, so that the map is clean and suitable for planning. The detection of dynamic points in point clouds is ongoing research: https://github.com/PRBonn/4DMOS
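To illustrate the pre-processing idea (this is not ugv_nav4d or 4DMOS code, just a minimal sketch of one naive approach): points that have no nearby counterpart in the previous scan can be flagged as dynamic and dropped before building the MLS map. This assumes the two scans are already aligned in a common frame; the threshold value is a made-up placeholder.

```python
# Minimal sketch (hypothetical, not ugv_nav4d code): flag points that
# moved between two consecutive, already-aligned scans by checking
# their nearest-neighbor distance to the previous scan.
import numpy as np

def dynamic_mask(prev_scan, curr_scan, threshold=0.2):
    """Return a boolean mask marking points in curr_scan with no nearby
    counterpart in prev_scan (i.e., likely dynamic)."""
    # Brute-force pairwise distances; fine for a small demo cloud.
    d = np.linalg.norm(curr_scan[:, None, :] - prev_scan[None, :, :], axis=2)
    return d.min(axis=1) > threshold

# Two static points stay put; one "dynamic" point jumps between scans.
prev_scan = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [5.0, 5.0, 0.0]])
curr_scan = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [6.5, 5.0, 0.0]])

mask = dynamic_mask(prev_scan, curr_scan)
static_points = curr_scan[~mask]  # feed only these into the MLS map
```

Real solutions such as 4DMOS use learned models over multiple scans rather than a single-threshold heuristic, but the input/output shape is the same: a raw cloud goes in, a cleaned static cloud comes out.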
That looks nice!
I am in the process of moving all the test .ply files out of the repo. In the meantime, you can use this small part of a Utah environment, utah.zip, for your tests as well :) (You will need to disable the grid to be able to click on the map, because the patches are below the grid.)
The ROS integration looks really nice! I think the extra information on real-world usage would be very valuable in the documentation. Combined with a rough outline of which steps to take, I think that would be enough information for potential users to get started.
Just wanted to check in with everyone regarding the status of the review. I see there are still some open issues in the project repository; are these currently blocked by external factors or are we waiting on the submitters/reviewers for further action?
Hi @logological,
My only open checkbox is the example usage. Once that is added to the documentation, the paper can be accepted in my opinion, although I'm still a fan of including the ROS integration in the published version of the software (which would mean that I'd have to review that functionality as well).
Hi @logological
I am working on the example usage as requested by @OlgerSiebinga. The ROS2 integration is looking good: a planned path on the traversability map can be seen in rviz2. I am testing the ROS2 wrapper and will publish it in the coming days.
Submitting author: @haider8645 (Muhammad Haider Khan Lodhi)
Repository: https://github.com/dfki-ric/ugv_nav4d
Branch with paper.md (empty if default branch): main
Version: v1.0.0
Editor: @logological
Reviewers: @soraxas, @OlgerSiebinga
Archive: Pending
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@brean & @soraxas, your review will be checklist-based. Each of you will have a separate checklist that you should update as you carry out your review. First of all, you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions or concerns, please let @logological know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @soraxas
📝 Checklist for @OlgerSiebinga