openjournals / joss-reviews

Reviews for the Journal of Open Source Software
Creative Commons Zero v1.0 Universal

[REVIEW]: DSO++: Modern reimplementation of Direct Sparse Odometry #4521

Closed editorialbot closed 1 year ago

editorialbot commented 2 years ago

Submitting author: @Solonets (Sergei Solonets)
Repository: https://github.com/RoadlyInc/DSOPP
Branch with paper.md (empty if default branch): feature/joss_paper
Version: v1.0
Editor: @hugoledoux
Reviewers: @Mirmix, @jhidalgocarrio
Archive: Pending

Status

status

Status badge code:

HTML: <a href="https://joss.theoj.org/papers/1104fe14c42d6c73740b256f1907ff5f"><img src="https://joss.theoj.org/papers/1104fe14c42d6c73740b256f1907ff5f/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/1104fe14c42d6c73740b256f1907ff5f/status.svg)](https://joss.theoj.org/papers/1104fe14c42d6c73740b256f1907ff5f)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@Mirmix & @jhidalgocarrio, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @hugoledoux know.

✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨

Checklists

πŸ“ Checklist for @Mirmix

πŸ“ Checklist for @jhidalgocarrio

editorialbot commented 2 years ago

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf
editorialbot commented 2 years ago
Software report:

github.com/AlDanial/cloc v 1.88  T=0.30 s (2037.9 files/s, 142033.1 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
C++                            208           3096            463          15849
C/C++ Header                   226           1934           5927           8630
CMake                          114            335            236           2149
Python                          21            380            317           1340
YAML                             7             35             14            479
Bourne Shell                     9             70              6            276
Protocol Buffers                12             10              0            131
TeX                              1             10              0            106
Markdown                         4             42              0             99
Dockerfile                       1             11              0             82
-------------------------------------------------------------------------------
SUM:                           603           5923           6963          29141
-------------------------------------------------------------------------------

gitinspector failed to run statistical information for the repository
editorialbot commented 2 years ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1109/TPAMI.2017.2658577 is OK
- 10.48550/arXiv.1708.07878 is OK
- 10.1007/978-3-030-01237-3_42 is OK
- 10.1109/IROS.2018.8593376 is OK
- 10.1109/LRA.2021.3140129 is OK
- 10.48550/arXiv.2003.01060 is OK
- 10.48550/arXiv.1904.11932 is OK
- 10.1109/LRA.2018.2855443 is OK
- 10.1109/LRA.2018.2855443 is OK

MISSING DOIs

- None

INVALID DOIs

- None
editorialbot commented 2 years ago

Wordcount for paper.md is 427

editorialbot commented 2 years ago

👉 📄 Download article proof 📄 View article proof on GitHub 📄 👈

Mirmix commented 2 years ago

Review checklist for @Mirmix

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

Mirmix commented 2 years ago

Thanks for the efforts. Here is my review of the paper.

What we mean by research software
JOSS publishes articles about research software. This definition includes software that: solves complex modeling problems in a scientific context (physics, mathematics, biology, medicine, social science, neuroscience, engineering); supports the functioning of research instruments or the execution of research experiments; extracts knowledge from large data sets; offers a mathematical library, or similar. While useful for many areas of research, pre-trained machine learning models and notebooks are not in-scope for JOSS.
***
Substantial scholarly effort
JOSS publishes articles about software that represent substantial scholarly effort on the part of the authors. Your software should be a significant contribution to the available open source software that either enables some new research challenges to be addressed or makes addressing research challenges significantly better (e.g., faster, easier, simpler).

The authors' main contribution is to make DSO more modular and to add extensive testing of the functionality of already existing software. There is therefore not much to say about improvements in time or accuracy, so the work may seem like more of an engineering effort than a scholarly one, which is totally fine in JOSS.

However, I believe users should see clear reasons, for instance in the documentation or the user-friendliness of the software, to choose this software over the original, well-known implementation of DSO.

Some factors that may be considered by editors and reviewers when judging effort include:
    * Age of software (is this a well-established software project) / length of commit history.
    * Number of commits.
    * Number of authors.
    * Total lines of code (LOC). Submissions under 1000 LOC will usually be flagged, those under 300 LOC will be desk rejected.
    * Whether the software has already been cited in academic papers.
    * Whether the software is sufficiently useful that it is likely to be cited by your peer group.

At the current stage, I think the repository is poorly documented. It lacks documentation for many parts: installation (step-by-step guidance), dataset formats, how the datasets and code are organized, what some files (.sh) are for, etc. I faced multiple issues while trying to install the repository. Personally, I think the overall repository structure/tree is too modular and contains too many files/folders, which makes it harder to navigate than the original DSO implementation.

The original DSO has much better documentation, with an end-to-end installation manual that makes it easy to use and easy to install (even though it does not use docker). The structure of the code is more user-friendly and easier to grasp.

The repository has some ".sh" files without any documentation or explanation. I would also like to see end-to-end installation documentation.

The paper should be between 250-1000 words. Authors submitting papers significantly longer than 1000 words may be asked to reduce the length of their paper.

Excluding the (in-text) references, the paper is roughly 350 words. That fits the guideline, but I find it a bit short. I would be glad if the authors wrote more about the "delta" of their work against the original implementation. I would also like to see runtime and perhaps memory-consumption figures showing that the current reimplementation does not hurt performance.

Datasets issues

Your code uses some test data from TUM MONO; consider citing it in the README.

Your code also downloads some test data using "download_test_data.sh". You should document how the calibration files and other files are organized, and what the fields in each .csv, .txt, and *.gt file are.

Some installation issues

Docker

This software uses docker with Ubuntu 18.04. For a modern reimplementation of DSO, I would personally expect a more recent Ubuntu version and more recent packages. There is also no proper manual, so the user is expected to know how to use docker and how to install this software, which is not usually the case for the target audience (students).

You may also consider putting the docker image on Docker Hub, for a simple docker pull option.
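For illustration, the publish-and-pull flow could look roughly like this (the image name below is only a hypothetical example):

# hypothetical Docker Hub namespace/tag
docker tag dsopp roadlyinc/dsopp:latest
docker push roadlyinc/dsopp:latest
# users would then only need:
docker pull roadlyinc/dsopp:latest
docker run -it roadlyinc/dsopp:latest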

I faced many issues during installation (using a naive docker installation). As a first step, I naively ran docker build and docker run in the location of the Dockerfile:

@DSOPP/docker/ubuntu$ sudo docker build -t dsopp -f Dockerfile .
sudo docker run -it dsopp

There was a docker build error because lcov_1.15 was not found, so consider fixing it by using the lcov_1.16 version in the Dockerfile (line 79):

RUN wget http://ftp.de.debian.org/debian/pool/main/l/lcov/lcov_1.16.orig.tar.xz \
    && tar xvf lcov_1.16.orig.tar.xz \
    && cd lcov-1.16 \
    && make install \
    && cd .. \
    && rm -rf lcov*

You should also consider adding the following packages to the install steps (download_test_data.sh uses them, so it needs them):

pip install gdown
apt-get install zip
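As Dockerfile steps, a sketch could look like the following (exact placement next to the other apt-get/pip lines in docker/ubuntu/Dockerfile is up to you):

RUN apt-get update && apt-get install -y zip unzip \
    && pip install gdown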

General repository build

When I try to git clone/cmake/make the repository (build it the standard way) as shown in the README, it gives another error:

mkdir build;
cd build;
cmake .. -DCMAKE_CXX_COMPILER=g++-10 -DCMAKE_BUILD_TYPE=Release;
make -j;
-- Configuring done
-- Generating done
-- Build files have been written to: /app/build/DSOPP/build
root@17731041aa19:/app/build/DSOPP/build# make 
Scanning dependencies of target check_format
--- /app/build/DSOPP/pydsopp/track/frame.py (original)
+++ /app/build/DSOPP/pydsopp/track/frame.py (reformatted)
@@ -23,12 +23,10 @@
     """
     Frame in the world
     """
-
     class AttachedFrame:
         """
         Frame attached to a keyframe (Frame)
         """
-
         def __init__(self, id_, timestamp, odometry_t_keyframe_frame):
             self.id = id_
             self.timestamp = timestamp
--- /app/build/DSOPP/pydsopp/track/camera_calibration.py    (original)
+++ /app/build/DSOPP/pydsopp/track/camera_calibration.py    (reformatted)
@@ -11,7 +11,6 @@
     """
     Pinhole camera model
     """
-
     def __init__(self, focal_x, focal_y, center_x, center_y):
         """
         Intialize Pinhole camera via 3x3 calibration matrix
@@ -62,7 +61,6 @@
     """
     Camera calibration
     """
-
     def __init__(self, intrinsics, image_size, model_type):
         if model_type.name == 'pinhole':
             self.model = Pinhole(*intrinsics[:4])
--- /app/build/DSOPP/pydsopp/track/track.py (original)
+++ /app/build/DSOPP/pydsopp/track/track.py (reformatted)
@@ -49,7 +49,6 @@
     """
     Class for exctract data from track storage
     """
-
     def __init__(self, track_storage):
         """
         Arguments:
--- /app/build/DSOPP/pydsopp/track/export/colmap_track_export.py    (original)
+++ /app/build/DSOPP/pydsopp/track/export/colmap_track_export.py    (reformatted)
@@ -12,7 +12,6 @@
     """
     Internal class for building colmap data, representation of 3D point with index and 2D projection indices
     """
-
     @dataclass
     class Projection:
         frame_index: int
@@ -32,7 +31,6 @@
     """
     Internal class for building colmap data
     """
-
     @dataclass
     class Point2D:
         pixel: np.array
--- /app/build/DSOPP/pydsopp/transformations/motion.py  (original)
+++ /app/build/DSOPP/pydsopp/transformations/motion.py  (reformatted)
@@ -6,7 +6,6 @@

 class Motion:
     """ 3 dimensional group of similarity transformations """
-
     def __init__(self, q, t):
         """
         Construct `Motion` from q,t or sympy matrix
CMakeFiles/check_format.dir/build.make:74: recipe for target 'CMakeFiles/check_format' failed
make[2]: *** [CMakeFiles/check_format] Error 1
CMakeFiles/Makefile2:2326: recipe for target 'CMakeFiles/check_format.dir/all' failed
make[1]: *** [CMakeFiles/check_format.dir/all] Error 2
Makefile:111: recipe for target 'all' failed
make: *** [all] Error 2
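The diffs above look like formatter output, so a possible local workaround (an assumption on my side: the check_format target appears to run yapf, which prints exactly this original/reformatted diff style) is to reformat the Python sources in place before building:

pip install yapf
yapf --in-place --recursive pydsopp/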

Python library only build

In the README there is another error/typo:

Original: git clone --depth 1 https://github.com/RoadAR/DSOPP
Should be: git clone --depth 1 https://github.com/RoadlyInc/DSOPP

I managed to run the "python library only build":

git clone --depth 1 https://github.com/RoadlyInc/DSOPP
mkdir build && cd build
cmake ../ -DBUILD_DOC=ON -DCHECK_FORMAT=ON
make pydsopp_lib

After running the python scripts I got the following error message:

FileNotFoundError: [Errno 2] No such file or directory: '*.bin'

Maybe this is due to the previous issues, but ctest was also failing:

The following tests FAILED:
    1 - test_track2tum_exporter (Not Run)
    2 - test_config_loader (Not Run)
    3 - test_pinhole_camera_model (Not Run)
    4 - test_atan_camera_model (Not Run)
    5 - test_ios_camera_model (Not Run)
    6 - test_simple_radial (Not Run)
    7 - test_division_camera_model (Not Run)
    8 - test_tum_fov_model (Not Run)
    9 - test_reprojects (Not Run)
    10 - test_projector (Not Run)
    11 - test_se3_motion (Not Run)
    12 - test_generalized_epipolar_line (Not Run)
    13 - test_triangulation (Not Run)
    14 - test_essential_matrix (Not Run)
    15 - test_affine_brightness (Not Run)
    16 - test_ceres_pose_alignment (Not Run)
    17 - test_photometric_bundle_adjustment (Not Run)
    18 - test_local_parameterization_s2 (Not Run)
    19 - test_ceres_pose_alignment_noised (Not Run)
    20 - test_analytical_diff (Not Run)
    21 - test_linear_system (Not Run)
    22 - test_incremental_solver (Not Run)
    23 - test_ceres_camera_optimization (Not Run)
    24 - test_geometric_bundle_adjustment (Not Run)
    25 - test_tracking_features_extractor (Not Run)
    26 - test_visualizer (Not Run)
    27 - test_image_folder_provider (Not Run)
    28 - test_camera (Not Run)
    29 - test_undistorter (Not Run)
    30 - test_get_camera_model (Not Run)
    31 - test_camera_mask (Not Run)
    32 - test_image_video_provider (Not Run)
    33 - test_npy_folder_provider (Not Run)
    34 - test_camera_transforms (Not Run)
    35 - test_camera_settings (Not Run)
    36 - test_frequency_keyframe_strategy (Not Run)
    37 - test_depth_estimation (Not Run)
    38 - test_landmarks_activator (Not Run)
    39 - test_reference_frame_depth_map (Not Run)
    40 - test_track_storage (Not Run)
    41 - test_track_frames (Not Run)
    42 - test_marginalization (Not Run)
    43 - test_fbs_feature_matcher (Not Run)
    44 - test_fbs_tracker (Not Run)
    45 - test_pydsopp_json_exporter (Failed)
    46 - test_extract_tracking_features (Failed)
    47 - test_python_reader (Failed)
Errors while running CTest

Conclusions

Document the package and installation extensively.

Your title claims:

DSO++: Modern reimplementation of Direct Sparse Odometry

So I would expect it to use a more recent OS (Ubuntu) and more recent package (dependency) versions.

You claim that:

It provides researchers with implemented and tested modules for direct SLAM: photometric bundle adjustment, feature metric bundle adjustment, marginalization, etc., which could be useful for any subsequent work as well as student projects.

If that is a goal, I would expect the repository to provide tutorial code and an end-to-end, easy-to-use/install manual for an average student interested in Multi-View Geometry. If you see "pydsopp" as a possible set of tutorial examples, then you need to comment and document it well.

You also claim that:

Secondly, similar to any other Structure From Motion algorithms it can be directly used by researchers in the area of photogrammetry, 3D reconstruction, differentiable rendering, and other similar areas.

If that is a goal, it should offer some kind of simplicity over existing photogrammetry software/libraries like colmap/openMVG/openGV, and it should also be well documented.

I believe that if you are reimplementing/refactoring an existing library, it should at least be as well documented and as user- and developer-friendly as the existing software. Furthermore, explaining the "delta" and the necessity for this new library well is very important. Unfortunately, at the current stage of your repository and work, I believe the original DSO is much more appealing and easier to use.

Solonets commented 2 years ago

Dear @Mirmix, thank you for your detailed review. We added much more extensive documentation and fixed some of the problems you mentioned. Would you be ready to check it out once again?

jhidalgocarrio commented 2 years ago

Review checklist for @jhidalgocarrio

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

jhidalgocarrio commented 2 years ago

Thanks to the authors for this contribution. I reviewed the installation process.

Building DSOPP

Installing python dependencies

The command pip3 -r requirements.txt is missing the install subcommand. PR here
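For reference, the corrected command is:

pip3 install -r requirements.txt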

Build

Using the virtualenv on Ubuntu 18.04

mkdir build;
cd build;
cmake .. -DCMAKE_BUILD_TYPE=Release;
make -j;

build stops here:

[ 19%] Completed 'eigen_external'
[ 19%] Built target eigen_external
[ 98%] Linking CXX static library ../../../lib/libgmock_main.a
[ 98%] Built target gmock_main
[100%] Linking CXX executable yaml-cpp-tests
[100%] Built target yaml-cpp-tests
[ 20%] No install step for 'yaml-cpp_external'
[ 20%] No test step for 'yaml-cpp_external'
[ 20%] Completed 'yaml-cpp_external'
[ 20%] Built target yaml-cpp_external
Makefile:100: recipe for target 'all' failed
make: *** [all] Error 2

Scrolling up I see two errors:

[  7%] Running python protocol buffer compiler on semantic_legend.proto
c++: error: unrecognized command line option β€˜-fconcepts-diagnostics-depth=2’; did you mean β€˜-fno-diagnostics-color’?
src/common/time/CMakeFiles/time.dir/build.make:75: recipe for target 'src/common/time/CMakeFiles/time.dir/src/time.cpp.o' failed
make[2]: *** [src/common/time/CMakeFiles/time.dir/src/time.cpp.o] Error 1
CMakeFiles/Makefile2:2204: recipe for target 'src/common/time/CMakeFiles/time.dir/all' failed
make[1]: *** [src/common/time/CMakeFiles/time.dir/all] Error 2
-- Found Threads: TRUE
[ 16%] Built target gflags_external
-- Configuring done
CMake Error in src/CMakeLists.txt:
  Target "benchmark" requires the language dialect "CXX20" (with compiler
  extensions), but CMake does not know the compile flags to use to enable it.

CMake Error in src/CMakeLists.txt:
  Target "benchmark_main" requires the language dialect "CXX20" (with
  compiler extensions), but CMake does not know the compile flags to use to
  enable it.
DaniilSinitsyn commented 2 years ago

@jhidalgocarrio Thank you for your effort to build our code! And thank you for spotting our mistake in the pip command. Unfortunately, I had forgotten to add a section about using recent versions of the g++ compiler and cmake...

I have added those sections, along with comments about the possibility of facing build issues on the non-docker route.
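In short, the CXX20 errors above should disappear once CMake is pointed at a sufficiently new compiler, e.g. (assuming g++-10 is installed, as in the docker build):

sudo apt-get install g++-10
cmake .. -DCMAKE_CXX_COMPILER=g++-10 -DCMAKE_BUILD_TYPE=Release
make -j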

Would you be ready to check it out once again?

hugoledoux commented 2 years ago

@Solonets @DaniilSinitsyn the comment above from @Mirmix is very relevant (for the paper):

I would be glad if the authors wrote more about the "delta" of their work against the original implementation.

Since your work is a rewrite and improvement of the original DSO, the users of the original code should be informed about what has changed and why you think it's better.

Mirmix commented 2 years ago

Thanks for the updates and for addressing some of the issues I raised. A few issues remain unaddressed, but to avoid repeating myself here (and since the main updates concern installation), I will focus mainly on the Docker-based installation issues that are still present.

I tried to install the repo using docker. As I mentioned before, some packages were missing (gdown was added, but zip and unzip were forgotten), so I created a pull request to fix this; please consider merging it.

I ran ctest, and tests #45 and #46 failed:

      Start  1: test_track2tum_exporter
 1/47 Test  #1: test_track2tum_exporter ..............   Passed    0.05 sec
      Start  2: test_config_loader
 2/47 Test  #2: test_config_loader ...................   Passed    5.88 sec
      Start  3: test_pinhole_camera_model
 3/47 Test  #3: test_pinhole_camera_model ............   Passed    6.72 sec
      Start  4: test_atan_camera_model
 4/47 Test  #4: test_atan_camera_model ...............   Passed    0.02 sec
      Start  5: test_ios_camera_model
 5/47 Test  #5: test_ios_camera_model ................   Passed    0.00 sec
      Start  6: test_simple_radial
 6/47 Test  #6: test_simple_radial ...................   Passed    0.00 sec
      Start  7: test_division_camera_model
 7/47 Test  #7: test_division_camera_model ...........   Passed    0.00 sec
      Start  8: test_tum_fov_model
 8/47 Test  #8: test_tum_fov_model ...................   Passed    0.00 sec
      Start  9: test_reprojects
 9/47 Test  #9: test_reprojects ......................   Passed    0.00 sec
      Start 10: test_projector
10/47 Test #10: test_projector .......................   Passed    0.00 sec
      Start 11: test_se3_motion
11/47 Test #11: test_se3_motion ......................   Passed    0.00 sec
      Start 12: test_generalized_epipolar_line
12/47 Test #12: test_generalized_epipolar_line .......   Passed    0.04 sec
      Start 13: test_triangulation
13/47 Test #13: test_triangulation ...................   Passed    0.00 sec
      Start 14: test_essential_matrix
14/47 Test #14: test_essential_matrix ................   Passed    0.00 sec
      Start 15: test_affine_brightness
15/47 Test #15: test_affine_brightness ...............   Passed    0.59 sec
      Start 16: test_ceres_pose_alignment
16/47 Test #16: test_ceres_pose_alignment ............   Passed    3.29 sec
      Start 17: test_photometric_bundle_adjustment
17/47 Test #17: test_photometric_bundle_adjustment ...   Passed    4.93 sec
      Start 18: test_local_parameterization_s2
18/47 Test #18: test_local_parameterization_s2 .......   Passed    0.00 sec
      Start 19: test_ceres_pose_alignment_noised
19/47 Test #19: test_ceres_pose_alignment_noised .....   Passed    1.21 sec
      Start 20: test_analytical_diff
20/47 Test #20: test_analytical_diff .................   Passed    0.74 sec
      Start 21: test_linear_system
21/47 Test #21: test_linear_system ...................   Passed    8.34 sec
      Start 22: test_incremental_solver
22/47 Test #22: test_incremental_solver ..............   Passed    1.74 sec
      Start 23: test_ceres_camera_optimization
23/47 Test #23: test_ceres_camera_optimization .......   Passed    8.32 sec
      Start 24: test_geometric_bundle_adjustment
24/47 Test #24: test_geometric_bundle_adjustment .....   Passed    0.01 sec
      Start 25: test_tracking_features_extractor
25/47 Test #25: test_tracking_features_extractor .....   Passed    0.67 sec
      Start 26: test_visualizer
26/47 Test #26: test_visualizer ......................   Passed    0.01 sec
      Start 27: test_image_folder_provider
27/47 Test #27: test_image_folder_provider ...........   Passed    0.02 sec
      Start 28: test_camera
28/47 Test #28: test_camera ..........................   Passed    0.02 sec
      Start 29: test_undistorter
29/47 Test #29: test_undistorter .....................   Passed    0.76 sec
      Start 30: test_get_camera_model
30/47 Test #30: test_get_camera_model ................   Passed    0.00 sec
      Start 31: test_camera_mask
31/47 Test #31: test_camera_mask .....................   Passed    0.98 sec
      Start 32: test_image_video_provider
32/47 Test #32: test_image_video_provider ............   Passed    5.52 sec
      Start 33: test_npy_folder_provider
33/47 Test #33: test_npy_folder_provider .............   Passed    3.33 sec
      Start 34: test_camera_transforms
34/47 Test #34: test_camera_transforms ...............   Passed    0.06 sec
      Start 35: test_camera_settings
35/47 Test #35: test_camera_settings .................   Passed    0.03 sec
      Start 36: test_frequency_keyframe_strategy
36/47 Test #36: test_frequency_keyframe_strategy .....   Passed    0.00 sec
      Start 37: test_depth_estimation
37/47 Test #37: test_depth_estimation ................   Passed    7.14 sec
      Start 38: test_landmarks_activator
38/47 Test #38: test_landmarks_activator .............   Passed    9.07 sec
      Start 39: test_reference_frame_depth_map
39/47 Test #39: test_reference_frame_depth_map .......   Passed    1.48 sec
      Start 40: test_track_storage
40/47 Test #40: test_track_storage ...................   Passed    0.23 sec
      Start 41: test_track_frames
41/47 Test #41: test_track_frames ....................   Passed    0.03 sec
      Start 42: test_marginalization
42/47 Test #42: test_marginalization .................   Passed    0.00 sec
      Start 43: test_fbs_feature_matcher
43/47 Test #43: test_fbs_feature_matcher .............   Passed    0.76 sec
      Start 44: test_fbs_tracker
44/47 Test #44: test_fbs_tracker .....................   Passed   33.03 sec
      Start 45: test_pydsopp_json_exporter
45/47 Test #45: test_pydsopp_json_exporter ...........***Failed    0.09 sec
      Start 46: test_extract_tracking_features
46/47 Test #46: test_extract_tracking_features .......***Failed    0.02 sec
      Start 47: test_python_reader
47/47 Test #47: test_python_reader ...................   Passed    2.00 sec

96% tests passed, 2 tests failed out of 47

Label Time Summary:
Coverage    =  10.35 sec*proc (25 tests)

Total Test time (real) = 107.21 sec

The following tests FAILED:
     45 - test_pydsopp_json_exporter (Failed)
     46 - test_extract_tracking_features (Failed)
Errors while running CTest

Another thing I would be glad to see resolved: I saw you tried to remove the password prompts inside docker with the lines below (from the Dockerfile), which is good.

ARG USER_ID
ARG GROUP_ID

RUN addgroup --gid $GROUP_ID user
RUN adduser --disabled-password --gecos '' --uid $USER_ID --gid $GROUP_ID user

But unfortunately, it did not work as expected; when I tried to run

(sudo) apt-get install XXX

it asked me for a password. I tried my Ubuntu password and an empty string; neither worked.

(base) user@169e5297fc95:~/Workspace/DSOPP/build$ sudo apt-get install smth
[sudo] password for user: 
Sorry, try again.
[sudo] password for user: 
Sorry, try again.
[sudo] password for user: 
sudo: 3 incorrect password attempts
(base) user@169e5297fc95:

Maybe this is helpful: when I print echo $USER I get an empty string, while whoami returns "user":

(base) user@169e5297fc95:~/Workspace/DSOPP/build$ echo $USER

(base) user@169e5297fc95:~/Workspace/DSOPP/build$ whoami
user
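One possible fix (a sketch on my side, not verified against your Dockerfile) is to give the created user passwordless sudo:

RUN apt-get update && apt-get install -y sudo
RUN adduser user sudo
RUN echo 'user ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers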
Solonets commented 2 years ago

@Solonets @DaniilSinitsyn the comment above from @Mirmix is very relevant (for the paper):

I would be glad if the authors wrote more about the "delta" of their work against the original implementation.

Since your work is a rewrite and improvement of the original DSO, the users of the original code should be informed about what has changed and why you think it's better.

Hi @hugoledoux, @Mirmix. Thank you for the comments. I agree there is value in writing it down. We have extended the paper with this information. Could you please check it?

Mirmix commented 2 years ago

@editorialbot generate pdf

editorialbot commented 2 years ago

👉 📄 Download article proof 📄 View article proof on GitHub 📄 👈

jhidalgocarrio commented 2 years ago

Here is my review update.

100% tests passed, 0 tests failed out of 47

Label Time Summary:
Coverage    =  12.91 sec*proc (25 tests)

Total Test time (real) = 122.93 sec



- [Track 30 Seconds test: Success]
./build/src/application/dsopp_main runs without any output or terminal message. However, it generates a track.bin file with the following visualization. Is this the correct track for the "track 30 seconds" example?

![dsopp_track30seconds_screenshot](https://user-images.githubusercontent.com/4748753/182351487-1843aa6b-b8df-4850-951e-e095976a687b.png)

If so, you should add this to the README.
Solonets commented 2 years ago

Hi @jhidalgocarrio, thank you for your effort. There was undefined behavior in CMake that caused the missing visualization during the run. I have fixed it.

Regarding -DCMAKE_CXX_STANDARD, maybe we could continue this discussion in a PR, if you would like to open one.

Mirmix commented 2 years ago

Hi, thanks for the updates. I ran the code with docker. The Pangolin GUI was not working with the default docker run command (most probably because docker's default network driver is bridge); after explicitly setting the network driver to "host" I was able to run DSOPP on the test sequences with the GUI on my Ubuntu 20.04. Please consider this pull request.
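For anyone else hitting this, the invocation that worked for me was along these lines (a sketch; the image name and X11 forwarding details may differ on your setup):

docker run -it --network host \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    dsopp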

Like other visual SLAM/odometry algorithms, DSO aims to run in real time (in the robotics community, for visual mono-SLAM to be considered real-time, the frame rate is usually expected to be at least 25 or 30 Hz). In the GUI I observed a much slower frame rate. Can you please share the main reason for this? It would be even better if you shared a runtime comparison, with the same default settings and hardware on the same dataset (perhaps some TUM MONO sequences), between your work and the original DSO implementation.

Solonets commented 2 years ago

Hi @Mirmix, thank you for your PR!

Regarding the speed: we have the same speed as DSO on the same datasets (even though we have not yet optimized our code with SSE instructions). The test data has a somewhat higher image resolution, which is why it is a bit slower (DSO has the same speed on this data). You can validate that on the tummono dataset; the instructions are given in the repo.

DSO also does not run in real time with the standard settings and uses special flags to enforce real time. Those flags are similar (but not identical) to our fast.yaml presets file.
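For example, running with those presets would look roughly like this (adjust the path to wherever you put the sequence; this mirrors the standart.yaml invocation used elsewhere in this thread):

./build/src/application/dsopp_main --config_file_path ./test/test_data/tummono/sequence_14/fast.yaml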

Mirmix commented 2 years ago

Thanks for your clarifying comments.

Regarding the time performance:

I tried to run TUMMONO seq14 through your code (I chose this one since it is a relatively small and simple dataset) with the fast.yaml settings, and it still seems slow. In the original DSO paper, Section 3.1, Figure 18 shows that in their "fast" setting they achieve 28 keyframes per second. I would like to remind you that not all frames are keyframes (more aggressive motions and large scene changes increase the keyframe rate, whereas small changes mean fewer new keyframes), which means the actual frame rate is higher than that. Since your work is a reimplementation of existing work, it is important to quantitatively benchmark the runtime with the same default settings, at least on the TUMMONO benchmark, to show that your work does not degrade the performance of the existing work in the first place. Please correct me if I am missing something.

Other minor things regarding the GUI: the GNSS and Correspondences buttons were not reacting, for example against TUMMONO seq14. I understand GNSS is not supposed to work for an indoor dataset, and perhaps not for the TUMMONO dataset at all since there is no valid GNSS data for it; in that case, I think it would be better to hide those buttons from the start. Similarly for Correspondences: when I pressed the buttons under Correspondences ("Candidates/Failed/Relocalized/Skipped") they did not react. In brief, it would be better to show only the buttons relevant to each dataset, to reduce user confusion.

It would also be great to understand what each button is supposed to do. I know most of them are guessable, but documenting them so there is no need to guess would be more user-friendly. For example, what does the "Immature" button under the "Landmarks view" trigger? Or what does the "Rel baseline" button under the "Landmarks view" trigger for monocular data?

And one minor thing: sometimes, when I played with the buttons too much, I got the following error, and to be able to run the program again I needed to exit docker and start it again:

Framebuffer with requested attributes not available. Using available framebuffer. You may see visual artifacts.
X Error of failed request:  BadValue (integer parameter out of range for operation)
  Major opcode of failed request:  130 (MIT-SHM)
  Minor opcode of failed request:  3 (X_ShmPutImage)
  Value in failed request:  0x780
  Serial number of failed request:  53
  Current serial number in output stream:  54
jhidalgocarrio commented 2 years ago

Hi @Solonets,

I see now that we have live visualization of the camera poses and the map. It is very helpful, thanks. I share the same comments as @Mirmix: I don't see the GUI buttons reacting, and I also get the same error when the GUI freezes.

$ ./build/src/application/dsopp_main --config_file_path ./test/test_data/tummono/sequence_10/standart.yaml
Framebuffer with requested attributes not available. Using available frame buffer. You may see visual artifacts

Another comment is speed: DSOPP performs at around 10 FPS on TUMMONO sequence 10, which is slower than real time.

Besides these comments, I opened an issue regarding documentation. I think this work's main advantage is demonstrating to the VO community how a modern photometric image alignment is implemented, which I believe is an important aspect. The open issue requests documentation for three parts: the Energy Function, Selected Points, and BA.

Solonets commented 2 years ago

Hi @Mirmix @jhidalgocarrio, thank you for your review. Right now we do not make any performance claims. There is a way to speed it up, and we will work in that direction later. Please make your decision based on the current version. Still, you can compile dsopp in single precision (as in dso) using the CMake flag -DUSE_FLOAT=ON. Depending on the PC, this gives a performance boost of 20 to 50%.
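Concretely, with the build commands from the README plus that flag, this would be:

mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release -DUSE_FLOAT=ON
make -j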

@jhidalgocarrio I will add info about the tracker and backend to the repo.

hugoledoux commented 2 years ago

@Solonets I have read all the comments and re-read the updated version of the paper. I like your submission a lot, but I have a few doubts/questions:

  1. Authorship. If I look at the commits, only 4 of the 6 authors have committed code. JOSS is primarily a software journal, and we do require authors to have contributed to the software. You can see our policy here: https://joss.readthedocs.io/en/latest/submitting.html#authorship; note that not only commits are considered. Could both of you please elaborate a bit on this issue?

  2. Contributions: I am puzzled as to why the Python bindings that you provide are not listed as a contribution of the submission. This is an advantage over the original library. In my opinion, this ought to be emphasised more, and described in the paper.

  3. Performance. I am a bit confused by your comment above, "There is a way to speed it up, and we will work in this direction later.". Do you mean later as in the coming months, or are you working on this now? As @Mirmix pointed out, since you reimplement DSO and since this is a SLAM engine, I would expect some account of performance. It does not need to be faster, but being slower would be pretty bad, wouldn't it? You claim that it's as fast, but I think this should be reported and benchmarked in the readme/paper.

Solonets commented 2 years ago

Hi @hugoledoux. Thank you for your feedback. I will answer your questions one by one.

  1. We developed the code earlier in a private repository. I later removed some modules and submitted the code as one commit to the new repository. All the people listed except for one (the CEO) contributed heavily to the code. After reading the authorship policy, I agree we have to remove him from the authors.

  2. There are no really extensive bindings in the code. There is a python library to read the resulting pose graph and a few scattered bindings to some functionality, but that's it.

  3. The dso code is extremely optimized with SSE instructions. A lot of work remains to speed up each module with them. I am currently looking for students at TUM to take over this work. Still, on my computer, with similar settings, the speed is very close to the original dso.

hugoledoux commented 1 year ago

@Solonets I realise that we left this discussion a bit unresolved. Are you currently working on at least documenting the performance, or on improving it?

If you need more time, I can also put the submission on hold until you fix this.

kthyng commented 1 year ago

@Solonets, @hugoledoux It's probably about time to choose a target date for continuing work on this submission, or otherwise to put it on pause or withdraw it for now.

hugoledoux commented 1 year ago

I agree; I had set a 1-month reminder on this, and here we are.

I suggest we pause it, and when/if @Solonets reacts and fixes the issues we can continue.

arfon commented 1 year ago

I suggest we pause it, and when/if @Solonets reacts and fixes the issues we can continue.

@hugoledoux – it's now more than 5 months since we heard back from the author. I would suggest rejecting at this point, as it seems the author has abandoned this submission.

hugoledoux commented 1 year ago

@solonets We haven't heard from you in several months, and you never answered regarding the timeframe for fixing what was asked, so I am afraid I'll reject the paper at this point.

hugoledoux commented 1 year ago

@editorialbot reject

editorialbot commented 1 year ago

I'm sorry @hugoledoux, I'm afraid I can't do that. That's something only eics are allowed to do.

hugoledoux commented 1 year ago

@arfon my powers are limited it seems, but yes let's reject this paper

kthyng commented 1 year ago

@editorialbot reject

editorialbot commented 1 year ago

Paper rejected.