Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
✅ OK DOIs
- 10.5281/zenodo.7233714 is OK
- 10.1016/j.celrep.2021.109730 is OK
- 10.48550/arXiv.1906.08172 is OK
- 10.1038/s41593-018-0209-y is OK
- 10.1007/3-540-44480-7_21 is OK
- 10.1038/s41592-019-0686-2 is OK
🟡 SKIP DOIs
- No DOI given, and none found for title: The OpenCV Library
- No DOI given, and none found for title: Indie Vicon System Cost and Solo Operator Guide
- No DOI given, and none found for title: Large-Scale Bundle Adjustment in SciPy
- No DOI given, and none found for title: OpenMMLab Pose Estimation Toolbox and Benchmark
❌ MISSING DOIs
- 10.3389/fspor.2020.00050 may be a valid DOI for title: Evaluation of 3D Markerless Motion Capture Accurac...
❌ INVALID DOIs
- None
Software report:
github.com/AlDanial/cloc v 1.90 T=2.88 s (98.9 files/s, 518906.4 lines/s)
-------------------------------------------------------------------------------
Language files blank comment code
-------------------------------------------------------------------------------
CSV 19 0 0 1260835
XML 132 0 24029 195128
Python 91 2762 1891 8776
TOML 14 113 17 694
Markdown 20 260 0 585
SVG 4 0 0 292
YAML 4 25 12 135
TeX 1 9 0 123
-------------------------------------------------------------------------------
SUM: 285 3169 25949 1466568
-------------------------------------------------------------------------------
Commit count by author:
2806 mprib
139 Mac Prible
3 dependabot[bot]
2 Ryan Govostes
1 Jaeseung Han
1 Rohan
Paper file info:
📄 Wordcount for paper.md is 731
✅ The paper includes a Statement of need section
License info:
✅ License found: BSD 2-Clause "Simplified" License
(Valid open source OSI approved license)
👉 📄 Download article proof 📄 View article proof on GitHub 📄 👈
Hi @mprib and reviewers @Timozen and @davidpagnon - this is the review thread for the submission. All of our communications will happen here from now on.
Meanwhile, please check the post at the top of the issue for instructions on how to generate your own review checklist. As you go over the submission, please check any items that you feel have been satisfied. There are also links to the JOSS reviewer guidelines.
The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues directly in the software repository. If you do so, please mention this thread so that a link is created (and I can keep an eye on what is happening). Please also feel free to comment and ask questions in this thread. It is often easier to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.
Please feel free to ping me (@mooniean) if you have any questions or concerns. Thanks!
Thank you @davidpagnon and @Timozen for serving as reviewers on this project. I am grateful for your time and look forward to your feedback.
Thank you for making this tool available! I heard good things about it a few months ago but did not find time to test it, so this review is the perfect occasion.
The installation procedure and the execution are quite user-friendly, and the output is the same as AniPose or Pose2Sim, so it can be reused as is (or almost: I'll implement very minimal changes to my code so that people can just copy-paste your Caliscope Config.toml file into their Pose2Sim calibration folder).
Several aspects make it a pain-free first experience:
As you point out in the paper, Caliscope's main goal is to make multi-camera calibration easy and straightforward for non-experts. However, you also include a tool to perform pose estimation, triangulate it, and rig the output. I feel like calibration is a huge problem, and having a tool dedicated solely to that purpose could be quite convenient for countless people. Calibration is also needed for the study of animals, objects, or even architecture, and most people would not care about human pose. And for those who do care, MediaPipe/BlazePose is getting outdated, so they may want to use something more recent such as RTMlib. Moreover, that last bit of the package seems to be a bit less polished. What would you think of decoupling these two parts and making them two separate tools?
On the other hand, you coded quite a convenient toolbox to record from multiple webcams and synchronize them. This could be mentioned in the paper, especially since it automatically generates the required "frame_time_history.csv" file. I do realize this may sound contradictory to my previous point, but I feel like it would be even better to include it directly in Caliscope. It would not make the toolbox heavier since the requirements are the same, and people would be able to record and calibrate a scene from one single window.
Off-topic: I believe the toolbox used to be called "pyxy3d". I like the new name, which looks much less like a bunch of letters collapsed together and sounds nice and easy to remember.
@davidpagnon,
Thank you for your detailed feedback on caliscope! I recognize the time it takes to dive into a new project and catalogue this information. I'm incredibly grateful for your work on this.
And I'm grateful that this has provided an opportunity to get connected to the author of Pose2Sim! I'm passionate about the potential of emerging tools to improve the study of movement and hope that this might be the beginning of a dialogue rather than a one-off interaction.
I wanted to reply briefly to note that I am targeting a more extensive follow-up with point-by-point replies by the end of this upcoming week.
Best,
Mac
@davidpagnon ,
Thank you again for your comments. My replies are woven in below. For suggested improvements that I have not yet been able to resolve, I have opened up issues on the repo and provide links to those here.
Regards,
Mac
Thank you for making this tool available! I heard good things about it a few months ago but did not find time to test it, so this review is the perfect occasion.
Several aspects make it a pain-free first experience:
A GUI that launches without having to type any Python command
Clear tutorial and guidelines, with videos at every stage but the last (pose estimation)
Demonstration data which constitutes a clean model to follow
Paper:
It is clear, short, and concise.
I absolutely don't want to push it as there is obviously a conflict of interest, but you could cite the Pose2Sim calibration in your literature. It does not use bundle adjustment and is not as convenient as your abstract class, but it also provides 5 distortion coefficients and allows the user to calibrate with elements of the scene if they forgot to take a calibration board with them.
Thank you for the suggestion and I absolutely agree that it's worthwhile to discuss the two projects. I have included this new paragraph in the paper to highlight similarities and distinctions:
Pose2Sim (Pagnon et al., 2022) is another tool that merges both camera calibration and markerless pose estimation. Similar to Caliscope, Pose2Sim employs a camera model composed of 4 pinhole camera parameters and 5 distortion parameters. Additionally, both projects export output to the .trc file format to facilitate integration with the biomechanical modelling software OpenSim. While Pose2Sim does not perform bundle adjustment to refine the extrinsic camera estimates, it does provide a number of features that are valuable components of a motion tracking workflow which are not present in Caliscope and would be useful in future motion tracking tools. These include the ability to calibrate camera extrinsics based on scene elements, the capacity to distinguish between multiple subjects in view at once, and more sophisticated triangulation methods that can incorporate the confidence of a model's prediction of a pose landmark.
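To make the shared camera model concrete, here is a minimal, hedged sketch of projection with 4 pinhole parameters and 5 distortion parameters (the Brown-Conrady model used by OpenCV). The function and values are illustrative only, not code from either project:

```python
import numpy as np

# Hypothetical sketch of the 4 + 5 parameter camera model discussed above:
# intrinsics (fx, fy, cx, cy) and distortion (k1, k2, p1, p2, k3).
def project_point(X, fx, fy, cx, cy, k1, k2, p1, p2, k3):
    """Project a 3D camera-frame point to distorted pixel coordinates."""
    x, y = X[0] / X[2], X[1] / X[2]          # normalized image coordinates
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return np.array([fx * x_d + cx, fy * y_d + cy])

# With all five distortion terms at zero, this reduces to the plain pinhole model.
uv = project_point(np.array([0.1, 0.2, 1.0]), 800, 800, 640, 360, 0, 0, 0, 0, 0)
```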
More information could be given on the pose estimation part, especially on the holistic opensim optimization, the rig generation, and how to use it with Rigmarole (unless you want to keep it as a bonus feature outside of the scope of Caliscope).
I have added text to the Post Processing section of the docs that clarifies the Holistic OpenSim tracker.
Essentially, it is a filtered subset of the MediaPipe Holistic output. In test usage, files of the raw Holistic output became unwieldy due to containing several hundred points for the face and dozens for the hands; but then using the smaller output of MediaPipe Pose became problematic because it converts to a much lower resolution for point detection. Holistic OpenSim was my attempt to make a smaller file with better detection that could track the wrist and skull better.
Regarding the rig generation/Rigmarole: the "bonus features" characterization is spot on. My intention is to keep Caliscope largely agnostic to these downstream tasks, to primarily try to do "one thing well" (camera calibration), and to provide the foundation of a framework for integrating future pose estimation tools into the pipeline.
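The "filtered subset" idea might be sketched like this; the landmark names are invented for illustration and are not Caliscope's actual schema:

```python
# Hypothetical illustration of filtering a large holistic output down to a
# small set of body landmarks, dropping the hundreds of face points.
KEEP = {"nose", "left_wrist", "right_wrist"}

raw_frame = {
    "nose": (0.51, 0.22),
    "left_wrist": (0.40, 0.75),
    "right_wrist": (0.62, 0.74),
    "face_001": (0.50, 0.20),   # ...hundreds more face points in practice
    "face_002": (0.52, 0.20),
}

# Keep only the landmarks of interest: a far smaller file, same key points.
slim_frame = {name: xy for name, xy in raw_frame.items() if name in KEEP}
```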
Doc:
The text on the GitHub.io website is not the same as in the Readme.md file. As I am not sure which is the most up to date, I suppose it could be confusing to other users as well. Maybe you could have a minimal Readme.md file that redirects to the website?
I have added the following to the ReadMe for clarity:
This README provides a general overview and quick guide to install Caliscope on your system. For a more detailed description of the calibration process and workflow, please see our docs.
I installed it within a conda environment instead of venv, but it still went smoothly.
You specify that the user either needs to have their cameras already synchronized and with the same number of frames, or to generate a frame_time_history.csv file. However, you do not link to your MultiWebCam package (actually you do in the README.md, but not in the documentation). It would be very convenient to give this hint to the user.
Thank you for the feedback. I have added a Video Capture tab to the workflow section of the docs that directs to MultiWebCam.
Although there can be some rotational ambiguity, most people still use a checkerboard for calibration. Nothing important, but it would be nice to include this option.
Thank you. I would need to better understand how to resolve the indeterminate rotations to know how this might be implemented, but it does seem like an interesting future addition.
The code testing seems to cover almost all functions and classes, which is great. However, the workflow could use some linting (none is done for now), testing on more OSes (only Linux for now), and more Python versions (only 3.11 for now).
Using your CI action linked below as a template, I've updated the testing to Python 3.10 and 3.11 across macOS/Linux/Windows. And I finally got around to making sure the code lints with ruff.
Obviously, according to your instructions, you've done some manual testing on Windows/MacOS/Linux, but it could be made automatic upon each push to the repository. See for example this package, which tests on all OS, with python versions 3.8 to 3.11.
See comment above. Thanks for the useful resource.
It's also a bit strange that the installation procedure asks for Python 3.10 while the tests run on 3.11. Out of curiosity, the pyproject.toml requires python >=3.10, <3.12: what prevents it from running on other versions? Lastly, you make the user install Caliscope via pip, but the tests use poetry instead.
For simplicity I now have the tests installing with pip. I have expanded the tests to include 3.10 and 3.11.
There was a patch release of Python 3.9 that I found was associated with random segfaults in PyQt, so that one got taken off the list.
There are currently issues with the combination of Python 3.12 + MediaPipe + macOS, so 3.12 is not included. This should not affect the core camera calibration workflow, so future developments that do not involve MediaPipe (or macOS) should not experience this roadblock.
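For reference, a minimal sketch of what such a pip-based test matrix might look like as a GitHub Actions workflow. The job names, install command, and action versions here are assumptions for illustration, not Caliscope's actual CI file:

```yaml
# Hypothetical CI matrix sketch -- not Caliscope's actual workflow.
name: test
on: [push, pull_request]
jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
        python-version: ["3.10", "3.11"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install .          # install with pip, as described above
      - run: pytest
```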
Really not needed, but I take advantage of this point to make a suggestion: you could add another workflow to automatically publish your package to PyPI upon each new release (see [this example workflow](https://github.com/davidpagnon/Sports2D/blob/main/.github/workflows/publish-on-release.yml)).
But typing `poetry publish` really does give me so much joy!
GUI:
It's a matter of personal taste, but if the terminal window is not wide enough, the ASCII art collapses and it almost looks like a bug. I would suggest removing it.
Removed
To make everything clearer, what would you think of adding a description for each button upon hovering over them with the mouse? This is a small adjustment but it could make the user experience slightly easier.
Issue created to add tooltips: https://github.com/mprib/caliscope/issues/636
The input boxes in which I enter integers are too small. It would be nice to widen these boxes.
Issue created to widen text boxes: https://github.com/mprib/caliscope/issues/641
In the issue above, I include some screenshots of how it is intended to appear (and how it shows on my system). I have recently removed a Qt theme for simplicity's sake, which may also impact the character legibility.
I do not quite get how the Charuco board generation works. Once I have set the number of rows and columns and the square size, why are there no margins on both sides of the image when I increase both its width and height?
I'm wondering if images are getting clipped in the GUI? The issue linked above (641) includes some screenshots of how the board should display. My hope is that, at the very least, the properly rendered board is what is saved out as a PNG.
It could be useful to add a button to stop camera calibration, volume calibration, and post-processing once they have started. Or just to make these buttons clickable a second time to abort. For example, this can be useful if we process long videos and realize we chose wrong parameters.
Agreed, this would be an improvement, and I have added it as an issue, though this is one of those long-term issues that may take some time to reflect on how best to implement: https://github.com/mprib/caliscope/issues/637
Just for context, I have found that killing and re-launching these threads can easily result in pretty bad memory leaks. Realistically, it may take some time before I could implement that improvement in a stable way, though I agree it would be a good future addition.
What is the use of "Add grid" and "Calibrate" in the Camera tab? It seems like "Autocalibrate" is the only useful button.
The first two are the original interface and Autocalibrate was added later. I think most of the time autocalibrate will work fine, but I see the value in having the ability to select specific frames as wanted.
"Zoom" only takes effect when "Apply distortion" is ticked.
This is as intended. It's possible for a bad distortion model to appear reasonable because it also appears zoomed in when "undistorted". When zoomed out, the bizarre distortion around the image edges becomes apparent.
It would make more sense to move the "Calibrate capture volume" to the "Capture volume" tab, in the same way that "Camera calibration" is in the "Cameras" tab.
I agree that this suggestion would improve the flow. I have created an issue to incorporate this in a future version: https://github.com/mprib/caliscope/issues/644
It is hard to know along which axis to adjust the coordinate system, since there is no clear way of knowing which color corresponds to which axis. A simple way would be to comply with the xyz -> RGB standard.
I definitely acknowledge that this step requires experimentation to orient to the controls. The colors are the defaults offered by PyQtGraph's AxisItem object, and I do not see a way in their documentation to modify the color selection.
PyQtGraph is amazing, though. Strongly recommend.
X+, X-, etc. do not invert the axis, but rotate the 3D points around axes by 90°. I would simply change the buttons to X+90°, X-90°, etc.
I plan to include the X+90°, etc., in the tooltips for clarity.
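For readers unfamiliar with the distinction, here is a small illustrative sketch (not Caliscope code) of a 90° rotation about the x-axis, which moves points between axes rather than mirroring them:

```python
import numpy as np

# A rotation about the x-axis, as the X+/X- buttons perform, is not an
# axis inversion: it carries points from one axis onto another.
def rot_x(degrees):
    """Rotation matrix about the x-axis."""
    t = np.radians(degrees)
    return np.array([[1, 0, 0],
                     [0, np.cos(t), -np.sin(t)],
                     [0, np.sin(t),  np.cos(t)]])

point = np.array([0.0, 1.0, 0.0])   # a point on the y-axis
rotated = rot_x(90) @ point         # lands on the z-axis, not mirrored
```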
What is holistic OpenSim? OpenSim is not installed, so how do you perform scaling, inverse kinematics, and rigging? Is it a custom algorithm? I would like to know a bit more about it.
The Holistic OpenSim tracker is discussed above. It is essentially a subset of the MediaPipe Holistic output that could drive IK on a full skeletal model in OpenSim, but without the large number of points for the hands and face.
The scaling just takes the mean distance between select landmarks across a dynamic calibration trial. Those distances are then used to scale a Blender rig that is animated via IK from the subsequent motion trials.
The rigging component is in a separate package, Rigmarole, which is based on Blender. Those are definitely side quests and not the primary goal of the package.
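The scaling idea above can be sketched roughly as follows; the array shapes and landmark choices are illustrative assumptions, not Caliscope's actual API:

```python
import numpy as np

# Hedged sketch: average the distance between two tracked landmarks across
# the frames of a dynamic calibration trial and use that mean as a rig
# segment length.
def mean_segment_length(landmark_a, landmark_b):
    """landmark_a, landmark_b: (n_frames, 3) triangulated positions."""
    per_frame = np.linalg.norm(landmark_a - landmark_b, axis=1)
    return float(np.nanmean(per_frame))  # nanmean tolerates missing frames

hip = np.array([[0.0, 0.9, 0.0], [0.1, 0.9, 0.0]])
knee = np.array([[0.0, 0.5, 0.0], [0.1, 0.5, 0.0]])
thigh_length = mean_segment_length(hip, knee)
```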
Side remarks (which would take more time to implement)
As you point out in the paper, Caliscope's main goal is to make multi-camera calibration easy and straightforward for non-experts. However, you also include a tool to perform pose estimation, triangulate it, and rig the output. I feel like calibration is a huge problem, and having a tool dedicated solely to that purpose could be quite convenient for countless people. Calibration is also needed for the study of animals, objects, or even architecture, and most people would not care about human pose. And for those who do care, MediaPipe/BlazePose is getting outdated, so they may want to use something more recent such as RTMlib. Moreover, that last bit of the package seems to be a bit less polished. What would you think of decoupling these two parts and making them two separate tools?
I definitely view the "Post Processing" tab as a proof of concept to demonstrate the quality of the calibration and what could be done with it. I am highly drawn to the idea of "do one thing" and see the value of just having the camera system calibration spit out a config which can be used by other tools.
Landmark tracking, triangulation, filtering, and inverse kinematics are all rich domains in themselves. It would be great to have loosely coupled tools that could pass the ball downfield to other packages via standardized, human-readable file formats.
On the other hand, you coded quite a convenient toolbox to record from multiple webcams and synchronize them. This could be mentioned in the paper, especially since it automatically generates the required "frame_time_history.csv" file. I do realize this may sound contradictory to my previous point, but I feel like it would be even better to include it directly in Caliscope. It would not make the toolbox heavier since the requirements are the same, and people would be able to record and calibrate a scene from one single window.
With video recorded from a synchronized camera system (like FLIR), frame_time_history.csv is not required. When that file is not present, frames will just be bundled up in lock-step.
Regarding MultiWebCam, I have become hesitant to promote the use of webcams strongly. Working with them across multiple operating systems is a bit of a nightmare, and my expectation is that bundling MultiWebCam with Caliscope would create issues. MultiWebCam did allow me to test a lot of things out cheaply while iterating in my home office, but the resolution/frame rate/motion blur will limit many use cases.
That being said, I also know how much the Vicon systems in our lab cost, and my gut tells me that given how good and inexpensive modern phone cameras are, there has to be a better way than the major mocap vendors.
Here I begin a bit of a ramble about the project broadly:
I'm excited to pursue alternate hardware approaches in the future, such as FLIR cameras or more DIY approaches such as Raspberry Pis with MIPI cameras. I strongly suspect we are approaching a time when inexpensive off-the-shelf components can capture high-quality synchronized footage appropriate for research and clinical applications.
I see this project fundamentally as managing the conversion of raw videos to a human-readable camera calibration config.toml.
I want it to be as agnostic as possible to the adjacent workflows so that it can be easily adapted to integrate with these tools as they emerge and develop.
I'm glad that this review has given an opportunity to connect with someone who is working on these problems as well and hope that we might continue a conversation about them.
I see a future where there is a broad ecosystem of open source tools that can be integrated into a comprehensive motion capture/biomechanical workflow.
My hope with Caliscope is to contribute one small link in the chain.
Off-topic: I believe the toolbox used to be called "pyxy3d". I like the new name, which looks much less like a bunch of letters collapsed together and sounds nice and easy to remember.
Thank you! Naming things is agonizingly difficult...
Thank you for this extensive response! I'm satisfied with the answers I do not address below.
Essentially, it is a filtered subset of the MediaPipe Holistic output. In test usage, files of the raw Holistic output became unwieldy due to containing several hundred points for the face and dozens for the hands; but then using the smaller output of MediaPipe Pose became problematic because it converts to a much lower resolution for point detection. Holistic OpenSim was my attempt to make a smaller file with better detection that could track the wrist and skull better.
So I'm not sure I totally get it. I understand that you "scale a blender rig that is animated via IK", but then you do not use OpenSim scaling and IK, do you?
Thank you for the feedback. I have added a Video Capture tab to the workflow section of the docs that directs to MultiWebCam.
I feel like it would be worth linking to MultiWebCam at each mention of frame_time_history.csv, or at least there, since when I first had a look at the overview section, I thought you had to create your own script or create the file manually, which can be scary for new users.
But typing poetry publish really does give me so much joy!
Okay, thank you, I don't know much about poetry and did not realize there was such a nice way to publish a package. And more importantly, I would not want to deprive you of such a joy!
In the issue above I include some screenshots of how it is intended to appear (and shows on my system). I have recently removed a qt theme for simplicity's sake which may also impact the character legibility.
I answered on this issue: https://github.com/mprib/caliscope/issues/641
Just for context, I have found that killing and re-launching these threads can easily result in pretty bad memory leaks. Realistically, it may take some time before I could implement that improvement in a stable way, though I agree it would be a good future addition.
Okay, good to know!
What is the use of "Add grid" and "Calibrate" in the Camera tab? It seems like "Autocalibrate" is the only useful button.
The first two are the original interface and Autocalibrate was added later. I think most of the time autocalibrate will work fine, but I see the value in having the ability to select specific frames as wanted.
So the use of "Add grid" and "Calibrate" is for selecting specific frames? I do not see anything about it in the doc; it may be something to add?
Working with them across multiple operating systems is a bit of a nightmare, and my expectation is that bundling MultiWebCam with Caliscope would create issues.
Okay, it makes sense!
What are FLIR cameras? I thought they were mostly for infrared video capture, are you telling me they are also good for RGB capture, synchronization, and that they are rather cheap? In any case, I quite like the idea of using Raspberry Pis with MIPI cameras!
I see a future where there is a broad ecosystem of open source tools that can be integrated into a comprehensive motion capture/biomechanical workflow.
Well, I would not be against the idea of a community creating the Blender of Mocap 🙂 It would be quite a long-term project, but it does not seem totally out of reach indeed!
@davidpagnon,
I've cropped the conversation for brevity and hope that this addresses any unresolved points. If there are any outstanding questions remaining, please just let me know!
Best,
Mac
Essentially, it is a filtered subset of the MediaPipe Holistic output. In test usage, files of the raw Holistic output became unwieldy due to containing several hundred points for the face and dozens for the hands; but then using the smaller output of MediaPipe Pose became problematic because it converts to a much lower resolution for point detection. Holistic OpenSim was my attempt to make a smaller file with better detection that could track the wrist and skull better.
So I'm not sure I totally get it. I understand that you "scale a blender rig that is animated via IK", but then you do not use OpenSim scaling and IK, do you?
That is correct. I had originally created that pared-down version with the intention of some ad hoc experimentation in OpenSim and ended up repurposing it for use in Blender, where I built some automated tools in a different package.
Given the opportunity for confusion, I have renamed it to Simple Holistic.
Thank you for the feedback. I have added a Video Capture tab to the workflow section of the docs that directs to MultiWebCam.
I feel like it would be worth linking to MultiWebCam at each mention of frame_time_history.csv, or at least there, since when I first had a look at the overview section, I thought you had to create your own script or create the file manually, which can be scary for new users.
Excellent suggestion. Link to sample data and MultiWebCam added there.
What is the use of "Add grid" and "Calibrate" in the Camera tab? It seems like "Autocalibrate" is the only useful button.
The first two are the original interface and Autocalibrate was added later. I think most of the time autocalibrate will work fine, but I see the value in having the ability to select specific frames as wanted.
So the use of "Add grid" and "Calibrate" is for selecting specific frames? I do not see anything about it in the doc; it may be something to add?
Thank you. Added instructions associated with this to the Intrinsic Calibration docs.
What are FLIR cameras? I thought they were mostly for infrared video capture, are you telling me they are also good for RGB capture, synchronization, and that they are rather cheap? In any case, I quite like the idea of using Raspberry Pis with MIPI cameras!
I was sloppy with my language. Teledyne FLIR provides cameras that can record synchronized RGB video (among other types) using a shared hardware trigger.
Aside: a potential advantage that I see for a Raspberry Pi-based camera system is that each "camera server" could provide both image capture and potentially some local landmark estimation. Processing times for a multi-camera system can become unwieldy when all frames are processed on a single machine, but if distributed across multiple devices, it could be far more manageable.
I have evaluated the software on MacOS 15 using a MacBook with an M1 processor.
Following the installation procedure in https://mprib.github.io/caliscope/installation/#__tabbed_2_2, the program did not start.
After checking the main README.md of the repo, I saw that for macOS, some env variables need to be set. This resolved the issue ^^
As @davidpagnon already mentioned, the README.md and documentation do not seem to be in complete sync; this should be resolved.
I will first say that this is the first time I have worked with multi-view pose estimation software. However, I have my fair share of camera calibration experience.
I followed the Example Project page. Please note that I found this 404 link while checking the Overview page: https://mprib.github.io/caliscope/sample_data.md
The tutorial YouTube video of the project https://youtu.be/voE3IKYtuIQ should be, IMHO, more prominent for users so that inexperienced people (like me, in this case, ^^) can follow along.
This was extremely helpful and might make the overall experience smooth!!!
The intrinsic calibration ran quickly on my machine, and the log helped me understand what the tool was trying to do in the background.
The extrinsic calibration ran at the same speed for me.
After reloading the workflow, the window size (which I had adjusted) reset to the original, which is a minor thing, but it confused me for a second ^^
Initially, I was a bit confused about the repetitive reloading of the workspace; however, I quickly realized that whenever a menu option was unavailable, such as "Post Processing," the underlying reason was that something was not yet in the project folder.
So, I quickly learned to utilize the "reload workspace" button whenever something was added to the project or one processing step finished.
After that, I was able to reproduce the test project results.
When I attempted to extract the "FACE" option, the whole tool crashed without any notice of the underlying cause.
I could reliably repeat this crash, and I checked the log files, but they contained no actual clue.
INFO| caliscope.recording.video_recorder| 101| End of sync packets signaled...breaking record loop
INFO| caliscope.recording.video_recorder| 149| releasing video writers...
INFO| caliscope.recording.video_recorder| 151| releasing video writer for port 1
INFO| caliscope.recording.video_recorder| 151| releasing video writer for port 2
INFO| caliscope.recording.video_recorder| 151| releasing video writer for port 3
INFO| caliscope.recording.video_recorder| 156| Initiate storing of frame history
INFO| caliscope.recording.video_recorder| 176| Storing frame history to /test/recordings/sitting/FACE/frame_time_history.csv
INFO| caliscope.recording.video_recorder| 159| Initiate storing of point history
INFO| caliscope.recording.video_recorder| 170| Storing point data in /test/recordings/sitting/FACE/xy_FACE.csv
INFO| caliscope.recording.video_recorder| 164| About to emit `all frames saved` signal
INFO| caliscope.post_processing.post_processor| 73| (Stage 1 of 2): 99% of frames processed for (x,y) landmark detection
INFO| caliscope.post_processing.post_processor| 96| Reading in (x,y) data..
INFO| caliscope.post_processing.post_processor| 99| Filling small gaps in (x,y) data
INFO| caliscope.post_processing.gap_filling| 42| Gap filling for (x,y) data from port 1. Filling gaps that are 3 frames or less...
INFO| caliscope.post_processing.gap_filling| 42| Gap filling for (x,y) data from port 2. Filling gaps that are 3 frames or less...
INFO| caliscope.post_processing.gap_filling| 42| Gap filling for (x,y) data from port 3. Filling gaps that are 3 frames or less...
INFO| caliscope.post_processing.gap_filling| 71| (x,y) gap filling complete
INFO| caliscope.post_processing.post_processor| 101| Beginning data triangulation
INFO| caliscope.cameras.camera_array| 209| Creating camera array projection matrices
INFO| caliscope.triangulate.triangulation| 193| Processing points from camera 1
INFO| caliscope.triangulate.triangulation| 193| Processing points from camera 2
INFO| caliscope.triangulate.triangulation| 193| Processing points from camera 3
INFO| caliscope.triangulate.triangulation| 201| Assembling undistorted dataframe
INFO| caliscope.triangulate.triangulation| 120| About to begin triangulation...due to jit, first round of calculations may take a moment.
[1] 92789 bus error caliscope
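For context on the gap-filling stage in the log above ("Filling gaps that are 3 frames or less"), here is a small pandas sketch of the general technique; it is a hypothetical illustration, not Caliscope's actual implementation, and the data and names are invented:

```python
import numpy as np
import pandas as pd

def fill_small_gaps(s: pd.Series, max_gap: int) -> pd.Series:
    """Linearly interpolate only interior NaN runs of length <= max_gap."""
    is_nan = s.isna()
    # Label each contiguous run of NaN/non-NaN values, then measure run lengths.
    run_id = (is_nan != is_nan.shift()).cumsum()
    run_len = is_nan.groupby(run_id).transform("sum")
    fillable = is_nan & (run_len <= max_gap)
    interpolated = s.interpolate(method="linear", limit_area="inside")
    # Fill only the runs short enough to qualify; leave larger gaps as NaN.
    return s.where(~fillable, interpolated)

# Hypothetical x-coordinate track: a 2-frame dropout (fillable)
# and a 4-frame dropout (left untouched when max_gap=3).
x = pd.Series([100.0, 101.0, np.nan, np.nan, 104.0, 105.0,
               np.nan, np.nan, np.nan, np.nan, 110.0])
filled = fill_small_gaps(x, max_gap=3)
```

With `max_gap=3`, the two-frame gap is interpolated (102.0, 103.0) while the four-frame gap remains NaN, matching the "3 frames or less" behavior the log describes.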
However, the tool still wrote the results into the appropriate folder, and reloading the project also worked. This is not a significant issue, but perhaps you can find a way to capture such crashes and write them to the log. Please also note that the previous log is gone after restarting the tool, so I could not investigate the first crash. IMHO, this is not a blocker for the paper's publication, but it is something you should be aware of.
I like the style of the paper. It is written clearly, laying out the shortcomings and issues of the current state of the field and the value Caliscope adds.
As most of this information is also in the repository and documentation, no further explanation is needed from my side.
@mprib I still need to check the 'Installation instructions' checkbox, but it would be nice if you could update the online documentation to cover the macOS problems and fixes. After that, I will be okay with everything.
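For readers unfamiliar with the triangulation stage shown in the log, triangulating a 3D point from multiple camera projection matrices is commonly done via the direct linear transform (DLT), solved as a homogeneous least-squares problem with SVD. The sketch below is a generic illustration of that technique with invented camera matrices, not Caliscope's actual code:

```python
import numpy as np

def triangulate_dlt(proj_mats, points_2d):
    """Triangulate one 3D point from undistorted 2D observations.

    proj_mats: list of 3x4 camera projection matrices
    points_2d: list of (x, y) observations, one per camera
    """
    rows = []
    for P, (x, y) in zip(proj_mats, points_2d):
        # Each observation contributes two linear constraints on the
        # homogeneous 3D point X: x * (P[2] @ X) = P[0] @ X, etc.
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.vstack(rows)
    # The solution is the right singular vector with the smallest
    # singular value (the approximate null space of A).
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Two hypothetical cameras: one at the origin, one shifted along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

point = np.array([0.5, 0.2, 4.0])  # ground-truth 3D point
obs = [P @ np.append(point, 1.0) for P in (P1, P2)]
obs = [(v[0] / v[2], v[1] / v[2]) for v in obs]  # perspective divide

recovered = triangulate_dlt([P1, P2], obs)
```

With noise-free observations, as here, the DLT recovers the ground-truth point essentially exactly; with real detections, the same linear system gives a least-squares estimate.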
@Timozen,
Thank you for providing this feedback! I am grateful to get your perspective on the project and particularly glad to have the experience of a MacOS user as part of the review.
My replies are embedded in context below:
Installation
I have evaluated the software on macOS 15 using a MacBook with an M1 processor.
Following the installation procedure in https://mprib.github.io/caliscope/installation/#__tabbed_2_2, the program did not start.
After checking the main readme.md of the repo, I saw that for macOS, some environment variables need to be set. This resolved the issue ^^
As @davidpagnon already mentioned, the readme.md and the documentation do not seem to be in complete sync; this should be resolved.
I very much appreciate the catch. I have updated the docs to direct users to set the environment variables as is done in the README. Apologies for the confusion.
Usage
I will first say that this is the first time I have worked with multi-view pose estimation software. However, I have my fair share of camera calibration experience.
I followed the Example Project page. Please note that I found this 404 link while checking the Overview page: https://mprib.github.io/caliscope/sample_data.md
Thank you. Updated now to the correct link: https://mprib.github.io/caliscope/sample_project/
The tutorial YouTube video for the project, https://youtu.be/voE3IKYtuIQ, should IMHO be more prominent, so that inexperienced users (like me, in this case ^^) can follow along.
This was extremely helpful and might make the overall experience smooth!!!
Thank you for this advice. I've linked and bolded the walkthrough in the intro of the README.
The intrinsic calibration ran quickly on my machine, and the log helped me understand what the tool was trying to do in the background.
The extrinsic calibration ran at the same speed for me.
After reloading the workspace, the window size (which I had adjusted) reset to the original, which is a minor thing, but it confused me for a second ^^
This is a hiccup related to PyQtGraph, an excellent library. The flicker out and back in is a quirk that happens when loading the 3D viewer, and I haven't found a way around it.
Initially, I was a bit confused by the repetitive reloading of the workspace; however, I quickly realized that whenever a menu option was unavailable, such as "Post Processing," the underlying reason was that something was not yet in the project folder.
So I quickly learned to use the "reload workspace" button whenever something was added to the project or a processing step finished.
After that, I was able to reproduce the test project results.
Thank you for the feedback about the first time through. Adding tooltips is tracked as an issue to resolve. I clearly recall the tooltips you added to JeFaPaTo, and they really were helpful. Glad to see where the points of confusion lie with Caliscope.
When I attempted to extract with the "FACE" option, the whole tool crashed without any notice of the underlying cause.
I could reproduce this crash several times, and I checked the log files, but they contained no actual clue.
INFO| caliscope.recording.video_recorder| 101| End of sync packets signaled...breaking record loop
INFO| caliscope.recording.video_recorder| 149| releasing video writers...
INFO| caliscope.recording.video_recorder| 151| releasing video writer for port 1
INFO| caliscope.recording.video_recorder| 151| releasing video writer for port 2
INFO| caliscope.recording.video_recorder| 151| releasing video writer for port 3
INFO| caliscope.recording.video_recorder| 156| Initiate storing of frame history
INFO| caliscope.recording.video_recorder| 176| Storing frame history to /test/recordings/sitting/FACE/frame_time_history.csv
INFO| caliscope.recording.video_recorder| 159| Initiate storing of point history
INFO| caliscope.recording.video_recorder| 170| Storing point data in /test/recordings/sitting/FACE/xy_FACE.csv
INFO| caliscope.recording.video_recorder| 164| About to emit `all frames saved` signal
INFO| caliscope.post_processing.post_processor| 73| (Stage 1 of 2): 99% of frames processed for (x,y) landmark detection
INFO| caliscope.post_processing.post_processor| 96| Reading in (x,y) data..
INFO| caliscope.post_processing.post_processor| 99| Filling small gaps in (x,y) data
INFO| caliscope.post_processing.gap_filling| 42| Gap filling for (x,y) data from port 1. Filling gaps that are 3 frames or less...
INFO| caliscope.post_processing.gap_filling| 42| Gap filling for (x,y) data from port 2. Filling gaps that are 3 frames or less...
INFO| caliscope.post_processing.gap_filling| 42| Gap filling for (x,y) data from port 3. Filling gaps that are 3 frames or less...
INFO| caliscope.post_processing.gap_filling| 71| (x,y) gap filling complete
INFO| caliscope.post_processing.post_processor| 101| Beginning data triangulation
INFO| caliscope.cameras.camera_array| 209| Creating camera array projection matrices
INFO| caliscope.triangulate.triangulation| 193| Processing points from camera 1
INFO| caliscope.triangulate.triangulation| 193| Processing points from camera 2
INFO| caliscope.triangulate.triangulation| 193| Processing points from camera 3
INFO| caliscope.triangulate.triangulation| 201| Assembling undistorted dataframe
INFO| caliscope.triangulate.triangulation| 120| About to begin triangulation...due to jit, first round of calculations may take a moment.
[1] 92789 bus error caliscope
However, the tool still wrote the results into the appropriate folder, and reloading the project also worked. This is not a significant issue, but perhaps you can find a way to capture such crashes and write them to the log. Please also note that the previous log is gone after restarting the tool, so I could not investigate the first crash. IMHO, this is not a blocker for the paper's publication, but it is something you should be aware of.
This is definitely something I want to address so I've created an issue here for the face tracker crashing: https://github.com/mprib/caliscope/issues/648
In the meantime I've removed it from the set of sample trackers to avoid confusion.
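One possible approach to the two logging issues raised above (crashes that bypass the log, and the previous session's log being lost on restart) is Python's built-in faulthandler module combined with a rotating file handler. The sketch below is only an illustration with hypothetical paths, not Caliscope's actual logging setup:

```python
import faulthandler
import logging
import tempfile
from logging.handlers import RotatingFileHandler
from pathlib import Path

# Hypothetical location; a real app would use a fixed config directory.
LOG_DIR = Path(tempfile.mkdtemp())

# Keep the crash-dump file open for the life of the process so that
# faulthandler can still write a traceback when a fatal signal
# (SIGSEGV, SIGBUS, ...) kills the interpreter before logging runs.
crash_file = open(LOG_DIR / "crash.log", "w")
faulthandler.enable(file=crash_file, all_threads=True)

# Rotate the normal log at startup so the previous session's log survives
# as caliscope.log.1, caliscope.log.2, ... instead of being overwritten.
handler = RotatingFileHandler(LOG_DIR / "caliscope.log", backupCount=3)
handler.doRollover()
logging.basicConfig(level=logging.INFO, handlers=[handler])
logging.info("session started")
```

With something like this in place, a bus error should leave a traceback in crash.log, and the prior run's log would remain available for inspection after a restart.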
GUI
* First of all, I know the effort that goes into creating such a sophisticated GUI; props to that!
* The concerns I would have raised have already been brought up by @davidpagnon and answered by @mprib.
* It would still be helpful if this information were in the tool docs with an explanation.
Paper
I like the style of the paper. It is written clearly, laying out the shortcomings and issues of the current state of the field and the value Caliscope adds.
As most of this information is also in the repository and documentation, no further explanation is needed from my side.
@mprib I still need to check the 'Installation instructions' checkbox, but it would be nice if you could update the online documentation to cover the macOS problems and fixes. After that, I will be okay with everything.
Absolutely! With the most recent merge, the MacOS fixes are now live on the docs.
JeFaPaTo was an awesome project to see because I knew what putting together a full-fledged Qt tool like that required. Your review means a lot to me. Thank you for the time you've invested in working through the pain points (and documenting them) so that I can make this a smoother experience for others.
@mprib, that all sounds very good to me! I checked the last missing box in my review panel, and I am good to go now to continue with the next part of the submission phase @mooniean :)
Concerning the open issue https://github.com/mprib/caliscope/issues/648, if you need my support and testing again, just let me know over there ^^
Apologies for the delay!
@davidpagnon are you happy for the review to proceed?
I am, thanks! The issues have been taken into account. Not all have been addressed yet, but it is a work in progress and the remaining ones are details: the toolbox is perfectly usable and useful.
@editorialbot set <DOI here> as archive
@editorialbot set <version here> as version
@editorialbot generate pdf
@editorialbot check references
and ask author(s) to update as needed
@editorialbot recommend-accept
@mprib Last final checks! After you've finished the tasks in the comment above, let me know :)
@mooniean,
The JOSS release has been uploaded to Zenodo: https://zenodo.org/records/13905848
DOI: 10.5281/zenodo.13905848
I have double checked the ORCID/Author/Title/License info.
Please let me know if there is anything that I need to clean up on my end.
Thank you!
Mac
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Upon reviewing the proof I have made minor updates to the paper.bib to address some issues.
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@editorialbot set 10.5281/zenodo.13905848 as archive
Done! archive is now 10.5281/zenodo.13905848
@mprib Hello! Those changes were sneaky, I was going through my reading :)
Just one missing word in the paper: Line 61:
Additionally, both projects export their output to the
Please also post here the version number.
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
✅
OK DOIs
- 10.5281/zenodo.7233714 is OK
- 10.1016/j.celrep.2021.109730 is OK
- 10.48550/arXiv.1906.08172 is OK
- 10.1038/s41593-018-0209-y is OK
- 10.21105/joss.04362 is OK
- 10.1007/3-540-44480-7_21 is OK
- 10.1038/s41592-019-0686-2 is OK
🟡 SKIP DOIs
- No DOI given, and none found for title: The OpenCV Library
- No DOI given, and none found for title: Indie Vicon System Cost and Solo Operator Guide
- No DOI given, and none found for title: Large-Scale Bundle Adjustment in Scipy
- No DOI given, and none found for title: OpenMMLab Pose Estimation Toolbox and Benchmark
❌ MISSING DOIs
- 10.3389/fspor.2020.00050 may be a valid DOI for title: Evaluation of 3D Markerless Motion Capture Accurac...
❌ INVALID DOIs
- None
@mprib check the missing DOIs (and try to find the ones for the "SKIP" too)
@mooniean,
I have updated paper.md with the edit and paper.bib with the missing DOI. I have double checked the SKIPs and it appears that these do not have DOIs.
The version number in the repo, the zenodo release, and PyPI is v0.5.1.
Please let me know if anything additional is needed.
Thank you!
Mac
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
✅
OK DOIs
- 10.5281/zenodo.7233714 is OK
- 10.1016/j.celrep.2021.109730 is OK
- 10.48550/arXiv.1906.08172 is OK
- 10.1038/s41593-018-0209-y is OK
- 10.3389/fspor.2020.00050 is OK
- 10.21105/joss.04362 is OK
- 10.1007/3-540-44480-7_21 is OK
- 10.1038/s41592-019-0686-2 is OK
🟡 SKIP DOIs
- No DOI given, and none found for title: The OpenCV Library
- No DOI given, and none found for title: Indie Vicon System Cost and Solo Operator Guide
- No DOI given, and none found for title: Large-Scale Bundle Adjustment in Scipy
- No DOI given, and none found for title: OpenMMLab Pose Estimation Toolbox and Benchmark
❌ MISSING DOIs
- None
❌ INVALID DOIs
- None
@editorialbot set v0.5.1 as version
Done! version is now v0.5.1
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@editorialbot recommend-accept
Attempting dry run of processing paper acceptance...
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
✅
OK DOIs
- 10.5281/zenodo.7233714 is OK
- 10.1016/j.celrep.2021.109730 is OK
- 10.48550/arXiv.1906.08172 is OK
- 10.1038/s41593-018-0209-y is OK
- 10.3389/fspor.2020.00050 is OK
- 10.21105/joss.04362 is OK
- 10.1007/3-540-44480-7_21 is OK
- 10.1038/s41592-019-0686-2 is OK
🟡 SKIP DOIs
- No DOI given, and none found for title: The OpenCV Library
- No DOI given, and none found for title: Indie Vicon System Cost and Solo Operator Guide
- No DOI given, and none found for title: Large-Scale Bundle Adjustment in Scipy
- No DOI given, and none found for title: OpenMMLab Pose Estimation Toolbox and Benchmark
❌ MISSING DOIs
- None
❌ INVALID DOIs
- None
:wave: @openjournals/dsais-eics, this paper is ready to be accepted and published.
Check final proof :point_right::page_facing_up: Download article
If the paper PDF and the deposit XML files look good in https://github.com/openjournals/joss-papers/pull/5997, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept
@editorialbot generate pdf
checking out the following:
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
:wave: @mprib - Thanks for producing a really nice submission. I just need you to address the following before I accept this for publication:
No need to conduct a new release. Just let me know when this has been fixed and I'll accept.
Thanks!
@crvernon,
I've made the update now to paper.md.
Thank you!
Mac
@editorialbot generate pdf
Submitting author: @mprib (Donald Prible)
Repository: https://github.com/mprib/caliscope
Branch with paper.md (empty if default branch):
Version: v0.5.1
Editor: @mooniean
Reviewers: @Timozen, @davidpagnon
Archive: 10.5281/zenodo.13905848
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@Timozen & @davidpagnon, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @mooniean know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @davidpagnon
📝 Checklist for @Timozen