openjournals / joss-reviews

Reviews for the Journal of Open Source Software

[REVIEW]: LenslessPiCam: A Hardware and Software Platform for Lensless Computational Imaging with a Raspberry Pi #4747

Closed · editorialbot closed this issue 1 year ago

editorialbot commented 2 years ago

Submitting author: @ebezzam (Eric Bezzam)
Repository: https://github.com/LCAV/LenslessPiCam
Branch with paper.md (empty if default branch):
Version: v1.0.4
Editor: @danasolav
Reviewers: @raolivei13, @siddiquesalman
Archive: 10.5281/zenodo.8036869

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/251f14f2ffe4ccf239796ad4a71e2bb7"><img src="https://joss.theoj.org/papers/251f14f2ffe4ccf239796ad4a71e2bb7/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/251f14f2ffe4ccf239796ad4a71e2bb7/status.svg)](https://joss.theoj.org/papers/251f14f2ffe4ccf239796ad4a71e2bb7)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@antipa & @vboomi & @raolivei13, your review will be checklist-based. Each of you will have a separate checklist that you should update when carrying out your review. First of all, you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @danasolav know.

Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest.

Checklists

📝 Checklist for @raolivei13

📝 Checklist for @siddiquesalman

editorialbot commented 2 years ago

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf
editorialbot commented 2 years ago
Software report:

github.com/AlDanial/cloc v 1.88  T=0.08 s (578.0 files/s, 79113.8 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          35            727           1008           3548
Markdown                         7            159              0            569
TeX                              1             11              0            114
Arduino Sketch                   1              2              1             12
Bourne Shell                     1              1              0              7
-------------------------------------------------------------------------------
SUM:                            45            900           1009           4250
-------------------------------------------------------------------------------

gitinspector failed to run statistical information for the repository
editorialbot commented 2 years ago

Wordcount for paper.md is 2069

editorialbot commented 2 years ago

📄 Download article proof · View article proof on GitHub

editorialbot commented 2 years ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1364/OPTICA.431361 is OK
- 10.1109/ICASSP.2017.8005297 is OK
- 10.1109/ICASSP.2019.8682923 is OK
- 10.1561/2200000016 is OK
- 10.1109/CVPR.2018.00068 is OK
- 10.1137/080716542 is OK
- 10.1364/OE.27.028075 is OK

MISSING DOIs

- Errored finding suggestions for "Build your own DiffuserCam: Tutorial", please try later
- Errored finding suggestions for "Pycsou", please try later
- Errored finding suggestions for "A method for solving the convex programming proble...", please try later

INVALID DOIs

- None
danasolav commented 2 years ago

@editorialbot remove @antipa from reviewers

editorialbot commented 2 years ago

I'm sorry human, I don't understand that. You can see what commands I support by typing:

@editorialbot commands

danasolav commented 2 years ago

@editorialbot remove @antipa from reviewers

editorialbot commented 2 years ago

@antipa removed from the reviewers list!

danasolav commented 2 years ago

@vboomi, @raolivei13 the review process takes place here. Please see the instructions in the thread above (generate your checklists etc.) and in this link. Thanks!

raolivei13 commented 2 years ago

Review checklist for @raolivei13

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

ebezzam commented 2 years ago

Hi @raolivei13, thank you for your review. I was wondering if you could elaborate on some of the points you haven't checked? I've left a comment on each point below.

Perhaps some of these things were not made clear in the paper, in which case it would be great to receive your feedback on how we can better present them and what we should include to fill in the gaps. Thanks!

Substantial scholarly effort

Why do you think the work doesn't meet the scope eligibility described in the JOSS guidelines?

Data sharing

We describe in the README (https://github.com/LCAV/LenslessPiCam#data-for-examples-) where to get the data for our examples.

Reproducibility

It's true that when it comes to hardware, it takes more effort to reproduce. To this end, we tried to be as detailed as possible about reproducing our camera through Medium posts. Otherwise, in terms of reconstruction, we provide scripts (https://github.com/LCAV/LenslessPiCam/tree/main/scripts) that we hope make it straightforward to reproduce the results we present in the paper.

Functionality

Again, as hardware is involved, the functionality for measurement may be difficult to reproduce. But in terms of reconstruction, we hope the following scripts make it straightforward to confirm that side of things:

Performance

The "Efficient reconstruction" section describes some of our performance claims, which can be reproduced with these scripts:

Automated tests

We provide unit tests in this folder (https://github.com/LCAV/LenslessPiCam/tree/main/test), which can be run with pytest.
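
To give an idea of the style of check, a typical unit test for an imaging pipeline verifies, for instance, that a forward model and its adjoint are consistent. The snippet below is a toy sketch only, not our actual test suite:

```python
# Toy sketch (not the package's actual tests): a pytest-style check that a
# circular-convolution forward model and its adjoint agree, i.e.
# <A x, y> == <x, A^T y> up to numerical precision.
import numpy as np


def forward(psf, x):
    # Circular convolution with the PSF, implemented in the frequency domain.
    return np.fft.irfft2(np.fft.rfft2(psf) * np.fft.rfft2(x), s=x.shape)


def adjoint(psf, y):
    # Circular correlation with the PSF (the adjoint of the forward model).
    return np.fft.irfft2(np.conj(np.fft.rfft2(psf)) * np.fft.rfft2(y), s=y.shape)


def test_forward_adjoint_consistency():
    rng = np.random.default_rng(0)
    psf = rng.random((64, 64))
    x, y = rng.random((64, 64)), rng.random((64, 64))
    assert np.isclose(np.sum(forward(psf, x) * y), np.sum(x * adjoint(psf, y)))
```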

vboomi commented 2 years ago

Review checklist for @vboomi

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

raolivei13 commented 2 years ago

Hello, there are some points where I made a mistake, and I will fix them. I will review the checklist points at some point this week.

best, Richard

danasolav commented 1 year ago

@raolivei13, @vboomi, how is the review process going?

danasolav commented 1 year ago

@raolivei13 @vboomi could you please provide an update on your review process?

raolivei13 commented 1 year ago

Hello, I have reviewed some of the missing points. When it comes to "Reproducibility", I was a little hesitant to check this off, because it can be interpreted in different ways: reproducibility in terms of the software, or reproducibility in terms of the hardware? Since this paper is probably getting published in a software journal, we are probably looking at how easily the software can be re-used for a similar experiment involving some sort of image reconstruction with a lensless camera. The hardware, on the other hand, seems to go hand in hand with the software, i.e., there would be no problem here if we didn't consider the use of a lensless camera. The paper illustrates the problem quite well, but if I were to reproduce this experiment, I would be a little lost on how to set it up on the hardware side of things. For example: "How do I carefully remove the lens from the PiCamera?", "How do I capture the impulse response of the system (i.e., the PSF)?". I am not sure how important this hardware reproducibility is, considering the nature of the paper, which leans toward the software approach. This is all I have to say; I hope this comment was somewhat insightful, but at the end of the day, as I mentioned before, this is a software paper, and the hardware aspects might not be too important.

Best, Richard

danasolav commented 1 year ago

@raolivei13, thank you for the important comment. @ebezzam, since this software is inherently linked to certain hardware, I recommend adding to your repository sufficient details on the hardware and experimental setup, such that replication of your setup by new users is straightforward and unambiguous.

ebezzam commented 1 year ago

Hi @raolivei13, thank you for that comment on reproducing the hardware. I agree that it is a bit ambiguous whether the hardware is also meant to be reproduced. Nonetheless, this is something we did strive to achieve (reproducibility of hardware and accessibility of components), and you can find the instructions on building the camera in the blog post that is referenced in the README and the "About" section of the repository. Moreover, I just added another comment in the Setup section. It is also mentioned in Line 80 of the paper. Please let me know if you think there is another way this information could be made clearer.

We opted for a Medium article as we found it to be a much friendlier/more interactive way to present the hardware side of things:

But if you feel like more of this info should be placed in the README, let us know!

ebezzam commented 1 year ago

@editorialbot generate pdf

editorialbot commented 1 year ago

📄 Download article proof · View article proof on GitHub

ebezzam commented 1 year ago

I've updated the PDF as there have been developments in the project, most notably:

danasolav commented 1 year ago

@siddiquesalman, are you able to join this review in place of @vivek? Thanks, Dana

siddiquesalman commented 1 year ago

Hi @danasolav, yes I can join the review of this work.

danasolav commented 1 year ago

@editorialbot remove @vboomi from reviewers

editorialbot commented 1 year ago

@vboomi removed from the reviewers list!

danasolav commented 1 year ago

@editorialbot add @siddiquesalman as reviewer

editorialbot commented 1 year ago

@siddiquesalman added to the reviewers list!

danasolav commented 1 year ago

@siddiquesalman, thank you for joining this review. Please generate your checklist by typing: "@editorialbot generate my checklist."

siddiquesalman commented 1 year ago

Review checklist for @siddiquesalman

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

danasolav commented 1 year ago

@siddiquesalman, how is the review process going? Please note that we aim for reviews to be completed within about 2-4 weeks. Let me know if you require some more time, and feel free to ping me (@danasolav) if you have any questions/concerns.

siddiquesalman commented 1 year ago

The repository builds on this popular DiffuserCam tutorial from Waller Lab, which provides extensive details on how to build a diffuser-based lensless camera and also provides Python implementations of a few algorithms for easy testing. The proposed repository is an extension of the same with some changes. Therefore, I have concerns about the usefulness of this submission in its current form and have some suggestions that might improve it.

Strengths:

  1. A more structured approach to Pi-camera-based lensless imaging.
  2. A simple framework for remote capture of raw Bayer data.
  3. Faster implementations of existing traditional reconstruction algorithms.

Weaknesses and Suggestions:

  1. Reconstruction Quality: The current reconstruction resolution is quite low for the example image you have provided. I believe a controlled, smaller scene that occupies only part of the field of view (FoV) was used to avoid heavy cropping of the measurement. However, this is not a very useful scenario in the real world, where a significant amount of light comes from the whole FoV of the camera (which can span 40-45 degrees on each side for your prototype). So, please include results and measurements for larger scenes displayed on the monitor, as well as real-world scenes of objects placed in front of the camera that occupy a large chunk of the FoV. How does your algorithm scale for these larger scenes? Please provide an FoV analysis in the corresponding Medium article or the document accompanying the submission. This is important to justify the usefulness of the system.
  2. Biscarrat et al. have provided Python implementations of the ADMM, GD, and FISTA algorithms, which are pretty convenient to use for people working in lensless imaging, including myself. The package provided here simply extends them to RGB images and provides a faster implementation. One way to improve upon the existing implementation would be to add GPU support through Python packages like JAX or PyTorch. For example, this repository has a GPU implementation of ADMM for a lensless camera in PyTorch; similar re-implementations can be done for the other algorithms. I understand that GPUs are not always available, but it is still a useful feature to have, as it allows future integration of machine learning (a rough sketch of what such a PyTorch-based update could look like is given after this list).
  3. How does your reconstruction software work for different PSFs and captures already available here and here?
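
To make the GPU suggestion above concrete, here is a rough sketch (illustrative only, not code from either repository) of what a PyTorch gradient-descent update could look like; ADMM or FISTA could be ported along similar lines:

```python
# Illustrative sketch of GPU-backed gradient descent for lensless deconvolution
# in PyTorch. It assumes a circular-convolution forward model and omits the
# padding/cropping a real pipeline would need.
import torch


def gd_reconstruct(psf: torch.Tensor, meas: torch.Tensor, n_iter: int = 100) -> torch.Tensor:
    device = "cuda" if torch.cuda.is_available() else "cpu"
    H = torch.fft.rfft2(psf.to(device))
    Y = torch.fft.rfft2(meas.to(device))
    lr = 1.0 / torch.abs(H).max() ** 2               # step size small enough for stability
    X = torch.zeros_like(Y)
    for _ in range(n_iter):
        X = X - lr * torch.conj(H) * (H * X - Y)     # gradient of 0.5 * ||H X - Y||^2
    return torch.clamp(torch.fft.irfft2(X, s=meas.shape[-2:]), min=0).cpu()
```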

Overall, I like the idea of having more structured documentation and software for Pi-camera-based lensless imaging. However, the usefulness of the current draft is not fully justified; the above suggestions should be incorporated to address that.

ebezzam commented 1 year ago

Hi @siddiquesalman, thank you for your comments! It’s great to have feedback from someone else working in lensless imaging.

Thank you for pointing out some of the strengths. I would also like to point out:

Your comments suggest that some of the presentation could be improved. Our paper is already quite lengthy compared with other JOSS papers, so if you have suggestions on what is essential to keep or rearrange, that would be useful. @danasolav, do you have suggestions on this?

Regarding the weaknesses and suggestions:

  1. It wasn't our intention to cherry-pick a controlled scene. It was one that allows for a "meaningful" comparison with the DiffuserCam tutorial, as the quality is very poor with this baseline camera (Fig 1). Otherwise, results from student projects have shown wider scenes displayed on a monitor (line 51, example project, see page 10), and Figure 5 shows a wider scene. For FoV analysis, do you mean something like Fig 3 in DiffuserCam3D?
  2. I've added GPU / PyTorch support with this PR. I've also updated Table 1 to reflect computation time when using PyTorch (with and without GPU). Thank you for this suggestion, as the availability of PyTorch / GPU significantly speeds up computation (800x and 2000x for GD and ADMM respectively with PyTorch + GPU!) and allows for machine learning integration. As our main platform is a Raspberry Pi, we hadn't planned on PyTorch support, so we keep it an optional dependency such that users can install the library and use the NumPy versions without having to install PyTorch (the optional-dependency pattern is sketched after this list).
  3. Figure 5 shows reconstruction with the LenslessLearning dataset. And we are actually working to integrate FlatNet by June!
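
For reference, the optional PyTorch dependency follows the usual pattern, roughly along these lines (an illustrative sketch, not our exact code):

```python
# Rough illustration of handling an optional PyTorch dependency so that the
# NumPy code path still works when torch is not installed.
import numpy as np

try:
    import torch
    TORCH_AVAILABLE = True
except ImportError:
    TORCH_AVAILABLE = False


def fft2(x):
    """Dispatch to torch (CPU/GPU) or NumPy depending on the input type."""
    if TORCH_AVAILABLE and isinstance(x, torch.Tensor):
        return torch.fft.fft2(x)
    return np.fft.fft2(x)
```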

Once again, thank you for going over our work and your suggestions.

ebezzam commented 1 year ago

@editorialbot generate pdf

editorialbot commented 1 year ago

📄 Download article proof · View article proof on GitHub

siddiquesalman commented 1 year ago

Thanks for addressing my comments, @ebezzam.

  1. I think including reconstructions of real-world scenes, in addition to the monitor-displayed ones, will address this point (including the FoV question). Including reconstruction results like Fig S3 in the DiffuserCam3D supplementary material (link) or Fig 12 in the PhlatCam paper (link) would do the job. Comparison with the baseline camera is not needed. Since the low-cost prototype is a contribution, it's good to have an idea of how it works for these real-world objects.
  2. Thanks for including GPU support for this. This improves the algorithmic/software contribution.
  3. Thanks for pointing this out.
  4. I think that for some of the figures (like Figures 1 and 3), the images can be cropped to save some space, as the majority of the background is zero (a small example of such a crop is sketched below).
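
For example, a crop to the above-threshold bounding box could be done along these lines (illustrative sketch only):

```python
# Illustrative sketch: crop an image to the bounding box of its non-dark region.
import numpy as np


def crop_to_content(img, thresh=0.0):
    mask = img.max(axis=-1) > thresh if img.ndim == 3 else img > thresh
    rows, cols = np.any(mask, axis=1), np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return img[r0 : r1 + 1, c0 : c1 + 1]
```
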
danasolav commented 1 year ago

@siddiquesalman thank you for these additional comments. @ebezzam please address these comments, and then I'll proceed with the final reviewing step.

danasolav commented 1 year ago

@raolivei13 , can you please check the submission one last time and confirm if you recommend acceptance?

ebezzam commented 1 year ago

@editorialbot generate pdf

editorialbot commented 1 year ago

📄 Download article proof · View article proof on GitHub

ebezzam commented 1 year ago

@danasolav @raolivei13 I've included the images requested by @siddiquesalman above; sorry for the delay!

danasolav commented 1 year ago

@siddiquesalman @raolivei13 could you please confirm that @ebezzam's revisions answer all of your questions and requests?

siddiquesalman commented 1 year ago

@danasolav all my comments have been addressed by @ebezzam. Thanks.

danasolav commented 1 year ago

@ebezzam, please see the following minor comments regarding the paper. After addressing these, we'll be able to proceed with the acceptance process:

Thanks, Dana

ebezzam commented 1 year ago

@editorialbot generate pdf

editorialbot commented 1 year ago

📄 Download article proof · View article proof on GitHub

ebezzam commented 1 year ago

@danasolav, thank you for the detailed and very helpful comments! I've addressed everything in the above generated PDF. Please let me know if I missed anything.

Below are some comments on a few of your points:

Line 40: please add more information on the comparison shown in Figure 1. They don't seem to compare a reconstruction of the same image, so what is the significance of this comparison?

The purpose is to compare:

to show that reconstructions aren't as good and are limited to grayscale. I've done a new measurement with DiffuserCam so that the image is the same for both cameras in Figure 1.

Line 95: the function name does not compile properly and extends beyond the line. Could you force a manual new line?

In the Docker-compiled version (without line numbers) it renders correctly (example). Could it be an artifact of the peer-review version with line numbers?

Line 143: it would be helpful to explain the meaning of these numbers and their limits, where applicable.

I've added more description of each metric, its limits, and links/references. In lines 154-160, I've added an interpretation of Figure 6 and Table 2, which motivates the next section on using measured / simulated data.
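
For anyone following along, full-reference metrics of this kind are typically computed along these lines (a minimal sketch with scikit-image, assuming PSNR and SSIM are among the metrics; the paper describes the exact set used and gives references):

```python
# Minimal sketch of computing standard full-reference image-quality metrics
# (assuming PSNR and SSIM; see the paper for the exact metrics used).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((128, 128))
estimate = np.clip(reference + 0.05 * rng.standard_normal((128, 128)), 0, 1)

# PSNR is unbounded above (higher is better); identical images give +inf dB.
psnr = peak_signal_noise_ratio(reference, estimate, data_range=1.0)
# SSIM lies in [-1, 1], with 1 meaning structurally identical.
ssim = structural_similarity(reference, estimate, data_range=1.0)
print(f"PSNR: {psnr:.1f} dB, SSIM: {ssim:.3f}")
```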

ebezzam commented 1 year ago

Hi @danasolav, just wondering if you have had time to look at the changes I made and whether they address your points? We'll be presenting LenslessPiCam as a demo at a conference next week, and it would be great (if possible) to have it published by then. Thanks!

danasolav commented 1 year ago

@editorialbot check references

editorialbot commented 1 year ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1364/OPTICA.431361 is OK
- 10.1109/ICASSP.2017.8005297 is OK
- 10.1109/ICASSP.2019.8682923 is OK
- 10.1561/2200000016 is OK
- 10.1109/CVPR.2018.00068 is OK
- 10.1137/080716542 is OK
- 10.1364/OE.27.028075 is OK

MISSING DOIs

- None

INVALID DOIs

- None