openjournals / joss-reviews

Reviews for the Journal of Open Source Software

[PRE REVIEW]: UndersmoothedUnfolding: Undersmoothed uncertainty quantification for unfolding in ROOT #3645

Closed: whedon closed this issue 3 years ago

whedon commented 3 years ago

Submitting author: @jlylekim (Junhyung Lyle Kim)
Repository: https://github.com/jlylekim/UndersmoothedUnfolding
Version: v1.0.0
Editor: @danielskatz
Reviewers: @matthewfeickert
Managing EiC: Daniel S. Katz

:warning: JOSS reduced service mode :warning:

Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.

Status

status

Status badge code:

HTML: <a href="https://joss.theoj.org/papers/fac3ef5490cce1b2ed193d0bdcd3ea9a"><img src="https://joss.theoj.org/papers/fac3ef5490cce1b2ed193d0bdcd3ea9a/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/fac3ef5490cce1b2ed193d0bdcd3ea9a/status.svg)](https://joss.theoj.org/papers/fac3ef5490cce1b2ed193d0bdcd3ea9a)

Author instructions

Thanks for submitting your paper to JOSS, @jlylekim. Currently, there isn't a JOSS editor assigned to your paper.

@jlylekim, if you have any suggestions for potential reviewers, please mention them here in this thread (without tagging them with an @). In addition, the people on this list have already agreed to review for JOSS and may be suitable for this submission (please start at the bottom of the list).

Editor instructions

The JOSS submission bot @whedon is here to help you find and assign reviewers and start the main review. To find out what @whedon can do for you, type:

@whedon commands

whedon commented 3 years ago

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks.

:warning: JOSS reduced service mode :warning:

Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.

For a list of things I can do to help you, just type:

@whedon commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@whedon generate pdf

whedon commented 3 years ago

Wordcount for paper.md is 764

whedon commented 3 years ago
Software report (experimental):

```
github.com/AlDanial/cloc v 1.88  T=0.04 s (430.5 files/s, 148707.6 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
C++                              3            393           1229           2900
reStructuredText                 7             89             57            165
C/C++ Header                     1             39            169            156
make                             2             61             96            128
TeX                              1             11              0             93
Markdown                         2             30              0             76
Python                           1             46             98             36
-------------------------------------------------------------------------------
SUM:                            17            669           1649           3554
-------------------------------------------------------------------------------

Statistical information for the repository '1c517b1420aba53d2e097bb6' was
gathered on 2021/08/22.
The following historical commit information, by author, was found:

Author                     Commits    Insertions      Deletions    % of changes
Lyle Kim                        14         13779          13235           99.99
Mikael Kuusela                   1             1              1            0.01

Below are the number of rows from each author that have survived and are still
intact in the current revision:

Author                     Rows      Stability          Age       % in comments
Lyle Kim                    544            3.9          3.8               49.63
```

whedon commented 3 years ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1142/S0217751X20501456 is OK

MISSING DOIs

- 10.1214/15-aoas857 may be a valid DOI for title: Statistical unfolding of elementary particle spectra: Empirical Bayes estimation and bias-corrected uncertainty quantification
- 10.1214/17-aoas1053 may be a valid DOI for title: Shape-constrained uncertainty quantification in unfolding steeply falling elementary particle spectra
- 10.1088/1748-0221/7/10/t10003 may be a valid DOI for title: TUnfold, an algorithm for correcting migration effects in high energy physics

INVALID DOIs

- None

whedon commented 3 years ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

danielskatz commented 3 years ago

👋 @dpsanders - would you be able to edit this submission for JOSS?

danielskatz commented 3 years ago

@whedon invite @dpsanders as editor

whedon commented 3 years ago

@dpsanders has been invited to edit this submission.

danielskatz commented 3 years ago

@jlylekim - While we find an editor, you could work on the possibly missing DOIs that whedon suggests, but note that some may be incorrect. Please feel free to make changes to your .bib file, then use the command @whedon check references to check again, and the command @whedon generate pdf when the references are right to make a new PDF. Whedon commands need to be the first entry in a new comment.

whedon commented 3 years ago

Checking the BibTeX entries failed with the following error:

```
Failed to parse BibTeX on value "year" (NAME) [#, "@", #, {:title=>["TUnfold, an algorithm for correcting migration effects in high energy physics"], :author=>["Schmitt, Stefan"], :journal=>["Journal of Instrumentation"], :volume=>["7"], :number=>["10"], :pages=>["T10003"], :doi=>["10.1088/1748-0221/7/10/T10003"]}]
```

jlylekim commented 3 years ago

@whedon check references

whedon commented 3 years ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1214/15-AOAS857 is OK
- 10.5075/epfl-thesis-7118 is OK
- 10.1214/17-AOAS1053 is OK
- 10.1088/1748-0221/7/10/T10003 is OK
- 10.1016/S0168-9002(97)00048-X is OK
- 10.1142/S0217751X20501456 is OK

MISSING DOIs

- None

INVALID DOIs

- None

jlylekim commented 3 years ago

@whedon generate pdf

whedon commented 3 years ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

jlylekim commented 3 years ago

Thank you for pointing out the missing DOIs; they have been added to the .bib file.

Also, here is a list of potential reviewers: andrewfowlie ghammad rafaelab

Thank you, Lyle

danielskatz commented 3 years ago

@whedon assign me as editor

whedon commented 3 years ago

OK, the editor is @danielskatz

danielskatz commented 3 years ago

👋 @andrewfowlie, @ghammad, @rafaelab - would one or two of you want to review this submission for JOSS?

matthewfeickert commented 3 years ago

@danielskatz :wave: given our discussion on IRIS-HEP Slack.

danielskatz commented 3 years ago

Thanks @matthewfeickert - I'll add you as a reviewer, and once we get another reviewer, we will start the review in a new thread

danielskatz commented 3 years ago

@whedon assign @matthewfeickert as reviewer

whedon commented 3 years ago

OK, @matthewfeickert is now a reviewer

andrewfowlie commented 3 years ago

Looks interesting and I’m in general willing and able to contribute, but I’m afraid my schedule is too hectic right now.

danielskatz commented 3 years ago

@jlylekim - do you have any other suggestions for reviewers? (they don't have to be from our list, as long as they are knowledgeable and not conflicted)

matthewfeickert commented 3 years ago

Hi @jlylekim. Before the review gets started, please read through the JOSS review criteria. At the moment there are parts of your submission that don't address the requirements there, so it will speed up the review if you can address them ahead of time.

You're also vendoring ROOT's TUnfold in TUnfoldV17.cxx. That is fine, given that ROOT is licensed under the LGPL, so if ROOT has the proper license to distribute TUnfold then you can vendor it as well (though note that your LICENSE is just a template and not actually filled in). The file header states:

///////////////////////////////////////////////////////////////////////////////////////
///////////////////////////////////////////////////////////////////////////////////////
// Junhyung Lyle Kim, Rice University and Mikael Kuusela, Carnegie Mellon University //
// Date: 1/17/2020                                                                   //
// Modified from TUnfoldV17.cxx (version 17.8) by Stefan Schmitt                     //
//                                                                                   //
// Our contribution starts from line XXX to line YYY                                 //
// The rest is from the original TUnfold (version 17.8)                              //
//                                                                                   //
// Our modiciation and its documentation can be found in                             //
// https://jlylekim.github.io/UndersmoothedUnfolding/                                //
//                                                                                   //
///////////////////////////////////////////////////////////////////////////////////////
///////////////////////////////////////////////////////////////////////////////////////

so the code contribution being submitted is effectively only lines 3706 to 3877 of TUnfoldV17.cxx.

I'll defer to @danielskatz and the other editors here, but before this goes forward to a full review it would be helpful to understand whether this meets the scholarly effort criterion for JOSS, as this is a short extension to TUnfold.

danielskatz commented 3 years ago

@matthewfeickert - I took 2900 LOC of C++ to be a substantial contribution, and extending an existing package is not a problem. But now I see that what you are saying is that the contribution is actually just 150 LOC. Is this correct?

matthewfeickert commented 3 years ago

> But now I see that what you are saying is that the contribution is actually just 150 LOC. Is this correct?

@danielskatz Yes. As far as I can tell from visual inspection of the following diff and the code comments, the contribution that differs from TUnfold v17.8 (TUnfold is distributed on Stefan Schmitt's website https://www.desy.de/~sschmitt/tunfold.html, and we normally see it vendored in ROOT) is these 171 lines.

Diff of the source files:

```console
$ mkdir TUnfold
$ curl -sLO https://www.desy.de/~sschmitt/TUnfold/TUnfold_V17.8.tgz
$ tar -xf TUnfold_V17.8.tgz --directory TUnfold
$ curl -sL https://raw.githubusercontent.com/jlylekim/UndersmoothedUnfolding/349d4d95363eaea2cabd377576caa1a82b036a38/TUnfoldV17.cxx -o UndersmoothedUnfolding_TUnfoldV17.cxx
$ diff TUnfold/TUnfoldV17.cxx UndersmoothedUnfolding_TUnfoldV17.cxx
0a1,17
> ///////////////////////////////////////////////////////////////////////////////////////
> ///////////////////////////////////////////////////////////////////////////////////////
> // Junhyung Lyle Kim, Rice University and Mikael Kuusela, Carnegie Mellon University //
> // Date: 1/17/2020                                                                   //
> // Modified from TUnfoldV17.cxx (version 17.8) by Stefan Schmitt                     //
> //                                                                                   //
> // Our contribution starts from line XXX to line YYY                                 //
> // The rest is from the original TUnfold (version 17.8)                              //
> //                                                                                   //
> // Our modiciation and its documentation can be found in                             //
> // https://jlylekim.github.io/UndersmoothedUnfolding/                                //
> //                                                                                   //
> ///////////////////////////////////////////////////////////////////////////////////////
> ///////////////////////////////////////////////////////////////////////////////////////
>
>
>
[... many small hunks whose visible text is unchanged (whitespace-only differences) are omitted; the remaining substantive changes are shown below ...]
119a137,138
> #include "Math/ProbFunc.h"
>
1707a1727
>    std::cout << "MODIFIED FOR UNDERSMOOTHING" << std::endl;
3679a3700,3886
> //*********************************************************************//
> //*********************************************************************//
> // Contribution of JLK and MK starts here                              //
> //*********************************************************************//
> //*********************************************************************//
>
> TVectorD TUnfoldV17::ComputeCoverage(TMatrixD *beta, Double_t tau)
> {
>    if(!fVyyInv) {
>       GetInputInverseEmatrix(0);
>       if(fConstraint != kEConstraintNone) {
>          fNdf--;
>       }
>    }
>
>    // calculate bias
>    TMatrixDSparse *AtVyyinv=MultiplyMSparseTranspMSparse(fA,fVyyInv);
>    fEinv=MultiplyMSparseMSparse(AtVyyinv,fA);
>    TMatrixDSparse *lSquared=MultiplyMSparseTranspMSparse(fL,fL);
>    AddMSparse(fEinv,tau*tau,lSquared);
>
>    //        T              2  T  -1
>    // fE = [A  Vyyinv A + tau  L  L]
>    Int_t rank=0;
>    fE = InvertMSparseSymmPos(fEinv,&rank);
>
>    // calculate linear estimator
>    TMatrixDSparse *EAtVyyinv=MultiplyMSparseMSparse(fE,AtVyyinv);
>    TMatrixDSparse *EAtVyyinvA=MultiplyMSparseMSparse(EAtVyyinv,fA);
>
>    TMatrixDSparse *Unit;
>    Unit=new TMatrixDSparse(*EAtVyyinvA);
>    Unit->UnitMatrix();
>
>    AddMSparse(EAtVyyinvA, -1.0, Unit);
>
>    TMatrixDSparse *biasSparse=MultiplyMSparseM(EAtVyyinvA,beta);
>
>    if (fBiasScale != 0.0) {
>       TMatrixDSparse *lSquaredScaled = &((*lSquared)*=tau*tau);
>       TMatrixD *fX0Scaled = &((*fX0)*=fBiasScale);
>       TMatrixDSparse *taulSquaredfX0=MultiplyMSparseM(lSquaredScaled, fX0Scaled);
>       TMatrixDSparse *EtaulSquaredfX0=MultiplyMSparseMSparse(fE, taulSquaredfX0);
>       AddMSparse(biasSparse, 1.0, EtaulSquaredfX0);
>    }
>
>    Double_t *bias_data = biasSparse->GetMatrixArray();
>
>    // calculate SE
>    TMatrixDSparse *EAtVyyinvVyy = MultiplyMSparseMSparse(EAtVyyinv, fVyy);
>
>    // Transpose EAtVyyinv
>    TMatrixDSparse *EAtVyyinvt = new TMatrixDSparse(EAtVyyinv->GetNcols(),EAtVyyinv->GetNrows());
>    EAtVyyinvt->Transpose(*EAtVyyinv);
>
>    TMatrixDSparse *SE = MultiplyMSparseMSparse(EAtVyyinvVyy, EAtVyyinvt);
>    SE->Sqrt();
>
>    // extract diagonal elements of SE
>    const Int_t *SE_rows=SE->GetRowIndexArray();
>    const Int_t *SE_cols=SE->GetColIndexArray();
>    const Double_t *SE_data=SE->GetMatrixArray();
>    // SEii: diagonals of SE
>    TVectorD SEii(SE->GetNrows());
>    Int_t nError=0;
>    for(Int_t iA=0;iA<SE->GetNrows();iA++) {
>       for(Int_t indexA=SE_rows[iA];indexA<SE_rows[iA+1];indexA++) {
>          Int_t jA=SE_cols[indexA];
>          if(iA==jA) {
>             if(!(SE_data[indexA]>=0.0)) nError++;
>             SEii(iA)=SE_data[indexA];
>          }
>       }
>    }
>
>    Int_t dim = fXToHist.GetSize() - 2;
>    // dim should equal the # of bins in TRUE space
>    // Subtracts 2 because in TUnfold constructor,
>    // 2 is added to the # of bins in TRUE space
>    // for overflow purpose
>
>    TVectorD Coverage_probability(dim);
>
>    for(Int_t i=0; i<dim; i++) {
>       Coverage_probability(i) = ROOT::Math::normal_cdf(bias_data[i]/SEii(i)+1)
>                               - ROOT::Math::normal_cdf(bias_data[i]/SEii(i)-1);
>    }
>    return Coverage_probability;
> }
>
> TVectorD TUnfoldV17::ComputeCoverage(TH1 *hist_beta, Double_t tau)
> {
>    // converting TH1 hist_beta to TMatrixD beta
>    TMatrixD *beta;
>    beta = new TMatrixD(GetNx(), 1);
>    for (Int_t i = 0; i < GetNx(); i++) {
>       (*beta) (i, 0) = hist_beta->GetBinContent(fXToHist[i]);
>    }
>
>    TVectorD Coverage_probability = ComputeCoverage(beta, tau);
>    return Coverage_probability;
> }
>
> Double_t TUnfoldV17::UndersmoothTau(Double_t tau, Double_t epsilon, Int_t max_iter)
> {
>    if(tau <= 0) {
>       Error("UndersmoothedUnfolding::UndersmoothTau", "Tau should be strictly positive");
>       return std::numeric_limits<Double_t>::infinity();
>    }
>    if(epsilon <= 0) {
>       Error("UndersmoothedUnfolding::UndersmoothTau", "Tolerance should be strictly positive");
>       return std::numeric_limits<Double_t>::infinity();
>    }
>    if(max_iter <= 0) {
>       Error("UndersmoothedUnfolding::UndersmoothTau", "Max number of iteration should be strictly positive");
>       return std::numeric_limits<Double_t>::infinity();
>    }
>
>    Double_t nominalCoverage = ROOT::Math::normal_cdf(1) - ROOT::Math::normal_cdf(-1);
>
>    //Double_t scaling = sqrt(0.95);
>    Double_t scaling = 0.90;
>
>    Int_t num_iter = 0;
>    Int_t fixed = 0;
>    Double_t coverages[2]={0.0};
>
>    TMatrixD *fXHat = 0;
>    TMatrixD *fXHatPrev = 0;
>
>    do {
>       if (fixed != 1) {
>          fXHatPrev = fXHat;
>
>          TMatrixDSparse *AtVyyinv=MultiplyMSparseTranspMSparse(fA,fVyyInv);
>          TMatrixDSparse *rhs=MultiplyMSparseM(AtVyyinv,fY);
>          TMatrixDSparse *lSquared=MultiplyMSparseTranspMSparse(fL,fL);
>          if (fBiasScale != 0.0) {
>             TMatrixDSparse *rhs2=MultiplyMSparseM(lSquared,fX0);
>             AddMSparse(rhs, tau*tau * fBiasScale ,rhs2);
>             DeleteMatrix(&rhs2);
>          }
>          fEinv=MultiplyMSparseMSparse(AtVyyinv,fA);
>          AddMSparse(fEinv,tau*tau,lSquared);
>          Int_t rank=0;
>          fE = InvertMSparseSymmPos(fEinv,&rank);
>          TMatrixDSparse *xSparse=MultiplyMSparseMSparse(fE,rhs);
>          fXHat = new TMatrixD(*xSparse);
>          DeleteMatrix(&rhs);
>          DeleteMatrix(&xSparse);
>          //std::cout << "Obtained new BetaHat" << std::endl;
>       }
>
>       TVectorD computedCoverage = ComputeCoverage(fXHat, tau);
>       coverages[0] = coverages[1];
>       coverages[1] = computedCoverage.Min();
>       Info("UndersmoothTau", "Current computed coverage: %lf", coverages[1]);
>
>       if (coverages[0] > coverages[1]) {
>          Info("UndersmoothTau", "Computed coverage decreased\nPrevious: %lf, Now: %lf",
>               coverages[0], coverages[1]);
>
>          fixed = 1;
>          fXHat = fXHatPrev;
>          TVectorD computedCoverage = ComputeCoverage(fXHat, tau);
>          coverages[1] = computedCoverage.Min();
>       }
>       tau = tau * scaling;
>       Info("UndersmoothTau", "Decreasing tau to %lf", tau);
>
>       num_iter = num_iter+1;
>    } while (coverages[1] < nominalCoverage-epsilon && num_iter < max_iter);
>
>    tau = tau / scaling;
>    Info("UndersmoothTau", "Obtained estimated coverage: %lf", coverages[1]);
>    return tau;
> }
>
> //*********************************************************************//
> //*********************************************************************//
> // Contribution of JLK and MK ends here                                //
> //*********************************************************************//
> //*********************************************************************//
>
>
3693d3899
<
$ curl -sL https://raw.githubusercontent.com/jlylekim/UndersmoothedUnfolding/349d4d95363eaea2cabd377576caa1a82b036a38/TUnfold.h -o UndersmoothedUnfolding_TUnfold.h
$ diff TUnfold/TUnfold.h UndersmoothedUnfolding_TUnfold.h
0a1,4
> // Modified by Junhyung Lyle Kim and Mikael Kuusela
> // starting from TUnfold.h (version 17.8) by Stefan Schmitt
>
>
[... again omitting small hunks whose visible text is unchanged ...]
300a305,314
> //////////////////////////////////////////////////////////////////////////////////////////
> //////////////////////////////////////////////////////////////////////////////////////////
> // Implemented by Junhung Lyle Kim and Mikael Kuusela                                    //
> TVectorD ComputeCoverage(TMatrixD *beta, Double_t tau);                                  //
> TVectorD ComputeCoverage(TH1 *hist_beta, Double_t tau);                                  //
> Double_t UndersmoothTau(Double_t tau_init, Double_t epsilon=0.01, Int_t max_iter=1000);  //
> //////////////////////////////////////////////////////////////////////////////////////////
> //////////////////////////////////////////////////////////////////////////////////////////
>
```

danielskatz commented 3 years ago

👋 @jlylekim - please confirm that this is correct

jlylekim commented 3 years ago

Hi @danielskatz and @matthewfeickert,

Yes, if we restrict the LOC count to the TUnfoldV17.cxx file, then it's true that the implementation of our main algorithm spans lines 3706 to 3877.

However, if we may add a bit more context:

  1. UndersmoothedUnfolding implements a recently proposed modification of Tikhonov-regularized unfolding so that nominal uncertainty quantification is achieved in a fully data-driven manner, which is not possible in existing implementations.

  2. The reason we started from TUnfold and ROOT (instead of implementing from scratch) is that they provide the existing standard implementation of Tikhonov regularization for the high-energy-physics unfolding problem. If we had provided something in Python, for instance, our implementation of the same final functionality would contain many more lines of new code, because we would first need to reimplement the standard functionality provided by TUnfold, yet it would be harder to use from the target users' perspective.

  3. More specifically, starting from an initial estimate of the regularization strength, UndersmoothedUnfolding provides a principled and fully data-driven approach to gradually decreasing the regularization strength until nominal empirical coverage is achieved (the coverage criterion is sketched below). The implemented algorithm can be understood as a meta-algorithm that relies on other algorithms for estimating the initial regularization strength, computing the unfolded solution for a given regularization strength, computing stable matrix inverses, etc. We could have implemented these other algorithms ourselves and had many more lines in our source code. However, since TUnfold and ROOT are the standard libraries for this type of problem, with tried-and-tested implementations of these requisite subalgorithms, we felt it was more appropriate to base our implementation on these existing packages.
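
For concreteness, here is the coverage criterion that drives this iteration, written as a short sketch of what the ComputeCoverage routine in the diff above computes. The per-bin bias and standard-error symbols $b_i(\tau)$ and $s_i(\tau)$ are just notation introduced here; the code evaluates the normal CDF via ROOT::Math::normal_cdf.

```latex
% Estimated coverage of the +/- 1 standard-error interval in truth bin i,
% given the estimated bias b_i(tau) and standard error s_i(tau) of the
% regularized estimator; UndersmoothTau repeatedly shrinks tau (by the
% factor 0.9 in the code) until the minimum over bins reaches the nominal
% 68.3% level up to the tolerance epsilon, or max_iter is hit.
\[
  c_i(\tau) = \Phi\!\left(\frac{b_i(\tau)}{s_i(\tau)} + 1\right)
            - \Phi\!\left(\frac{b_i(\tau)}{s_i(\tau)} - 1\right),
  \qquad
  \min_i c_i(\tau) \;\ge\; \Phi(1) - \Phi(-1) - \epsilon .
\]
```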

In sum, while the total LOC added to TUnfoldV17.cxx may seem small, we believe this is mainly due to our method being a meta-algorithm. This is why we provide two demo files (which contain 623 lines of additional code), one for the Gaussian peaks case and the other for the steeply falling spectrum case, demonstrating (i) the applicability of UndersmoothedUnfolding in different unfolding scenarios, (ii) the failure of existing algorithms in the same scenarios, and (iii) the flexibility of UndersmoothedUnfolding, which can be started from virtually any initial estimate of the regularization strength (e.g., from TUnfold::ScanLcurve).
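
To illustrate how the new methods slot into the usual TUnfold sequence, here is a minimal usage sketch, not the repository's demo code. It assumes the standard TUnfold workflow (constructor, SetInput, ScanLcurve, DoUnfold, GetOutput) together with the UndersmoothTau and ComputeCoverage signatures from the header diff above; the function and histogram names (undersmoothedUnfoldSketch, histMCGenRec, histData, histUnfolded) are hypothetical, and details may differ from the demos in the repository.

```cpp
#include "TUnfold.h"    // vendored TUnfoldV17 with the undersmoothing additions
#include "TH1.h"
#include "TH2.h"
#include "TGraph.h"
#include "TVectorD.h"

// histMCGenRec (TH2), histData (TH1) and histUnfolded (TH1) are hypothetical
// histograms assumed to be booked and filled elsewhere.
void undersmoothedUnfoldSketch(TH2 *histMCGenRec, TH1 *histData, TH1 *histUnfolded)
{
   // Standard Tikhonov-regularized setup from TUnfold.
   TUnfoldV17 unfold(histMCGenRec, TUnfoldV17::kHistMapOutputHoriz,
                     TUnfoldV17::kRegModeCurvature);
   unfold.SetInput(histData);

   // Step 1: any standard method can supply the initial regularization
   // strength, e.g. the L-curve scan already provided by TUnfold.
   TGraph *lCurve = 0;
   unfold.ScanLcurve(30, 0., 0., &lCurve);
   Double_t tauInit = unfold.GetTau();

   // Step 2: decrease tau in a data-driven way until the estimated minimum
   // bin-wise coverage reaches the nominal 68.3% level (new method).
   Double_t tauUS = unfold.UndersmoothTau(tauInit, 0.01, 1000);

   // Step 3: unfold with the undersmoothed tau and inspect the estimated
   // per-bin coverage of the reported uncertainties.
   unfold.DoUnfold(tauUS);
   unfold.GetOutput(histUnfolded);
   TVectorD coverage = unfold.ComputeCoverage(histUnfolded, tauUS);
   coverage.Print();
}
```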

We hope this clarifies the concerns raised above.

danielskatz commented 3 years ago

@jlylekim - While I appreciate your comments, and particularly that you are extending an existing codebase, we are still going to reject this as not meeting the substantial scholarly effort criterion for review by JOSS.

danielskatz commented 3 years ago

@whedon reject

whedon commented 3 years ago

Paper rejected.

rafaelab commented 3 years ago

@danielskatz @jlylekim Sorry for the late reply. I just came back from vacation. I see this has already been sorted :)