openjournals / joss-reviews

Reviews for the Journal of Open Source Software

[REVIEW]: The Walrus: the fastest calculation of hafnians, Hermite polynomials and Gaussian boson sampling #1705

Closed whedon closed 4 years ago

whedon commented 5 years ago

Submitting author: @nquesada (Nicolás Quesada) Repository: https://github.com/xanaduAI/thewalrus Version: v0.10.1 Editor: @katyhuff Reviewers: @amitkumarj441, @poulson Archive: 10.5281/zenodo.3585911

Status

status

Status badge code:

HTML: <a href="https://joss.theoj.org/papers/0aa4dd795abc7ed56c3c23a314fd4c67"><img src="https://joss.theoj.org/papers/0aa4dd795abc7ed56c3c23a314fd4c67/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/0aa4dd795abc7ed56c3c23a314fd4c67/status.svg)](https://joss.theoj.org/papers/0aa4dd795abc7ed56c3c23a314fd4c67)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@amitkumarj441 & @poulson, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @katyhuff know.

Please try and complete your review in the next two weeks

Review checklist for @amitkumarj441

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

Review checklist for @poulson

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

whedon commented 5 years ago

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @PhilipVinc, @poulson it looks like you're currently assigned to review this paper :tada:.

:star: Important :star:

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that, with GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' https://github.com/openjournals/joss-reviews:

watching

  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications

notifications

For a list of things I can do to help you, just type:

@whedon commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@whedon generate pdf
whedon commented 5 years ago
Attempting PDF compilation. Reticulating splines etc...
whedon commented 5 years ago

:point_right: Check article proof :page_facing_up: :point_left:

nquesada commented 5 years ago

@whedon generate pdf

whedon commented 5 years ago
Attempting PDF compilation. Reticulating splines etc...
whedon commented 5 years ago

:point_right: Check article proof :page_facing_up: :point_left:

nquesada commented 5 years ago

@whedon generate pdf

whedon commented 5 years ago
Attempting PDF compilation. Reticulating splines etc...
whedon commented 5 years ago

:point_right: Check article proof :page_facing_up: :point_left:

poulson commented 5 years ago

I am back from overseas travel and am now working on the review. I was able to run the Python tests without a hitch, but I filed issue https://github.com/XanaduAI/thewalrus/issues/56 to suggest fixing the hardcoded eigen3 path in tests/Makefile.

poulson commented 5 years ago

Submitted https://github.com/XanaduAI/thewalrus/issues/57 regarding the default Makefile rule in examples/ leading to an error.

poulson commented 5 years ago

I am very impressed by the library of algorithms for computing hafnians, but I am worried there is insufficient justification for the claim that your library is the "fastest", given the lack of benchmarks relative to pre-existing libraries.

For example, on the subject of computing permanents, you have the wonderful documentation here: https://the-walrus.readthedocs.io/en/latest/permanent_tutorial.html, but the forecasts for computing the permanent of a 35 x 35 matrix seem to be just under an hour, whereas the timings at https://codegolf.stackexchange.com/questions/97060/calculate-the-permanent-as-quickly-as-possible?rq=1 seem to require just a few minutes.

Also, as noted, both Mathematica and Maple contain library routines for computing permanents. I recognize that these are both proprietary packages -- but I believe the 'fastest' claims demand at least some connection to whatever is closest to state-of-the-art.

As a side note, I extended examples/example.cpp to compute hafnians up to size 30 x 30 after increasing nmax from 10 to 15 and noticed that double-precision numbers overflow starting at m=15. Have you considered falling back to higher precision if overflow is detected?
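To make that suggestion concrete, here is a hypothetical sketch of what the detection half could look like, modeled on the driver code later in this thread; whether a higher-precision overload exists to fall back to is an open question, so that part is only a comment:

    #include <cmath>
    #include <vector>
    #include <libwalrus.hpp>

    // Hypothetical helper: compute a hafnian in double precision and report
    // whether the result overflowed to infinity (or broke down to NaN). A caller
    // could then retry in higher precision, assuming such a code path is added.
    bool hafnian_overflowed(std::vector<double> mat, double &hafval) {
        hafval = libwalrus::loop_hafnian(mat);
        return std::isinf(hafval) || std::isnan(hafval);
    }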

nquesada commented 5 years ago

Thanks @poulson! We are working on fixing the Makefile and also on improving the permanent code. Regarding being "the fastest": I guess we really like the idea of a walrus being the fastest at something ;). More to the point, we never claimed anything about permanents; they are in the library for historical reasons, are not mentioned in the scope of the library, and appear only in the paper submission to provide some historical context. We have done hafnian benchmarking (cf. the JEA paper in the bibliography) and have also looked at the codegolf implementations.

poulson commented 5 years ago

Thank you for the fast response, @nquesada! I see what you mean about the claim being specific to hafnians, but you also point out that haf([0, W; W^T, 0]) = perm(W), so it is possible to at least make a qualitative connection between hafnian runtimes and best-in-class permanent runtimes.
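Written out, that embedding reads

    \operatorname{haf}\!\begin{pmatrix} 0 & W \\ W^{T} & 0 \end{pmatrix} = \operatorname{perm}(W),

so the permanent of an n x n matrix W is the hafnian of a 2n x 2n matrix with zero diagonal blocks.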

But you are right that the codegolf timings at https://codegolf.stackexchange.com/questions/157049/calculate-the-hafnian-as-quickly-as-possible relative to https://the-walrus.readthedocs.io/en/latest/hafnian_tutorial.html and Fig. 5 of https://arxiv.org/pdf/1805.12498.pdf appear more relevant.

It seemed possible that I was misinterpreting the much faster claims of the codegolf, so I downloaded a copy of the C++ source from the "miles" submission, modified the datatype (TYPE) from int to double, and disabled multithreading by making S equal to 0. I then generated a random 52 x 52 matrix by drawing entries independently from the symmetric normal distribution, adding the matrix to its transpose, and then clipping entries to {-1, 0, 1} by mapping (-infinity, -1] to -1, (-1, 1) to 0, and [1, infinity) to 1. Running with a single core on my laptop took 67 seconds with Miles's code compiled with -O3.
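For reproducibility, a rough sketch of that matrix generation (the helper name and seed are mine, not from either codebase):

    // Sketch of the test-matrix construction described above: draw a 52 x 52
    // matrix with independent normal entries, symmetrize it by adding its
    // transpose, then clip each entry to {-1, 0, 1}.
    #include <random>
    #include <vector>

    std::vector<double> make_clipped_symmetric(int n, unsigned seed) {
        std::mt19937 gen(seed);
        std::normal_distribution<double> dist(0.0, 1.0);
        std::vector<double> a(n * n);
        for (auto &x : a) x = dist(gen);

        std::vector<double> m(n * n);
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j) {
                const double s = a[i * n + j] + a[j * n + i];  // A + A^T
                m[i * n + j] = (s <= -1.0) ? -1.0 : (s >= 1.0 ? 1.0 : 0.0);
            }
        return m;
    }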

As an aside, I originally tried for a 104 x 104 matrix, but Miles's code led to a segfault.

I then modified the examples/example.cpp driver in thewalrus by adding #include <chrono> and appending the following to the main function, just before return 0;:

    {
      std::vector<double>  mat{
        // The 52^2 {-1, 0, 1} entries of the 52 x 52 matrix go here.
      };
      std::cout << "Starting 52 x 52 hafnian calculation." << std::endl;
      auto start = std::chrono::steady_clock::now();
      double hafval = libwalrus::loop_hafnian(mat);
      auto end = std::chrono::steady_clock::now();
      std::chrono::duration<double> elapsed_seconds = end - start;

      // print out the result
      std::cout << hafval << "(" << elapsed_seconds.count() << " seconds.)"
                << std::endl;
    }

I recompiled with g++ example.cpp -std=c++11 -O3 -Wall -I/usr/include -I../include -I/home/poulson/.local/eigen3 -fopenmp -march=native -c and the code has been running (with OMP_NUM_THREADS=1) for quite some time (more than 10 minutes). I can edit in the time when it completes.

Is it possible that the code from miles in the codegolf is incorrect? I am happy to share the random 52 x 52 {-1, 0, 1} matrix I generated and the results from his submission.

poulson commented 5 years ago

It has been another 21 minutes and thewalrus is still running. I also read a bit more and Miles's code is an implementation of Algorithm 2 of https://arxiv.org/pdf/1107.4466.pdf, which you cite as the "Recursive Algorithm" from [5] in https://the-walrus.readthedocs.io/en/latest/algorithms.html.

However, I see that this algorithm is only defined for non-loop hafnians, and I am comparing it against the loop hafnian. I will redefine a random symmetric 52 x 52 matrix with zero diagonal and retest with Miles's implementation of the recursive algorithm against your thewalrus::hafnian, rather than thewalrus::loop_hafnian.

Please excuse the confusion!

nquesada commented 5 years ago

Hi @poulson. The code from @eklotek (aka "miles") is correct. The algorithm he implemented, which comes from a paper by A. Bjorklund, scales like O(n^5 2^{n/2}) where n is the size of the matrix. The code from https://arxiv.org/pdf/1805.12498.pdf scales like O(n^3 2^{n/2}), which is asymptotically faster, but in real life (not n \to \infty) the algorithm implemented by miles and derived by A. Bjorklund in 2012 is faster. This algorithm, derived by A. Bjorklund in 2012 and referred to as the recursive algorithm in the documentation of The Walrus, is the default option for calculating hafnians. That being said, you should not compare against the codegolf one because, at least when called from Python, all the calculations are done in quad precision. As a matter of fact, the algorithm used by The Walrus is a quad-precision, OpenMP-parallelized version of the Bjorklund 2012 algorithm implemented by miles. This is all acknowledged in the source code: https://github.com/XanaduAI/thewalrus/blob/master/include/recursive_hafnian.hpp

Hope this clarifies the confusion!

nquesada commented 5 years ago

Just saw your latest message. The problem with loop hafnians is that, as far as I know (and I asked A. Bjorklund about this), there is no generalization of the "recursive" algorithm to loop hafnians.

poulson commented 5 years ago

I looked through the linked source code and you seem to mean long double when you say "quad precision", but, according to the Wikipedia entry on long double: "With gcc on Linux, 80-bit extended precision is the default". This has been my experience in practice, and it seems that only older chips with hardware 128-bit arithmetic had compilers mapping long double to 128-bit IEEE.

I recompiled my modification of Miles's source code to use TYPE as long double and saw the runtime for the 52 x 52 matrix increase from ~1 minute 20 seconds to ~9 minutes 30 seconds, with the same answer produced in both cases.

But, to be fair, I am calling examples/example.cpp with an std::vector<double> input and believe it is doing its work using double, not long double.

I have been testing the exact same matrix against thewalrus::hafnian (not thewalrus::loop_hafnian) and it seems to be taking much more time. I now realize that this is due to what you mentioned above: thewalrus::hafnian uses the asymptotically faster algorithm, and one must explicitly call thewalrus::hafnian_recursive to get the Bjorklund 2012 algorithm.

When I modified the example driver to call thewalrus::hafnian_recursive, it took about 9 minutes, though this is on top of the loop_hafnian and hafnian routines still running, so take this with a grain of salt. Either way, this is noticeably slower than the Miles code run with TYPE equal to double (about a minute and 20 seconds).

One simple optimization it seems you could make -- please correct me if I am wrong -- is to change the signature of https://github.com/XanaduAI/thewalrus/blob/master/include/recursive_hafnian.hpp#L44,

template <typename T>
inline T recursive_chunk(std::vector<T> b, int s, int w, std::vector<T> g, int n)

to pass b and g as const references rather than by value since you don't seem to modify either in the routine.
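For concreteness, a minimal sketch of the const-reference version of that signature (the parameter list is copied from the linked header):

    template <typename T>
    inline T recursive_chunk(const std::vector<T> &b, int s, int w,
                             const std::vector<T> &g, int n)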

EDIT: Switching to const references knocked the hafnian_recursive time down from 8 minutes to 7.5 minutes and preserved the result.

poulson commented 5 years ago

I think it would be fair to say that it is the responsibility of the unqualified interfaces (i.e., hafnian and loop_hafnian) to route to the best algorithms based upon the matrix dimensions. There is such a large difference in runtime that such a simple fix seems worthwhile.

EDIT: For context, both loop_hafnian and hafnian took about three hours on the 52 x 52 matrix. The former was 11,188 seconds, and the latter 10,473 seconds. hafnian_recursive took 480 seconds as-is, 450 seconds with the const reference change mentioned above, and the @eklotek implementation took about 80 seconds.

poulson commented 5 years ago

I believe the rest of the time difference -- from @eklotek's 80 seconds up to the ~450 second "const reference" version of hafnian_recursive -- is largely explained by the reordering of the loops in the main computational kernel, which were originally:

        for (int u = 0; u < n; u++) {
            TYPE *d = e+u+1,
                  p = g[u], *x = b+(T(s)-1)*m;
            for (int v = 0; v < n-u; v++)
                d[v] += p*x[v];
        }

        for (int j = 1; j < s-2; j++)
            for (int k = 0; k < j; k++)
                for (int u = 0; u < n; u++) {
                    TYPE *d = c+(T(j)+k)*m+u+1,
                          p = b[(T(s-2)+j)*m+u], *x = b+(T(s-1)+k)*m,
                          q = b[(T(s-2)+k)*m+u], *y = b+(T(s-1)+j)*m;
                    for (int v = 0; v < n-u; v++)
                        d[v] += p*x[v] + q*y[v];
                }

In The Walrus, the majority of array accesses are no longer unit stride, leading to large performance degradation from decreased cache reuse. That code segment seems to have been translated to

    for (u = 0; u < n; u++) {
        for (v = 0; v < n - u; v++) {
            e[u + v + 1] += g[u] * b[v];

            for (j = 1; j < s - 2; j++) {
                for (k = 0; k < j; k++) {
                    c[(n + 1) * (j * (j - 1) / 2 + k) + u + v + 1] +=
                        b[(n + 1) * ((j + 1) * (j + 2) / 2) + u]
                        * b[(n + 1) * ((k + 1) * (k + 2) / 2 + 1) + v]
                        + b[(n + 1) * (k + 1) * (k + 2) / 2 + u]
                        * b[(n + 1) * ((j + 1) * (j + 2) / 2 + 1) + v];
                }
            }
        }
    }

with the u loop now on the outside, and the k loop on the interior. Notice that the updates to c are now with stride n+1 instead of stride 1, and some of the reads of b are also now of highly non-uniform stride.

If you preserve the original data access patterns, I suspect you will see a substantial performance improvement.
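If it is useful, here is one way the restored ordering might look -- a sketch only, with the same index arithmetic but the loop nest rearranged and loop-invariant offsets hoisted so that the innermost index v gives unit-stride access to c and b:

    // Same accumulations as the translated kernel above, reordered so that v is
    // innermost; the offsets cjk, bj, bk do not depend on u or v and are hoisted.
    for (u = 0; u < n; u++)
        for (v = 0; v < n - u; v++)
            e[u + v + 1] += g[u] * b[v];

    for (j = 1; j < s - 2; j++) {
        for (k = 0; k < j; k++) {
            const int cjk = (n + 1) * (j * (j - 1) / 2 + k);
            const int bj = (n + 1) * ((j + 1) * (j + 2) / 2);
            const int bk = (n + 1) * ((k + 1) * (k + 2) / 2);
            for (u = 0; u < n; u++) {
                const auto p = b[bj + u], q = b[bk + u];
                for (v = 0; v < n - u; v++)  // unit stride in c and b
                    c[cjk + u + v + 1] += p * b[bk + (n + 1) + v]
                                        + q * b[bj + (n + 1) + v];
            }
        }
    }

Since every iteration only accumulates into c, reordering the loops does not change the result.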

eklotek commented 5 years ago

I'm glad to see that code is still of some interest. I'll try to address a few of the concerns. This is all from memory so some parts may be a bit fudged.

The segfault you experienced is most likely due to how I used stack space to store the submatrices that are visited recursively. Switching to malloc there shouldn't hit that problem, but it will be a bit slower given the 2^n calls to malloc/free on small matrices at the leaves.

The large performance speedup that algorithm 2 demonstrates is from reusing the previously "squeezed" submatrices instead of only applying the squeeze at the end (leaves of the binary tree). Since the values of n are small (even n = 50 for a 100x100 matrix would take over a month on a standard desktop) most speedups aren't realized due to large constant overhead from the setup/initialization/convergence. Using FFT/Karatsuba (in my attempts) doesn't improve convolution speed over the simple discrete pattern (which is also an easy target for vectorization).

The convolution pattern was used after noticing that vectorization wasn't happening well, since the data visited had a non-simple access pattern from the four nested loops. Their version hoists the two innermost loops out to the top, which hurts because the matrix is no longer being accessed with stride 1. Transposing the matrix could be beneficial.

Thanks for adopting the code even though I never made a proper commit, and thanks for investigating its runtime in more depth. It was originally only intended for integer use, but the TYPE define was for experimenting with floats for FMA vectorization.

nquesada commented 5 years ago

Hey @poulson: I looked into the problem of having "overflows" in examples/example.cpp when you use all-ones matrices of size ~30. It turns out it is not an overflow; it is a problem with diagonalizing the very special matrices you get when you pass an all-ones matrix. In this case the matrices that pow_trace has to diagonalize are also all-ones matrices, and for some reason Eigen does not get the right answer. We observed the same problem back in the day when we called LAPACKE from C. To test this I created a matrix with all ones off the diagonal and other values on the diagonal. Since the diagonal does not matter, you should get the same result, (n-1)!! for an n x n matrix, and indeed in this case you do get the right result. Here is the code:

#include <iostream>
#include <complex>
#include <vector>
#include <libwalrus.hpp>
#include <stdlib.h>
#include <time.h>

int main() {

  int m = 14;
  // create a 2m x 2m all-ones matrix
  int n = 2 * m;
  std::vector<std::complex<double>> mat(n * n, 1.0);

  for (int i = 0; i < n; i++) {
    mat[n * i + i] = 1.0 / (1.0 + i);  // set to 1.0 to recover the bug!
  }

  // calculate the hafnian
  std::complex<double> hafval = libwalrus::hafnian(mat);
  // print out the result
  std::cout << n << " " << hafval << std::endl;

  return 0;
}

There are at least two ways to work around this. One is to change the signatures of all the functions to pass along a success/fail flag coming from the diagonalization routines. Another is to internally change the values on the diagonal of any input matrix so that they are not all equal.
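For the second workaround, a minimal sketch (it reuses the diagonal overwrite from the snippet above; since the non-loop hafnian is insensitive to the diagonal, any distinct values will do):

    // Before handing mat to libwalrus::hafnian, replace an all-equal diagonal
    // with distinct values so the internal diagonalization never sees the
    // problematic all-ones matrix. The (non-loop) hafnian is unchanged.
    for (int i = 0; i < n; i++)
        mat[n * i + i] = 1.0 / (1.0 + i);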

poulson commented 5 years ago

@nquesada I would be very surprised if the same bug appeared in LAPACK's Aggressive Early Deflation Hessenberg QR as in Eigen's adaptation of EISPACK via JAMA.

It is worth noting that -- somewhat ironically -- Eigen's generic eigensolver is grossly suboptimal for even moderate sized matrices. But maybe this is irrelevant for you given the small matrices.

But, at this point, my main concern is that your library is currently not justified in its claim to be the "fastest" calculation of "hafnians, Hermite polynomials and Gaussian boson sampling", nor in the GitHub repo's claim of being "The fastest exact library for hafnians, torontonians, and permanents for real and complex matrices."

I recommend modifying the recursive hafnian formulation to use unit stride access in the manner originally used by @eklotek (or using its transpose) and making this algorithm the default when one calls thewalrus::hafnian, and modifying the GitHub repo's title to only claim the fastest hafnian computation.

katyhuff commented 5 years ago

@PhilipVinc have you been able to start your review process?

poulson commented 5 years ago

@whedon generate pdf

whedon commented 5 years ago
Attempting PDF compilation. Reticulating splines etc...
whedon commented 5 years ago

:point_right: Check article proof :page_facing_up: :point_left:

nquesada commented 5 years ago

@whedon generate pdf

whedon commented 5 years ago
Attempting PDF compilation. Reticulating splines etc...
whedon commented 5 years ago

:point_right: Check article proof :page_facing_up: :point_left:

poulson commented 5 years ago

@whedon check references

whedon commented 5 years ago
Attempting to check references...
whedon commented 5 years ago

OK DOIs

- 10.1007/BF02781659 is OK
- 10.1007/978-3-319-51829-9 is OK
- 10.1016/0304-3975(79)90044-6 is OK
- 10.1017/S0013091500011299 is OK
- 10.1103/PhysRevA.100.032326 is OK
- 10.1063/1.5086387 is OK
- 10.1103/PhysRevA.100.022341 is OK
- 10.1145/3325111 is OK
- 10.1145/3325111 is OK
- 10.1103/PhysRevA.98.062322 is OK
- 10.1103/PhysRevA.50.813 is OK
- 10.1016/j.jmva.2007.01.013 is OK
- 10.1002/(SICI)1098-2418(1999010)14:1<29::AID-RSA2>3.0.CO;2-X is OK
- 10.22331/q-2019-03-11-129 is OK

MISSING DOIs

- https://doi.org/10.1137/1.9781611973099.73 may be missing for title: Counting perfect matchings as fast as Ryser
- https://doi.org/10.1088/0305-4470/34/31/312 may be missing for title: Multi-dimensional Hermite polynomials in quantum optics

INVALID DOIs

- None
nquesada commented 5 years ago

The missing DOIs have been added in https://github.com/XanaduAI/thewalrus/pull/71

poulson commented 5 years ago

I am happy to recommend the paper for publication at this point and hope that @PhilipVinc can provide their feedback soon.

nquesada commented 5 years ago

Thanks for refereeing our paper @poulson! Your comments have been really useful.

katyhuff commented 5 years ago

@nquesada Thank you for engaging actively in this process. Earlier this week I contacted @PhilipVinc via email in case notifications were not reaching them. Hopefully that review is forthcoming! I will keep you updated.

nquesada commented 5 years ago

@whedon generate pdf

whedon commented 5 years ago
Attempting PDF compilation. Reticulating splines etc...
whedon commented 5 years ago

:point_right: Check article proof :page_facing_up: :point_left:

nquesada commented 5 years ago

@katyhuff did you hear back from @PhilipVinc ?

katyhuff commented 5 years ago

@nquesada I'm sorry to say I have not. @PhilipVinc is not responding to my emails. I am going to mention a few people in this issue who may be able to review instead.

katyhuff commented 5 years ago

Dear Lei Wang (@wangleiphy) I'm emailing to request your expertise in quantum computational physics. Specifically, I'd like to ask for your review of a submission to the Journal of Open Source Software. I'm one of the editors for JOSS and am in need of a reviewer to evaluate The Walrus: A library for the calculation of hafnians, Hermite polynomials and Gaussian boson sampling. Usually, I would request your review earlier in the process, but this time, we're halfway through the review and we need a replacement reviewer who would be able to look at the software in the next week or two. It is a specialized utility and, because of your quantum computing expertise, I think you would be a phenomenal reviewer. Here is some information about the submission:

Here is what you would be agreeing to. First of all, JOSS is not a regular journal. Given your involvement in open-source scientific software development, I imagine you may have heard of it already, but if you haven't, perhaps you'll like the concept: it's an open, developer-friendly journal for open-source research software. The review process is focused on best practices, and emphasizes reviewing the code, rather than reviewing what is written about the code. If the authors have done a good job developing their software, the process should be extremely rapid. Much more information is available in our reviewer documentation.

What do you think?

Would you be willing to review for this paper? I would be delighted for your help! Please let me know in the next few days whether you are available.

Best, Katy Huff

katyhuff commented 5 years ago

Dear Amit Kumar Jaiswal (@amitkumarj441) I'm emailing to request your expertise in quantum computational physics. Specifically, I'd like to ask for your review of a submission to the Journal of Open Source Software. I'm one of the editors for JOSS and am in need of a reviewer to evaluate The Walrus: A library for the calculation of hafnians, Hermite polynomials and Gaussian boson sampling. Usually, I would request your review earlier in the process, but this time, we're halfway through the review and we need a replacement reviewer who would be able to look at the software in the next week or two. It is a specialized utility and, because of your quantum computing expertise, I think you would be a phenomenal reviewer. Here is some information about the submission:

Here is what you would be agreeing to. First of all, JOSS is not a regular journal. Given your involvement in open-source scientific software development, I imagine you may have heard of it already, but if you haven't, perhaps you'll like the concept: it's an open, developer-friendly journal for open-source research software. The review process is focused on best practices, and emphasizes reviewing the code, rather than reviewing what is written about the code. If the authors have done a good job developing their software, the process should be extremely rapid. Much more information is available in our reviewer documentation.

What do you think?

Would you be willing to review for this paper? I would be delighted for your help! Please let me know in the next few days whether you are available.

Best, Katy Huff

katyhuff commented 5 years ago

Dear Xuemei Gu (@XuemeiGu), I'm emailing to request your expertise in quantum computational physics. Specifically, I'd like to ask for your review of a submission to the Journal of Open Source Software. I'm one of the editors for JOSS and am in need of a reviewer to evaluate The Walrus: A library for the calculation of hafnians, Hermite polynomials and Gaussian boson sampling. Usually, I would request your review earlier in the process, but this time, we're halfway through the review and we need a replacement reviewer who would be able to look at the software in the next week or two. It is a specialized utility and, because of your quantum computing expertise, I think you would be a phenomenal reviewer. Here is some information about the submission:

Here is what you would be agreeing to. First of all, JOSS is not a regular journal. Given your involvement in open-source scientific software development, I imagine you may have heard of it already, but if you haven't, perhaps you'll like the concept: it's an open, developer-friendly journal for open-source research software. The review process is focused on best practices, and emphasizes reviewing the code, rather than reviewing what is written about the code. If the authors have done a good job developing their software, the process should be extremely rapid. Much more information is available in our reviewer documentation.

What do you think?

Would you be willing to review for this paper? I would be delighted for your help! Please let me know in the next few days whether you are available.

Best, Katy Huff

XuemeiGu commented 5 years ago

Dear Openjournals/Joss-Reviews and Katy,

Thank you very much for the invitation to review the paper, but I am sorry to say that I have no time for reviewing; I am busy implementing some quantum experiments and a few other things. Sorry about that, and I wish you all the best.

best wishes, Xuemei Gu


katyhuff commented 5 years ago

Thanks for letting us know @XuemeiGu .

poulson commented 5 years ago

@katyhuff FWIW, I don't think any quantum computing knowledge is required to review this software; I believe anyone on the mathematical side of numerical linear algebra would be a suitable reviewer, especially anyone with experience computing permanents or with a bit of group theory knowledge (to understand the generalization from determinants to immanants using characters of the symmetric group).

amitkumarj441 commented 5 years ago

Dear @katyhuff,

Thanks for the invitation to review.

I have been buried in preparing reports and reviews since last week. I have had a quick look at the paper and hope to send my review early next week.

Apologies for the delay in response.

Best Regards, Amit Kumar Jaiswal

katyhuff commented 5 years ago

@amitkumarj441 Excellent news. Thank you for agreeing to review. A JOSS review primarily involves checking submissions against a checklist of essential software features and details in the submitted paper and code. Please see the checklist at the top of this issue for your to-do list regarding the review. More information about the review process can be found in our reviewer documentation. Also, if you have not already, please fill out your information at this signup form so that we can officially put you on our reviewer list: https://joss.theoj.org/reviewer-signup.html