
ICLR Reproducibility Challenge 2019
https://reproducibility-challenge.github.io/iclr_2019/

Submission for issue #100 (#133)

Open hosseinhejazian opened 5 years ago

hosseinhejazian commented 5 years ago

The issue number is #100

reproducibility-org commented 5 years ago

Hi, please find below a review submitted by one of the reviewers:

Score: 7. Reviewer 2 comment: The report presents a reproducibility study on the effectiveness of Gaussian Process Latent Variable Models (GP-LVM) for estimating time-varying covariance matrices, which are used in downstream tasks in the finance domain. The report clearly presents the problem statement and explains the technical details of GP-LVM in good detail. However, the report focuses perhaps a bit too much on the implementation issues faced; the various technical challenges encountered during implementation are unnecessary in a formal report and should have gone into the appendix.
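For context, here is a minimal sketch of the kind of GP-LVM-based covariance estimate being reproduced: each asset gets a latent coordinate, and the kernel evaluated between the learned coordinates (plus observation noise) serves as the asset covariance matrix. It uses GPy with randomly generated placeholder returns; the shapes, kernel choice, and parameter values are illustrative assumptions and not the report's or the original paper's code.

```python
# Minimal sketch (not the authors' code): estimate an asset covariance
# matrix with a GP-LVM. Each asset is one data point whose observation is
# its return series; the kernel over the learned latent coordinates, plus
# the learned noise variance, gives the covariance estimate.
import numpy as np
import GPy

n_assets, n_days, latent_dim = 20, 250, 2        # illustrative sizes
returns = np.random.randn(n_assets, n_days)      # placeholder returns (assets x days)

kernel = GPy.kern.RBF(input_dim=latent_dim, ARD=True)
model = GPy.models.GPLVM(returns, latent_dim, kernel=kernel)
model.optimize(messages=False, max_iters=1000)

X = np.asarray(model.X)                          # learned latent coordinate per asset
noise = float(model.likelihood.variance)         # learned Gaussian noise variance
cov_estimate = kernel.K(X) + noise * np.eye(n_assets)   # (n_assets x n_assets) covariance
```

The original paper works with a Bayesian treatment and real market data; the sketch only shows where the covariance matrix comes from in such a model.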

The report is somewhat lacking in hyperparameter search, but makes up for it with a thorough treatment of the ablation studies. That said, given the small dataset size, I suspect running each experiment would not have taken too much time either.

The discussion is comprehensive and thorough, and the report includes good reproducibility feedback for the authors of the original paper, although some of the points are overdone (such as typos and missing references). Confidence: 3

reproducibility-org commented 5 years ago

Hi, please find below a review submitted by one of the reviewers:

Score: 8. Reviewer 1 comment: [Problem statement] The method proposed by the original paper is clear and well explained. The problem may lack some motivation (i.e., why linear models are not enough), but the overall view of the paper is clear.

[Code] The authors of this report tried to use the code provided by the authors of the original paper. They explored a lot of hyperparameters, which produces an almost exhaustive analysis of the influence of the different parameters. However, the team did not submit their reproduction code. It would be good to report whether much of the original code had to be changed, and to put it online.

[Communication with original authors] Some communication took place between the two sets of authors on the OpenReview platform.

[Hyperparameter Search] Very good hyperparameter search. A lot of detail is given on which parameters influence which behaviors.

[Discussion on results] Unfortunately, the results were not always in accordance with the original paper. One big factor may be that this team had to reduce the size (dimension) of the data used in the original paper due to limited computing resources. Overall, the discussion is quite good, and the authors of this report even provide a set of suggestions to the authors of the original paper. This is quite nice and shows that the team made a great effort to understand the original work.

[Overall organization and clarity] This report is well structured and the text is easy to read, even though the topic covered can get quite theoretical. However, I sometimes found it tedious to go to the appendix every time I wanted to see a graph mentioned in the report. I understand that the appendix is a great place to put extra information in order to keep the main text compact, but including a few of the most important figures in the body of the report would help the reader "visualize" what is described in the text.

Very good report overall. Confidence: 4