modelsconf2018 / artifact-evaluation


[marussy] Incremental View Model Synchronisation #14

Open grammarware opened 6 years ago

grammarware commented 6 years ago

Submitted by @kris7t to https://github.com/modelsconf2018/artifact-evaluation/tree/master/marussy

manuelleduc commented 6 years ago

Dear @kris7t,

I am currently trying to generate the report from the viewmodel-data-analysis-results-short.Rmd R markdown report.

The execution of the benchmark worked nicely. However, when running the knit generation of the report, I obtain the following error:

Quitting from lines 429-435 (viewmodel-data-analysis-results-short.Rmd) 
Error in -c("modificationMix", "experiment") : 
  invalid argument to unary operator
Calls: <Anonymous> ... select.data.frame -> select_vars -> map_if -> map -> lapply -> FUN
Execution halted

I am working with RStudio 1.1.442 on Ubuntu 18.04, with R 3.4.4 and tidyverse 1.2.1.

Please let me know if you need any additional information.

Thanks, Manuel

kris7t commented 6 years ago

Dear @manuelleduc ,

On my machine, I can run the report generation inside RStudio 1.1.453 (that shouldn't matter too much) on Arch Linux (that shouldn't matter either) with R 3.5.0 (possible culprit?) and tidyverse 1.2.1 (at least that's the same). To be more precise,

> sessionInfo()
R version 3.5.0 (2018-04-23)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Arch Linux

Matrix products: default
BLAS/LAPACK: /usr/lib/libopenblas_nehalemp-r0.3.0.dev.so

locale:
 [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C               LC_TIME=en_US.UTF-8        LC_COLLATE=en_US.UTF-8     LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
 [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                  LC_ADDRESS=C               LC_TELEPHONE=C             LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
 [1] bindrcpp_0.2.2  forcats_0.3.0   stringr_1.3.1   dplyr_0.7.6     purrr_0.2.5     readr_1.1.1     tidyr_0.8.1     tibble_1.4.2    ggplot2_3.0.0   tidyverse_1.2.1

loaded via a namespace (and not attached):
 [1] Rcpp_0.12.17       RColorBrewer_1.1-2 highr_0.7          cellranger_1.1.0   pillar_1.2.3       compiler_3.5.0     plyr_1.8.4         bindr_0.1.1        tools_3.5.0       
[10] digest_0.6.15      lubridate_1.7.4    jsonlite_1.5       nlme_3.1-137       gtable_0.2.0       lattice_0.20-35    pkgconfig_2.0.1    rlang_0.2.1        psych_1.8.4       
[19] cli_1.0.0          rstudioapi_0.7     yaml_2.1.19        parallel_3.5.0     haven_1.1.2        withr_2.1.2        xml2_1.2.0         httr_1.3.1         knitr_1.20        
[28] hms_0.4.2          grid_3.5.0         tidyselect_0.2.4   glue_1.2.0         R6_2.2.2           readxl_1.1.0       foreign_0.8-70     modelr_0.1.2       reshape2_1.4.3    
[37] magrittr_1.5       scales_0.5.0       rvest_0.3.2        assertthat_0.2.0   mnormt_1.5-5       colorspace_1.3-2   stringi_1.2.3      lazyeval_0.2.1     munsell_0.5.0     
[46] broom_0.4.5        crayon_1.3.4

I could also get the code block you mention to run by replacing

 select(-c("modificationMix", "experiment")) %>%

with

 select(-c(modificationMix, experiment)) %>%

i.e. https://gist.githubusercontent.com/kris7t/a33c7d5b97f290c42ebd1725d792f14b/raw/c6692d2d0db6e7693ffc8b624b3913162e08892f/viewmodel-data-analysis-results.Rmd

Let me know if this quick fix resolves the issue! I'll then create a fixed revision of the Zenodo deposit (with a better-documented R environment).

Best, Kristóf

manuelleduc commented 6 years ago

The code substitution you proposed fixed the problem, thank you.

manuelleduc commented 6 years ago

Summary
The submitted artifact is part of a paper presenting a view model transformation approach. This approach proposes a reactive, incremental, validating, and inconsistency-tolerant transformation engine. Moreover, the proposed transformation language allows the safe composition of transformations. The paper evaluates the scalability of the proposed transformation language by applying it to two heterogeneous use cases.

Documented
An exhaustive list of the artifacts is provided in the documentation. The documentation is well structured and very detailed, allowing the user to execute the benchmarks either directly on a personal computer or on a dedicated server (e.g. an AWS cloud server).

Consistent
The provided artifacts are relevant to the evaluation proposed in the paper. The data produced by the benchmark are relevant to the research questions proposed in the paper's evaluation. The results obtained on my own development environment are comparable to the results of the paper.

Complete
All the use cases evaluated in the paper are backed by relevant artifacts.

Exercisable
The explanations are clear and allow easy replication of the paper's experiments. The artifacts follow Eclipse and Java standards. The results produced by the report generation are easily linked to the results of the paper.

Availability
A unique DOI has been generated on Zenodo with the exhaustive list of identified artifacts. The artifacts are well standardized and can be easily exploited by anybody familiar with Java and Eclipse.

Minor comments
I believe the runtime error I encountered during report generation will be documented in a future version of the artifact.

Bitico commented 6 years ago

Their approach supports the creation and handling of model views over heterogeneous large-scale models, and the proposed EMF-based tool is part of their work. That work has shown promising results when creating and loading views on huge models with respect to scalability, validation, and inconsistency tolerance.

(1) Is the artifact consistent with the paper? The artifacts are consistent with the paper, and their results have been benchmarked on a practical large-scale use case from the MegaMRt2 project, implementing a runtime/design time feedback loop.

(2) Is the artifact as complete as possible? Yes, it was quite easy to replicate their benchmarks and results.

(3) Is the artifact well-documented? All the artifacts are provided using Zenodo features, and an exhaustive README is provided so that the benchmarks can easily be replicated step by step.

(4) Is the artifact easy to (re)use? Yes, the artifact is easy to reuse. The authors also provided a zipped update site for the Eclipse plugin for the tool.

mherzberg commented 6 years ago

Artefact summary

The submitted artefact accompanies the paper "Incremental View Model Synchronization Using Partial Models", which presents a view model transformation approach that provides a compositional transformation language. The artefact consists of an Eclipse plugin project, benchmark configuration files and R scripts to generate the reports in the paper.

Consistency with the paper

I was able to run the 'short.json' benchmark and then use the provided R script to produce a report without any problems. The results in the reports matched the results in the paper. I would expect similar results for the other benchmarks.

Completeness of artefact

The artefact looks complete: it contains the compiled Eclipse projects, benchmark configurations, and report generation files to closely reproduce the results in the paper, as well as the source code and the models used.

Artefact documentation

The artefact contains documentation to run the benchmarks, set up the project as a user, and even set up the project for further development. The versions of the tools used (Eclipse, Java) are documented, too. I was able to build the project using Maven without problems.

Ease of reuse

The presented results are reproducible. The artefact is generally well documented, and it seems that the benchmark and project can be adjusted without too much effort.

grammarware commented 6 years ago

Dear @kris7t,

Based on all the comments and the reviews provided by the members of the Artifact Evaluation Committee of MoDELS 2018, we have reached the conclusion that this artifact conforms to the expectations and is hereby approved. Please use the badge instructions page to add the badge of approval to your article, and add the link to the Zenodo entry with DOI https://doi.org/10.5281/zenodo.1308969 to the camera ready version of the paper.

Thank you very much for putting extra effort into the preparation and finalisation of the artifact. If any of the comments above are still not addressed, please try to accommodate them before the conference.