modelsconf2018 / artifact-evaluation


[shin] Hardware-in-the-Loop Testing of CPS #7

Open grammarware opened 6 years ago

grammarware commented 6 years ago

Submitted by @seungyeob to https://github.com/modelsconf2018/artifact-evaluation/tree/master/shin

Paper: http://orbilu.uni.lu/handle/10993/36092

zolotas4 commented 6 years ago

Hi,

In the https://figshare.com/s/240acc73c40ec949f27a artefact repository I cannot find any instructions on how to work with the artefact. There is only a small description for each of the files provided in the zip. Could you please provide instructions on how to run the artefacts (software needed, dependencies, how to import, how to execute, etc.)?

Thanks, Thanos

grammarware commented 6 years ago

Ping @seungyeob

seungyeob commented 6 years ago

Dear Thanos (@zolotas4),

Our artifacts include the HITECS language grammar and the case study models, since these are the main contributions of our paper. We did not include any runnables in our artifacts. Specifically, the zip file that you mentioned contains the (sanitized) HITECS specifications of our case study, which are readable in any text editor. The artifact shows how to use the HITECS language via concrete examples and helps readers of our paper gain a better understanding of it.

Regards, Seung Yeob

zolotas4 commented 6 years ago

Hi Seung (@seungyeob),

Thanks for your reply.

So, if I would like to use your proposed language, how can I do so? And if I want to replicate your experiments, how can I do that?

Regards, Thanos

jdirocco commented 6 years ago

Hi @zolotas4, by creating a new Xtext project, the provided HITECS Xtext grammar (the file with the .xtext extension) can be used to generate an editor and related facilities that enable the creation of HITECS specifications. See https://www.eclipse.org/Xtext/documentation/102_domainmodelwalkthrough.html for how to generate the HITECS editor.
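For readers unfamiliar with Xtext, a grammar file has roughly the following shape. This is a generic illustrative skeleton, not the actual HITECS grammar; all rule names, keywords, and the namespace URI below are invented for the example:

```
// Illustrative Xtext grammar skeleton (hypothetical names, not the real HITECS.xtext)
grammar org.example.hitecs.HITECS with org.eclipse.xtext.common.Terminals

generate hitecs "http://www.example.org/hitecs/HITECS"

// A specification is a sequence of test-case declarations
Model:
    testCases+=TestCase*;

// Each test case has a name and a body of statements
TestCase:
    'testcase' name=ID '{'
        statements+=Statement*
    '}';

Statement:
    name=ID ';';
```

Placing such a .xtext file in a new Xtext project and running the "Generate Xtext Artifacts" workflow (the MWE2 file created by the project wizard) produces the parser, editor, and related tooling mentioned above.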

jdirocco commented 6 years ago

The artefacts provided are part of the paper presenting an approach to help engineers systematically specify and analyze CPS HiL test cases. Summarizing, the artifact contribution consists of three parts: the HITECS Xtext grammar, the case-study specifications, and two Excel files with the experimental data.

(1) Is the artifact consistent with the paper? Yes. The HITECS grammar provided with the artifacts can be used to define HITECS specifications. I suggest providing the whole project that contains the HITECS grammar instead of just the .xtext file. Nevertheless, by creating a new Xtext project, the provided grammar allows generating an editor and facilities that enable the creation of HITECS specifications. The generated editor then makes it possible to open the HITECS specifications included in the archive.

(2) Is the artifact as complete as possible? No. It does not allow replicating the evaluation experiments that support the approach. Moreover, as noted above, the Xtext project is partial: it includes only the grammar definition.

(3) Is the artifact well-documented? The documentation is good, and each artifact comes with a dedicated description. Since only the Xtext grammar is given, the documentation should be improved with information on how one can create an editor for writing HITECS specifications.

(4) Is the artifact easy to (re)use? It requires Xtext knowledge to generate the corresponding editor. Moreover, the Excel files do not provide any guidance for replicating the experiments with the artifacts in the repository.

zolotas4 commented 6 years ago

Thanks @md2manoppello!

seungyeob commented 6 years ago

Dear Thanos (@zolotas4) and Juri (@md2manoppello),

We were not aware that the artifact evaluation requires executable tools. At this moment, the repository contains our case study material and some additional artifacts that help readers gain a better understanding of our paper. We are currently working on a tool paper, based on the approach presented in our research paper, to be submitted to the MODELS 2018 tools and demonstrations track. We will also try to update the repository to include executables and instructions for running them.

Thanks and best regards, Seung Yeob

grammarware commented 6 years ago

Dear @seungyeob,

Artefact evaluation requires artefacts, clearly described in their roles and purposes. If they are tools, they should be runnable, if they are models, they should be compatible with the framework used to define them, if they are datasets, they should conform to the claims in the paper, etc. The only confusion so far came from the lack of clear positioning of the artefacts in general and with respect to the paper.

Indeed, if you want to focus on the tool itself, the tools&demos track is where you need to be. Good luck there!

Yours, Vadim.

hernanponcedeleon commented 6 years ago

Summary

This artifact accompanies a paper proposing a method to help engineers in specifying and analysing hardware-in-the-loop test cases. The authors develop an executable domain specific language (DSL) and provide methods to check if test cases are well-behaved and estimate the execution times of test cases.

The artifact consists of:

- the HITECS Xtext grammar (HITECS.xtext);
- the (sanitized) HITECS specifications of the case study (specifications.zip);
- two Excel files with the experimental data.

Assessment

Coverability: Met expectations

All the (relevant) figures and tables of the paper are covered by the artifact. Figure 3 seems to be covered by the file HITECS.xtext. However, I do not find a one-to-one mapping between the figure and the artifact. Some resources, such as components, test cases, and schedulers, are part of both the figure and the grammar; others, such as Assertions or Oracle, are not. It could be that these resources are part of the UML testing profile (HITECS builds on top of it), but this is not clear. In any case, the paper gives the impression that Figure 3 shows the specific resources of HITECS, so I would have expected these resources to be part of its grammar. Specific specifications, such as those shown in Figures 4, 5 and 7, are given in the specifications.zip file. The data for Table 8 and Figures 11 and 12 is given in the two Excel files.

Packaging: Met expectations

The artifact is easy to download. Since it only consists of files that need to be checked manually, not much more was expected from the packaging.

Reproducibility: Fell below expectations

I would say that the part of the artifact that could be considered for a reproducibility assessment is the generation of the data shown in Table 8 and Figures 11 and 12. However, the authors do not provide a way to generate this data; they only provide the data itself.

Documentation: Met expectations

The artifact is well documented with a clear explanation to relate each part of the artifact with its counterpart in the paper.

zolotas4 commented 6 years ago

Summary The authors provide the grammar of an executable DSL called HITECS that can be used for the specification of hardware-in-the-loop test cases for cyber-physical systems. Besides the .xtext file that contains the grammar, the authors also provide some specifications that conform to it and two spreadsheets containing the results of the experiments presented in the paper.

Is the artefact consistent with the paper? Yes, the artefact is consistent with the paper.

Is the artefact as complete as possible? No, the artefact is not as complete as possible as the infrastructure for running the experiments is not provided.

Is the artefact well-documented? The documentation is good and complements the information provided in the paper. However, the artefact documentation should be improved by providing information on how one can create an editor for writing HITECS specifications, as this is not trivial for those without previous experience with Xtext.

Is the artefact easy to (re)use? The artefact is not easy to re-use for someone without previous experience with Xtext. In addition, the experiment results cannot be reproduced, as there is no guidance on how to do so and the infrastructure used to run the experiments (e.g., the simulator) is not provided.

grammarware commented 6 years ago

Dear @seungyeob,

The Artifact Evaluation Committee of MoDELS 2018 has concluded that this artifact does not meet expectations and is not approved for inclusion as auxiliary material for the publication. We are sorry to inform you of this, but all three reviewers emphasised the incomplete nature of the artifact. After re-examining the paper that this artifact was meant to accompany, we also feel that the ability to follow the execution path shown in Figure 8 of the paper is crucial for the usefulness of the artifact. The grammar on its own, together with the specification codebase, does not constitute a substantial addition to the paper as it stands, and thus does not merit an artifact badge.

We second and support your own judgement that submitting a tool demo paper is the right course of action, and wish you the best of luck with that.