openETCS / validation

WP4: Validation and verification strategy

Review of D4.2.1 (TWT) #171

Closed srieger closed 10 years ago

srieger commented 10 years ago

Section 1 (Institut Mines-Télécom)

Introduction

1.3.1 Modeling decisions

Section 1.3.2

Sections 1.4, 1.5 and 1.6

Section 1.7

General remarks

srieger commented 10 years ago

151

srieger commented 10 years ago

Section 3 (Uni Bremen)

Goals

Method/Approach

Results

Conclusions/Lessons learned

General remarks

srieger commented 10 years ago

Section 4 (Uni Rostock)

Method/Approach

Means

Results

Future Activities

The following comment addresses not the authors but the consortium in general:

srieger commented 10 years ago

Section 5 (Systerel)

Model Checking

Formal Proof

ProB

Table 2

Formal proof results

Model-Checking of Predicates

Object of Verification

Result

Verification processes applicable to a SCADE model

nhnghia commented 10 years ago

Dear Stefan, Thank you for reviewing the section of Institut Mines-Telecom. We have taken into account your comments in our new version. Here are some answers to your questions.

Introduction

Clearly state that the actual modelling in WP3 has not started yet, to indicate that we currently have no "real" input.

1.3.1 Modeling decisions

You state that normally a system is described by formulae which in my opinion is not true.

We would like to talk about existing modeling approaches that describe the system as formulae.

Description of the messages depicted in Figure 1 is missing

We added this in our new version

"The second aspect is related to the situation when, for some reason, exchange messages are lost. In order to deal with this situation we augment our model with timeout functions. If there is no input before the timeout expires then the player (RBC, a train under control, and/or the environment) has to make their own decision and for the safety issues usually it is the decision to stop the train." I believe that timeouts are mentioned only rarely in the standard. Do you have any indication of underspecification here? Especially it seems that in the standard lost messages do not seem to be taken into account properly in every situation.

Your model of train and RBC is very rough. Did you check whether it is conformant to the standard? How did you derive it?

Where in the standard can I find the safe position requirements stated on page 9? Please add a reference.

As you said above, currently there is no real input and our model is somewhat abstract. We need to refine it in the next steps, e.g., lost messages may happen when a train operates in Limited Supervision mode (Subset 26-4.4.19.3.1.1) or in Staff Responsible mode (Subset 26-4.11.1.2), timeout values can be used in Movement Authority (Subset 26-3.8), etc.
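
As a rough illustration of this timeout mechanism (a minimal Python sketch; the player, mode names and timeout value are invented, not taken from the IMT model or the standard):

```python
from enum import Enum, auto


class Mode(Enum):
    MOVING = auto()
    STOPPED = auto()


class TrainPlayer:
    """Player (e.g. a train under control) whose behaviour is augmented
    with a timeout: if no message arrives before the timeout expires,
    the safe decision is to stop the train."""

    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s      # illustrative value, not from the standard
        self.mode = Mode.MOVING
        self.last_msg_time = 0.0

    def on_message(self, now: float) -> None:
        # A message (e.g. a movement-authority update) arrived in time.
        self.last_msg_time = now

    def on_tick(self, now: float) -> None:
        # No input before the timeout expired: take the safe decision.
        if self.mode is Mode.MOVING and now - self.last_msg_time > self.timeout_s:
            self.mode = Mode.STOPPED


train = TrainPlayer(timeout_s=5.0)
train.on_message(now=1.0)
train.on_tick(now=3.0)    # within the timeout: keeps moving
train.on_tick(now=7.5)    # messages lost, timeout expired: stops
assert train.mode is Mode.STOPPED
```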

Write out abbreviations such as ETFSM and EFSM at least once. Their meaning is not clear to me.

We added their descriptions.

To me the purpose of the simulator is not clear. It seems to use an XML-based language, while the IF and SysML languages are considered for model checking.

Currently the simulator is generated from an XML description. We are studying how to generate it from a SysML description, which is also contained in an XML-based specification. The simulator is used to run some tests. It may be useful when we need to check a property against a model under some specific condition, since model checking checks the model exhaustively. Thus simulation-based testing can be considered a low-cost solution to check a model (against some test cases).
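
A minimal sketch of such simulation-based testing, assuming a made-up XML format and state machine rather than the simulator's actual input language:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML description of a tiny state machine; the format used by
# the actual simulator is different and richer.
SPEC = """
<machine initial="Idle">
  <transition from="Idle"   on="start" to="Moving"/>
  <transition from="Moving" on="stop"  to="Stopped"/>
</machine>
"""


def load_machine(xml_text):
    root = ET.fromstring(xml_text)
    table = {(t.get("from"), t.get("on")): t.get("to")
             for t in root.findall("transition")}
    return root.get("initial"), table


def run_trace(xml_text, trace):
    """Execute a single test trace and return the visited states."""
    state, table = load_machine(xml_text)
    visited = [state]
    for event in trace:
        state = table.get((state, event), state)  # ignore unexpected events
        visited.append(state)
    return visited


# Check a property on this specific run only -- unlike model checking,
# nothing is said about the other behaviours of the model.
assert run_trace(SPEC, ["start", "stop"])[-1] == "Stopped"
```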

How do you transform SysML models to Promela?

In fact, we are trying to use the IFx model checker to check IF specifications, hence a translation from SysML models to IF models, rather than to Promela, is being studied.

To me this section seems like a collection of possible verification techniques and corresponding tools. Some you applied to your example models (the testing techniques). Which ones did you apply/do you plan to apply in the future in WP4?

We have reformulated this section.

Section 1.4.1: How did you derive those requirements? The title of the section seems to be a bit misleading as the process of derivation is not described.

You state: "The model is a very close representation of the system specification provided by the standard." I strongly disagree. You have a model at an extremely high level of abstraction; the reference to the standard (from which parts did you extract your model) is missing.

I believe that your model can be seen as a demonstration that your proposed methods work, or maybe even of how they could be applied in the context of openETCS. But you should align with the other partners and go into much more detail when you specify a real test model.

Currently, our model is somewhat abstract. It will be refined in the next steps, e.g., our Moving mode can be refined into the TRIP mode of the OBU, or our Stop mode into the System Failure mode, ...

Provide your (preliminary) results and models via GitHub, e.g., create a user story as other partners have done.

We are going to do that.

srieger commented 10 years ago

Dear @nhnghia ,

thank you very much for your response to my comments. To me it clarifies a lot and helps me to understand your approach.

mgudemann commented 10 years ago

Dear @srieger,

I included many of your comments in the changes at the PR https://github.com/openETCS/validation/pull/179

You mention you checked only type-based properties, so what is the additional benefit with regards to a type-check.

In B, typing invariants have to be explicit. In addition, it is possible to specify dependent types which are not analyzable using simple type checking. This is why these invariants exist.
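
As a rough analogue outside of B (the class and names below are invented for illustration), an invariant whose bound is itself a state variable goes beyond what a simple type check can establish:

```python
from dataclasses import dataclass


@dataclass
class SectionState:
    n_sections: int      # number of track sections, itself a state variable
    occupied: set        # indices of occupied sections

    def invariant(self) -> bool:
        # Typing part: occupied contains natural numbers.
        typed = all(isinstance(i, int) and i >= 0 for i in self.occupied)
        # Dependent part: every index lies below n_sections, a bound that is
        # itself a variable -- beyond what a simple type check can establish.
        bounded = all(i < self.n_sections for i in self.occupied)
        return typed and bounded


assert SectionState(n_sections=4, occupied={0, 2}).invariant()
assert not SectionState(n_sections=4, occupied={5}).invariant()
```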

Why do you consider a different part of the standard here? Above you mentioned the procedure On-Sight.

The B-model covers the on-sight procedure. While I also developed an Event-B model for this, I presented another case-study for the Event-B model as safety properties had been identified for this section of Subset 026. This allowed for giving direct examples for the different V&V aspects.

What about the Scade Prover?

SCADE Prover only allows Boolean "is never true" safety properties. This requires a more complex formalization than using set-theoretic predicates or the temporal logic of other model checkers, and it is less expressive. Note also that the prover is not part of the certified aspect of SCADE.
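
A sketch of the observer-style encoding this implies, written in Python rather than Scade and using an invented bounded-response property: the property has to be folded into a single Boolean flag that must never become true.

```python
class ResponseObserver:
    """Observer that folds the property "every request is answered within
    max_steps steps" into a single Boolean that must never become true.
    A prover that only accepts "is never true" properties needs this kind
    of encoding instead of a direct temporal-logic formula."""

    def __init__(self, max_steps: int):
        self.max_steps = max_steps
        self.pending = None   # steps elapsed since the oldest open request
        self.error = False    # the flag the prover must show stays False

    def step(self, request: bool, answer: bool) -> None:
        if answer:
            self.pending = None
        elif self.pending is not None:
            self.pending += 1
            if self.pending > self.max_steps:
                self.error = True
        if request and self.pending is None and not answer:
            self.pending = 0


obs = ResponseObserver(max_steps=2)
for req, ans in [(True, False), (False, False), (False, True)]:
    obs.step(req, ans)
assert not obs.error   # the request was answered within two steps
```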

I hope this clarifies your comments and questions.

MariellePetitDoche commented 10 years ago

@srieger:

"Formal Proof

How do you specify properties to be verified? They are proof obligations as well and not generated by the POG."

Properties to verify are specified as invariants or as pre- or post-conditions of operations. Once they are specified, the corresponding proof obligations are automatically generated (by the POG) to verify them.
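
As a loose Python analogue with invented names (here the conditions are checked at run time, whereas in B they are discharged statically by proof):

```python
class Counter:
    """Python analogue of a B operation with a precondition and an invariant.
    In B these conditions become proof obligations discharged statically;
    here they are merely checked at run time for illustration."""

    LIMIT = 10

    def __init__(self):
        self.value = 0
        assert self.invariant()

    def invariant(self) -> bool:
        return 0 <= self.value <= self.LIMIT

    def increment(self, delta: int) -> None:
        # Precondition of the operation.
        assert delta > 0 and self.value + delta <= self.LIMIT
        self.value += delta
        # Invariant preservation -- in B, a generated proof obligation.
        assert self.invariant()


c = Counter()
c.increment(3)
assert c.value == 3
```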

""A proved model will always meet the safety and security qualifications ; however that doesn’t mean it will behave in regards to the specification! This is the domain of validation, as discussed in the introduction." Not clear to me what you mean by that. Verification is checking whether the implementation meets the specification while validation is checking whether the Specification is correct." This means the modelled is only verified (ie. each level of the Bmodel or the implementation verify a more abstract one, a specification, is meets) but not validated (ie. we are not ensure the most abstract specification is correct in regards of an informal specification).

"You mention you checked only type-based properties, so what is the additional benefit with regards to a type-check." Only type-based properties as invariant, however function behaviour are specified (see .ref component in our VnV User story) thus it is proved that the implementation of a function verified the specified functional behaviour.

"What about the Scade Prover?" Which tool do you mean, there exists a model-checker for SCADe model, but obviously not open-source so we have prefered not to use it for the project.

"Here you are only at process level in contrary to B part. The basic process should be similar for Scade and B if you consider the different roles (e.g., verifier) or the reporting etc." Indeed some verification are common, however, formal verification can currently be stronger and more generic with open-source or free of charge tools with B method than with Scade.

MarcBehrens commented 10 years ago

Review closing session 19.3.2014: Comment on review for section 3 pending @cecilebraun.

cecilebraun commented 10 years ago

Sorry for the delay and thanks @srieger :

What happens if your test model is too abstract, e.g., in the extreme case it could be the automaton with a single state that can do everything? Clearly all possible paths are available then.

The test generator will of course only generate tests for what we have modelled. If the test model is too abstract, the generated tests will only exercise that abstract behaviour and may miss details of the implementation.
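
A minimal sketch of coverage-driven test generation from a small, invented abstract model (not the Uni Bremen test model), illustrating why full transition coverage of the model says nothing about coverage of the implementation:

```python
from collections import deque

# Hypothetical abstract test model: (state, event) -> next state.
TRANSITIONS = {
    ("Idle", "start"): "Moving",
    ("Moving", "brake"): "Stopped",
    ("Stopped", "start"): "Moving",
}


def tests_covering_all_transitions(initial="Idle"):
    """Breadth-first search for test paths until every transition of the
    abstract model has been exercised at least once (transition coverage)."""
    uncovered = set(TRANSITIONS)
    tests, queue = [], deque([(initial, [])])
    while uncovered and queue:
        state, path = queue.popleft()
        for (src, event), dst in TRANSITIONS.items():
            if src != state:
                continue
            new_path = path + [event]
            if (src, event) in uncovered:
                uncovered.discard((src, event))
                tests.append(new_path)
            if len(new_path) < len(TRANSITIONS):   # crude depth bound
                queue.append((dst, new_path))
    return tests


print(tests_covering_all_transitions())
# [['start'], ['start', 'brake'], ['start', 'brake', 'start']]
# Covering every transition of the abstract model says nothing about
# statement or branch coverage of the implementation under test.
```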

As your generate test cases from an abstract model according to a certain coverage strategy, I assume that if you have full coverage in the test model, this does not imply the same for the code, right?

The use of the different test strategies will ensure a better coverage of the code, but of course full coverage on the test model does not imply full coverage on the implementation.

Results

Did you document the deficiencies in the specification? We started to document all issues we found to clarify them with railway experts and eventually improve the standard in the future.

This is ongoing work with Uwe Steinke. I would be interested to know where I can help in providing these deficiencies.

MarcBehrens commented 10 years ago

Review closing session 02.4.2014: Issue closed, please reopen if there is something missing.