rjust / defects4j

A Database of Real Faults and an Experimental Infrastructure to Enable Controlled Experiments in Software Engineering Research
MIT License

A few inquiries about how to use the tool #11

Closed mwatt closed 8 years ago

mwatt commented 9 years ago

Hello,

Is it possible to configure the tool to list all executed tests no matter the result? Or even better, to provide the files that JUnit generates after a run?

Could you provide an example of how to use run_coverage.pl to run a coverage analysis using the tests that are already part of a project?

I tried to run this command and no test was executed:

lang_1_buggy$ run_coverage.pl -p Lang  -d src/test/java/ -o /tmp/result_db

However, this was the only output of running that command:

Smartmatch is experimental at /vagrant/defects4j/framework/bin/run_coverage.pl line 128.

Thanks!

rjust commented 9 years ago

Hi @mwatt,

A few comments and questions:

1) JUnit reports for developer-written tests

I pushed a recent refactoring of the build files, which eases this task. The main build file (framework/projects/defects4j.build.xml) defines the target (run.dev.tests) that executes all developer-written tests. This target is eventually called when you invoke defects4j test. You can add an additional JUnit formatter that exports the information you need (e.g., use the pre-defined plain formatter and set the usefile option -- https://ant.apache.org/manual/Tasks/junit.html).
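A minimal sketch of what that could look like inside the `<junit>` element of framework/projects/defects4j.build.xml. The `plain` and `xml` formatter types and the `usefile` attribute come from the standard Ant junit task; the surrounding target structure shown here is an assumption, not the actual Defects4J build file:

```xml
<!-- Hypothetical excerpt: formatters added to the <junit> element
     inside the run.dev.tests target -->
<junit printsummary="true" haltonfailure="false">
    <!-- Print every executed test and its result to the console -->
    <formatter type="plain" usefile="false"/>
    <!-- Additionally write the per-class XML report files that
         JUnit/Ant normally generate after a run -->
    <formatter type="xml"/>
    <!-- ... existing classpath, batchtest, etc. of the target ... -->
</junit>
```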

2) run_coverage.pl

The run_coverage.pl script performs the code coverage analysis for generated test suites only.
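In other words, the -d option is assumed to point at a directory of generated test-suite archives, not at the project's own src/test/java; the directory name below is hypothetical:

```shell
# Coverage analysis for *generated* suites (assumption: -d must point
# to a directory containing generated test-suite archives, not the
# developer-written tests shipped with the project).
run_coverage.pl -p Lang -d /path/to/generated_suites -o /tmp/result_db
```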

3) Coverage analysis for developer-written tests

We had a defects4j coverage command at some point but decided to remove it since the code coverage of the developer-written tests does not change. We could add it back in if this seems generally useful. Regarding the coverage analysis, would you want to measure code coverage for the modified files only or the entire project? Also, what tests would you want to execute?

Thanks, René

mwatt commented 9 years ago

Hi René,

Thank you for making (1) easier for me. I will try it and let you know if I have problems, but I think it will work out fine.

Regarding (3), my research group is working on a new test adequacy metric. We would like to compare our metric with code coverage. I will run the analysis over the entire project and its different versions. At this stage we are only focusing on developer-written tests but I expect we will include generated test suites in the future.

Thank you, Matías

rjust commented 8 years ago

Hi Matías,

The most recent improvements to the command-line interface should support all of your requirements. These changes will be included in the next release, but they are already available in the repo.
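For later readers, a sketch of the workflow this enables, assuming the re-added defects4j coverage command follows the same checkout-then-run pattern as defects4j test; the version id and working directory are illustrative:

```shell
# Check out the buggy version of Lang bug 1 into a working directory
# (standard defects4j checkout interface), then measure coverage of
# the developer-written tests in that directory.
defects4j checkout -p Lang -v 1b -w /tmp/lang_1_buggy
cd /tmp/lang_1_buggy
defects4j coverage
```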