krishnapaparaju opened this issue 6 years ago
For generating missing unit tests, please check out the links below to see whether we can reuse work from, or collaborate with, the team at Rice University.
As part of understanding the codebase, I ran a small experiment that converts a Java application codebase into an AST with https://github.com/antlr/antlr4. ANTLR also supports rich grammars for other well-known languages. Please see the attached screenshot capturing the AST for the Java application I tried. Once ASTs are in place, the next step can be to identify the 'important functions' in a codebase before planning AI-based test case generation.
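To make the AST step concrete, here is an analogous sketch using Python's stdlib `ast` module on Python source instead of ANTLR on Java (the sample functions are made up for illustration); the idea is the same: parse source into a tree, then walk it to enumerate the functions that later analysis would rank.

```python
# Analogous sketch of the AST-extraction step: Python's stdlib `ast`
# module standing in for ANTLR, Python source standing in for Java.
import ast

source = """
def add(a, b):
    return a + b

def multiply(a, b):
    total = 0
    for _ in range(b):
        total = add(total, a)
    return total
"""

# Parse the source into an AST, then walk the tree and collect
# every function definition it contains.
tree = ast.parse(source)
functions = [node.name for node in ast.walk(tree)
             if isinstance(node, ast.FunctionDef)]
print(functions)  # ['add', 'multiply']
```

With ANTLR the equivalent would be a generated Java parser plus a listener or visitor over the parse tree, but the extract-functions-from-a-tree step looks the same.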
@krishnapaparaju how are these generated tests supposed to be better than no tests?
@maxandersen this is an experiment to alert when unit tests are missing and to auto-generate them
Implementations of the DeepCoder paper (https://openreview.net/pdf?id=ByldLrqlx) can be found at https://github.com/vaporized/DeepCoder-tensorflow and https://github.com/HiroakiMikami/deep-coder.
yes, I get that, but what value does it give users to be told unit tests are missing, versus running coverage tools that would reveal the same?
what kind of unit tests are expected to be generated? Looking at the paper, they seem to be just academic boundary tests? (Which can be fine, but it would be a big stretch to call those sufficient as unit tests.)
@krishnapaparaju have you tried researching existing projects? I stumbled upon this after a short search: https://github.com/randoop/randoop
Moving this to Backlog
We can start off with Java source code...
[ ] Identify frequently used functions in a source code base (e.g. a GitHub repository)
[ ] Flag any of these functions that do not have corresponding unit tests (in Che, through LSP)
[ ] Generate source code for unit tests where they are missing for these important functions
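The first two checklist items can be sketched in a few lines. This is a minimal illustration on Python source (again standing in for Java), with made-up sample code and a deliberately naive convention: a function counts as "tested" if a `test_<name>` function exists; real tooling would need smarter matching and a call-frequency threshold.

```python
# Sketch of checklist steps 1-2: (1) count how often each locally
# defined function is called, (2) flag called-but-untested functions.
# Sample sources and the test_<name> convention are assumptions.
import ast
from collections import Counter

app_source = """
def parse(line):
    return line.split(',')

def load(lines):
    return [parse(l) for l in lines]

def report(lines):
    rows = load(lines)
    return len(rows)
"""

test_source = """
def test_load():
    pass
"""

app_tree = ast.parse(app_source)
defined = {n.name for n in ast.walk(app_tree)
           if isinstance(n, ast.FunctionDef)}

# (1) Frequency: count calls whose target is a locally defined function.
calls = Counter(
    n.func.id
    for n in ast.walk(app_tree)
    if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)
    and n.func.id in defined
)

# (2) Flag functions that are actually used but have no test_<name>.
tested = {n.name for n in ast.walk(ast.parse(test_source))
          if isinstance(n, ast.FunctionDef)}
untested = [f for f in defined
            if calls[f] > 0 and f"test_{f}" not in tested]

print(sorted(untested))  # ['parse'] -- called once, but no test_parse
```

Step 3 (generating the test bodies themselves) is the hard, open part this issue is about; the sketch only shows that the "find important, untested functions" front end is straightforward once ASTs are available.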