Although Deep Neural Networks (DNNs) are used extensively in many fields, including safety-critical systems such as autonomous driving and medical diagnostics, they can still exhibit erroneous behaviour.
Testing a DNN is not as straightforward as testing classical software. In a classical program, the behaviour at every point of execution is well defined: when a fault occurs, we can locate it and fix it, so white-box testing, which inspects the internals of the software, is effective. For a DNN, inspecting the internals is of little use: once a model is trained, its behaviour is encoded in learned weights that are hard to interpret. The practical option is therefore black-box testing, in which we test the functionality of the software from the outside.
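The black-box approach above can be sketched as follows: treat the trained model as an opaque function and assert only on its input/output behaviour. The `model` function here is a stand-in stub, not the project's actual network.

```python
def model(image):
    """Stand-in for a trained DNN: an opaque input -> label function.

    In the real project this would wrap a forward pass through the
    trained network; here it is a stub for illustration only.
    """
    return "stop_sign" if sum(image) > 10 else "background"

def black_box_test(model):
    """Black-box check: probe the model only through its interface.

    We never inspect weights or activations, only outputs for
    chosen inputs.
    """
    assert model([5, 5, 5]) == "stop_sign"
    assert model([0, 0, 1]) == "background"

black_box_test(model)
```

The same test would apply unchanged to any model with the same interface, which is exactly what makes the approach black-box.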
The goal of this R&D project is to test deep neural networks. The focus is not on testing the performance of a DNN, but on testing its capability. To aid this testing, we use behaviour-driven development (BDD) methods that take in both the requirements from test engineers and the capabilities of Blender, generating a synthetic test dataset in Blender for testing the learned DNN models.
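A BDD scenario for such a capability test might look like the sketch below, written as plain Python with Given/When/Then comments. The real project would likely use a BDD framework such as behave or pytest-bdd, and the functions `render_scene` and `classify` are hypothetical placeholders for the Blender rendering step and the DNN under test.

```python
def render_scene(object_name, distance):
    """Pretend Blender render: returns a synthetic 'image' descriptor."""
    return {"object": object_name, "distance": distance}

def classify(image):
    """Pretend DNN under test: classifies the synthetic image."""
    return image["object"] if image["distance"] < 50 else "unknown"

# Scenario: the DNN recognises a pedestrian rendered at close range.
# Given a Blender scene containing a pedestrian 20 m away
image = render_scene("pedestrian", distance=20)
# When the trained model classifies the rendered image
label = classify(image)
# Then the predicted label matches the ground truth
assert label == "pedestrian"
```

The value of BDD here is that the scenario is written in terms of requirements ("recognise a pedestrian at close range"), while Blender provides the matching synthetic input.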
Control starts in src/main.cpp, which launches the Blender application with a given bpy script. The bpy script carries out the entire model creation and alteration process, using the functions from src/bpy_functions.py. The models to be imported into Blender are located in the input folder.
The input folder contains the dataset: the images and the ground-truth CSV. Training uses this data to fit the model and writes model.pth to the output folder.
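The pairing between images and the ground-truth CSV might be loaded as sketched below. The column names `image` and `label` are assumptions for illustration; the actual schema of the project's CSV is not shown in this README.

```python
import csv
import io

# Hypothetical ground-truth CSV: the real column names in the input
# folder are not specified here, so 'image' and 'label' are assumptions.
ground_truth_csv = """image,label
img_0001.png,stop_sign
img_0002.png,pedestrian
"""

def load_ground_truth(fileobj):
    """Map each image filename to its ground-truth label."""
    return {row["image"]: row["label"] for row in csv.DictReader(fileobj)}

labels = load_ground_truth(io.StringIO(ground_truth_csv))
```

Training code can then look up the label for each image file it loads from the input folder.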
model.pth is then validated against a new set of data, and the results are stored in the output folder.
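Validation ultimately reduces to comparing predictions against the held-out ground truth. A framework-agnostic sketch of that comparison:

```python
def accuracy(predictions, ground_truth):
    """Fraction of images whose predicted label matches the ground truth."""
    correct = sum(1 for img, label in ground_truth.items()
                  if predictions.get(img) == label)
    return correct / len(ground_truth)

# Toy example: one of two predictions is correct.
preds = {"img_0001.png": "stop_sign", "img_0002.png": "car"}
truth = {"img_0001.png": "stop_sign", "img_0002.png": "pedestrian"}
accuracy(preds, truth)  # 0.5
```

In the real pipeline, `predictions` would come from running model.pth over the validation images, and the resulting metrics would be written to the output folder.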
To generate the dataset, use the command

    make dataset

To train the model, use the command

    make train

To validate the model, use the command

    make validate

To run the BDD tests, use the command

    make test
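These targets might be wired up along the lines of the Makefile sketch below. The recipe commands and script names are assumptions for illustration, since the actual Makefile is not reproduced in this README.

```make
# Hypothetical Makefile sketch -- the real recipes are not shown in
# this README, so the script names below are assumptions.
dataset:
	blender --background --python src/generate_dataset.py

train:
	python train.py

validate:
	python validate.py

test:
	behave features/
```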