Title |
Evaluation of Model Influence of Training Data according to Neuron Coverage Change |
Authors |
김주형(Ju-Hyeong Kim) ; 유동연(Dong-Yeon Yoo) ; 이정원(Jung-Won Lee) |
DOI |
https://doi.org/10.5573/ieie.2022.59.5.83 |
Keywords |
Deep learning; Coverage testing; Neuron coverage |
Abstract |
Software testing techniques are largely divided into black-box and white-box tests; because deep learning models make it difficult to identify their internal structure, research on black-box testing has been conducted more actively. However, for the explainability of learning results, white-box testing is also required, since it can observe the internal behavior of the model, and research on generating test cases using neuron coverage has recently been conducted as well. In this paper, neuron coverage is applied not only to testing the learned model but also to evaluating the effect of training-data selection on model performance. Models trained on a dataset with an even feature distribution showed 11.25% larger neuron coverage than models trained on the original training dataset. In addition, when a model is retrained using the training dataset showing greater neuron coverage, the rate of change in neuron coverage increases from 50% to 379%, and the greater the change, the lower the accuracy. Furthermore, comparing the previous two datasets with a PGD dataset, the change in neuron coverage is small while the accuracy on the PGD side drops to 20%. Accordingly, by comparing the variance of each neuron value over the entire neuron distribution, it can be confirmed that the greater the variance, the lower the accuracy of the deep learning model on that dataset. |
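The abstract does not spell out the coverage criterion it uses; a minimal sketch of the common threshold-based definition of neuron coverage (a neuron counts as covered when its activation exceeds a threshold on at least one test input), with the threshold value and layer shapes chosen purely for illustration, might look like:

```python
import numpy as np

def neuron_coverage(activations, threshold=0.5):
    """Fraction of neurons whose activation exceeds `threshold`
    on at least one input.

    `activations`: list of (n_inputs, n_neurons) arrays, one per layer.
    """
    covered = 0
    total = 0
    for layer_act in activations:
        # A neuron is covered if any input drives it past the threshold.
        covered += int(np.sum(layer_act.max(axis=0) > threshold))
        total += layer_act.shape[1]
    return covered / total

# Toy activations: two layers, three inputs each (illustrative values).
acts = [
    np.array([[0.1, 0.9, 0.2],
              [0.6, 0.1, 0.3],
              [0.2, 0.4, 0.1]]),
    np.array([[0.8, 0.0],
              [0.1, 0.7],
              [0.3, 0.2]]),
]
print(neuron_coverage(acts))  # 4 of 5 neurons covered -> 0.8
```

Under this definition, a training set that activates a wider variety of neurons yields a larger coverage value, which is the quantity the paper compares across datasets.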