University of Minnesota
Software Engineering Center

Implications of Ceiling Effects in Defect Predictors

Date of Publication: May 2008
Context: Many methods take static code features as input and output a predictor for faulty code modules. These data mining methods have hit a "performance ceiling"; i.e., some inherent upper bound on the amount of information offered by, say, static code features when identifying modules which contain faults. Objective: We seek an explanation for this ceiling effect. Perhaps static code features have "limited information content"; i.e., their information can be quickly and completely discovered by even simple learners. Method: An initial literature review documents the ceiling effect in other work. Next, using three sub-sampling techniques (under-, over-, and micro-sampling), we look for the lower useful bound on the number of training instances. Results: Using micro-sampling, we find that as few as 50 instances yield as much information as larger training sets. Conclusions: We have found much evidence for the limited information hypothesis. Further progress in learning defect predictors may not come from better algorithms; rather, we need to improve the information content of the training data, perhaps with case-based reasoning methods.
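The micro-sampling idea from the Method section can be sketched as follows. This is a minimal illustration, not the paper's actual experiment: it draws a balanced 50-instance training sample and fits a toy nearest-centroid learner (standing in for the Naive Bayes classifier the paper uses), on synthetic data; all names and the data-generating assumptions here are hypothetical.

```python
import random
import statistics

def micro_sample(instances, labels, m, rng):
    """Draw a balanced micro-sample: m/2 defective and m/2 defect-free instances."""
    pos = [i for i, y in enumerate(labels) if y == 1]
    neg = [i for i, y in enumerate(labels) if y == 0]
    picked = rng.sample(pos, m // 2) + rng.sample(neg, m // 2)
    return [instances[i] for i in picked], [labels[i] for i in picked]

class NearestCentroid:
    """Toy learner: predict the class whose feature mean is closest."""
    def fit(self, X, y):
        self.centroids = {}
        for c in set(y):
            rows = [x for x, yy in zip(X, y) if yy == c]
            self.centroids[c] = [statistics.fmean(col) for col in zip(*rows)]
        return self

    def predict(self, X):
        def sqdist(a, b):
            return sum((u - v) ** 2 for u, v in zip(a, b))
        return [min(self.centroids, key=lambda c: sqdist(x, self.centroids[c]))
                for x in X]

# Synthetic "static code features": defective modules (label 1) tend to score higher.
rng = random.Random(0)
X = ([[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(500)] +
     [[rng.gauss(2, 1), rng.gauss(2, 1)] for _ in range(500)])
y = [0] * 500 + [1] * 500

# Train on only 50 instances, then evaluate on the full set.
Xs, ys = micro_sample(X, y, 50, rng)
clf = NearestCentroid().fit(Xs, ys)
acc = sum(p == t for p, t in zip(clf.predict(X), y)) / len(y)
print(f"accuracy with 50-instance micro-sample: {acc:.2f}")
```

On data this easy, the 50-instance sample recovers nearly all of the signal a much larger training set would, which is the limited-information-content effect the paper investigates on real defect data.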
PROMISE '08 Proceedings of the 4th international workshop on Predictor models in software engineering
@inproceedings{Menzies:2008:ICE:1370788.1370801,
  author    = {Menzies, Tim and Turhan, Burak and Bener, Ayse and Gay, Gregory and Cukic, Bojan and Jiang, Yue},
  title     = {Implications of ceiling effects in defect predictors},
  booktitle = {Proceedings of the 4th international workshop on Predictor models in software engineering},
  series    = {PROMISE '08},
  year      = {2008},
  isbn      = {978-1-60558-036-4},
  location  = {Leipzig, Germany},
  pages     = {47--54},
  numpages  = {8},
  acmid     = {1370801},
  publisher = {ACM},
  address   = {New York, NY, USA},
  keywords  = {defect prediction, naive bayes, over-sampling, under-sampling},
}