Copyright and Licensing
Articles accepted for publication will be licensed under the Creative Commons BY-NC-SA. Authors must sign a non-exclusive distribution agreement after article acceptance.
Currently, dental implant failure is predicted through clinical and radiological evaluation, so predictions depend heavily on the implantologist's experience. Moreover, detecting early whether a dental implant will fail is crucial, given the time, cost, trauma to the patient, and postoperative complications involved. This paper proposes a procedure that combines multiple feature selection methods and classification algorithms to improve the accuracy of predicting dental implant failures in the province of Misiones, Argentina, validated by human experts. The experiments use two data sets: a set of dental implant records collected for the case study and an artificially generated set. The proposed approach identifies the most relevant features and improves classification accuracy for the target class (dental implant failure), avoiding the bias that arises when decisions rest on the results of a single method. It achieves 79% accuracy in predicting failures, whereas the best individual classifier reaches 72%.
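The abstract describes combining several feature selection methods with an ensemble of classifiers. The paper's exact pipeline is not reproduced here; the sketch below only illustrates the general idea with scikit-learn, using synthetic data as a stand-in for the implant data set, two illustrative selection methods (ANOVA F-score and mutual information), and a soft-voting ensemble. All concrete choices (methods, classifiers, k) are assumptions for demonstration.

```python
# Hedged sketch: combine multiple feature-selection methods, then integrate
# several classifiers by soft voting. Synthetic data stands in for the
# implant records; all parameter choices here are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the implant data set.
X, y = make_classification(n_samples=400, n_features=12, n_informative=6,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Apply several feature-selection methods and keep any feature
# that at least one method ranks among its top k.
selected = set()
for score_fn in (f_classif, mutual_info_classif):
    sel = SelectKBest(score_fn, k=6).fit(X_train, y_train)
    selected |= set(sel.get_support(indices=True))
cols = sorted(selected)

# Integrate multiple classifiers with soft (probability-averaged) voting.
ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("nb", GaussianNB()),
                ("knn", KNeighborsClassifier()),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft")
ensemble.fit(X_train[:, cols], y_train)
print("selected features:", cols)
print("test accuracy:", round(ensemble.score(X_test[:, cols], y_test), 2))
```

Soft voting averages each classifier's predicted class probabilities, which tends to be more robust than hard majority voting when the individual models are reasonably calibrated.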
Copyright (c) 2021 Nancy Ganz, Alicia E. Ares, Horacio D. Kuna
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
ISSN
1666-6038 (Online)
1666-6046 (Print)