ENHANCING INTERPRETABILITY AND FIDELITY IN CONVOLUTIONAL NEURAL NETWORKS THROUGH DOMAIN-INFORMED KNOWLEDGE INTEGRATION
DOI: https://doi.org/10.17654/0972361724062

Abstract
This study addresses the need for robust disease detection methods in vegetable crops by introducing a novel initialization method for convolutional neural networks (CNNs). Rather than creating a new CNN architecture, our approach focuses on infusing expert knowledge from phytopathology directly into the model’s foundation. This innovative initialization ensures that the CNN possesses a contextual understanding of intricate disease patterns specific to tomatoes. Additionally, our study redefines the role of heatmaps as a dynamic metric for assessing model fidelity in real-time. Unlike traditional post hoc applications, heatmaps are integrated into the model evaluation process, providing insights into decision-making processes and alignment with expert-derived expectations. This dual innovation aims to enhance transparency and fidelity in CNNs, offering a nuanced and effective solution for disease detection in agriculture. The study contributes to advancing artificial intelligence applications in agriculture by providing accurate predictions and a deeper understanding of the underlying decision mechanisms crucial for crop health management.
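The two ideas in the abstract — seeding convolutional filters with expert-derived disease patterns instead of purely random values, and scoring a model by how well its heatmaps align with expert expectations — can be illustrated with a minimal sketch. This is not the paper's implementation; the `ring` motif, the blending weight `alpha`, and the IoU-based fidelity score are hypothetical stand-ins for the phytopathology priors and evaluation metric the study describes. Only the Glorot-style uniform bound follows a standard, published initialization scheme.

```python
import numpy as np

def domain_informed_filters(n_filters, size, motifs, alpha=0.5, seed=0):
    """Blend Glorot-style random filters with expert-derived motif
    templates (hypothetical stand-ins for phytopathology priors)."""
    rng = np.random.default_rng(seed)
    limit = np.sqrt(6.0 / (size * size * 2))  # Glorot uniform bound
    filters = rng.uniform(-limit, limit, (n_filters, size, size))
    for i in range(n_filters):
        motif = motifs[i % len(motifs)]
        # alpha controls how strongly the expert prior shapes the filter
        filters[i] = (1.0 - alpha) * filters[i] + alpha * motif
    return filters

def heatmap_fidelity(heatmap, expert_mask, thresh=0.5):
    """Score alignment between a model heatmap and an expert-annotated
    lesion mask as intersection-over-union of the thresholded heatmap."""
    pred = heatmap >= thresh
    inter = np.logical_and(pred, expert_mask).sum()
    union = np.logical_or(pred, expert_mask).sum()
    return inter / union if union else 1.0

# Toy "lesion ring" motif standing in for an expert disease pattern.
ring = np.zeros((5, 5))
ring[1:4, 1:4] = 1.0
ring[2, 2] = 0.0

filters = domain_informed_filters(8, 5, [ring], alpha=0.5)

# A heatmap that exactly covers the expert-annotated region scores 1.0.
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
score = heatmap_fidelity(mask.astype(float), mask)
```

In this reading, `heatmap_fidelity` is computed during evaluation alongside accuracy, rather than as a post hoc visualization, which is the shift in the heatmap's role that the abstract emphasizes.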
Received: May 10, 2024
Revised: June 12, 2024
Accepted: June 19, 2024
License
Copyright (c) 2024 Pushpa Publishing House, Prayagraj, India

This work is licensed under a Creative Commons Attribution 4.0 International License.
____________________________
Attribution: Credit Pushpa Publishing House as the original publisher, including title and author(s) if applicable.
No Derivatives: Modifying or creating derivative works is not allowed without written permission.
Contact Pushpa Publishing House for more info or permissions.