Several Applications of Convolutional Neural Networks in Medical Imaging

Authors

  • Zheng Jiang

DOI:

https://doi.org/10.62051/npafb665

Keywords:

Convolutional neural networks; medical imaging; application.

Abstract

With the development of artificial intelligence, convolutional neural networks (CNNs) have gained powerful multi-dimensional data processing capabilities: they can extract and process features from a wide variety of images, which gives them great potential in medical image processing. The development of CNNs has also greatly advanced computer-aided diagnosis technology. This paper reviews the principles of four convolutional neural network architectures, namely AlexNet, GoogLeNet, U-Net, and R-CNN, together with their specific applications, such as the diagnosis and analysis of brain tumors, the classification of skin lesions, and the detection of breast cancer. Compared with traditional convolutional networks, these newer models have their own advantages and disadvantages, which this paper also summarizes. Finally, the paper discusses current challenges in medical image research based on convolutional neural networks and the future prospects of medical image analysis combined with CNNs.
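To illustrate how such networks are typically applied in practice, the following is a minimal sketch (not taken from the paper) of transfer learning with a pretrained AlexNet for a skin-lesion classification task. It assumes PyTorch and torchvision; the number of classes, the frozen-feature-extractor setup, and the dummy batch are illustrative assumptions rather than the authors' method.

# Minimal transfer-learning sketch: fine-tuning a pretrained AlexNet
# for a hypothetical skin-lesion classification task.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 7  # assumed number of lesion categories; not from the paper

# Load AlexNet pretrained on ImageNet and replace its final classifier layer.
model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_CLASSES)

# Freeze the convolutional feature extractor; train only the classifier head.
for param in model.features.parameters():
    param.requires_grad = False

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One training step on a dummy batch (replace with a real DataLoader
# over labelled dermoscopy images).
images = torch.randn(8, 3, 224, 224)          # batch of RGB images
labels = torch.randint(0, NUM_CLASSES, (8,))  # dummy lesion labels
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()

Freezing the convolutional layers and retraining only the classifier head is a common design choice when labelled medical images are scarce, since the pretrained features transfer reasonably well to new imaging domains.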




Published

25-11-2024

How to Cite

Jiang, Z. (2024) “Several Applications of Convolutional Neural Networks in Medical Imaging”, Transactions on Computer Science and Intelligent Systems Research, 7, pp. 200–205. doi:10.62051/npafb665.