Summary: | The use of Convolutional Neural Networks (CNNs) in computer-aided diagnosis (CAD) systems for breast cancer in radiological medicine is gaining wider attention because such systems can learn robust models directly from mammogram images. However, large amounts of data are required to produce the best model. Moreover, effectiveness may be hindered because grayscale mammograms exhibit color distributions atypical of the three-channel RGB images the network observes during training. This study proposes a multi-input CNN method to classify benign and malignant breast masses in mammograms. The converted images serve as additional data alongside the original images and are trained in parallel. Using three-channel scaled-color images provides additional learnable features that help construct more distinct learning weights for each class. We test the proposed method on an established digital mammogram dataset, INbreast. The best model achieves an accuracy of 92.54% and an AUROC of 0.9820 with the multi-input CNN. Fusing features from the additional three-channel images with those of the original images improves performance without requiring external data, by exploiting the existing dataset. © 2023 IEEE.
|
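The parallel two-branch design described in the abstract (one branch for the original grayscale mammogram, one for its three-channel scaled-color conversion, with the two feature streams fused before classification) can be sketched as below. This is a minimal illustrative PyTorch sketch, not the paper's actual architecture: the layer counts, channel widths, and fusion by concatenation are all assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

class MultiInputCNN(nn.Module):
    """Hypothetical two-branch CNN: one branch ingests the original
    grayscale mammogram, the other a three-channel scaled-color
    conversion of the same image. Branch features are concatenated
    (fused) and passed to a linear classifier for benign/malignant
    prediction. All layer sizes here are illustrative assumptions."""

    def __init__(self, num_classes: int = 2):
        super().__init__()

        def make_branch(in_channels: int) -> nn.Sequential:
            # Small conv stack ending in global average pooling,
            # so each branch emits a fixed-length feature vector.
            return nn.Sequential(
                nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
            )

        self.gray_branch = make_branch(1)   # original grayscale input
        self.color_branch = make_branch(3)  # three-channel converted input
        self.classifier = nn.Linear(32 + 32, num_classes)  # fused features

    def forward(self, gray: torch.Tensor, color: torch.Tensor) -> torch.Tensor:
        # Feature-level fusion: concatenate the two branch outputs.
        fused = torch.cat([self.gray_branch(gray), self.color_branch(color)], dim=1)
        return self.classifier(fused)

model = MultiInputCNN()
gray = torch.randn(4, 1, 64, 64)   # batch of grayscale mammogram patches
color = torch.randn(4, 3, 64, 64)  # corresponding scaled-color versions
logits = model(gray, color)        # shape: (4, num_classes)
```

Because both branches are trained jointly, the converted images act as additional training signal derived from the existing data, which matches the abstract's claim that no external images are needed.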