Presenter Information

Andres E. Dewendt Urdaneta

Unique Presentation Identifier:

P40

Program Type

Graduate

Faculty Advisor

Dr. Robin Ghosh

Document Type

Poster

Location

Face-to-face

Start Date

29-4-2025 11:30 AM

Title

Deep Learning-Based Multi-Class Classification of Breast Cancer Ultrasound Images Using Convolutional Neural Networks

Abstract

The National Cancer Institute forecasts 2,001,140 new cancer diagnoses in 2024, with approximately 600,000 expected deaths. Breast cancer is projected to be the most prevalent, with about 310,000 cases. Early diagnosis is critical to improving outcomes, and diagnostic technologies such as imaging, biopsies, and blood tests play a vital role. Imaging methods include X-rays, ultrasound, magnetic resonance imaging (MRI), and PET scans. Artificial intelligence (AI) has recently advanced cancer detection, improving its speed, accuracy, and effectiveness. This research project uses a convolutional neural network (CNN) to analyze breast ultrasound images and classify them as benign, malignant, or normal. Our CNN model was trained on a dataset of 1,644 images: 548 normal, 548 malignant, and 548 benign. The images were obtained from four public sources, converted to grayscale, and resized to 75x75 pixels. After dividing the dataset into training, validation, and testing sets, the model was trained for 100 epochs with a learning rate of 0.1e-4. The results showed a training accuracy of 0.98 and a validation accuracy of 0.83, indicating some overfitting. Evaluation on a held-out test set yielded an accuracy of 0.88. Given these results, transfer learning was used to benchmark our model against three popular open-source models, also based on the CNN architecture: ResNet50, VGG-16, and Inception V3. After training these models and evaluating them on the same test set, ResNet50 performed best with 96.8 percent accuracy, followed by VGG-16 with 95.6 percent, Inception V3 with 92.6 percent, and our model with 88.3 percent. In conclusion, the transfer learning models outperformed our basic CNN architecture. Nevertheless, this study produced a feasible and effective model based on a basic CNN architecture, with room for improvement through parameter tuning and the addition of more layers. Ultimately, our goal is to incorporate bounding boxes or segmentation algorithms to identify tumor locations, providing medical professionals with enhanced diagnostic tools for cancer detection.
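
For illustration, the sketch below shows a minimal Keras CNN consistent with the setup described in the abstract: 75x75 grayscale inputs, three output classes, 100 training epochs, and a learning rate of 0.1e-4. It is not the authors' code; the number and size of the layers, the optimizer choice, and the dataset names are assumptions.

    # Minimal sketch (assumed, not the authors' code): a basic CNN for
    # 3-class classification of 75x75 grayscale breast ultrasound images,
    # using the hyperparameters stated in the abstract.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_basic_cnn(input_shape=(75, 75, 1), num_classes=3):
        # Layer counts and widths here are illustrative, not the published model.
        model = models.Sequential([
            layers.Input(shape=input_shape),
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(128, activation="relu"),
            layers.Dense(num_classes, activation="softmax"),  # benign / malignant / normal
        ])
        model.compile(
            optimizer=tf.keras.optimizers.Adam(learning_rate=0.1e-4),
            loss="sparse_categorical_crossentropy",
            metrics=["accuracy"],
        )
        return model

    # Hypothetical usage with tf.data datasets (train_ds, val_ds, test_ds):
    # model = build_basic_cnn()
    # history = model.fit(train_ds, validation_data=val_ds, epochs=100)
    # test_loss, test_acc = model.evaluate(test_ds)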
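
The transfer-learning comparison can be sketched in a similar way by placing a small classification head on a pretrained backbone; ResNet50 is shown here, and VGG16 or InceptionV3 can be swapped in the same pattern. This is an assumed setup, not the authors' implementation: the frozen backbone, head size, and learning rate are illustrative, and because ImageNet backbones expect three-channel input, the grayscale images would need to be stacked to three channels.

    # Sketch of a transfer-learning baseline (assumed setup, not the authors' code).
    import tensorflow as tf
    from tensorflow.keras import layers, models
    from tensorflow.keras.applications import ResNet50

    def build_transfer_model(input_shape=(75, 75, 3), num_classes=3):
        # Pretrained ImageNet backbone, frozen so only the new head is trained.
        base = ResNet50(weights="imagenet", include_top=False, input_shape=input_shape)
        base.trainable = False
        model = models.Sequential([
            base,
            layers.GlobalAveragePooling2D(),
            layers.Dense(128, activation="relu"),
            layers.Dense(num_classes, activation="softmax"),
        ])
        model.compile(
            # Learning rate here is illustrative, not the value used in the study.
            optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
            loss="sparse_categorical_crossentropy",
            metrics=["accuracy"],
        )
        return model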
