Reducing Model Complexity for COVID-19 Classification: A Pruning-Based Approach

EasyChair Preprint 15851, 6 pages. Date: February 20, 2025

Abstract

AI and deep learning have played an important role in the diagnosis of COVID-19 cases during the outbreak and have proved to be among the most effective methods for detecting, classifying, and segmenting the lung region in lung CT, X-ray, and ultrasound images. Deep learning models for medical image analysis often require significant computational resources, making them challenging to deploy on resource-constrained devices. Pruning is one of the most widely used methods to reduce a model's complexity while maintaining its performance. Magnitude-based pruning [10] is a simple and widely used technique that identifies and removes less significant weights within the network, aiming to strike a balance between model sparsity and accuracy. Using a polynomial decay pruning schedule, the sparsity can be adjusted dynamically throughout training, gradually increasing the number of pruned weights until a target sparsity of 50% is reached. In this study, we applied magnitude-based pruning to a deep learning classification model [7] for the detection of COVID-19 cases on lung CT, reducing the total number of parameters from 7.85 million (29.96 MB) to 3.93 million (15.00 MB). Despite this substantial reduction, the pruned model maintained competitive performance, with an accuracy of 90% (training) and 92% (testing) after pruning, compared to 98% (training) and 97% (testing) before pruning. The results demonstrate that pruning significantly reduces model complexity and memory footprint while preserving classification accuracy, making the model suitable for real-time applications and deployment on edge devices. This study highlights the effectiveness of model compression techniques in deep learning-based medical imaging, providing a trade-off between efficiency and accuracy. Future work will explore additional optimization strategies, such as mixed-precision training and hardware-aware pruning, to further improve model deployment capabilities.

Keyphrases: COVID-19, Magnitude-based pruning, Optimization, Polynomial decay
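The pruning setup described in the abstract (magnitude-based pruning driven by a polynomial decay schedule ramping to 50% sparsity) can be outlined with the TensorFlow Model Optimization Toolkit. The sketch below is illustrative only, not the authors' code: the placeholder classifier, image size, dataset size, and training budget (build_classifier, end_step) are assumptions standing in for the actual model from [7].

import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Placeholder classifier; the paper's actual architecture [7] is not reproduced here.
def build_classifier(input_shape=(224, 224, 3), num_classes=2):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

model = build_classifier()

# Assumed training budget: 10 epochs over ~2,000 CT slices with batch size 32.
end_step = (2000 // 32) * 10

# Polynomial decay schedule: sparsity grows from 0% to the 50% target over training.
pruning_params = {
    "pruning_schedule": tfmot.sparsity.keras.PolynomialDecay(
        initial_sparsity=0.0,
        final_sparsity=0.5,
        begin_step=0,
        end_step=end_step,
    )
}

# Wrap the model so low-magnitude weights are progressively masked during training.
pruned_model = tfmot.sparsity.keras.prune_low_magnitude(model, **pruning_params)
pruned_model.compile(optimizer="adam",
                     loss="sparse_categorical_crossentropy",
                     metrics=["accuracy"])

# UpdatePruningStep keeps the schedule's step counter in sync with training;
# train_ds is a hypothetical tf.data pipeline of labeled CT slices.
# pruned_model.fit(train_ds, epochs=10,
#                  callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])

# strip_pruning removes the wrappers, leaving the sparse weights for export.
final_model = tfmot.sparsity.keras.strip_pruning(pruned_model)

The parameter and size reductions reported in the abstract would come from exporting and compressing the stripped model; the exact figures depend on the architecture in [7] and are not reproduced by this sketch.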