Date of Award

Spring 5-9-2025

Level of Access Assigned by Author

Open-Access Thesis

Degree Name

Master of Science (MS)

Department

Computer Science

First Committee Advisor

Salimeh Yasaei Sekeh, Co-Advisor

Second Committee Member

Chaofan Chen, Co-Advisor

Third Committee Member

Andre Khalil

Additional Committee Members

Gregory Nelson

Abstract

Transfer learning has advanced AI applications across domains such as autonomous systems, natural language processing, and medical imaging. By leveraging pre-trained models, transfer learning enhances performance on small datasets. However, traditional methods, such as ensemble and multi-source transfer learning, suffer from high computational costs, memory constraints, and the need for simultaneous access to all source models, limiting their use in resource-constrained healthcare settings. To overcome these challenges, we propose a sequential transfer learning framework that enables incremental learning from multiple source models while reducing memory and computational demands. Unlike existing approaches, our method allows efficient fine-tuning without requiring all models to be available simultaneously. Empirical evaluations on benchmark datasets (BRACS, BACH, IDC, and Places365) using ResNet-50 and transformer-based architectures demonstrate that our approach can match or surpass the performance of traditional multi-source and ensemble methods while improving efficiency and integrating diverse knowledge sources. By establishing key theoretical insights into multi-source and sequential transfer learning, this work advances transfer learning methodologies and their potential to improve diagnostic accuracy in clinical applications.
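As a rough illustration of the sequential setup the abstract describes, the sketch below fine-tunes a ResNet-50 target model by visiting source models one at a time, so only a single source network is ever held in memory alongside the target. The use of logit distillation as the transfer signal, the randomly initialized stand-in sources, the synthetic data loader, and all hyperparameters are illustrative assumptions for this sketch, not the thesis's actual algorithm.

```python
# Minimal sketch of sequential transfer from multiple source models.
# Assumption: knowledge is carried over via logit distillation; the
# thesis may use a different transfer mechanism.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

NUM_CLASSES = 7          # hypothetical target label count
NUM_SOURCES = 2          # hypothetical number of source models
TEMPERATURE = 2.0        # distillation temperature (assumed)

target = resnet50(num_classes=NUM_CLASSES)
optimizer = torch.optim.SGD(target.parameters(), lr=1e-3, momentum=0.9)

def target_loader():
    # Stand-in for a real target-domain DataLoader (e.g., BRACS patches).
    for _ in range(4):
        yield torch.randn(8, 3, 224, 224), torch.randint(0, NUM_CLASSES, (8,))

for stage in range(NUM_SOURCES):
    # Load exactly one source model per stage; earlier sources have
    # already been released, keeping peak memory at two networks.
    source = resnet50(num_classes=NUM_CLASSES)
    # In practice this would be a pre-trained checkpoint, e.g.
    # source.load_state_dict(torch.load(path)); random weights keep
    # the sketch self-contained and runnable.
    source.eval()

    for x, y in target_loader():
        with torch.no_grad():
            soft = F.softmax(source(x) / TEMPERATURE, dim=1)
        logits = target(x)
        # Supervised loss on target labels plus a distillation term
        # pulling the target toward the current source's predictions.
        loss = F.cross_entropy(logits, y) + F.kl_div(
            F.log_softmax(logits / TEMPERATURE, dim=1),
            soft,
            reduction="batchmean",
        )
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    del source  # free memory before the next source model arrives
```

The key point of the sketch is structural: unlike ensemble or multi-source methods that need every source model resident at once, each loop iteration consumes one source and discards it before the next is loaded.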