Our project used machine learning and explainable AI to develop highly accurate yet interpretable models for Alzheimer's disease prediction. Using the ADNI and SHARE Wave 8 datasets, we established that a transfer-learned model fine-tuned on a richer set of target input features (20 input features from about 79,144 respondents in the SHARE Wave 8 dataset) achieved approximately 6.79% higher AUC and roughly 7% higher weighted-average precision, recall, and F1 scores than the same transfer model trained on a smaller dataset (the ADNI dataset, comprising 10 input features from about 16,421 participants).
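To illustrate the transfer-learning mechanism described above, here is a minimal NumPy sketch (not our actual pipeline; the synthetic data, model, and all names are illustrative). A simple logistic model is trained on a large "source" dataset, and its weights are then used to warm-start fine-tuning on a small, related "target" dataset, which is the basic idea behind fine-tuning a pretrained model on new input features:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, d, w_true, noise=0.5):
    """Synthetic tabular data whose labels follow a noisy logistic model."""
    X = rng.normal(size=(n, d))
    logits = X @ w_true + noise * rng.normal(size=n)
    y = (logits > 0).astype(float)
    return X, y

def train_logreg(X, y, w=None, lr=0.1, epochs=200):
    """Gradient-descent logistic regression; `w` warm-starts the weights."""
    n, d = X.shape
    if w is None:
        w = np.zeros(d)
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / n
    return w

def acc(w, X, y):
    """Accuracy of sign predictions from the linear score."""
    return float(((X @ w > 0).astype(float) == y).mean())

d = 10
w_true = rng.normal(size=d)

# "Source" task: large dataset with a related labelling function.
X_src, y_src = make_data(5000, d, w_true)
w_src = train_logreg(X_src, y_src)

# "Target" task: small dataset drawn from a shifted version of the task.
w_tgt = w_true + 0.3 * rng.normal(size=d)
X_tgt, y_tgt = make_data(100, d, w_tgt)
X_test, y_test = make_data(2000, d, w_tgt)

# Fine-tune from the source weights (transfer) vs. train from scratch,
# both with the same small fine-tuning budget on the target data.
w_scratch = train_logreg(X_tgt, y_tgt, epochs=50)
w_transfer = train_logreg(X_tgt, y_tgt, w=w_src.copy(), epochs=50)

print("scratch:", acc(w_scratch, X_test, y_test))
print("transfer:", acc(w_transfer, X_test, y_test))
```

The warm-started model begins fine-tuning near a good solution learned from the larger source dataset, which is the effect the AUC and F1 comparisons in our results quantify at full scale.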
By incorporating the SHAP (SHapley Additive exPlanations) framework, the trained models provided transparent insight into the factors driving dementia risk predictions, addressing the critical challenge of balancing accuracy with interpretability for healthcare professionals. Our research methodology improves model generalisability and performance across diverse datasets through transfer learning, contributing to healthcare analytics a robust approach to developing predictive models that are both effective and explainable. Ultimately, our findings pave the way for practical application in clinical settings to aid early Alzheimer's detection and intervention.
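The attribution idea behind SHAP can be shown without the library itself: for a linear model with independent features, the exact SHAP value of feature j is its weight times the feature's deviation from the cohort mean. The sketch below (a hypothetical linear risk score, not our fitted model) verifies the framework's efficiency property, namely that per-feature attributions sum to the gap between this patient's score and the average score:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fitted linear risk model: score(x) = w @ x + b.
d = 5
w = rng.normal(size=d)
b = 0.2

# Reference cohort used as the SHAP "background" distribution.
X_background = rng.normal(loc=1.0, size=(1000, d))

def linear_shap(x, w, X_bg):
    """Exact SHAP values for a linear model with independent features:
    phi_j = w_j * (x_j - E[x_j])."""
    return w * (x - X_bg.mean(axis=0))

# One individual's feature vector and its per-feature attributions.
x = rng.normal(loc=1.0, size=d)
phi = linear_shap(x, w, X_background)

# Efficiency property: attributions sum to f(x) - E[f(X)].
f_x = w @ x + b
f_mean = w @ X_background.mean(axis=0) + b
print(np.allclose(phi.sum(), f_x - f_mean))
```

For the non-linear models in our pipeline the same decomposition is estimated rather than computed in closed form, but each prediction is still explained as a sum of per-feature contributions, which is what makes the risk scores auditable by clinicians.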