Explainable Deep Learning for Medical Imaging Classification
dc.contributor.supervisor | Ifeachor, Emmanuel | |
dc.contributor.author | Courtman, Megan | |
dc.contributor.other | Faculty of Science and Engineering | en_US |
dc.date.accessioned | 2024-09-26T15:33:58Z | |
dc.date.available | 2024-09-26T15:33:58Z | |
dc.date.issued | 2024 | |
dc.identifier | 10611005 | en_US |
dc.identifier.uri | https://pearl.plymouth.ac.uk/handle/10026.1/22598 | |
dc.description.abstract |
Machine learning is increasingly being applied to medical imaging tasks. However, the "black box" nature of techniques such as deep learning has inhibited the interpretability and trustworthiness of these methods, and therefore their clinical utility. In recent years, explainability methods have been developed to allow better interrogation of these approaches. This thesis presents novel applications of explainable deep learning to several medical imaging tasks, to investigate its potential in patient safety and research: the detection of aneurysm clips in CT brain scans for MRI safety, and the detection of confounding pathology in radiology report texts for dataset curation. Furthermore, it makes novel contributions to Parkinson's research, using explainable deep learning to identify progressive brain changes in MRI brain scans and to identify differences in the brains of non-manifesting carriers of Parkinson's genetic risk variants. In each case, convolutional neural networks were developed for classification of the data, and SHapley Additive exPlanations (SHAP) were used to explain predictions. A novel pipeline was developed to apply SHAP to volumetric medical imaging data. The application of explainable deep learning to various types of data and task demonstrates the flexibility of combining convolutional neural networks with SHAP. Additionally, these applications highlight the importance of combining explainability with clinical expertise, both to check the viability of the models and to ensure that they meet a clinical need. These novel applications represent useful new tools for safety and research, and potentially for the improvement of clinical care. | en_US |
dc.language.iso | en | |
dc.publisher | University of Plymouth | |
dc.rights | Attribution-NonCommercial-NoDerivs 3.0 United States | * |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/3.0/us/ | * |
dc.subject | Artificial intelligence | en_US |
dc.subject | Machine learning | en_US |
dc.subject | Medical imaging | en_US |
dc.subject | MRI safety | en_US |
dc.subject | Deep learning | en_US |
dc.subject | Explainable artificial intelligence | en_US |
dc.subject | SHapley Additive exPlanations | en_US |
dc.subject | Parkinson's disease | en_US |
dc.subject | Natural language processing | en_US |
dc.subject | Health data science | en_US |
dc.subject | Convolutional neural networks | en_US |
dc.subject.classification | PhD | en_US |
dc.title | Explainable Deep Learning for Medical Imaging Classification | en_US |
dc.type | Thesis | |
plymouth.version | publishable | en_US |
dc.identifier.doi | http://dx.doi.org/10.24382/5231 | |
dc.rights.embargoperiod | No embargo | en_US |
dc.type.qualification | Doctorate | en_US |
rioxxterms.funder | Engineering and Physical Sciences Research Council | en_US |
rioxxterms.identifier.project | EP/T518153/1 | en_US |
rioxxterms.version | NA | |
plymouth.orcid_id | 0000-0002-8984-7798 | en_US |
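
For illustration only, and not taken from the thesis record above: a minimal Python sketch of the kind of CNN-plus-SHAP pairing described in the abstract, applying SHAP's GradientExplainer to a toy 3D convolutional classifier on volumetric input. The architecture, input shapes, random stand-in tensors, and the choice of GradientExplainer are assumptions made for this example, not the author's pipeline.

# Illustrative sketch (not the thesis pipeline): explaining a toy 3D CNN
# with SHAP on volumetric data. Shapes and layers are arbitrary assumptions.
import torch
import torch.nn as nn
import shap

class Tiny3DCNN(nn.Module):
    """Toy 3D convolutional classifier for single-channel volumes."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = Tiny3DCNN().eval()

# Background volumes act as the SHAP reference distribution; "volume" is the
# scan to explain. Both are random stand-ins for preprocessed MRI/CT inputs.
background = torch.randn(8, 1, 32, 32, 32)
volume = torch.randn(1, 1, 32, 32, 32)

# GradientExplainer attributes the prediction to individual voxels, producing
# per-class attribution maps with the same spatial shape as the input volume.
explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(volume)

GradientExplainer is used here simply because it scales to large 3D inputs via gradient sampling; the record does not state which SHAP explainer the thesis pipeline actually employs.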