Show simple item record

dc.contributor.supervisor: Ifeachor, Emmanuel
dc.contributor.author: Courtman, Megan
dc.contributor.other: Faculty of Science and Engineering
dc.date.accessioned: 2024-09-26T15:33:58Z
dc.date.available: 2024-09-26T15:33:58Z
dc.date.issued: 2024
dc.identifier: 10611005
dc.identifier.uri: https://pearl.plymouth.ac.uk/handle/10026.1/22598
dc.description.abstract:

Machine learning is increasingly being applied to medical imaging tasks. However, the "black box" nature of techniques such as deep learning has inhibited the interpretability and trustworthiness of these methods, and therefore their clinical utility. In recent years, explainability methods have been developed to allow better interrogation of these approaches.

This thesis presents novel applications of explainable deep learning to several medical imaging tasks, to investigate its potential in patient safety and research: the detection of aneurysm clips in brain CT scans for MRI safety, and the detection of confounding pathology in radiology report texts for dataset curation. It also makes novel contributions to Parkinson's research, using explainable deep learning to identify progressive brain changes in MRI brain scans, and to identify differences in the brains of non-manifesting carriers of Parkinson's genetic risk variants in MRI brain scans. In each case, convolutional neural networks were developed to classify the data, and SHapley Additive exPlanations (SHAP) were used to explain their predictions. A novel pipeline was developed to apply SHAP to volumetric medical imaging data.

The application of explainable deep learning to various types of data and tasks demonstrates the flexibility of combining convolutional neural networks with SHAP. Additionally, these applications highlight the importance of combining explainability with clinical expertise, to check the viability of the models and to ensure that they meet a clinical need. These novel applications represent useful new tools for safety and research, and potentially for improving clinical care.
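As an illustrative sketch of the approach described in the abstract (not code from the thesis), SHAP's GradientExplainer can attribute a volumetric CNN classifier's prediction to individual voxels. The model architecture, input shapes, and data below are hypothetical placeholders:

```python
# Hedged sketch: voxel-level SHAP attributions for a 3D CNN classifier.
# All shapes, the toy model, and the random data are illustrative only.
import torch
import torch.nn as nn
import shap

# Hypothetical minimal 3D CNN for single-channel volumes, two classes.
model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool3d(1),
    nn.Flatten(),
    nn.Linear(8, 2),
)
model.eval()

# Placeholder tensors standing in for preprocessed MRI/CT volumes.
background = torch.randn(4, 1, 32, 32, 32)  # reference samples for the explainer
volumes = torch.randn(2, 1, 32, 32, 32)     # scans whose predictions we explain

# GradientExplainer estimates each voxel's contribution to the model output.
explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(volumes)  # per-class voxel attribution maps
```

In practice, the resulting attribution maps would be overlaid on the original scans for clinical review.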

dc.language.iso: en
dc.publisher: University of Plymouth
dc.rights: Attribution-NonCommercial-NoDerivs 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/3.0/us/
dc.subject: Artificial intelligence
dc.subject: Machine learning
dc.subject: Medical imaging
dc.subject: MRI safety
dc.subject: Deep learning
dc.subject: Explainable artificial intelligence
dc.subject: SHapley Additive exPlanations
dc.subject: Parkinson's disease
dc.subject: Natural language processing
dc.subject: Health data science
dc.subject: Convolutional neural networks
dc.subject.classification: PhD
dc.title: Explainable Deep Learning for Medical Imaging Classification
dc.type: Thesis
plymouth.version: publishable
dc.identifier.doi: http://dx.doi.org/10.24382/5231
dc.rights.embargoperiod: No embargo
dc.type.qualification: Doctorate
rioxxterms.funder: Engineering and Physical Sciences Research Council
rioxxterms.identifier.project: EP/T518153/1
rioxxterms.version: NA
plymouth.orcid_id: 0000-0002-8984-7798


