
The Ethics of AI

March 19, 2024
AEP team member in conversation with A/Prof. Mangor Pedersen

WHO recently released AI ethics and governance guidelines for Large Multimodal Models (LMMs). The guidance outlines five broad applications of LMMs in health, including scientific research, and highlights risks such as bias and data security.

What, if any, are the impacts of the publication's findings and recommendations for research studies, such as the AEP, that are using AI to create prognostic and diagnostic models?

MP: First, I think it is fantastic that WHO has released these guidelines. This will make AI safer for researchers, clinicians, and the public. Since a core part of the AEP is using novel AI tools to improve epilepsy care, we have also published our own ethical guidelines for the use of AI in the AEP (https://osf.io/preprints/osf/kag75). It is worth pointing out that the AEP uses a range of AI tools in its work to improve epilepsy care.

Is this guidance welcomed in the scientific research field? If so, how, and if not, why?

MP: This is very welcome in the scientific research field, as it is intended to safeguard researchers.

What are some of the challenges of using LMMs in research?

MP: Although the AEP is a groundbreaking project aiming to collect a large clinical dataset prospectively, it remains to be seen how recent LMMs can be used in this context. However, recent studies suggest that contemporary generative AI approaches are promising in delivering clinically reliable information (https://www.nature.com/articles/s41586-023-06291-2), so I guess this is a case of watch this space …

What can be done to mitigate introducing bias and prejudice into training data? 

MP: To me, this is one of the greatest challenges facing clinical AI research, and one that we are ideally placed to tackle in the AEP. We aim to reduce data bias and increase fairness in AI models by collecting data from various locations and capturing a cross-section of the great diversity we have in Australia.

If AI is used in diagnosis, is this communicated to the patient? If not, why not? Is it something that should be, or will be, in the future? And how might this be done in a way that maintains trust?

MP: This is undoubtedly the aim as we move towards a model of more digital care. For us as researchers, the immediate goal is to understand how and why AI algorithms work the way they do. This ensures that we do not treat the algorithms as black boxes but as part of a symbiotic process in which human knowledge is paramount to AI advancement.

Are ethics certifications of LMMs and/or transparency of the data used to train models the way ahead for their use in research?

MP: I think these approaches are all part of good science and will lead to progress within the field. The more significant impact on AI safety will probably come from emerging legislative approaches to AI, such as the EU AI Act (https://artificialintelligenceact.eu/the-act/), which is an excellent step towards safe AI.

 

AEP Participant: Bruce Jeffrey

It was the day before his birthday, in February 2022, when Bruce experienced his first seizure during the night. “I was completely unaware of what was happening and only gained consciousness in the ambulance.”

AEP Participant: Gary Alway

Gary has been living with epilepsy for almost three decades. In his early 20s, his epilepsy was fairly well managed with medication, and his seizures were rare. But then everything changed. He began having multiple seizures and blackouts every day, culminating in a car crash nine years ago, caused by a seizure.

AEP Participant: Fiona Waugh

Fiona didn’t experience her first seizure until she was 34 years of age, and after a further two tonic-clonic seizures in as many days, she was diagnosed with epilepsy. “Since diagnosis I’ve remained drug-resistant with a high frequency of seizure activity. But I’ve always had a desire to try and get on top of it, which has led me to make some big treatment decisions over the years.”