Program Type

Graduate

Faculty Advisor

Dr. Robin Ghosh

Document Type

Poster

Location

Online

Start Date

18-4-2024 8:30 AM

Abstract

Artificial Intelligence (AI) has become established as a vital branch of computer science, a technology designed to emulate human intelligence for proficient problem-solving. Over the past decade, AI quietly became ingrained in everyday life, with many people unaware of its pervasive influence until recently. The introduction of ChatGPT in late 2022 abruptly thrust AI to the forefront of public consciousness, leading to widespread recognition of its applications. Literature surveys show a rapid proliferation of publications on AI, with a particularly sharp increase in writings dedicated to ChatGPT in recent months. AI applications have expanded into nearly every domain of daily life, including medicine and healthcare. The lack of trust and transparency in AI-based healthcare and other applications has led to the emergence of a new field known as Explainable Artificial Intelligence (XAI), in which algorithms are developed to provide human-understandable explanations for AI-generated decisions. XAI is a set of processes and methods that explain how an AI model reached its prediction. While many XAI surveys have been conducted across the healthcare sector, this paper focuses on recent (2019-2024) findings on XAI within the domains of cardiology and neuroscience. The primary objective of this study is to assess various XAI methodologies and machine learning models that elucidate the decision-making processes of AI, enabling clinicians to validate and interpret AI-derived insights with confidence in the fields of cardiology and neuroscience. This paper systematically reviews the advancement of XAI by carefully selecting and analyzing the latest research within cardiology and neuroscience. Multiple journal databases were comprehensively searched following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines.
This review includes 130 articles examining XAI techniques such as SHAP, LIME, decision trees, decision boundary plots, and perturbation-based methods. Additionally, the paper explores potential challenges in the development of XAI, offering insights to guide future research in the field.
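The perturbation-based methods named above share a simple core idea, which SHAP and LIME refine in different ways: nudge one input feature at a time and measure how much the model's output changes. The sketch below is illustrative only; the risk_model, its features, and its weights are hypothetical stand-ins, not a model or data from any study covered in the review.

```python
# Minimal sketch of a perturbation-based feature-importance explanation.
# The "black box" here is a hypothetical linear risk score; in practice
# the same loop can wrap any trained clinical model.

def risk_model(features):
    # Hypothetical stand-in for a black-box cardiology risk model.
    weights = {"age": 0.03, "systolic_bp": 0.02, "cholesterol": 0.01}
    return sum(weights[name] * value for name, value in features.items())

def perturbation_importance(model, features, delta=1.0):
    """Importance of each feature = change in the model's output when
    that feature is nudged by `delta`, holding the others fixed."""
    baseline = model(features)
    importances = {}
    for name in features:
        perturbed = dict(features)
        perturbed[name] += delta
        importances[name] = model(perturbed) - baseline
    return importances

patient = {"age": 64, "systolic_bp": 142, "cholesterol": 210}
for name, score in sorted(perturbation_importance(risk_model, patient).items(),
                          key=lambda item: -item[1]):
    print(f"{name}: {score:+.3f}")
```

For this toy model the importances simply recover the weights; for a nonlinear model they instead give a local, per-patient explanation, which is what makes such methods useful for clinician-facing validation.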

Keywords: black-box models, explainable artificial intelligence, cardiology, neuroscience

Research Symposium Poster- S Pidugu.pptx (706 kB)
Poster - S Pidugu

A Comprehensive Analysis of Explainable Artificial Intelligence (XAI) Methods in Cardiology and Neuroscience: A Systematic Review
