CAP: Capacity Building for Trustworthy AI in Medical Systems (TAIMS)

The NSF Expanding AI Innovation through Capacity Building and Partnerships grant is a stepping stone towards building an artificial intelligence institute at Meharry Medical College devoted to foundational and use-inspired advances in Trustworthy AI in Medical Systems.

TAIMS will bring AI researchers and medical professionals together to work synergistically, contributing to fundamental scientific advances in trustworthy AI in medical systems. The institute is dedicated to developing a skilled and diverse future workforce through deeply integrated activities that advance education, broaden participation and prepare future AI experts through exciting new programs. 

Our Vision
The proposed AI institute will address algorithmic and societal biases as well as security and privacy concerns, and will advance research in trustworthy AI to improve the utility of clinical AI systems for physician decision-making and use in hospitals.

The institute will focus on four major research themes:
1) explainable AI in medical systems,
2) ethical and responsible AI for medical systems,
3) security and privacy-preserving AI in medical systems, and
4) use-inspired research in medical systems.

Team


PI: Vibhuti Gupta, Ph.D.
Assistant Professor, Computer Science and Data Science


Co-PI: T.L. Wallace, Ph.D.
Chair, Biomedical Data Science Department
Professor, Computational Sciences

Co-PI: Uttam Ghosh, Ph.D.
Associate Professor of Cybersecurity

Co-PI: Todd Gary, Ph.D.
Director, External Research and Development Partnerships
Assistant Professor, Biomedical Data Science


Co-PI: David Lockett
Grants Proposal Development and Awards Management Specialist

Student Research Assistants

Julian Broughton

Destiny Pounds

Ange Rukundo

About Artificial Intelligence in Medical Systems

AI and machine learning (ML) have made significant progress in medical systems and have achieved human-level performance in skin cancer classification, diabetic retinopathy detection, chest radiograph diagnosis, and the detection and treatment of sepsis. While these AI/ML achievements are encouraging and can lead to better treatment and diagnosis, few clinical AI solutions are deployed in hospitals or are actively utilized by physicians.

Existing clinical AI methods carry algorithmic and societal biases introduced during the development pipeline that lead to misinformation, misdiagnosis, and health disparities. For example, existing methods are often biased toward specific ethnic groups, which results in unreliable predictions for other ethnic groups. Moreover, black-box AI clinical decision systems are susceptible to cyberattacks, raising security and privacy concerns.

1. Explainable AI in medical systems

Artificial intelligence in health care and medicine is expected to become increasingly critical, especially as a resource in areas such as research, diagnostics, and clinical practice. Many practitioners, clinicians, researchers, and patients may not trust or have the experience to use AI unless it is explainable, verifiable, and trustworthy.

Explainable AI (XAI), also known as interpretable AI, is artificial intelligence in which humans can understand the reasoning behind decisions or predictions made by the AI. It contrasts with the “black box” concept in machine learning, where even the AI’s designers cannot explain why it arrived at a specific decision.

This research thrust will focus on the following activities:

  • Develop XAI and interpretable methods for high dimensional, longitudinal and time-series medical data.
  • Develop XAI methods to explain AI model failures.
  • Develop novel XAI tools that are applicable to various real-world health care datasets.
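To make the idea concrete, one widely used model-agnostic explanation technique is permutation importance: shuffle one feature at a time and measure how much the model's accuracy drops. The sketch below uses synthetic data and a hypothetical stand-in classifier purely for illustration; it is not a project deliverable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for tabular clinical data: the outcome depends
# strongly on feature 0, weakly on feature 1, and not at all on feature 2.
X = rng.normal(size=(400, 3))
y = (2.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=400) > 0).astype(int)

def model_predict(X):
    # A fixed, hypothetical classifier (assume it was trained elsewhere).
    return (2.0 * X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

def permutation_importance(X, y, predict, n_repeats=20, seed=0):
    """Model-agnostic explanation: shuffle one feature at a time and
    report the average drop in accuracy it causes."""
    rng = np.random.default_rng(seed)
    baseline = (predict(X) == y).mean()
    importances = []
    for j in range(X.shape[1]):
        drop = 0.0
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # break the feature/outcome link
            drop += baseline - (predict(X_perm) == y).mean()
        importances.append(drop / n_repeats)
    return np.array(importances)

imp = permutation_importance(X, y, model_predict)
print(imp)  # feature 0 dominates, feature 1 matters a little, feature 2 is ~0
```

Ranking features this way gives clinicians a first-pass answer to "what did the model rely on?", though richer XAI methods (e.g., per-patient explanations) go further.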

2. Ethical and Responsible AI in medical systems

Ethical and responsible AI deals with adherence to well-defined ethical guidelines regarding fundamental values, including individual rights, non-discrimination, and non-manipulation, along with legal regulations for the ethical use of AI tools and technologies. Used ethically, AI tools benefit society by producing cleaner products, reducing harmful environmental impacts, increasing public safety, and improving human health. Used unethically, they can lead to disinformation, deception, human abuse, bias, prejudice, discrimination, and privacy violations.

In medical systems, these ethical concerns must be taken seriously because they directly affect people’s health: algorithmic and societal biases in predictions lead to misdiagnosis, maltreatment, and disparities that undermine the trustworthiness of AI systems.

The major goals of this specific research theme will be to:

  • Develop clear guidelines on how to ethically and responsibly develop AI for medical systems.
  • Develop clear guidelines on how to ethically and responsibly deploy AI for medical applications.
  • Detect the ways AI can go wrong for medical systems, and bring them to the attention of AI developers and users.
  • Develop approaches to detect and mitigate human/data-induced biases in the medical datasets and models.
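As a small illustration of bias detection, one common fairness check is the demographic parity difference: the gap in positive-prediction rates between demographic groups. The sketch below uses hypothetical predictions; it is a deliberately narrow criterion for illustration, not the project's method.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.
    A value near 0 indicates the model flags both groups at similar rates
    (a narrow fairness criterion; it says nothing about accuracy per group)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical model predictions for patients from two demographic groups.
preds = [1, 1, 0, 1, 0, 0, 1, 0]
grp   = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, grp))  # rates 0.75 vs 0.25 -> 0.5
```

In practice a suite of such metrics (equalized odds, calibration by group, and others) is needed, since no single number captures fairness.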

3. Security and privacy-preserving AI in medical systems

Security in health care involves protecting electronic health records (EHRs), health tracking devices, medical equipment, and software used for health care delivery and management from unauthorized access, use, and disclosure. The three goals of security are to protect the confidentiality, integrity, and availability (CIA) of critical patient data, which, if compromised, could put patient lives at risk.

This research thrust will focus on these activities:

  • Develop security and privacy-preserving methods with use-cases in health care.
  • Develop AI/ML methods for identifying and predicting security threats in health care.
  • Develop federated learning approaches for privacy preservation in health care.
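Federated learning keeps raw patient data on-site: each hospital trains a model locally and shares only model updates, which a coordinating server aggregates. A minimal sketch of the aggregation step (federated averaging, shown here with hypothetical weight vectors rather than real models) looks like this:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg aggregation: combine model weights trained locally at each
    hospital, weighted by local dataset size, without any raw patient
    data ever leaving the hospital."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)              # (n_clients, n_params)
    fractions = sizes / sizes.sum()                 # each client's share
    return (stacked * fractions[:, None]).sum(axis=0)

# Hypothetical weight vectors from three hospitals of different sizes.
w1, w2, w3 = np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])
global_w = federated_average([w1, w2, w3], client_sizes=[100, 100, 200])
print(global_w)  # [3.5 4.5] -- the larger hospital pulls the average toward w3
```

Real deployments add further protections on top of this, such as secure aggregation or differential privacy, since model updates alone can still leak information.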

4. Use-inspired research in trustworthy AI in medical systems

Use-inspired research in medical systems combines techniques developed in the above research themes and applies them to real-world use cases to build trustworthy AI systems. The major goals of this specific research theme are to:

  • Develop a test bed to demonstrate a trustworthy AI system by leveraging existing techniques and combining them with methods developed in the project.
  • Apply the developed test bed to a real-world medical application to demonstrate its utility.
  • Share the results of the developed test bed with diverse users to demonstrate trustworthy AI in medical systems.

These goals focus on the following specific use cases to achieve the broader science goals of the AI Institute:

Use case #1: Dermatology: Develop explainable, secure, and generalizable AI tools for real-world diagnosis and prediction of skin diseases.

Dermatologists diagnose a wide variety of skin diseases, including skin cancers, inflammatory conditions like atopic dermatitis and psoriasis, and contagious diseases such as measles. Globally, an estimated 3 billion people have inadequate access to medical care for skin diseases. Advances in AI tools make it possible to detect skin diseases and identify individuals at risk; however, most existing tools are biased, lack diversity in their datasets, are non-explainable, and have noisy diagnostic labels.

The large size of the imaging datasets used to build these AI tools raises security and privacy issues. The result is ethically problematic, untrustworthy AI solutions that physicians cannot use for accurate diagnosis and prognosis of skin diseases. Thus, there is a great need to develop ethical and trustworthy AI solutions in the dermatology domain.

Use Case #2: Intensive care: Develop explainable, secure, and generalizable AI tools for real-world risk prediction in ICU patients.

Intensive care units provide care to patients who are critically or seriously ill or injured. More than 5 million patients are admitted annually to U.S. ICUs for intensive or invasive monitoring; support of airway, breathing, or circulation; stabilization of acute or life-threatening medical problems; comprehensive management of injury and/or illness; and maximization of comfort for dying patients.

Recent AI advances have produced work on publicly available datasets to build models for clinical prediction tasks in intensive care, such as in-hospital mortality, physiological decompensation, length of stay, and phenotype classification. However, bias and fairness challenges in risk prediction models still hamper the successful adoption of medical AI tools. Thus, there is a great need to develop ethical and trustworthy AI solutions in intensive care research to build fair models for risk prediction tasks.

Educational initiatives

Educational outreach activities play a crucial role in maximizing the impact of AI teaching and trustworthy AI. These activities aim to extend the reach of these programs beyond their immediate participants and engage a wider audience.

Multiple education and outreach activities will be organized as part of capacity building in trustworthy AI in medical systems. Educational activities consist of seminar series on trustworthy AI in medical systems and developing a new course on trustworthy AI for the biomedical data science and data science Ph.D. programs.

Outreach activities include trustworthy AI teaching modules for K-12 educators and summer academies for K-12 students.

Trustworthy AI teaching modules for K-12 Educators


Our capacity building initiative for TAIMS (Trustworthy AI in Medical Systems) is committed to broadening participation and engaging women, K-12 educators and students, and underrepresented communities in the field of AI and machine learning. To this end, Vibhuti Gupta, Ph.D., and his student Destiny Pounds developed a nine-lesson set of trustworthy AI teaching modules that introduces educators to different aspects of trustworthy AI with detailed explanations, examples, and hands-on exercises.

About the Teaching Modules

We developed nine modules that include detailed explanations, examples, and hands-on exercises. Each module includes:

  • A pre-recorded video,
  • A PDF of the slide deck from that video, and
  • A use case that demonstrates a real-world application of the concept, along with a scenario question that asks you to work out a solution based on what you have learned.

A transitional video helps reinforce key concepts before you proceed to the next module. Some hands-on modules require prior programming exposure, ideally in Python; others only require a tool for identifying and mitigating biases in datasets.

Please review the Trustworthy AI Modules for K-12 Educators Manual for helpful instructions before starting the modules.

Hands-On Exercises

You can also try the optional exercises at the links below. These exercises relate to the content in Module 3, so we recommend completing at least that module first. The notebooks are available on Google Colab, where you can walk through them and see a demonstration of the concepts covered in Module 3.

Detecting and mitigating sex bias in credit risk decisions

Identifying Bias in Mental Health Data

Aggregated slides of modules and use cases

If you prefer, you can also access PDFs with all of the module slides and use cases at these links.

Modules

Use Cases

An introduction to the Trustworthy AI Teaching Modules

Module 1: Intro to Ethical and Trustworthy AI

Module 2: Trustworthy AI: Ethics

Module 3: Trustworthy AI: Fairness

AI Fairness Slides

AI Fairness Use Case

Module 3 to 4 Transitional Video

Module 4: Trustworthy AI: Privacy

Module 5: Trustworthy AI: Security

Module 6: Trustworthy AI: Robustness

Module 7: Trustworthy AI: Safety

Module 8: Trustworthy AI: Explainability

Module 9: Trustworthy AI: Accountability

Trustworthy AI Conclusion Module

Please complete this brief survey after finishing all modules.

The Ethical and Responsible AI in Medical Systems Seminar Series explores topics related to the potential risks, opportunities, and challenges for ethical and responsible AI in health care.

Previous Seminars

Humanistic exchange or simply a human interface: examining the progress of medicine in the framework of inequity

Imanni Sheppard, Ph.D.
Assistant Professor, and Co-Director, Bioethics and Medical Humanities Thread,
UIUC Carle Illinois College of Medicine

Presentation Slides

Application of AI in Chronic Disease Management

Shumit Saha, Ph.D.
Assistant Professor, Biomedical Data Science

Presentation Slides

Blockchain and AI for transforming Healthcare Through Secure Data Management and Enhanced Patient Care

Debashis Das, Ph.D.
Postdoctoral fellow
Meharry School of Applied Computational Sciences

Presentation available soon

Legal policies, regulations and ethics in healthcare

Vibhuti Gupta, Ph.D.
Assistant Professor, Computer Science and Data Science

Presentation Slides

ChatGPT is Just the Beginning, Generative AI Will Transform Computing

Jules White, Ph.D.
Professor of Computer Science
Senior Advisor to the Chancellor for Generative AI in Education and Enterprise Solutions
Vanderbilt University

Presentation available soon

Institutional review board review variation in bioethics health research and central IRB

Abdul Sawas, Ph.D.
Human Subjects Protection Administrator
Meharry Medical College

Rajbir Singh, MBBS
Executive Director for Precision Medicine and Health Equity Trials Design
Meharry Medical College

Presentation available soon

TAIMS Planning Workshop No. 1

The TAIMS Planning Workshop featured two days of engaging sessions focused on identifying the requirements, gaps, and challenges for trustworthy AI in medical systems. The work will help build a plan for mapping out the research themes, important questions, methods, challenges, and applications that will define our direction towards building an artificial intelligence institute at Meharry Medical College devoted to foundational and use-inspired advances in Trustworthy AI in Medical Systems.

The workshop featured speakers from multiple, diverse disciplines including medical, basic science, bioinformatics, computer science, and data science. The keynote speakers were Troy Tazbaz, director, Digital Health Center of Excellence, Center for Devices and Radiological Health and Office of Strategic Partnerships & Technology Innovation, at the U.S. Food and Drug Administration, and Suresh K. Bhavnani, Ph.D., M.Arch., FAMIA, professor, biomedical informatics and director, Discovery and Innovation through Visual Analytics at the University of Texas Medical Branch.

Session Recordings

Ethical Considerations in Healthcare

Vibhuti Gupta, Ph.D.
Assistant Professor, Computer Science and Data Science
School of Applied Computational Sciences

The Current State of AI and the Implications of Generative AI in Health Care

Brenden Fowkes
Global Industry Technology Leader
Healthcare

AI For Clinical Diagnostic Decision Making: Can Explainability be a Backstop Against Biased AI?

Sarah Jabbour
Ph.D. Candidate, Computer Science and Engineering
University of Michigan

Control-Fused Intrusion Detection Systems for Cyber Security in Unmanned Aerial Vehicles

Mohammad Ashique Rahman, Ph.D.
Assistant Professor, Dept. of Electrical and Computer Engineering
Florida International University

Unveiling Disparities: A Data-Driven Exploration of African-American Health Experiences

Jamell Dacon, Ph.D.
Assistant Professor, Dept. of Computer Science
Morgan State University

An Overview of Ethics and Equity of Artificial Intelligence in Healthcare

Benjamin Collins, M.D.
Vanderbilt University Medical Center

Trustworthy AI in Healthcare

Sherrine Eid
SAS

Enhancing Public Health through Innovations and Emerging Technologies

Long Nguyen, Ph.D.
Assistant Professor, Computer Science and Data Science
School of Applied Computational Sciences

AI Augmented Reality Police E-Trainer for Culturally Competent De-Escalation and Non-Lethal Force Police Training to Eradicate Police Violence Against Black Males

Jayfus Doswell, Ph.D.
Founder and CEO, Juxtopia, LLC

Machine Learning Improves Survival Prediction after Allogeneic Hematopoietic Cell Transplantation

Akshay Sharma, MBBS, MSc
Pediatric Hematologist, Oncologist and Transplant Physician
St. Jude Children’s Research Hospital

From AI to Advocacy: My Reluctant Journey into Policy Translation

Suresh K. Bhavnani, Ph.D., M.Arch., FAMIA
Professor, Biomedical Informatics
Director, Discovery and Innovation through Visual Analytics
University of Texas Medical Branch

Concluding Panel and Closing

Panelists: Sarah Jabbour, Jayfus Doswell, Ph.D., and Suresh K. Bhavnani, Ph.D., M.Arch., FAMIA

Upcoming Seminars

There are no upcoming events.

News Feed

Trustworthy AI in Medical Systems Planning Workshop

April 2–3. Join us for a two-day, virtual workshop focused on identifying the requirements, gaps, and challenges for trustworthy AI in medical systems.


Gupta receives NSF CAP grant to pursue trustworthy artificial intelligence in medical systems

Vibhuti Gupta, Ph.D., assistant professor, computer science and data science, has received a National Science Foundation Expanding AI Innovation through Capacity Building and Partnerships (NSF CAP) grant. The two-year, $395,703 award will be applied to foundational and use-inspired advances in trustworthy artificial intelligence in medical systems.