Gupta receives grant to advance trustworthy and ethical AI in health care

The use of artificial intelligence in health care has led to technological breakthroughs like surgical robots and human-like performance in skin cancer classification, diabetic retinopathy detection, chest radiograph diagnosis and sepsis treatment. Despite these encouraging developments, AI has seen only limited use in clinical settings. A key concern is potentially harmful bias in AI methods.
Thanks to funding from the National Institutes of Health’s National Institute on Minority Health and Health Disparities (NIMHD), Vibhuti Gupta, Ph.D., will lead a team to both identify AI biases and educate others in ethical and responsible AI.
“Trustworthy and ethical AI is critical in health care,” says Dr. Gupta. “Otherwise there can be algorithmic and societal bias in the predictions, which leads to misdiagnosis, maltreatment, and disparities affecting the trustworthiness of AI systems.”
The $218,250 grant is a supplement to NIMHD funding for the Meharry RCMI Program in Health Disparities Research. Dr. Gupta’s project will identify and address ethical considerations in health care AI technologies and provide training on the potential risks, opportunities, and challenges of ethical and responsible AI in health care.
Dr. Gupta and Samuel E. Adunyah, Ph.D., chair and professor, biochemistry and cancer biology, are principal investigators for the grant. Dr. Adunyah is the principal investigator for the original NIMHD RCMI grant.
“What we need to address in clinical AI methods is bias to specific ethnic groups or subpopulations,” says Dr. Gupta. “That bias can be caused by humans or through insufficient representation in the data that builds models.”
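To make that point concrete, the following minimal Python sketch (an illustration for this article, not code from the funded project) shows how under-representation can surface as a measurable performance gap: a model is trained on synthetic data in which one group supplies only a small fraction of the examples and follows a somewhat different feature-outcome relationship, and the model ends up less accurate for that group. All groups, sizes and parameters here are invented for illustration.

```python
# Illustrative only: synthetic example of how under-representation in training
# data can produce subgroup bias. Not code from Dr. Gupta's project.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

def make_group(n, shift):
    """Simulate one patient group; `shift` changes its feature-outcome relationship."""
    X = rng.normal(size=(n, 5))
    y = (X[:, 0] + shift * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# Training data: the minority group contributes only about 5% of the examples.
X_maj, y_maj = make_group(2000, shift=0.0)
X_min, y_min = make_group(100, shift=1.5)
model = LogisticRegression().fit(np.vstack([X_maj, X_min]),
                                 np.concatenate([y_maj, y_min]))

# Held-out evaluation per group: accuracy is typically lower for the
# under-represented group -- the kind of gap a bias audit is meant to catch.
for name, shift in [("majority", 0.0), ("minority", 1.5)]:
    X_test, y_test = make_group(1000, shift)
    print(f"{name} accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```

Real clinical datasets are far messier, but the pattern Dr. Gupta describes is the same: when a model sees too few examples from a group, it learns that group’s outcome patterns poorly.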
Dr. Gupta will work with co-principal investigators Todd Gary, Ph.D., director of external research development partnerships; Long Nguyen, Ph.D., assistant professor of computer science and data science; Lubna Pinky, Ph.D., assistant professor of biomedical physics; David Lockett, grants proposal development and awards management specialist; Nadine Shillingford, Ph.D., chair and associate professor of computer science and data science; and Dean Fortune S. Mhlanga, Ph.D., who will also serve as his mentor for the project.
Ebony Weems, Ph.D., an M.S. Biomedical Data Science student and assistant professor of biomedical science/biomedical data science at Alabama A&M, is also on the project team. She will help design the virtual tutorial series and incorporate AI ethics into her courses.
The first step will be to identify datasets and models from multiple sources that show significant bias and to develop mitigation strategies for them. That work will then be used to enhance Meharry’s RCMI capacity by providing ethical and responsible AI training to RCMI investigators, medical professionals, staff and graduate students. This training will include:
- Virtual training sessions covering several topics related to eliminating or reducing bias.
- Professional development training in AI ethics for up to 20 RCMI investigators, postdocs, staff, medical professionals and graduate students.
- Enhanced teaching modules for two courses in Meharry’s School of Applied Computational Sciences (SACS): MSDS 575 Ethics in Data Science and MSDS 540 Introduction to Artificial Intelligence in Health Care.
- A survey of RCMI investigators, data scientists and health care professionals to identify ethical AI needs and areas of improvement.