Ethical and Scientific Issues in Developing AI in Medicine: Parallels to Drug Development

Friday, March 22, 2019 - 8:00am


2B, Memorial Center for Learning and Innovation, 228 W. Miller St., Springfield, IL


Alex London, PhD, the John & Marsha Ryan Bioethicist-in-Residence, will present "Ethical and Scientific Issues in Developing AI in Medicine: Parallels to Drug Development" on Friday, March 22.

Dr. London will also present to the SIU School of Law on March 20. That presentation will be videoconferenced to the Dirksen Conference Room at SIU Medicine's Medical Library, 801 N. Rutledge St., Springfield.

About Alex London, PhD

Alex John London, PhD, is the Clara L. West Professor of Ethics and Philosophy and director of the Center for Ethics and Policy at Carnegie Mellon University. He is an elected Fellow of the Hastings Center whose work focuses on ethical and policy issues surrounding the development and deployment of novel technologies in medicine, biotechnology and artificial intelligence. He is co-editor of Ethical Issues in Modern Medicine, one of the most widely used textbooks in medical ethics, and has published more than 85 papers in leading philosophy journals (such as Mind and Philosophers’ Imprint), high-impact science and medical journals (such as Science, eLife, JAMA, The Lancet, and PLoS Medicine), as well as numerous other journals and collections. Professor London’s work on ethics and AI examines the nature of algorithmic bias, how to encode alternative models of moral decision making in formal systems, social trust, and the nature and source of uncertainty in AI systems.

For more than a decade, Professor London has helped to shape key ethical guidelines for the oversight of research with human participants. From 2012 to 2016, he was a member of the Working Group on the Revision of the CIOMS 2002 International Ethical Guidelines for Biomedical Research Involving Human Subjects. Prior to that, he was an expert commentator at three World Medical Association meetings for the revision of the 2013 Declaration of Helsinki. From 2007 to 2018, he was a member of the ethics working group of the U.S. HIV Prevention Trials Network, where he was part of the group that drafted the HIV Prevention Trials Network Ethics Guidance for Research. From 2016 to 2017, he was part of the U.S. National Academy of Medicine Committee on Clinical Trials During the 2014-15 Ebola Outbreak, and, from 2016 to 2018, he was a member of the U.S. Health and Human Services Advisory Committee on Blood and Tissue Safety and Availability. He has served as an ethics expert in consultations with numerous national and international organizations, including the U.S. National Institutes of Health, the World Health Organization, the World Medical Association, and the World Bank.

March 20, SIU Law School Lecture:
“Accountability and Non-Domination in the Use of AI Systems in Medicine: Validation vs. Explainability”

Abstract: Breakthroughs in machine learning are enabling the use of artificial intelligence (AI) to perform a wide range of diagnostic and predictive tasks in medicine. This prospect has prompted utopian hype, as well as dystopian hysteria, dramatizing the importance of ensuring that systems involved in life-and-death decisions merit public trust. Essential to securing such trust are clear practices and procedures to ensure accountability and respect for the freedom of stakeholders from arbitrary interference at the hands of machine intelligence. A common proposal for achieving these goals imposes requirements like explainability or interpretability that seek, in different ways, to lay out the operation of such systems for human inspection. Because the most powerful AI systems are often opaque “black boxes,” these requirements may be purchased at the price of reduced predictive accuracy. In this talk, Professor London will argue that such requirements are misguided in domains—such as medicine—where our knowledge of fundamental causal relationships is precarious and under-developed. Instead, he will argue that we should promote trust and accountability by clearly defining the tasks such systems can perform and the conditions necessary to ensure acceptable system performance, and by rigorously validating their accuracy under those well-defined conditions.

March 22, SIU Medical School Lecture:
“Ethical and Scientific Issues in Developing AI in Medicine: Parallels to Drug Development”

Abstract: The management of uncertainty in medicine raises important ethical issues. In this talk, Professor London will discuss some of the sources of uncertainty surrounding the development and deployment of artificial intelligence (AI) systems in medicine, exemplified by two strategies for employing deep neural networks to make medical diagnostic decisions. He will argue that, although AI systems are different from drugs in various ways, their development and deployment are similar in ethically relevant respects regarding the nature and source of the uncertainties that must be addressed prior to widespread use. Building on prior work on the structure of clinical translation for new drugs (Kimmelman and London 2015), Professor London will argue that there should be a strong, default presumption in favor of requiring prospective clinical trials to validate claims of utility for AI systems in medicine, and will discuss several sources of uncertainty about clinical utility that such trials should address.

For more information about these presentations, contact Kristie Parkins.