Khari Motayne

The Need for Explainable AI Mandates in Healthcare

Updated: Sep 16, 2023

The rise of artificial intelligence and machine learning (AI/ML) is changing how we think about healthcare's future. Whether it is the discovery of new antibiotics like halicin, the streamlining of diagnoses, or the personalization of treatment plans, this technology opens doors to quicker advancements and broader access to healthcare for all. Yet genuine concerns have emerged as AI/ML applications become more accessible and their users become less and less well-versed in how the technology works. The potential of AI/ML in healthcare is enormous, offering patients a level of care that would otherwise not be possible. From bespoke treatment plans to continued innovation in drug development, it is not an exaggeration to say that it could be the healthcare innovation of the century. But this will be possible only if the industry adopts responsible and ethical approaches. As AI/ML becomes more accessible and integrated into the healthcare profession, mandates around explainability will help foster trust and accountability while ensuring these technologies are safe and ethical.


One of the central challenges in using AI-powered algorithms for decision-making is their inherent complexity, often referred to as the "black box" dilemma. Traditional AI systems, such as rule-based algorithms, can provide clear explanations for their decisions, making them transparent to healthcare professionals. In contrast, modern AI/ML models, especially deep learning neural networks, exhibit remarkable performance but lack transparency in their decision-making processes; even the most advanced data scientists struggle to decipher their path to a decision. Much as the decisions people make can be difficult to interpret, neural networks, which by design mimic the neurons of the biological brain, are not always easy to understand. This opacity poses a significant risk given the role these algorithms will play in healthcare. Understanding the rationale behind any diagnosis or treatment is critical for patient-doctor trust.

In the aftermath of the pandemic, there has been a renewed focus on the staggering toll of health disparities in the United States. AI/ML algorithms excel at detecting patterns in large data sets that would otherwise be impossible or exceedingly challenging for people to find. Their outputs, however, are wholly grounded in the quality of their training data, which means that even implicit bias reflected in the data will often lead to biased outcomes. Of course, the inverse is also true: given a suitable data set and the proper parameters, AI/ML algorithms can be a powerful tool for detecting bias in data sets, as sketched below. Even putting aside more forward-looking applications of machine learning, algorithms already play a growing role in what types of information doctors have access to. Healthcare marketing spend is increasingly going towards digital properties, many of which use algorithms to determine what content to serve and which advertisements doctors see. Without a fairness doctrine in place, this can skew how doctors and patients receive crucial information. Unintentional skews across geography, gender, and ethnicity are common in algorithmically powered ad serving, and they can affect everything from awareness of clinical trials in a community to access to information on the latest treatments.
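To make this concrete, the snippet below sketches one simple data-set audit: comparing how well each demographic group is represented in the training data and what outcome rates the labels encode for each group. It is a minimal illustration in Python using pandas; the column names ("ethnicity", "received_treatment") are hypothetical stand-ins, not fields from any particular data set.

```python
# A minimal sketch of a training-data audit (hypothetical column names).
# Skewed representation or base rates here are a warning sign that a model
# trained on this data may reproduce the same skews in its predictions.
import pandas as pd

def audit_training_data(df, group_col="ethnicity", label_col="received_treatment"):
    summary = df.groupby(group_col)[label_col].agg(
        n="size",             # how many records each group contributes
        positive_rate="mean", # base rate of the labeled outcome per group
    )
    summary["share_of_data"] = summary["n"] / len(df)
    return summary
```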


Either of these challenges alone poses substantial risks to patient trust and outcomes; taken together, they could have damaging cascading effects on the future of health equity. Trust in the healthcare system has declined in recent years, and it is essential that applications of AI/ML do not exacerbate these issues, particularly in underserved communities, where distrust has been a pervasive challenge. AI/ML outputs that are explainable and interpretable offer an ethical course correction to this potential erosion of trust.

IBM defines explainable AI as "a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms." Explainable AI/ML can enhance the diagnostic accuracy and reliability of AI-driven healthcare applications. In medical image analysis, for example, interpretable AI models can highlight the specific regions or features that contribute to a diagnosis, aiding radiologists and pathologists in their decision-making. This increases confidence in AI-generated results and empowers healthcare professionals to independently validate and corroborate the findings, leading to more accurate and reliable diagnoses. Beyond imaging, healthcare professionals face the daunting task of sifting through vast patient data sets to determine the most suitable treatment plans. Explainable AI/ML can provide transparent decision support, helping doctors understand why an algorithm recommended a particular treatment option for a specific patient. Such outputs enable physicians to tailor treatments more effectively, accounting for each individual's unique characteristics and medical history and ultimately improving patient outcomes.
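One common way to produce such highlights is occlusion sensitivity: systematically mask out patches of an image and measure how much the model's confidence in the diagnosis drops. The sketch below assumes a trained PyTorch image classifier (`model`) and a single image tensor; both are hypothetical placeholders rather than any specific product's API.

```python
# A minimal occlusion-sensitivity sketch: regions whose masking causes the
# largest confidence drop are the regions the model relies on most.
import torch

def occlusion_saliency(model, image, target_class, patch=16, stride=8):
    """image: (C, H, W) tensor; returns a coarse importance heatmap."""
    model.eval()
    _, h, w = image.shape
    with torch.no_grad():
        baseline = torch.softmax(model(image.unsqueeze(0)), dim=1)[0, target_class]
        heatmap = torch.zeros((h - patch) // stride + 1, (w - patch) // stride + 1)
        for i, y in enumerate(range(0, h - patch + 1, stride)):
            for j, x in enumerate(range(0, w - patch + 1, stride)):
                occluded = image.clone()
                occluded[:, y:y + patch, x:x + patch] = image.mean()  # grey out a patch
                prob = torch.softmax(model(occluded.unsqueeze(0)), dim=1)[0, target_class]
                heatmap[i, j] = baseline - prob  # larger drop => more important region
    return heatmap
```

Overlaying such a heatmap on the original scan lets a radiologist see, at a glance, whether the model is attending to clinically plausible regions or to artifacts.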


With AI/ML algorithms increasingly influencing critical medical decisions, ensuring these models do not introduce biased or discriminatory outcomes is imperative. Explainable AI/ML enables researchers and clinicians to identify potential biases in the training data and model architecture, mitigating the risk of unjust practices. By shedding light on the decision-making process, it fosters accountability, prompting healthcare professionals to critically evaluate and challenge a model's suggestions when necessary. The lack of explainability in AI/ML systems has also raised concerns among regulatory authorities and legal bodies. In healthcare, where decisions can have life-altering consequences, it is crucial to comply with relevant regulations and to be able to justify the actions of AI-driven applications. Explainable AI/ML helps meet regulatory requirements and provides an audit trail, enabling healthcare professionals to understand the reasoning behind a specific recommendation or outcome in disputes or legal challenges.
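As a concrete illustration, a basic fairness audit compares a model's behavior across demographic groups, for example its positive-prediction rate (demographic parity) and its true-positive rate (equal opportunity). This is a minimal sketch assuming a pandas DataFrame of labeled predictions with hypothetical column names; real audits would use validated fairness metrics chosen with clinical and ethical context in mind.

```python
# A minimal sketch of a model fairness audit (hypothetical column names).
import pandas as pd

def group_metrics(df, group_col="ethnicity", label_col="diagnosed", pred_col="model_flag"):
    rows = []
    for group, g in df.groupby(group_col):
        positives = g[g[label_col] == 1]  # patients who truly have the condition
        rows.append({
            group_col: group,
            "selection_rate": g[pred_col].mean(),              # demographic parity check
            "true_positive_rate": positives[pred_col].mean(),  # equal opportunity check
            "n": len(g),
        })
    return pd.DataFrame(rows)
```

Large gaps between groups on either metric do not settle the moral question, but they flag exactly where clinicians and ethicists need to look.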


AI/ML algorithms that are interpretable give data scientists, healthcare professionals, and patients the confidence they need to rely on their outputs. But the first step has to come from organizations determining what "fairness" means in the context of their mission statements and business goals. Institutions that are considering or currently using AI/ML to offer goods and services should form AI councils to weigh the ethical ramifications of these technologies. Fairness is ultimately a moral question, one that precedes AI/ML and one that it cannot answer for us. The level of explainability required will vary with the task. It is also worth noting that, despite the excitement around neural networks and deep learning, those technologies and the predictive power they bring to the table are not always required. To quote Professor Michael R. Roberts regarding financial AI/ML models, "The message is data, not models. That's what leads to successful machine learning implementations" (Roberts, n.d.). As stakeholders evaluate which tools to use, the balance between interpretability and performance will be critical. When a complex model is warranted, one tested approach is to use simpler, interpretable algorithms to explain the outputs of complex neural nets, as sketched below.
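One such method is the global surrogate: fit a shallow, human-readable model, such as a small decision tree, to mimic the black-box model's predictions, then inspect the rules the tree learned. The sketch below uses scikit-learn; `black_box` stands in for any trained model with a `predict` method and is a hypothetical placeholder.

```python
# A minimal global-surrogate sketch: a depth-limited decision tree is trained
# to imitate the black-box model, and its fidelity score tells us how
# faithful the resulting explanation is.
from sklearn.tree import DecisionTreeClassifier, export_text

def fit_surrogate(black_box, X, max_depth=3, feature_names=None):
    targets = black_box.predict(X)                 # imitate the model, not the ground truth
    surrogate = DecisionTreeClassifier(max_depth=max_depth).fit(X, targets)
    fidelity = surrogate.score(X, targets)         # share of predictions the tree reproduces
    print(f"Surrogate fidelity: {fidelity:.2%}")
    print(export_text(surrogate, feature_names=feature_names))
    return surrogate
```

A low fidelity score is itself informative: it signals that no simple rule set captures the model's behavior, which should temper how much weight its outputs are given in high-stakes decisions.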


Ultimately, patient trust is the cornerstone of any successful healthcare system, particularly in historically underserved communities, where gaps in trust are a primary contributor to disparities in outcomes. As AI/ML technologies gain prominence, ensuring patients' confidence in the diagnostic and treatment processes is vital. By adopting explainable AI/ML, healthcare providers can establish a transparent and comprehensible framework, allowing patients to understand the rationale behind their diagnoses and treatment plans. That transparency, in turn, fosters trust and promotes patient engagement in their healthcare journey.

Incorporating explainable AI/ML in healthcare is not merely a matter of preference but an ethical imperative. As AI/ML technologies evolve and transform healthcare, transparency and interpretability are essential to building trust, ensuring patient safety, and facilitating collaboration between AI systems and healthcare professionals. AI councils and future legislation by governmental bodies will be essential to keeping these technologies on their intended course and preventing them from exacerbating issues around health equity. By prioritizing explainability, interpretability, and fairness, the healthcare industry can harness the full potential of AI/ML while upholding the principles of responsible, ethical, and patient-centered care.


Sources:

  1. https://news.mit.edu/2020/artificial-intelligence-identifies-new-antibiotic-0220

  2. https://abcnews.go.com/Health/ai-detect-treat-cancer-potential-risks-patients/story?id=101431628

  3. https://www.forbes.com/sites/forbestechcouncil/2023/01/24/ais-biggest-promise-the-democratization-of-precision-medicine/?sh=775285f81ba1

  4. https://jme.bmj.com/content/48/10/764

  5. https://medicine.yale.edu/news-article/yale-study-documents-staggering-toll-of-health-disparities-for-black-americans/

  6. https://www.brookings.edu/articles/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/

  7. https://www.nature.com/articles/s41591-019-0649-2

  8. https://www.ibm.com/watson/explainable-ai

  9. https://www.nature.com/articles/s41746-023-00837-4

  10. https://www.sciencedirect.com/science/article/pii/S2666389921002026

  11. Roberts, M. R. (n.d.). Credit Risk - Models vs. Data - Module 3 – Finance. Coursera. https://www.coursera.org/learn/wharton-ai-applications-marketing-finance/lecture/FOz8a/credit-risk-models-vs-data
