Navigating the Ethical Maze: AI Application in Medical Practices and Human Involvement

The growing adoption of AI in medicine is stirring both optimism and apprehension. Our medical center's ethicist served on a national task force that proposed guidelines to ensure AI medical devices benefit patients without exacerbating health disparities.

Maintaining Human Involvement: Navigating the Moral Implications of AI in Medical Practice

In a significant development, the Society for Nuclear Medicine and Medical Imaging (SNMMI) has released ethical considerations for the development and use of Artificial Intelligence (AI) in medical devices. These guidelines aim to ensure that AI complements, rather than replaces, the expertise of physicians, while addressing key concerns such as data protection, accessibility, and transparency.

The recommendations, published in two papers titled "Ethical Considerations for Artificial Intelligence in Medical Imaging: Deployment and Governance" and "Ethical Considerations for Artificial Intelligence in Medical Imaging: Data Collection, Development, and Evaluation", are not limited to a specific area of AI medical devices but are intended to be applied broadly.

Jonathan Herington, PhD, a member of the AI Task Force of the SNMMI, emphasizes the urgency of solidifying ethical and regulatory frameworks for AI medical devices, as the landscape is shifting quickly.

To avoid deepening health inequities, developers must ensure AI models are calibrated for all racial and gender groups by training them with diverse datasets. However, current AI medical devices are often trained on datasets with underrepresentation of Latino and Black patients, making them less likely to make accurate predictions for these groups.
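The papers do not prescribe a specific method, but one way to see what "calibrated for all racial and gender groups" means in practice is to audit a model's risk scores per subgroup. The sketch below is an illustrative assumption, not code from the SNMMI guidance: the data are synthetic and the column names (`predicted_risk`, `outcome`, `group`) are invented for the example. It uses scikit-learn's `calibration_curve` to compare predicted and observed event rates within each group.

```python
# Illustrative sketch (not from the SNMMI papers): per-subgroup calibration audit
# on synthetic data with assumed column names.
import numpy as np
import pandas as pd
from sklearn.calibration import calibration_curve

def subgroup_calibration(df: pd.DataFrame, group_col: str,
                         label_col: str = "outcome",
                         score_col: str = "predicted_risk",
                         n_bins: int = 10) -> pd.DataFrame:
    """Report the mean |observed - predicted| gap across calibration bins per subgroup."""
    rows = []
    for group, sub in df.groupby(group_col):
        frac_pos, mean_pred = calibration_curve(
            sub[label_col], sub[score_col], n_bins=n_bins, strategy="quantile")
        rows.append({
            "group": group,
            "n": len(sub),
            "calibration_gap": float(np.mean(np.abs(frac_pos - mean_pred))),
        })
    return pd.DataFrame(rows)

# Synthetic example: a score that is well calibrated overall can still be
# miscalibrated for an underrepresented subgroup ("B", 10% of the data).
rng = np.random.default_rng(0)
n = 5000
group = rng.choice(["A", "B"], size=n, p=[0.9, 0.1])
risk = rng.uniform(0, 1, size=n)
# Subgroup "B" has systematically higher true risk than its score suggests.
true_prob = np.clip(risk + np.where(group == "B", 0.2, 0.0), 0, 1)
outcome = rng.binomial(1, true_prob)
df = pd.DataFrame({"group": group, "predicted_risk": risk, "outcome": outcome})
print(subgroup_calibration(df, "group"))
```

In this toy setup, group B's outcomes occur more often than its scores predict, so its calibration gap comes out markedly larger than group A's. That is the kind of disparity that training on more diverse datasets is meant to prevent.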

In order to preserve the physician-patient relationship, AI integration must prioritize human interaction and trust. AI medical devices should be designed to be useful and accurate in all contexts of deployment, and developers should make accurate information about their devices' intended use, clinical performance, and limitations readily available to users.

The task force also recommended building alerts into AI devices or systems to inform users about the degree of uncertainty of the AI's predictions. Clinicians should use AI as an input into their own decision-making, rather than replacing their decision-making.
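The task force does not specify how such alerts should be implemented. As a purely illustrative assumption, the sketch below wraps a classifier's calibrated class probabilities and attaches a low-confidence flag whenever the top probability falls under a chosen threshold; the 0.80 cutoff and the `Prediction` wrapper are invented for this example.

```python
# Illustrative sketch (an assumption, not a design from the SNMMI papers):
# attach an explicit uncertainty flag to a model output so the clinician sees
# the degree of confidence alongside the prediction.
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    probability: float               # calibrated probability of the predicted label
    uncertainty_alert: str | None    # None when confidence clears the threshold

def predict_with_alert(probabilities: dict[str, float],
                       confidence_threshold: float = 0.80) -> Prediction:
    """Wrap class probabilities and flag low-confidence outputs for the user."""
    label, prob = max(probabilities.items(), key=lambda kv: kv[1])
    alert = None
    if prob < confidence_threshold:
        alert = (f"Low confidence ({prob:.0%}): treat this output as one input "
                 f"to clinical judgment, not a standalone result.")
    return Prediction(label=label, probability=prob, uncertainty_alert=alert)

# Example: a borderline case triggers the alert.
print(predict_with_alert({"lesion": 0.62, "no lesion": 0.38}))
```

The point of the design is that the uncertainty travels with the prediction itself, so a reporting interface cannot display the label without also surfacing the flag.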

Moreover, the recommendations highlight the importance of ensuring that AI supports, rather than substitutes for, physicians' clinical judgment and autonomous decision-making. Actively engaging physicians in AI development is crucial so that their perspectives shape its acceptance and practical integration.

The rapid advancement of AI systems necessitates an equally swift establishment of ethical and regulatory frameworks around them. The task force called for increased transparency about the accuracy and limits of AI, and its recommendations cover two main areas: Deployment and Governance, and Data Collection, Development, and Evaluation.

There is a concern that high-tech, expensive AI medical devices may only be available to well-resourced hospitals, leaving under-resourced and rural hospitals without access. The task force emphasized the importance of ensuring all people have access to AI medical devices, regardless of their demographic characteristics.

These points reflect a consensus that ethical AI in nuclear medicine and medical imaging must balance technological advances with core medical ethics principles including beneficence, non-maleficence, autonomy, justice, and transparency. The recommendations, initially developed with a focus on nuclear medicine and medical imaging, can and should be applied to AI medical devices broadly.

  1. Preserving the physician-patient relationship requires prioritizing human interaction and trust in AI integration, making accurate information about each device's intended use, clinical performance, and limitations readily available, and building alerts into AI systems that convey the uncertainty of their predictions.
  2. The rapid advancement of AI in medical devices demands equally swift ethical and regulatory frameworks: to avoid deepening health inequities, models must be calibrated for all racial and gender groups, and access to AI medical devices must not depend on a patient's demographic characteristics or a hospital's resources.
  3. As the SNMMI recommends, AI should complement rather than replace physician expertise while addressing data protection, accessibility, and transparency; as the technology evolves, these principles remain essential to ethical practice in AI-assisted patient care.
