The International Tinnitus Journal


Official Journal of the Neurootological and Equilibriometric Society
Official Journal of the Brazil Federal District Otorhinolaryngologist Society


ISSN: 0946-5448


Volume 28, Issue 1 / June 2024

Research Article, Pages: 65-69
DOI: 10.5935/0946-5448.20240012

Balancing Act: Navigating the Ethics and Governance of Artificial Intelligence in Healthcare and WHO's Role in Shaping the Future

Authors: Akshat Mehta, Nancy*, Srishti Sonkala, Adarsh Kumar Mishra


Abstract

The integration of technology in healthcare has revolutionized patient care, enabling personalized and efficient medical treatments and interventions. The expanding range of digital healthcare technologies is crucial for enhancing patient-provider communication, optimizing treatment plans, and supporting disease prevention and health surveillance. As the World Health Organization (WHO) leverages its global influence, it advocates for the ethical use and governance of emerging technologies such as Large Multi-Modal Models (LMMs), which have shown promising capabilities in areas ranging from diagnosis to medical education. However, ethical challenges, such as maintaining autonomy, ensuring equity, and protecting privacy, necessitate stringent governance to avoid potential disparities and misuse. The collaboration between WHO and other global bodies, such as the International Telecommunication Union (ITU), emphasizes the development of global standards and guidelines to ensure safe, effective, and equitable digital health solutions. The continual adaptation of technology in healthcare demands comprehensive regulation, transparent practices, and international cooperation to uphold the standards that protect and enhance patient well-being and public health.

Keywords: Digital Healthcare Innovation, Ethical Governance, WHO Standards, Patient-Centric Technology, Large Multi-Modal Models (LMMs).


Introduction

The exploration of technology in healthcare is expanding, bringing forth innovations that significantly improve patient care and therapeutic advancements. Within this realm, various systems are emerging or in development to cater to the diverse needs of healthcare and therapeutic progress. These solutions offer a wide range of applications, supporting patients throughout their healthcare journey, enhancing communication with healthcare providers, and aiding in treatment adherence.

As healthcare transitions towards a more patient-centric approach, personalized strategies are becoming increasingly prevalent in decision-making processes. This evolution enables the utilization of data to promote patient well-being, facilitate engagement, predict and prevent diseases, manage conditions effectively, and customize treatment plans for individuals. Consequently, the integration of technology in clinical settings is growing, with medical devices and systems aiding in decision-making processes and patient assessment.

Moreover, technology plays a crucial role in the development and evaluation of medical products, contributing to drug discovery and patient enrichment in clinical research.

The WHO recognizes the substantial potential that technology holds for improving public health and medical practice. However, it also acknowledges that fully harnessing the benefits of technology requires addressing ethical challenges within healthcare systems, among professionals, and for those receiving medical and public health services [1]. While many of the ethical concerns discussed in the WHO guidance predate the rise of advanced technology, its integration introduces additional considerations.

Whether technology can advance the interests of patients and communities depends on a collective effort to design and implement ethically defensible laws and policies and ethically designed technologies. There are also potential serious negative consequences if ethical principles and human rights obligations are not prioritized by those who fund, design, regulate or use technologies for health [2]. The opportunities and challenges of technology are thus inextricably linked.

Advanced technology has the potential to enhance healthcare providers' abilities in improving patient care, offering precise diagnoses, optimizing treatment plans, aiding in pandemic preparedness and response, guiding health policy decisions, and allocating resources within healthcare systems. However, to fully leverage this potential, healthcare workers and systems need comprehensive knowledge about the environments where such systems can operate safely and effectively, the prerequisites for ensuring dependable and suitable usage, and mechanisms for ongoing evaluation and monitoring of system performance. Additionally, healthcare workers and systems require access to education and training to utilize and sustain these systems safely and effectively.

Cutting-edge technology also has the potential to empower patients and communities to manage their healthcare and better understand their changing needs. To achieve this, patients and communities need reassurance that their rights and interests will not be subordinated to the commercial interests of technology companies or the surveillance objectives of governments [3]. It also means integrating the ability to identify health risks into health systems in a way that upholds human autonomy and dignity, keeping humans at the forefront of healthcare decision-making.

Cutting-edge technology has the potential to help resource-poor countries, where patients often struggle to access healthcare workers or medical professionals, bridge gaps in healthcare services. However, these systems need to be carefully tailored to suit the diverse socioeconomic and healthcare environments, accompanied by training in digital literacy, community involvement, and awareness campaigns. Systems primarily relying on data from individuals in high-income countries may not be effective for those in low- and middle-income settings. Therefore, investments in technology and its infrastructure should prioritize the development of equitable healthcare systems by avoiding biases that could hinder fair provision and access to healthcare services [4].

The COVID-19 pandemic has reshaped healthcare practices. Numerous new methods emerged to address the pandemic, although some proved ineffective. Ethical concerns have arisen regarding certain methods, particularly around surveillance, infringement of privacy and autonomy rights, health and social disparities, and the prerequisites for trustworthy and lawful practice. During its deliberations on the WHO guidance, the expert group also developed interim WHO recommendations on the use of proximity tracking methods in COVID-19 contact tracing.

World Health Organization (WHO) and Digital Healthcare: Tracing the Role

The World Health Organization (WHO) is a global leader in shaping the landscape of digital healthcare, aiming to leverage digital technologies for improved health outcomes and strengthened health systems worldwide. Through its Digital Health and Innovation division, WHO fulfills various roles and functions to advance the field of digital healthcare.

One of WHO's key roles is in setting global standards for digital health technologies. It develops and promotes standards and guidelines to ensure the interoperability, quality, and ethical use of digital health solutions across different healthcare settings [5]. These standards facilitate collaboration among countries and stakeholders, enabling the adoption of innovative digital solutions.

Additionally, WHO is committed to building capacity in digital healthcare. It provides technical assistance, capacity building, and training programs to empower healthcare workers with digital literacy and skills [6]. By enhancing their capacity to utilize digital tools for disease prevention, diagnosis, treatment, and healthcare delivery, WHO strengthens healthcare systems.

Furthermore, WHO fosters innovation and research in digital health. It promotes research, pilot projects, and innovation hubs in collaboration with academia, industry, and technology developers. These initiatives facilitate the development and evaluation of novel digital solutions to address emerging health challenges and improve health service delivery.

In terms of policy and governance, WHO advocates for evidence-based policies and governance frameworks to regulate digital health technologies. It promotes ethical principles, data privacy, and security standards to ensure the responsible and equitable implementation of digital healthcare solutions while safeguarding individuals' rights and well-being.

Moreover, WHO emphasizes collaboration and coordination in digital healthcare. It facilitates global collaboration among governments, international organizations, civil society, and the private sector to address common challenges and opportunities. Through platforms like the WHO Global Strategy on Digital Health and the Digital Health Atlas, WHO promotes knowledge sharing, best practices dissemination, and peer learning among countries.

Essential Ethical Guidelines for Implementing AI in Healthcare

WHO supports a set of essential ethical principles. It is hoped that these principles will serve as a basis for governments, technology developers, companies, civil society, and inter-governmental organizations to adopt ethical approaches in utilizing technology for health purposes.

Preserving Human/Individual Autonomy: The use of technology might shift decision-making to machines, potentially undermining human autonomy. Upholding the principle of autonomy means ensuring that individuals maintain control over healthcare systems and medical decisions. Respecting individual autonomy also involves providing healthcare providers with necessary information for safe and effective decision-making and ensuring individuals understand how these systems impact their care. Additionally, it requires safeguarding privacy, maintaining confidentiality, and obtaining valid informed consent through appropriate legal frameworks for data protection.

Promoting Human Well-being and Public Safety: Technologies must prioritize the welfare of individuals and the public. Designers of these technologies should adhere to regulatory standards for safety, accuracy, and effectiveness in clearly defined applications or scenarios. Measures for quality control during implementation and ongoing improvement in technology utilization should be available. Preventing harm requires ensuring that these technologies do not cause mental or physical harm that could be avoided by alternative practices or approaches.

Ensuring Clarity, Comprehensibility, and Transparency: Technologies must be easily understood by developers, medical professionals, patients, users, and regulators. Achieving this involves two main strategies: enhancing transparency and making technology understandable [7]. Transparency entails providing sufficient information before designing or implementing a technology, enabling meaningful public engagement and discussion about its design and appropriate use. Technologies should also be explainable based on the audience's ability to comprehend them.

Promoting Responsibility and Accountability: It is crucial for individuals to have a clear, transparent understanding of the tasks that systems can perform and the conditions necessary for achieving desired outcomes. While technologies carry out specific tasks, stakeholders bear the responsibility of ensuring their capability to perform these tasks and that they are used appropriately and by adequately trained individuals. Responsibility can be ensured through evaluation by patients and clinicians during the development and deployment of technologies. This requires the implementation of regulatory principles both upstream and downstream of the process, establishing points of human oversight. In case of any issues, accountability must be established, with adequate mechanisms in place for questioning and providing remedies for individuals and groups adversely affected by decisions.

Promoting Inclusivity and Equity: Ensuring fairness requires that technologies designed for healthcare encourage broad and fair access, regardless of age, gender, income, race, ethnicity, sexual orientation, ability, or other protected characteristics. These technologies should be widely shared and available for use in various settings, catering to the diverse needs and capacities of different communities. They should avoid embedding biases that disadvantage already marginalized groups, as bias undermines fairness and equality. Additionally, efforts should be made to minimize power imbalances between different stakeholders involved in the development and use of these technologies. Continuous monitoring and evaluation are essential to identify and address any disparities affecting specific groups. Technologies should never perpetuate or worsen existing biases or discrimination.

Advocating for Responsive and Sustainable Practices: Ensuring responsiveness involves continuously evaluating applications to see if they meet expectations and requirements transparently. It also means aligning technology with broader sustainability goals for health systems, environments, and workplaces. This includes designing systems to minimize environmental impact and enhance energy efficiency, in line with global efforts to protect the environment. Sustainability also entails addressing potential disruptions in the workplace, providing training for healthcare workers to adapt to new technologies, and mitigating potential job losses due to automation.

Such systems are intricate, relying not only on their underlying code but also on the data they are trained on, drawn from a variety of sources including clinical settings and user interactions. Improved regulation can play a crucial role in mitigating the risk of exacerbating biases present in training data [8].

For instance, accurately capturing the diversity of populations can pose challenges, potentially resulting in biases, inaccuracies, or even system failures. To address these challenges, regulations can be implemented to mandate reporting of attributes such as gender, race, and ethnicity of individuals featured in training data. Additionally, intentional efforts can be made to ensure that datasets are representative of diverse populations.
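To make this concrete, the sketch below shows how such reporting could be operationalized with very simple tooling. It is a minimal illustration, assuming a hypothetical CSV export of training data with columns such as sex, ethnicity, and age group and an arbitrary 10% threshold; it is not a WHO-mandated procedure, and the file and column names are placeholders introduced here for illustration.

# Minimal sketch (illustrative only): summarise demographic attributes of a
# hypothetical training-data export and flag values whose share falls below
# an arbitrary threshold. File name, column names, and threshold are assumptions.
import csv
from collections import Counter


def representation_report(path, attributes, min_share=0.10):
    """Summarise demographic attributes and flag under-represented groups."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    if not rows:
        return {}
    total = len(rows)
    report = {}
    for attr in attributes:
        counts = Counter(row.get(attr) or "missing" for row in rows)
        shares = {value: count / total for value, count in counts.items()}
        report[attr] = {
            "shares": shares,
            "under_represented": [v for v, s in shares.items() if s < min_share],
        }
    return report


if __name__ == "__main__":
    # "training_data.csv" and the column names are illustrative placeholders.
    summary = representation_report("training_data.csv", ["sex", "ethnicity", "age_group"])
    for attribute, info in summary.items():
        print(attribute, info["shares"], "flagged:", info["under_represented"])

In practice, such a report could be extended with intersectional breakdowns (for example, sex by age group), but the underlying principle of mandated, transparent reporting remains the same.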

WHO and the Evolving Landscape of Healthcare and AI Regulation

In 2018, the WHO teamed up with the International Telecommunication Union (ITU) to establish the Focus Group on Artificial Intelligence for Health (FG-AI4H), a platform aimed at addressing crucial questions about technology's role in the healthcare sector [9].

Acknowledging the growing global interest and involvement in these technologies, the FG-AI4H recognized the need for a sustainable institutional framework. This partnership led to the creation of the Global Initiative on Artificial Intelligence for Health (GI-AI4H), officially launched in July 2023 under the guidance of the WHO, ITU, and the World Intellectual Property Organization (WIPO) [10]. The GI-AI4H now stands as a durable institutional structure dedicated to nurturing, facilitating, and implementing technological innovation in healthcare.

As the GI-AI4H continued to develop, the WHO remained committed to shaping the future of healthcare through technology. This commitment is evident in a wide range of advisory materials, strategic projects, and collaborative global initiatives that envision a future in which the integration of technology and healthcare is the norm.

The Global Initiative on Artificial Intelligence for Health represents a joint effort led by three specialized agencies of the United Nations: WHO, ITU, and WIPO, each bringing unique expertise and contributions to this collective endeavour.

In 2024, the World Health Organization (WHO) issued new guidelines on the ethics and governance of Large Multi-Modal Models (LMMs), a rapidly growing technology with diverse applications in healthcare [11]. These guidelines offer over 40 recommendations for governments, technology companies, and healthcare providers to ensure the responsible use of LMMs in promoting and protecting public health.

LMMs can process various types of data inputs, such as text, videos, and images, and generate diverse outputs. They stand out for their ability to mimic human communication and perform tasks not explicitly programmed. LMMs have gained rapid adoption, with platforms like ChatGPT, Bard, and Bert becoming widely recognized in 2023 [12].

"Generative AI technologies have the potential to enhance healthcare, but only if stakeholders address and fully acknowledge the associated risks," stated Dr. Jeremy Farrar, WHO Chief Scientist. "Transparent information and policies are needed to manage the development, deployment, and use of LMMs to achieve improved health outcomes and address existing health disparities" [13].

The guidelines delineate five main applications of LMMs in healthcare:

1. Diagnosis and clinical care, including responding to patient inquiries;

2. Patient-driven utilization, such as symptom investigation and treatment exploration;

3. Administrative tasks, like documentation and summary of patient visits in electronic health records;

4. Medical and nursing education, such as simulated patient encounters for trainees;

5. Scientific research and drug development, including compound discovery.

While LMMs are beginning to be employed for specific health-related purposes, there are documented risks, including generating false, inaccurate, biased, or incomplete information, which could adversely impact health decisions. Additionally, LMMs may be trained on low-quality or biased data, leading to disparities by race, ethnicity, gender identity, or age.

The guidelines also highlight broader health system risks, such as accessibility and affordability of the most effective LMMs. They may also foster 'automation bias' among healthcare professionals and patients, where errors are overlooked or complex decisions are unduly delegated. Furthermore, like other technologies, LMMs are susceptible to cybersecurity threats, posing risks to patient data and healthcare integrity.

To develop safe and efficient LMMs, WHO underscores the importance of involving various stakeholders – governments, technology companies, healthcare providers, patients, and civil society – in all stages of development, deployment, oversight, and regulation.

Key recommendations from the WHO guidelines include [14]:

1. Governments should invest in or provide public infrastructure, including computing power and datasets accessible to developers across sectors, contingent on adherence to ethical principles.

2. Laws and regulations should ensure that LMMs and healthcare applications meet ethical obligations and human rights standards.

3. Regulatory agencies should be designated to assess and approve LMMs and healthcare applications.

4. Mandatory post-release audits and impact assessments should be conducted by independent third parties, published, and disaggregated by user type (a minimal sketch of such a disaggregated audit follows this list).

5. For developers, engagement with potential users and stakeholders should occur early in the development process, allowing for transparency and input on ethical concerns.

6. LMMs should be designed to fulfill specific tasks accurately and reliably, with developers understanding potential secondary outcomes [15].
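As a minimal illustration of recommendation 4, the following Python sketch computes error rates disaggregated by user type from a hypothetical audit log. The record structure, field names, and group labels are assumptions introduced here for illustration only; they are not a schema or format prescribed by WHO.

# Minimal sketch of a disaggregated post-release audit (illustrative only).
# The audit-log structure ('user_type', 'prediction', 'ground_truth') is a
# hypothetical assumption, not a WHO-prescribed schema.
from collections import defaultdict


def disaggregated_error_rates(records):
    """Return the share of incorrect outputs per user group."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for record in records:
        group = record["user_type"]
        totals[group] += 1
        if record["prediction"] != record["ground_truth"]:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}


if __name__ == "__main__":
    audit_log = [
        {"user_type": "clinician", "prediction": "flu", "ground_truth": "flu"},
        {"user_type": "clinician", "prediction": "flu", "ground_truth": "covid"},
        {"user_type": "patient", "prediction": "cold", "ground_truth": "cold"},
    ]
    # Gaps between groups (e.g. clinician 0.5 vs patient 0.0) belong in the published audit.
    print(disaggregated_error_rates(audit_log))

Publishing such group-level breakdowns, rather than a single aggregate accuracy figure, is what makes performance gaps between user groups visible to regulators and the public.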

Conclusion

WHO recognizes the potential of technology to enhance health outcomes across various aspects of healthcare, including clinical trials, diagnosis, treatment, and patient-centered care. Additionally, technology can support healthcare professionals in acquiring evidence-based knowledge and skills to improve healthcare delivery.

To ensure the safe and effective integration of technology in healthcare, WHO collaborates with the International Telecommunication Union (ITU) through the Focus Group on Artificial Intelligence for Health. This collaboration brings together experts from regulatory bodies, policymakers, academia, and industry to explore regulatory considerations and good practices for technology use in healthcare.

The resulting publication provides an overview of regulatory considerations for the use of such technologies in healthcare, covering areas such as documentation, risk management, data quality, and privacy. It serves as a resource for stakeholders involved in healthcare delivery, including developers, regulators, manufacturers, and healthcare providers.

Looking forward, technology, when properly utilized, can significantly improve healthcare outcomes and contribute to personalized treatment planning. Developing countries like India have the opportunity to lead in this technological revolution, leveraging their resources and fostering innovation in healthcare. National and international healthcare organizations play a crucial role in facilitating collaboration and bridging the gap in technology development.

References

  1. Amann J, Blasimme A, Vayena E, Frey D, Madai VI, Precise4Q Consortium. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Med Inform Decis Mak. 2020;20:1-9.
  2. Ho CW, Caals K. A call for an ethics and governance action plan to harness the power of artificial intelligence and digitalization in nephrology. Semin Nephrol. 2021;41(3):282-93.
  3. World Health Organization. Ethics and governance of artificial intelligence for health: WHO guidance. Geneva: World Health Organization; 2021.
  4. Moor M, Banerjee O, Abad ZS, Krumholz HM, Leskovec J, Topol EJ, et al. Foundation models for generalist medical artificial intelligence. Nature. 2023;616(7956):259-65.
  5. Murphy K, Di Ruggiero E, Upshur R, Willison DJ, Malhotra N, Cai JC, et al. Artificial intelligence for good health: a scoping review of the ethics literature. BMC Med Ethics. 2021;22:1-7.
  6. Ho CW, Soon D, Caals K, Kapur J. Governance of automated image analysis and artificial intelligence analytics in healthcare. Clin Radiol. 2019;74(5):329-37.
  7. Oala L, Fehr J, Gilli L, Balachandran P, Leite AW, Calderon-Ramirez S, et al. ML4H auditing: from paper to practice. Proc Mach Learn Res. 2020;136:280-317.
  8. Siala H, Wang Y. SHIFTing artificial intelligence to be responsible in healthcare: A systematic review. Soc Sci Med. 2022;296:114782.
  9. Cath C. Governing artificial intelligence: ethical, legal and technical opportunities and challenges. Philos Trans A Math Phys Eng Sci. 2018;376(2133):20180080.
  10. Leimanis A, Palkova K. Ethical guidelines for artificial intelligence in healthcare from the sustainable development perspective. Eur J Sustain Dev. 2021;10(1):90.
  11. Taeihagh A. Governance of artificial intelligence. Policy Soc. 2021;40(2):137-57.
  12. Béranger J. Ethical governance and responsibility in digital medicine: The case of artificial intelligence. Digit Health. 2021;2:169-90.
  13. Shah P, Kendall F, Khozin S, Goosen R, Hu J, Laramie J, et al. Artificial intelligence and machine learning in clinical development: a translational perspective. NPJ Digit Med. 2019;2(1):69.
  14. World Health Organization. Ethics and governance of artificial intelligence for health: WHO guidance. Geneva: World Health Organization; 2021.
  15. Larsson S. On the governance of artificial intelligence through ethics guidelines. Asian J Law Soc. 2020;7(3):437-51.

1. Ph.D. Research Scholar, Hidayatullah National Law University, Raipur, Chhattisgarh, India.

2. Assistant Professor of Law, MATS University, Raipur, Chhattisgarh, India.

Send correspondence to:
Nancy
Ph.D. Research Scholar, Hidayatullah National Law University, Raipur, Chhattisgarh, India. E-mail: akshatm120@gmail.com

Paper submitted on March 30, 2024; accepted on April 26, 2024.

Citation: Mehta A, Nancy, Sonkala S, Mishra AK. Balancing Act: Navigating the Ethics and Governance of Artificial Intelligence in Healthcare and WHO's Role in Shaping the Future. Int Tinnitus J. 2024;28(1):65-69.