Jian Guan *
1Clinical Center, the National Scientific Data Sharing Platform for Population and Health
2Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
Key words: artificial intelligence; medical ethics; ethical governance; machine learning; brain-computer interface; brain-inspired computer; robots; biohybrids
Abstract  Artificial intelligence (AI) is rapidly being applied to a wide range of fields, including medicine, and has been considered an approach that may augment or substitute for human professionals in primary healthcare. However, AI also raises several challenges and ethical concerns. In this article, the author investigates and discusses three aspects of AI in medicine and healthcare: the applications and promise of AI, special ethical concerns pertaining to AI in some frontier fields, and suggested ethical governance systems. Despite the great potential of frontier AI research and development in medical care, the ethical challenges induced by its applications have put forward new requirements for governance. To ensure “trustworthy” AI applications in healthcare and medicine, the creation of a global ethical governance framework and system, as well as special guidelines for frontier AI applications in medicine, is suggested. The most important aspects include the roles of governments in ethical auditing and the responsibilities of stakeholders in the ethical governance system.
Artificial intelligence (AI) is a superstar that has energized industry and science and is increasingly permeating every aspect of our society, including healthcare and medicine. The term AI was coined by McCarthy in 1955 and defined as “the science and engineering of making intelligent machines”.1 The term implies the use of a computer to model intelligent behavior with minimal human intervention. In this respect, AI spans a broader definition as per the Cambridge dictionary, namely as an interdisciplinary approach that adopts principles and devices from a variety of fields, such as computation, mathematics, logic, and biology, to solve the problems of understanding, modeling, and replicating intelligence and cognitive processes.2 AI is generally accepted as having started with the invention of robots. Da Vinci’s sketchbooks of robots are believed to be his lasting legacy to this field, helping to set the stage for this innovation. His sketches of human anatomy for a humanoid robot design, drawn in 1495, were rediscovered in the 1950s and are regarded as a source of inspiration for a generation of researchers in robotics.
Over the last decade, AI has been applied to different areas such as search engines, machine translation systems, and intelligent personal assistants. AI has also found many uses in the medical field, along with the widespread use of electronic health records (EHRs) and the rapid development of the life sciences, including neuroscience. A recent study traced the global and historical growth of research into AI in health and medicine, based on studies published from 1977 to 2018 and available on the Web of Science platform. It found that 84.6% of the 27,451 papers in total were dated 2008-2018.2 The AI Index report for 2018 showed that the number of AI-related papers published in China increased by 150% between 2007 and 2017.3 Industry representatives are actively deploying frontier AI research and development. In 2014, IBM announced a breakthrough with its TrueNorth chip, which has the potential to revolutionize the computer industry by integrating brain-like capability into devices. It is a brain-inspired chip that consumes merely 70 mW, is capable of 46 billion synaptic operations per second, and has been used to implement deep learning.4 Likewise, governments worldwide have devised plans and designed strategies pertaining to AI applications.5,6 China announced the Next Generation Artificial Intelligence Development (NG-AI) Plan in July 2017. By 2030, the government aims to cultivate an AI industry worth 1 trillion RMB, with related industries worth 10 trillion RMB.5 The NG-AI Plan is the most comprehensive of all national AI strategies,5 with not only goals for research and development (R&D) and industrialization, but also standard setting and regulations, ethical norms, and security, which show China’s intent to actively participate in and lead the global governance of AI. In April 2019, the NG-AI Plan Office of the Ministry of Science and Technology (China) decided to establish a governance committee for the NG-AI Plan.
At present, experts appointed by the committee are studying and drafting the Criteria for the Development of the Next Generation of Artificial Intelligence (tentative title). Suggestions on NG-AI governance, including governance concepts and principles, key areas, and related policies, have also been widely solicited from government departments, enterprises, universities, institutes, and individuals interested in this field.
AI has started revolutionizing several areas of medicine, from the design of evidence-based treatment plans to the implementation of recent scientific innovations. AI is viewed as an augmenting or substitute approach for healthcare professionals. In April 2018, the National Health Commission of China issued The National Criterion and Standards of Hospital Informatization Construction for trial implementation. The use of AI has been proposed to fulfill informatization requirements in tertiary hospitals, which shows the importance of AI in China’s healthcare policy. However, despite its promise for future medical care, AI applications in healthcare and medicine raise ethical issues and challenges that cannot be ignored. The challenges are more prominent in some frontier fields to which AI is being applied, which can also be regarded as NG-AI, including but not limited to machine learning (including deep learning), brain-computer interfaces (BCIs), brain-inspired computers, and biohybrids. Yet, research on AI ethics in health and medicine is lacking.2 This article summarizes the development of AI-related ethical governance ideas and guidelines, and aims to provide insights that are especially relevant to AI in healthcare and medicine.
In general, AI in medicine can be categorized into three main branches, namely, virtual, physical, and a combination of both (cooperation between virtual systems and robots), all of which have shown extraordinary advantages and/or potential in research and clinical work.
The virtual branch of AI is represented by machine learning, a subset of AI comprising methods that automatically detect patterns in data in order to predict trends or support decision-making under uncertain conditions.7,8 Machine learning continues to boost research in genetics and molecular medicine. In addition, AI can facilitate precision medicine by integrating phenotype data from EHRs and genotype data from “omics.”9 In recent years, the emergence of high-throughput sequencing has made it possible to obtain large-scale information on DNA, proteins, and the interactions between molecules. Diagnostic biomarkers and therapeutic targets are now typically discovered directly or through interaction algorithms.10 AI techniques could also be used to optimize clinical trials of innovative drugs and therapies through data-driven precise planning of treatments, prediction of clinical outcomes, and simplified process management (such as the recruitment and retention of patients), thus lowering their complexity and costs.
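The “detect patterns, then predict” idea behind machine learning can be sketched with a deliberately minimal example: a nearest-centroid classifier fitted to invented, synthetic patient data. The feature names, values, and risk labels below are purely illustrative assumptions, not clinical data or any specific published model.

```python
# A minimal sketch of pattern detection followed by prediction.
# All data and labels here are synthetic and hypothetical.
from math import dist

# Synthetic training data: [normalized biomarker A, normalized biomarker B]
training = {
    "low_risk":  [[0.10, 0.20], [0.20, 0.10], [0.15, 0.25]],
    "high_risk": [[0.80, 0.90], [0.90, 0.85], [0.75, 0.80]],
}

def centroid(points):
    """Mean feature vector of a group of patients (the learned 'pattern')."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

# "Training" step: summarize each class by its centroid.
centroids = {label: centroid(pts) for label, pts in training.items()}

def predict(features):
    """Assign a new patient to the class with the nearest centroid."""
    return min(centroids, key=lambda label: dist(features, centroids[label]))

print(predict([0.85, 0.90]))  # -> high_risk
```

Real clinical machine learning replaces this toy distance rule with far richer models, but the pipeline shape, fit on labeled examples and then predict for unseen patients, is the same.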
In the era of digital healthcare, AI has shown great value with the widespread application of EHRs and high-throughput sequencing. AI can potentially provide active guidance to physicians making clinical decisions. This potential, and successful examples of it, are evident in prediction, imaging, and pathological diagnoses and treatments.11-15 In 2018, the US Food and Drug Administration (FDA) granted approval for the first AI-based diagnostic system, IDx-DR, to be marketed. It can detect diabetic retinopathy and make screening decisions through an AI algorithm, after analyzing images of the eyes taken with a retinal camera and uploaded to a cloud server, independently of a clinician.16
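The kind of autonomous screening decision described above can be illustrated in outline: a model produces a disease probability, and a fixed rule maps it to a recommendation. The function name, threshold, and recommendation wording below are hypothetical assumptions for illustration only; they are not the actual IDx-DR algorithm or its clinical cut-offs.

```python
# Hypothetical sketch of threshold-based screening logic.
# The 0.5 cut-off and the recommendation strings are invented examples.
def screening_decision(p_disease: float, image_quality_ok: bool) -> str:
    """Map a model's probability output to a screening recommendation."""
    if not image_quality_ok:
        return "retake image"  # unusable input: no diagnosis is attempted
    if p_disease >= 0.5:
        return "refer to eye-care professional"
    return "rescreen in 12 months"

print(screening_decision(0.82, True))  # -> refer to eye-care professional
```

Note that even this toy rule embodies an ethical choice: where the threshold sits determines the balance between missed disease and unnecessary referrals.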
In healthcare, the physical AI branch is best represented by robots, which are used to assist elderly patients or attending surgeons. Care robots have become increasingly common in aged-care settings.17 A surgical system made by the American company Intuitive Surgical was named Da Vinci in recognition of Leonardo da Vinci’s inspirational impact on this field; it was approved by the US FDA in 2000. Da Vinci surgical systems facilitate complex surgery using a minimally invasive approach and can be controlled by a surgeon from a console.18
BCIs are another active AI application with much promise in medicine. They are computational systems that form a communication pathway between the central nervous system and some output, be it a device or feedback to the user.18 Their major purpose is to improve the quality of patients’ lives. Targeted populations include patients suffering from neurological disorders such as amyotrophic lateral sclerosis,19 spinal cord injuries, or stroke.20 The field of BCI has developed quickly since the first implantable trial began in 2004.21 Recent attempts toward developing BCI-controlled robots have shown potential in the field of assistive robots.22
Endeavors that attempt to “learn from nature”, such as IBM’s TrueNorth, show that biological systems have recently become a source of inspiration for innovative AI solutions. Progress in engineering miniaturized interfaces between living and artificial systems is driving research on biohybrids. Communication and collaboration between the AI and neuroscience fields have become more commonplace.23
Given that science is a double-edged sword, certain discoveries eventually cause harm. This is particularly true of certain frontiers in AI. Hence, the ethical principle of double effect must be carefully considered in AI applications, just as in stem cell research and gene editing. According to this principle, an act performed with good intentions (such as therapy or risk prevention) may nonetheless generate harmful consequences.24,25 Such typically double-effect innovations promise treatment of diseases such as Parkinson’s disease, diabetes, spinal cord injury, and cancer; on the other hand, their research and application may be harmful, even when pursued for medical purposes. Modern AI systems, especially those receiving the greatest attention, such as BCIs, are based on artificial neural networks (ANNs) and are largely regarded as impossible to verify for safety-critical applications, because the decision-making process of an ANN is largely opaque.26,27
As is the case with all new scientific techniques, the principles of biomedical ethics should be applied to AI in healthcare applications. These are autonomy, beneficence, non-maleficence, and justice, as set out in the book Principles of Biomedical Ethics by Tom Beauchamp and James Childress.28 They are embodied as informed consent, privacy and safety, voluntary participation, autonomous decision-making, etc., which must be considered and put into practice in any implementation. They are always valid and binding unless they conflict, with none taking priority over the others. The ultimate goal of ethical governance is to guarantee the safety and interests of all subjects and patients.
Sound ethical criteria for AI in clinical medicine are also significant. To solve ethical dilemmas such as those raised by the use of AI products in a clinical setting, the ethical issues could be analyzed with regard to the four terms under Jonsen’s framework: (1) medical indications, (2) patient preferences, (3) quality of life, and (4) contextual features. These terms cover the social, economic, legal, and administrative contexts in which a case arises.29 Taking BCI as an example, Sullivan et al. investigated the ethics of BCI-related research in user-centered design (UCD). They believed that UCD could help researchers recognize the perspectives of persons with disabilities, considering not only the scientific and financial, but also the ethical, aspects.18
Ethical issues pertaining to AI in medicine have been elaborated upon in Bioethics of Clinical Applications. A partial list of the ethical problems facing the clinical application of AI includes safety, efficacy, privacy, information and informed consent, the right to decide, the “right to try,” costs, and access. With AI, however, it is not enough to follow these basic principles. Big data are indispensable to AI research in healthcare. Moreover, research and development involving AI in medicine is currently inseparable from animal experiments, animal welfare, and animal ethics. However, this article does not focus on such general ethical principles; rather, it discusses the special features and challenges introduced by AI that go beyond these issues. For example, the long-term risks and benefits associated with some frontier AI fields remain unknown. Likewise, it is recognized that certain healthcare-related AI applications, such as care and assistance machines or robots, should abide by clinical ethics, but more research is necessary in these areas. This article, therefore, focuses on the special ethical issues and challenges arising from AI frontiers and their application to medicine.
It is the author’s viewpoint that ethical issues pertaining to AI in medicine should be considered at two levels: that of humanity as a whole and that of the individual. In practice, it is vital to find a balance among three relationships: between science and human society as a whole, between individuals and human society as a whole, and between different individuals. Ethical governance aims to assess and balance these three relationships. The first level of ethical governance generally answers philosophical questions related to the given perspective. The latter levels of governance explore responsible practice to verify ethical principles and make trustworthy choices according to the decision taken at the first level of ethical governance.
Ethical challenges posed by science to society as a whole
At this level, ethics must examine the goals, rights, and wrongs of specific technologies and applications (such as the role of AI, cloning techniques, and gene editing), with the focus resting on the potential of the technology/application to impact society. Technical errors associated with such technologies may cause problems, but successes in areas beyond control could result in even more serious issues. The fields of AI and biotechnology have highlighted the need, in a way never seen before, for the careful review of ethical concerns arising from developments in science. The results of any such review would provide a final “yes” or “no” answer to the implementation of AI or biotechnology in legal terms, so as to prevent the disasters such technologies could unleash on mankind.
We are all aware of how developments in molecular biology have ushered in a new era of molecular medicine, which has given patients access to novel therapies and also offers the promise of personalized medicine. On the other hand, some technologies have dramatically strengthened humans’ ability to change their natural world and environment; their risks, benefits, and future effects are particularly difficult to predict. Hence, while research on certain biological interventions, such as gene editing and stem cell therapy, is currently permitted in somatic cells, it is prohibited in human embryos or germ cells.25 Similarly, deep learning is arguably the core of both biological and artificial intelligence.30 Based on current knowledge and experience, scientists cannot anticipate the impact of deep learning in AI on the environment and humanity’s future. Moreover, biohybrid systems (or biohybrids), which refer to the integration of AI with biological systems, have been capturing the interest of various scientific communities.21 “Neurobiohybridization,” which refers to interfacing brain-inspired devices with the real human brain, has been established as a possibility. Biohybrid systems may come into play as workbenches of “living” artificial systems characterized by self-organization, evolvability, adaptability, and robustness.31,32 Thus, the challenges associated with such technology involve personality, emotion, and humanity’s future, as evidenced within the “robotics” and “biomimetics” communities; this is especially so because biohybrid systems can be used not just as treatment interventions, but also as a means to refine natural evolution. These aspects have undoubtedly caused fear in the public’s mind and will continue to do so unless an effective governance framework is devised.
It is also possible that AI interventions that do not pose such challenges currently may do so in the future, although it might take years or even decades for biohybrids, BCIs, and other neurotechnologies to become part of our daily lives.
Ethical challenges posed by science to individuals
Ethics assessments at this level involve weighing the benefits and risks of special processes and products such as AI, BCIs, and robotics. The focus in this case rests on their potential to impact individuals, such as subjects and patients. The purpose of ethical governance is to instill good practices and avoid negative impacts on participants. The issues at this level manifest largely as potential damage that may result from AI systems or products, even with regard to diagnosis and treatment, and they challenge existing ethics guidelines. Consider BCI as an example: ethical reviews typically focus on the therapeutic outcomes and potential damage caused by current BCI technology to individuals, such as when using a reader on a patient’s scalp to help people with spinal cord injuries, which could lead to a brain infection and subsequent damage to the brain.33 However, a BCI is a “direct connection between living neuronal tissue and artificial devices that establishes a non-muscular communication pathway between a computer (machine) and a brain (human)”.34 The technology will likely decode people’s mental processes and directly manipulate the brain mechanisms underlying their intentions, emotions, and decisions.35 Therefore, for BCI, it is not only people’s privacy, but also their identity, agency, and equality that should be respected and protected.35
Reports indicate that data-driven pathological diagnosis software based on deep learning provides more accurate diagnoses than experts or doctors in the same field. However, it still entails challenges, limitations, and concerns. The scale of the data available to most medical studies has so far been below the common standard in machine learning. System reliability is also a crucial aspect of medical applications. Reported accuracy rests on the assumption that the data used for deep learning follow a normal distribution; however, exceptions do exist, and not all patients or subjects will conform to that distribution. Moreover, unlike machine states, human health and life cannot be classified as simply as 1 or 0. Thus, ethics at this level gives priority to the individual’s interests; no one may sacrifice an individual’s rights and interests, even for the sake of human society as a whole.
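The reliability concern above, that a model trained on one distribution of patients may be untrustworthy for patients who fall outside it, can be sketched as a simple out-of-distribution check. The biomarker values and the z-score threshold below are illustrative assumptions, not a clinical standard.

```python
# A minimal sketch of flagging patients who fall outside the training
# distribution. Data and the 3-sigma threshold are illustrative assumptions.
from statistics import mean, stdev

# Synthetic biomarker values seen during training.
training_values = [5.1, 4.9, 5.3, 5.0, 5.2, 4.8, 5.1]
mu, sigma = mean(training_values), stdev(training_values)

def out_of_distribution(value: float, z_threshold: float = 3.0) -> bool:
    """Flag inputs far from the training distribution; a model's prediction
    for such a patient should not be trusted without human review."""
    return abs(value - mu) / sigma > z_threshold

print(out_of_distribution(5.0))   # -> False (typical patient)
print(out_of_distribution(12.0))  # -> True  (atypical patient)
```

A guard of this kind does not fix the underlying distributional assumption, but it operationalizes the ethical point: atypical patients are routed to a human rather than silently misclassified.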
Another important ethical consideration at this level involves ensuring fairness among individuals once AI applications are allowed into clinical practice. In such cases, the professional will be required to act in the best interests of her/his patient, according to the four requirements under Jonsen’s framework of general ethical issues.36Moreover, each individual should have an equal opportunity to enjoy the benefits provided by AI in medicine.
Ethical governance is an effective complement to law, as it balances and regulates risks and benefits in scientific research. The rapid expansion of AI applications makes it increasingly urgent to create guidelines that consider the societal and ethical impacts of AI on practice. Some ethical AI governance systems that address issues beyond the existing principles and guidelines of biomedical ethics are summarized as follows.
AI-related ethics and standards previously fell within a wider overarching framework of responsible research and innovation (RI), which includes initiatives across policy, academia, and legislation. The aim of the RI framework is to identify and address uncertainties and risks associated with novel areas of science. The RI framework proposes a new process for research and innovation governance, and it has expanded to cover ethical governance by building trust in robotics and AI systems.37,38
The following four conditions for ethically acceptable clinical applications and research can be considered for AI in medicine as well. (1) The principal aim of the act, and the act itself, are good. (2) The harmful effects are not intentionally pursued. (3) The harmful effects are not the aim of the act, and the good effect is not a direct cause-and-effect result of the harmful effect. (4) The intended good effect is as great as, or greater than, the harmful effects, and is proportionate to them.
A comprehensive report entitled Guidelines on Regulating Robotics has been completed by the European RoboLaw project. It reviews both ethical and legal aspects; the ethical analysis covers rights, liability, and insurance, while the legal aspects cover privacy and legal capacity.39 Surgical robots are one of the topics of focus in this report. Another work on robots and robotic devices, named Guide to the Ethical Design and Application of Robots and Robotic Systems, published in April 2016, was led by the British Standards Institution Technical Subcommittee.40 It articulates a broad range of ethical hazards and their mitigation, provides guidance on the identification of potential ethical harms, and offers guidelines on safe design, protective measures, and information for the design and application of robots. Similar clear-cut rules and special guidelines for AI in medicine are much needed.
The European Commission’s High-Level Expert Group on AI published its Ethics Guidelines for Trustworthy AI in April 2019. It proposes the concept of “trustworthy AI” and identifies the ethical principles that should be adhered to when developing, deploying, and using AI systems, namely respect for human autonomy, prevention of harm, fairness, and explicability. In addition, the guideline sets out seven key requirements to ensure that AI systems are lawful, ethical, and robust from both technical and social perspectives.41 The requirements for the development, deployment, and use of trustworthy AI systems include safety, privacy, data governance, transparency, non-discrimination and fairness, and environmental and societal well-being, which are consistent with important components of medical ethics. Together with the above-mentioned ethical principles for trustworthy AI, they constitute vital rules and norms for AI in medicine.
Overall, these principles and guidelines could provide important ideas and specific measures regarding the ethical aspects of the Governance Guidelines for the Next Generation of Artificial Intelligence (tentative title). However, no governance system or guidelines specific to AI in healthcare and medicine yet exist. To govern NG-AI, a special global ethical framework and governance system, including refined standards and guidelines, is required for this realm. It is crucial to demarcate an administrative role within the government to drive AI governance, and to understand the benefits and potential risks for all AI stakeholders, including governments, institutes, and individuals. It is particularly important to clarify the penalties for institutes and individuals who violate the relevant laws, regulations, and norms.
AI has broad applications in medical research and clinical practice. In addition to its advantages and potential for healthcare, AI raises special ethical issues and challenges for ethical governance, especially in some frontier AI fields, which pose potential risks to the environment and humanity’s future.
Some valuable ethical governance guidelines have already been established as important components of the AI governance system. However, to govern trustworthy AI in healthcare and medicine, further globally applicable refinements and special guidelines, especially for frontier AI fields, are required.
The author declared no conflict of interest.
Chinese Medical Sciences Journal, 2019, No. 2