Artificial intelligence (AI) now invites itself into every discussion and every economic sector. A scarecrow for some, reassuring for others, AI does not spare the field of health.
Aware of the stakes of this technological race, particularly the ethical ones, the National Council of the Order of Physicians (CNOM) published its white paper “Doctors and patients in the world of data, algorithms and artificial intelligence” in January 2018. This is an opportunity for us to revisit what AI in health actually means, what it contributes, and what risks it may carry.
But what exactly is AI?
We hear the term “artificial intelligence” every day, and it is sometimes misused. It can frighten because it pairs the term “intelligence”, that is, a mental and cognitive capacity to act within one’s environment, with the term “artificial”, which refers of course to what emanates not from nature but from human work, but can also evoke the idea of the factitious. Hence, perhaps, some reluctance toward this new concept.
New? Not that new. The term has existed since the 1950s. Far more complex than it seems, it has evolved considerably over the years, keeping pace with new learning methods (Machine Learning, Deep Learning), and there is no doubt it will evolve further in the years to come.
In the past, the definition of AI was limited to that of the algorithm. Today, AI can be defined more generally as the capacity of a machine (that is, an “object” able to act by itself or under human control) to reproduce actions or functions that are usually those of living beings (human or animal).
In this respect, the development of deep learning has done much to move machines from merely imitating and reproducing actions toward increasingly “intelligent” behavior.
We are therefore very far from simple “robotics”, sometimes wrongly used as a synonym for AI.
Finally, it should be noted that specialists distinguish two types of AI:
- Weak AI, which designates the execution of a program previously defined and written by humans;
- Strong AI, which goes much further, since the machine is then capable of producing, but also of understanding, intelligent behavior, and even of deriving new skills from it. “Super AIs”, defined as systems with capacities superior to humans’, are today mentioned above all by those who warn of the dangers of AI.
What are the applications of AI in medicine?
It is impossible to be exhaustive, so diverse are the “technical” applications of AI in health.
As the CNOM points out in its white paper, the following tools can be schematically grouped under the banner of AI:
- Diagnostic aids, for example by cross-referencing billions of data points or analyzing images;
- Robotic surgery, which, however impressive it may be, remains mostly in human hands;
- Virtual reality: 3D glasses can be worn by patients during awake surgery, for example, or by surgeons to “model” the patient and gain precision, as was the case in early December 2017 for the fitting of a shoulder prosthesis at the Bobigny hospital;
- Health applications and connected objects that allow patients to become more involved in their own care, and the caregiver to ensure more regular and in-depth monitoring;
- Genetic tests to predict risk and survival rates for certain cancers, for example;
- The use of data in research, which makes it possible in particular to run reverse searches and to accelerate processes;
- 3D printers, which allow custom-made medical devices to be produced quickly;
- Serious games and simulation, increasingly used in the training of health professionals;
- Large-scale exploitation of health data (the famous Big Data, or mega data).
And in the care relationship?
There, too, examples abound: “animator robots” in retirement homes, intended to stimulate residents cognitively or to “entertain” them; humanoid robots that interact with children in pediatrics or child psychiatry; “reception robots” trained to converse with patients and to detect their emotions (!) in order to adapt their speech; virtual coaches; conversational agents in mental health; and so on.
It is in this area that the massive intrusion of AI raises the most ethical problems. Indeed, the caregiver/patient relationship is at the very heart of care. While it is widely accepted that AI can bring efficiency, precision, and speed to technical acts, delegating the caregiver’s relational role to a machine can appear far more shocking.
It is in this relationship that trust is established, a guarantee of the quality of care. Entrusting this aspect to a robot can certainly have advantages in a context of staff shortages and/or cost rationalization, but it also risks losing an essential element of care: the singular one-on-one exchange, empathy, leading to a dehumanization of the relationship.
Aware of this possible drift, the CNOM recalls in its white paper that “the doctor must remember that he is treating a person who is sick and that he is not only fighting the disease from which an individual suffers. (…) Medical empathy helps to heal.”
AI is definitely the topic of the day. Every day, specialist and general-interest media alike report on this or that experiment, this or that exploit of the machine, however trivial. This avalanche of information inevitably carries semantic inaccuracies and “sensational” news devoid of any scientific interest. These fuel public fantasies, either toward an overflowing and sometimes unjustified enthusiasm or, on the contrary, toward an irrational fear of machines that are ever more intelligent and therefore potentially threatening…
Without going so far as to imagine “killer robots” that, as in a science-fiction film, would suddenly rebel against humanity, it is clear that AI involves certain risks, which are difficult to list exhaustively:
- Breaches of privacy and violations of professional secrecy in the use of personal data, all the more so as many voices are calling for rules in this area to be relaxed so as not to slow certain uses of AI. On this point, the CNIL remains firm on data protection. There is also the question of possible data loss or of data being used for malicious purposes.
- The disappearance of certain jobs that could be replaced by AI processes, even if the CNOM considers this fear unfounded, convinced that the human relationship will persist in the field of health. The fact remains that initial and continuing training will have to keep pace with AI so that health professionals can adapt to changes in their professions.
- Dehumanization of the relationship: six out of ten people say they would not be ready to interact with a robot in the health sector.
- Responsibility in the event of machine error: who will be responsible, since at present there is no specific legal regime in this area?
- The opacity of algorithms, which does not always allow humans to keep control of the machine: in its white paper, the CNOM underlines that, with deep learning, it is impossible to analyze the reasoning that led the machine to its result. The fear of seeing an AI “escape” all human control then becomes real. Today it is still commonly accepted that the machine, although it has capacities superior to humans in some respects, lacks human “common sense”, which requires a global rather than segmented view of a problem; but what if, in a few years, progress allows AI to free itself from all human control?
These risks – already existing or potential – must not lead to stagnation. Hence the usefulness of finding a way to support AI developments, without slowing them down, but limiting the risks of abuses.
Ethics and flexible law: solutions to support AI?
While some – lawyers in particular – call for specific AI regulation, most specialists tend to prefer a more flexible framework. Thus, in its 33 recommendations, the CNOM invokes recourse to “soft law”, a flexible legal approach that makes it possible to regulate a subject far less rigidly and cumbersomely than the traditional legislative framework, so that a field can be regulated without its development being held back by excessive constraints.
More and more voices are being raised, including among AI’s most ardent defenders, demanding that ethics be integrated at the heart of AI, especially when machines or robots are called upon to play a social, “human” role alongside the patient.
In its white paper, the CNOM is clear: “In this rapidly advancing technological whirlwind, we must propose – the Order in any case – to succeed in organizing and ensuring the complementarity between man and machine, the former retaining the ethical capacity to always have the last word.”
Ethics, which must always guide the approach of the healthcare professional, would undoubtedly constitute the necessary counterweight to the blind and unlimited development of AI. It would thus be an acceptable compromise between the supporters of an “AI race” and those, more circumspect, who are worried about its possible abuses.
The debate has only just begun.