Ethical Dilemmas Concerning Robots in Healthcare
A robot is an automated machine that can execute a specified task with little or no human intervention, and with speed and precision. Medical robotics is a fast-growing area in healthcare, bringing greater precision to surgical procedures and greater efficiency to hospitals and clinics. Robots also assist in surgery, allowing doctors to operate through a tiny incision instead of one several inches long. The problem arises when the use of these robots comes into conflict with ethics and we find it hard to decide which way to tilt. As the French philosopher Albert Camus said, “A man without ethics is a wild beast loose upon this world.”
When Healthcare Communication Robots (HCRs) are introduced to long-term facilities and hospitals for patient communication, new kinds of ethical dilemmas may arise. Since HCRs will be used in healthcare settings, it is important to protect patient rights and maintain their safety. To this end, discussion and collaboration with an interdisciplinary team is crucial to the process of developing these robots for use among patients.
The interaction between patient and doctor has deep historical significance, with accompanying ethical norms, professional guidelines and legal regulation having implications for the wider organisation and delivery of healthcare. Understanding this interaction in depth is critical to shape the artificial systems that are built to replace some or all functions of human doctors. In the context of modern evidence-based medicine, the core relationship between patient and doctor has been modelled in a number of forms.
The European Foresight Monitoring Network (EFMN, 2008) defines healthcare robots as “systems able to perform coordinated mechatronic actions (force or movement exertions) on the basis of processing of information acquired through sensor technology, with the aim to support the functioning of impaired individuals, medical interventions, care and rehabilitation of patients and also to support individuals in prevention programs.” However, policies governing such robots remain inconsistent across research and clinical practice. Several key features of the current model of patient-centred care require re-evaluation in a robotic/AI healthcare model. Confidentiality of medical information is the norm, with only rare scenarios in which it is ethical and legal to breach it. While confidentiality is already facing challenges through the management of electronic healthcare records holding large-scale granular data, these challenges are poised to explode in a robot-doctor era.
More widely, robotic delivery of healthcare creates a host of new medicolegal conundrums. Where does responsibility for a medicolegal error lie in a complex system of potentially opaque AI decision-making and lack of clear medical accountability? Can current professional regulatory bodies continue their role in protecting the public by enforcing professional standards, or are new legal frameworks required? What does a robotic doctor do when faced with a patient who lies, has irregular immigration status, or may have been involved in a crime?
The use of social robots in mental health also raises ethical issues. Mental health is one of the few fields in which this technology has actually been deployed, with autistic spectrum disorders (ASDs) and Alzheimer’s disease considered the most widespread and most promising use cases; both follow the same logic of resorting to artificial sociability to fight the isolation of vulnerable people.
The production of anthropomorphised robots, in which the characteristics we spontaneously attribute to humans can be found, raises first and foremost the issue of confusion between human and robot. In public reports, this risk is considered to be all the greater for vulnerable individuals. One child psychiatrist expresses more doubts, and a certain unease: “Because what bothers me is the idea that not all autistic children, but some of them, are so in their own world that you feel that, between humans and robots, they don’t really grasp the fact that the robot is… not independent, that it isn’t speaking on its own. You feel that it’s sometimes difficult for them to see that. And ethically, I find that somewhat questionable.”
The desire to avoid confusion between human and non-human raises a second question: that of attachment. Public reports present this issue as being particularly critical for vulnerable people, who are considered more likely to succumb to anthropomorphic traps. In the end, from the professionals’ viewpoint, the issue is not so much to prevent attachment, over which they feel they have little control, but rather to avoid fuelling the idea of possible reciprocity, particularly by suggesting that the robot has emotions similar to a human’s. Regarding Pepper, a social robot, one psychologist explains: “It is really sold as having a heart, as being able to cry with you if you are sad. And we don’t like that at all. It is not at all ethical.”
On a concluding note, it is fair to argue that the application of robots in the field of healthcare might not be the best of ideas. When people are unwell, they need emotional support above everything else, and that support must be genuine. With time, they may grow dependent on that bond. A bond with a robot may prove even crueller than loneliness, because once the illusion is shattered, so is the person. In some cases, the company of loved ones may matter more than medical intervention itself. Artificiality is bound to have its restrictions, because it is mechanical. The world may be a horrendous place, but it has not yet been squeezed dry of humanity. Everybody needs at least one person with whom they can let go of all artifice: someone to gaze at the stars with, go fishing with, cry with, argue with, have fun with, confide in and love. We need flesh-and-blood people. Gadgets can never replace human contact.
Written by Srishti Choudary
Illustrated by Anannya Pincha