Special Article: Palliative Care

Gerontol Geriatr Res. 2023; 9(3): 1093.

Can Computer-Assisted Ethics Support Decision-Making in Healthcare and Nursing?

Joachim Fischer*

Department of Computer Science, Humboldt University, Berlin, Germany

*Corresponding author: Joachim Fischer, Department of Computer Science, Humboldt University, Berlin, Germany. Email: [email protected]

Received: June 28, 2023 Accepted: August 10, 2023 Published: August 17, 2023

Abstract

Since Socrates established the scientific discipline of ethics, the question has arisen as to what this discipline can actually achieve. This article explores whether computer-assisted ethics can support decision-making in hospitals. The question has been discussed in relation to self-driving cars, where it is known as the “trolley problem”. Here, it will be examined in the context of medical and nursing ethics.

The Problem

The question of the relationship between technology and ethics can be posed in various ways, for example as technology assessment or as the question of autonomous driving. However, all of these are merely questions of convenience and comfort. Whether we get affordable electricity, or whether we drive a vehicle ourselves or let it drive us, usually does not affect us as profoundly as questions of life and death.

The meaning of “moral” and “ethics” in German-speaking countries

The terms “moral” and “ethics” are more or less synonyms in English (cf. P. Singer, Praktische Ethik, 1984, 9ff). In the German-speaking countries (Germany, Austria, Switzerland, plus parts of France and Romania) the situation is slightly different: “moral” denotes the everyday use of language on these topics, while “ethics” denotes the reflection on them. Finally, “metaethics” analyzes the form of speaking about moral opinions, and last but not least there is the problem of “casuistry”: how to apply moral principles to individual cases. In this article I will use the following terms in a strictly distinct sense:

- “Moral”: the everyday talk about opinions of good and evil, which does not reflect on possible cultural, religious, or social prejudices.

- “Ethics”: reflection on this everyday use of morals.

- “Casuistry” will be the main topic of this article, whereas

- “Metaethics” will not be the subject of this article.

The Purpose of Such an Algorithm

Delimitation of the Topic

Therefore, I focus on the relationship between medical ethics and technology. The question of whether there were enough ventilators for all COVID-19 patients concerned many people at the beginning of the pandemic, as did the question of how much vaccine was available and who should receive it. Fortunately, these specific questions can now be considered largely solved. (Provide references) However, they have exposed a deeper question in all its severity: how do medical and nursing staff cope with such life-and-death questions for which they are responsible? Although there are rules for this, known as triage, anyone who has had to deal with the triage problem knows that the rules of triage are ethically correct but belong to the most psychologically burdensome aspects of the profession. Could there not be an algorithm that relieves nursing staff and provides them with the ethically "correct" answer?

Such an algorithm is intended to relieve employees in healthcare and nursing professions. The aim is twofold:

1. To find quick answers, because most decisions are time-critical.

2. To find satisfactory solutions that reduce the psychological burden on decision-makers.

The Attempt at Such an Algorithm

Such an algorithm would be a computational rule that can calculate ethical decisions. In the discipline of logic, such computational rules already exist and have achieved spectacular successes, such as the formalization and automated verification of Gödel's proof of God's existence (cf. Benzmüller and Woltzenlogel Paleo, arXiv:1308.4526). In ethics, however, they are currently lacking.

Ethical Schema

Here, I would like to present a possible decision-making schema (Slide 1).

The first and most important decision in ethics is the correct definition of the situation. Many ethical approaches, especially in medical ethics, fail because the situation is not described precisely enough. This also applies to practical and relevant medical questions: in old age, dementia and depression can easily be mistaken for each other, even by experienced doctors. Yet they need to be treated completely differently, and the prognoses are entirely different (cf. ICD-10 F00-F09). There are already initial approaches to supporting diagnoses with artificial intelligence, but I am not familiar with them, so this topic will not be discussed any further.

The standards of the various ethical schools are then applied to this most precise possible definition of the situation. I will demonstrate this in more detail in the second step.

Now, let's first discuss the results: no ethical school can make more than four statements! These statements are:

1. Action is absolutely required.

2. Action is absolutely prohibited.

3. Action is recommended (depending on the circumstances).

4. Action is not recommended (depending on the circumstances).

One could also introduce a fifth category: Insufficient data. However, this can be addressed with a simple rule.

By using the ancient Stoic school, it may be possible to further simplify the categories: actions that are absolutely prohibited, actions that are absolutely required, and the difficult middle ground where everything depends on the circumstances.

The Special Case: Insufficient Data

In this case, Descartes' ethical rule (Discours de la méthode III, 2) can still be applied: if you get lost in a dense forest, always keep going in one direction, and eventually you will reach the light. In this sense, an insufficient database should be treated by continuing as before. A note: I am focusing exclusively on medical ethics in this context.
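As a minimal sketch, one could encode the four statements and the special case as follows. The names (Verdict, resolve) and the mapping of insufficient data onto "continue as before" are my own illustration under the assumptions above, not an existing system:

```python
from enum import Enum

class Verdict(Enum):
    """The four possible statements of an ethical school (see above),
    plus the special case of an insufficient database."""
    REQUIRED = "action is absolutely required"
    PROHIBITED = "action is absolutely prohibited"
    RECOMMENDED = "action is recommended (depending on the circumstances)"
    NOT_RECOMMENDED = "action is not recommended (depending on the circumstances)"
    INSUFFICIENT_DATA = "insufficient data"

def resolve(verdict: Verdict, current_course: str) -> str:
    """Descartes' forest rule: with insufficient data, keep going in one
    direction, i.e. continue the current course of treatment."""
    if verdict is Verdict.INSUFFICIENT_DATA:
        return f"continue as before: {current_course}"
    return verdict.value
```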

Application of Ethical Rules

A significant part of any ethics with serious aspirations, whether secular or religiously grounded, recognizes overarching maxims that function as super-rules above all other rules. Examples, without claiming to be exhaustive, include:

The Golden Rule: What you do not want done to yourself, do not do to others. Or, in a positive formulation: "Whatever you wish that others would do to you, do also to them." (Mt 7:12)

Judaism: Preservation of life (Pikuach Nefesh)

Utilitarianism: Increase the sum of happiness and decrease the sum of suffering.

Duty ethics: Categorical imperative.

According to Johannes Fischer, these are rule-based ethics.
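To illustrate how such super-rules could sit above all other rules in an algorithm, here is a minimal sketch. The dict-based encoding of the situation and all names (Maxim, preserve_life, evaluate) are my own assumptions, not an established formalism:

```python
from typing import Callable, Optional

# Purely illustrative: a situation is a dict of facts about the case, an
# action is a string, and a maxim maps both to one of the four verdicts
# named above, or None if the maxim is silent on this action.
Situation = dict
Maxim = Callable[[Situation, str], Optional[str]]

def preserve_life(situation: Situation, action: str) -> Optional[str]:
    """Pikuach Nefesh as a super-rule: what saves a life is required,
    what costs a life is prohibited."""
    if situation.get("saves_life", {}).get(action):
        return "absolutely required"
    if situation.get("costs_life", {}).get(action):
        return "absolutely prohibited"
    return None

def evaluate(situation: Situation, action: str, maxims: list[Maxim]) -> str:
    """Apply maxims in priority order; the first maxim that speaks
    decides. Super-rules simply come first in the list."""
    for maxim in maxims:
        verdict = maxim(situation, action)
        if verdict is not None:
            return verdict
    return "insufficient data"
```

Note that the priority ordering of the maxims is itself an ethical choice, which already hints at the subjectivity problem discussed below.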

More challenging to translate into an algorithm are virtue ethics, such as Aristotle's Nicomachean Ethics with the golden mean ("The middle way remains", II, 1106 b), the ethical approaches of liberal Islam or Christianity ("Change your mindset, for the kingdom of heaven has come near", Mt 3:2), or, even more difficult, Augustine's situational ethics: "Love, and do what thou wilt." (In Epistolam Joannis ad Parthos, Tract. VII, Cap. 8; translation by Browne) Imagine the latter in a clinical setting. The theologian Fischer from Zurich refers to these as virtue ethics (cf. Fischer, Präsenz und Faktizität, 2019, 19). They can hardly be translated into an algorithm; here, the algorithm reaches its limit. Virtue ethics are based on internalized attitudes.

An algorithm would have to be structured in such a way that it examines all possible alternatives of action in a precisely defined situation and applies the rules of rule-based ethics to each of them. Is that possible? I would like to illustrate the possibilities and limitations of this kind of ethics using the examples of utilitarianism and the golden rule.
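A minimal skeleton of this structure, with purely illustrative names of my own, might look as follows:

```python
def evaluate_alternatives(situation, alternatives, apply_rules):
    """Examine all possible alternatives of action in a precisely defined
    situation and apply the rules of a rule-based ethics to each.

    situation    -- the precisely defined situation (step one above)
    alternatives -- all possible alternatives of action
    apply_rules  -- the rule set of one rule-based ethical school
    """
    return {action: apply_rules(situation, action) for action in alternatives}
```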

Utilitarianism

Here, a calculation rule can easily be established in the form of an accounting ledger. Utilitarianism is based on two fundamental principles: happiness and suffering. These can be represented as an account.

Golden Rule

It is known in a positive form from the Sermon on the Mount, but it is undoubtedly much older; it can be traced back to the Bronze Age. In its simplest form, it goes: "As you do to me, so I do to you." This can easily be represented as a computational rule. It becomes more challenging with the well-known formulation: "Do unto others as you would have them do unto you." Here, two problems arise:

1. Who is the "you" that decides here? A psychopath would certainly make different decisions than a teacher.

2. There are situations in which the decision-maker must forbid someone something that they themselves would like, for example forbidding an overweight child from eating sweets.

According to some interpreters, Immanuel Kant further developed the golden rule into the categorical imperative. However, I will omit this discussion at this point.
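Returning to the simple, reciprocal form: as a computational rule, it is essentially the tit-for-tat strategy known from game theory. A minimal sketch (the names are my own; the positive formulation would additionally require a model of the decider's own preferences, which is precisely problem 1 above):

```python
from enum import Enum
from typing import Optional

class Move(Enum):
    COOPERATE = "cooperate"
    DEFECT = "defect"

def golden_rule_simple(their_last_move: Optional[Move]) -> Move:
    """'As you do to me, so I do to you': mirror the other's previous
    action; cooperate on the first encounter, when there is no history."""
    return their_last_move if their_last_move is not None else Move.COOPERATE
```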

What are the limitations of rule-based ethics in connection with AI?

The Felicific Calculus

The felicific calculus is shown in Slide 2 in the form of an account.

The problem: many parameters must be set subjectively. A consistently correct decision is therefore not possible.
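To make this concrete, here is a minimal ledger sketch in the spirit of the felicific calculus; it is my own illustration, not a reproduction of Slide 2, and every number fed into it (happiness, suffering, probability) is exactly such a subjective parameter:

```python
from dataclasses import dataclass

@dataclass
class Consequence:
    description: str
    happiness: float    # subjective estimate, e.g. on a 0-10 scale
    suffering: float    # subjective estimate, e.g. on a 0-10 scale
    probability: float  # subjective estimate, 0-1
    persons_affected: int

def felicific_balance(consequences: list[Consequence]) -> float:
    """Bentham-style account: expected happiness minus expected suffering,
    summed over all affected persons."""
    return sum(
        c.probability * c.persons_affected * (c.happiness - c.suffering)
        for c in consequences
    )

def best_option(options: dict[str, list[Consequence]]) -> str:
    """Choose the option with the greatest net balance; with different
    subjective estimates, a different option wins."""
    return max(options, key=lambda name: felicific_balance(options[name]))
```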

The Switchman Case

The paradigm of decision-making is well illustrated by the "Switchman Case" (on the legal and historical development, see Wörner: https://www.zis-online.com/dat/artikel/2019_1_1263.pdf). The case is described as follows:

"The classic Switchman Case deals with the situation where a single freight car is hurtling down to- wards a passenger train. If the freight car remains on its current track, it will collide with the passenger train and kill a large number of people. A railway official who sees the impending disaster at the last moment switches the points, diverting the freight car onto the only siding where a few workers are unloading another freight car. As the official anticipated, three workers are killed upon impact."

Wörner extensively discusses the legal situation in her article and then applies this question to the programmer, concluding: "Just like the switchman, the programmer is not allowed to sacrifice innocent third parties. In a programming emergency, the programmer is not allowed to switch the points" (ibid. 48).

The Flawed Construction of the Switchman Case

No one wishes to find themselves in the situation of the switchman. In medical ethics, triage decisions regarding life and death come closest to this scenario. However, such decisions are made by a committee to reduce errors in judgment and emotional burden.

A similar approach is known in the context of execution by firing squad. The execution of the sentence is distributed among a group, and one shooter - no one knows who - has a blank cartridge.

This is intended to provide emotional relief.

Thus, the aim is always to avoid or distribute such decisions among a group in advance.

Moreover, the case is poorly constructed and does not accurately reflect the ethical reality. In the specific scenario, a railway official would act differently. They would block the switch to prevent the freight car from moving, give a warning signal so that the workers can seek safety, and (!) switch the points. In reality, there is often not just a choice between option A or option B but also a third alternative.

Recognizing the flawed construction of a model is a significant part of intelligence. This understanding aligns with the medieval definition of truth as "adaequatio rei et intellectus" - the conformity of thing and intellect.

Conclusions

What can we conclude regarding the possibilities and limitations of an algorithm in ethics after this whirlwind tour?

1. Only rule-based ethics can be formalized.

2. Even these rule-based ethics are influenced by subjective preferences in many aspects. This somewhat limits their applicability but, on the other hand, helps highlight the decision-maker's own limitations and preferences.

3. Such an algorithm is not suitable for controlling a car. I can now provide the justification after this lengthy digression: Driving a car is an attitude, a virtue in philosophical terms. However, virtues cannot be translated into an algorithm. Therefore, it is impossible for an algorithm to steer a car.

4. In the much more difficult questions of palliative medicine, it will be of even less use.

Notice

To test the effect of AI in this case, I used AI to translate large parts of this text, and I asked a friend to correct it. In the end, she worked on it for hours, while I myself had to work one hour to incorporate the corrections. To me this suggests that AI does not actually solve the problem of quick and good decisions - but that will be another topic.

References

  1. Aristoteles. Nikomachische Ethik (Sammlung Tusculum). 2007; 2.
  2. Augustinus. In Epistolam Ioannis ad Parthos tractatus. Migne, editor. Vol. X; 1841.
  3. Descartes R. Discours de la méthode. Leyden; 1638. Available from: https://gallica.bnf.fr/ark:/12148/btv1b86069594/f6.item
  4. Novum Testamentum Graece (Nestle-Aland). Begründet von Eberhard und Erwin Nestle. Aland B, Aland K, editors; vom Institut für Neutestamentliche Textforschung Münster/Westfalen unter der Leitung von Holger Strutwolf. 28. revidierte Auflage. Stuttgart: Deutsche Bibelgesellschaft; 2012.
  5. Singer P. Praktische Ethik. Reclam; 1984.
  6. Fischer Joachim. Triagesysteme und deren ethische Problematik. Gesundh ökon Qual Manag. 2020; 25: 121-45.
  7. Fischer Johannes. Präsenz und Faktizität. Mohr; 2019.
  8. Benzmüller C, Woltzenlogel Paleo B. Formalization, Mechanization and Automation of Gödel's Proof of God's Existence. arXiv:1308.4526.
  9. Wörner L. Zeitschrift für Internationale Strafrechtsdogmatik. 2019; 1. Available from: https://www.zis-online.com/dat/artikel/2019_1_1263.pdf

Citation: Fischer J. Can Computer-Assisted Ethics Support Decision-Making in Healthcare and Nursing?. Gerontol Geriatr Res. 2023; 9(3): 1093.
