Interview
AI never has people's well-being in mind

Artificial intelligence in medical applications is not yet sufficiently transparent. How its reliability can be increased, and how it can even strengthen the relationship of trust between doctors and patients, is the subject of research by the scientist siblings Lena and Matthias Zuchowski. In this interview, they explain how their framework can be implemented in practice.

Susanne Donner | January 2025

Matthias Zuchowski, as a medical doctor, health economist, and director of commercial administration and business development at the Robert Bosch Hospital, you have proposed a framework for reliable AI together with your sister Lena Zuchowski. Are applications of artificial intelligence in medicine not yet sufficiently reliable?

Matthias Zuchowski: AI applications in medicine are currently expanding significantly. On the one hand, this is desirable because, for example, they can reduce costs. On the other hand, there are a number of unanswered questions: What happens when patients use AI at home? What if doctors make diagnoses or suggest treatments based on AI? Doctors have sworn the Hippocratic Oath, or more precisely the Declaration of Geneva. They place themselves at the service of humanity and work to maintain and restore health. This does not apply to medical AI or its developers. In short, we see a need for action: a framework for AI in medicine must be developed.

Lena Zuchowski, you are a philosopher at the University of Bristol in the UK. What do you think such a framework can and must achieve?

Lena Zuchowski: We have established that it is crucial for medical staff and patients to trust each other.

Our entire healthcare system is based on a trusting doctor-patient relationship.

This trust is crucial for the success of treatment, for well-being and for whether patients follow medical advice – in other words, for compliance. If AI is introduced as a third party into this relationship, it must not undermine the trust between doctor and patient, but must strengthen it. To do that, it must function reliably. Our entire framework is based on these two core values: trust between doctor and patient and the reliability of AI. This is because people cannot trust AI itself.

What do you mean? Why can't people trust an AI – after all, that's what happens every day when AI-based software suggests treatment options?

Lena Zuchowski: AI is not capable of developing moral or emotional understanding on its own. It is nothing more than an adaptive algorithm, programmed by humans and trained on data produced by humans. Because of these characteristics, AI cannot fully take over the care of a human being. It can perform certain tasks – an AI-based robot can pass food to a child, for example – but it cannot take care of the child in the way that parents can. This means that the responsibility for treatments, the decision-making authority, must ultimately lie with the medical staff. And we have to ask ourselves how we maintain this responsibility of the doctor towards the patient.

Hospitals and medical practices already use a variety of software products, from patient management systems to programs for evaluating X-ray image data. Why do you see a categorical difference between AI-based software and other software?

Matthias Zuchowski: All software is a tool. But medical AI goes further: it can provide answers to questions from medical staff as well as from patients. For example, it can suggest diagnoses. We distinguish two types: clinical medical AI, which is used by medical personnel, and patient-accessible AI, which is intended for laypersons – with such products, self-treatment is within reach. However, the AI cannot take responsibility for the therapy; medical personnel must do that.

So you are also concerned with preserving the familiar healthcare system in which doctors have the authority to treat. What would medical AI have to look like in order to respect that?

Matthias Zuchowski: It needs a greater degree of transparency than is currently the norm for many AI applications. Doctors know how an X-ray machine works, but there is no equivalent understanding of medical AI. Such applications must not be a “black box”: medical staff must have a basic understanding of the software involved.

But there are several conflicts lurking here. It is not in the interest of developers to disclose the source code or fundamental programming steps. Also, developing AI requires highly specialized knowledge in the fields of mathematics and computer science that medical professionals do not have. How do you intend to solve these problems?

Matthias Zuchowski: In our framework, we propose that manufacturers of medical AI be responsible for training professional users in how to use the software. AI must be explainable, and that necessarily includes a basic understanding of the mechanisms behind it, including the limitations of the respective AI application.

Doctors need to be able to use AI the way they use a stethoscope: they may not be able to build one, but they know how it works.

And yes, it is necessary to adapt the curricula, training and further education programs for this. It won't work without training in data processing.

Lena Zuchowski: It is crucial for the transparency of AI applications that medical personnel and patients are informed about the data set used to train the AI. This is fundamental to the validity of the results. Take a well-known AI that is already available for the early detection of skin cancer: photos of skin changes are examined to determine whether they are tumors. Such AI, however, has mostly been developed on the basis of the Caucasian population with white skin, so it is not equally suitable for other ethnic groups. With skewed mass training sets of this kind, AI can exacerbate a problem that we already have in healthcare: that minorities are not sufficiently considered.
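To make this concrete, here is a minimal sketch of how such an imbalance could be surfaced by a simple audit of a training set's metadata. The file name, column layout, and five-percent threshold are assumptions for illustration only; they are not part of the Zuchowskis' framework or any particular product.

    # Hypothetical audit of a dermatology training set's demographic balance.
    # Assumes a metadata CSV with columns "image_id" and "fitzpatrick_type";
    # real datasets, labels, and thresholds will differ.
    import csv
    from collections import Counter

    def audit_skin_type_balance(metadata_path, min_share=0.05):
        with open(metadata_path, newline="") as f:
            skin_types = [row["fitzpatrick_type"] for row in csv.DictReader(f)]
        counts = Counter(skin_types)
        total = sum(counts.values())
        for skin_type, n in sorted(counts.items()):
            share = n / total
            flag = "  <-- underrepresented" if share < min_share else ""
            print(f"Fitzpatrick {skin_type}: {n} images ({share:.1%}){flag}")

    audit_skin_type_balance("train_metadata.csv")

A report like this makes gaps in the training set visible before training begins, which is precisely the kind of information the framework wants disclosed to medical personnel and patients.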

Matthias Zuchowski: Ultimately, all this means that a medical AI that operates completely opaquely is not suitable for healthcare.

What is needed for the required transparency, besides training for medical personnel?

Lena Zuchowski: In our framework, we suggest that medical professionals and patients also be involved in the development of the AI at a very early stage. In addition, there should be one person in the company who is responsible for ethical problems and for data imbalances or gaps during software development and testing. Experience shows that it is more effective for a single person to hold this responsibility than to spread it across an entire company with many employees.

Matthias Zuchowski: If medical personnel and patient representatives are involved in the development from the outset, the limitations of AI can also be identified early on and communicated to patients during consultations at a later stage. For example, software that is to be used in London must also take into account the population structure there.

The acceptance of medical AI would also be greater if there were a broad understanding of it. That would be a commercial advantage for the manufacturer.

Papers keep appearing that claim AI is even better than humans – that it detects breast cancer more reliably, for example, or simply makes fewer mistakes. So isn't it progress if this AI makes the decision?

Lena Zuchowski: AI can be better. This has also been shown in a large study on breast cancer detection in Sweden. It is better at detecting very small changes in breast tissue. Admittedly, this is a limited set of cases. But AI can at least prompt doctors to take a second look – and that is a good thing.

Matthias Zuchowski: There's no question that AI can be useful. But it is often claimed that AI is better per se. That can't be the case, because AI can never consider well-being. If a person has many illnesses, for example, and a tumor is found, AI may suggest a treatment. But it can't assess whether this person, with all these illnesses, wants this treatment and will survive it. For that, a responsible and trustworthy decision is needed: in hospitals today, several doctors always discuss such cases in so-called tumor boards in order to reach and propose a treatment decision.

How far is the medical AI industry from meeting the requirements in your framework?

Matthias Zuchowski: At the moment, the medical AI industry often has a casual attitude of “let's try it out, and if people use it, it'll be fine”. But chatbots that act like a doctor have an impact on health. They need to be proven effective – evidence-based medicine applies here as well: new software must be better than, or at least equivalent to, the gold standard. Ultimately, anyone developing AI for the medical sector must follow the guiding principle of medicine: the safety of the patient comes first. This is non-negotiable. The exchange between the parties involved on these aspects has only just begun.

But people have been googling their symptoms for a long time without AI.

Matthias Zuchowski: Yes, people often ask ChatGPT or Google. But even describing symptoms precisely requires medical expertise: a chronic cough is not easy to distinguish from an acute one. At such points, every AI should indicate that a doctor needs to be consulted. Biology is far more complex than a learning algorithm.

What can AI be useful for if it can't completely replace doctors and shouldn't be used by patients on their own?

Lena Zuchowski: What it can achieve depends not least on the application scenario: in the Swedish breast cancer detection project, it has a voice alongside doctors. But it could also serve as a kind of fallback insurance that certain cases are not overlooked. For example, before a limit value is exceeded – say, for a blood parameter – the AI could already point out what to look out for, as in the sketch below. This can help to ensure that important findings are not overlooked.
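A minimal sketch of what such a pre-alert might look like, assuming an invented blood parameter, limit, and warning margin; none of these values come from an actual deployed system:

    # Hypothetical pre-alert: flag a blood value that is drifting toward a
    # limit before it actually crosses it. All thresholds are illustrative.
    def pre_alert(value_mmol_l, upper_limit=5.0, warning_margin=0.3):
        if value_mmol_l > upper_limit:
            return "ALERT: limit exceeded - notify physician"
        if value_mmol_l > upper_limit - warning_margin:
            return "WARNING: approaching limit - consider re-testing"
        return "OK"

    # Example: a reading of 4.8 mmol/l falls inside the warning band.
    print(pre_alert(4.8))

The warning band is exactly the fallback role described above: the system speaks up before the formal limit is crossed, while the decision about what to do remains with the medical staff.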


Dr. Lena Zuchowski

Dr. Lena Zuchowski is Senior Lecturer in Philosophy of Science at the University of Bristol. Her research focuses on the interface between science and philosophy; she has worked on chaos theory, among other topics, and has published several papers on the ethical foundations of medical AI.


Dr. Matthias Zuchowski

Dr. Matthias Zuchowski is a physician and health economist. He is also a member of the hospital management team at the Robert Bosch Hospital and managing director of the Ambulatory Healthcare Center. He coordinates the research collaboration with the University of Bayreuth and the Bosch Health Campus.