Expert Perspective: Rob White on Trust, AI and the Future of Cancer Care

Following the MULTIR Hybrid Symposium on 2 October 2025 in Hannover and online, we invited Rob White of Melanoma Patient Network Europe (MPNE), who chaired the closing panel discussion, to share his reflections on some of the questions raised during the event. While the symposium brought together clinicians, researchers, AI experts, industry, ethicists and patient advocates to explore how AI can transform cancer care, this conversation focuses specifically on what stayed with Rob after the discussion: trust, transparency, equitable access, and the role of patients in shaping the future of AI in oncology.
Panel: Lise Boussemart (MULTIR Coordinator, Professor, MD at Inserm & CHU Nantes), Harald Mischak (Professor, Executive Director of Mosaiques Diagnostics GmbH) and Tomislav Križan (Founder and CEO of Atomic Intelligence d.o.o.).
One of the first questions you raised was about trust. Why is trust such a central issue when AI enters healthcare?
Trust is fundamental in medicine because healthcare decisions are made under conditions of uncertainty and risk. During the panel, I introduced one definition of trust as “a relational stance in which one party forms a justified expectation about another party’s competence and reliability under conditions of uncertainty and risk.” That felt highly relevant to the discussion about AI in oncology.
I asked the panel whether they would trust an AI more than a human doctor, and what drove them to that conclusion.
The panel was split on whom they would trust more. Harald and Tomislav both trusted AI more, reasoning that an AI could stay on top of the literature and research in the field in a way no human doctor could.
Lise trusted human doctors more, as she felt more comfortable speaking with them. She also noted that preferences for AI versus human interaction differ across communities, genders and cultures, and that this would need to be taken into account when assigning an AI or a human doctor. We cannot have a "one size fits all" approach in the rollout of AIs to replace or augment human practitioners.
The panel also touched on open versus closed AI models. Why does that debate matter in medicine?
I asked the panel for their views on open versus closed AI models (in terms of the source code and model weightings etc.), including whether ethically we should insist on openness, even if an open model had demonstrably lower efficacy.
Harald opened the conversation by making the point that no one wants to die ethically, so the best model for the job should be used.
Tomislav emphasized that the training data, specifically how well it represents the cohort seeking treatment, is the key factor in determining which AI model performs best in a given context, and should therefore be the primary consideration.
Tomislav advocated for the use of open models and noted that this was the direction of the industry.
How important are transparency and explainability from a patient advocate’s perspective? Would patients accept a less transparent tool if it delivered better outcomes, or is explainability essential for trust?
Patients want to get better, and that concern overrides all others. Hence, the model with the best outcomes is the one the patient will pick every time.
I see one of the jobs of a patient advocate as minimising the second-order harms that can arise from the decisions a patient makes, especially as they are making these decisions at a time of extreme stress.
A transparent model makes it easier to understand those second-order effects and so reduces the risk of unexpected downstream impacts from the decision.
Patient advocates, therefore, should push to shape policy and legislation that moves the industry in that direction, with the aim that one day all health-related AI is transparent by design.
A transparent model reduces risk to a patient, but we should always be mindful that a person’s perspective on risk varies with their circumstances. This brings us back to my first point: a patient wants, first and foremost, to get better, and in life-threatening cases, they will not be thinking about the transparency of the model.
Another issue you raised was equitable access. Do you think AI is more likely to reduce inequalities in healthcare or deepen them?
I asked the panel whether we might be heading toward a future in which human doctors are for the rich, while AI is for the masses, and whether that would be a good or a bad thing.
Harald felt this was already the case in Germany: seeing a human doctor is increasingly difficult, so individuals turn to online sources.
Tomislav said this could be a good thing if the AI was well read and more likely to be up to date than a human doctor. Once again, however, the closeness of the training data to the actual patient cohort was crucial in determining good outcomes.
Lise made an interesting point about the potential psychological harm to human doctors if they only saw the serious cases, many of whom would not recover. Today, doctors see a mixture of patients doing well and not so well, so they witness survival, which helps maintain a balance.
What role should patients and patient advocates play in shaping AI in healthcare?
The panel discussion was interesting, but we were constrained by time and there were many areas we could not cover, e.g. AI prediction in drug discovery and treatment pathways. However, MULTIR is a 4-year project, so I am discussing ways to get the above points onto a future agenda.
The responses reinforced that there is no consensus, even amongst experts, about the role AI could and should play in the treatment of patients. AI can process vast datasets and identify patterns beyond human reach, but empathy, shared decision-making and accountability remain important considerations in patient treatment. AI can and should augment, not replace, human judgment, at least in the foreseeable future. As in all decisions regarding treatment paths, a patient should have agency in the path taken and the knowledge to make an intelligent decision about the role of AI and humans in their process.
Reflections on MULTIR
It is encouraging to see the promises of the EU’s Cancer Mission coming to fruition in projects such as MULTIR. It is even more encouraging that patients are an integral part of these projects.
It is essential that patients and their advocates continue to be part of the conversation around the role AI will play in future treatment pathways. Work such as MPNE’s own AI Consensus demonstrated that patient advocates are serious contributors to the discussion. The input we have into projects such as MULTIR builds on that work and ensures that issues such as bias, inequity and lack of transparency are called out from the perspective of the patients who stand to benefit most from these technologies. Patients will only adopt AI technologies and realise that benefit if they trust that the technology is always working, in a provable way, in their best interests.
MULTIR is working not only on the frontiers of scientific research to increase the efficacy of immunotherapy, but also on what the role of AI should be in that space. Projects like MULTIR are “moon-shot” projects, delivering value in their specific field of study while continuing to lay broader groundwork for personalised medicine and for how AI can contribute to the development of medical science more widely. The challenge now is to ensure this groundwork translates into real, measurable improvements in patient outcomes.
The MULTIR project represents the new reality for medical research in Europe, where patients and their advocates have a voice. Maximising the impact of that voice means staying engaged by contributing our evidence-based perspectives and lived experiences to shape how AI tools are designed, validated and ultimately deployed in clinical settings. Patients also bring something no algorithm can replicate: the understanding of what it means to receive a diagnosis, weigh treatment options, and live with the consequences of those decisions.
We sincerely thank Rob White of Melanoma Patient Network Europe (MPNE) for sharing his reflections following the MULTIR Hybrid Symposium and for helping to bring the patient perspective to the centre of this conversation.