Deep-tech diagnostics: how artificial intelligence is transforming medical diagnostics
12.11.2025
Artificial intelligence has become an integral part of the healthcare industry. It makes everyday life easier in numerous applications and is changing medicine at a breathtaking pace - from the early detection of diseases to personalized therapy planning. Self-learning algorithms are opening up entirely new possibilities for healthcare and extending existing technologies. They analyze X-ray images in a matter of seconds, recognize patterns in complex laboratory data and can help doctors make more precise and personalized diagnoses. We wanted to find out what matters when using AI in diagnostics and spoke to an expert.
You wrote your doctoral thesis on the identification of non-contrast-enhancing brain tumour regions using an upscaling pre-activation U-Net model with depth regularization. What specific evidence would convince you in practice to trust a diagnostic model?
Prof. Dr. Schaffer: The general trend, both in industry and in healthcare, is towards explainable AI models that do not simply output a result or diagnosis but also justify it and show how it came about. Such systems are particularly valuable for the critical analysis of borderline cases in which results are medically uncertain or not yet fully clarified. In many diagnostic cases there is no absolute "true" or "false", but rather areas of uncertainty that even experienced medical professionals continue to explore. In this context, a confident and constructive use of AI systems - not only for decision-making but also to broaden our understanding of complex, less well-defined areas of medicine - can only be beneficial in my view. What will be needed in the future, at least for the more typical diagnostic cases, is a formal certification process conducted by medical experts that turns explainable models into truly trustworthy diagnostic models - comparable to the certification every medical professional must undergo.
AI models in medical diagnostics are often built on real medical data, so-called real-world data (RWD). To collect RWD more quickly, multi-site studies can be carried out, i.e. studies in which the same research protocol is applied at several sites simultaneously. In your opinion, how lean could an EU-compliant multi-site study be for you to accept it as evidence of clinical benefit?
Prof. Dr. Schaffer: As a realistic starting point, I could imagine a 6- to 12-month observational study in two to three medium-sized European hospitals, with standardized patient groups reflecting the diagnostic diversity of everyday clinical practice. The primary endpoints should be improved diagnostic accuracy and faster time to diagnosis compared to conventional diagnostic workflows.
The study should be GDPR and EU AI Act compliant and ensure transparent data processing and independent monitoring. This would already lead to high clinical credibility without the need for comprehensive randomized trials.
What tests or safety measures are there to avoid errors or risks when using AI systems?
Prof. Dr. Schaffer: In the case of diagnostic AI systems, especially multi-agent architectures, i.e. systems in which several AI modules work together, various safety tests should be carried out before the market launch. In my view, these include redundant validation pipelines, isolated execution environments and runtime monitoring, inspired by other computational methods. Such mechanisms detect anomalies, provide process isolation and prevent uncontrolled tool actions, effectively creating a "watchdog layer" for AI-supported clinical reasoning. Furthermore, if diagnostic systems are to be integrated into robotic platforms that physically interact with patients or medical devices, additional layers of safety become imperative. Even collaborative robots designed for human-robot interaction must undergo additional stringent functional and mechanical safety certifications (ISO/TS 15066, ISO 13849, etc.) when guiding diagnostic tools.
In this context, safety is not only a software issue: it also depends on redundant hardware circuits, self-diagnostic routines and fail-safe control mechanisms to avoid unintended movements or actions. Such cross-domain safety concepts are essential to ensure patient safety and build trust in future healthcare applications of AI and robotics.
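The "watchdog layer" for AI-supported clinical reasoning can be sketched in code. The following Python example is purely illustrative and hypothetical - it does not describe any real clinical system, and all class names, labels and thresholds are invented for the sketch. It shows one way a runtime monitor could cross-check the outputs of redundant diagnostic modules, detect anomalies and escalate uncertain or conflicting cases to a human reader instead of releasing them:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Finding:
    """A single diagnostic module's output: a label and a confidence score."""
    label: str
    confidence: float


class WatchdogLayer:
    """Hypothetical runtime monitor sitting between redundant diagnostic
    modules and the clinician: no result is released unless the modules
    agree and clear the configured confidence thresholds."""

    def __init__(self, agreement_threshold: float, min_confidence: float):
        self.agreement_threshold = agreement_threshold
        self.min_confidence = min_confidence

    def review(self, findings: List[Finding]) -> str:
        # Anomaly check 1: any module below the confidence floor
        # escalates the case to a human reader.
        if any(f.confidence < self.min_confidence for f in findings):
            return "escalate:low_confidence"
        # Anomaly check 2: redundant modules must agree on the label.
        labels = {f.label for f in findings}
        if len(labels) > 1:
            return "escalate:disagreement"
        # Release only if the mean confidence clears the release threshold.
        mean_conf = sum(f.confidence for f in findings) / len(findings)
        if mean_conf < self.agreement_threshold:
            return "escalate:uncertain"
        return f"release:{labels.pop()}"


watchdog = WatchdogLayer(agreement_threshold=0.9, min_confidence=0.7)
print(watchdog.review([Finding("tumour", 0.95), Finding("tumour", 0.93)]))
# -> release:tumour
print(watchdog.review([Finding("tumour", 0.95), Finding("no_finding", 0.92)]))
# -> escalate:disagreement
```

The key design choice in such a layer is that it can only block or escalate, never alter a module's output - mirroring the process-isolation idea mentioned above, where the monitor is independent of the agents it supervises.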
Where would you say the MDR and EU AI Act are technically most stringent, and what documentation must be in place before the first patient use?
Prof. Dr. Schaffer: The most demanding technical aspects under the EU AI Act are data traceability, risk classification and documentation of explainability. Transparent model documentation, including data provenance and key performance indicators, must be available before the first patient use.
In my opinion, the most technically difficult part is ensuring continuous compliance, with models that must remain auditable and explainable after implementation, even if they are retrained or updated. This is a non-trivial requirement. It requires AI systems to have a robust lifecycle management process, as is the case with other certified medical devices.
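One way such lifecycle traceability could be sketched - as an illustrative assumption, not a description of any certified device process - is a hash-chained audit trail in which every retraining or update records its data provenance and key performance indicators, so that later tampering with any entry is detectable:

```python
import hashlib
import json
from datetime import datetime, timezone


class ModelAuditTrail:
    """Hypothetical append-only audit trail for a diagnostic model:
    each entry is chained to the previous one via a SHA-256 hash,
    so the full retraining history stays auditable after deployment."""

    def __init__(self):
        self.entries = []

    def record(self, version: str, data_hash: str, metrics: dict) -> str:
        prev = self.entries[-1]["entry_hash"] if self.entries else "genesis"
        payload = {
            "version": version,
            "data_hash": data_hash,  # provenance of the training data set
            "metrics": metrics,      # e.g. sensitivity, specificity
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": prev,
        }
        entry_hash = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({**payload, "entry_hash": entry_hash})
        return entry_hash

    def verify_chain(self) -> bool:
        # Recompute every hash; any modified entry breaks the chain.
        prev = "genesis"
        for e in self.entries:
            payload = {k: v for k, v in e.items() if k != "entry_hash"}
            if payload["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(payload, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != e["entry_hash"]:
                return False
            prev = e["entry_hash"]
        return True


trail = ModelAuditTrail()
trail.record("v1.0", "sha256:abc...", {"sensitivity": 0.91})
trail.record("v1.1", "sha256:def...", {"sensitivity": 0.93})
print(trail.verify_chain())  # -> True
```

A retrained model would simply append a new entry; auditors can then verify both the provenance of each version and that no historical record was silently changed.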
How would you go about moving from retrospective results to a robust multicenter study?
Prof. Dr. Schaffer: The first step is to critically analyze the retrospective data in order to formulate a clear and practically relevant research question. A feasible study concept with defined objectives and standardized procedures is then developed. A key step is to contact clinics or centers that treat similar cases at an early stage in order to establish cooperation and create a broad, meaningful database. In Bavaria, we have top-class university hospitals such as the University Hospital Regensburg, the University Hospital Erlangen or the University Hospital Munich, which, with their clinical research infrastructure, strong expertise in medical imaging and proven data governance frameworks, offer ideal framework conditions for such studies.
Thank you very much, Prof. Dr. Schaffer, for these insights and the interview.
________________________________________
As Bayern Innovativ, we support the transfer of AI and other complex technologies. To this end, we network companies, policymakers, science and practice, and are committed to knowledge transfer and to the innovations that will shape the healthcare of tomorrow.