Artificial intelligence at scale in healthcare systems: An approach from the UK shows how it can work
10.12.2025
While numerous AI models already perform convincingly in research, deploying them in clinical settings is often still difficult. Doctors frequently do not know how the systems work, whether they can be trusted, or whether patients' data is actually protected. This is a gap that needs to be closed. With this in mind, we spoke to an expert: in an interview, Amitis Shidani explained to us how artificial intelligence (AI) can be used in the healthcare sector under safe, fair and practicable conditions.
Shidani and colleagues have published the paper "Implementation Framework for AI Deployment at Scale in Healthcare Systems". In it, they outline a structured way of moving AI from the laboratory into clinical routine, so that AI becomes a trustworthy assistant for practitioners and patients alike rather than remaining a black box.
Ms. Shidani, would you like to briefly introduce yourself to the readers of our newsletter and tell them a bit about your work?
I'd love to! First of all, thank you very much for having me here - I'm delighted.
I'm currently a PhD student in statistics and machine learning at Oxford University. I specialize in better understanding modern machine learning models and improving their efficiency, scalability and generalizability.
I am also interested in bridging the gap between theoretical research and practical applications to make these models useful, particularly in healthcare and biology. My background is relatively diverse, spanning mathematics, computer science and engineering, and I also gained experience in computational biology during my studies. I think that describes me quite well.
How did you become interested in using AI to improve healthcare?
That's a good question. For me, it all started with a course in my bachelor's degree called "Systems Biology". It was an interdisciplinary course for mathematics students who didn't necessarily know much about biology.
The first half of the course was dedicated to basic biology and taught us how complex biological systems are. It was a real eye-opener. I was amazed at how complicated these systems are and how they interrelate.
I realized how difficult it is to fully grasp even one part of a biological system, and how much potential computational tools have to penetrate this complexity. What particularly motivated me was seeing how even small changes can make a big difference in health technology. It's not just about creating new tools, but above all about making them practical and user-oriented.
You work at Oxford University, one of the leading centers for medical AI. What does it mean to you to be researching such an important topic?
I consider myself really lucky and find it very inspiring. Not only because I work with leading professors and researchers, but also because the interdisciplinary collaboration with people from very different fields such as medicine, computer science or ethics is so valuable to me. Everyone contributes their own perspective.
There is real teamwork. We work together to solve complex problems. Especially in the field of medical AI, diversity is essential to achieve real progress.
In one of your latest research articles, you present a framework for the use of AI in healthcare systems. Can you describe it in simple terms and explain why it is so important?
I would love to. Our framework provides a structured approach to deploying AI tools safely and at scale in healthcare. While many AI applications work well in the lab, they quickly reach their limits in everyday clinical practice - due to issues such as trust, patient safety and data protection. Our concept is designed to close this gap; not to solve everything, but to address part of the problem. It enables healthcare systems to test and compare many different AI tools simultaneously and to select the optimal combination.
Our approach follows the principle of "human in the loop" or "human-centered AI", which is of great importance in medicine. A key feature is that our framework evaluates models not only by traditional accuracy metrics, but also by criteria such as fairness, data protection and real clinical benefit.
Many of the evaluation criteria were defined on the basis of feedback from users, that is, general practitioners and even patients. In other words, the aim is not to develop one perfect algorithm, but an ecosystem in which AI models can be continuously tested, improved and, over time, classified as trustworthy. We use reinforcement learning to achieve this. Ultimately, we want AI to become reliable support in healthcare - for both sides, doctors and patients alike.
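To make this more concrete for readers: the selection loop Shidani describes can be pictured, in very simplified form, as a multi-armed bandit, one of the simplest reinforcement-learning setups. The following minimal Python sketch is our illustration, not the paper's actual implementation; the model names, scoring weights and epsilon-greedy strategy are all illustrative assumptions.

```python
import random

# Hypothetical candidate models and scoring weights; these names and
# numbers are illustrative assumptions, not taken from the paper.
CANDIDATES = ["triage_model_a", "triage_model_b", "triage_model_c"]
WEIGHTS = {"accuracy": 0.5, "fairness": 0.3, "privacy": 0.2}

def composite_reward(metrics):
    """Fold accuracy, fairness and privacy scores (each in [0, 1]) into
    one reward, standing in for the multi-criteria evaluation."""
    return sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)

def evaluate(model):
    """Placeholder: in reality these scores would come from clinical
    validation, fairness audits and privacy reviews, not random numbers."""
    return {k: random.random() for k in WEIGHTS}

def select_model(rounds=1000, epsilon=0.1):
    """Epsilon-greedy bandit: usually exploit the best-scoring model so
    far, occasionally explore an alternative, and keep running averages."""
    totals = {m: 0.0 for m in CANDIDATES}
    counts = {m: 0 for m in CANDIDATES}

    def mean_reward(m):
        return totals[m] / counts[m] if counts[m] else 0.0

    for _ in range(rounds):
        if random.random() < epsilon or 0 in counts.values():
            model = random.choice(CANDIDATES)        # explore
        else:
            model = max(CANDIDATES, key=mean_reward) # exploit
        totals[model] += composite_reward(evaluate(model))
        counts[model] += 1
    return max(CANDIDATES, key=mean_reward)

print(select_model())
```

In a real deployment the reward would come from clinical validation rather than random numbers, and the exploration strategy would need safety constraints so that patients are never exposed to an insufficiently tested model.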
Many people fear that AI could replace doctors. How do you see AI supporting medical staff?
That's a fascinating question, and I could say a lot about it. I understand the concerns, especially given how quickly AI is developing. For my part, I'm optimistic: I don't think AI will replace doctors; I think it will support them.
Biological systems are extremely complex. I see AI as a kind of new colleague who can analyze data faster and recognize patterns that humans might miss. Routine tasks can also be taken over by AI. This gives doctors more time to talk to patients and make complex decisions that require human judgment and empathy.
AI also opens up more cost-effective ways to test or develop new drugs. The best results come from collaboration between clinical professionals and AI, combining the strengths of both sides.
Your article often mentions data protection and explainability. Why are these two aspects so important when dealing with AI and patient data?
In my opinion, there are at least two angles to consider. On the one hand, trust is the be-all and end-all in healthcare. Patients need to be sure that their data is treated confidentially and securely. Doctors, in turn, must be able to understand why an AI system makes a particular recommendation; they must not accept it blindly. It is crucial to minimize risks. The explainability of an AI system creates trust, and data protection ensures that the rights of individuals are protected.
On the other hand, data protection can facilitate collaboration. For one of my scientific publications, we needed additional data to improve our model, but it was not accessible due to data protection regulations. If AI systems guarantee data protection, hospitals and research institutions can share data securely. This could improve collaboration and research for everyone involved.
Explainability also helps developers understand why a model does not work and provides feedback for improving it. These topics are dynamic and constantly evolving, so we need to keep an eye on them.
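To illustrate the kind of developer feedback Shidani mentions, here is a minimal sketch of permutation importance, a common model-agnostic explainability technique. The toy model and data are our own illustrative assumptions, not part of the interviewee's work: shuffling one input feature at a time and measuring the accuracy drop reveals which features a model actually relies on, and a surprising result often points to a bug or a data problem.

```python
import numpy as np

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Mean accuracy drop per feature when that feature is shuffled.
    A large drop suggests the model relies heavily on that feature."""
    rng = np.random.default_rng(seed)
    baseline = (model.predict(X) == y).mean()
    drops = []
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            perm = rng.permutation(Xp.shape[0])
            Xp[:, j] = Xp[perm, j]  # break the link between feature j and y
            scores.append((model.predict(Xp) == y).mean())
        drops.append(baseline - np.mean(scores))
    return np.array(drops)

class ThresholdModel:
    """Toy stand-in for a trained classifier: predicts 1 if feature 0 > 0."""
    def predict(self, X):
        return (X[:, 0] > 0).astype(int)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)  # the label depends only on feature 0
print(permutation_importance(ThresholdModel(), X, y))
# Expect a large drop for feature 0 and drops near zero for the others.
```

If importance landed on a feature that should be clinically irrelevant, that would be exactly the kind of signal a developer can act on before deployment.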
Are you already testing or implementing parts of the framework in hospitals or a research environment?
Unfortunately no, not yet. This work is primarily a research publication, but of course we hope that hospitals or research institutes will start testing the framework. If there is interest in a partnership, we would be very happy to help set up the entire infrastructure.
From research to practical application - what are the biggest challenges?
One major challenge is that hospitals are extremely complex ecosystems in themselves. Introducing a new AI system is not limited to installing software; it also requires integrating it into existing workflows, ensuring data security, training staff and building trust. This is difficult and very time-consuming.
Another major challenge is the Medical Device Regulation. Medical AI has to meet extremely high safety standards, which slows down the process but ultimately makes it much safer. These are two of the biggest hurdles.
What types of partnership do you see as most helpful in bringing AI solutions into everyday clinical practice?
There are many ways in which institutions such as Bayern Innovativ, universities, healthcare providers and start-ups can work together. From a university perspective, it is extremely valuable when such organizations help us understand both the scientific aspects of a problem and the concerns that arise in practice.
Many problems are not solved because the problem itself is not properly understood from the outset. Sometimes we tend to propose solutions directly without knowing the needs of those who will ultimately use them.
It is also crucial to give students opportunities to understand how the industry works. Students often have sound scientific knowledge but lack practical experience. Contact with industry helps to bridge this gap between research and application.
Last but not least, financial support for students is extremely valuable as it opens up important opportunities for teaching and research.
What might healthcare look like in ten years' time when AI frameworks become standard?
It's difficult to look ten years ahead. AI is developing so rapidly that even one or two years ahead is hard to predict. Fundamentally, though, I believe that healthcare will become much more personalized and proactive.
AI could help doctors identify early signs of disease, predict risks before symptoms occur and tailor treatment decisions to the individual. It could also support drug development and clinical trials through simulations and synthetic data, which could make these processes more cost-effective and efficient.
Of course, there are ethical risks involved. However, AI can help us to develop better regulations for the future. Overall, I hope that AI will make healthcare more efficient, personalized and responsible.
What advice would you give to young innovators who want to work on AI with real added value for the healthcare sector?
My suggestion is very simple - even if I don't see myself in a position to give advice. In my experience, it's always important to understand the real problem first.
I see two broad research directions in AI for healthcare. One is predominantly exploratory and aims to expand methods or knowledge. The other looks for practical, real-world impact. In both cases, understanding the problem is important, but for those pursuing applied approaches it becomes even more critical. I often see highly motivated people start something very big, complex and technologically ambitious because it seems exciting, but without a clear understanding of the specific problem they want to solve. In my opinion, it helps to do applied research the other way around: first identify the real-world challenge, then use AI as a tool to address it. And don't underestimate the value of breaking the problem down into smaller, more manageable pieces!
Thank you very much!