April 2026 · Dr Badrulhisham Bahadzor

How Clinicians Use AI for Continuing Medical Education

01

The CME problem nobody has solved

Continuing medical education is a professional obligation. Nearly every medical licensing body requires practising doctors to demonstrate ongoing learning — through CPD points, CME credits, or their local equivalent. The intent is sound: medicine evolves, and clinicians must evolve with it. But the mechanisms we use to fulfil this obligation have not kept pace with the volume of evidence we are expected to absorb.

Consider what "staying current" actually means in a specialty like urology. The European Association of Urology publishes updated guidelines annually. Each update modifies recommendations across multiple chapters — prostate cancer screening, stone management, paediatric urology, neuro-urology, and a dozen others. A single year's update can span hundreds of pages of revised text, new meta-analyses, and modified treatment algorithms. That is one guideline, from one society, in one specialty.

Now add the American Urological Association guidelines, which often reach different conclusions on the same topics. Add the NICE guidelines if you practise in the UK. Add the key journal articles published in European Urology, The Journal of Urology, and BJU International — perhaps 50 to 100 papers per year that are genuinely relevant to your subspecialty practice. Add the systematic reviews and meta-analyses that synthesise these papers into actionable summaries.

No clinician reads all of this. We skim. We attend conferences where speakers summarise the highlights. We read review articles that distil the key changes. We rely on colleagues to flag what matters. This works — roughly — but it means our knowledge base is perpetually incomplete, selectively updated, and dependent on secondary interpretations of the primary evidence.

The fundamental CME challenge is not access to information. It is the gap between the volume of evidence produced and a clinician's capacity to process it — while also running a clinical service, teaching trainees, and having a life outside medicine.

02

What existing tools get right — and wrong

The past few years have seen an explosion of AI tools aimed at medical literature. It is worth understanding what they do well and where they fall short, because the distinctions matter for CME specifically.

Discovery tools — Consensus, Semantic Scholar, PubMed AI — are excellent at finding relevant papers. If you want to know "what has been published on focal therapy for prostate cancer in the last two years," these tools will surface the papers faster than a manual PubMed search. They are optimised for discovery: finding literature you did not know existed.

General AI assistants — ChatGPT, Gemini, Claude — are useful for getting a quick overview of a topic. They can summarise a concept, explain a mechanism, or outline a differential diagnosis. But their answers come from training data, not from specific documents. They cannot tell you what the 2025 EAU guidelines specifically recommend, because they may not have the 2025 edition in their training data — or they may conflate it with the 2023 edition, or with a different society's guideline entirely.

What is missing from both categories is comprehension of documents you have already selected. Once you have identified the papers and guidelines that matter — the ones you trust, the ones your institution follows, the ones your licensing body recognises — you need a tool that helps you extract, compare, and understand them. Not discover new things, but deeply understand the things you have already decided are worth reading.

Discovery tools help you find literature. General AI tools give you general answers. What CME actually requires is deep comprehension of specific, curated documents — the ones you trust and are expected to know.

03

Building a living knowledge base

The approach I want to describe is one I use myself, and it has proved genuinely useful for maintaining a knowledge base across urology subspecialties. It centres on the idea of a living library — a collection of documents that grows and updates alongside your clinical practice.

Start with your core guidelines. For me, that is the EAU Guidelines — the complete set, organised into collections by chapter. Prostate Cancer in one collection. Bladder Cancer in another. Urolithiasis in a third. Each collection contains the current edition of the relevant guideline chapter, uploaded as a PDF.

When the annual update is published, I upload the new edition alongside the previous one. This is where the power of AI-assisted comprehension becomes apparent. I can ask: "What changed in the management of non-muscle-invasive bladder cancer between the 2024 and 2025 editions?" The AI searches both documents, identifies the differences, and cites the specific sections where recommendations were added, modified, or removed. This task, done manually, takes an hour of side-by-side reading. With AI retrieval, it takes two minutes.
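For readers curious about the mechanics, the sketch below shows the shape of that comparison in Python. It is emphatically not how Medevidex works internally: it substitutes crude keyword overlap for real semantic retrieval, and the chunk contents and edition names are placeholders. The idea it illustrates is simply that one question retrieves the best-matching passages from each edition, which can then be read side by side.

```python
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def top_passages(passages, question, k=3):
    """Rank passages by crude keyword overlap with the question."""
    q_terms = Counter(tokenize(question))
    def score(passage):
        p_terms = Counter(tokenize(passage))
        return sum(min(count, p_terms[term]) for term, count in q_terms.items())
    return sorted(passages, key=score, reverse=True)[:k]

question = ("What changed in the management of non-muscle-invasive "
            "bladder cancer between the 2024 and 2025 editions?")

# In practice each entry would be a text chunk extracted from the PDF;
# everything below is a placeholder.
editions = {
    "EAU NMIBC 2024": ["placeholder chunk on NMIBC risk stratification", "..."],
    "EAU NMIBC 2025": ["placeholder chunk on NMIBC risk stratification", "..."],
}

for name, passages in editions.items():
    print(name)
    for passage in top_passages(passages, question):
        print("  -", passage[:80])
```

A real system would also keep page and section references attached to each chunk, which is what makes cited, verifiable answers possible.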

As the year progresses, I add key journal articles to the relevant collections. A landmark RCT on focal therapy goes into the Prostate Cancer collection. A new meta-analysis on flexible ureteroscopy outcomes goes into Urolithiasis. Each addition makes the collection more comprehensive, and each query searches the growing evidence base.

Your knowledge base is not static. It grows with every guideline update, every key paper, every systematic review you add. And because it is searchable, you do not need to remember where you filed something — you just need to ask.

04

Comparing guidelines across societies

One of the most valuable CME exercises — and one of the most tedious to do manually — is comparing how different guideline societies approach the same clinical question. In urology, the EAU, AUA, and NICE frequently diverge on screening recommendations, surgical thresholds, and follow-up protocols. Understanding these differences is not academic; it has direct implications for clinical practice, particularly if you work in a multinational setting or supervise trainees from different training backgrounds.

With separate collections for each society's guidelines, you can run the same query against each one and compare the answers side by side. "What are the indications for active surveillance in low-risk prostate cancer?" asked of the EAU collection yields one set of criteria. The same question asked of the AUA collection yields another. The differences — in PSA thresholds, in Gleason grade group definitions, in recommended monitoring intervals — become immediately apparent.
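As a rough sketch of that workflow (again with placeholder contents, and a deliberately crude relevance test standing in for real retrieval), the cross-society comparison is nothing more than the same question posed to each collection in turn:

```python
question = ("What are the indications for active surveillance "
            "in low-risk prostate cancer?")

# Placeholder collections; in practice each would hold text chunks
# extracted from that society's guideline PDF.
collections = {
    "EAU":  ["placeholder passage on active surveillance criteria"],
    "AUA":  ["placeholder passage on active surveillance criteria"],
    "NICE": ["placeholder passage on active surveillance criteria"],
}

def best_match(passages, q):
    """Stand-in for semantic retrieval: the passage sharing the most
    words with the question wins."""
    q_words = set(q.lower().split())
    return max(passages, key=lambda p: len(q_words & set(p.lower().split())))

for society, passages in collections.items():
    print(f"{society}: {best_match(passages, question)}")
```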

This kind of comparative analysis is exactly what CME conferences try to deliver through expert panel discussions. But a panel discussion happens once and covers three or four topics in an hour. AI-assisted comparison is available on demand, for any topic, across any guidelines you have uploaded. And the answers are cited, so you can verify every claimed difference against the original text.

05

Integrating new evidence into existing knowledge

The hardest part of CME is not reading a new paper. It is understanding how a new paper changes what you already know. A new randomised trial on the use of checkpoint inhibitors in advanced urothelial carcinoma does not exist in isolation — it exists in the context of existing guidelines, previous trials, and your current practice.

When a significant paper is published, upload it to the relevant collection alongside your existing guidelines. Then ask the question that matters: "How do this new trial's findings compare to the current EAU recommendation on first-line treatment for metastatic urothelial carcinoma?" The AI searches both the new paper and the existing guideline, identifies the overlap and the divergence, and cites both sources.
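It is worth making "cites both sources" concrete. The structure below is purely illustrative, not a Medevidex data format: it pictures an answer as a list of claims, each pinned to a guideline passage, a trial passage, or both.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CitedClaim:
    claim: str                    # a statement made in the answer
    guideline_ref: Optional[str]  # placeholder, e.g. "EAU 2025, section 7.2"
    trial_ref: Optional[str]      # placeholder, e.g. "New RCT, Results"

answer = [
    CitedClaim("Current first-line recommendation is ...",
               "EAU 2025, section 7.2", None),
    CitedClaim("The new trial reports ... in the same population",
               "EAU 2025, section 7.2", "New RCT, Results"),
]

for c in answer:
    print(f"{c.claim}\n  guideline: {c.guideline_ref or '-'} | "
          f"trial: {c.trial_ref or '-'}")
```

Because every claim carries its references, any divergence the answer asserts can be checked against the original text.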

This workflow simulates what a good journal club does — contextualising new evidence within existing knowledge. But it does it on demand, for any paper, against any set of background documents. And it does it in two minutes rather than requiring a dedicated hour-long session with six colleagues.

Over time, this accumulates into a genuinely comprehensive knowledge base. Each new paper is not just filed — it is integrated. When you query the collection six months later, the answer reflects both the original guideline and the subsequent evidence you have added. Your knowledge base evolves the way your clinical practice should: incrementally, evidence by evidence, with each new finding contextualised against what came before.

06

The difference between AI for discovery and AI for comprehension

This distinction is important enough to address directly, because I see it confused frequently. AI for discovery and AI for comprehension are fundamentally different tools that serve different stages of the CME workflow.

Discovery is the first step: finding out what exists. Which papers have been published on this topic? What are the latest trials? Has there been a new meta-analysis? Tools like Consensus, Connected Papers, and PubMed's AI search are excellent at this. They help you identify the literature that is worth reading.

Comprehension is the second step: understanding what you have found. Once you have identified and downloaded the papers that matter, you need to read them, extract the key findings, compare them to existing guidelines, and integrate them into your practice. This is where Medevidex sits. You upload the documents you have already vetted — the ones you have decided are worth your time — and use AI to help you understand them deeply.

The critical difference is trust. When you use a discovery tool, you are trusting the algorithm to surface relevant results. When you use a comprehension tool like Medevidex, you have already made the trust decision — you chose these documents, you vetted them, and you know they meet your evidentiary standard. The AI simply helps you navigate them more efficiently. Every answer it gives comes from sources you selected.

Use discovery tools to find the papers. Use comprehension tools to understand them. The two are complementary, not competing. Most clinicians need both — but the comprehension step is where the real CME learning happens.

07

Practical examples beyond urology

While my experience is in urology, the workflow applies across specialties. A cardiologist can upload the ESC guidelines on heart failure, chronic coronary syndromes, and valvular heart disease into separate collections. When the annual update is published, upload the new edition and ask what changed. When a landmark trial like EMPEROR-Preserved shifts the SGLT2 inhibitor landscape, upload the paper and ask how it compares to the current guideline recommendation.

A general practitioner managing a broad scope of practice can organise by condition cluster: diabetes management in one collection (with NICE guidelines, the ADA Standards of Care, and key trials), hypertension in another, mental health in a third. The scoping ensures that a query about antihypertensive targets does not accidentally pull in diabetes-related blood pressure recommendations — which exist in different guidelines with different contexts.
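To make that scoping concrete, here is a minimal sketch, assuming a knowledge base is nothing more than named collections of document text. Collection names, contents, and the keyword filter are all placeholders; the point is only that a query addressed to one collection never reads the others.

```python
# Placeholder knowledge base: one named collection per condition cluster.
knowledge_base = {
    "Diabetes": [
        "NICE diabetes guideline: placeholder text on glycaemic targets",
        "ADA Standards of Care: placeholder text",
    ],
    "Hypertension": [
        "NICE hypertension guideline: placeholder text on blood pressure targets",
    ],
    "Mental health": [
        "placeholder guideline text on depression management",
    ],
}

STOPWORDS = {"what", "the", "for", "in", "a", "is", "are"}

def query(collection_name, question):
    """Search only the named collection; other collections are never read."""
    keywords = set(question.lower().split()) - STOPWORDS
    return [doc for doc in knowledge_base[collection_name]
            if keywords & set(doc.lower().split())]

# A blood-pressure question scoped to Hypertension cannot surface the
# diabetes documents, even though those also discuss targets.
print(query("Hypertension", "what blood pressure targets are recommended"))
```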

A paediatrician can separate neonatal protocols from adolescent medicine. An anaesthetist can maintain separate collections for cardiac anaesthesia, neuroanaesthesia, and obstetric anaesthesia. The pattern is the same: organise by clinical context, upload the trusted sources, and query them on demand.

08

What this does not replace

AI-assisted CME does not replace conference attendance, peer discussion, or structured learning programmes. These serve different functions that AI cannot replicate.

Conferences provide serendipity — exposure to topics you would not have sought out, conversations with colleagues working on problems you had not considered, and the perspective that comes from seeing your specialty through someone else's practice. No AI tool can replicate the experience of hearing a world expert walk through a difficult case and explain their reasoning in real time.

Peer discussion provides accountability and perspective. When you discuss a paper with colleagues, you hear interpretations you would not have reached alone. You learn what others think matters. You refine your own understanding through the process of explaining it to someone else. Reading, even AI-assisted reading, does not replace this.

Structured CME programmes — workshops, courses, simulation sessions — provide hands-on learning that document review cannot. You do not learn to perform a robotic prostatectomy by reading about it, no matter how efficiently you can query the literature.

What AI-assisted comprehension replaces is the inefficient, manual, time-consuming process of navigating large volumes of text. It makes the reading portion of CME — which is substantial — faster and more productive. This frees time for the parts of CME that genuinely require human interaction.

09

Getting started with AI-assisted CME

If you are a practising clinician who wants to integrate AI into your CME workflow, here is a practical starting point. Create a free Medevidex account. Identify the two or three guidelines that are most central to your practice. Upload them into separate collections. Start querying.

The first time you ask a question and get a cited answer pointing to the exact page in your guideline, you will understand why this approach is different. It is not about getting an answer — you could get that from Google. It is about getting a traceable, verifiable answer from the specific document you trust, in seconds rather than minutes.

As the year progresses, add new papers as they are published. Upload the updated guidelines when they arrive. Your knowledge base grows. Your queries become more powerful. And the time you save on document navigation can be redirected to the parts of CME that actually require your cognitive effort — clinical reasoning, hands-on training, and peer discussion.

10

Read more

AI for Medical Exam Preparation · AI-Powered Medical Literature Review · Why Medical AI Needs Citations