April 2026 · Dr Badrulhisham Bahadzor

Using Medevidex for Systematic Reviews and Research

01

The systematic review bottleneck

Anyone who has conducted a systematic review knows where the time goes. It is not the database search — PubMed, Scopus, and Embase queries take hours to refine, but the searches themselves execute in seconds. It is not the screening — tools like Rayyan and Covidence have made title and abstract screening manageable. The bottleneck is full-text data extraction.

You have finished screening. You have 50 to 100 full-text PDFs that met your inclusion criteria. Now you need to open each one and extract specific data points: sample size, study design, intervention details, primary endpoint, follow-up duration, adverse events, subgroup analyses. You need to fill a spreadsheet with dozens of columns, one row per study, and every cell needs to be accurate because a single transcription error can invalidate a meta-analysis.

This is the part that takes weeks. Not because it is intellectually difficult, but because it is mechanically tedious. You open a PDF. You search for "sample size." You find it buried in the methods section on page 4. You copy it to your spreadsheet. You go back to the PDF. You search for "primary endpoint." You find it in the results section on page 7, but the definition is actually on page 5. You cross-reference. You record. You move to the next PDF and repeat the entire process.

A typical systematic review with 60 included studies and 20 extraction variables means 1,200 individual data lookups — each requiring you to find the right passage in the right paper and transcribe it accurately.
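The arithmetic is worth making explicit. A quick back-of-envelope estimate — the per-lookup time here is an illustrative assumption, not a measurement:

```python
# Back-of-envelope estimate of manual extraction effort.
# minutes_per_lookup is an assumed figure for illustration only.
studies = 60
variables = 20
minutes_per_lookup = 3  # find the passage, cross-reference, transcribe

lookups = studies * variables
total_hours = lookups * minutes_per_lookup / 60

print(f"{lookups} lookups, roughly {total_hours:.0f} hours of mechanical work")
# 1200 lookups, roughly 60 hours of mechanical work
```

At three minutes per lookup, that is about a week and a half of full-time work spent on pure transcription.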

02

Why existing AI tools do not solve this

There are already AI tools designed for research. Elicit, Consensus, Semantic Scholar, and others offer intelligent search across published literature. These tools are genuinely useful — they help you find relevant papers, summarise abstracts, and identify trends across the literature.

But they solve a different problem. They help you discover papers. They search PubMed and other databases for you. What they do not do is query the full text of papers you have already downloaded and vetted. When you are in the data extraction phase of a systematic review, discovery is finished. You know exactly which papers are included. What you need is a tool that can read those specific papers and answer specific questions about their content.

This distinction is critical. Elicit might tell you that a particular trial exists and summarise its abstract. But the abstract does not contain the granular data you need for extraction — the specific inclusion criteria, the exact adverse event rates by grade, the subgroup analysis for patients over 70. That information is in the full text, often in tables on page 6 or supplementary data mentioned on page 12. And Elicit does not have your full-text PDFs.

Literature discovery and literature extraction are fundamentally different tasks. Existing AI research tools excel at discovery. Medevidex is built for extraction — querying across the full text of papers you have already selected and downloaded.

03

How Medevidex fits into the systematic review workflow

The workflow is straightforward. After you have completed your screening and downloaded the full-text PDFs of your included studies, you upload them to a Medevidex collection. The system processes each PDF — extracting text, figures, and tables — and indexes everything for retrieval.

Once processing is complete, you can query across your entire collection of included studies. This is where the time savings become dramatic. Instead of opening each PDF individually and searching manually, you ask Medevidex a question that spans all your included papers.

Consider some practical examples. You are extracting sample sizes: "What was the total number of patients enrolled in each study?" Medevidex retrieves the relevant passage from each paper where the sample size is stated and presents them with citations. You are extracting primary endpoints: "What was the primary endpoint and how was it defined in each trial?" The system finds the endpoint definitions across your papers, each pointing back to the specific document and page.

More complex queries work too. "Which studies reported grade 3 or higher adverse events, and what were the rates?" "What were the inclusion and exclusion criteria across these trials?" "Which studies used intention-to-treat analysis?" Each answer comes with citations to the specific paper and page, so you can verify every data point against the original source before entering it in your extraction spreadsheet.

04

The verification step is non-negotiable

I want to be direct about this: Medevidex is an extraction accelerator, not an extraction replacement. Every data point it retrieves should be verified against the source before it enters your systematic review dataset. This is not a limitation of the tool — it is a requirement of rigorous research methodology.

The good news is that verification is fast when the citation is real. Medevidex tells you the exact document and page number. You click through to the source page, confirm the passage, and move on. This takes seconds, compared to the minutes it would take to find the passage manually in the first place.

The workflow becomes: query, review the AI's retrieved passages, verify against the source page, record in your spreadsheet. The querying and retrieval — the time-consuming part — is handled by the system. The verification and recording — the intellectually essential part — remains with you.

A tool that saves you 70% of the mechanical extraction time while preserving 100% of the verification rigour is not cutting corners. It is removing the bottleneck so you can focus on what actually requires your expertise.

05

Querying across studies: comparative extraction

One of the most powerful applications is comparative extraction — asking a question that requires synthesising information across multiple papers. Traditional manual extraction handles one paper at a time. You fill in one row of the spreadsheet, then move to the next paper. Comparisons happen later, after all individual extraction is complete.

With Medevidex, you can ask comparative questions directly. "How did the surgical approach differ across these trials — open, laparoscopic, or robotic?" The system retrieves the relevant methods section from each paper and presents them side by side, each with its citation. You immediately see the landscape of interventions across your included studies.

This is particularly useful for identifying heterogeneity early. If you are planning a meta-analysis, knowing upfront that half your studies used a 90-day endpoint and the other half used a 12-month endpoint saves you from discovering this inconsistency halfway through manual extraction. You can query "What was the primary endpoint assessment timepoint in each study?" and see the variation instantly.
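Once those timepoints have been extracted and verified, spotting this kind of heterogeneity is a one-line tally. A minimal sketch — the study names and timepoints below are invented for illustration:

```python
from collections import Counter

# Illustrative extracted-and-verified endpoint timepoints (invented data).
endpoint_timepoint = {
    "Trial A": "90 days",
    "Trial B": "12 months",
    "Trial C": "90 days",
    "Trial D": "12 months",
}

# Tally how many studies use each assessment timepoint.
tally = Counter(endpoint_timepoint.values())
for timepoint, n in tally.most_common():
    print(f"{timepoint}: {n} studies")
```

A split like this, seen before extraction is complete, tells you early that pooling across timepoints will need justification in your analysis plan.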

Similarly, for inclusion criteria: "What age range was included in each trial?" reveals whether your studies are comparable before you invest hours in full extraction. This kind of rapid overview query is something that simply is not practical when you are working through papers one at a time.

06

Handling supplementary data and complex tables

A common frustration in systematic review extraction is that the data you need is often not in the main text. It is in Table 3 on page 8, or in the supplementary appendix, or in a figure legend. Traditional keyword search within a PDF often fails here — you search for "adverse events" and find a mention in the abstract, but the detailed breakdown by grade is in a table two pages later with no text mentioning "adverse events" near it.

Medevidex indexes tables and figures alongside text, which means retrieval is not limited to keyword matching in body paragraphs. When you ask about adverse event rates, the system can retrieve the relevant table, not just the sentence that mentions adverse events in passing. The citation points you to the exact page where the table appears, so you can read the full table in context.

For supplementary data, the practical approach is to download the supplementary PDF (most journals provide this as a separate file) and upload it alongside the main paper. Medevidex treats all documents in a collection as searchable content. This way, when the main paper says "see Supplementary Table S2 for detailed adverse event data," that supplementary table is already indexed and retrievable.

07

What Medevidex is not

Let me be clear about scope. Medevidex is a retrieval and extraction tool. It is not a statistical analysis tool. It does not perform meta-analysis. It does not calculate pooled effect sizes. It does not generate forest plots. It does not assess risk of bias (though it can help you find the passages you need to make that assessment yourself).

It also does not replace your methodological expertise. A systematic review requires judgment calls at every stage — which studies to include, how to handle missing data, whether clinical heterogeneity precludes pooling, how to interpret conflicting results. These are decisions that require domain knowledge and critical thinking. No AI tool should be making them for you.

What Medevidex does is remove the mechanical barrier between you and the information locked inside your included papers. It turns "open 60 PDFs and find the sample size in each" from a two-day task into a two-hour task. The intellectual work of conducting the review — the part that actually requires a researcher — remains entirely yours.

Medevidex handles the retrieval so you can focus on the reasoning. It finds the data points; you decide what they mean.

08

A practical workflow from search to extraction

Here is how a researcher might integrate Medevidex into a standard systematic review workflow, step by step.

First, conduct your database search and screening as you normally would — PubMed, Embase, Cochrane, whatever databases your protocol specifies. Use Rayyan, Covidence, or manual screening to select your included studies. Download the full-text PDFs of all included papers.

Second, create a collection in Medevidex for your review. Name it something meaningful — "SR: Robotic vs Open PN 2020-2026" — so you can find it later. Upload all your included full-text PDFs. If supplementary files contain important data tables, upload those too.

Third, wait for processing to complete. Medevidex will extract text, tables, and figures from each PDF and index them for retrieval. For a collection of 60 papers, this typically takes minutes, not hours.

Fourth, begin your extraction queries. Start broad: "What was the study design, sample size, and primary endpoint in each included trial?" Review the results, verify against source pages, and fill your extraction spreadsheet. Then move to specific variables: "What were the operative times reported in each study?" "What were the positive surgical margin rates?" "How was functional outcome assessed?" Each query scans your entire collection and returns cited answers.
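One way to keep this step disciplined is to treat each extraction variable as a named query and each study as a spreadsheet row. A minimal sketch using Python's standard csv module — the query wording, study name, and cell values are placeholders, and in practice each cell is filled only after verifying the retrieved passage against the cited source page:

```python
import csv

# One extraction variable per column; the question text mirrors what
# you would ask the tool. These queries and values are illustrative.
queries = {
    "sample_size": "What was the total number of patients enrolled?",
    "primary_endpoint": "What was the primary endpoint and how was it defined?",
    "operative_time": "What were the operative times reported?",
}

# One row per included study, filled in after verification.
rows = [
    {
        "study": "Trial A",
        "sample_size": "214",
        "primary_endpoint": "overall survival at 12 months",
        "operative_time": "182 min (median)",
    },
]

with open("extraction.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["study", *queries])
    writer.writeheader()
    writer.writerows(rows)
```

Keeping the query wording alongside the data means a second reviewer can re-run the same questions and reconcile any disagreements cell by cell.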

Fifth, use comparative queries to identify heterogeneity and inform your analysis plan. "What definition of chronic kidney disease was used across studies?" "How was complication severity graded — Clavien-Dindo, CTCAE, or other?" These queries give you the methodological landscape before you start the statistical work.

09

Beyond systematic reviews: other research applications

While systematic reviews are the most obvious use case, the same capabilities apply to any research task that involves extracting information from multiple documents. Writing a narrative review? Upload the key papers and ask synthesising questions. Preparing a research proposal? Upload the existing literature and query for gaps, methodologies, and outcome measures used in prior work.

Grant applications are another practical use case. When writing the background section of a grant proposal, you need to cite specific findings from specific papers. Instead of re-reading fifteen papers to find the passage you half-remember, query your collection: "What were the reported recurrence rates after partial nephrectomy in studies published since 2022?" The retrieved passages, with citations, give you exactly what you need for your background section — and the citations are already verified.

For clinical researchers juggling multiple projects, Medevidex collections serve as organised, queryable literature repositories for each project. Your bladder cancer systematic review in one collection. Your renal cell carcinoma prognostic study in another. Your department's clinical audit literature in a third. Each is independently searchable, and the documents are always available — no more hunting through folders on your desktop to find which PDF belongs to which project.

10

Read more

AI for Medical Literature Review · What Is RAG and Why Does It Matter for Clinical AI? · Organising Your Medical Library With AI