In this episode, I’m excited to welcome back Katrin Kupas, a statistician with deep expertise in Health Technology Assessment (HTA) and real world evidence (RWE). We dive into how RWE and the new Joint Clinical Assessment (JCA) process in Europe can work together—and where the challenges lie.

As the JCA becomes more central in EU regulatory and reimbursement discussions, knowing how and when to use real world data is critical. Katrin shares practical use cases, methodological guidance, and strategic insights for integrating RWE into early planning.

What You’ll Learn in This Episode:

✔ What the JCA is and why it changes how we plan evidence generation

✔ When and how RWE can help answer comparator questions in HTA

✔ The risks of non-randomized comparisons—and how to mitigate them

✔ Why we need an integrated evidence plan early in development

✔ How tools like ROBINS-I and quantitative bias analysis can improve credibility

Why You Should Listen:

If you’re a statistician, medical affairs lead, or part of an HEOR or market access team, this episode will help you:

✔ Develop a stronger evidence generation plan across the drug lifecycle

✔ Understand how real world evidence can support JCA submissions

✔ Learn when RCTs aren’t enough—and how RWE can fill the gap

✔ Gain practical advice for designing indirect treatment comparisons

✔ Improve your bias assessment strategies with tools like ROBINS-I

Resources & Links:

🔗 ROBINS-I Tool – for evaluating bias in non-randomized studies

🔗 The Effective Statistician Academy – I offer free and premium resources to help you become a more effective statistician.

🔗 Medical Data Leaders Community – Join my network of statisticians and data leaders to enhance your influencing skills.

🔗 My New Book: How to Be an Effective Statistician – Volume 1 – It’s packed with insights to help statisticians, data scientists, and quantitative professionals excel as leaders, collaborators, and change-makers in healthcare and medicine.

🔗 PSI (Statistical Community in Healthcare) – Access webinars, training, and networking opportunities.

If you’re working on evidence generation plans or preparing for the Joint Clinical Assessment, this episode is packed with insights you don’t want to miss.

Join the Conversation:
Did you find this episode helpful? Share it with your colleagues and let me know your thoughts! Connect with me on LinkedIn and be part of the discussion.

Subscribe & Stay Updated:
Never miss an episode! Subscribe to The Effective Statistician on your favorite podcast platform and continue growing your influence as a statistician.

Join thousands of your peers and subscribe to get our latest updates by email!

Get the shownotes of our podcast episodes plus tips and tricks to increase your impact at work to boost your career!

We won’t send you spam. Unsubscribe at any time.

Learn on demand

Click on the button to see our Teachable Inc. courses.


Katrin Kupas

Director – Patient-focused Real World Evidence at Merck

She previously led the Global Market Access & HEOR Biostatistics team at Bristol-Myers Squibb, and is actively involved in the PSI/EFSPI HTA Special Interest Group as well as the EFPIA working group on the EU HTA. Before joining BMS, she worked as a statistician in clinical development as well as in Medical Affairs.

Katrin holds a diploma in biomathematics and obtained her PhD in data sciences at the University of Marburg in 2006.

Transcript

RWE and JCA – how do they go together?

[00:00:00] Alexander: You are listening to The Effective Statistician Podcast, the weekly podcast with Alexander Schacht and Benjamin Piske, designed to help you reach your potential, lead great science, and serve patients while having a great [00:00:15] work-life balance.

[00:00:23] In addition to our premium courses on the Effective Statistician Academy, we [00:00:30] also have lots of free resources for you across all kinds of different topics within that academy. Head over to theeffectivestatistician.com and find the [00:00:45] Academy and much more for you to become an effective statistician.

[00:00:50] I’m producing this podcast in association with PSI, a community dedicated to leading and promoting the use of statistics within the healthcare industry [00:01:00] for the benefit of patients. Join PSI today to further develop your statistical capabilities with access to the ever-growing video-on-demand content library, free registration to all PSI webinars, and much, much more.

[00:01:14] [00:01:15] Head over to the PSI website at psiweb.org to learn more about PSI activities and become a PSI member today.[00:01:30]

[00:01:30] Welcome to another episode of the Effective Statistician today. I’m super happy to have Katrin back on the show. Hi, Katrin, how are you doing? 

[00:01:38] Katrin: I’m good. How are you? Happy to be here. 

[00:01:40] Alexander: Yeah, happy to have you. And today we talk about two acronyms that [00:01:45] have an interesting connection to each other: real world evidence, RWE.

[00:01:50] And the Joint Clinical Assessment, JCA. But before we dive into the topic, Katrin, maybe you can talk a little bit about where you’re [00:02:00] coming from and what brought you into this real world evidence space.

[00:02:04] Katrin: Yeah, happy to. I’m a statistician, a director in statistics, and I have more than 10 years of experience in HTA. I’m actively engaged in European [00:02:15] HTA activities, especially when it comes to evidence generation outside of an RCT.

[00:02:22] I’m now in a new position as a director for real world evidence. Real world evidence is very important when you do not have RCT data [00:02:30] in hand for generating all the evidence needed for this new European HTA process of the JCA. I’m always trying to link those two topics, to bring people together, and to discuss

[00:02:42] evidence generation plans and have [00:02:45] workflows ready to answer all those questions in this HTA process, and to be able to use real world evidence when there is a right use case for that. But yeah, I’ve been dealing with those two topics for quite a while now, and it’s very interesting and very difficult to marry them, [00:03:00] because in HTA there’s a strong focus on randomized controlled trials and on certainty of analysis results.

[00:03:06] And on the other hand, we have a lot of data outside of a randomized controlled trial. We have a lot of evidence available, we have strong data available, [00:03:15] and we need to do more to be able to answer the right questions with that data.

[00:03:18] Alexander: Yeah, so let’s directly dive into the use cases for using real world evidence in the JCA process.

[00:03:27] And I think we can already [00:03:30] think about two different scenarios. One is you have a first JCA, so the molecule is completely new on the market. You probably don’t have any real world evidence for your new molecule, [00:03:45] but you have it for the comparators. The second scenario could be that you have a new indication for your molecule, and so you have at least safety data and these kinds of things, and [00:04:00] maybe some off-label data for your molecule that is coming from real world evidence.

[00:04:06] So I think we need to have a look into these two different cases. Depending on these two, what are good use cases [00:04:15] for real world evidence and the Joint Clinical Assessment?

[00:04:18] Katrin: Let’s start with the first scenario. As the JCA kicked off on January 12th, those will be the most relevant use cases, because the JCA is only happening for new drugs that come to the [00:04:30] market.

[00:04:30] For the JCA, you will get a number of different PICOs. You have to answer population, intervention, comparator, outcome. Those PICOs have to fulfill the needs of all 27 member states, so you will end up with a [00:04:45] lot of comparators. You will definitely not have any randomized trial data against all those comparators.

[00:04:51] Real world evidence can be really important to fill this gap and to have effectiveness and safety [00:05:00] comparisons versus comparators you do not have in your randomized data, especially when you have a new drug and the studies that have been run for those comparators are old.

[00:05:11] You might have some bias due to a change in standard of [00:05:15] care or a change in disease management. You cannot use published data from clinical trials, but you might want to use actual real world data for those comparators to avoid this bias. So that can be a basis for an indirect treatment comparison against a comparator that you don’t have direct evidence [00:05:30] against.

[00:05:31] Of course, the data also helps you to define patient numbers, treatment patterns, the patient journey, to get a full picture of the drug. But for the JCA, the most relevant one is using an indirect treatment comparison, and that’s the one where the final [00:05:45] guidelines are open to it. They describe all the limitations which are there, but they are open to it.

[00:05:50] It’s described as a use case, the indirect comparison with real world data. And that’s the most important one. So for the second scenario, when you have your [00:06:00] drug on the market,

[00:06:01] Alexander: let’s double check on this indirect comparison here. We very often have indirect comparisons without a common comparator, I would say.

[00:06:10] Yeah. So it’s not the typical indirect comparison, like [00:06:15] you have two studies both compared to placebo and then you use placebo as the common comparator. It’s more likely that you have two arms, one from an RCT and the other from observational data, and now you need to [00:06:30] put them next to each other and have an indirect comparison without a common comparator. Of course, that makes things much more complicated.

[00:06:38] Katrin: Yes, a non-randomized comparison is much more complicated from a methods standpoint, but also when it [00:06:45] comes to assessing the bias you can have there. With randomization, you make sure that you have a balanced assignment to the two treatment groups with respect to disease and patient characteristics; you do not have a patient selection due to the [00:07:00] treatment.

[00:07:00] You also have the same starting points and the same index date for your comparison. So there are so many sources of bias that you can have when you do a non-randomized comparison. It’s really important not to focus only on [00:07:15] the statistical methods here, but to also focus on the selection of the data source, on identifying the different sources of bias, and then adjusting with the right statistical method to be able to rule out any bias that might be due to [00:07:30] confounding.

[00:07:30] And then, of course, there are always unknown, unmeasured confounders. You need to do a quantitative bias assessment to try to quantify the remaining residual bias you can have there.

[00:07:40] Alexander: We’ll dive deeper into biases later in this episode. [00:07:45] So that was the first use case. What’s the second use case?

[00:07:48] Katrin: So the second use case when you have your drug already on the market.

[00:07:52] In another indication, you could use real world evidence to get more safety data, maybe also long-term safety data [00:08:00] from real world clinical settings of this drug. You mentioned that at the beginning: if there is some off-label use already for the new indication, when the indications are close and there might be off-label use, you can maybe already use that data for a direct [00:08:15] comparison.

[00:08:15] The issue is, when you use real world data of your new drug, you might have a strong selection bias compared to the old standard of care. You might have very different patient populations taking the new drug and taking the old drugs. So that’s one of the [00:08:30] drawbacks, but you might have data collected in the same time period.

[00:08:33] There’s a different source of bias you get in those use cases, and you have to make sure to understand those sources of bias fully and then take a decision: do you have the data [00:08:45] that can answer the question you want to answer? Is it possible to adjust for that bias? Is it possible to quantify it or not?

[00:08:51] Alexander: Of course, it’s important to look into how you collect data.

[00:08:55] One source could be registries or claims [00:09:00] databases. If you plan for prospective observational studies, comparative prospective observational studies, when you launch your first indication, that can help you for later indications, at least from a safety point of view. And [00:09:15] if there are similar indications that you want to go into, [00:09:19] these could be good use cases for prospective observational studies.

[00:09:23] Katrin: Absolutely, fully agree. You mentioned a very good point: assessment of endpoints, assessment of data. [00:09:30] That is very important when you think of an oncology example, not just when you look at the data itself, but also when you compare a clinical trial versus the real world.

[00:09:39] In a clinical trial, you do the assessment by imaging, whereas in real world clinical practice, there might be [00:09:45] symptoms associated; there might not be an image all the time. So you might have different time points when you assess progressive disease and a different assessment of that endpoint. So it’s also very critical to look into that.

[00:09:56] But if you are able to collect the real world data [00:10:00] prospectively, you are able to define how the endpoints are assessed. You are able to define the patient population for both groups. You can emulate a target trial, where you just leave out the randomization, but all the other things you can try to make as [00:10:15] similar as feasible between the two arms to have more comparable data.

[00:10:18] Very important. 
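As a rough sketch of the kind of adjustment discussed above, the simulation below illustrates inverse-probability-of-treatment weighting (IPTW) based on a propensity score. This example is not from the episode: the data, the single confounder (age), and all effect sizes are invented, and a real analysis would involve many confounders, balance diagnostics, and sensitivity analyses.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical non-randomized comparison: one measured confounder, age,
# drives both treatment assignment and outcome.
n = 2000
age = rng.normal(60, 10, n)
# Older patients are more likely to receive the comparator (selection bias).
p_treat = 1 / (1 + np.exp(0.08 * (age - 60)))
treat = rng.binomial(1, p_treat)
# True treatment effect is +2.0; higher age worsens the outcome.
outcome = 2.0 * treat - 0.1 * (age - 60) + rng.normal(0, 1, n)

# Naive comparison is biased: the treated group is younger on average.
naive = outcome[treat == 1].mean() - outcome[treat == 0].mean()

# Fit a propensity model P(treat | age) by logistic regression
# (plain gradient descent to keep the sketch dependency-free).
X = np.column_stack([np.ones(n), (age - age.mean()) / age.std()])
beta = np.zeros(2)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ beta))
    beta -= 0.1 * X.T @ (p - treat) / n
ps = 1 / (1 + np.exp(-X @ beta))

# IPTW re-weights each arm to resemble the full population.
w = treat / ps + (1 - treat) / (1 - ps)
iptw = (np.sum(w * treat * outcome) / np.sum(w * treat)
        - np.sum(w * (1 - treat) * outcome) / np.sum(w * (1 - treat)))

print(f"naive difference: {naive:.2f}")  # overstates the true effect
print(f"IPTW  difference: {iptw:.2f}")   # close to the true effect of 2.0
```

The weighting step is the statistical stand-in for the balance that randomization would otherwise provide, and it only helps for confounders that were actually measured, which is exactly why the quantitative bias analysis discussed later in the episode matters.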

[00:10:19] Alexander: What further use cases do you foresee for using real world evidence for the Joint Clinical Assessment?

[00:10:25] Katrin: Real world evidence can also be very helpful to have a look [00:10:30] into trial feasibility. So when you prepare for the JCA, in the data you can see the patient numbers: is it feasible to recruit the patients for a trial that fulfills the JCA needs?

[00:10:42] Also, the patient journey might be very [00:10:45] important to understand how to design a trial to make sure that the patient benefits from the drug. So trial feasibility is really a very important one. Of course, you can use the data to understand prognostic and predictive factors for the treatment [00:11:00] outcome, and that’s also very important when you want to run an indirect treatment comparison, that you have a full understanding of what the prognostic and predictive factors are, and what might be possible confounders as well, that you have to adjust for.

[00:11:12] And then, of course, you can look at [00:11:15] your health technology in the real world setting, so external validity. If you have a randomized trial, there is more of a focus on internal validity, and you look at the causal treatment effect. But what’s the external validity? How does the health technology [00:11:30] show effectiveness in the real world and the standard of care outside of the clinical trial setting?

[00:11:35] Alexander: That is, especially for the comparators, also very relevant. Yes. And the comparators: how much are they used, and are they [00:11:45] used in the same way as in the clinical trial, in terms of dosage, frequency, whatsoever?

[00:11:51] Katrin: Absolutely. And also the effectiveness. It can be very different in the real world, because you might have many more different factors that come in, [00:12:00] and it can be very simple ones, like the region the patient is living in and the access they have to different drugs and different care outside of the treatment.

[00:12:09] So there’s a lot that can be assessed using real world evidence. Also, in rare diseases, one [00:12:15] of the use cases which is already very established is to enrich the data of a control group, using historical data and real world evidence to borrow data for that enrichment. That’s very established already, and it’s a very important use case if [00:12:30] you have only a few patients in a very rare disease,

[00:12:32] to be able to have enough power.

[00:12:34] Alexander: Yeah, that is definitely a super interesting use case, and one that is not just relevant for the Joint Clinical Assessment, but potentially [00:12:45] also for your regulatory submission. Having these things in place is important. Now, a couple of the use cases you mentioned don’t just come into play when you do the JCA, but much earlier in the [00:13:00] process.

[00:13:00] Looking into real world evidence is not an exercise that you do one year before you want to launch the product, but throughout the development period. For example, when you speak about trial [00:13:15] feasibility or informing prior distributions or prognostic and predictive factors, all these use cases happen far earlier in the development process.

[00:13:26] That calls for [00:13:30] having a good integrated evidence plan. You lose a lot of opportunities if you just focus on your clinical trials for your regulatory submission.

[00:13:39] Katrin: Identifying important data gaps very early on, and understanding whether those [00:13:45] questions that we have can be answered with real world evidence.

[00:13:48] The focus is, of course, filling data gaps for the medical community, because there are a lot of questions beyond the clinical trial that cannot be answered. Also having in mind already the HTA [00:14:00] assessment, the market access environment, where you have to show that you’re better than the standard of care; having that early on in mind, to have a good strategy for real world evidence use, is very important.

[00:14:11] The integrated evidence plan is one of those tools that helps you [00:14:15] to define the data gaps and understand which data can answer which question. Real world evidence cannot answer every question, definitely not, but it definitely has an important place in the whole strategy of generating evidence.

[00:14:29] Alexander: Let’s look [00:14:30] into bias assessment, because that is a very important thing. In preparing for this podcast episode recording, we talked about a publication that introduces a tool [00:14:45] called ROBINS-I.

[00:14:49] Can you expand on what that is and why it is important for the Joint Clinical Assessment?

[00:14:55] Katrin: The ROBINS-I tool is a tool that gives you a [00:15:00] very focused workflow for assessing how much bias you have in your comparison. Bias here is defined as the tendency for results to differ systematically from those of a perfectly designed randomized [00:15:15] trial in a huge patient group without any flaws.

[00:15:18] The tool looks at different types of bias that can happen. It looks at the patient selection for the trial, it looks at confounding, it looks at the assessment of [00:15:30] outcomes. It looks at many different dimensions to come up with a conclusion: does this trial have a high risk of bias or a low risk of bias?

[00:15:37] The ROBINS-I tool is meant for non-randomized cohort studies that compare one drug versus another. It’s a [00:15:45] very useful tool that lets you think very early on about the possibilities of bias you have in there. And it does not only focus on confounding; it focuses on the other things as well, on the trial design, on the patient selection.

[00:15:57] And it gives you a very structured approach [00:16:00] to look at your bias. 

[00:16:01] Alexander: Yeah, 

[00:16:01] Katrin: So it’s a very useful tool, and it’s mentioned in the JCA guidelines that it has to be filled in for every non-randomized comparison that you want to show.

[00:16:11] Alexander: And we’ll put a link to this tool into the show [00:16:15] notes, as well as a link to a presentation that Katrin provided some time ago that goes into lots of these use cases as well as the confounding.

[00:16:26] In summary, there are a lot of [00:16:30] opportunities for real world evidence within the Joint Clinical Assessment. The key things are that you plan for these very early on, because clinical trials are just one source, [00:16:45] and getting the real world evidence data will take you some time. You need to have all of that pre-planned so that you can respond to questions from the JCA within this 100-day period pretty [00:17:00] fast, and you need to have a good understanding of all the different biases.

[00:17:05] What other key points should listeners take into consideration when using real world evidence for the Joint Clinical Assessment?

[00:17:14] Katrin: So, what is very [00:17:15] important is a full understanding of the different sources of bias, then a pre-specification of the analysis methods to avoid results-driven conclusions.

[00:17:25] Sometimes you cannot pre-specify everything fully, because sometimes you have to have [00:17:30] seen the data to decide what to do, if you think of imputation of missing data, but you can have a decision tree to avoid doing that results-driven. And then, of course, the scientific methodology to adjust for the confounders that you have identified

[00:17:44] [00:17:45] before. And I always recommend to really look into the literature, to talk to experts, to have, or try to get, full knowledge of all the possible confounders, and not to focus only on the variables you have measured in your [00:18:00] data. Because that’s what people often do: they do not have the full picture.

[00:18:04] They then look at the data: I have those variables, and I’ll adjust for those covariates, and then I’m fine. Instead of doing that just for the ones you have, really get a full picture. [00:18:15] Use propensity scores, use a lot of sensitivity analyses to show the robustness. And then a very important point is QBA, quantitative bias analysis, for sure.

[00:18:24] If you have a known confounder that you have measured in your data, you can adjust your treatment [00:18:30] effect and you can adjust for that confounding, but you will have a lot of unknown confounding where you do not know the effect. And then there are a lot of possibilities there. You can do a tipping point analysis: which effect of a confounding variable, and which imbalance, could explain the [00:18:45] treatment effect solely by confounding?

[00:18:47] If those examples are very extreme, it’s very unlikely that you see the result just due to confounding. There are also a number of summary measures, like the E-value, which you can show. There are other robustness values I like much [00:19:00] more, because you can set them into context and they have thresholds. So there are many possibilities to do a quantitative bias analysis and to quantify the bias you might have in your results, to show the robustness and also to convince HTA bodies, especially when you know [00:19:15] that you have used real world data and that you do not have randomization.

[00:19:18] I think that’s one of the steps that should be a standard in every real world evidence analysis: to quantify the bias you still have in there.
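As a concrete illustration of one of the summary measures mentioned above, the E-value (VanderWeele and Ding) can be computed in closed form from an observed risk ratio. This sketch is not from the episode, and the example risk ratios are invented:

```python
import math

def e_value(rr: float) -> float:
    """E-value for a point-estimate risk ratio.

    The minimum strength of association, on the risk-ratio scale, that an
    unmeasured confounder would need with both treatment and outcome to
    fully explain away the observed association.
    """
    if rr < 1:                      # protective effects: invert first
        rr = 1 / rr
    return rr + math.sqrt(rr * (rr - 1))

# Hypothetical observed risk ratio of 2.0 from a non-randomized comparison:
print(round(e_value(2.0), 2))  # -> 3.41: a confounder would need RR >= 3.41
                               # with both treatment and outcome
print(round(e_value(0.5), 2))  # -> 3.41: symmetric for protective effects
```

A large E-value means an unmeasured confounder would have to be implausibly strong to fully explain the observed effect; as noted in the episode, measures with explicit thresholds and context can be easier to interpret for HTA bodies.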

[00:19:27] Alexander: Thanks so much. Yeah, completely agree. Getting a [00:19:30] good understanding of the sources of bias and quantifying these will make all the difference for increasing the trust in your analysis.

[00:19:39] Thanks so much, Katrin, for this great episode.

[00:19:41] Katrin: Thank you.[00:19:45] 

[00:19:46] Alexander: This show was created in association with PSI. Thanks to Reine and her team at VVS, working on the show in the background, and thank you for listening. Reach your potential, lead great science, and serve [00:20:00] patients. Just be an effective [00:20:15] statistician.

Join The Effective Statistician LinkedIn group

I want to help the community of statisticians, data scientists, programmers and other quantitative scientists to be more influential, innovative, and effective. I believe that as a community we can help our research, our regulatory and payer systems, and ultimately physicians and patients take better decisions based on better evidence.

I work to achieve a future in which everyone can access the right evidence in the right format at the right time to make sound decisions.

When my kids are sick, I want to have good evidence to discuss with the physician about the different therapy choices.

When my mother is sick, I want her to have access to the evidence and to be able to understand it.

When I get sick, I want to find evidence that I can trust and that helps me to have meaningful discussions with my healthcare professionals.

I want to live in a world, where the media reports correctly about medical evidence and in which society distinguishes between fake evidence and real evidence.

Let’s work together to achieve this.