In this episode, I’m joined by Deepa Jahagirdar, Associate Research Principal at Cytel, to explore what it really takes to build a good external control arm (ECA). Deepa brings a fascinating background from social epidemiology, where causal questions often need to be answered without running randomized trials. That experience translates directly into today’s growing need for ECAs, especially when we rely on real-world data to support single-arm trials, extension phases, or situations where randomization simply isn’t possible.

Together, we discuss how to choose the right data source, how target trial emulation works in practice, what to do about confounding, and how to judge whether an ECA is truly robust. If you’re working with real-world evidence, complex study designs, or causal inference, this episode will give you clarity and confidence in approaching ECAs the right way.

Why You Should Listen:

✔ You want a clearer understanding of when and why ECAs make sense.

✔ You’re dealing with real-world data and need a practical framework for selecting the right source.

✔ You’ve heard the term target trial emulation, but want to understand how it’s applied in real projects.

✔ You want to strengthen the causal credibility of your studies without relying solely on randomized trials.

✔ You want simple, actionable principles for handling confounding and unmeasured bias.

Episode Highlights:

[00:00] – Setting the stage
I introduce the topic of external control arms and why they’re more widely relevant than many statisticians think.

[01:35] – Introducing Deepa
Deepa shares her path from social epidemiology into designing and supporting ECA studies at Cytel.

[03:00] – Why ECAs are fascinating
We talk about how methods used to study policies without RCTs translate into clinical research.

[04:00] – Where ECAs show up
I walk through common scenarios—from rare diseases to extension studies—where external controls add value.

[07:30] – Choosing the right real-world data
Deepa explains how she approaches data selection depending on disease, outcomes, and feasibility.

[10:20] – Target trial emulation
We discuss how designing the “ideal RCT” guides everything that follows when constructing an ECA.

[16:30] – Handling confounding
Deepa explains the role of expert knowledge, DAGs, and standard adjustment approaches.

[21:20] – Thinking about unmeasured confounding
We talk about assessing robustness and understanding how much bias it would take to overturn your results.

[24:20] – Final takeaways
Deepa highlights the importance of focusing on the big causal question and overall robustness—not perfection.

Links:

🔗 The Effective Statistician Academy – I offer free and premium resources to help you become a more effective statistician.

🔗 Medical Data Leaders Community – Join my network of statisticians and data leaders to enhance your influencing skills.

🔗 My New Book: How to Be an Effective Statistician – Volume 1 – It’s packed with insights to help statisticians, data scientists, and quantitative professionals excel as leaders, collaborators, and change-makers in healthcare and medicine.

🔗 PSI (Statistical Community in Healthcare) – Access webinars, training, and networking opportunities.

Join the Conversation:
Did you find this episode helpful? Share it with your colleagues and let me know your thoughts! Connect with me on LinkedIn and be part of the discussion.

Subscribe & Stay Updated:
Never miss an episode! Subscribe to The Effective Statistician on your favorite podcast platform and continue growing your influence as a statistician.


Deepa Jahagirdar

Research, Data & Statistics | PhD Epidemiology

Deepa Jahagirdar is currently an associate research principal at Cytel. She is the technical lead for study design, methods, and statistics for a variety of projects, including target trials and ECA. Prior to this position, she completed her Ph.D. in epidemiology at McGill University. She has over ten years of experience developing methodological solutions to complex data and statistical problems in epidemiology, enabling robust findings across various substantive areas. Additionally, she has extensive experience facilitating work with various stakeholders and clients, ranging from international funding agencies, corporations and academia to government. She excels at conveying highly technical concepts in meaningful ways to foster effective collaborations.

Transcript

External control arms – how to get to a good one

[00:00:00] Alexander: You are listening to the Effective Statistician Podcast, the weekly podcast with Alexander Schacht and Benjamin Piske designed to help you reach your potential, lead great science, and serve patients while having a great work-life balance.

[00:00:21] Alexander: In addition to our premium courses on the Effective Statistician Academy, we also have lots of free resources for you across all kinds of different topics within that academy. Head over to theeffectivestatistician.com and find the Academy and much more for you to become an effective statistician. I’m producing this podcast in association with PSI, a community dedicated to leading and promoting the use of statistics within the health industry for the benefit of patients.

[00:01:00] Alexander: Join PSI today to further develop your statistical capabilities, with access to the ever-growing video-on-demand content library and preregistration for all PSI webinars. Head over to the PSI website to learn more about PSI activities, and become a PSI member today.

[00:01:35] Alexander: Welcome to another episode of The Effective Statistician. Today, I’m super excited to have a colleague of mine from Cytel on the line, and we will talk about a super interesting topic: external control arms. Just shortly before this meeting, we were actually talking about this topic and a couple of therapeutic areas and companies we are working with on ECAs, so external control arms.

[00:02:04] Alexander: And with that, hi Deepa. How are you doing?

[00:02:07] Deepa: I am happy to be here and I am currently an associate research principal at Cytel, and I work to support a lot of our [00:02:15] clients on methods and statistics for designing ECA studies among other complex study designs.

[00:02:22] Deepa: My background is I did a PhD in epidemiology at McGill a few years ago, and I spent some time in different substantive areas, mostly with the underlying theme of handling more complex statistical problems to get at real associations and causal effects without actually having access to trial data. So I started at Cytel a few months ago and now work in this space with ECAs.

[00:02:50] Alexander: What makes this ECA topic interesting for you? 

[00:02:55] Deepa: My background really comes from social epidemiology, where you can never really design true RCTs. So for example, when you wanna find the effect of a policy, like a labor policy that the government has passed, and whether it actually worked to increase jobs, you have to figure out cool methods to really try and get at this answer in a way that an RCT would.

[00:03:15] Deepa: But of course, we know the people who take up labor policies are not random. There’s no way you can design a clinical trial in practice. So you have to think of how you can really get at your ideal trial to study these policies. Now I’ve realized that in this real world data and clinical space, the same methods can be used, and that’s what makes ECAs really interesting: we can try to use the same methods to get at causal effects of drugs and different technologies in the same way that we would in a clinical trial.

[00:03:47] Deepa: But instead, we use this external control arm or ECA, which is data that does not come from the trial but rather from an observational data source, data that’s already out there, and we have to think about how we can make sure that we address challenges around biases, and we’ll go into more detail on that.

[00:04:07] Deepa: But it really is similar to my background of trying to design causal studies to assess causal policy effects, [00:04:15] but translated to this world of real world data and clinical trials. 

[00:04:19] Alexander: Awesome. Yeah, I think it’s really cool that we can learn so much about our work and our challenges by looking into very different research [00:04:30] topics.

[00:04:30] Alexander: Yeah. That’s the strength of really coming from the data side: the data very often looks similar in different areas. The context is different, but mathematically there are lots of similarities. External control arms are very often used for single-arm studies.

[00:04:48] Alexander: And single-arm studies, of course, happen in, for example, advanced oncology indications, so very small oncology indications, rare indications, or rare populations like pediatric populations. But we can also find them in other cases, like the very typical standard designs that we have in many different indications that are also very prevalent.

[00:05:16] Alexander: Where we have a first randomized trial, maybe against placebo, but the placebo is only given for, let’s say, three or six months, and then those patients switch over to the experimental drug. And then this long-term extension period of the study is again a single-arm trial. And of course that is something that you see in depression studies, that you see in psoriasis studies.

[00:05:45] Alexander: And many different indications where of course you don’t wanna give placebo for years and years to come. So even if you think that’s not a topic for you because you’re not working in oncology or rare diseases, it might actually [00:06:00] be a topic. And one other aspect could also be safety.

[00:06:05] Alexander: When you pool data from lots of different studies, in a sense you also get one large single-arm trial where only very small parts of it are controls. So that’s another area, or if you have safety data coming from all kinds of different other sources. So external control arms have a really important role to play in our medical research.

[00:06:30] Alexander: Now, when we think about external control arms, there are basically three areas where we can get the external control data from. The first is maybe you have run another clinical trial, or you get a clinical trial from some kind of consortium where you get this kind of data from. Or you just do a literature review, hopefully a systematic literature review.

[00:07:00] Alexander: And you basically create some kind of network meta-analysis and get some kind of cumulative data from there. That’s of course not individual patient-level data. But I think the most prominent source here in terms of external controls is real world data. Now, real world data is a pretty big field.

[00:07:21] Alexander: People, how do we actually find the right real world data for our specific use case? 

[00:07:27] Deepa: Sure. And I think this depends on what the substantive area or the disease is. Are you talking about a rare disease? Are you talking about outcomes that may develop over many years? Or are you talking about something that’s more acute, where you’d expect to see a lot of patients experiencing death or whatever outcome you are looking for?

[00:07:49] Deepa: So for the more common use cases, there are big databases available that contain claims data. And often if you have a hard endpoint like death, this data can be leveraged: with millions and millions of patients in these databases, you’d probably be able to find enough cases.

[00:08:10] Deepa: And by hard endpoints, I mean endpoints that we know would be coded accurately. This claims data may be enough, but if you’re talking about a disease that’s much rarer, where it’s harder to find the cases, then maybe disease-specific registries offer another option for data sources. You might not have enough cases, even in millions of records of claims data, for your study.

[00:08:30] Deepa: It can become more challenging when you have outcomes that are not coded often. This can be diseases where there are cognitive outcomes of interest, like intellectual disability or symptoms related to autism, outcomes that you wouldn’t be able to get in traditional databases.

[00:08:46] Deepa: And in that case, you could even consider using academic data and collaborating with academic groups that have collected this data over time, and be able to draw on this, because those outcomes are so specific that you often wouldn’t find them in large-scale databases like claims databases or electronic medical records.

[00:09:04] Alexander: Yeah. And the other point is, of course, that lots of these databases are local databases. We don’t have global claims data; we have claims data in Germany and in France and in Sweden, and in the US we have lots of different data providers for these claims data.

[00:09:23] Alexander: You will need to look into all these kinds of different resources, but of course, having some experience with doing this type of research can help quite a lot to weed out a lot of databases that most likely will not help.

[00:09:40] Deepa: Yes, and also collaborating with the right people. Like you’re saying, in different countries there are different key data sources, and I think having people on the ground who understand that matters. Especially if your study is centered in a specific country, you need to have an awareness of the data landscape in that country. And I know one time we talked to someone about a rare disease where the only data available was manual health records.

[00:10:06] Deepa: So literally we would be going there and transcribing records; they didn’t have electronic health records for this historic data. The data does exist, but you have to be aware and be speaking to the right people to know that, especially when it’s rare diseases.

[00:10:20] Alexander: Very good. Now there’s one term that is floating all around when doing an ECA, and that is target trial emulation, or TTE. How does target trial emulation relate to ECAs, and how is it then applied in real world data?

[00:10:39] Deepa: I would describe target trial emulation as trying to follow all the steps to design a robust RCT so that we can make similarly robust conclusions without actually being able to do the RCT, for many reasons.

[00:10:53] Deepa: And by robust, I mean that we can ultimately conclude that A causes B, and to make that statement there’s a strong standard of scientific rigor, so that’s what we’re hoping to achieve with target trial emulation. So the steps would go that you would basically be designing your ideal randomized controlled trial.

[00:11:13] Deepa: In the same way: defining your population, defining your outcomes, defining your eligibility with inclusion and exclusion criteria. And it would basically follow the same thing, with the caveat that you’re not actually going to be randomizing the patients. Once we have this, it’s called our target trial, and then the way the ECA works into this is that you may have patients from an actual trial. For example, the situation you were describing where patients are given the treatment for a certain number of months and there’s a control group, but then everyone gets the treatment in an open-label extension and they know what treatment they’re getting. So you could use an external control arm to create a control group for the part of the study where everyone knows their treatment. Or you can use an external control arm when the trial itself is not really a trial in the sense that it’s a single-arm trial, so in itself it does not have a control group. And this can happen, for example, when it could be unethical to withhold the treatment over a long period of time if the disease takes a long time to develop or progress.

[00:12:20] Deepa: So there are many reasons why someone may choose to conduct a single-arm trial, but we can use this external control arm to then get at our target trial, which is the trial we would’ve done had we been able to randomly assign patients to treatment and control.
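The protocol components Deepa lists are often written down explicitly before any data work starts. A minimal sketch of what such a specification might look like in Python; every entry here (disease, drug, endpoint, criteria) is a hypothetical placeholder, not from any real study:

```python
# Illustrative target-trial protocol specification. All entries are
# hypothetical placeholders, not taken from an actual study.
target_trial = {
    "population": "adults with newly diagnosed disease X",
    "eligibility": {
        "inclusion": ["age >= 18", "confirmed diagnosis"],
        "exclusion": ["prior treatment with drug A"],
    },
    "treatment_strategies": ["start drug A at diagnosis", "standard of care"],
    # The one component that cannot be emulated directly: randomization.
    "assignment": "emulated via an external control arm (no randomization)",
    "outcome": "overall survival at 24 months",
    "follow_up": "from diagnosis (time zero) to 24 months",
    "analysis": "intention-to-treat analogue with confounding adjustment",
}

for component, spec in target_trial.items():
    print(f"{component}: {spec}")
```

Writing the protocol down first, as Deepa describes, makes explicit which component (assignment) the ECA has to emulate and which components the real world data source must be able to match.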

[00:12:34] Alexander: I’m just thinking from a timing perspective.

[00:12:37] Alexander: I have been in lots of discussions where one first runs the clinical trial, a single-arm clinical trial, and then builds this real world data external control arm. I think that doesn’t give us a lot of flexibility.

[00:12:55] Alexander: Because the real world data is what it is: you can’t change anything about it, and once you have run the clinical trial, you can’t change anything about the clinical trial either. Is that a good way to think about it, first running your clinical trial and then finding the external comparator data?

[00:13:16] Alexander: Or should we first basically assemble the external control data and then run the clinical trial? Or is there something in between?

[00:13:24] Deepa: Yeah, I would almost have two answers to this question. The first would be: ideally, it would go right into your design. If you’re planning to do an external control arm, you plan that at the start of a trial, and sometimes this might be possible, where you know you can only conduct a single-arm study,

[00:13:40] Deepa: so you can build using the external real world data right into the design. But I think in practice this often isn’t possible. And part of the reason is that the RCT is really meant to be the gold-standard, ideal study design, so you don’t necessarily want to make any modifications to that design because of the availability of real world data.

[00:14:00] Deepa: So go ahead and design your study, run your trial, and then only think of the ECA after. That presents its own challenges, in that you can’t always find real world data that will match or align with your trial data. And that would be the advantage of thinking of it first.

[00:14:17] Deepa: But it is important to acknowledge that when it comes to a trial, if you can do it well, you probably don’t want to take away from that robustness in order to be able to work with an ECA at a later date. And the other part of this is that often it’s not thought of until after the fact that this is possible.

[00:14:34] Deepa: So they’ll do the trial, but then for some reason an additional degree of robustness is desired, and then it’s: oh, let’s start looking for real world data where we can construct an ECA. So in practice, that’s often what happens.

[00:14:56] Deepa: But yes, from the standpoint of getting these data sources as close as possible, building this ECA into the trial makes sense. But then we also wanna consider that a trial is a trial, and we don’t want to modify the trial based on the availability of real world data.

[00:15:05] Alexander: There are certain parts of trials that you can always think about: a certain operationalization, or how often you capture data, or what kind of additional data you might want to capture.

[00:15:18] Alexander: So there are always opportunities, I think.

[00:15:22] Deepa: For sure. And if there are ways we can augment the trial a little bit, like by collecting more information or something that will make the ECA more feasible, that would be really desirable to think of in advance.

[00:15:33] Alexander: Yeah. So I’m just thinking: imagine there’s a registry

[00:15:37] Alexander: for that specific disease, and they use a specific questionnaire. If you don’t record that questionnaire in the clinical trial, you really miss out. And that might be just a very simple, small thing to change. Or maybe there are certain baseline characteristics that you wanna evaluate in a similar way as you have them in the registry,

[00:16:00] Alexander: so that you can really adjust for them when you do bias control and these kinds of things.

[00:16:05] Deepa: Yeah, and similarly with quality of life questionnaires: there are different measures that get at the same construct, but if you’re able to standardize that with what was available in some kind of registry data, you’re in a much better spot to do that analysis.

[00:16:18] Alexander: On scientific rigor, you just talked about this, and of course there is one aspect that is really important: because we don’t have randomization, as you mentioned, we get the treatment information from two different sources. How can we make sure that we don’t measure the difference in sources rather than the difference in treatments?

[00:16:41] Deepa: Yeah, so this is the key limitation of target trial emulation and ECAs: in a randomized controlled trial, you are literally randomized to the treatment, and there are many safeguards in place to ensure that it’s truly randomized. Of course, there can be problems here too, but in theory, because you’ve randomized the patients entirely in terms of who gets the treatment and who doesn’t, all the characteristics that a patient has will inherently balance out automatically, even if you don’t measure them.

[00:17:11] Deepa: This is important in cases where there are certain people, like those who are sicker, who are more likely to get the treatment and also more likely to experience the outcome, like passing away. And this is a classic confounder situation. So in real world data, we don’t have that assurance of having randomized the people.

So what we need to do is have a really strong hold on what the important confounders or characteristics in this disease area are. And first we do that usually by talking to people who have a lot of clinical knowledge in this area or have otherwise worked in the specific disease area.

And we build a map; it’s called a directed acyclic graph, or DAG. It’s a map of how all these different characteristics work with the treatment and the outcome, and how they influence each other, so that we have a really strong understanding of all the pathways in this disease. And then that map gets translated into looking in the real world data to find those variables.

So let’s, for example, say smoking. If we know smoking is a very important characteristic, then hopefully it’s measured somewhere and we can adjust for it. So we have that list of variables identified by experts, and then we check whether we’re able to find them all in the registry data. And there are some that maybe are a little more flexible, and others where we have to have it ’cause it’s too important to leave out.

But ultimately, then we have the list of variables we can adjust for, and maybe we’ll go into how that’s done. But when we adjust for those, we can adjust for everything that we see, and we can check that the patients are actually balanced on all these characteristics. And that part is the same as in an RCT.

You would typically produce a table one where you’re checking for balance on all the key variables. And we do the same with the registry data, but a lot of emphasis is placed on the process of identifying and looking for those variables.
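The "table one" balance check Deepa mentions is commonly summarized with standardized mean differences (SMDs). A minimal sketch with entirely made-up numbers; the 0.1 cut-off is a common rule of thumb, not a hard requirement:

```python
import statistics

def smd(x_trial, x_external):
    """Absolute standardized mean difference: |mean difference| / pooled SD."""
    m1, m0 = statistics.mean(x_trial), statistics.mean(x_external)
    v1, v0 = statistics.variance(x_trial), statistics.variance(x_external)
    return abs(m1 - m0) / (((v1 + v0) / 2) ** 0.5)

# Toy baseline characteristic (age) in a trial arm vs. an external control
age_trial = [52, 55, 49, 60, 58, 51, 54, 57]
age_external = [63, 66, 59, 70, 61, 65, 68, 62]

balance = smd(age_trial, age_external)
# A common rule of thumb flags SMD > 0.1 as a meaningful imbalance
print(f"SMD for age: {balance:.2f}")  # prints: SMD for age: 2.62
```

Unlike a p-value, the SMD does not shrink just because the external data source is large, which is one reason it is the usual balance summary in this setting.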

[00:19:00] Alexander: So then we have all the confounders, and we get that both from understanding the medical background and from looking into the data. What kind of approaches do we then use to leverage these known confounders to adjust for potential biases between the different treatment arms?

[00:19:20] Deepa: So once we have that measurement of balance across all the covariates, some will definitely jump out as very imbalanced, meaning, for example, that the patients in the treated group are much more likely to be older. There’ll be something we see, but as a whole, we take all these covariates and usually employ what’s called a weighting approach.

So for the patients who are more likely to receive the treatment based on different characteristics, we basically generate a weight that’s inverse to that, so that they contribute less to the final data, and those that are less likely to have received the treatment contribute a higher weight. In that sense, we construct a kind of pseudo-population where everything is balanced by using weighting. And this is typical of other settings too. For example, when you do a survey, the results often come with sampling weights, where those more likely to be sampled carry a lower weight, so that you can get proper results in descriptive analyses with survey data.

[00:20:18] Deepa: So it’s very similar to that. But in this case, the score is usually called a propensity score, and the method would be called inverse probability of treatment weighting. That basically means we’re weighting people by the inverse of the probability that they received the treatment. And after we apply these weights, we can actually check how well it worked, so we can see how the balance changes.

[00:20:39] Deepa: Now the treatment group is not older anymore; everything is averaged out, and there’s a diluted pool of these characteristics, so that we use the patients with weights that end up allowing us to have a balanced population when running the analysis.
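The weighting Deepa describes can be sketched in a few lines. This is a toy example, not an actual analysis pipeline: with a single binary confounder ("severe disease"), the propensity score reduces to a subgroup frequency, so no model fitting is needed; in practice it would come from something like a logistic regression on many covariates.

```python
# Toy inverse probability of treatment weighting (IPTW) with one binary
# confounder. (treated, severe) pairs: made-up data where sicker patients
# are treated more often.
patients = [(1, 1)] * 6 + [(1, 0)] * 2 + [(0, 1)] * 2 + [(0, 0)] * 6

def propensity(severe):
    """P(treated | severe), estimated directly from the data."""
    flags = [t for t, s in patients if s == severe]
    return sum(flags) / len(flags)

def iptw_weight(treated, severe):
    """Treated patients get weight 1/ps, controls get 1/(1 - ps)."""
    ps = propensity(severe)
    return 1 / ps if treated else 1 / (1 - ps)

def weighted_prevalence(arm):
    """Weighted share of severe patients in one arm of the pseudo-population."""
    rows = [(iptw_weight(t, s), s) for t, s in patients if t == arm]
    return sum(w * s for w, s in rows) / sum(w for w, _ in rows)

# Unweighted, severity is imbalanced (75% of treated vs. 25% of controls are
# severe); after weighting, both arms of the pseudo-population sit at 50%.
print(round(weighted_prevalence(1), 3), round(weighted_prevalence(0), 3))
```

Re-computing balance on the weighted data, as the last step does, is exactly the after-weighting check Deepa describes.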

[00:20:54] Alexander: And there are other ways you can use this approach.

[00:20:56] Alexander: You can create bins for your propensity score and then adjust within these bins, basically turning it into an ordinal categorized covariate. Or you can use matching, or you can use regression approaches. So there are various ways you can move around this.

[00:21:16] Alexander: And then you have basically gained in bias control, but you usually pay a little bit in terms of precision, and you have a decreased effective sample size. One thing that still remains is unmeasured confounding, and that’s something I wanna touch on more now, because I’ve never talked about this on the podcast up to now.

[00:21:44] Alexander: It took [00:21:45] us more than 450 episodes to talk about unmeasured confounding. What can we do to better understand unmeasured confounding? 

[00:21:54] Deepa: And I’ll just back up to explain why this is a problem in real world data, and it goes back to what I said about randomization: while it takes care of the characteristics we observe, it also implicitly takes care of the confounders that we don’t observe.

[00:22:10] Deepa: So that’s the unmeasured confounding. We don’t really have to deal with this in an RCT, but in real world data, when we construct those weights I was describing, that’s based only on what we observe. So we don’t know what’s going on with the things we don’t observe. That could still drive our study results, and that would mean it’s not robust anymore.

[00:22:28] Deepa: We can’t make those causal claims if there’s too much unmeasured confounding. So the way we approach this question is more conceptual: how much unmeasured confounding would it take in order to actually flip our study results? And we can answer that question by doing a quantitative bias analysis,

[00:22:48] Deepa: where we do precisely that: we explore the impact of the assumptions we’re making around unmeasured confounding. Is there very little? Is there a lot? And we see how our effect changes under all sorts of scenarios. Ideally, what you would want to see is that it would take a ton of unmeasured confounding to flip our effect, because then we enter the realm of: that’s not even plausible.

[00:23:10] Deepa: It’s so much confounding, this probably couldn’t even happen. So that means our effects are pretty robust, even if there’s this big missing thing that we are not accounting for. And that’s generally the goal when you’re doing real world data analysis and an ECA: you just wanna understand all the assumptions you’re making in the study with regard to missing data, confounding, or whatever it is.

[00:23:32] Deepa: You’ve got to keep thinking about this bigger-picture question of how these assumptions are actually driving the results, and how wrong we would have to be to actually flip our results. And that’s the way bias is often approached in ECA studies.

[00:23:46] Alexander: And when you talk about flipping results, I’ve seen people basically looking into two areas.

[00:23:52] Alexander: One is to bring the treatment effect to zero, and the other one is to get the confidence interval to touch zero. So these are, I think, the two limits.

[00:24:04] Deepa: And if there’s a different threshold, sometimes there’s a clinically significant effect, and we can look at that threshold too.

[00:24:10] Deepa: So yeah, there are a few different ways we could define that, but basically you wanna see how much bias it would take to really meet that threshold where the results aren’t holding anymore.
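One widely used summary for this kind of quantitative bias analysis is the E-value (VanderWeele and Ding): the minimum strength of association, on the risk-ratio scale, that an unmeasured confounder would need with both treatment and outcome to fully explain away the observed effect. A sketch with illustrative numbers, not results from any real study:

```python
import math

def e_value(rr):
    """E-value for an observed risk ratio (RRs below 1 are inverted first)."""
    if rr < 1:
        rr = 1 / rr
    return rr + math.sqrt(rr * (rr - 1))

# Illustrative numbers: observed RR of 2.0, with a confidence limit of 1.3
# on the side closer to the null.
print(round(e_value(2.0), 2))  # E-value for the point estimate: 3.41
print(round(e_value(1.3), 2))  # E-value for the confidence limit: 1.92
```

Computing it for both the point estimate and the confidence limit mirrors the two thresholds just discussed: how much bias would move the effect itself to the null, versus how much would make the confidence interval cross it.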

[00:24:20] Alexander: Yeah, awesome. Very good. So that was a great discussion about ECAs. Actually, we also submitted a workshop to the JSM for 2026.

[00:24:33] Alexander: Hopefully we get accepted to run this ECA workshop there. If we don’t get accepted, we’ll try other places. But I’m really looking forward to working more on this topic. And if you follow me on LinkedIn, you’ve probably seen that I’m posting quite a lot on that.

[00:24:51] Alexander: Follow us there and you’ll learn more about this. So when it’s about ECAs, is there any key learning that you would want listeners to take away?

[00:25:04] Deepa: I would say with ECAs, I think it’s easy to get caught up in details like, can we find this exact covariate, can we do this,

[00:25:13] Deepa: and small methodologic questions. But I do find with ECAs that zooming out in the first instance and thinking bigger picture, in terms of the biases you expect to encounter and what else could explain your disease outcomes, is really helpful, especially when speaking to some of our clients, where it’s not always necessary.

[00:25:32] Deepa: For example, if you can’t find the exact thing you’re looking for in the real world data, it might not be necessary. And really honing in on and understanding what we’re trying to achieve in terms of a causal effect and the target trial can be really helpful to move forward if you feel stuck with not finding the right data source or not being able to get exactly what’s written in your trial.

[00:25:52] Deepa: And yeah, I would say that’s emerged over my time working in this field.

[00:25:57] Alexander: Thanks so much. That’s a very good [00:26:00] insight. It’s very often more about robustness than about having every little tick box ticked. 

[00:26:07] Deepa: Yeah. 

[00:26:07] Alexander: Thanks a lot. As I said, follow Deepa and myself on LinkedIn and I’m pretty sure you’ll [00:26:15] learn much more about ECAs.

[00:26:18] Deepa: Thank you very much.

[00:26:24] Alexander: This show was created in association with PSI. Thanks to Reine and her team at VVS, who help the show in the background. And thank you for listening. Reach your potential, lead great science, and serve patients. Just be an effective statistician.

Join The Effective Statistician LinkedIn group

I want to help the community of statisticians, data scientists, programmers and other quantitative scientists to be more influential, innovative, and effective. I believe that as a community we can help our research, our regulatory and payer systems, and ultimately physicians and patients take better decisions based on better evidence.

I work to achieve a future in which everyone can access the right evidence in the right format at the right time to make sound decisions.

When my kids are sick, I want to have good evidence to discuss with the physician about the different therapy choices.

When my mother is sick, I want her to have access to the evidence and be able to understand it.

When I get sick, I want to find evidence that I can trust and that helps me to have meaningful discussions with my healthcare professionals.

I want to live in a world where the media reports correctly about medical evidence and in which society distinguishes between fake evidence and real evidence.

Let’s work together to achieve this.