In this keynote episode, Professor Sebastian Schneeweiss from Harvard Medical School shares groundbreaking insights from his extensive research into emulating randomized controlled trials (RCTs) using real-world data (RWD). Recorded live at The Effective Statistician Conference 2024, this talk explores whether non-randomized studies based on electronic health records and claims data can reach conclusions as reliable as those from traditional RCTs.
Prof. Schneeweiss, also Chief of the Division of Pharmacoepidemiology and Pharmacoeconomics at Brigham and Women’s Hospital, walks us through the RCT DUPLICATE project, a major FDA-funded initiative that evaluated whether regulatory decisions could be replicated through high-quality real-world evidence (RWE).
From the successes to the limitations—and everything in between—this episode is packed with lessons for statisticians, regulators, and pharmaceutical leaders interested in the future of data-driven healthcare decisions.
What You’ll Learn:
✔ The motivation behind emulating randomized trials using real-world data
✔ How claims and EHR data can support regulatory-grade evidence
✔ What makes a trial emulation good vs. suboptimal
✔ The role of adherence, measurement limitations, and data quality
✔ Use cases where RWE could expand indications or replace costly trials
✔ Key takeaways from the RCT DUPLICATE project and the “Benchmark-Calibrate-Extrapolate” strategy
Recommended For:
✔ Biostatisticians and epidemiologists
✔ Health tech innovators and data scientists
✔ Regulatory affairs and clinical development professionals
✔ Anyone involved in real-world data, RWE, or comparative effectiveness research
Resources & Links:
🔗 JAMA 2023 RCT DUPLICATE Publication
🔗 RCT DUPLICATE protocols and SAPs on ClinicalTrials.gov
🔗 FDA Framework for Real-World Evidence
🔗 The Effective Statistician Academy – I offer free and premium resources to help you become a more effective statistician.
🔗 Medical Data Leaders Community – Join my network of statisticians and data leaders to enhance your influencing skills.
🔗 My New Book: How to Be an Effective Statistician – Volume 1 – It’s packed with insights to help statisticians, data scientists, and quantitative professionals excel as leaders, collaborators, and change-makers in healthcare and medicine.
🔗 PSI (Statistical Community in Healthcare) – Access webinars, training, and networking opportunities.
If you’re working on evidence generation plans or preparing for joint clinical advice, this episode is packed with insights you don’t want to miss.
Join the Conversation:
Did you find this episode helpful? Share it with your colleagues and let me know your thoughts! Connect with me on LinkedIn and be part of the discussion.
Subscribe & Stay Updated:
Never miss an episode! Subscribe to The Effective Statistician on your favorite podcast platform and continue growing your influence as a statistician.
Alun Bedding
Executive and Team Coach | Leadership Consultant | Statistical Consultant
Alun is dedicated to helping professionals make significant shifts in their thinking on various topics. He understands that each individual is unique and tailors his approach to meet each person’s specific needs. Alun works with professionals at all stages of their careers, including neurodiverse ones.
He specializes in guiding new leaders through the challenges of their roles and believes that everyone has the potential to achieve their vision. Acting as a thinking partner, Alun empowers individuals to reach their goals.
The most common subjects Alun addresses include:
- Navigating the uncertainties of starting a new leadership position
- Managing career transitions
- Building confidence
- Prioritizing important tasks
- Enhancing teamwork
- Preparing for job applications and interviews
- Understanding the impact of climate change
With a background as a leader in statistics and the pharmaceutical industry, Alun brings firsthand experience to his coaching. He also works as a statistical consultant, focusing on early clinical development and pre-clinical drug discovery. His expertise lies in dose-finding, dose-escalation, adaptive designs, and Bayesian methods. Additionally, Alun supervises PhD students working on basket and platform trials.
If you’re ready to work with Alun and believe he can help you, contact him on LinkedIn or at alun@alunbeddingcoaching.com.

Sebastian Schneeweiss
Professor of Medicine and Epidemiology, Harvard Medical School | Department of Medicine Brigham and Women’s Hospital
Dr. Sebastian Schneeweiss is a physician-pharmacoepidemiologist and healthcare data scientist with over two decades of experience evaluating the effectiveness of biopharmaceuticals in clinical practice. He has developed and actively applies a causal inference pipeline to analyze complex healthcare databases using data-adaptive methods within rapid analysis cycles. His mission is to accelerate the understanding of drug effects and support value-based transactions in healthcare.
He holds a dual appointment as Professor of Medicine and Epidemiology at Harvard Medical School and serves as Chief of the Division of Pharmacoepidemiology and Pharmacoeconomics in the Department of Medicine at Brigham and Women’s Hospital. There, he leads a world-renowned research and training center composed of 30 faculty and 80 staff. Dr. Schneeweiss is also the co-founder of Aetion, Inc., a leading software-enabled healthcare analytics company. He has authored over 600 peer-reviewed publications, received numerous national and international awards, and has been elected as a fellow of multiple professional societies.

Transcript
Real-World Evidence vs. Randomized Trials: Can We Emulate Accuracy?
[00:00:00] Alexander: You are listening to the Effective Statistician podcast, the weekly podcast with Alexander Schacht and Benjamin Piske, designed to help you reach your potential, lead great science, and serve patients while having a great work-life balance.
[00:00:22] Alexander: In addition to our premium courses on the Effective Statistician Academy, we also have lots of free resources for you across all kinds of different topics within that academy. Head over to theeffectivestatistician.com and find the Academy and much more for you to become an effective statistician. I'm producing this podcast in association with PSI, a community dedicated to leading and promoting the use of statistics within the healthcare industry.
[00:00:59] Alexander: For the benefit of patients, join PSI today to further develop your statistical capabilities, with access to the ever-growing video-on-demand content library, free registration to all PSI webinars, and much, much more. Head over to the PSI website at psiweb.org to learn more about PSI activities and become a PSI member today.
[00:01:30] Richard: Let's start. Dear colleagues, participants of the Effective Statistician conference, I am really excited. We'll speak about the learnings from emulating randomized trials with data from clinical practice, which is a highly relevant topic, and certainly with emulations we can enhance our understanding of treatment effectiveness in diverse settings.
[00:01:49] Richard: His credentials speak for themselves as an expert. He's a Professor of Medicine and Epidemiology at Harvard Medical School and Chief of the Division of Pharmacoepidemiology and Pharmacoeconomics at the Brigham and Women's Hospital. He is also a PI of the FDA CDER-funded Sentinel Innovation Center.
[00:02:06] Richard: He is a voting consultant to the FDA Drug Safety and Risk Management Advisory Committee, and a co-founder of Aetion, Inc. His work centers on comparative effectiveness and safety of biopharmaceuticals, developing analytical methods for epidemiologic analysis using complex longitudinal healthcare databases.
[00:02:28] Richard: At Harvard, he teaches courses on database analytics for pharmacoepidemiology and effectiveness research in longitudinal healthcare databases. I'm really excited to hear his presentation. To the participants: you will see the Q&A panel in the Zoom channel. If you have any questions, please write them down there.
[00:02:50] Richard: If they're relevant to the slides that Sebastian is presenting at that particular moment, I will interrupt Sebastian and ask the question, so that we can get a natural flow of the discussion. Having said that, welcome Sebastian. Really looking forward to your presentation. The screen is yours.
[00:03:08] Sebastian: Thank you very much for this warm introduction.
[00:03:11] Sebastian: I'll share, and you should be seeing my slides now in full screen mode. Do you see it? Thank you. Great. Perfect. It's a pleasure to be here with all of you, and I'm happy to be interrupted for any questions. As Richard
[00:03:25] Sebastian: already said, here is my disclosure. Beyond the typical funding, I have equity in Aetion, Inc., a software-enabled healthcare analytics company.
[00:03:34] Sebastian: I want to touch on five points. We have some preliminaries, and I'll go pretty fast through the preliminaries. We have 45 minutes for me talking, and then we have 15 minutes of Q&A, or, as I said, in-between questions. Then: what's the motivation for trial emulation? For actual trial emulation, not hypothetical trial emulation.
[00:03:50] Sebastian: Then the learnings from a project that was FDA funded, RCT DUPLICATE; some more learnings from RCTs with regard to real world evidence or pharmacoepidemiology, which I use interchangeably; and then some considerations about next steps. What does this whole thing mean for real world evidence? This is a warmup here.
[00:04:09] Sebastian: A few slides. Real world evidence really took off with the 21st Century Cures Act in 2016, where Congress mandated FDA to think about what they want to do with non-randomized studies that are based on data from the routine operation of the healthcare system, which they called real world data. If you analyze real world data to understand the effectiveness of medical products, that is called real world evidence.
[00:04:36] Sebastian: They posted these terms quite publicly in this framework document in 2018, which is why I think, certainly from the North American perspective, we will be stuck with those terms, real world data and real world evidence, for a while, since they're embraced now by our key agency. And after five years, in 2021, FDA came out with a whole slew of guidance documents for industry.
[00:04:59] Sebastian: And as you all know, these are documents that are taken quite seriously. And then in 2023, there was another guidance on external control arms. So it's a very active space in the US, as well as in Europe and in Japan, and there are very similar movements in mainland China and around the world, really trying to understand how far you can push real world evidence results for decision making.
[00:05:23] Sebastian: FDA leadership came out with this publication in Pharmacoepidemiology and Drug Safety last year, outlining the use cases for real world evidence from a regulatory perspective. The HTA perspective, the health technology assessment agency perspective, is very similar to this, maybe a little bit broader than these use cases.
[00:05:42] Sebastian: I highly recommend pulling up this paper and studying it if you're in that space. Often we work with insurance claims data, and this is what the typical patient looks like in a claims database: you have a hospital stay, you have a visit with diagnostic information, prescription drug dispensings.
[00:06:02] Sebastian: We have a longitudinal record of all encounters with the professional healthcare system, together with diagnostic, procedural, and pharmacy dispensing information. In that data stream, you implement an epidemiologic study, a causal study, which often is a cohort study with a cohort entry date. You look forward for the follow-up, and you look backwards in order to assess the patient's health status.
[00:06:26] Sebastian: Now, what you also have is electronic health records, which you then link to the claims data backbone. We get the continuity and the chronology, the certainty about the chronology, from claims data, particularly in the US, and the granularity from the electronic health record database. The US is quite a fragmented healthcare system, and you never know,
[00:06:49] Sebastian: when you look at the electronic health records, as much granular information as they have, whether they have the complete information, which is why the combination of claims data with electronic health record databases or other registry data is probably what we really want to have in the long run.
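To make the cohort design he describes concrete, here is a minimal sketch in Python/pandas of assembling a new-user cohort from claims extracts: a cohort entry date at first dispensing, a 180-day backward window for baseline covariates, and forward follow-up. All file, table, and column names are hypothetical illustrations, not the RCT DUPLICATE code or any vendor's data model.

```python
import pandas as pd

# Hypothetical claims extracts; names are illustrative only.
rx = pd.read_csv("dispensings.csv", parse_dates=["dispense_date"])
dx = pd.read_csv("diagnoses.csv", parse_dates=["service_date"])

# Cohort entry date: the first dispensing of the study drug per patient.
entry = (rx[rx["drug"] == "study_drug"]
         .sort_values("dispense_date")
         .groupby("patient_id", as_index=False).first()
         .rename(columns={"dispense_date": "entry_date"}))

# Look backwards: assess baseline health status in a 180-day window
# before cohort entry, one indicator column per diagnosis code.
base = dx.merge(entry[["patient_id", "entry_date"]], on="patient_id")
in_window = ((base["service_date"] < base["entry_date"]) &
             (base["service_date"] >= base["entry_date"] - pd.Timedelta(days=180)))
covariates = (base[in_window]
              .assign(flag=1)
              .pivot_table(index="patient_id", columns="dx_code",
                           values="flag", aggfunc="max", fill_value=0))

# Looking forward from entry_date, one would then ascertain outcomes and
# censoring (disenrollment, treatment discontinuation) during follow-up.
cohort = entry.join(covariates, on="patient_id")
print(cohort.shape)
```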
[00:07:07] Sebastian: Here's an example of an actual trial emulation. The RE-LY trial is on dabigatran, a direct-acting oral anticoagulant, with regard to stroke prevention in patients with atrial fibrillation. The trial found a reduction in the risk of stroke by 34% compared to warfarin. Years later, we emulated this trial in spirit and similarly found a 25% reduction in the risk of stroke.
[00:07:34] Sebastian: The incidence rates are lower than in the trial, which is of course because on the trial side you have enrichment strategies to enrich in patients with high risk of stroke, while on the right side you have things as they play out in clinical practice. Already you see how real world evidence analysis can complement the findings from randomized controlled trials.
[00:07:54] Sebastian: Another example, this one in the space of hypertension. Telmisartan, an angiotensin receptor blocker, was compared to ramipril, an ACE inhibitor, in the ONTARGET trial, trying to demonstrate non-inferiority of those two agents when it comes to cardiovascular events. The trial showed telmisartan non-inferior to ramipril, with a hazard ratio of 1.0.
[00:08:15] Sebastian: Then you see on the right side the emulation of that randomized trial in claims data, and we find similar findings, equally supporting non-inferiority. Now, we pushed this field a little bit harder. We knew that the CAROLINA trial was ongoing. CAROLINA was comparing linagliptin versus glimepiride, two agents to treat diabetes.
[00:08:40] Sebastian: We were emulating the design as closely as we could in claims data, on the left side of this figure. The FDA motivated the CAROLINA trial because they were concerned about an increase in the risk of cardiovascular events, of MACE events. We concluded, six months before the CAROLINA trial was completed, that there's no difference in the risk of cardiovascular events between linagliptin and glimepiride.
[00:09:04] Sebastian: However, we found that linagliptin has a much lower rate of hypoglycemic events. The CAROLINA trial was unveiled at the American Diabetes Association in San Francisco. It showed exactly the same: there's no difference in MACE, and there's a substantial benefit of linagliptin versus glimepiride when it comes to avoiding hypoglycemic events.
[00:09:24] Sebastian: This is a prediction of an ongoing randomized controlled trial. Here is an emulation of a trial of a point exposure in patients with acute myocardial infarction: thrombus aspiration, a procedure in your coronaries, a one-time intervention. It is the TASTE trial, finding no difference between the intervention and no intervention.
[00:09:45] Sebastian: And the emulation by Anthony Matthews from Karolinska found exactly the same, using the SWEDEHEART registry in Sweden. A tougher challenge is when you compare users of a drug versus non-users: the risk of dying or MI in patients using a beta blocker after MI versus not using a beta blocker after MI. And you see some seemingly protective effect of this strategy, but the randomized trial didn't find any difference here.
[00:10:13] Sebastian: That is the challenge when you have a non-user comparison group. We talk about this all the time. Contemplate a randomized trial. The workhorse is the parallel-group randomized controlled trial, where you have some sort of washout or run-in phase before randomization, and then patients are randomized into the treatment group (exposed group, experimental group, what have you) versus a comparator, and sometimes that can be a placebo for regulatory purposes.
[00:10:40] Sebastian: The emulation thereof is the cohort study. You can emulate exclusions. You have S for selection rather than R for randomization, and then you have the two arms, exposed versus comparator. Clearly we need to worry about the lack of baseline randomization, but we also need to worry about measurement issues, as we work with secondary data rather than with primary data as the randomized trial does.
[00:11:04] Sebastian: We also have complex treatment strategies, where there are more bifurcations of the treatment strategy. We are not going to talk about that; we talk about simple treatment strategies, on the left side. And on the right side here, you see this little table where you write down the trial, and this is not a hypothetical target trial in our examples.
[00:11:22] Sebastian: These are actual trials, so we know exactly the eligibility criteria, treatment strategies, and so on, and in the right column you write down how you plan to emulate this trial, this actual trial, in the data that you have available. And you will realize very quickly that some of the measurements are a real challenge with secondary data, and sometimes some of the design aspects are a challenge and cannot really be replicated in clinical practice. We will talk about that.
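As a toy illustration of that two-column exercise (the trial's protocol element on the left, the planned emulation on the right), here is a small sketch; the entries are hypothetical placeholders, not any specific DUPLICATE protocol.

```python
# Illustrative trial-emulation worksheet. All entries are hypothetical.
emulation_plan = {
    "eligibility": ("adults with T2D, HbA1c 7-10%",
                    "T2D diagnosis codes; HbA1c absent in claims, so proxy or accept the difference"),
    "treatment":   ("randomized to drug A vs drug B",
                    "new users of A vs new users of B (active comparator)"),
    "assignment":  ("randomization",
                    "1:1 propensity score matching on >100 baseline covariates"),
    "outcome":     ("adjudicated MACE",
                    "claims-based MACE algorithm with known validity"),
    "follow-up":   ("randomization until end of study",
                    "cohort entry until outcome, disenrollment, or treatment stop"),
}
for element, (rct, rwe) in emulation_plan.items():
    print(f"{element:12s} | trial: {rct:35s} | emulation: {rwe}")
```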
[00:11:52] Sebastian: Now, here is some food for thought. Bill Shadish, a statistician, published in JASA, one of these journals with more Greek than Latin letters, but this is a very applied paper, and I really encourage you to read it. He basically showed that if you have perfect measurement of exposure, outcome, and confounding factors, you get the same findings as the randomized controlled trial, whether you randomize or whether you don't. He randomized students
[00:12:21] Sebastian: to a randomized trial versus a non-randomized study. Within the randomized trial, of course, there was further randomization into math training versus vocabulary training; in the non-randomized study, there was self-selection into math training or vocabulary training. And as I've said already, after some adjustment for the observed baseline characteristics,
[00:12:39] Sebastian: that is, the math skill and the vocabulary skill before the intervention, the findings were exactly the same. So this goes toward: data quality really matters. Now let's go ahead and think about what motivates us to do trial emulation. And when I say trial emulation, a trial had been done or is ongoing, and we are emulating that one.
[00:13:02] Sebastian: The elephant in the room when you talk about non-randomized database studies is always: can confounding be fully controlled? Are we suffering from some misclassification problem that will bias our findings? In order to deflate any of these arguments, you need to compare to the true causal relationship of the treatment to the outcome.
[00:13:23] Sebastian: Where do you find that true causal treatment effect? Can we ever know the true causal treatment effect in a given population? The best thing we have is comparing to an actual randomized controlled trial, because a well-done randomized trial lends itself to causal interpretation, but you have to make sure it's in the same population as the database study we are comparing against.
[00:13:47] Sebastian: If there happens to be a well-conducted RCT that is identical in the design and in the measurements to a given real world evidence study, would we not hope to see the same finding? Confirming the similarity or dissimilarity in findings would be of great value to understand whether and when real world evidence studies can come to causal conclusions, like a randomized controlled trial.
[00:14:15] Sebastian: Alright, so we are clearly not saying that every real world evidence study should be calibrated against a randomized trial, because that would be stupid, right? The trial gives us the answer already. This is really only about benchmarking: seeing how well we perform doing these non-randomized studies with secondary healthcare data from clinical practice.
[00:14:36] Sebastian: Now we are comparing RCTs versus real world evidence and want to see whether there is a difference or whether they're similar. When you look at comparing real world evidence study to real world evidence study, there's actually some variability. Shirley Wang published a paper where she identified 150 published database studies
[00:14:55] Sebastian: in top journals, using four or five databases that we also had available to us, and she was rerunning the same instruction set that was given in the publication and in any online materials. In theory, she should get exactly the same results, because she used the same data and the instructions from reading the methods section.
[00:15:15] Sebastian: But it turns out the correlation is only 0.85. You see these red dots; these are outliers. For the ten worst outliers, when you look at the published report, there was a lack of clarity and completeness in telling us what exactly the authors had done. I don't think they did anything wrong, or that we did anything wrong.
[00:15:34] Sebastian: It was just a miscommunication. So there's room for improvement in letting our audience know what exactly we have done, because right now there's no guarantee that we get the same findings given the way real world evidence studies are reported. So there's room for improvement here, but there's variability if you compare real world evidence to real world evidence.
[00:15:54] Sebastian: Now, if you compare RCT to RCT, there's also variability. Remember Aduhelm, aducanumab from Biogen, the medication to reduce cognitive decline in patients with Alzheimer's. They had designed two randomized trials, Study 301 and Study 302, and the way I read them, they were exactly the same design, exactly the same eligibility criteria, everything exactly the same.
[00:16:17] Sebastian: Nevertheless, one study found no difference in the MMSE (there's absolutely no difference here between the two treatment groups), but Study 302 did show a substantial improvement, by 18 percentage points on the MMSE scale, whatever that exactly means. How can it be that equally designed randomized trials get quite different findings?
[00:16:36] Sebastian: I always say that we want to emulate well-conducted randomized controlled trials, and sometimes things go wrong. TOPCAT is an example of a randomized trial where the PI of the study, Marc Pfeffer, was quite open about this: the study sites in Russia and in Georgia at that point in time actually enrolled or identified patients who didn't have heart failure.
[00:16:59] Sebastian: The probability of having the outcome was way lower (these patients didn't have heart failure) compared to all the other sites of the TOPCAT trial. Things can go wrong even in randomized clinical trials. If we find inequalities in the results of the RCT versus real world evidence: is the trial really representative of the real world study?
[00:17:18] Sebastian: How variable are the RCT results? Did we fail to emulate the trial? Do we have a different population? Do we have different treatment patterns, different dose escalation, and things like that? What about the follow-up duration? The adherence or persistence on treatment might be much lower in clinical practice than in the randomized controlled trial.
[00:17:40] Sebastian: But what we really want to know is: is there any bias operating within the real world evidence study? Is there residual confounding? Is differential surveillance acting? That is what we want to learn, but it's extremely difficult to disentangle the bias from the emulation failure and from the variability that is inherent to any single RCT.
[00:18:00] Sebastian: So you see the challenges already. Now let's dive into the RCT DUPLICATE project, where we actually did randomized trial emulations: a family of studies aimed at understanding and improving the validity of real world evidence studies for regulatory decision making. The most important one, where the learnings that we hoped to get were, was FDA funded. FDA wanted us to study whether, had we replaced a randomized controlled trial
[00:18:33] Sebastian: with a single, similarly designed real world evidence study, we would have come to the same regulatory decision. As you can see, it's a very regulatory perspective, because it was funded by the FDA, and FDA researchers were involved in doing this study. In order to study this, we identified 30 RCTs that were designed to be submitted to regulators.
[00:18:58] Sebastian: We also identified seven ongoing randomized trials where we didn't know the findings. We developed a process for how to do that, and then we wanted to study the factors that likely predict the success of the emulation or not. Now, some key aspects of how we did this. We used three US claims databases, screened hundreds of randomized controlled trials, and rejected most of them, usually because of measurement issues.
[00:19:27] Sebastian: The 30 plus seven trials that we identified are not a random sample. They're highly selected, in the sense that we thought we could do well in emulating the measurements and design of these trials. No claim of representativeness whatsoever. We did one-to-one propensity score matching, usually with more than a hundred pre-exposure variables.
[00:19:50] Sebastian: The RCTs estimate the average treatment effect. Since we did one-to-one propensity score matching, we estimate the average treatment effect in the treated; we don't think that's a big issue. The RCTs estimated intention-to-treat, and none of them reported per-protocol. Now, in real world evidence studies, because of the issue of non-adherence or short persistence, we opted to go for a per-protocol
[00:20:15] Sebastian: kind of contrast. Assume that in the randomized trial, adherence is almost perfect. And I should say, because these were trials submitted to the regulatory agencies, and everybody knows that if adherence is a problem these trials will not be accepted, the adherence was extraordinarily high in all of these trials.
[00:20:36] Sebastian: If adherence is perfect, then the ITT estimate is exactly the same as the per-protocol estimate. So we assumed that in the randomized controlled trial we also see the per-protocol estimate, which is the same as the ITT, and hence it's fair to compare against the per-protocol analysis in the real world evidence study. We have several predefined binary agreement statistics.
[00:21:00] Sebastian: The big learning here was: no matter what single agreement statistic you pick, you will be unhappy. It's always a set of agreement statistics, not just a single one that you might be interested in, but you can read this all up in these publications; a small numerical sketch of these metrics follows after this passage. We used the Aetion software platform to implement all these 37 trial emulations,
[00:21:20] Sebastian: for scalability, reproducibility, and clarity. We select the patients in a reproducible way, we pick the comparison groups, we select the treatment strategy, and we select the risk-adjustment methodology (in this case, one-to-one propensity score matching rather than weighting), and we do the feasibility diagnostics
[00:21:40] Sebastian: before we then post the statistical analysis plan to clinicaltrials.gov. This is all publicly available for all 37 trial emulations. The platform produces the report of the actual implementation: we press the button and everything is run. Everything was predefined here, in close collaboration with the FDA, who reviewed the protocols before we deposited them at clinicaltrials.gov.
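Here is the promised sketch of those predefined agreement statistics. The three metrics reported in RCT DUPLICATE (regulatory agreement, estimate agreement, and a standardized difference between effect estimates) can be computed from hazard ratios and 95% confidence intervals alone. This is a simplified rendering with made-up numbers, not the project's actual code; in particular, the published regulatory-agreement rule handles non-inferiority trials separately, which is abbreviated away here.

```python
import numpy as np

def agreement_metrics(hr_rct, ci_rct, hr_rwe, ci_rwe):
    """Agreement between an RCT result and its emulation, computed on the
    log hazard ratio scale from point estimates and 95% CIs."""
    se = lambda ci: (np.log(ci[1]) - np.log(ci[0])) / (2 * 1.96)
    excludes_null = lambda ci: ci[0] > 1 or ci[1] < 1
    # Regulatory agreement (simplified): same direction, same significance.
    regulatory = (np.sign(np.log(hr_rct)) == np.sign(np.log(hr_rwe))
                  and excludes_null(ci_rct) == excludes_null(ci_rwe))
    # Estimate agreement: the RWE estimate falls inside the RCT's 95% CI.
    estimate = ci_rct[0] <= hr_rwe <= ci_rct[1]
    # Standardized difference between the two log hazard ratios.
    std_diff = ((np.log(hr_rwe) - np.log(hr_rct))
                / np.hypot(se(ci_rct), se(ci_rwe)))
    return regulatory, estimate, std_diff

# Made-up numbers for illustration only.
print(agreement_metrics(0.66, (0.53, 0.82), 0.75, (0.64, 0.88)))
```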
[00:22:04] Sebastian: The process was: the whole trial selection on the left side here (I spoke about this briefly), then develop a statistical analysis plan, do a feasibility analysis, register, then do the actual analysis, and out comes a report. As soon as we had the beginnings of a statistical analysis plan, we moved over to the platform in order to implement the study and do feasibility counts.
[00:22:27] Sebastian: Now, what is interesting is that we shared the analytic platform, which is an online platform, with the FDA, so that the FDA could make their regulatory considerations. They can make changes to the analysis. They can say: in hindsight, we would have defined the outcome slightly differently, or done treatment weighting rather than one-to-one propensity score matching.
[00:22:48] Sebastian: We enabled the regulator to work with the underlying data. In a randomized controlled trial, when you have an effectiveness claim, you will need to submit your patient-level data to the FDA so that they can do re-analyses, and the European Medicines Agency is doing that now as well. As real world evidence producers, I think we need to follow that and also be able to present the data to the regulator for any re-analysis.
[00:23:17] Sebastian: But why don't I pause here. Any questions so far? This was a lot about preliminaries and design before we dive into the findings. I see no questions in the Q&A. Alright, perfect. Thank you. Then I'll just continue, but please do interrupt if you feel I'm going too fast here. Now, these are trials one through 11.
[00:23:39] Sebastian: There are a lot of trials for the management of diabetes. What you see here is the column with the RCT findings: some protective effects here, some beneficial effects, and some non-inferiority studies. You see the real world evidence effect estimate, the adjusted one; that is what we compared against the RCT.
[00:23:58] Sebastian: You see the standardized difference between the real world evidence estimate and the RCT estimate, and you see our agreement statistics. On the very right side, you see a qualitative assessment according to seven criteria: whether we thought we did well in the emulation of the design and of the measurements. This assessment is independent of the closeness of the effect estimate.
[00:24:24] Sebastian: This is really just considering how well we did in emulating the design and the measurements. Sometimes we had difficulties, we realized later on, in emulating the comparison group when there was a placebo comparison. For example, we have the antiplatelet trials TRITON, PLATO, and ISAR-REACT 5. We had patients with atrial fibrillation,
[00:24:46] Sebastian: with the anticoagulant trials ARISTOTLE, RE-LY, and ROCKET AF, and anticoagulants for the treatment of VTE: the EINSTEIN program, RE-COVER, AMPLIFY, RECORD 1. We had two trials in the hypertension space. You see the emulations here where we were quite happy with the emulation of the design and the measurements. For the fracture and bisphosphonate trials, chronic kidney disease, heart failure with PARADIGM-HF, and asthma and COPD, for a variety of reasons (and they're all different reasons, and we're going to talk about that),
[00:25:17] Sebastian: we felt less sure, after we had actually implemented the real world evidence study, that we had done well; we might have a problem in emulating the actual randomized controlled trial, for a variety of reasons. These are the seven ongoing trials, where you see, also in the diabetes space, CAROLINA, GRADE, and SOUL; in prostate cancer, PRONOUNCE, which is a safety trial with a cardiovascular endpoint; and in the treatment of VTE, sorry, COBRA-AF,
[00:25:43] Sebastian: that's in the treatment of atrial fibrillation, not VTE. We have put our predictions forward. Except for the SOUL trial, still ongoing on our end, three of the trials have been completed in the interim, and we did well in two trials, not so well in the GRADE trial. The other trials are still ongoing. Alright, I think there's a question in the Q&A.
[00:26:04] Sebastian: Actually, I see it over here: "There are usually differences between clinical practice treatment and the RCT's tightly controlled treatments. Would you use modeling or other methods to account for this?" Excellent question, Cornelia. We did not. Basically, we also did not consider things like up-titration and down-titration.
[00:26:22] Sebastian: We could not emulate that in the real world evidence study. So that is clearly one of those factors that would make us think that the emulation was not that strong, if that were to happen. Absolutely. This is how the results typically look. This is the example of the LEADER trial of liraglutide, one of the GLP-1 receptor agonists, against placebo to reduce MACE.
[00:26:41] Sebastian: So this was categorized as an emulation that was not so well done, because it's hard to emulate a placebo. What we used as a placebo, however, was DPP-4 inhibitors, which we knew from several randomized controlled trials have no relationship with the MACE endpoint. So they have the same indication (treating diabetes, reducing the blood sugar), but they act as a placebo with regard to the endpoint of interest, which is MACE.
[00:27:08] Sebastian: The two upper lines here: black is placebo, blue is liraglutide, showing a reduction in MACE in the trial. In the trial emulation, on a lower level, the incidence rate of MACE is lower in clinical practice, but you see also that the differential is pretty much the same between liraglutide, the red line here, in clinical practice, versus DPP-4, which serves as our placebo when it comes to the MACE endpoint, right?
[00:27:31] Sebastian: So you see this frequently in our emulations. Overall, when we put this all together (these are 32 trials, in the JAMA publication of 2023), on the horizontal axis are the findings from our emulations on a log scale, and on the vertical axis the trial findings. You would expect these dots to be all exactly on the diagonal, right, or at least clustering around the diagonal.
[00:27:56] Sebastian: What do we see? Overall, the correlation is 0.8 across all these dots. Now, this wouldn't satisfy me if this were my final interpretation of what we have done. I would be unhappy with that and wouldn't be sure whether I could conclude with any conviction that the real world evidence studies are doing just fine. I think I want to see a stronger correlation here.
[00:28:17] Sebastian: Let's dissect this and find out what the differences are. There is clearly a sex distribution difference, although the inclusion period was exactly the same. We know women are underrepresented in randomized controlled trials, and of course we see exactly that. In clinical practice, we have roughly 50-50,
[00:28:33] Sebastian: while it's quite different in many of the randomized controlled trials. Although we had exactly the same age eligibility criteria, our study participants were much older on average than the trial participants. Again, it's the self-selection into trials and all these selection mechanisms that are not fully spelled out.
[00:28:54] Sebastian: Generally speaking, when doing all that, I learned more about randomized controlled trials than I ever wanted to know. For example, did you know that most trials have an exclusion criterion for when the study investigator thinks that a patient won't survive the next 12 months? That makes perfect sense for competing risk issues and things like that.
[00:29:12] Sebastian: I would want to do that myself, but how do you operationalize that? How do you operationalize the prediction that a patient might die in the next 12 months? How do I operationalize this in my emulation study using claims or EHR data? There's a lot that is less precise in this whole evidence generation enterprise that we are in than we would like, and I think we need to acknowledge that.
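One conceivable way to operationalize such an exclusion (purely a hypothetical sketch, not what RCT DUPLICATE did) is to fit a claims-based 12-month mortality model on historical data and exclude patients above some predicted-risk threshold:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# X: baseline claims covariates (comorbidity flags, age bands, frailty
# indicators) from a historical cohort; y: death within 12 months of
# their index date. Toy random stand-in data below.
rng = np.random.default_rng(0)
X_hist = rng.integers(0, 2, size=(5000, 20)).astype(float)
y_hist = rng.integers(0, 2, size=5000)

model = LogisticRegression(max_iter=1000).fit(X_hist, y_hist)

# Apply to the emulation cohort: drop patients whose predicted 12-month
# mortality risk exceeds a threshold, mimicking the trial's clinical judgment.
X_cohort = rng.integers(0, 2, size=(1000, 20)).astype(float)
risk = model.predict_proba(X_cohort)[:, 1]
keep = risk < 0.30  # the threshold is an arbitrary judgment call
print(f"retained {keep.sum()} of {len(keep)} patients")
```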
[00:29:38] Sebastian: Now, here are two examples: ROCKET AF and PARADIGM-HF. On the left half of the table is the RCT, on the right side is the real world evidence, and what you should focus on is the incidence rate of the event. For ROCKET, I think it was stroke: 1.7 and 2.2 in the exposed and the comparator, and in the real world evidence quite similar, 1.5 and 2.4, right?
[00:29:58] Sebastian: So the incidence rates are quite similar. We had a very specific definition of the stroke event, and we call it green; this is a good emulation. Now for PARADIGM-HF, where the outcome is heart failure hospitalization, look at the incidence rates: 21 versus 26 in the trial, yet 46 and 44 in the emulation. Double the incidence rate.
[00:30:18] Sebastian: Clearly, there were patients sneaking through our eligibility criteria who were hospitalized, but not because of a decompensation of heart failure. They had heart failure as a comorbidity, but it was not the main reason for the hospitalization. We marked that as a suboptimal emulation with regard to the measurement.
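The diagnostic he is applying here (compare the arms' incidence rates between trial and emulation before even looking at effect estimates) is simple arithmetic once person-time is tallied; the numbers below are made up for illustration and do not correspond to ROCKET AF or PARADIGM-HF:

```python
# Incidence rate per 100 person-years = 100 * events / person-years.
def ir_per_100py(events, person_years):
    return 100 * events / person_years

# Rates close to the trial's suggest the outcome definition captures the
# trial endpoint; a roughly doubled rate, as with heart failure
# hospitalization, flags outcome misclassification in the emulation.
print(ir_per_100py(events=150, person_years=7000))  # ~2.1 per 100 py
print(ir_per_100py(events=310, person_years=7000))  # ~4.4 per 100 py, inflated
```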
[00:30:39] Sebastian: So you get the idea about the emulation quality. There are other aspects that led us to think that there are certain emulation challenges, and in the end we think it's not the age-sex differences, the difference in the distribution of age and sex; we did a re-weighting exercise for age and sex, and there was no difference there.
[00:31:00] Sebastian: It wasn't confounding either, at least not in the majority. There were other things that were problematic. For example, if the treatment starts in the hospital: you should know that in insurance claims data, you don't see the exact treatment in the hospital, because hospitals are paid with a prospective payment. They get a lump sum for treating a certain condition, and it's up to them to organize the treatment.
[00:31:23] Sebastian: The insurance doesn't know what exactly was done when it comes to treatment. Now, this is an example of a patient with a myocardial infarction. They get hospitalized and receive an antiplatelet agent right away. They get stented or whatever, and they get an antiplatelet, either clopidogrel or prasugrel, as part of this trial; it was PLATO or TRITON, I think it was PLATO.
[00:31:44] Sebastian: These are the trial findings. When you magnify what's happening at the very beginning: all the action is at the beginning. At the very beginning you see the treatment effect, right? And after that, the Kaplan-Meier survival plots are actually parallel. There's very little additional action. There's a little bit maybe, right?
[00:32:02] Sebastian: But overall, most of the action is at the very beginning, so it's all happening in the hospital: the treatment choice is happening in the hospital, as well as the early events, the reinfarctions, happening in the same hospital. And we don't see that in our data. Hence, we see no beneficial effect, no big difference really between the two treatments, while the trials do see a difference.
[00:32:24] Sebastian: Okay, so this is a design problem or a measurement problem: we cannot emulate the measurements because they're happening in the hospital. Another thing is the persistence of treatment. In clinical practice it is much worse than in randomized trials, and I do admire these randomized controlled trials, how they keep patients on treatment for such prolonged periods of time.
[00:32:47] Sebastian: The example here is the HORIZON pivotal trial. This is zoledronic acid against placebo in patients with osteoporosis, and the outcome is hip fracture, which we thought we could measure quite well. It's a major event, obviously. And zoledronic acid, I should say, is an intravenous infusion given once a year.
[00:33:06] Sebastian: And what we saw, in the bottom half here of this Kaplan-Meier plot, is that after 12 months, nobody (except for, I don't know, five or six people) came back for a second infusion. There was no persistence on this treatment. It was a one-time thing. We gave them six more months of follow-up time, because these bisphosphonates linger around and it takes a while for them to wear off.
[00:33:29] Sebastian: We basically stopped our follow-up time after 18 months, because we felt nobody was exposed anymore, alright? But while the trial keeps on going, we are comparing the first 18 months against an experience over 36 months. It actually turns out that in the first 18 months, we get exactly the same point estimate as the first 18 months of the trial.
[00:33:52] Sebastian: However, there was treatment effect modification over time. The treatment effect strengthened the longer you were on the treatment. You see the Kaplan-Meier curves diverging here further, and overall you see this 41% reduction in hip fracture, while we see only a 25% reduction. Okay, so here we have shorter follow-up in clinical practice, paired
[00:34:16] Sebastian: with time-varying treatment effects (the treatment is getting stronger with longer duration), and that is biasing our overall findings. Not biasing, really: it makes them different, because you're studying something different. Like on the left side, you're studying in clinical practice the effect of ticagrelor versus clopidogrel after you have survived your index hospitalization. It's a different study question, because we cannot really emulate the trial.
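The follow-up mismatch can be probed directly by re-estimating the effect on a window matching the emulation's persistence. The toy simulation below (using the lifelines package) builds in no effect during the first 18 months and a benefit afterwards, so the 18-month-truncated hazard ratio and the full 36-month hazard ratio diverge, as in the zoledronic acid example; none of the numbers correspond to the HORIZON data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 4000
treated = rng.integers(0, 2, n)
# Event times: identical hazard up to month 18, then events in the treated
# arm are pushed later (a benefit that only emerges with long follow-up).
t = rng.exponential(36, n)
t = np.where((treated == 1) & (t > 18), 18 + (t - 18) * 1.8, t)
df = pd.DataFrame({"time": np.minimum(t, 36),
                   "event": (t <= 36).astype(int),
                   "treated": treated})

full = CoxPHFitter().fit(df, duration_col="time", event_col="event")

# Truncate follow-up at 18 months, as the emulation effectively had to.
df18 = df.assign(event=lambda d: ((d["event"] == 1) & (d["time"] <= 18)).astype(int),
                 time=lambda d: d["time"].clip(upper=18))
trunc = CoxPHFitter().fit(df18, duration_col="time", event_col="event")

print("36-month HR:", round(full.hazard_ratios_["treated"], 2))
print("18-month HR:", round(trunc.hazard_ratios_["treated"], 2))
```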
[00:34:37] Sebastian: The third example is the asthma and COPD trials, and Samy Suissa has published on this for the last 20 years already, highlighting these issues; we just didn't read his papers. Unfortunately, we thought we could emulate these trials, but we can't.
[00:34:55] Sebastian: What happens in COPD, for example: often we are interested in comparing triple therapy versus dual therapy. These are inhaler medications where you combine inhaled corticosteroids plus LAMA plus LABA (all treatments to reduce inflammation and keep the bronchi open) versus dual therapy, in this example a LAMA-LABA combination.
[00:35:18] Sebastian: That's a perfectly valid and important question. These trials had a run-in period where they observed the baseline therapy, and now here comes the kicker: in this baseline therapy, it turns out that almost 40% of subjects that were randomized were already on triple therapy. Now, if you get randomized to the dual therapy, what does that mean for those 40%?
[00:35:39] Sebastian: For those 40%, you take away the ICS, the inhaled corticosteroid, which of course makes the treatment group, the triple therapy, look much better, because you have a flawed take-away trial. Taking something away that works makes the other group look better. In clinical practice, we don't take away what seems to work, so we could not emulate this study design.
[00:36:00] Sebastian: We found no difference between dual and triple therapy in our study, while the take-away trials found this 25% improvement with triple versus dual therapy. I don't think this estimate is relevant, because it's a take-away trial. That's not something that we do in clinical practice.
[00:36:17] Sebastian: Alright. And again, this design makes it impossible for us to emulate these trials. Now look at the scatter plot again; the dots are color coded, red dots and blue dots. The red dots are those trials where we had difficulty emulating them, for all the reasons I just laid out.
[00:36:36] Sebastian: The blue dots are the trials where we felt we did a good job in emulating the trials. When you now look at the correlation coefficient in the blue dots, you see that it jumps up to 0.94. In my mind, now we are in business. I think that is the correlation that makes this very interesting with regard to the credibility of real world evidence studies being able to replicate trial findings.
[00:36:59] Sebastian: Now, here's one data point that puzzles me. This is a blue dot, one we thought we emulated well, yet it is far away from the diagonal. What is happening here? This is actually two studies, the EINSTEIN-DVT study and the EINSTEIN-PE study. Both of them we thought we emulated well. This is rivaroxaban, a direct oral anticoagulant, versus warfarin plus enoxaparin, which is a low-molecular-weight heparin, for the treatment of VTE.
[00:37:26] Sebastian: In EINSTEIN-DVT, patients were included because of a deep vein thrombosis, and EINSTEIN-PE included patients who had a more progressed DVT, where the thrombus went up to the lung and caused a pulmonary embolism. EINSTEIN-DVT we emulated well, we thought, and the findings are quite similar here, 0.68 versus 0.75.
[00:37:48] Sebastian: But for EINSTEIN-PE, look at this: the trial actually showed a harmful effect, strictly speaking, a 12% increase in the risk of VTE in these patients, with some uncertainty, of course, but it certainly didn't show a benefit, as EINSTEIN-DVT did. And we continue to see the benefit, and you now worry: what is going on with EINSTEIN-PE?
[00:38:11] Sebastian: We had several conversations with the people who did the trial. It's exactly the same setup, the same clinical sites used to do the trial. This is one of those fluke things that I cannot explain. We looked at a meta-analysis of other trials in the same space, and they didn't see any treatment effect heterogeneity by whether these
[00:38:29] Sebastian: patients had DVT or PE. Rivaroxaban always worked in the same way, as did the other DOAC that they used, apixaban. There was no treatment effect heterogeneity between DVT and PE patients. Our conclusion is (and I totally understand if you disagree with this) that there's something wrong with this trial; for whatever reason, it is just a fluke finding.
[00:38:48] Sebastian: I would not give that much credence to this data point here. But it was all predefined, and we are not taking it back. This is what we found. I'm highly skeptical that this blue dot is really up here. It most likely is much further down here, closer to the green, for the reasons that I just gave you. With all those findings, we conclude that overall, if you have good measurements and you can emulate the design and the measurements of the trial, you can expect to get the same finding as the randomized controlled trial in your emulation in claims and EHR data.
[00:39:20] Sebastian: Alright. We didn't learn anything new, because the trials already showed us what the finding would be. What is the application? Really, it comes down to studies broadening the indication of already marketed medications. When you want to get a supplemental indication: you have your phase two, phase three program,
[00:39:39] Sebastian: you file your NDA with the FDA (this is the North American perspective here, but it's similar in Europe). You enter the market, and after a while you decide: I want to apply for a supplemental indication, for a slightly different population or for a different endpoint. Rather than lowering HbA1c for an antidiabetic medication, I want to show that I reduce cardiovascular events.
[00:40:03] Sebastian: So you run another randomized controlled trial, and if that's successful, you submit your supplemental NDA. That's the classical approach to broadening indications. What you can contemplate instead is, once you're in the marketplace, you try to emulate your phase three trials in design and measurements. Your drug is now out in clinical practice and used, and you use whatever database (a claims database, as we have done with RCT DUPLICATE) and emulate the trial.
[00:40:30] Sebastian: And if you find the same finding, or close enough to the phase three trial findings, that's a huge confidence booster. Basically, you have demonstrated now that in your data and analytics setup, you can come to the same conclusion as the trial that you have emulated. That is a huge confidence booster. And now, in a second stage, what you do is expand the indication.
[00:40:55] Sebastian: You look at a different endpoint, you look at a broader population. You now include women of childbearing age, for example, who were excluded; you include older adults. You're broadening your population, okay? And then you submit the sNDA. Now, this strategy, which we call Benchmark-Calibrate-Extrapolate, is something we are exploring and trying to validate.
[00:41:16] Sebastian: We have a contract with the FDA (they seem quite interested in this; I don't want to speak for them) where we do exactly this, the blue pathway, but we are picking examples where in real life there actually was a randomized controlled trial. What you basically do is: you have your two estimates, and θ₂ remains unobserved in a real application.
[00:41:36] Sebastian: And you have some sort of divergence: θ₁*, the emulation, is never exactly the same as θ₁, and ξ₁ here is the divergence parameter. You take your real world evidence estimate from your second stage and use ξ₁ as the divergence parameter, rather than the unknown ξ₂. Then you can do sensitivity analyses around that.
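A hedged reconstruction of that two-stage logic in symbols (my notation, since the spoken Greek letters did not survive transcription well):

```latex
% Stage 1 (benchmark): the completed RCT gives \hat\theta_1, its emulation
% gives \hat\theta_1^*; the observed divergence on the log scale is
\xi_1 = \log\hat\theta_1^* - \log\hat\theta_1 .
% Stage 2 (extrapolate): only the emulation estimate \hat\theta_2^* exists;
% the trial that would give \theta_2 is never run. Calibrate with \xi_1 in
% place of the unknown \xi_2, and vary \xi around \xi_1 in sensitivity analyses:
\log\hat\theta_2 \approx \log\hat\theta_2^* - \xi_1 .
```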
[00:41:57] Sebastian: It's the benchmarking, the extrapolation (the green arrow), and the calibration via the ξ parameter here, in order to broaden an indication. That's a real use case for pharma and regulators. Here's a worked example from Julien Kirchgesner, who was a fellow with us and is now a professor in Paris running a GI department: emulating two randomized trials, the SUCCESS trial and the SONIC trial, in the IBD space, comparing infliximab versus thiopurine monotherapy, and he was quite successful. In stage two, he studied what he really wanted to study,
[00:42:30] Sebastian: which was the combination therapy of vedolizumab together with a thiopurine versus monotherapy. That is something that is unlikely to ever be done in a randomized controlled trial. There's no economic interest from pharma, and often the public funders are also not dealing out enough money to really do these trials.
[00:42:47] Sebastian: So far, we have not seen any of these trials. This is an application of this two-stage approach, Benchmark-Calibrate-Extrapolate. Other learnings from RCTs, very briefly, as we are running out of time here. I'm very skeptical about line-by-line programming; its reproducibility is highly error-prone. You rather want to focus on the study parameters that you can verbalize, and use those study parameters in a pre-programmed process.
[00:43:14] Sebastian: This is what the FDA Sentinel program does to run a cohort study, for example. Use platforms like the Aetion platform, where you have a very granular depiction of what is going on; very transparent. You have a study registration, something that randomized trials do all the time; I think real world evidence studies should do the same if they have effectiveness claims. You have audit trails, and you might want to share the data.
[00:43:35] Sebastian: So we are learning from RCTs: why shouldn't we do the same as the randomized controlled trials, particularly when we have effectiveness claims? STaRT-RWE is a guidance document from academics with representation from the regulators. This also led to a harmonized protocol template for writing better study protocols for these types of studies.
[00:43:57] Sebastian: The literature is littered with nonsensical real world evidence studies (we are all aware of that), and reviewers are confused and feel uneasy about discerning between good and bad real world evidence studies, and we need to help them. When you look at the commentaries from FDA, for example, publicly available commentaries on how to interpret real world evidence,
[00:44:18] Sebastian: I think they do a good job. They look for flaws, the typical biases in these non-randomized analyses of secondary data. Real world evidence is not a shortcut to success. There are evaluation tools like ROBINS-I that very much follow the target trial framework and are very helpful. And we published a study for reviewers familiar with evaluating randomized controlled trials:
[00:44:39] Sebastian: what is the added thing that you need to consider when you evaluate these secondary data analyses? And together with FDA, we came up with a principled framework for how to plan and conduct these trial emulations. Overall, real world evidence will have more influence, increasing influence, in regulatory decision making and HTA decision making as we follow the principles of causal inference, as we reduce human error and increase transparency (we know how to do that; we can learn it from randomized controlled trials), and as the data sources improve, which they do rapidly. Every week there's a new startup company with a new data asset, and often enough it's not what was promised, but the situation is only getting better when it comes to the data sources.
[00:45:25] Sebastian: With that upbeat message for everybody, I will pause here, and I'm happy to take any questions.
[00:45:32] Richard: Thank you. Thank you very much. There's one question in the chat, from Cornelia: could you comment on pragmatic randomized trials?
[00:45:40] Sebastian: There are pragmatic elements in randomized controlled trials. The ones that we evaluated, submitted to the FDA,
[00:45:46] Sebastian: are highly controlled trials, where you have blinding, adherence, adjudication committees, all the bells and whistles to control the environment to get to an efficacy estimate. Now, you can loosen these things and introduce pragmatic elements. You can let adherence play out as it does in real-world practice.
[00:46:04] Sebastian: You can do without adjudication committees and rely on hospital records to assess the outcome. In that sense, pragmatic randomized trials sit in between highly controlled randomized trials and real world evidence studies, and it turns out that with our emulations, we do way better in emulating pragmatic randomized controlled trials than highly controlled ones.
[00:46:25] Sebastian: The main reason is that in pragmatic randomized controlled trials, most of the time you don't have a placebo but an active comparator, and that makes it much easier to emulate.
[00:46:34] Richard: Thank you, Sebastian. I don't see any open questions. I actually have two and will start with one. From the trials that I saw that you emulated, I missed oncology trials, right?
[00:46:46] Richard: There was one prostate cancer trial, but that was focused on the cardiovascular safety endpoint, I think. Is there a specific reason, or is it something you'd like to do in the future?
[00:46:57] Sebastian: Richard, we were very pragmatic in order to do this. I think we were the first ones doing this at a large scale, these emulation things, and we had these databases available and thought, okay, cancer is not well captured in these databases, so it was a pragmatic decision not to do cancer.
[00:47:12] Sebastian: We have a program ongoing now, which is FDA funded, where we do exactly the same for oncology, where we now have access to five oncology-specialized EHR systems with much more granular oncology data, and where we evaluate oncology trials across multiple data sources and across multiple cancers. There is a bit of literature out there about trial emulation in oncology, and there's good stuff out there already.
[00:47:36] Richard: There is a question from Lewis, so I will postpone my second question: "In DUPLICATE, you used only propensity score matching. What about the use of other causal inference methods, structural models, G-methods, etc., when emulating clinical trials?"
[00:47:51] Sebastian: Yeah. Lewis, two important points in this question.
[00:47:53] Sebastian: One is the structural models and G-methods, which are for when you have time-varying treatment, for complex treatment strategies. But we had fairly simple treatment strategies: you get put on one drug or the other and take it as long as you can. So we didn't need to use structural models; the treatment strategies that we compared were that simple.
[00:48:11] Sebastian: You could argue whether we should investigate informative censoring with inverse probability of censoring weighting. As we have shown multiple times, it doesn't make any difference whether you do this or not. Then the propensity score matching, which is the reason why we estimate an ATT, the average treatment effect in the treated, rather than an ATE:
[00:48:30] Sebastian: in real world evidence and real world data, inverse probability of treatment weighting, which is the basis for the G-methods, sometimes has trouble when the data source is not perfect. Sometimes you have these extremely high weights in the extremes of the propensity score. We have seen this and demonstrated it multiple times: because of misclassification,
[00:48:49] Sebastian: clearly, these patients shouldn't be in these extremes. In IPTW you can mitigate that by truncating weights, but then the whole interpretation of an ATE is no longer valid, because your population has changed. Or you can do propensity score matching right away, which is much more transparent and clearer in its conduct, and appreciated by regulators.
[00:49:06] Sebastian: So there are statistical reasons as well as very pragmatic reasons for that choice.
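His contrast between exploding IPTW weights and 1:1 matching is easy to see in a toy simulation; the greedy nearest-neighbor matcher and the 0.01 caliper below are illustrative choices, not the Aetion implementation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000
x = rng.normal(size=(n, 5))
treat = rng.binomial(1, 1 / (1 + np.exp(-2.5 * x[:, 0])))  # strong channeling

ps = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]

# IPTW: weights explode in the tails of the propensity score distribution,
# so a handful of (possibly misclassified) patients can dominate the estimate.
w = np.where(treat == 1, 1 / ps, 1 / (1 - ps))
print("largest IPTW weight:", round(w.max(), 1))

# Greedy 1:1 nearest-neighbor matching on the propensity score (caliper 0.01)
# targets the ATT and simply drops patients without a comparable match.
treated_idx = np.where(treat == 1)[0]
controls = list(np.where(treat == 0)[0])
pairs = []
for i in treated_idx:
    if not controls:
        break
    j = min(controls, key=lambda c: abs(ps[c] - ps[i]))
    if abs(ps[j] - ps[i]) < 0.01:
        pairs.append((i, j))
        controls.remove(j)
print(f"matched {len(pairs)} of {len(treated_idx)} treated patients")
```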
[00:49:12] Richard: There are several questions now. The next one is anonymous: is the data of your RCT DUPLICATE study available to look at? I'm interested in your SAP. Yes, you did mention that it was publicly available, so maybe you can share where that is.
[00:49:31] Sebastian: Exactly.
[00:49:31] Sebastian: So you go to the JAMA paper. You see the registration numbers for clinicaltrials.gov, and there you'll find the statistical analysis plans. You see all the boring details there, as well as the feasibility analyses. It's all there.
[00:49:46] Richard: Thank you, Sebastian. And the last question, from Lewis: how do you take into account things like colliders in matching?
[00:49:55] Sebastian: Lewis's specialty, causal inference. That's a great question. The variables that we selected were selected by investigators; this was not an automatic confounding adjustment procedure. Although we are doing this right now, and we find very similar findings whether you select automatically or have the investigator do it, we basically rely on the investigator not to adjust for colliders.
[00:50:14] Sebastian: The other comment that I want to make about colliders is that it's extremely hard in our field to find really meaningful and strong colliders that would really make a difference. And when you're not sure, even if you draw a DAG, whether something is a collider or a confounder, I would, when in doubt, rather always adjust for it.
[00:50:34] Sebastian: So in that sense, we try to take care of colliders, and if there is some fluke collider in there, I'm not worried about it, for those reasons.
[00:50:41] Richard: Okay, thank you very much. It is four o'clock on the dot, so with that I would like to close this. There is another question, a couple of other questions actually. Lewis agrees with your answer, so that's good.
[00:50:56] Richard: We will have to stop now to allow people to go to other sessions, so thank you very much. I'm sorry if your question has not been answered. I know that Sebastian can see the questions online, so he can maybe answer them there.
[00:51:08] Sebastian: Just shoot me an email. Whoever has other questions, just send me an email.
[00:51:11] Richard: Okay. Thank you everyone, and have a great rest of the conference. Thank you, Sebastian, for a very comprehensive, great presentation. Thank you so much.
[00:51:19] Sebastian: Thank you, Richard, for handling this so well.
[00:51:25] Alexander: This show was created in association with PSI. Thanks to Reine and her team at VVS, who help with the show in the background, and thank you for listening. Reach your potential, lead great science, and serve patients. Just be an effective statistician.
Join The Effective Statistician LinkedIn group
This group was set up to help each other become more effective statisticians. We'll run challenges in this group, e.g. around writing abstracts for conferences or other projects. I'll also post further content into this group.
I want to help the community of statisticians, data scientists, programmers and other quantitative scientists to be more influential, innovative, and effective. I believe that as a community we can help our research, our regulatory and payer systems, and ultimately physicians and patients take better decisions based on better evidence.
I work to achieve a future in which everyone can access the right evidence in the right format at the right time to make sound decisions.
When my kids are sick, I want to have good evidence to discuss with the physician about the different therapy choices.
When my mother is sick, I want her to have access to the evidence and be able to understand it.
When I get sick, I want to find evidence that I can trust and that helps me to have meaningful discussions with my healthcare professionals.
I want to live in a world, where the media reports correctly about medical evidence and in which society distinguishes between fake evidence and real evidence.
Let’s work together to achieve this.
