In this episode, I’m joined once again by my friend and frequent guest, Kaspar Rufibach, to talk about a topic that’s been around for decades but is gaining fresh attention thanks to the new ICH E20 draft guideline: adaptive designs in confirmatory clinical trials.

Kaspar and I discuss why and when we should consider adapting a clinical trial, what kinds of adaptations are statistically valid and meaningful in a regulatory context, and why these designs—despite their efficiency—are still not used as often as they could be.

We also dive into the statistical foundations behind adaptive designs, such as p-value combination methods and meta-analytic thinking, and explore how adaptive approaches can help us make faster and smarter decisions in drug development.

Why You Should Listen:

If you’ve ever wondered what adaptive designs really are, when they make sense, and how ICH E20 will influence our work as statisticians, this episode will give you a clear, practical overview.
You’ll learn:

✔ Why adaptive designs often save valuable time—and what organizational barriers keep teams from using them.

✔ What types of adaptations are possible and truly useful in confirmatory settings.

✔ How combining evidence across study stages works in principle.

Episode Highlights:

01:28 – Catching up with Kaspar
Kaspar returns to the podcast to dive into the topic of adaptive clinical trials.

02:34 – Why adapt?
We discuss the main motivation behind adapting a trial and when it’s worth the effort.

03:00 – Group-sequential designs
A quick look back at where adaptive concepts began and why they remain relevant.

06:03 – Practical adaptations
We touch on examples of adaptations that can make studies more flexible and efficient.

10:00 – Planning challenges
Kaspar shares how real-world constraints shape decisions around adaptive design.

15:06 – Why not more often?
We reflect on the cultural and operational reasons these designs are still less common than expected.

25:30 – ICH E20
An overview of what the new guideline covers and why statisticians should pay attention.

27:13 – Looking ahead
I share upcoming opportunities to continue this discussion at industry meetings in Basel.

29:13 – Closing thoughts
A reminder about the value of good planning and purposeful adaptation in clinical trials.

Resources and Links:

  • ICH E20 (Draft): Adaptive Designs for Clinical Trials
  • ICH E9(R1): Addendum on Estimands and Sensitivity Analysis in Clinical Trials
  • EFSPI Regulatory Workshop & ISCB Conference (Basel)

🔗 The Effective Statistician Academy – I offer free and premium resources to help you become a more effective statistician.

🔗 Medical Data Leaders Community – Join my network of statisticians and data leaders to enhance your influencing skills.

🔗 My New Book: How to Be an Effective Statistician – Volume 1 – It’s packed with insights to help statisticians, data scientists, and quantitative professionals excel as leaders, collaborators, and change-makers in healthcare and medicine.

🔗 PSI (Statistical Community in Healthcare) – Access webinars, training, and networking opportunities.

Join the Conversation:
Did you find this episode helpful? Share it with your colleagues and let me know your thoughts! Connect with me on LinkedIn and be part of the discussion.

Subscribe & Stay Updated:
Never miss an episode! Subscribe to The Effective Statistician on your favorite podcast platform and continue growing your influence as a statistician.

Kaspar Rufibach

Expert Biostatistician at Merck

Kaspar is an Expert Biostatistician at Merck, based in Basel. Before that, he was an Expert Statistical Scientist in Roche’s Methods, Collaboration, and Outreach group.

He does methodological research, provides consulting to statisticians and broader project teams, gives biostatistics training for statisticians and non-statisticians both in-house and externally, mentors students, and interacts with external partners in industry, regulatory agencies, and the academic community through various working groups and collaborations.

He co-founded and co-leads the European special interest group “Estimands in Oncology” (sponsored by PSI and EFSPI, and also recognized as an ASA scientific working group within the ASA Biopharmaceutical Section), which currently has 39 members representing 23 companies, 3 continents, and several health authorities. The group works on various topics around estimands in oncology.

Kaspar’s research interests are methods to optimize study designs, advanced survival analysis, probability of success, estimands and causal inference, estimation of treatment effects in subgroups, and general nonparametric statistics. Before joining Roche, Kaspar received training and worked as a statistician at the Universities of Bern, Stanford, and Zurich.

More on the oncology estimand WG: http://www.oncoestimand.org
More on Kaspar: http://www.kasparrufibach.ch

Transcript

[00:00:00] Alexander: You are listening to the Effective Statistician Podcast, the weekly podcast with Alexander Schacht and Benjamin Piske, designed to help you reach your potential, lead great science, and serve patients while having a great work-life balance.

[00:00:22] Alexander: In addition to our premium courses on the Effective Statistician Academy, we also have lots of free resources for you across all kinds of different topics within that academy. Head over to theeffectivestatistician.com to find the Academy and much more to help you become an effective statistician. I’m producing this podcast in association with PSI, a community dedicated to leading and promoting the use of statistics within the health industry for the benefit of patients.

[00:01:01] Alexander: Join PSI today to further develop your statistical capabilities, with access to the ever-growing video-on-demand content library, free registration for all PSI webinars, and much, much more. Head over to the PSI website at psiweb.org to learn more about PSI activities and to become a PSI member today.

[00:01:28] Alexander: Welcome to another episode of the Effective Statistician. This is a special one, because I haven’t recorded a new episode for quite a while, and today I’m once again with one of my favorite guests: Kaspar. Hi Kaspar, how are you doing?

[00:01:44] Kaspar: Hi, Alexander. Thanks for having me again, and welcome back.

[00:01:48] Kaspar: More than happy to be on the podcast again, and very happy to talk with you again.

[00:01:53] Alexander: And we are talking about a topic that is not new, but very relevant because of some things that came out of ICH, though we’ll talk about that at the end: adaptive studies, adaptive clinical trials, and especially adaptive confirmatory clinical trials.

[00:02:16] Alexander: I have actually never worked on one, so I’m super happy to have an expert like Kaspar to talk with me about it. Why do we actually want to adapt clinical trials? So, basically, change your study while it’s running.

[00:02:34] Kaspar: Yes. I think this is a very good question. Why should we bother?

[00:02:38] Kaspar: Because there is a risk: it makes the study more challenging to design and also to run. So there needs to be a good justification for why we should actually adapt. The way I see it, maybe starting with a little bit of history: in the seventies and eighties, drug developers developed group-sequential designs.

[00:02:59] Kaspar: This is a design where you have a primary endpoint and you want to run the trial in a confirmatory fashion, that means with type one error protection, but you don’t want to just run the trial in one stage. Say, to detect a hazard ratio of 0.75 with 80% power and an alpha of 5%, you need 380 events.

[00:03:19] Kaspar: One option would just be to wait until you have 380 events. However, maybe you want to look halfway, say after 50% of these 380 events, to check whether your hypothesis test already rejects the null, that is, whether you are already statistically significant. When you do that, you look twice: once after 50% of these events,

[00:03:44] Kaspar: and then, if you don’t reject, again at the end, at the final analysis. The fact that you look twice is something you have to correct for. We all know these group-sequential designs: you can distribute your 5% of alpha differently between the interim and the final analysis, either saying I want a similar alpha level at both looks, which is what we call a Pocock boundary, or distributing it more unevenly, so that you save almost all of the alpha for the final analysis and have a very high hurdle at the interim.

[00:04:15] Kaspar: That’s what we call an O’Brien-Fleming boundary, and there are other boundary functions. So if you think about it, this is an adaptive design, because at the interim analysis you have the choice to either continue the trial as planned or adapt the sample size down to zero, which would correspond to stopping the trial.
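
To make these numbers concrete, here is a minimal sketch, not from the episode and assuming scipy and a 50% information fraction purely for illustration: it reproduces the roughly 380 events via Schoenfeld’s approximation and then finds two-look Pocock and O’Brien-Fleming boundaries numerically.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import brentq

# Schoenfeld's approximation: events needed to detect HR 0.75
# with 80% power at a two-sided 5% alpha.
alpha, power, hr = 0.05, 0.80, 0.75
z_sum = norm.ppf(1 - alpha / 2) + norm.ppf(power)
events = 4 * z_sum**2 / np.log(hr) ** 2
print(f"required events: {int(np.ceil(events))}")  # -> 380

# Two looks at 50% and 100% information: the two Z-statistics are
# bivariate normal with correlation sqrt(0.5).
mvn = multivariate_normal(mean=[0.0, 0.0],
                          cov=[[1.0, np.sqrt(0.5)], [np.sqrt(0.5), 1.0]])

def reject_prob(b1, b2):
    """One-sided probability of crossing either boundary under the null."""
    return 1.0 - mvn.cdf(np.array([b1, b2]))

one_sided = alpha / 2

# Pocock: the same critical value at both looks.
c_poc = brentq(lambda c: reject_prob(c, c) - one_sided, 1.0, 5.0)

# O'Brien-Fleming: boundary c / sqrt(information fraction).
c_obf = brentq(lambda c: reject_prob(c / np.sqrt(0.5), c) - one_sided, 1.0, 5.0)

print(f"Pocock: {c_poc:.3f} at both looks")  # ~2.18
print(f"O'Brien-Fleming: {c_obf / np.sqrt(0.5):.3f} at interim, {c_obf:.3f} at final")  # ~2.80 / ~1.98
```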

[00:04:34] Kaspar: Now, people started to run these trials, I think in the eighties. In order to maintain the integrity of the trial, you assemble an IDMC, an independent data monitoring committee, who looks at the data at the interim analysis, in our template trial after half of the events. And maybe they see something: they see, for example, that there is a subpopulation in which the drug works, but not in the full population.

[00:04:59] Kaspar: And I think, on an ad hoc basis, people then started to, say, over-enroll in that subpopulation. So they adapted beyond just either stopping the trial or continuing as planned. For a while that went on, and then at some point people started to think: oh, this actually does not maintain the type one error anymore.

[00:05:17] Kaspar: So the theoretical development in statistics caught up, and methods were developed for more advanced adaptations than just setting the sample size to zero, yes or no. And I’ll let you comment, and then maybe we can discuss what adaptations are actually possible and meaningful in a drug development context.

[00:05:41] Alexander: So, looking into subgroups is surely a very interesting area, especially if your treatment effect might be different across subgroups, or maybe your safety profile differs across subgroups, and you want to take that into account. Another topic could be that maybe you can

[00:06:03] Alexander: change your primary endpoint. What do you think about that one?

[00:06:08] Kaspar: You’re jumping the gun a little bit here. Let’s say there are two types of adaptations: one type is what is statistically and mathematically possible, and a subset of that is what is possible or meaningful in a drug development context.

[00:06:25] Kaspar: Changing the endpoint, and we are talking about confirmatory clinical trials, so phase three trials, halfway through, while mathematically and statistically possible, is, I think, a challenge in a drug development and regulatory context, because I can well put myself in the shoes of a regulator.

[00:06:48] Kaspar: If a sponsor shows up and says, oh, we want to change the endpoint halfway through, I would ask: are you really ready to go into phase three if this is your intention? So theoretically that’s possible, but I think there are other adaptations that are more relevant in a drug development context. I already described this potential adaptation at an interim analysis: deciding on the testing strategy for the final analysis, whether to test just in a subpopulation, in the all-comers, or in both.

[00:07:20] Kaspar: It’s quite obvious that this introduces multiplicity, and you need to account for that. The other key adaptation that we have in clinical trials is when you start with a control arm and, say, two experimental arms, two different treatments or two different doses of the same treatment, and halfway through you want to drop one of these experimental arms and just continue with the better one.

[00:07:45] Kaspar: But then, at the final analysis, you want to compare all the data that you have recruited over the entire course of the trial. You have looked into the data at the interim, so you need to account for the fact that you have dropped an arm. You cannot just take the arm that survived the interim and compare it to control without making any adjustments.
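
A small simulation sketch, my own illustration rather than anything from the episode, shows why such adjustments are needed: with two experimental arms that are truly no better than control (normal endpoints and the sample sizes are arbitrary choices here), picking the interim winner and then testing it naively on all the data inflates the one-sided type one error well above the nominal 2.5%.

```python
import numpy as np

rng = np.random.default_rng(2024)
n_half, n_sim = 100, 20000  # patients per arm per stage; simulation runs
crit = 1.96                 # naive one-sided 2.5% critical value
hits = 0
for _ in range(n_sim):
    # Control and two experimental arms, all with zero true effect.
    ctrl = rng.normal(size=2 * n_half)
    arms = rng.normal(size=(2, 2 * n_half))
    # Pick the arm with the better first-stage mean ...
    winner = arms[np.argmax(arms[:, :n_half].mean(axis=1))]
    # ... then (naively) test it against control on ALL the data.
    z = (winner.mean() - ctrl.mean()) / np.sqrt(2 / (2 * n_half))
    hits += z > crit
print(f"empirical type one error: {hits / n_sim:.3f}")  # ~0.04, clearly above 0.025
```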

[00:08:06] Kaspar: For me, these are two key adaptations: enrichment and treatment arm selection. And then group-sequential designs, setting the sample size to zero, yes or no, are another important adaptation. You can also argue that, for example, you want to start with a one-to-one randomization ratio and then, if you meet some criteria at the interim, maybe you start to enrich

[00:08:35] Kaspar: the treatment arm to a two-to-one ratio; that is also an adaptation. And finally, if you have a continuous endpoint, say a mean difference, you have to specify or assume a variance at the design stage. Now, you may be wrong with that assumption on the variance; say you assumed the variance to be smaller than what you actually observe at the interim.

[00:09:03] Kaspar: Then you might want to inflate the sample size after the interim to account for the fact that you have more variability than what you assumed at the design stage. This is what we call sample size reassessment. For me, this whole field of adaptive designs, that’s about it, in terms of what is meaningful in drug development.
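
As a rough sketch of the arithmetic behind such a reassessment (my own illustration with made-up numbers, assuming a two-sample z-approximation): the per-arm sample size scales with the variance, so a larger standard deviation observed at the interim directly inflates the required n.

```python
import math
from scipy.stats import norm

def n_per_arm(delta, sd, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-sample z-test on a mean difference."""
    z_sum = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z_sum * sd / delta) ** 2

planned = n_per_arm(delta=5.0, sd=12.0)  # standard deviation assumed at design
revised = n_per_arm(delta=5.0, sd=15.0)  # larger spread observed at the interim
print(math.ceil(planned), "->", math.ceil(revised))  # 91 -> 142 per arm
```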

[00:09:20] Kaspar: You can theoretically add more things, like changing your primary endpoint; that’s theoretically possible, and I think I have seen one example where this has been done. But it’s not the first thing I would think of when talking about adaptations.

[00:09:33] Alexander: Completely agree. I think it becomes really challenging to understand, interpret, and communicate the outcomes of a study where you change your primary endpoint somewhere in the middle. I think dose selection is also a very interesting case. That goes into the seamless phase two/three design, doesn’t it?

[00:09:55] Kaspar: Now you’re touching on one of my pet peeves a little bit, because we keep using these terms, seamless phase two/three, very often. I’m not so supportive of using these terms, because I think they can generate a lot of confusion outside of biostatistics; people are not very clear about what they mean.

[00:10:13] Kaspar: But if we leave terminology aside, you can think of it that way. In my previous company, there was this template example where this was pulled off. The setup was basically that you had an approved drug, a targeted drug, in some indication, and that same molecular target was present in another cancer.

[00:10:41] Kaspar: So this lends itself to saying: we should try that drug in this other cancer as well, of course. But we don’t want to rerun the whole development program from the beginning and rehash everything, because we have a lot of safety data. We just need to show efficacy in a phase three setup, and we want to do that in an efficient way. Still, because it’s a different cancer,

[00:11:02] Kaspar: we’re not sure about the dose. So you have an approved drug in one indication, you have the same molecular target in another indication, and you want to start phase three directly with two doses. Here this made a lot of sense. So you say, okay, let’s start with two doses and define an interim decision criterion, which in this specific case was actually based on safety, efficacy, and PK/PD data.

[00:11:28] Kaspar: You can define whatever criteria you want, drop an arm at the interim, and then compare the arm that makes it to the end to the control; that’s then my pathway to potential approval. The trial failed, but the design was very innovative at the time and very fit for purpose.

[00:11:45] Alexander: You can’t judge a trial design by the outcome.

[00:11:49] Kaspar: I completely agree with what you say. The value of a design should be independent of whether the drug works or not; what matters is whether the design leads to an efficient decision.

[00:12:02] Kaspar: And the point is to make the right decision; failure of the drug doesn’t say anything about the quality of the design. But very often innovative designs sink with the drug if the trial is not successful, which is very unfortunate. But yeah, this is just how things work.

[00:12:16] Alexander: Yeah. So let’s look a little bit into the methodology part. Let’s make it very simple: we have just two stages, and we test once and then test again later. What does the methodology look like?

[00:12:37] Kaspar: I think this was the big observation of Peter Bauer.

[00:12:42] Kaspar: And I think that was in the nineties. Because if you say, I run a trial and I want to adapt at an interim analysis halfway through, what you ultimately have is a first piece, then you adapt something, say you drop an arm, and then you have a second piece. Ultimately, what you have is two independent studies.

[00:13:11] Kaspar: And this is reminiscent of a meta-analysis. So what adaptive designs basically do is borrow methods from meta-analysis. One of these methods is simply p-value combination: you compute the p-value in the first stage, you compute the p-value in the second stage, and then you combine these p-values to give you an overall p-value for the trial over both stages.

[00:13:39] Kaspar: Under the null, these two p-values are uniformly distributed, and based on that you can derive a global test statistic. Of course, things can get a little bit more tricky, but ultimately this is how I think about it. In a meta-analysis, you combine inference from separate trials;

[00:13:59] Kaspar: in an adaptive design, you combine inference from several stages of a trial. The most prominent p-value combination function is maybe just to multiply the two p-values and then calibrate properly, such that at the end of the day you have a valid hypothesis test. You then just need to think about exactly what hypothesis you reject.

[00:14:22] Kaspar: These are the little nitty-gritty details you need to think about, but conceptually, this is what happens.
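
As a concrete illustration of “multiply and properly calibrate”: this is a sketch of Fisher’s product criterion, the combination function used by Bauer and Köhne; the one-sided 2.5% level and the example p-values are chosen here purely for illustration. Under the null, the stage-wise p-values are independent Uniform(0,1), so minus twice the sum of their logs follows a chi-square distribution with 4 degrees of freedom.

```python
from math import exp
from scipy.stats import chi2

alpha = 0.025  # one-sided overall level, chosen for illustration

# Under H0, -2*(ln p1 + ln p2) ~ chi-square(4), so the product criterion
# rejects the overall null iff p1 * p2 <= exp(-chi2_quantile(1 - alpha; 4) / 2).
c_alpha = exp(-chi2.ppf(1 - alpha, df=4) / 2)
print(f"reject overall H0 iff p1 * p2 <= {c_alpha:.5f}")  # ~0.00381

# Two moderate stage-wise p-values, neither significant alone, reject jointly:
p1, p2 = 0.08, 0.04
print(p1 * p2, p1 * p2 <= c_alpha)  # 0.0032 True
```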

[00:14:29] Alexander: To sum it up: there’s lots of opportunity. We can create much more efficient designs to make more efficient decisions. And the methodology is

[00:14:47] Alexander: truly not new; although there are always some new advances here and there, the framework and a lot of these methods are long established and well understood. Why don’t we see more of these adaptations?

[00:15:06] Kaspar: This is a question that a lot of people ask, and I think quite a few papers have been written about it, even running formal polls.

[00:15:12] Kaspar: Why are these designs not used more often? I don’t have a definite answer; maybe I have a few hints and a few pointers. I think what people sometimes underestimate is this: people sometimes mistake these designs for, let’s start a trial,

[00:15:29] Kaspar: let’s look at the data halfway through, and then think about how we change the trial. This is not how it works. These adaptations, so group-sequential designs, enrichment, treatment arm selection, and sample size reassessment, are all pre-planned at the design stage. You say very precisely what you’re going to do, and you have an interim analysis at which you pick one of a handful of options.

[00:15:57] Kaspar: I think this is sometimes underestimated, and it is a bit more complicated to implement than a one-stage design, where you can just compute the sample size and go with it. So that’s one aspect. Another aspect is that it may need a bit more upfront planning. For that trial I was describing, with the dose selection in another indication, internally the comparison was made:

[00:16:28] Kaspar: we want to answer this drug development question, we want to get pivotal evidence in a new indication, and we propose an adaptive design with dose selection. An alternative would be to first run a randomized phase two between the two competing doses, finish that, pick the dose that turned out to be better, and then run yet another trial, a phase three, comparing it against the comparator.

[00:16:57] Kaspar: That’s much easier to plan. I don’t recall the numbers, but say you need three months to plan the randomized phase two and three months to plan the randomized phase three; that gives you six months. If you plan this, what you would call the seamless phase two/three adaptive trial, maybe that takes you 12 months.

[00:17:20] Kaspar: So that’s 12 months against six months, but the time you save with the seamless trial is maybe one year. So you save six months at the end, and actually, in this specific case, it was about one year. The issue is that very often in companies, teams are incentivized through metrics that don’t help.

[00:17:49] Kaspar: One of these metrics, for example, is first patient in, and if teams have to rush to include the first patient, of course they try not to have too complicated a design, even if that design would save you one or two years until you get approval. So the metric one should actually use is: when is the clinical cutoff date at which I can generate pivotal regulatory evidence?

[00:18:18] Kaspar: So in some sense, teams are not incentivized to plan more complicated trials. That’s another aspect. Yet another aspect is hesitancy of decision makers, because again, take these two competing approaches: a seamless phase two/three versus a randomized phase two followed by a randomized phase three.

[00:18:43] Kaspar: In the randomized phase two scenario, decision makers can look at the data after the randomized phase two, make a decision, and still share their opinion and their view and have a say. In a seamless trial, you basically outsource that decision to an IDMC. Say you save two years, but you have to put the money on the table not just for a phase two:

[00:19:09] Kaspar: a seamless phase two/three trial is basically a phase three trial. That’s why I don’t like this seamless phase two/three label: it’s just a pivotal trial. So you have to put the money on the table for a pivotal trial, and you are blinded to which dose the IDMC picks.

[00:19:24] Kaspar: That was actually the scenario in this trial, and I think this is sometimes challenging for teams to get through governance bodies in companies, because it takes a lot of courage to do it.

[00:19:35] Alexander: I think another factor is prior experience in the therapeutic area. If you can point to a similar drug or a similar study that was run in the same therapeutic area, then it becomes much easier.

[00:19:52] Alexander: However, if all the others have taken, let’s say, the standard approach, it’s even harder to convince people, because you can’t say, oh, they have also done it that way. And that always helps with anything that is innovative. Although we can’t really say adaptive designs are that innovative; as we said, the ideas are 30, 40, 50 years old.

[00:20:16] Kaspar: Yeah, but this is a general theme: groupthink will not lead to innovation, of course. I think these factors contribute to the perception that adaptive designs are not used very often. On the other hand, if you count group-sequential designs as adaptive, in many therapeutic areas group-sequential designs are absolutely standard by now.

[00:20:44] Kaspar: And the other aspect is, maybe we sometimes overestimate or exaggerate the opportunity to run an adaptive design. Maybe your treatment is clear, so you don’t have to pick between two. Your population is clear, so you don’t have to enrich. There is simply no need for adaptation. We should also not think of this as something that will fit every drug development question we have.

[00:21:12] Kaspar: It fits just a subset of drug development questions. Granted, if we applied it whenever we have such an opportunity, it would still be used much more. Another thing is that, as a drug developer and as a statistician in drug development, it takes a little bit of experience to spot these opportunities, and sometimes I feel

[00:21:34] Kaspar: statisticians should be a bit better trained to spot these opportunities in team discussions. When a team is struggling to define the dose at some point, then as a statistician you should raise your hand and say: hey, maybe we can put everything into the pivotal trial. And sometimes I feel

[00:21:52] Kaspar: you have to have seen a few development programs already, you have to be a bit senior, to spot these opportunities, and then at the same time also pull them off: to have the technical background, or at least the support, to implement it in a way that is foolproof for regulatory purposes.

[00:22:09] Kaspar: Maybe that’s another aspect that contributes to them not being used so often. 

[00:22:13] Alexander: Yeah. Most of those listening here will be statisticians, and if you don’t feel like you’re an expert in this area, maybe just have a couple of discussions throughout the development of the study outline about whether this is something to consider, or whether you should go with, let’s say, a straightforward solution.

[00:22:36] Kaspar: One other aspect, I think, gains a little bit in importance on the regulatory side as well, and it’s something that maybe even we as statisticians sometimes underappreciate a little. Assume you plan an adaptive trial. Of course you plan it around the primary endpoint, say overall survival in oncology, but then you have a whole bunch of secondary endpoints.

[00:23:01] Kaspar: First of all, the inference for the primary endpoint may need to be adjusted. The fact that you adapt doesn’t just affect your type one error; it also affects your estimation. If you want an unbiased estimate, you might need to account for the fact that you adapted. That’s one aspect, and the other aspect is:

[00:23:23] Kaspar: in theory, you would have to pull the adaptation through for all secondary endpoints, all safety endpoints, et cetera. So from a conduct perspective and from an inference perspective, there might also be challenges. If you take all this together, these are quite a few things that

[00:23:45] Kaspar: might lead, in certain instances, to people just preferring a template approach that takes one year longer.

[00:23:51] Alexander: Yeah. However, if we think about how important just a couple of months are in terms of development time, and what we all do in terms of cutting timelines, time is really precious.

[00:24:05] Alexander: So I think having a discussion about this is super helpful. One last thing before we go to the regulatory topic: futility analysis. Do you consider that to be an adaptive feature as well?

[00:24:22] Kaspar: I think it depends on how you define a confirmatory adaptive trial. Often people say it is something where you adapt under type one error protection, and a futility analysis typically doesn’t affect your type one error. So do you call that adaptive, yes or no? For me it somehow belongs, because futility analysis is the flip side of a group-sequential design.

[00:24:45] Kaspar: So I would also put that in this family, because at the interim you pick one of two scenarios: either you set the sample size to zero, you kill the drug, or you run the trial to the end as planned. For me, that’s also an adaptation, so I would count futility analyses under adaptive designs as well. And that builds the bridge to what you just mentioned, ICH E20.
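
One common way to operationalize such a futility look, not something discussed in detail in the episode, is conditional power: the chance of final success given the interim data, often computed under the assumption that the currently observed trend continues. Here is a minimal sketch using the standard Brownian-motion formulation; the interim values and the 20% threshold are illustrative assumptions.

```python
from math import sqrt
from scipy.stats import norm

def conditional_power(z_t, t, z_crit=1.96):
    """P(final Z > z_crit | interim Z at information fraction t),
    assuming the drift estimated at the interim continues."""
    b_t = z_t * sqrt(t)  # Brownian-motion value B(t) = sqrt(t) * Z(t)
    drift = b_t / t      # drift estimated from the interim data
    return 1 - norm.cdf((z_crit - b_t - drift * (1 - t)) / sqrt(1 - t))

cp = conditional_power(z_t=0.5, t=0.5)  # weak trend at 50% information
print(f"conditional power: {cp:.3f}")   # ~0.04
if cp < 0.20:  # an illustrative (non-binding) futility threshold
    print("stop for futility: set the remaining sample size to zero")
```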

[00:25:08] Kaspar: There is also a short statement about futility in E20, so I think that group also considers it to belong. So, ICH E20, what is that all about? ICH is the International Council for Harmonisation, a global consortium with members from the pharma industry and regulatory agencies that has built, over the last, I don’t know, 50 years, a comprehensive suite of guidances

[00:25:43] Kaspar: that regulate, or describe, good practice in drug development. The one most relevant for our audience of statisticians is E9, which describes good statistical principles for clinical trials. And we have all talked a lot about E9(R1), which came to

[00:26:04] Kaspar: make a few things in E9 a bit more precise with respect to estimands. And it was felt that maybe it would be useful to write a guidance specifically on adaptive designs, and that’s E20. This working group has now worked for a couple of years; I think it was hit a little bit by the pandemic, because this working group, which brings together industry and regulatory statisticians, often meets face-to-face, and the pandemic postponed a few of those meetings. So it took quite a while, but the draft guidance is now out: it was published about a month ago and is open for comments from anybody who would like to comment.

[00:26:50] Kaspar: And the title is indeed “Adaptive Designs for Clinical Trials.” So it’s discussing exactly this very topic that we talked about today.

[00:27:00] Alexander: Yeah. And if you are relatively unfamiliar with the topic, I highly recommend you have a look at it and read through it. Thanks so much, Kaspar, for another very useful episode.

[00:27:13] Alexander: And I’m pretty sure this wasn’t our last gig together. We will both be at the EFSPI regulatory workshop, which takes place in Basel, and you can join in person or virtually. I would join in person, because I think it’s a huge opportunity also from a networking perspective.

[00:27:38] Alexander: And high value from a content perspective; there are lots of very interesting topics. If you want to meet up with one of us, that is definitely one of the next spots to do it. The other opportunity is probably ISCB, isn’t it?

[00:27:59] Kaspar: Yes. At the end of August, the International Society for Clinical Biostatistics holds its yearly conference in Basel as well.

[00:28:06] Kaspar: We expect between eight and nine hundred attendees, so the conference is completely sold out, and we expect a very good program also for statisticians working in industry. It’s a bit more academically leaning; it’s a different style than the EFSPI workshop. And if you want the full coverage, then I invite you to attend both.

[00:28:29] Kaspar: The EFSPI workshop is just two weeks later. And since we are talking about confirmatory clinical trials and E20: at the EFSPI workshop there will be a session about E20, with an industry view on E20, a regulatory view on E20, and a panel discussion, where I anticipate some of the points that will come up in the commenting process will already be discussed.

[00:28:51] Kaspar: If you’re interested in this topic, it might be really worthwhile to attend, either face-to-face or virtually, and participate in or contribute to the discussion. Thanks so much. Have a great time. Thank you, Alexander. Talk to you soon.

[00:29:13] Alexander: This show was created in association with PSI. Thanks to Reine and her team at VVS, who work on the show in the background. And thank you for listening. Reach your potential, lead great science, and serve patients. Just be an effective statistician.

Join The Effective Statistician LinkedIn group

I want to help the community of statisticians, data scientists, programmers and other quantitative scientists to be more influential, innovative, and effective. I believe that as a community we can help our research, our regulatory and payer systems, and ultimately physicians and patients take better decisions based on better evidence.

I work to achieve a future in which everyone can access the right evidence in the right format at the right time to make sound decisions.

When my kids are sick, I want to have good evidence to discuss the different therapy choices with the physician.

When my mother is sick, I want her to be able to access the evidence and understand it.

When I get sick, I want to find evidence that I can trust and that helps me to have meaningful discussions with my healthcare professionals.

I want to live in a world where the media reports correctly about medical evidence and in which society distinguishes between fake evidence and real evidence.

Let’s work together to achieve this.