Every statistician in the health sector must know about estimands and how to apply the estimand framework.

In this episode, we introduce the topic using a case study. We’ll cover

  • How does the HTA system in Germany work?
  • What are the four critical elements of the estimand framework?
  • How does the application of the estimand framework differ in study planning vs post-hoc?

Lovisa also runs a workshop during a corresponding upcoming one-day PSI event on RWE.

You can register for the one-day event at http://www.psiweb.org/events/psi-events.

Lovisa Berggren

Lovisa Berggren (MSc) is a senior statistical consultant and freelancing consultant specialized in HTA submissions, data mining, and analyses of integrated clinical trial data, with a focus on Phase III and IV neuroscience, autoimmune, and oncology.

Lovisa gained her initial experience as a statistician working for 3.5 years at AstraZeneca. During this time she prepared and attended two public oral advisory committee meetings with the FDA and one EMA oral hearing for new indications of quetiapine (neuroscience). Her job also included leading and coordinating large teams of statisticians and programmers under high pressure.

After AstraZeneca, Lovisa joined ImClone for 1.5 years as a contractor working on two Phase III trials in oncology. During the restructuring and merger of ImClone and Eli Lilly, Lovisa moved over to work as a contractor for Eli Lilly. During her 5.5 years with Lilly she has worked on a number of data-mining projects and publications (autoimmune and neuroscience), as well as in the role of lead statistician for the HTA submission of ixekizumab. Lovisa is currently working part time for Eli Lilly and Cogitars.

In parallel, she is conducting her PhD at UMIT University in Austria. Her PhD thesis focuses on methods for evaluating consistency of treatment effects in HTA reimbursement submissions.


Estimands in the presence of high and unbalanced drop out rates – a case study in the German HTA system – Interview with Lovisa Berggren

Episode 20: Estimands in the presence of high and unbalanced dropout rates – a case study in the German HTA system.

Welcome to the Effective Statistician with Alexander Schacht and Benjamin Piske, the weekly podcast for statisticians in the health sector designed to improve your leadership skills, widen your business acumen and enhance your efficiency. In today’s episode we talk with Lovisa Berggren

about estimands, and especially about estimands in the German HTA system. We present a case study with pretty high and also pretty unbalanced dropout rates in psoriasis. This podcast is sponsored by PSI, a global member organization dedicated to leading and promoting best practice and industry initiatives for statisticians.

Join PSI today to further develop your statistical capabilities with access to special interest groups, a video-on-demand content library, free registration to all PSI webinars and much, much more. Visit the PSI website at psiweb.org to learn more about PSI activities and become a PSI member.

Welcome to another episode of the Effective Statistician. Today I have a guest that is very, very familiar to me, because we have been working together for quite some time. And also today I’m alone without my co-host Benjamin. And my guest today is Lovisa Berggren. Hi, Lovisa. Hi, Alexander. Nice to be here today. Yes, very good. So I could…

probably talk quite a bit about your strengths and all the different experiences you have. We have been working together in the neuroscience area already and now in immunology. Maybe you can tell a little bit about how you got into statistics and what your life has been like since then.

Absolutely. So hi everyone, my name is Lovisa Berggren and I’m originally from Sweden. I started out studying mathematics, and about halfway through I realized it wasn’t very applied. Around the same time I took my first statistics courses and I thought, aha, finally something that is applied.

And after I’d finished off my master’s thesis, I started to work for AstraZeneca in Sweden in neuroscience. After a bit over three years with them, I thought it was time to explore the world a little bit, and I ended up in Germany working for ImClone in oncology for almost two years before I joined Eli Lilly and Alexander’s team, working on neuroscience and then in Bio-Medicines and psoriasis. And then I also decided, quite recently, to start my PhD at a university in Austria called UMIT in Hall in Tirol. So that’s a little bit more about me.

Okay, what’s your research topic from a PhD perspective? I’m looking into different ways of analyzing consistency across subgroups in an HTA setting across the European countries, which is a very interesting topic, because consistency is something that is very hard to analyze from a statistical perspective.

It also touches other areas that I’m very interested in, and those are post-launch activities and HTA-related activities. Yeah. And today we actually have an HTA topic which is directly related to your PhD. And this is about the situation specifically in Germany.

And some things that we will talk about relate to an event later this year, actually in September: a one-day event about this topic and similar topics, like generalizability, in Bad Homburg. So if you want to go to this and learn more about what we are talking about today, as well as a couple of other topics: we have quite some nice speakers there.

And NICE, I say it here all in capitals, because there are some people there that are related to NICE. And we also have a guest from IQWiG who will join us on the panel discussion, who is actually the, how should I say it, chair of the statistics group at IQWiG. And…

So today we actually talk about estimands in the HTA system in Germany, or rather a case study where estimands played a bigger role, because there was a study with a pretty high dropout rate in one of the comparator arms, and that caused all the following problems in terms of interpretability,

and what we actually wanted to say about it. Just to give you a little bit of a feeling for how this HTA system in Germany works: when you want to bring a new drug on the market in Germany, in the first year you basically have pretty much free pricing. But by the time you bring it on the market,

you need to submit a value dossier, an HTA dossier, to the GBA. That then usually gets reviewed by IQWiG. IQWiG is the scientific advisory body to the GBA, which is the political decision maker. This whole system is also very often called the AMNOG system,

because AMNOG is the law that sits behind the system. IQWiG then reviews the value dossier that the company submits and provides an assessment, on which the company, as well as other stakeholders, have three weeks to review and provide comments. Afterwards, there is a hearing at the GBA.

There are the sponsors that submitted the value dossier, as well as other people or parties that submitted comments. This is an interesting discussion at the GBA in Berlin, on the basis of which there can be maybe some further clarifications at the end, and maybe a little bit of additional analysis.

And then, based on this hearing and all the different data, the GBA makes a decision about the added benefit of the drug. And if there’s an added benefit, there is subsequently a price negotiation. From a sponsor’s side, of course, everybody wants to go into this pricing negotiation with a very, very good added benefit.

So this is just a little bit of background on the HTA system, something like a five-minute intro to AMNOG in Germany. Today we will actually talk about a case study in psoriasis where specifically oral products were compared to a new biologic which was about to launch on the market.

And for the critical oral product that was a randomized comparator in the study, there was a pretty high dropout rate due to adverse events. So the study was a 24-week study, and nearly half of the patients in the comparator arm dropped out due to side effects.

very much in the first half of the study. That, of course, led to all kinds of problems of how do we compare, on the one hand, the biologic, for which nearly all patients completed the 24 weeks, to an oral product with a significant dropout rate. And that led to all kinds of interesting discussions.

And this is very much about timing: we are now in 2018, and most of these discussions happened last year, in 2017. So we are kind of in the midst of all the estimand framework discussions. And so this is also a very, very nice case study, which was presented at the PSI conference this year by Lovisa.

And that’s why we are actually talking about it today. Exactly. So, Lovisa, as I just mentioned the estimand framework, what are actually the four critical elements of this estimand framework? Well, before I describe those, I would just quickly like to describe what an estimand is. Because even though it’s a very hot theme at the moment, it’s sometimes forgotten where all of this comes from. It actually started with the FDA, who looked into missing data problems, and they discovered that a lot of what we do in clinical trials is really good: it’s pre-specified, it’s very well described. So we have our objectives and we have our analysis plans.

But they also discovered that there is a missing link between these two, and that is how to address missing values. If you don’t address this, you get a lot of ambiguity in how your objectives actually translate into your analyses. And in the worst case, you can end up with a situation where you run all your analyses according to plan, but you can’t make a conclusion on your objectives because of the ambiguity of the missing data.

And then EMA took this over and constructed estimands, and made the estimand part of the objective itself. Exactly. So just saying I want to know the treatment effect of drug X versus drug Y is probably not a good objective. Yeah. So you can see estimands as a way of making clear and transparent objectives and thereby

connecting them to the analyses and the results in a transparent and cohesive way. And in order to do this, you will need these four building blocks that you just mentioned. In the new ICH E9 addendum they are defined as population, variable, population-level summary,

and the intercurrent event strategy. There’s nothing about missing data in there. Exactly. And that’s because missing data is just part of the problems that you can come across when you do a clinical trial, because you also have problems like death, or switching of treatment, and other kinds of events that will in one way or another lead to missing data.

So that’s why they defined this intercurrent event as a broader term that includes what we normally call missing data, but also other things like treatment switching. So I think treatment switching is a very, very interesting thing. For example, if, you know, for all patients in your study you have a complete set of follow-up, so you don’t have any missing data,

you can still have very, very many discussions about estimands if there’s different treatment switching in it. Exactly. So, for example, if you allow something like rescue therapy, and then after the rescue therapy you follow up on the patients and you see how they actually do. So you collect all the data, you have all the data. So it’s not really… It started as a missing data problem, but it’s…

much bigger than that, isn’t it? Absolutely. And that’s a very good point that you just made: you can’t just say that, oh, we don’t expect much missing data in our studies, so we don’t need to address this whole topic of estimands. Because, like I said earlier, it’s really about making clear objectives. So it’s still something that should be considered, even if you don’t foresee a lot of missing data.
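As a purely illustrative aside (not from the interview itself), the four building blocks of an estimand can be written down as a small structure. All field values below are invented, loosely inspired by the itch case study discussed later:

```python
from dataclasses import dataclass

@dataclass
class Estimand:
    """One estimand = one precisely worded research question."""
    population: str                   # whom do we study?
    variable: str                     # what do we measure per patient?
    population_level_summary: str     # how do we compare the treatment arms?
    intercurrent_event_strategy: str  # how do we handle events like dropout?

# Invented example; not the actual estimand from the case study:
itch_estimand = Estimand(
    population="psoriasis patients with relevant itch at baseline",
    variable="itch NRS change from baseline to week 24",
    population_level_summary="difference in mean change between arms",
    intercurrent_event_strategy="composite: treatment stop counts as non-response",
)
```

Writing the four attributes down together like this makes the ambiguity visible: changing any one field changes the research question being answered.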

Okay, okay. So let’s briefly go into these four critical elements. First, in terms of population, what do you mean by population? So population is basically the patients that you need to observe or study in order to answer your research question.

So if your research question is about the population as a whole, you need to make sure that your inclusion and exclusion criteria in the study include all patients. But if your interest is, for example, in patients with itch, you should consider limiting your study population to this population of interest, or, if both populations are of interest, you should at least stratify for this population to make sure to preserve

randomization when you do your analysis. Yeah, and we are talking about itch here because we are talking about a psoriasis case. Not all patients that have psoriasis have problems with itch, but a considerable number do. And itch was one of the very critical endpoints in this case study. And

one of the problems, of course, was: okay, do you use all patients, the complete ITT population, or do you use just those patients with itch? And then, of course, itch, I think, was measured on a 10-point or 11-point scale. So is it all patients with, you know, at least any itch, or any kind of…

clinically relevant itch, or, you know, what are the populations that you’re talking about, these kinds of things, isn’t it? Exactly. And that actually brings us over to the next part of the estimand, and that is the variable. Because it’s also important to make sure that you collect in your study the variable that you’re actually interested in in your research question.

And it’s also important that you collect it in a way so that you have all the data needed. So if you want to do detailed analysis on the variable, it’s not a good idea, for example, to just collect it as a yes-no variable if it’s possible to collect it in a more detailed way. And it’s also important to decide what scale you are collecting this variable on,

even if it’s an already validated instrument, you might have the possibility to look at it in terms of percent change, or area under the curve, or time to event, etc. So selecting your variable and really thinking it through upfront is very important. In terms of this, if you speak about area under the curve, that is kind of,

what’s the burden of the disease over a specific period, for example? Yeah, exactly. So when you talk about the variable, you are talking about patient-level data. Yeah. Okay. So the variable is patient-level data, and that could be a response variable, or a change-from-baseline variable, or a percent-change-from-baseline variable, or an AUC variable, as you just described.
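To make these variable choices concrete, here is a small sketch with invented numbers showing how the same patient-level measurements yield different variables (the 0–10 itch scale and the 4-point responder threshold are taken from the discussion; the scores themselves are made up):

```python
# One hypothetical patient's itch NRS scores (0-10 scale):
baseline, week24 = 8.0, 3.0

change = week24 - baseline              # change from baseline: -5.0
pct_change = 100.0 * change / baseline  # percent change from baseline: -62.5
responder = (baseline - week24) >= 4.0  # >=4-point improvement counts as response
```

Same raw data, three different variables; each implies a different estimand, and the responder definition already constrains the sensible population (baseline of at least 4).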

And of course, that is somehow also linked to the population, isn’t it? Because let’s say you have a response variable that says you have a minimal clinically important improvement from baseline; then that basically implies that your population should only be those that

are above a certain threshold in terms of their severity at baseline. So for itch, for example, a four-point change on this 0-to-10 scale is very often considered clinically important. So they should actually have at least a four at baseline; otherwise, you know, they can’t reach the endpoint, isn’t it? Exactly. And this is something that comes back the more we discuss these estimands.

It’s really a process, an iterative process. When you decide one of these different components of the estimand, they will impact each other. So it’s not something that you just sit down and write down once. It’s something where you really need to iterate and go back to your objectives and to your analysis. And to link back to what we talked about, this being patient-level,

that brings us to the third component of our estimand, which is the population-level summary. Now, when you have collected your variable, let’s say response, you need to decide how you’re going to summarize it, because on the population level you have different options. So when you have a binary endpoint like response, you could, for example, look at

the absolute difference in percent responders. You could look at your odds ratio or your relative risk. Or you could actually use this responder variable to create a time-to-event variable, and then you would, for example, look at the hazard rate or similar. So the population-level summary is really:

what are the values that you’re going to compare between your treatment arms? And here it was a response rate. And if you have different time points over which you collect the response, so let’s say you have a 24-week study and you collected it at weeks 1, 2, 8, 12, 16, 20 and 24, then of course you have

response rates on multiple occasions, and you would ultimately maybe want to say, okay, what’s the response at 24 weeks? But it could also be something like the time to first response, which is then… But that is a different kind of patient-level data, isn’t it?

So time to first response would be a summary statistic that is on the patient level. Yes. And once again, like I said before, it’s all an iterative process. So some of these things that you mentioned, for example the time point, that’s something you could either specify in your variable or in your population-level summary, but it could also be part of your intercurrent event strategy,
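The binary-endpoint summaries mentioned above (absolute difference in responders, relative risk, odds ratio) can be sketched with invented counts; none of these numbers come from the case study:

```python
# Invented week-24 responder counts: 60/100 on treatment, 40/100 on comparator.
resp_trt, n_trt = 60, 100
resp_cmp, n_cmp = 40, 100

p_trt = resp_trt / n_trt  # 0.6
p_cmp = resp_cmp / n_cmp  # 0.4

risk_difference = p_trt - p_cmp                             # absolute difference: 0.20
relative_risk = p_trt / p_cmp                               # 1.5
odds_ratio = (p_trt / (1 - p_trt)) / (p_cmp / (1 - p_cmp))  # 2.25
```

Same underlying data, three different population-level summaries; which one you pre-specify is part of the estimand, not an analysis afterthought.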

because you may, for example, want to go for a strategy that only looks at on-treatment data. So that, of course, again very nicely brings me to the last bit of the estimand framework, which is the intercurrent event strategy. And as we said before, intercurrent events include missing data, treatment switching, and basically any event

in your study that impacts your data collection or your analyzability of your data. Is this intercurrent event always directly related to treatment, to the treatment itself? Is it always kind of stopping, changing, augmenting, whatever the treatment?

Not necessarily. It should be related to treatment in some way, because otherwise it’s more likely to be an outcome variable. But there are events that are a bit hard to classify: events that could lead to missing data, but that could also be an interesting outcome in your study. So there are a

lot of different kinds of intercurrent events, but they should impact how you can analyze the data. So, for example, a patient changing hair color would not be an intercurrent event, because that doesn’t have any impact on how we can analyze our data.

Yeah, yeah, yeah. Unless hair color is one of our end points, of course.

Yeah, I’m just thinking about some bizarre or very, very rare psychiatric diseases where maybe that is a sign of something. Yes. And I mean, if we step aside from medical research, I think things like hair color could have more of an impact on whatever you may be studying. But for medical research, I think it’s quite safe to say that

hair color, or a change of hair color, is not an intercurrent event. Yeah, and I think I have yet to come across a study where we actually collect this data. So yes, but maybe I’m just working in the wrong therapeutic areas. Okay, we are digressing here a little bit. So, in terms of intercurrent events: in our

case study, the intercurrent event was dropout, and most of the dropouts were due to AEs. So, is there something different in terms of whether these dropouts are due to AEs, or due to missing follow-up, or, you know, due to lack of efficacy? Would that make a difference?

It could, yes, because that brings us back to the different intercurrent event strategies. And you basically have five of them. The first one is the treatment policy strategy, which is what most of us would call the ITT approach. You basically say that it doesn’t matter what happens to the patient: if he leaves the study, if he switches medication,

it doesn’t matter. We continue to collect data and we analyze it based on his or her randomization. The second one, which… Okay, just hang on a minute. So in terms of treatment policy, my interpretation, or maybe my very naive interpretation of this, is, you know, you have a rather clearly defined algorithm for how you treat patients. So kind of…

If they have this event, then you go to this next treatment. If they have that event, then they go to this next treatment. And if you kind of randomize patients, then it’s pretty clear that they get actually kind of different sequences of treatment based on their different outcomes. And you basically compare these different

you know, treatment strategies, policies, irrespective of kind of, and then you look into the outcome at, you know, some time point, just based on the randomized comparison at the start. Yes. And this is one case of treatment policy strategy. And this is a very favorable case, because that’s a scenario that

the regulatory and the HTA authorities are likely to approve of. However, there is nothing in the treatment policy strategy that requires you to have these predefined switching alternatives. It’s really a truly pure ITT approach, where you just say that we will continue to collect data even if the patient drops out or switches to

some treatment we hadn’t even foreseen. It then of course becomes harder to use and to interpret your data. But if you just look at the principle, it’s possible to include any type of event except for death, because you can’t collect data after death. Where you can collect data after the event, you simply ignore the event and say you are not so much interested in it.

But of course, the useful part of this strategy is when you have well defined treatment policies that you can compare at the end of the study. So if you have clearly defined in your protocol what happens for the different cases and then you collect the data after these cases. So let’s say you have a predefined…

strategy for how to treat treatment failures, what to do after an adverse event, these kinds of things. So that in the end you can clearly say: okay, we compare treatment strategy A, where you first start with the experimental drug, and if that doesn’t work you have this kind of escalation strategy, compared to the standard of care, where you have this type of…

escalation strategy. In terms of that, I find that very, very appealing. I think we as statisticians tend to think from an efficacy point of view, what is the efficacy outcome in this kind of sense. Now of course that similarly also applies to any safety outcomes.

if there’s kind of, you know, safety events that happen after the switching, they would also kind of, you know, be compared. So, so, you know, so, so for example, if you have your experimental drug, which is kind of for lots of researchers, their baby, this baby would get hurt by any kind of AEs that happens basically after they stopped.

treatment with this baby. So I think that is a pretty hard pill for many to swallow. Yeah. And I think it’s also important to start to distinguish between effectiveness and efficacy, and the same thing with safety. Is it just the safety of our drug while it’s given, or is it actually…

safety in a broader perspective? Like we talk about effectiveness as more of a real-world perspective on efficacy. Yeah, so this efficacy/effectiveness wording, personally I’m not very much a fan of it, because…

In the past, I have very often seen it used as: efficacy you get from RCTs and effectiveness you get from observational studies. And here we talk about effectiveness within RCTs. So for lots of people that might be completely confusing. Absolutely. And I think the estimand framework wants to overcome the confusion.

So I’m not sure we should add further vocabulary that doesn’t help to clarify things. Yes. So we have talked about the treatment policy strategy. Then we also have the composite strategy, which links back to what we said before about death, for example. Because death…

What happens after death could potentially be seen as missing data, because we can’t collect anything after death. So unless death is your primary endpoint, it will cause missing data. But there are cases where it might not be the only outcome of interest, and you might want to create a composite strategy where you combine death with, for example,

progression or with some other response variable. I really like this composite endpoint. In the psoriasis field, there’s lots of talk about non-responder imputation, which basically is a composite endpoint. Absolutely. Have you reached your clinical symptom improvement

and are you still on treatment? If they have stopped treatment, then by definition they count as a non-responder. And I think that makes this very, very nice to interpret. It’s just, you know, the wording “non-responder imputation” is…

maybe a little bit misleading, because it talks about handling missing data when in fact I think it’s a composite endpoint. Absolutely. And, like you say, a lot of people just see it as a way of imputing missing data and then they leave it at that. Whereas if you embrace this estimand framework, you can then start to conclude things about your results and

connect your objective to your results. So you would then start to talk not about “my results are this or that with NRI imputation”, but “my results would be this if we consider clinical success as the patient having a response and still being on treatment”. So you can really describe your data in a much better way if you think it through in a bit more detail and don’t just go for:

yeah, we used NRI imputation. Yeah. And I think these composite endpoints work very, very nicely for binary endpoints. But what about continuous endpoints, like change from baseline? And here, with itch being a type of continuous endpoint,

change from baseline is of course a very, very appealing thing.
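The non-responder-imputation composite just described, responder only if improved and still on treatment, can be sketched in a few lines (the patients and counts below are invented for illustration):

```python
def nri_responder(reached_improvement: bool, still_on_treatment: bool) -> bool:
    """Composite 'non-responder imputation' endpoint: a patient counts as a
    responder only if they reached the clinical improvement AND stayed on the
    randomized treatment; anyone who stopped is, by definition, a non-responder."""
    return reached_improvement and still_on_treatment

# Invented patients: (reached improvement, still on treatment at week 24)
patients = [(True, True), (True, False), (False, True), (False, False)]
n_responders = sum(nri_responder(imp, on_trt) for imp, on_trt in patients)
# only the first patient counts as a responder
```

Framed this way, treatment discontinuation is not "missing data to impute" but part of the endpoint definition itself, which is exactly the composite-strategy reading discussed above.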

how can you incorporate something similar, in terms of a composite endpoint, for such a continuous endpoint? Well, then you would normally move a bit more towards the next strategy, which is the hypothetical strategy. And that is sort of answering the question: how would my results have looked if intercurrent events or treatment switching

weren’t allowed? So if dropouts weren’t allowed, or treatment switching wasn’t allowed, how would our results look then? And there, for example, you have the modified baseline observation carried forward, or the pure baseline observation carried forward, where we make the very strong assumption that the second we stop observing the patient, or the patient has an intercurrent event,

they go back to absolutely no benefit of the drug, or at least they go back to the baseline value, which in some settings could actually be a benefit if you have a deteriorating disease. But in this kind of modified baseline observation carried forward, you could also have different scenarios. For safety dropouts they would go back to

baseline, and if they drop out for some other reason, like lost to follow-up, you could say, okay, they just get a LOCF value, a last-observation-carried-forward value. So there are lots of different ways you can handle that, isn’t it? Absolutely. And there is so much more that can be done within this hypothetical strategy.
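A toy version of the modified-BOCF idea just described might look as follows. The dropout-reason rule and all numbers are invented for illustration; real implementations would be far more careful:

```python
def imputed_week24(observed, baseline, dropout_reason=None):
    """Toy modified-BOCF rule (invented, for illustration): after an AE-related
    dropout the patient reverts to baseline (no remaining drug benefit assumed);
    otherwise carry the last observed value forward (LOCF); completers simply
    keep their final observation."""
    if dropout_reason == "adverse_event":
        return baseline      # BOCF: revert to the baseline value
    return observed[-1]      # LOCF for other dropouts / completers

# Invented itch scores observed before dropout; baseline was 7:
ae_value = imputed_week24([5, 4], baseline=7, dropout_reason="adverse_event")      # 7
ltfu_value = imputed_week24([5, 4], baseline=7, dropout_reason="lost_to_follow_up")  # 4
```

The point of the sketch is that different dropout reasons can legitimately get different hypothetical assumptions, and each assumption changes what the estimated effect means.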

For example, your multiple imputation would fall under this category. Your MMRM would fall into this category. And basically any method where you somehow model what the patient would have experienced if they didn’t have the intercurrent event. I have one question in terms of…

AUC. That seems to be one of the

upcoming, kind of in-vogue topics in the German HTA setting; at least it’s pretty prominent in the new update of the GBA template at the time we are recording this. Is AUC the solution to the problem in terms of missing data?

No, but it falls under one of the intercurrent event strategies, and that is the while-on-treatment strategy, where you say: okay, intercurrent events might happen, but we are anyhow just interested in what happens during the time you actually take our drug. So then area under the curve could be a nice way to summarize this. Or here you can also have, as we talked about before, the time points,

and that you can put in the variable, but you could also put the time point in the intercurrent event strategy. And that would be to say that we are looking at the last visit on treatment. So it would be, once again, a bit similar to LOCF, but the interpretation would be different. You wouldn’t say “the results are this at week 24”, as you would with LOCF. You would say:

if you look at the last visit up until week 24 that the patients were on treatment, these are the results. Yeah. In this specific case study, all the final conclusions from IQWiG and also from the GBA were based on the Kaplan-Meier analysis of the time-to-event endpoints. Yeah. But time to event can actually fall under different intercurrent event strategies,

not necessarily the while-on-treatment strategy, even if it can be tempting to think that it is. Okay, tell me about it. So, for example, let’s say that you draw a Kaplan-Meier curve and analyze based on that; then it’s actually a hypothetical strategy, because you are censoring

the missing data. That makes an assumption about how those patients would have behaved: you sort of say that they would have continued in a very similar way. So your Kaplan-Meier curve would be a hypothetical strategy, and your progression-free survival would be a composite endpoint, because you would look into both death and progression

as an event. It’s quite tricky. It requires thinking it through a few times, actually. Yeah, and I think there’s also a difference whether you look into time to something bad or time to something good, because that makes a difference for

whether your hypothetical assumptions actually play out. So if you stop treatment, it’s very, very unlikely that you have further benefit after that. But of course, if you stop treatment, you can still die afterwards because of that treatment. So I think that is…

Just applying a Kaplan-Meier curve to all of this doesn’t seem to be the direct solution here. No, and that’s also an important point: there is no perfect solution. There is no one estimand that will fit everything. There is no one intercurrent event strategy that we can apply regardless of study, regardless of endpoint, etc. So

once again, it’s this iterative process. And before we move on, I just want to quickly mention the last intercurrent event strategy, and that’s the principal stratum strategy. That is something you have to include in the study design, and it could, for example, involve an enrichment period or something like that, where you use the pre-randomization period to identify the patients that will not have an intercurrent event.

So, for example, patients that you know already tolerate the treatment. Okay. So one final conclusion: there’s no one ring to rule them all, like in Lord of the Rings. Everybody needs to really put some thought into defining the objective very, very clearly, then go into the different elements,

have a nice iterative framework. And I think it’s also iterative because there’s of course many stakeholders involved, which I think is especially in our scenarios where we have not just eMind FDA as kind of very, very prevalent stakeholders, but lots of different HDA bodies around the world, all the different…

academic societies, and every physician might have a different perspective on what their preferred estimand is. It's a pretty complicated problem that dropped out of this original missing data problem, which in the end…

helped us overall to come up with a much more precise interpretation of our data. So, the more I dig into this estimand framework, the more I become a fan of it. Yeah, and I think what you just mentioned is really important: you have different stakeholders that might have different interests in what estimand you use.

And you may also need different estimands for your regulatory submission and for your HTA submissions. So that's why it's so important to do this at the study design stage, but then also to redo this process, slightly modified, when you do all these post-hoc analyses, because

we can really benefit from having more structure and not just go crazy running all analyses, but actually take a step back and ask ourselves: what analyses should we do and why, how are we going to address the missing data, and what impact is that going to have on the analyses and their interpretation? And I think in all our communications we need to get much better

with regard to these things, in all the different documents that we have, starting from the protocol up to publications and presentations, and of course in study reports or any web pages where we publish the data. I think that would help a lot. But it's a long way to explain this concept to

everybody in the community. So I'm pretty sure this topic will stay quite hot for the next years. Yeah, and going back to our case study, I think it's something that could really help us: if estimands had been used more actively in the planning stage of the analysis, I think we could have avoided quite a lot of analyses.

So that's why it's so exciting to now start to see how we can use this framework, not only for studies, but also for post-hoc analyses. Yeah, and it would help me a lot, because I was asked about this topic for nearly 45 minutes in the meeting in Berlin. And yes, there were lots and lots of different questions around it. And I think there is…

from all the different stakeholders, and there is so much to learn. That's not surprising; it's a very, very new concept. Of course, it takes some time to really understand it and teach it to everybody.

I hope this episode was a nice further introduction to the topic for you as a listener. If you haven't done so, I would strongly recommend reading the addendum to the ICH E9 guideline and attending PSI meetings on the topic.

Within your companies and with your colleagues, you know, talk about this and challenge it, because this is not something that will come naturally; it's not a case of "I do this one-hour training and then I'm done." Okay, thanks a lot, Lovisa. That was great to have you here. Thank you so much for having me. Have a nice time. Bye. You too. Bye.



as a statistician in the health sector. If you enjoyed the show, please tell your colleagues about it.

Join The Effective Statistician LinkedIn group

I want to help the community of statisticians, data scientists, programmers and other quantitative scientists to be more influential, innovative, and effective. I believe that as a community we can help our research, our regulatory and payer systems, and ultimately physicians and patients make better decisions based on better evidence.

I work to achieve a future in which everyone can access the right evidence in the right format at the right time to make sound decisions.

When my kids are sick, I want to have good evidence to discuss the different therapy choices with the physician.

When my mother is sick, I want her to have access to the evidence and to be able to understand it.

When I get sick, I want to find evidence that I can trust and that helps me to have meaningful discussions with my healthcare professionals.

I want to live in a world, where the media reports correctly about medical evidence and in which society distinguishes between fake evidence and real evidence.

Let’s work together to achieve this.