What knowledge gaps exist regarding the implementation of these new analytical techniques, and how can statisticians and data scientists bridge them to maximize the effectiveness of clinical research? Despite the promise of Bayesian approaches, the lack of expert training, software tools, and computational resources has made their adoption slow. Statisticians and data scientists should invest in training programs to enhance their skills in Bayesian modeling, use open-source software for Bayesian modeling, and ensure that they have access to well-trained computational analysts.

It is also essential that statisticians and data scientists effectively communicate the results of these analyses to clinicians, regulatory authorities, and maybe even patient groups to encourage adoption.

Bayesian approaches have great potential to improve the accuracy and efficiency of data analysis in early-phase clinical trials. Bayesian methods are not without their challenges, but with adequate training, resources, and communication, they provide opportunities for novel ways to cope with complex problems in clinical research.

But first comes the challenge: eliciting priors from clinical information. Developing prior distributions can be difficult without the right set of tools and resources.

In this episode, Miguel Pereira, a statistical consultant for COGITARS, a Germany-based company specializing in early clinical trial design, and I highlight ways to tackle these challenges.

We also discuss the following points:

  • What can statisticians and data scientists learn by exploring the early development research questions?
  • Are there any unique design challenges that they should be aware of?
  • How can Bayesian approaches offer new insights that traditional methods cannot provide?
  • What strategies should data scientists employ in order to successfully navigate the complexities of these new approaches?
  • What knowledge gaps exist regarding the implementation of these new analytical techniques, and how can statisticians and data scientists bridge them in order to maximize their effectiveness?

These are all important questions for statistical and data science practitioners, and exploring them further could reveal opportunities for novel ways to approach problems in an increasingly complex field. So, share this link with your colleagues!

Never miss an episode!

Join thousands of your peers and subscribe to get our latest updates by email!

Get the shownotes of our podcast episodes plus tips and tricks to increase your impact at work to boost your career!

We won’t send you spam. Unsubscribe at any time. Powered by ConvertKit

Learn on demand

Click on the button to see our Teachable Inc. courses.


Miguel Pereira MD PhD 

Director and Data Scientist at COGITARS

I am a former MD turned biostatistician and data scientist. My mission is to use data and analytics to improve medicine and healthcare while developing innovative methods along the way. From computational biology and bioinformatics to preclinical development and clinical trials, there’s no shortage of interesting problems to solve.

Transcript

Bayesian approaches in early clinical research

[00:00:00] Alexander: Welcome to another episode of The Effective Statistician. Today I have Miguel with me as an expert in Bayesian statistics for early clinical research, and we will speak specifically about oncology today. That’s definitely an area where I have only a little bit of insight, so I’ll be learning a lot today as well. Hi Miguel. How are you doing?

[00:00:26] Miguel: Hi, how are you? Doing well, thank you.

[00:00:28] Alexander: Very good. So maybe for those who don’t know you maybe you can introduce yourself first.

[00:00:35] Miguel: Yes. So, my name is Miguel. I’m based in Cambridge, UK, and at the moment I work as a statistical consultant for a company called COGITARS, which is based in Germany. And we provide consulting services mostly in the design of Bayesian clinical trials in early clinical development. We don’t just do Bayesian statistics, but that’s the main focus of the company.

[00:00:55] Alexander: Yeah. And we actually had Oliver on the podcast already at least once, probably more than 100 episodes ago. So if you go back, we talked about working for big companies, for small companies, for CROs, and being a freelancer, and now he’s growing his own business, COGITARS. Which is great to work with. It’s a great consulting organization, especially in that area, and it can help quite a lot.

[00:01:23] Miguel: Yes. And the focus is not just on providing services, which would be the main focus of CROs. Our focus is not to be a CRO, but to provide expertise in novel statistical methodology and in developing novel methods for novel designs, because drugs are different and have different requirements, and that’s our main focus.

[00:01:39] Alexander: Yeah. And I notice that you also do quite a lot of actually CRO oversight work for smaller clients.

[00:01:45] Miguel: Yes, that is correct. So when we work with clients, we partner almost as their in-house statisticians, and we work with CROs to articulate the interaction between the client, what we are doing, and the CRO.

[00:01:58] Alexander: Yeah. Okay. Let’s talk about early phase development, and specifically we’ll focus today on oncology. What are the typical questions that come along?

[00:02:09] Miguel: So in early phase development, and this is something that goes pretty much across the board, regardless of whether we’re talking about oncology or any other area, it’s mostly about safety. We’re talking about first-in-human trials usually, so phase one, and we want to find the highest dose of a drug that is still considered safe in humans. So part of it is: how can we define the maximum tolerated dose, or MTD, while keeping the patients safe? There is one thing which is particular to oncology, which is that usually the first-in-human trials are in people that have cancer, and not in healthy subjects, unlike other areas where a phase one trial would be in healthy individuals. That’s because we’re effectively not that interested in efficacy yet, but we are interested in safety.

An interesting thing that comes from that is that, because we are treating people that are ill, we also want to reduce the number of subjects that are exposed to subtherapeutic levels of a drug. Because if we’re talking about a drug that could change the landscape of that disease, we want to make sure that these people are also benefiting from it, because usually they are late-stage patients that don’t have many choices in terms of treatment, because they’ve basically exhausted all the lines of treatment that they had.

[00:03:27] Alexander: Yeah. And it’s also, especially when you think about oncology, lots of disease indications are rare diseases. And that’s another reason why you usually want to restrict your number of patients, because it can take forever to recruit lots of patients. So when we think about this and we want to understand, okay, what is the maximum tolerated dose: how was that traditionally done?

[00:03:50] Miguel: So the most traditional design, and this is a design that is specific to oncology, is the three plus three design, which is extremely easy. A lot of people listening to this will recognize what it is, but it can be explained in 30 seconds. So in a trial, you enroll patients at the first dose level; you enroll three patients. You record the number of dose-limiting toxicities, or DLTs; usually grade three is what we’re looking at. And we see how many DLTs occurred in a cohort of three. If zero DLTs occurred, we can escalate to the next dose level. If one DLT occurs, we expand and enroll three more patients at that dose level, meaning we have six, and we detect whether they have DLTs or not. And if two or three DLTs occur, we stop at that dose level. And this is a very simple design. It’s so simple that I’ve just explained it to you, and you don’t have to be a statistician to understand it. Anyone can understand and implement it.

[00:04:45] Alexander: If you stay at the same dose. So imagine you have three patients. One of these has a DLT, and then you enroll three further at the same dose, and then none of these have a DLT. Then you would still go to the next dose?

[00:04:59] Miguel: Yes. So if you enroll these three, we have one out of six, because you had one in the first cohort and none in the second three-patient cohort. And so with one out of six, you can escalate to the next dose level. Effectively, the cohort size is essentially three or six. That’s why it’s called the three plus three: because three is the baseline or the normal cohort size, and eventually you might expand with three more, which would make six. So three plus three.

[00:05:25] Alexander: Okay. And if you have two or three DLTs at the first step, you directly stop there and say you have reached your MTD, your maximum level?

[00:05:37] Miguel: Exactly. You would say: this is the MTD, we cannot go higher, we cannot continue the dose escalation, and this is the maximum dose that we are going to give these patients in, say, a phase two trial, or eventually in another expansion cohort that is outside of this three plus three design.
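As a quick illustration, the escalation rule Miguel just walked through can be written down in a few lines. This is a sketch of the standard 3+3 rules as described above, not any particular trial’s protocol:

```python
def three_plus_three(dlts_first3, dlts_expansion=None):
    """Decision rule for one dose level of the classic 3+3 design.

    dlts_first3: DLTs among the first 3 patients (0-3).
    dlts_expansion: DLTs among 3 further patients, if the cohort was expanded.
    Returns 'escalate', 'expand', or 'stop' (MTD reached or exceeded).
    """
    if dlts_expansion is None:
        if dlts_first3 == 0:
            return "escalate"
        if dlts_first3 == 1:
            return "expand"          # enroll 3 more at the same dose
        return "stop"                # 2-3 DLTs out of 3
    total = dlts_first3 + dlts_expansion
    return "escalate" if total <= 1 else "stop"  # 1 out of 6 may still escalate

print(three_plus_three(0))      # escalate
print(three_plus_three(1))      # expand
print(three_plus_three(1, 0))   # escalate (1 out of 6)
print(three_plus_three(1, 1))   # stop (2 out of 6)
```

The whole algorithm fits in one small function, which is exactly the simplicity Miguel is describing.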

[00:05:54] Alexander: And you mentioned grade three toxicities. For those people that are not working in oncology: in oncology, you don’t just have the typical kind of severe and serious adverse events. You have graded toxicities for all the different things that happen.

No, you also consider severe treatment-associated adverse events. But you also have graded toxicities that go from one to four. And usually it has to do with either the duration or the severity of a certain symptom. It could be a headache. So if you have a mild headache for three hours after you’ve taken a drug, that would most likely be considered a grade one adverse event. But then if you have, say, an immune reaction to a compound, and for some reason you had almost a cardiac arrest, or you had to stay in hospital for 24 hours under observation and special treatment had to be administered, you most likely go into the realm of grade three or grade four. However, it’s not very hard to get into a grade three, because in some cases, if it’s something that already interferes with your daily life in a significant manner, that could be considered a grade three. I don’t know off the top of my head what the full criteria for a grade three are, but they are fairly well defined in terms of intensity and severity.

Yeah. How long is the follow-up time usually to check for these toxicities?

[00:07:10] Miguel: So that depends essentially on the drug being administered. That decision is usually made by the medical team or by the clinicians, who say that for this specific drug, given the regimen or the number of cycles that are being administered, this being in a chemotherapy oncology-related setting, it depends more on the clinical decision rather than a standard follow-up period. So we could have a follow-up period of, say, three days, but we could also have a follow-up period of, say, 21 days, or seven days after the last administration. So for example, let’s say we have a cycle of three administrations of a drug, administered once every week. This could be a period of 21 to 28 days, according to what the follow-up is: we give the first administration and we record if there are any DLTs, second administration, we record if there are any DLTs, third administration, we record if there are any DLTs. So this would be a 21-day regimen, for example.

[00:08:03] Alexander: Okay. And when you then check for these DLTs, do you basically need to have a database lock after each of these waves, get all the information in, do the analysis, and check for these things?

[00:08:22] Miguel: You don’t necessarily need the database lock, because usually you do the database lock when you’re doing the final analysis or an interim analysis. So in this case, it depends on how the assessment of DLTs is done and when the dose escalation meetings occur. So if, for example, we have a certain drug where a patient comes in, is given the drug, and the patient, or the cohort, is assessed for seven days after, we could potentially have a dose escalation meeting at the end of those seven days, and we would decide whether we escalate or do not escalate. That is something specific to these dose escalation trials: you need to get the data, analyze the data, and then decide whether you escalate or don’t escalate.

And in a way, that’s why the three plus three design, which has many limitations, is very useful: the clinicians have been observing the patients, so they know the number of DLTs, and they just know whether they can escalate or not. The amount of statistical work is minuscule.

[00:09:22] Alexander: Yeah. You just need to make sure that you have all your data in place.

[00:09:25] Miguel: Exactly.

[00:09:26] Alexander: Yeah. Okay. Very good. Let’s talk a little bit about, you just said it’s pretty simple and easy but what are the limitations of it?

Yes, everything tends to be a trade-off. So you trade simplicity for, in this case, less accuracy and less flexibility. And with simplicity come many limitations, like being stuck with the dose levels you have. So if you defined five dose levels at the beginning of your trial, those are the dose levels you’re going to assess. If for some reason you think, oh, perhaps the last dose level or the second-to-last dose level might be a bit too much, you cannot just go and say, we’re just going to reduce that dose. You don’t have that flexibility. Also, your cohort sizes are three plus three; you cannot do four, you cannot do two, you cannot do seven. And that’s something I think clinicians have a hard time understanding. The reason why it’s three plus three is because we’re targeting a certain proportion of DLTs that comes from the fact that it’s three: one out of three is a third.

[00:10:29] Miguel: And one out of six is about 16.7%. And these are the DLT rates that are usually accepted in oncology. So our target rate would be about 30 to 33% as an acceptable grade three DLT rate for a cancer drug. So the three plus three is completely fixed, and clinicians have a really hard time understanding that. And on top of that, we’ve talked about the inflexibility of being stuck with the same dose levels; we also have the fact that it’s less accurate. In general, the targeting rate of a three plus three design, meaning, if you do a simulation, the percentage of trials that get the MTD right, is 35 to 40%. So just a little more than a third of trials get it right.

[00:11:14] Alexander: So if we talk about accuracy here, what exactly is accuracy here?

So in this case, accuracy, or the targeting rate, is how many times in a trial, or when you’re simulating trials, you get the right MTD.

Okay. So if you assume, of these five dose levels that you set, you basically know, okay, this one is the right one out of these five. In about a third of cases you get exactly that right one, and in the other cases you would get a higher or a lower one.

[00:11:52] Miguel: Correct. Yeah. And for example, I was just doing some calculations, and if, for example, the real DLT rate is 0.5, so if at a dose level you have a 50% chance of having a grade three DLT, which is very high and way above what you would accept, you have an 11% chance of escalating.

[00:12:15] Alexander: Okay. Yeah.

[00:12:16] Miguel: So you should not only stop because it’s already too high; in 11% of the cases, so if you simulate a hundred trials, in 11 trials, you’ll escalate. So effectively, presumably the next dose would be considered to be toxic and you would stop there, but you would be declaring an MTD that is too toxic. On the other hand, you can be unlucky: you could go to the first dose level, everything’s fine, second dose level, and you observe two DLTs just because you were unlucky. And you’re done, and you cannot test higher dose levels. So in a way, it’s bad both ways, because you can be taking a dose level that is too toxic into a phase two trial, or you might not even do a phase two trial. You might just kill the drug, because you weren’t able to escalate to a level where you can see some efficacy.
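The 11% figure Miguel mentions can be checked with a couple of lines of binomial arithmetic. With a true DLT rate of 50%, the chance of observing at most one DLT among a full cohort of six patients — the situation in which a 3+3 trial would still be allowed to escalate — is about 11% (exact variants of the escalation rule give slightly different numbers):

```python
from math import comb

def p_at_most(k, n, p):
    """P(at most k DLTs among n patients when the true DLT rate is p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# A dose whose true DLT rate is 0.5 -- far above the ~33% target.
# Chance of seeing at most 1 DLT in 6 patients, i.e. of escalating anyway:
print(f"{p_at_most(1, 6, 0.5):.3f}")  # 0.109, Miguel's ~11%

# And the chance of a clean 0/3 first cohort at the same toxic dose:
print(f"{p_at_most(0, 3, 0.5):.3f}")  # 0.125
```

So even at a clearly over-toxic dose, the small cohort sizes leave a non-trivial chance of escalating past it — which is the accuracy problem being discussed.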

[00:13:09] Alexander: Okay. Very good. So that makes a lot of sense, and I think as statisticians we can really easily see these kinds of limitations. I would say one of the biggest limitations is that it’s really difficult to pick in between, because if you think about it, probably there’s some kind of curve.

[00:13:27] Miguel: Exactly.

[00:13:28] Alexander: You probably want to get as close to it as possible, isn’t it?

[00:13:32] Miguel: And in fact, that brings me to another limitation of the three plus three, which is that it’s algorithmic, meaning you have those rules and there is no flexibility around them. And also, when you look at a dose level, you are just looking at that information; you don’t care about what happened before at all. So for example, let’s say you reached the third dose level, and in the first dose you didn’t see anything, the second dose you didn’t see anything, and now you see two DLTs. It’s very different from a trial where at the first dose level you see one DLT, you expand, but you were able to escalate; at the second dose level you see another DLT, you expand, but you managed to escalate; and now you see two DLTs. If you look at the history behind it, those are two very different scenarios, which the three plus three design completely ignores.

[00:14:21] Alexander: Yeah. That makes a lot of sense. When you have, at the next dose level, the first two DLTs, you can more likely assume that maybe you could go further. Whereas if you have, as you said in the second scenario, two DLTs at two different lower dose levels, then, just from a maximum likelihood perspective, I would say it’s much more likely that you have reached your MTD.

[00:14:50] Miguel: Exactly.

[00:14:51] Alexander: Yep. Very good. That makes a lot of sense. So let’s bring in Bayes now. Bayes seems to be one of the really nice things that help us to be much more flexible and bring in some modeling and these kinds of things. How does that work with these Bayesian approaches here?

[00:15:08] Miguel: I guess the first thing is, and that’s one of the things I like about Bayesian statistics, and this is not specific to clinical trials, just Bayesian statistics as a framework or as a school in statistics: I find it a lot more intuitive, not just to me as someone who does statistics, but also when I work with clinicians. And I didn’t say that in the beginning: originally I studied medicine, so I have a medical background, and that’s the way I think. For me, thinking about a p-value is sometimes very convoluted. I can sometimes ask questions to people that know a lot about statistics, and they don’t know the proper definition of a p-value, because when you think about it, it’s not intuitive. With Bayesian statistics in general, I find that things are a lot more intuitive, and it’s more in line with the way our brains work, in the sense that in our decision making, about whether we go shopping right now, because it’s sunny, so it’s not going to rain...

We just assess probabilities, not in a formal way, but informally we are assessing probabilities, and that’s what Bayesian statistics formalizes in a mathematical way, with the use of probability distributions. And that’s why, for me, moving to doing work in statistics in general, and now in clinical trials, using Bayesian statistics is what feels natural.

And on top of that, because we’re talking about probabilistic decision making, we know that’s the biggest advantage of Bayesian statistics: we can include prior information.

[00:16:34] Alexander: Yeah, I think that is a really nice thing. As you said, basically you build on the information as you collect it. So, as you just mentioned, you learn something at the first step, you learn something at the second step, and you maybe go into the experiment with other knowledge about similar compounds, or from maybe some tox studies or whatsoever. And you can leverage that.

[00:17:01] Miguel: And effectively, because we are working mostly in a phase one setting, or phase one/two settings, our sample sizes are very small, and in order for us to have enough statistical power, we need to leverage prior information in order to be able to make good inference. And with this, I’m not implying that we cheat, because a lot of people say you can use prior information to influence your final result, by using a heavy prior that will just overshadow the data. We definitely don’t do that, and we have a lot of ways of working around that, to make sure that the prior is not influencing the data in a biased manner.

[00:17:38] Alexander: Yeah. Well, it’s a little bit of a philosophical question. If you put a lot of emphasis on your prior, you need to have a lot of belief that it’s correct, or a lot of evidence that it’s correct. And if you’re at a borderline, and it looks like we are pretty close, we just need a little bit more evidence, then of course your additional study doesn’t need to be super big. Whereas if you start with nothing, and you have no clue of what’s really going to happen, then of course your study needs to be much bigger, because you need to gather more evidence.

[00:18:14] Miguel: But the use of prior information doesn’t have to be very informative. And a big example I like to give when people come with this argument is: okay, let’s estimate the prevalence of a disease. Let’s think of something like Covid times: what was the prevalence, let’s forget about incidence, the prevalence at any given period of Covid, which at some point was between 10 and 20%. The use of prior information can just be: I know that it’s less than 50%. I’m not saying it’s between 10 and 20. I’m very sure, I don’t know for a fact, but I’m very sure, that it’s less than 50. So this is a way of incorporating prior information which everyone would say is reasonable, and which would increase our statistical power and our ability to make inference without biasing the data.
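Miguel’s “less than 50%” prior can be made concrete with a small conjugate Beta-Binomial sketch. The Beta(1, 4) prior and the survey numbers below are illustrative choices, not from the episode:

```python
def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

def beta_cdf_alpha_one(b, x):
    """CDF of Beta(1, b) at x: closed form 1 - (1 - x)**b."""
    return 1 - (1 - x)**b

# Weakly informative prior: Beta(1, 4) mostly just encodes "prevalence is
# very likely below 50%" -- it puts ~94% of its mass below 0.5.
prior_a, prior_b = 1, 4
print(f"P(prevalence < 0.5 a priori) = {beta_cdf_alpha_one(prior_b, 0.5):.3f}")

# Hypothetical survey: 15 positives out of 100 sampled people.
y, n = 15, 100
post_a, post_b = prior_a + y, prior_b + n - y   # conjugate update
print(f"posterior mean  = {beta_mean(post_a, post_b):.3f}")    # 0.152
print(f"flat-prior mean = {beta_mean(1 + y, 1 + n - y):.3f}")  # 0.157
```

The weak prior barely moves the estimate when the data agree with it, which is exactly the point: it adds a reasonable constraint without overriding the data.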

[00:19:06] Alexander: Yep. Completely agree. That’s a good example. Okay. Very good. Now, if we apply these Bayesian approaches, what kind of flexibility do they give us?

[00:19:20] Miguel: Yes. So I think there are multiple ways of applying Bayesian statistics to trials. But given that we talked about the three plus three: what we usually use in the case of a dose escalation trial is a model-based approach, or a Bayesian model-based approach, where effectively we do a logistic regression model, where we are modeling the probability of a DLT. And the probability of the DLT is a function of an intercept and a dose. So if you think alpha plus beta times dose, our beta parameter is what we want to estimate; it is what we call the toxicity associated with a certain dose, and that is what we are estimating in our model. And in terms of flexibility, multiple things. One is, this being a model-based approach and a regression model, we can make predictions as to what will happen in the future. So I observed the first three dose levels, and, this is one of the advantages, we’re taking into account the entire history, the entire data that’s been observed, and we can estimate the probability of DLT at the next dose level without having observed it. And of course, the way we’re doing this is we have a probabilistic decision rule, where we say, specifically for cancer trials, that we want a DLT rate which is less than 33%.

So ideally a DLT rate which is between 16.7 and 33.3%. And the way we decide: if you look at the posterior distribution of the DLT rate, and the probability that the DLT rate is greater than 33% is greater than 25%, we stop. Okay. So effectively, and I’ll repeat this, because this statement is complicated to understand in itself: essentially, I look at the density of my distribution of the DLT rate, and if the density between 33% and one, so the right-hand side of my distribution, if that is more than 25%...

[00:21:20] Alexander: Yeah, if more than 25% of the density is beyond that threshold.

[00:21:27] Miguel: Exactly. If it is above 33%, then we stop at that dose, because that’s already considered too toxic. Or we don’t escalate, because that’s what we obtain when we run our model for the next dose level. Okay. So that’s how we make our decision.

And this is what is usually done in the oncology setting. But for example, in other settings we’ve used different ways of doing this decision rule, and the threshold will obviously not always be 33: it could be 20, or it could be 50; it depends on the setting, the drug, and what we are modeling. But this is the way we apply a Bayesian logistic regression model to the dose escalation. And going back to what you mentioned in terms of flexibility: even though we can have preset dose levels, because we’re doing a model that considers a continuous scale for the dose, we can effectively test any dose. So if I’ve observed 10, 20, and 30 milligrams of a drug, and I make my predictions, and it’s telling me that I can escalate from 30, but 40 is too much...

[00:22:35] Alexander: You can also do the predictions for everything between 30 and 40.

[00:22:39] Miguel: Exactly. So I could say, let me try 35. And my model is telling me that I can escalate to 35, so I would try 35. And that’s where a lot of the flexibility comes from, because effectively, when we’re designing the trial, we don’t really need to declare all the dose levels we’re going to test. We can say these are the ones we’re going to test in principle, but we can define intermediate dose levels according to a certain set of criteria. Or you can just say that our increments will never exceed a certain amount. And the reason is because we’re making predictions in a regression model.

[00:23:12] Alexander: So you have a little bit of a safety margin.

[00:23:15] Miguel: Exactly. And as we know, if you’ve observed between 10 and 30, you are not going to make accurate predictions for 60, because it just doesn’t make sense. But if we have that boundary on how much we can escalate, then we can make predictions and, in theory, define what would be an optimal dose.
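The model-based escalation Miguel describes — a two-parameter Bayesian logistic regression with the "stop if P(DLT rate > 33%) > 25%" rule — can be sketched with a simple grid approximation instead of MCMC. Everything here is an illustrative assumption (the doses, the cohort data, the priors, and the reference dose), not from the episode or any real trial:

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def posterior_grid(data, d_ref=30.0):
    """Grid approximation to the posterior of (alpha, beta) in the model
    p(DLT at dose d) = logistic(alpha + beta * log(d / d_ref)), with
    beta > 0 so toxicity is monotone in dose.

    data: list of (dose, n_patients, n_DLTs) cohorts.
    Illustrative priors: alpha ~ Normal(logit(0.25), sd=2),
    log(beta) ~ Normal(0, sd=1).
    Returns a list of (alpha, beta, normalized posterior weight).
    """
    a_grid = [-4 + 8 * i / 80 for i in range(81)]        # alpha values
    lb_grid = [-2 + 4 * i / 40 for i in range(41)]       # log(beta) values
    mu_a = math.log(0.25 / 0.75)
    pts = []
    for a in a_grid:
        for lb in lb_grid:
            beta = math.exp(lb)
            logp = -0.5 * ((a - mu_a) / 2.0) ** 2 - 0.5 * lb ** 2  # log prior
            for d, n, y in data:                          # binomial likelihood
                p = logistic(a + beta * math.log(d / d_ref))
                p = min(max(p, 1e-12), 1 - 1e-12)
                logp += y * math.log(p) + (n - y) * math.log(1 - p)
            pts.append((a, beta, logp))
    m = max(lp for _, _, lp in pts)
    ws = [math.exp(lp - m) for _, _, lp in pts]
    tot = sum(ws)
    return [(a, b, w / tot) for (a, b, _), w in zip(pts, ws)]

def prob_overdose(post, dose, d_ref=30.0, cutoff=1 / 3):
    """Posterior probability that the DLT rate at `dose` exceeds `cutoff`."""
    return sum(w for a, b, w in post
               if logistic(a + b * math.log(dose / d_ref)) > cutoff)

# Hypothetical observed cohorts: (dose in mg, patients, DLTs).
data = [(10, 3, 0), (20, 3, 0), (30, 3, 1)]
post = posterior_grid(data)
for d in (30, 35, 40):  # 35 mg is an intermediate dose, as in the discussion
    p_over = prob_overdose(post, d)
    verdict = "escalate OK" if p_over < 0.25 else "do NOT escalate"
    print(f"dose {d} mg: P(DLT rate > 33%) = {p_over:.2f} -> {verdict}")
```

Because the dose enters the model on a continuous scale, the same fitted posterior answers the question for 35 mg just as easily as for a pre-declared 40 mg level — which is the flexibility being discussed. In practice this posterior would come from Stan, JAGS, or similar rather than a hand-rolled grid.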

Yeah. So that makes a lot of sense. And you could also go back to an intermediate dose, isn’t it? So if you go from 30 to 38, and then you see, ah, that is too toxic, you could also go back to something like 35.

Exactly. These models allow us to deescalate, which the three plus three doesn’t; the three plus three is a one-direction model, it is just escalating. But with the BLRM we can deescalate, and of course, like the three plus three, we can stop at a certain dose level and do an expansion cohort to gather more information, and then make a decision whether we want a dose level that is higher or lower. So in that sense, there is complete flexibility; effectively, we’re just including all the information in our trial. Also with the advantage that, because we are using a Bayesian setting, we’re not doing any hypothesis tests. It’s a continual assessment of what is going on.

[00:24:24] Alexander: Yeah, it’s much more of an estimation topic rather than a testing topic.

[00:24:29] Miguel: Correct. Exactly.

[00:24:30] Alexander: Which really is the origin of the problem. Yeah. You don’t want to test the MTD, you want to estimate the MTD. You also mentioned you can be more flexible with the number of patients. So instead of three, you could have, I guess one is probably not a good solution, but two or four or five.

[00:24:50] Miguel: Yeah. So effectively we can use any number of patients in the model, because we’ve got a prior and we are just adding information to that prior. Technically speaking, in a Bayesian setting you would be doing an interim analysis every time you observe a patient, because you’re just adding more information to your model. Of course, that’s not ideal; we don’t want to do that. But you could say, for example, that at the first two or three dose levels, because they are really low, you enroll fewer patients, say one to three patients, but then at the higher dose levels you start enrolling more patients, say three to six. And it doesn’t have to be three or six; it can be 3, 4, 5, 6. It doesn’t matter, because effectively you are just collecting more and more information, enriching the amount of information you have about that drug at different dose levels.

[00:25:34] Alexander: Yeah. That way you get faster to the higher levels where you want to get to. So you jump over the initial, hopefully the initial kind of lower levels that you just set for safety, faster. And especially with these probably more rare diseases, that is really important.

[00:25:53] Miguel: It doesn’t even need to be rare diseases. If you have common cancers, say colon cancer or lung cancer, which are very common: by using fewer patients in the first cohorts, you are reducing the number of patients that are exposed to doses that don’t help them.

[00:26:04] Alexander: For sure. That’s the other thing: ethics. For sure, that’s another topic; I was just coming from the, let’s say, more operational point of view. Okay. Very good. Now, you mentioned at the beginning that with these three plus three designs the beauty is their simplicity. Now, what you just described is obviously everything but simple. It’s simple as you explained it, but I’m pretty sure, if you implement it, that doesn’t look simple.

[00:26:34] Miguel: Yes. The three plus three is as simple as we’ve talked about, and there’s not much more, for people who haven’t heard about it, than we’ve mentioned here. It’s really easy to understand and to know pretty much everything there is to know about it. But with Bayesian statistics, of course, you need more training. You need to know a little bit more about Bayesian statistics and how it’s implemented. And there are two or three challenges with Bayesian statistics, which I don’t think are about understanding, because as I said, I think it’s very intuitive. One pertains to the programming itself: essentially, we are talking about using more specific or more niche types of programming, which involve using tools like JAGS or Stan, and previously WinBUGS.

So the model coding has a specific syntax, which adds to what you need to know and learn. And that’s on top of the coding we need in order to assess the operating characteristics, where we need to test some data scenarios. The reason why we need to test data scenarios is to make sure that the prior is not influencing the results in a way that is biased, for example, or that doesn’t allow us to escalate; the prior itself might not allow us to escalate to the last dose level. So we need to run these simulations, and this can be very heavy in terms of programming, because effectively the more flexible you make the design, the harder it is to simulate a trial under those rules. And for the operating characteristics, we simulate trials, usually say 500. That’s how we determine how many trials are hitting the target DLT rate, or declaring an overdose or an underdose, or stopping early, for example. And all of this is very heavy from a programming standpoint.
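An operating-characteristics run of the kind Miguel describes can be sketched in a few lines. This is a deliberately simplified rule-based escalation of my own, not the actual BLRM machinery, and the scenario DLT probabilities, cohort size, and stopping rules are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scenario: true DLT probabilities per dose level.
# Dose level 3 is nearest a 30% target DLT rate.
true_dlt = [0.05, 0.10, 0.25, 0.45]
TARGET, N_TRIALS, COHORT, MAX_N = 0.30, 500, 3, 24

def simulate_trial():
    """One trial under a toy escalation rule: escalate if the observed
    DLT rate at the current dose is below target, de-escalate if it is
    clearly above, otherwise stay at the current dose."""
    dlt = np.zeros(len(true_dlt))
    n = np.zeros(len(true_dlt))
    d = 0
    while n.sum() < MAX_N:
        dlt[d] += rng.binomial(COHORT, true_dlt[d])
        n[d] += COHORT
        rate = dlt[d] / n[d]
        if rate < TARGET and d < len(true_dlt) - 1:
            d += 1
        elif rate > TARGET + 0.10 and d > 0:
            d -= 1
    # Declare the MTD: the tried dose whose observed rate is closest to target.
    rates = np.where(n > 0, dlt / np.maximum(n, 1), np.inf)
    return int(np.argmin(np.abs(rates - TARGET)))

mtd_counts = np.bincount([simulate_trial() for _ in range(N_TRIALS)],
                         minlength=len(true_dlt))
for d, c in enumerate(mtd_counts):
    print(f"dose {d + 1} chosen as MTD in {100 * c / N_TRIALS:.0f}% of trials")
```

Running this across several invented scenarios (prior too optimistic, prior too cautious, flat dose-toxicity curve, and so on) is what produces the operating-characteristics tables that go into the protocol.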

[00:28:06] Alexander: Yeah. So with Bayesian designs, the study protocol itself already involves quite a bit of programming.

[00:28:11] Miguel: Yes. And I guess you could say that either at that stage or at the stage of the SAP, but for us it’s mostly at the protocol stage. We put all the operating characteristics and the data scenarios in the protocol so that it’s clear what we did and that the prior we have works well. And this brings us to what I think is the main challenge in using Bayesian statistics: the prior. We effectively need to translate clinical information, so we either need to look at the literature or get information from the clinicians, and then transform that into prior distributions. And that is something that I personally find the hardest thing to do in Bayesian statistics, and perhaps the biggest hurdle in using it, because statisticians can very easily pick up the programming and understand Bayesian statistics; that’s very easy. But then you get information from an article or from a clinician who says, “I think the DLT rate at 20 micrograms will be 40%, and the DLT rate at 50 micrograms is going to be 60%,” and you need to translate that information and say, “I think my prior distribution for the DLT rate is this.” That is what I think is hard, and the greatest challenge when you’re starting with Bayesian statistics.
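One simple elicitation device, among many, is to treat a clinician’s best guess as being worth a small number of hypothetical patients and turn it into a Beta prior. The sketch below uses the example numbers from the conversation (40% at 20 µg, 60% at 50 µg); the `effective_n` weighting of 3 patients is an assumption of mine, not a recommendation:

```python
from scipy import stats

def elicit_beta(point_estimate, effective_n):
    """Turn a clinician's best guess for a DLT rate into a Beta prior,
    treating the guess as worth `effective_n` hypothetical patients.
    A deliberately simple elicitation device, not the only option."""
    a = point_estimate * effective_n
    b = (1 - point_estimate) * effective_n
    return stats.beta(a, b)

# Elicited guesses from the conversation: 40% DLT at 20 µg, 60% at 50 µg.
# Treat each guess as weakly informative: worth about 3 patients of data.
for dose, guess in [(20, 0.40), (50, 0.60)]:
    prior = elicit_beta(guess, effective_n=3)
    lo, hi = prior.ppf([0.025, 0.975])
    print(f"{dose} ug: Beta({guess * 3:.1f}, {(1 - guess) * 3:.1f}), "
          f"95% interval {lo:.2f}-{hi:.2f}")
```

Plotting the resulting densities and showing them back to the clinician (“is this really what you believe?”) is usually the feedback step that makes the elicitation credible.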

[00:29:30] Alexander: Yeah. The prior elicitation topic.

[00:29:33] Miguel: That’s correct.

[00:29:34] Alexander: Definitely a big one. Very good. By the way, Miguel will also present at the Effective Statistician Conference that is coming up. So if you haven’t heard about this conference yet, head over to the Effective Statistician landing page or homepage, check it out, and register. The conference will take place on April 25th, 2023; as we are recording this, we are at the end of 2022. And I’m really excited about this. You will not only hear what Miguel has to say about Bayesian statistics, but you will also see a lot of these different concepts, and with a couple of the things that we talked about, posterior and prior distributions, the probabilities that something is bigger than some threshold, it’s sometimes easier to see them. Head over to effectivestatistician.com and check out the Effective Statistician Conference. It’s great to have Miguel presenting there.

[00:30:48] Miguel: Thank you for having me. It’s good fun.

[00:30:50] Alexander: Yeah, it’ll be fun. It will be a pretty nice event. We have so many great speakers lined up. Can’t wait to get to it. Okay, very good. In terms of Bayesian statistics, is there anything that you would like the listener to take away from the call today?

[00:31:08] Miguel: One thing, starting with the three-plus-three: it’s still used very widely in oncology, and it’s also known to be a very suboptimal methodology. And even though people might not want to invest the time to learn the BLRM or other model-based approaches, there are other designs that can be used, like the mTPI-2 or BOIN, which is effectively Bayesian too, that can serve as a transition to model-based methodologies, which are more optimal in terms of defining the MTD. But in terms of Bayesian statistics, the thing I think is useful to know is that there is a lot of value in using Bayesian methodology in trials in general.

And effectively, one very well known trial used Bayesian methodology, and that is the Pfizer-BioNTech COVID-19 vaccine study. The first publication they did, and this was a phase 1/2/3 trial, used a Bayesian methodology; not a Bayesian logistic regression, but a different method. And the trend seems to be that Bayesian methods are going to be used more often, because there are a lot of advantages in terms of operating characteristics, flexibility, and ease of use, and also because they are intuitive. And I don’t think that there is a battle between frequentist statisticians and Bayesian statisticians. I don’t see myself as a Bayesian statistician; I use Bayesian or frequentist methods when appropriate.
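The BOIN design Miguel mentions as a stepping stone is attractive precisely because its escalation and de-escalation boundaries come from a simple closed form. A sketch, assuming the commonly cited default settings φ1 = 0.6φ and φ2 = 1.4φ around a target DLT rate φ:

```python
from math import log

def boin_boundaries(target, phi1=None, phi2=None):
    """BOIN escalation/de-escalation boundaries (Liu & Yuan's design).
    Defaults follow the commonly used phi1 = 0.6*target, phi2 = 1.4*target."""
    phi1 = 0.6 * target if phi1 is None else phi1
    phi2 = 1.4 * target if phi2 is None else phi2
    lam_e = log((1 - phi1) / (1 - target)) / log(
        target * (1 - phi1) / (phi1 * (1 - target)))
    lam_d = log((1 - target) / (1 - phi2)) / log(
        phi2 * (1 - target) / (target * (1 - phi2)))
    return lam_e, lam_d

lam_e, lam_d = boin_boundaries(0.30)
print(f"escalate if observed DLT rate    <= {lam_e:.3f}")
print(f"de-escalate if observed DLT rate >= {lam_d:.3f}")
# With a 30% target these come out near 0.236 and 0.358.
```

Because the trial is run by comparing the observed DLT rate at the current dose against two fixed thresholds, clinicians can follow the design from a small lookup table, which is what makes BOIN a gentle transition away from the three-plus-three.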

[00:32:27] Alexander: I completely agree. It’s about using the right tool at the right time and not being dogmatic about it.

[00:32:34] Miguel: Exactly. Because I think there’s a lot of value in using Bayesian methods in early-stage development, where you have small sample sizes. But then as you get to phase three, where the sample sizes are much larger, where you want to have more control of the operating characteristics, and where a prior doesn’t have that much influence, frequentist methods are probably a better option for many different reasons. It’s more about using the most appropriate approach rather than saying, “I just do Bayesian,” or “I just do frequentist.”

[00:33:02] Alexander: Yeah. And that’s a great way to end the episode today: enrich your toolset if you’re working in these areas. Get to know all these different methods, and tune in for the Effective Statistician Conference on April 25th. Thanks so much, Miguel. Thanks for being on the show today.

[00:33:24] Miguel: Thank you very much. This was amazing. I don’t get tired of speaking about operations.

Join The Effective Statistician LinkedIn group

I want to help the community of statisticians, data scientists, programmers and other quantitative scientists to be more influential, innovative, and effective. I believe that as a community we can help our research, our regulatory and payer systems, and ultimately physicians and patients take better decisions based on better evidence.

I work to achieve a future in which everyone can access the right evidence in the right format at the right time to make sound decisions.

When my kids are sick, I want to have good evidence to discuss with the physician about the different therapy choices.

When my mother is sick, I want her to have access to the evidence and be able to understand it.

When I get sick, I want to find evidence that I can trust and that helps me to have meaningful discussions with my healthcare professionals.

I want to live in a world, where the media reports correctly about medical evidence and in which society distinguishes between fake evidence and real evidence.

Let’s work together to achieve this.