Introduction to Personalized Medicine Part 2

In this episode, Thomas Debray and I dive back into the intriguing world of personalized medicine.

We explore the advancements and hurdles in customizing patient care.

Have you ever wondered what makes developing personalized treatment strategies so challenging, especially when traditional studies may not always provide clear answers for individual patients?

Why are crossover trials unique, yet not entirely suited for generalizing results to new patients?

And how does the standard phase III study design fall short in the quest for personalization?

Join us as we navigate these complex questions, shedding light on the opportunities and obstacles that lie ahead in making personalized medicine a reality for all.

We also dive into the following key points:

  • Personalized medicine challenges
  • Developing personalized strategies
  • Traditional study design limitations
  • Crossover trials and their uniqueness
  • Phase III study design inadequacy
  • Generalizability issues
  • Data collection and design for individual patient benefit
  • Observational data vs randomized trials
  • Leveraging existing data sources
  • Importance of standardization
  • Integrating multiple studies and data types
  • Future of personalized medicine in various indications

If you found this episode enlightening, we encourage you to share it with friends, colleagues, and anyone who stands to benefit from a deeper understanding of personalized medicine’s potential and pitfalls.

Together, we can advance the conversation and move closer to a future where healthcare is as unique as the individuals it serves.

Interested in harnessing the power of leadership to drive meaningful impact in the realm of statistics together? Check out this program: The Effective Statistician Leadership Program

Thomas Debray

Founder and Owner of Smart Data Analysis

Thomas offers biostatistical consulting services in the design and conduct of post-marketing studies. He also leads various innovation projects focusing on meta-analysis and risk prediction.

Transcript

Introduction To Personalized Medicine Part 2

[00:00:00] Alexander: Welcome to another episode of the Effective Statistician. I’m super happy to have Thomas Debray on the show again. He has been on the show a few times already, and last time we dived deeper into personalized medicine. Today we want to continue on that and speak about the challenges around it and also the most promising opportunities.

[00:00:33] Alexander: So great to have you back.

[00:00:35] Alexander: Let’s start with the typical hurdles. What do you think are some main challenges when it comes to personalized medicine?

[00:00:47] Thomas: Yeah, I think one of the key challenges, of course, is developing these strategies to personalize treatment, because, I mean, research is not really designed to, or studies, I should say, studies are not really designed to come up with answers for individual patients.

[00:01:03] Thomas: And maybe one exception would be crossover trials, but, you know, then again, those results would not directly be generalizable to new patients. So I would say the biggest challenge is really to get data that allows you to, you know, say something about individual patients and how they would benefit from treatment.

[00:01:19] Alexander: Mm-hmm. Crossover studies are special because the patients get all the different treatment options, and so you can see within-patient differences much better.

[00:01:34] Thomas: Yeah, that’s right. And the idea of a crossover trial, by the way, is not really to learn about individual treatment effects, but it is kind of an efficient design to, you know, expose patients to multiple treatments, like you said, so that you can still learn about relative effects while minimizing

[00:01:49] Thomas: the, you know, variability that you usually have between individuals if you would do a parallel design.

[00:01:58] Alexander: Yep. It’s a great design, if you have the right indications to do it.
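
The efficiency Thomas describes can be sketched with the textbook variance formulas for the two designs, assuming no carryover or period effects; the variance components and sample size below are invented for illustration:

```python
# Variance of the estimated treatment difference: parallel vs. crossover.
# Hypothetical variance components:
sigma_between = 2.0   # between-patient standard deviation
sigma_within = 0.5    # within-patient (residual) standard deviation
n = 50                # patients per arm (parallel) / patients total (crossover)

# Parallel design: each patient contributes one observation, so
# between-patient variability is part of the noise.
var_parallel = 2 * (sigma_between**2 + sigma_within**2) / n

# Crossover design: each patient is their own control, so the
# between-patient component cancels in the within-patient difference.
var_crossover = 2 * sigma_within**2 / n

print(var_parallel, var_crossover)
```

With these made-up numbers the crossover estimate is 17 times more precise, which is exactly the "minimizing between-individual variability" point.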

[00:02:04] Thomas: Yeah, and indeed, at the end of the day, the goal is still to obtain relative treatment effects.

[00:02:08] Thomas: But, I mean, in theory you can learn about individual treatment effects for the patients in your trial. So it’s a design that kind of connects with the personalized medicine idea, but it’s not a design that, I would say, is suitable to really start implementing personalized medicine.

[00:02:28] Alexander: So think about, let’s say, a typical phase three study. You have standard of care and the new treatment, or maybe you have something like standard of care plus placebo, standard of care plus an active comparator, and standard of care plus the new treatment. Why is that design so bad for personalization?

[00:02:53] Thomas: You mean a standard parallel design, just randomized? Yeah, because trials, I would say, are usually powered to, you know, obtain relative treatment effects. So when you design a trial, one of the critical steps is the sample size calculation, to decide how many patients am I going to include in my trial?

[00:03:12] Thomas: And typically you would power your trial so that, you know, you have a certain expectation about the effect of your treatment, and then you power your trial so that you can demonstrate, right, that the effect is indeed there. Now, in theory, you could power your trial to demonstrate effects in different, let’s say, subgroups.

[00:03:30] Thomas: So you’d look at a treatment effect of this magnitude in males and a treatment effect of that magnitude in females, or maybe even in smaller subgroups where, you know, you stratify by, I don’t know, five, six, seven covariates. So in theory you could do that, but you will find out quite quickly that the sample size that you need to establish efficacy in all these, you know, small, specific subgroups is going to inflate your randomized trial by a huge amount.

[00:03:56] Thomas: And so it’s not something that sponsors would usually pursue, for various reasons. So, I mean, in general, the objective for a trial is to demonstrate that the treatment works. The goal is not to demonstrate what the effect is in, you know, different people with different types of background indications.
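
A rough sketch of the sample-size inflation Thomas mentions, using the standard normal-approximation formula for comparing two proportions (the event rates, alpha, power, and number of subgroups are all invented for illustration):

```python
from statistics import NormalDist

def n_per_arm(p_control, p_treated, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for comparing two proportions."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    variance = p_control * (1 - p_control) + p_treated * (1 - p_treated)
    return (z_alpha + z_beta) ** 2 * variance / (p_control - p_treated) ** 2

# Powering once for the overall relative effect (30% vs 20% event rate):
overall_total = 2 * n_per_arm(0.30, 0.20)

# Powering separately in 6 subgroups (say, sex times three comorbidity strata),
# assuming the same effect in each: the total multiplies accordingly.
subgroup_total = 6 * 2 * n_per_arm(0.30, 0.20)

print(round(overall_total), round(subgroup_total))
```

And in practice the expected effect within small subgroups is usually smaller and less certain, which inflates the requirement even further.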

[00:04:13] Alexander: Yeah. I think there is one other case, where you would have different comparators based on where you are. And I’ve seen that very often you then have multiple studies. So you have one study in, let’s say, naive patients, another study in pretreated patients, another study in patients with a specific comorbidity.

[00:04:41] Alexander: Another study in patients that need a very specific supportive care, things like that. And then you have lots of different studies, not just one study. If you have that kind of setup, is that something that then helps you to personalize more?

[00:05:04] Thomas: I mean, so it’s if we plan to develop these models that you need to start personalizing treatment decisions.

[00:05:10] Thomas: I mean, yes, you need, I would say, heterogeneous data, right? You need data and evidence for how the treatment works in different types of populations. So, like you say, if I would only have data from a single trial, let’s say the trial is somehow very big, but you only included, like, patients who have, for example, no history of prior medication, you know, then the generalizability of whatever model I’m going to develop in that set of patients is going to be limited by who I enrolled in the trial.

[00:05:38] Thomas: And on top of that, usually in trials, the type of patients you include is always a specific type of patients, right? You don’t just select patients from clinical practice, you know, randomly including them in your trial; usually there was some kind of selection process there, on top of the specific restrictions that you already had in mind for your trial.

[00:05:58] Thomas: So yes, I mean, having trials from multiple indications or multiple, let’s say, background comorbidities is going to help you, right, to kind of build a profile of populations that, individually, may only cover a specific subset of the population where you want to target your drug.

[00:06:18] Thomas: But if you have multiple trials that cover different subsets, you know, if you globally consider this set of trials, you may still not, of course, capture the entire population that you had in mind, but the coverage, right, will be much better than the coverage of a single trial.

[00:06:34] Thomas: Whatever the size of that single trial may have been. 

[00:06:36] Alexander: Yeah, yeah, I see that point. When you talk about heterogeneity, I think you talk about heterogeneity in terms of the patient population, in terms of where the patients are coming from. Is that correct?

[00:06:53] Thomas: Yeah, so where the patients are coming from, so not only geographical heterogeneity, but also in risk profile, right?

[00:07:00] Thomas: I mean, just how they look, how the disease manifests itself, to some extent, right? There’s always a question of how much heterogeneity you want, but a certain level of heterogeneity is definitely good to develop a degree of generalizability.

[00:07:17] Alexander: Which variables are really important to have variability in at baseline?

[00:07:25] Thomas: Yeah, so that’s a good question. So in terms of efficacy, I mean, the key variables are those that drive the treatment response, the treatment-covariate interactions, right? Covariates that affect the magnitude of your treatment effect. So definitely you would want variability in those covariates, but I would argue that you would also want variability in covariates that maybe don’t affect the efficacy of your treatment, or at least covariates where you’re not sure, because at least having that variability will help you to understand that there is no effect modification by these covariates.

[00:07:57] Thomas: It will also help you to kind of robustify your opinion, right, about the generalizability of your models. Because all these models that you’re developing, you can validate them, of course, you can get some estimates of performance, but, you know, then you will still have some questions, right? About, okay, how is this model going to perform in the next patient, or tomorrow, right?

[00:08:14] Thomas: Today it works, but does it work tomorrow? So, you know, to build some kind of confidence in the performance of these models, you need a certain degree of variability: variability that affects the performance of your model, or that affects the effects of your treatment, or variability

[00:08:28] Thomas: that doesn’t, but where, you know, you suspect that it might have an impact. Yeah.
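
One way to see why you need variability in a covariate even to check for effect modification: if the covariate is constant in your data, the treatment-by-covariate interaction column of the design matrix is perfectly collinear with the treatment column, so the interaction is not estimable at all. A toy illustration with hypothetical data:

```python
# Columns for a model of the form: outcome ~ treat + covariate + treat:covariate
treat = [0, 1, 0, 1, 0, 1]

# Case 1: no variability -- every patient has the same covariate value (e.g. age 70).
cov_constant = [70, 70, 70, 70, 70, 70]
inter_constant = [t * c for t, c in zip(treat, cov_constant)]
# The interaction column equals 70 * treat: perfectly collinear with the
# treatment column, so its coefficient cannot be separated from the main effect.
collinear = all(i == 70 * t for i, t in zip(inter_constant, treat))

# Case 2: with variability, the interaction column carries its own information.
cov_varied = [60, 65, 70, 75, 80, 85]
inter_varied = [t * c for t, c in zip(treat, cov_varied)]
distinct = any(i != 70 * t for i, t in zip(inter_varied, treat))

print(collinear, distinct)
```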

[00:08:34] Alexander: And so when we think about these models that predict treatment effects based on the covariates that we have at baseline, then of course we want to have variability in those that have an effect, that we know basically from prior studies, from biological plausibility, just to have them in for sure.

[00:09:00] Alexander: Also those that potentially have an impact. And of course, this is not a yes-or-no kind of thing. Very often it’s, oh yes, this covariate has some effect, and this has a big effect, and this has potentially only a very small effect. And so it’s great to have variability in all these different baseline variables.

[00:09:30] Alexander: Now, when you talk about models, from my understanding there’s a couple of different steps in terms of building trust, to develop these different models. When I first started at university, my approach was just, okay, I throw in all the variables, then I run a model across all the different patients I have, and then I get an output.

[00:09:58] Alexander: However, this is not the best way to do it, for a couple of different reasons. What are these reasons that I shouldn’t develop a model like what I just described?

[00:10:15] Thomas: Yeah, I think we can learn a lot from prognostic models, which are already quite common. So prognostic models are models where you just want to predict progression, right?

[00:10:24] Thomas: So regardless of treatment, so everything status quo. You just have a group of patients, you’re not going to start intervening; they are maybe receiving a certain standard of care, or maybe not receiving a certain type of treatment, and you just predict what’s going to happen.

[00:10:37] Thomas: So common examples are models for cardiovascular disease, you know, where you would predict unfavorable outcomes on a time span of, say, 10 years. And then those predictions are used to inform decision making and to decide upon treatments. And you also have models for predicting the outcome after surgery, and so forth.

[00:10:56] Thomas: But the main findings, I mean, a lot of research has been done in this area, right? And how these models are developed is, like you say, often they would conduct an observational study, or maybe a randomized trial. They would use the data and just, you know, feed it into an algorithm like a GLM, you know, like a logistic regression model, or maybe a survival model, or maybe some fancy machine learning model.

[00:11:17] Thomas: And maybe they would inform the selection of covariates a bit, but a lot of things are data-driven, right? They just use an algorithm to figure out which variables go in and which variables go out. And then they publish these models, potentially with some claims that they work great, or without any claims.

[00:11:32] Thomas: And what we’ve learned over the past few decades is that many of these models perform much worse than anticipated. I’ve not seen many models that actually perform as well as, or even better than, was originally claimed. And the recurrent problems are, yeah, challenges in, of course, the data collection, right, which data is used.

[00:11:52] Thomas: So how representative is that data for the population that we have in mind? All these data-driven types of approaches are very sensitive to chance findings, right? Like kind of random decisions, basically, by the algorithm, that are driven by the data. And then also the efforts made to assess the performance of these models are often suboptimal.
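
The chance-findings problem is easy to simulate: screen many pure-noise covariates, keep the one that looks best in one half of the data, and its apparent effect typically shrinks drastically in the held-out half. A small sketch (the seed, sample size, and number of covariates are arbitrary):

```python
import random

random.seed(1)

def corr(x, y):
    """Pearson correlation, computed by hand to stay dependency-free."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

n, p = 100, 500
# Outcome and covariates are independent noise: no real effects exist at all.
y = [random.gauss(0, 1) for _ in range(2 * n)]
X = [[random.gauss(0, 1) for _ in range(2 * n)] for _ in range(p)]

train, test = slice(0, n), slice(n, 2 * n)

# "Data-driven selection": keep the covariate most correlated with the
# outcome in the training half.
best = max(range(p), key=lambda j: abs(corr(X[j][train], y[train])))
train_corr = abs(corr(X[best][train], y[train]))
test_corr = abs(corr(X[best][test], y[test]))

print(train_corr, test_corr)  # the training correlation is a pure chance finding
```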

[00:12:09] Thomas: So there’s many deficiencies in this process. And that’s just for models where we don’t even try to predict how patients benefit from treatment, just models that try to predict, you know, the future risk of developing a certain outcome regardless, status quo. And these models are actually also used to identify individual treatment effects, because often these models are used, for example, like, you’d have this prediction, ah, my risk of developing an MI within the next year is, I don’t know, 20 percent, and then you have the result from your trial, you know, the treatment reduces my risk by, I don’t know, a factor 0.8.

[00:12:45] Thomas: So you apply the 0.8 to my absolute risk, and there’s your so-called individual treatment effect. So that’s kind of the implicit way of thinking that is not that uncommon, but it’s based on, you know, loose pieces of evidence that you assume all connect very well, that you can apply that relative effect to this absolute risk, and that all these things make sense and can be added, and it’s all on the assumption that all these pieces of evidence are correct, are valid, but also that they are compatible, right?

[00:13:15] Thomas: And there, I think, there’s a lot of, how do you say that, stretching your confidence quite a bit far. So that’s just my experience so far.

[00:13:27] Alexander: What would be your advice in terms of building such models in the first place?

[00:13:34] Alexander: There’s these things like training data set and test data set and things like that. Can you speak a little bit more about what that means from a model-building perspective?

[00:13:48] Thomas: Yeah, so from a model-building perspective, the key challenge is that, unlike prognostic models, where we just want to have a sort of descriptive model, right?

[00:13:58] Thomas: It’s not really a causal model, it’s a descriptive model. So for prognostic models, the recommendation is that you want an observational study, because it’s a representative sample of your target population. Now, for prediction models for treatment benefits, it becomes much more complex, because we’re interested in a causal prediction.

[00:14:15] Thomas: And that means that we need data where we can learn causal effects. So, you know, basically we would definitely benefit from leveraging data from randomized trials to develop these models. But at the same time, data from randomized trials might not be sufficient, because the patients in your randomized trials are typically not, I mean, they are representative, but they’re not directly exchangeable, right, with your

[00:14:34] Thomas: target population. So certain model parameters cannot properly be estimated from randomized trials, and you need observational data to make that translation, right, like to really translate how these risk estimates or these causal effects would apply to individuals that you see in clinical practice. And so essentially you would need a mixture of both randomized and observational data to, you know, complement, right,

[00:15:03] Thomas: the missing pieces of information that you have. So the randomized trials give you evidence on relative treatment effects, right? On causal effects that apply at the population level. Observational data gives you evidence about, you know, baseline risk, but to some extent also about treatment-covariate interactions.

[00:15:18] Thomas: Although, yeah, to some extent it also gives you information that you can learn from randomized trials, but the cost of learning that information is much higher in observational studies than it is in randomized trials. And that’s why, ideally, you would have this mix, and then ideally you also have patients from observational studies and randomized trials that kind of cover all the subpopulations that you have in mind, right?
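
As a sketch of how the two evidence sources combine in the simplest case: an observational data set supplies a baseline-risk model, and a trial supplies the relative effect applied on top. Everything here is hypothetical, including the coefficients, the patients, and the strong assumption that the relative effect is constant across risk strata:

```python
import math

def baseline_risk(age, diabetic):
    """Hypothetical logistic baseline-risk model; the coefficients are invented,
    standing in for a model fit to representative observational data."""
    lp = -5.0 + 0.04 * age + 0.8 * diabetic
    return 1 / (1 + math.exp(-lp))

relative_risk = 0.80  # population-level effect from a randomized trial

for name, age, diabetic in [("low-risk patient", 50, 0), ("high-risk patient", 75, 1)]:
    risk = baseline_risk(age, diabetic)
    benefit = risk - relative_risk * risk
    print(f"{name}: baseline {risk:.1%}, absolute benefit {benefit:.1%}")

# The same relative effect implies a much larger absolute benefit for the
# higher-risk patient, which is why the baseline-risk part matters.
```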

[00:15:39] Thomas: Because if you develop these models, you hope that you can apply them afterwards in clinical practice. But then the question is, okay, which clinical practice, right? Is it just one practice in one specific country, or do you have the ambition to also start using it on a more global scale, right?

[00:15:55] Thomas: Like maybe everywhere in Europe. And then one question would be, ah, you can apply it today, but what about tomorrow? Does it still work after, you know, new interventions, or the standard of care changing, and so forth? So there’s a lot of, not loose, but dynamic elements in this whole, how to say that, in this whole landscape, right?

[00:16:13] Thomas: So the data is a big challenge. But even if you have, like, the perfect data for your model, there are still other challenges that have to be considered, that relate, at the end of the day, I would say, to generalizability, right? So generalizability is not only a fixed property, it’s also a dynamic property of models.

[00:16:33] Alexander: Yes, that’s a good aspect; clinical practice changes over time, for sure. When you speak about observational data, lots of people first think: oh, then I also run a single-arm observational study where all the patients get my new treatment. And that’s great.

[00:16:55] Alexander: My marketing and sales colleagues will love this type of study as well. So let’s run such a single-arm observational study directly after we get the drug on the market. Would these one-armed studies help with prediction models?

[00:17:18] Thomas: To a limited extent. I mean, they will only give you an indication about prognosis in patients exposed to your active drug, but, at the end of the day, you would also like to have information on all the comparators, right? Especially if you want to individualize treatment effects, you want to understand what these contrasts are, right?

[00:17:34] Thomas: What are all these other treatments that are relevant, that are available, and how would the patient benefit if they were to take, you know, one of those drugs? And to have this type of evidence, yeah, you would either need, what is it, a randomized trial with multiple arms, but your power is going to be very limited to do a lot of sophisticated analyses with that.

[00:17:53] Thomas: Or you could learn from electronic healthcare record data, or observational studies, or other types of observational sources, to see, for patients who are currently being treated with these alternative treatments, you know, how is the drug working in those patients? And at the same time, you might also realize that the efficacy or the effectiveness of these drugs that you’re observing in observational data sources might not always correspond to the efficacy estimates that were reported in the initial trials.

[00:18:19] Thomas: So then you have additional questions, right? Like, why are these discrepancies there? Is it because of adherence? Is it because of interactions with the treatment? So there’s a lot of other types of modifiers that can affect these estimates. And we are becoming more aware of these things as well, like the estimands framework, where you also start thinking more explicitly about, okay, when we say treatment effect, what do we mean by treatment effect?

[00:18:42] Thomas: What are we talking about? And this is already quite challenging when dealing with randomized trials, but it is even more challenging to define treatment effects when dealing with observational studies, where, what we call intercurrent events in a randomized trial, I mean, the observational world is full of, you know, unplanned things that are happening at the start of treatment, but also after treatment, and so forth.

[00:19:07] Alexander: So one observational study doesn’t work?

[00:19:13] Thomas: It could be an expensive study. I mean, you would have to set up, ideally, a prospective, multicenter observational study, you know, where you collect all the relevant data at the relevant time points.

[00:19:26] Thomas: So in theory it’s possible, but in practice it may not be the most efficient way to achieve that goal.

[00:19:34] Alexander: What would you think is the best, and, well, best is probably the wrong word, but the most efficient way to get to something like a good model?

[00:19:43] Thomas: Yeah. So my impression is that the most efficient way would be to leverage existing data sources as much as possible, while, you know, identifying the gaps in the evidence that is there, and then to prospectively design studies to kind of fill those gaps.

[00:19:58] Thomas: But this might not always work, in the sense that when you look at observational data, there’s still a lot of variety, right? In the quality, but also the operationalization, right? Like, standards, I mean, standards are becoming, what is it?

[00:20:12] Thomas: More prevalent. But if you have studies that all adopt very different ways of defining endpoints, or defining certain disease conditions or comorbidities and so forth, that’s adding a lot of noise to the whole process. And, you know, if you start mingling data across data sources that are not

[00:20:30] Thomas: standardized in a similar fashion, you have a high risk of identifying effects or interactions that are not really there, but that are reflecting some discrepancies between your data sources. So if you want to avoid all these problems and really effectively put your pieces of evidence together, like a puzzle, you know, where you don’t have to force the pieces together, but they actually fit,

[00:20:55] Thomas: I mean, standardization is going to be a key element in that. And of course, for clinical trials, they already have standardization schemes, and it’s very rigorous and very well defined. But for observational studies, standardization, I mean, it’s coming. There are now OMOP and some other formats, but I don’t know to what extent they are currently implemented.

[00:21:15] Thomas: And I think there are still a lot of gains that can be made to really make observational data sources much more, you know, not exchangeable, but compatible with one another, and also compatible with evidence from randomized trials. And so I would say that’s kind of a problem of infrastructure, I think, right?
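
As a toy illustration of the operationalization problem (the variable names and records below are invented; real harmonization would target a common data model such as OMOP, though the unit conversion shown is the standard NGSP/IFCC relation for HbA1c):

```python
# Two hypothetical sources recording the same lab value differently.
source_a = [{"patient": "A1", "hba1c_pct": 7.2}]        # percent (NGSP)
source_b = [{"patient": "B1", "hba1c_mmol_mol": 55.0}]  # mmol/mol (IFCC)

def to_common(record):
    """Map each source's record onto one agreed variable and unit."""
    if "hba1c_pct" in record:
        value = record["hba1c_pct"]
    else:
        # Standard NGSP/IFCC master equation: % = 0.09148 * (mmol/mol) + 2.152
        value = 0.09148 * record["hba1c_mmol_mol"] + 2.152
    return {"patient": record["patient"], "hba1c_pct": round(value, 2)}

pooled = [to_common(r) for r in source_a + source_b]
print(pooled)

# Without this step, pooling the raw values (7.2 vs 55.0) would manufacture
# a spurious "difference" between the two sources.
```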

[00:21:31] Thomas: Or infrastructure. So I think there’s a lot of gains that have to be made, but they are being made. So that’s one element. And the other element would be the methods, right? To kind of put all these pieces together. But we are getting there. And I really think that the most efficient way to develop these models, to start looking at individual treatment effects, is to really leverage these data sources and to basically map which evidence is out there.

[00:21:58] Thomas: How do these pieces of evidence connect? So we have randomized trials, we have maybe a single-arm study, we know there is a registry in that country, we know that this hospital is doing an observational study. So what is this evidence telling us, right, as individual pieces?

[00:22:16] Thomas: What is missing to start connecting these different data sources? Maybe we need to run a specific type of study. Maybe we need to start a prospective cohort study, or maybe we want to do a study where we work with, you know, PROs, or wearable devices, to collect some very specific endpoints where we have a sparsity of data. And then, once you have all these different pieces of studies, or different pieces of data, you can start working on the statistics to

[00:22:44] Thomas: integrate all these data sources in a rigorous fashion, but also to understand, okay, what are the risks, right, of putting this data together? What’s the quality? How much confidence can I have in the results that come out of these analyses? And that’s, I mean, so statistics is only a part of it.

[00:23:00] Thomas: It’s a very important part, but it’s not everything, right? That’s my point. Yeah.

[00:23:05] Alexander: I’ve worked with combining studies, yeah, multiple studies, even from the same sponsor, done over a period of time in the same indication, and you can still have variability. So if you have standardized outcomes, if you have standardized descriptions of important baseline covariates, that makes a huge difference, makes things so much easier.

[00:23:35] Alexander: And that is definitely an area where anyone working in a specific indication, a specific therapeutic area, can greatly help to push forward standardization and these kinds of things: standardization within clinical trials, so within all the different clinical trials across different sponsors, and also making the data capture,

[00:24:03] Alexander: the way of how you capture the data, as compatible with real-world settings as possible, so that whenever you have an observational study or a registry or something like this, you can reuse the same variables that you used in your clinical trials.

[00:24:24] Thomas: Yeah, and not only that, because for randomized trials, we often look at

[00:24:28] Thomas: clinical endpoints, right. But it would make sense, even though they’re not powered for it, to start looking at PROs and things like that. It would be helpful to collect some of these PROs in clinical trials, because, like you say, it will make it also easier to start mapping that piece of evidence with observational studies, where you might actually have more of these kinds of soft endpoints being collected than in clinical trials.

[00:24:50] Alexander: Yeah, so that’s good advice. Just because in the specific studies that you’re thinking about you might not see a difference; thinking about it from a holistic point of view, of all the different evidence that you can accumulate over time, all these different parts will sum up to a relevant difference, to relevant evidence.

[00:25:14] Alexander: And so, yeah, think beyond your study, for sure. When we think about these kinds of settings, I’m pretty sure there will be indications where it’s easier to implement these thoughts that we just had, and other indications where it will be much harder. What do you think are the indications that are most promising for putting this into reality?

[00:25:43] Thomas: Yeah, I would say it’s indications where it’s relatively easy to obtain large amounts of data, right? Indications where data collection is already part of healthcare processes, and where trials are relatively large. The challenge is, of course, that when trials are large, it means that treatment effects are small, so it’s not necessarily the easiest setting. But I would also say: look at indications where the treatment is quite uniquely defined, right, where you have well-established treatments that are not very complex, where the treatment is not a process but rather a kind of one

[00:26:20] Thomas: intervention, right? Where you take the treatment today, and the outcome of interest is not happening, you know, 20 years down the road, but is also relatively short-term. So yeah, to give a label to that, I don’t know, but I wouldn’t say rare diseases.

[00:26:35] Thomas: I think rare diseases are going to be a challenge, for example, because there, already just learning about relative effects is a challenge, let alone individual treatment effects.

[00:26:43] Alexander: Yeah, I also agree. Okay, it is where you can observe something rather fast, where you have big patient populations, and where you can very easily measure the effects.

[00:27:00] Alexander: Yeah, especially where the patients can measure the effects themselves. I’m just thinking about, for example, obesity. Well, measuring weight is pretty straightforward, yeah? You don’t need highly sophisticated radiological equipment. Pretty much everybody can, you know, weigh themselves at home and record that.

[00:27:28] Alexander: These things are super easy. Things where you have specific radiographic assessments, or where you need ratings from a physician, these are harder, especially as these ratings from physicians very often differ between clinical practice and clinical trials. For example, if you think about depression, there are highly sophisticated tools used within clinical trials that are usually not used in clinical practice.

[00:28:02] Alexander: There, as you said, the PROs could help quite a lot. Okay, thanks so much for this great discussion about personalized medicine. We touched on lots of hurdles in terms of study design, what data to capture, how to capture the data, how to build models, up to even which indications are really, really interesting.

[00:28:33] Alexander: And so we’ll definitely have lots of further episodes that we can record on all these different aspects, and further aspects, of personalized medicine. As the key takeaway for the listener, Thomas, what would you say is the most important thing from this episode that people should take away and take home?

[00:28:58] Thomas: Well, I would say two things, right? Opportunity and risk. So I think there’s a huge opportunity in personalized medicine, with data being accumulated increasingly often. We have increasing access to large amounts of data and also an increased ability to connect pieces of evidence.

[00:29:14] Thomas: So I think this opens a lot of opportunities to start looking further than relative effects and really start looking at personalized, individualized treatment effects. I think that’s a new era coming up, where we’re going to see a lot of new types of advanced analytics trying to investigate individual treatment effects.

[00:29:33] Thomas: So that’s the opportunity side. On the risk side, I would say: look, the development of these models and the understanding of their performance is far from straightforward. It may look simple in the sense that you have data and you just run an algorithm on the data. I mean, I could explain it like that, and yes, you can do that.

[00:29:50] Thomas: But the chance that such an algorithm is going to give you reliable and valid estimates of individualized treatment effects, I would say, is very small. There are a lot of hurdles, you know, to make it work: to have the right data, to understand that you have the right data, to understand what data is missing.

[00:30:09] Thomas: So it requires quite a bit of thinking to do it properly. And that, I think, is one of the big challenges, which will also decrease over time as we get access to better data, better standards, and so forth. But while we are getting there, my takeaway is: be careful developing these models and, yeah, be aware of the limitations that you’re dealing with.

[00:30:36] Alexander: Yep, thanks so much. Model development is definitely a very, very interesting topic, and we’ll definitely have some episodes about it coming up in the future. Thanks so much, Thomas, for another great episode.

[00:30:50] Alexander: Thank you.

Never miss an episode!

Join thousands of your peers and subscribe to get our latest updates by email!

Get the shownotes of our podcast episodes plus tips and tricks to increase your impact at work to boost your career!

We won't send you spam. Unsubscribe at any time.