Interview with Margaret Gamalo
Today, listen to an interview with Margaret Gamalo about how to combine a control arm with real-world evidence data, a really hot topic.
Margaret has also been sharing her knowledge for a long time and has gained lots of opportunities this way. Helping others will help you in the long run. We discuss this aspect in the episode as well.
Join us while we talk about the following interesting points:
- Propensity scoring
- Finding and matching data
- Adjusting for baseline and post-baseline differences
- The importance of presenting research, and how this leads to lots of good opportunities
Listen to this episode and share this with your friends and colleagues!
Senior Director – Biostatistics at Pfizer
Margaret (Meg) Gamalo, PhD is Senior Director – Biostatistics, Global Product Development – Inflammation and Immunology at Pfizer Innovative Health. She combines expertise in biostatistics, regulatory affairs, and adult and pediatric drug development. She was most recently a Research Advisor, Global Statistical Sciences at Eli Lilly and Company, and prior to that a Mathematical Statistician at the Food and Drug Administration. Meg leads the Pediatric Innovation Task Force at the Biotechnology Innovation Organization. She also actively contributes to research topics within the European Forum for Good Clinical Practice – Children’s Medicines Working Party. Meg is Editor-in-Chief of the Journal of Biopharmaceutical Statistics and is actively involved in many statistical activities in the American Statistical Association. She received her PhD in Statistics from the University of Pittsburgh and her Master’s in Applied Mathematics – Operations Research from the University of the Philippines.
Alexander: You’re listening to the effective statistician podcast, the weekly podcast with Alexander Schacht, Benjamin Piske and Sam Gardner, designed to help you reach your potential to lead great science and serve patients without becoming overwhelmed by work.
Today, I have an interview with Margaret Gamalo about how to combine a control arm with real-world evidence data, a really hot topic. Stay tuned for that. And now some music.
I’m producing this podcast in association with PSI, a community dedicated to leading and promoting the use of statistics within the healthcare industry for the benefit of patients. Join PSI today to further develop your statistical capabilities with access to the ever-growing video-on-demand content library, free registration to all PSI webinars, and much, much more. Join hundreds of your peers who all paid very little for lots of content. It’s only £95 for high-income countries, which is about the same in dollars and euros. Head over to psiweb.org to learn more about PSI activities and become a PSI member today.
Welcome to a new episode of the effective statistician. And now I’m talking to someone who I actually worked alongside in the same company for quite some time, but we never really met. And so I’m really glad to have Margaret on this podcast episode today. Hi Margaret, how are you doing?
Margaret: I am very good, and thanks for inviting me, Alexander. I’ve heard about the podcast, and this is an exciting opportunity to just casually chat about propensity scores.
Alexander: Awesome! So before going into the topic itself, maybe you can give us a short background of where you have come from? And what brought you to statistics and what’s been your career up to now.
Margaret: Yeah, it’s very interesting. I come from the Philippines originally. I actually majored in mathematics as an undergrad, and then one of my professors at the University of the Philippines had a friend in the United States who was looking for a graduate student, but in statistics. My professor said, “Well, I don’t want you to stay in mathematics because you’ll end up poor.” He was actually very candid about it; that was exactly what he said, and that was actually the precursor, what led me to go to Pittsburgh and study statistics there. After I graduated, I had a very short stint in Kansas City, and then somebody saw me giving a presentation at the Joint Statistical Meetings in 2007, if I recall, in Salt Lake City, Utah, and he asked, “Would you be interested in an opportunity at the FDA?” and I said, “Well, why not?”
Alexander: So the presentation gave you that chance?
Margaret: Yeah, I recall that was in 2007. So I did an interview at the FDA after that and then started there, I think in June of 2008. I stayed for quite a while, and I really learned a lot at the agency. I left eight years later and joined Eli Lilly. I loved it there as well; there were a lot of great people, and I learned so much from working on late-phase development for baricitinib. I got really into dermatology. Now I am doing late-phase development for Pfizer’s portfolio in dermatology. So practically, dermatology statistics.
Alexander: Do you remember, what was your presentation at the JSM?
Margaret: Uh, I am forgetting it; that was a long time ago. But yes, it was a conversation after that, with somebody from the FDA, that led me to the FDA. And even when I joined Lilly, it was actually another conversation I had back then with Steve Ruberg. I gave a presentation at the Clinical Trials Transformation Initiative, on multi-drug-resistant infections, where we were trying to look for ways to innovate. After the presentation, Steve Ruberg said, “Well, you know what, if you’re interested in joining the industry later, let me know.” Two years later, that’s what happened. That’s actually what led me to Eli Lilly as well. So all of this comes back to me when I’m thinking about it.
Alexander: Yeah, and recently you gave a presentation at the PSI Journal Club, where you talked about propensity-score-based augmented controls in randomized clinical trials, with a case study. So another presentation brought you another opportunity, this time not a job, but this interview. That shows how exposing yourself, giving presentations and talks, sharing your knowledge, and helping the community helps you as well as a presenter. It gives a lot back to you, doesn’t it?
Margaret: That’s really true, thinking about it. Going back to that presentation I gave when I met Steve Ruberg, it was also related to the paper that I wrote. Interestingly, the propensity score method that we created at that time was really geared towards innovating clinical trials for infections caused by multi-drug-resistant bacteria. And you’re right about external engagement; I didn’t really think of it that way before, it was another mentor of mine who actually led me to it. Not sure if you are aware of Ram Tiwari. He was the director for biostatistics in medical devices at the FDA, he was at CDER before, and now he’s head of methodology at Bristol Myers Squibb. It was actually him who just kept me going, and his words were: just keep on working, and sooner or later people will hear you.
Alexander: Yes, exactly. So we talked about the title and mentioned some methods, but let’s take a step back. What was the problem that you were trying to solve? What were you facing?
Margaret: At that time? Yes. As I’ve said, this was geared towards innovating clinical trials for infections caused by multi-drug-resistant pathogens. The idea was that we wanted to use real-world data, or whatever we had, electronic health records at the time, but then there’s always this danger of: what if they are not really the same? What’s going to happen then with the comparison between the treated arm and the external control, which practically is placebo? I mean, there’s no treatment for multi-drug-resistant bacteria, so it’s practically placebo. There was a concern from the regulatory agency at that time: well, what if it’s not really the same, and we’re not really comparing the same patients? So the idea was: what if we randomize a little bit to placebo and then just supplement with the external control? In that case we have a benchmark against which we can compare the external control. So there is a concurrent control in the randomized clinical trial that can be used as a benchmark for the external control. That was really the main problem. It was really very simple at that time.
Alexander: So the problem was really that you don’t want to overexpose patients to placebo.
Alexander: Maybe it could also be something like a rare disease, where you don’t have any, or not so many, patients, and then the problem is: how do you make sure that the patient profiles you have in the clinical study are the same as in the real-world evidence, so that you can boldly drop the placebo? Okay.
Margaret: Yes. The other idea there is that, while the baselines can be reasonably similar between, let’s say, the treated arm, the concurrent control, and the external control, because of the severity of the patients, physicians might eventually treat the external-control patients differently, in a way that is not really comparable to what happens in the clinical trial. So the way you treat the patient, given the severity of the patient, might eventually make the patient’s responses very different from what they would have been in a clinical trial. That was the other danger we were trying to consider, and that led to the idea that we need to have some assurance, some context, that this use of the external control is not going to give us a false conclusion of efficacy.
Alexander: The first question I would have about that: if you have a randomized study, then the starting point is very clear, it’s the randomization date. But if you have a real-world evidence data set, there’s no clearly defined start date. How did you deal with that?
Margaret: Yeah, the index date, that’s right. We call that the index date, and they are not going to be very similar. We looked at this in another example as well, for type 2 diabetes, I think it was adolescent type 2 diabetes. It’s really just: when was the patient diagnosed, and whether at that particular time the patient’s computable phenotypes are very similar. But you’re right, there’s no such thing as a definite index date that is similar to when the patient was randomized and dosed. It’s kind of nebulous all the time, and it requires a lot of clinical judgment as to when the index date is really defined. We get into a lot of problems like this, for example immortal time bias, where during a particular period a patient can never experience a particular endpoint.
Alexander: Yeah. It could also be the other way around. Because of a certain bias, all the patients in the real world evidence get a certain…
Margaret: That’s true, it could be. For example, these patients might already be severe enough when you enroll them, and that’s exactly why all of them have very bad outcomes. So that’s the other way around. But I think there is really no easy way, it’s quite a difficult criterion, to really ascertain what the index date is. It requires a lot of discussion about what would be best, either with the clinical study team or with the regulatory agency, because they have to weigh in as well: okay, this is the best index date.
Alexander: Now, how does propensity scoring come in?
Margaret: So the propensity score is really just, as we know, the probability of being in the treatment group or being in the control group. It’s a sufficient statistic based on the idea that, if I don’t have a randomized trial, I can make the treatment assignment independent of the outcomes by conditioning on a set of covariates. The problem is that the set of covariates can be very large, and therefore it can be a little tedious to condition on all of these covariates one by one. So the propensity score is just a way of summarizing the information from all of these covariates in one single probability. Conditioning on that probability, the outcome should be independent of the treatment assignment, and therefore you can do all your causal estimation and so on.
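As a rough illustration of what Margaret describes, a propensity score can be estimated with a logistic regression of group membership on covariates. The sketch below is a toy with simulated data; the covariate names and numbers are invented, and in practice one would use an established fitting routine rather than hand-rolled gradient ascent.

```python
import numpy as np

# Toy sketch: the propensity score as the probability of being in the trial
# (vs. the external control), estimated by logistic regression.
# All data are simulated; covariates are invented for illustration.
rng = np.random.default_rng(0)
n = 500

in_trial = rng.integers(0, 2, n)                     # 1 = in trial, 0 = external
age = rng.normal(60 - 5 * in_trial, 8, n)            # trial patients a bit younger
severity = rng.normal(2.0 - 0.5 * in_trial, 1.0, n)  # and a bit less severe

# Standardize covariates and add an intercept column.
Z = np.column_stack([
    np.ones(n),
    (age - age.mean()) / age.std(),
    (severity - severity.mean()) / severity.std(),
])

# Fit by plain gradient ascent on the logistic log-likelihood.
beta = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Z @ beta))
    beta += 0.5 * Z.T @ (in_trial - p) / n

propensity = 1.0 / (1.0 + np.exp(-Z @ beta))

# One number per patient now summarizes all covariates at once;
# conditioning (matching, weighting) on it is what balances the groups.
print(propensity[:3].round(2))
```

The point of the sketch is only the collapse Margaret mentions: many covariates become one probability per patient, and all downstream adjustment can condition on that single number.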
Alexander: Yeah. But here it’s not so much about treatment, like in an observational study where some patients get treatment A, some patients get treatment B, and you use propensity scoring to compare them. Here it’s about being in the clinical trial versus not being in the clinical trial.
Margaret: That is true, yes, and this actually makes it a lot more complicated, because you can think of it as a three-arm trial. Your randomized trial has two arms, of course: you have treatment and you have concurrent control, and then you have the external control. And then you want to match the external control to both the treatment and the concurrent control.
Alexander: Yeah, so you could actually match it to either the placebo, the treatment, or the total?
Margaret: Or the total! You’re right. And there has actually been a lot of research that we have done on this as well. What is really optimal: matching with respect to the clinical trial, or matching with respect to the treatment? The answer is actually just staring right at us. When we did a lot of simulations on this, the answer was: whatever your causal estimand is, that is what you want to match. So, for example, if it is an augmented control, you want to make sure that your external control is matching the treatment, not the concurrent control in that trial. You always want to go back to the original causal estimand.
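The matching step Margaret describes can be sketched, under the assumption that the estimand targets the treated population, as nearest-neighbour matching of external controls to the treated arm on the propensity score. Everything below is simulated toy data; real scores would come from a fitted model.

```python
import numpy as np

# Toy sketch: match each treated patient to the closest external control
# on a (pre-computed) propensity score, without replacement.
rng = np.random.default_rng(1)
ps_treated = rng.uniform(0.4, 0.9, 30)     # scores in the treated arm
ps_external = rng.uniform(0.1, 0.8, 100)   # scores in the external pool

available = list(range(len(ps_external)))
matches = {}
for i, p in enumerate(ps_treated):
    # Pick the closest still-unused external control.
    j = min(available, key=lambda k: abs(ps_external[k] - p))
    matches[i] = j
    available.remove(j)

# Each treated patient now has one distinct matched external control.
print(len(matches), len(set(matches.values())))  # -> 30 30
```

Matching to the concurrent control, or to the whole trial, would use exactly the same mechanics with a different reference group, which is why the choice comes down to the estimand rather than the algorithm.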
Alexander: Interesting. So if you’re matching, you’re not only matching on baseline, but also on post-baseline covariates?
Margaret: Well, actually both. This was one of the interesting questions that was raised when we went to the FDA at one time; we were trying to propose the same idea for type 2 diabetes in children. I’m not sure if you know Mark Rothmann and Mark Levenson at the FDA. They are really experts in this area, very esteemed statisticians, and they were my colleagues. One of them was saying, “Well, you have to match on the post-baseline as well, right?” And at that time I was scratching my head: well, why do I need to match on post-baseline? Because before, my only thinking was really just that baseline is the only thing we can really make sure about, and we are not trying to match on post-randomization events. But then I realized that post-randomization events, or anything after the index date, are actually very important, because, as I said earlier, physicians might change how they treat a patient after the fact, because of the severity. This is also the same thing as what we call channeling bias, right? We’re channeling a particular patient just because of what we saw previously, and that is beyond the randomized trial, because in a randomized trial all the procedures are…
Margaret: …So we don’t do that. And then it dawned on me: oh yeah, that’s right, they are making a very good point. This is really the difficult part when it comes to matching, because it’s not only about baseline anymore. It’s also about things that happen after the index date, or after the randomization date, or whatever it is. But that’s quite difficult to actually adjust for. I don’t even know how to do that.
Alexander: Yes, it’s interesting. It gives a completely different viewpoint: in the placebo arm of a clinical trial you very much restrict how treatment can change, and then in the real world the treatment changes are actually different. You can think of it as: your placebo arm doesn’t have a lot of external validity.
Margaret: Correct. That’s true. It’s like a glass half empty or half full. It could be that your placebo arm actually has no external validity.
Alexander: But it can also go in both directions. Maybe on one side there is less efficacy, but there may also be safety problems, so it’s not always biased in one or the other direction.
Margaret: Yeah, that’s really a good way of looking at it. It depends on which lens you look through. If you look at it through the lens of the randomized clinical trial, then of course this is going to be biased, because there is channeling. On the other hand, if you look at it from the perspective of an observational study, you would say that the randomized clinical trial doesn’t have external validity, because it doesn’t reflect the true nature of practice outside.
Alexander: And of course, bias is maybe not even the word, because maybe it’s a different estimand.
Margaret: Yeah, that’s true. Then we can complicate all of these things with estimands, because you’re right. First and foremost, a randomized trial has the objective of showing causation, whereas the external data were probably collected very differently, and therefore the estimand is very different.
Alexander: Which databases did you look into for this real-world evidence data?
Margaret: Yeah, this one is actually interesting, because at that time I was still at the FDA and we didn’t have any of these databases. So what we did was just to look at all the trials within the FDA database and combine the placebo arms of trials that are very similar, so that when you combine a lot of them, it will hopefully look like an external control. But of course we were bound by rules of data privacy, so we could not just use the data directly. We had to create certain ways of modeling the placebo arms of these combined trials to make them look like the real world. So many things that statisticians do, like adding certain noise and so on, until it looks like what we would now have in a real-world database.
Alexander: That’s interesting. So you basically first did a meta-analysis across different studies, a patient-level data meta-analysis. Well, that’s another area of complexity.
Margaret: I remember Jane, who was my intern before and is now at Takeda. I was reviewing her code at that time, and it was very long and busy. She was this excellent statistician who could program a lot, and I was joking with her: “Jane, I think your code looks like Chinese characters to me.” Not that it was bad, it’s just that I didn’t understand it. Sometimes programming can be really difficult; how one person programs is very different from how you program, because it reflects how he or she builds the logic. I remember it was more than a thousand lines, and I was really struggling to understand it. But yes, it was quite complicated, because we had to do a lot more than just get data from an external source. I have also talked to a lot of people who have used these real-world data, and it’s not very easy either. There is such an art to it; it is not only statistics, because the data are so messy, and you have to do a lot of pre-processing and so on.
Alexander: It already starts with having the relevant data in the real-world evidence. Do you even have the clinical endpoints that you’re looking for in the real-world evidence data, or do you not have them, because it’s just a claims database? And if you have electronic health records, then maybe other problems come with them: the data are only from a certain area, or maybe predominantly male, or whatever, and there are these other limitations.
Margaret: Oh yeah, I remember all these problems. We did this for COVID as well. When COVID started, we started a trial of baricitinib. Anyone would want to know what the placebo rate would be, right?
Alexander: Okay, interesting.
Margaret: Yeah, the placebo rate, as well as what the rates of ventilation, or even death, would be for people that were treated with steroids. But it was extremely difficult. I know it was very useful, but it was extremely difficult, because the results we were getting didn’t make a lot of sense. I think it’s because, again, as I’ve said, patients are probably treated very differently when they are included in a particular database. So no matter how you slice and dice, there are always these caveats. You pretty much have to understand the disease so that you don’t fall into the trap of assuming this is what’s going to happen in the clinical trial as well, which probably is not the case.
Alexander: And I think especially in an area with very rapid learning, like COVID. How patients were treated at the beginning of the pandemic probably looks completely different from how patients were treated a year later.
Margaret: Exactly. Yeah
Alexander: Not even speaking about the different countries that got involved. I think if you have real world evidence, that is also something that you always need to take into account. When is your data outdated?
Margaret: Yeah, exactly. So there is another paper that I am writing with a few of my FDA colleagues. This is a paper for all of us who are in the industry now. We tried to review all the real-world data applications that were submitted to the FDA for which the FDA already has a decision, and we reviewed every single thing that the FDA wrote in their reviews. You can pretty much see that there are a lot of these issues, particularly for rare diseases, where you really have to have a very long observation window. Patients from the 1980s are probably very different from patients in the early 2000s, and that is something you really have to take into account.
Alexander: Yeah. Do you think you can also do some propensity scoring on this and include something like a time variable?
Margaret: Yeah, I think you could always do that, right? What would be the time component, and how has the time component changed the practice, and therefore the response? So there’s like a drift. I think that can be done. At the end of the day, I think you still have a big component of tipping-point analysis: knowing when my decision is actually wrong, when my decision will flip. And then I would have an idea of how these results should change.
Alexander: Tipping-point analysis you can also do in lots of different ways. Think about, let’s say, the time at which people get into the real-world evidence database. Maybe that started in, let’s say, 1980, and you’re now running something with patients starting in 2020, so 40 years later. We could first include all patients and then include fewer and fewer, cutting at 1981, 1982, 1983, and so on up to 2019.
Margaret: And then see the range.
Alexander: Yeah, and then see how the whole results change. Or you could also look into some propensity matching, where you say: okay, we take the time difference to 2020 as another covariate, and then you potentially put some kind of curve on it that down-weights the older patients more and more.
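The cut-year idea Alexander describes can be sketched as a simple scan: successively drop the oldest external-control records and watch how the comparison against the in-trial benchmark moves. The data and the benchmark value below are simulated, purely for illustration.

```python
import numpy as np

# Toy sketch of a tipping-point scan over enrollment year: outcomes in the
# external data drift upward over calendar time, so dropping old records
# shifts the external-control mean relative to the in-trial benchmark.
rng = np.random.default_rng(3)
year = rng.integers(1980, 2020, 300)
outcome = 0.03 * (year - 1980) + rng.normal(0, 0.4, 300)

trial_placebo_mean = 1.0   # assumed benchmark from the concurrent control

for cutoff in range(1980, 2020, 5):
    kept = outcome[year >= cutoff]
    gap = kept.mean() - trial_placebo_mean
    # The cutoff at which 'gap' would flip our conclusion is the tipping point.
    print(cutoff, len(kept), round(gap, 2))
```

Reading the printed column of gaps against a decision threshold shows directly how old the data can be before the conclusion changes, which is the spirit of the tipping-point analysis discussed here.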
Margaret: Exactly, yeah, you can do that as well. There is a way of incorporating that within your propensity score. Again, the propensity score is just the probability of being in treatment or being in control, and you can account for this: in your model you can probably create a drift model, where the drift actually incorporates time.
Alexander: What’s a Drift model?
Margaret: When we are modeling the treatment response, when you are doing the causal inference, you put this in as the intercept, and the intercept can depend on time. It’s really just that; the rest would just depend on the covariates. A lot of the time we just use a fixed effect, an alpha or a mu with subscript zero, and then a random effect at the end, but you can make this drift depend on time. That should probably help, and that’s a good suggestion.
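Margaret's drift idea, an outcome-model intercept that depends on calendar time, can be sketched as an ordinary least-squares fit. The data-generating numbers below are invented; the point is only that the time-dependent intercept (the drift) is recoverable alongside the covariate effects.

```python
import numpy as np

# Toy sketch: y = alpha0 + drift * (year - 1980) + beta * covariate + noise.
# The intercept "drifts" with calendar time, capturing changes in practice.
rng = np.random.default_rng(2)
n = 400
year = rng.integers(1980, 2021, n)
severity = rng.normal(0, 1, n)

# Simulated truth: responses improve by 0.05 units per calendar year.
y = 1.0 + 0.05 * (year - 1980) + 0.8 * severity + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), year - 1980, severity])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
alpha0, drift, beta = coef

print(round(drift, 3), round(beta, 2))  # drift near 0.05, beta near 0.8
```

In the random-effects formulation Margaret mentions, the same drift term would sit alongside the fixed intercept; the least-squares version is just the simplest way to see that calendar time can be separated from the patient-level covariates.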
Alexander: Awesome! Yep, great. So there’s a lot of different ways. You can now model these things.
Margaret: Oh yeah. And you can even use, for example in time-to-event analyses, time-varying covariates. You can probably use those as well, in such a way that you change certain model behavior, or the treatment response, by interacting time with the covariates of the person. For example, the way people of age 65 are treated now is probably very different from the way age 65 was treated before, but maybe not for the other levels of the covariate age.
Alexander: Yeah. As you can hear, there’s lots of further research to be done if you are interested in this area. It’s a very hot topic at the moment, and I’m pretty sure it will continue to be, because there are so many new treatments going into all kinds of different rare diseases. I think more and more people see that there’s a really high need, because if we don’t solve this problem, we’ll potentially make studies very long, and very long studies are a problem in themselves: we potentially withhold good treatments from patients, or we might study something for a very long time and it’s futile.
Margaret: It’s futile. We want, as much as possible, to work with the regulatory agencies, right? The idea is to be able to come up with an answer that is quick and good for everyone, good for the patients as well.
Alexander: And it doesn’t mean that you need to stop investigating there. You can still look prospectively, in terms of setting up registries and things like that, and see how your drug is really doing.
Margaret: Yes! Exactly! You can use accelerated approval or the emergency use authorization model. That would be a good thing: if we can expedite things and then eventually check whether our initial data were indeed consistent with future data in showing that there is a benefit that patients are gaining from these treatments.
Alexander: Great!
Margaret: Yeah, I almost forgot about that.
Alexander: Margaret, any final recommendation you would have for our listeners?
Margaret: I would just say: keep on going. For me, a lot of the time I just follow whatever is on my mind and what I like doing. Sometimes people call me crazy, but looking back at this particular paper, it took a long time to get published, because the idea was not so well accepted before. Why would you randomize and at the same time use an external control? At that time nobody really liked that idea, but now it’s catching on. So I would just say: keep on doing something that’s really innovative. Innovation actually has a very long trajectory, and sometimes it meanders into certain corners and so on, but eventually people accept it, because, of course, science moves by consensus. So just keep going.
Alexander: Yes, that’s always tricky in science, and timing is really important.
Margaret: Yeah, that’s a good point, timing is important. That’s true. I was actually reading a paper on vaccines about 10 years ago, and it is very relevant now.
Alexander: What we were talking about is the PSI Journal Club that took place on the 6th of July 2021. It is available in PSI’s video-on-demand library, and as part of the Journal Club it’s actually available to everybody. There are certain videos that are only available to PSI members, but this one will be available to everybody. So if you want to listen to Margaret’s talk, and there’s also another talk by Chris Hebron, which I’m pretty sure is also very interesting, on a similar topic, check it out. Just go to psiweb.org and you’ll easily find the video-on-demand content. Thanks so much again, Margaret.
Margaret: Thank you very much. Alexander, for inviting me again and this has been a very good conversation in this muggy afternoon here.
Alexander: Okay, bye!
Margaret: Bye, Thanks!
Alexander: This show was created in association with PSI. Thanks to Reine, who helps us with the show in the background, and thank you for listening. Head over to www.theeffectivestatistician.com to find the show notes and the references, and learn more about this podcast to boost your career as a statistician in the health sector: reach your potential, lead great science, and serve patients. Just be an effective statistician.