Have you ever considered that bias might not be as detrimental as it’s often portrayed? What if the quest for precision in your estimates could actually benefit from a careful balance with bias?

In this insightful episode, I challenge the traditional view that bias is inherently negative by exploring its relationship with precision.

Through examples ranging from indirect comparisons to subgroup analysis, I illustrate the trade-offs between reducing bias and achieving precise estimates.

Join me as we navigate this interesting interplay of bias and variability in the pursuit of effective statistical practices. 

Here are the highlights in this episode:
  • Understanding Bias
  • Precision vs. Bias
  • Methodological Examples
  • Matched Adjusted Indirect Comparison
  • Subgroup Analysis
  • Decision-Making
  • Stakeholder Discussions
  • Trade-Offs and Costs of Methodological Choices

Tune in to deepen your understanding of bias and precision in statistical analysis, and don’t forget to share this insightful discussion with your colleagues!

Never miss an episode!

Join thousands of your peers and subscribe to get our latest updates by email!

Get the show notes of our podcast episodes plus tips and tricks to increase your impact at work and boost your career!



Transcript

Obsession About Bias

[00:00:00] Alexander: Welcome to another episode of The Effective Statistician, and this is another short Friday episode. Today I want to talk about bias. We all learned at university that bias is usually a bad thing, because it means that our estimate differs from what we actually want to estimate. And so we are all kind of always thinking:

[00:00:27] Alexander: Ah yes, we need to get rid of bias. Bias is something bad. Let's prefer all the methods that are unbiased. That is only half the truth. The problem is, whenever we reduce bias, we nearly always decrease precision. So the unbiased estimates come at a price: the unbiased or less biased estimates usually have a wider confidence interval.

[00:01:06] Alexander: And I see very often, very unreflectively, that these unbiased or less biased methods are simply preferred: yes, they are better because they are less biased. I'm not sure that they are always better. It's just that you trade one against the other. A couple of examples. The first is indirect comparisons.

[00:01:35] Alexander: You can do an indirect comparison using the classical Bucher method, where you just look into the endpoints and their variability and then derive an indirect comparison from them. Full stop. Very straightforward. And of course the critique is: what happens if there are differences between the studies you're looking into?
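The Bucher method Alexander describes can be sketched in a few lines. The anchored comparison of A versus C via a common comparator B is just the difference of the two study effects, and the standard errors add in quadrature. The numbers below are made up for illustration; they are not from the episode.

```python
import math

def bucher_indirect(d_ab, se_ab, d_cb, se_cb):
    """Anchored indirect comparison of A vs C via common comparator B.

    d_ab, se_ab: effect and standard error of A vs B (e.g. new drug vs placebo)
    d_cb, se_cb: effect and standard error of C vs B (e.g. old drug vs placebo)
    """
    d_ac = d_ab - d_cb                          # difference of differences
    se_ac = math.sqrt(se_ab**2 + se_cb**2)      # variances add, so SE grows
    return d_ac, se_ac

# Hypothetical numbers: new drug vs placebo -2.0 (SE 0.5),
# old drug vs placebo from the literature -1.2 (SE 0.6).
d, se = bucher_indirect(-2.0, 0.5, -1.2, 0.6)
ci = (d - 1.96 * se, d + 1.96 * se)             # 95% confidence interval
```

Note that the indirect SE (about 0.78 here) is larger than either direct SE, which is exactly the precision cost the episode talks about.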

[00:02:02] Alexander: For example, you have one study from the literature comparing an old drug to placebo, and you have your own study that compares a new drug to placebo. And now you want to look into the comparison between the two drugs. And because you have the patient-level data for your own drug, you can actually match these study data to the literature data.

[00:02:29] Alexander: And then you get matched-adjusted indirect comparisons. And I have seen again and again discussions where people say, well, we should only do matched-adjusted indirect comparisons, because they are unbiased, or less biased. Yes and no. Yes, when all our assumptions hold and we adjust for the right variables, then we will have less bias.

[00:03:02] Alexander: But we will also have wider confidence intervals. The more we adjust, the wider the confidence intervals, and of course that means we have less precision. Now the question is: is it really worth it? Or does it make sense to accept a little bit of bias and say, well, at least then we have good, not-so-wide confidence intervals?
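The matching step behind a MAIC can be sketched as follows. In the usual method-of-moments approach, each patient in your own trial gets a weight of the form exp(a·x), with a chosen so that the weighted covariate means match the aggregate means of the literature study; the effective sample size then shows how much precision the weighting costs. This is a toy sketch assuming NumPy and SciPy, with simulated data, not a production implementation.

```python
import numpy as np
from scipy.optimize import minimize

def maic_weights(X, target_means):
    """Method-of-moments MAIC weights (a sketch of the standard approach):
    find w_i = exp(a @ x_i) so the weighted means of the patient-level
    covariates X match the aggregate target means."""
    Xc = X - target_means                         # centre covariates at target
    objective = lambda a: np.exp(Xc @ a).sum()    # convex; minimum balances means
    res = minimize(objective, np.zeros(X.shape[1]), method="BFGS")
    w = np.exp(Xc @ res.x)
    ess = w.sum() ** 2 / (w ** 2).sum()           # effective sample size
    return w, ess

# Simulated patient-level data with means that differ from the target study.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) + np.array([0.2, -0.1])
w, ess = maic_weights(X, target_means=np.array([0.5, 0.3]))
```

The effective sample size comes out well below the 500 actual patients: the bias reduction from matching is paid for directly in precision.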

[00:03:42] Alexander: And then have a discussion about the bias: how big could it be, these kinds of things. In some circumstances, I would rather have something that is more precise and also more biased. That can be better than having something that is less biased and also less precise. Another example is subgroup analysis.
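The "more precise but more biased can be better" point has a simple formal reading: mean squared error decomposes into squared bias plus variance, so a slightly biased but precise estimator can beat an unbiased but noisy one. The numbers below are hypothetical, purely to illustrate the decomposition.

```python
def mse(bias, se):
    # mean squared error = squared bias + variance (squared standard error)
    return bias ** 2 + se ** 2

# Hypothetical numbers: a simple, precise but biased analysis vs.
# a heavily adjusted, unbiased but imprecise one.
naive    = mse(bias=0.3, se=0.4)   # ~0.25
adjusted = mse(bias=0.0, se=0.7)   # ~0.49 -- worse overall despite zero bias
```

Whether this trade is acceptable depends on how large the bias could plausibly be, which is exactly the stakeholder discussion the episode recommends.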

[00:04:10] Alexander: Of course, we can look into subgroups of subgroups of subgroups, and the closer we get to our target patient, the less bias we will have. But, of course, on the other hand, we also have fewer and fewer patients to average over, to compute standard deviations over, all these kinds of things.

[00:04:39] Alexander: So yes, we get less and less biased, but we also get more and more imprecise. So always have that in mind. Does it really make sense to look at these small subgroups? Just because we could do them doesn't mean we should do them. And have a discussion with your stakeholders, with your counterparts, about bias and variance and how they trade off against each other.
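The precision cost of slicing into subgroups follows directly from how the standard error of a mean scales with sample size. A small sketch with illustrative numbers (not from the episode):

```python
import math

def se_of_mean(sd, n):
    # the standard error of a mean shrinks only with the square root of n
    return sd / math.sqrt(n)

full_trial = se_of_mean(sd=10, n=400)   # 0.5
subgroup   = se_of_mean(sd=10, n=25)    # 2.0: 16x fewer patients, 4x wider CI
```

Cutting the sample by a factor of 16 widens the confidence interval by a factor of 4, even before any multiplicity concerns.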

[00:05:14] Alexander: And remember that you can't have it all; there's no free lunch. So that's a little bit of theory on bias and variability. And if you look into what you're doing, you'll probably notice it somewhere. So think about it, make a conscious decision about it, and have a discussion about it with your stakeholders. And of course different stakeholders might have different preferences.

[00:05:43] Alexander: That's it from me today for another Friday episode of The Effective Statistician. If you like The Effective Statistician as a podcast and as a community, then please tell others about it. It means a lot to me to reach lots of statisticians, data scientists and programmers who would benefit from this community and from all the content that we share here.

Join The Effective Statistician LinkedIn group

I want to help the community of statisticians, data scientists, programmers and other quantitative scientists to be more influential, innovative, and effective. I believe that as a community we can help our research, our regulatory and payer systems, and ultimately physicians and patients take better decisions based on better evidence.

I work to achieve a future in which everyone can access the right evidence in the right format at the right time to make sound decisions.

When my kids are sick, I want to have good evidence to discuss with the physician about the different therapy choices.

When my mother is sick, I want her to be able to access the evidence and understand it.

When I get sick, I want to find evidence that I can trust and that helps me to have meaningful discussions with my healthcare professionals.

I want to live in a world where the media reports correctly about medical evidence and in which society distinguishes between fake evidence and real evidence.

Let’s work together to achieve this.