Non-parametric analyses – much more than just the Wilcoxon test!

Interview with Frank Konietschke

Frank has something in common with Benjamin and me – we all studied statistics in Göttingen and did research in non-parametric statistics. Afterwards, we took different paths: whereas Benjamin started at a CRO and I joined large pharma organizations, Frank continued on the academic track.

He recently became a professor at the famous Charité in Berlin, where he still dedicates a lot of research to the field of non-parametric statistics. However, he's not an ivory-tower researcher: he also applies these approaches in the medical research he takes part in.

Learn about a whole universe of different approaches that will help you overcome many limitations of the methods you use daily.

In today’s episode, we’ll cover the following questions:

  • What are non-parametric analyses?
  • How can you distinguish between parametric, semi-parametric, and non-parametric analyses?
  • How do ranks work, and when should we use them?
  • How can we describe treatment effects when using ranks?
  • What is the relationship between the relative effect and common treatment-effect descriptions, e.g. in the continuous and binary cases?
  • What are the advantages and problems of these rank-based approaches?
  • How does it work if I have multiple time points, multiple arms, covariates, etc.?
  • What is relevant literature to read?
  • Are there any tips on implementing these approaches, i.e. programming help?

The references below will help you learn more about these approaches and give you the tools to implement them. 

Have fun listening to this episode and share it with your colleagues.

About Frank Konietschke

Frank has done extensive research on methodological developments in nonparametric statistics, including ranking procedures and resampling methods for various designs and models. His results have been published in numerous papers in various journals, including two papers in the top-tier Journal of the Royal Statistical Society: Series B. Recently, he published a book on nonparametric statistics in the Springer Series in Statistics. He has lectured and taught nonparametric statistics on almost every continent and has been an invited speaker at about 80 different universities, companies, and research institutions. Currently, he is Professor of Statistics at the Charité Berlin, where he leads a research group working on the development and application of statistical methods for translational and early clinical trials.

References:

  • Book: Brunner, E., Bathke, A. C., & Konietschke, F. (2019). Rank and Pseudo-Rank Procedures for Independent Observations in Factorial Designs – Using R and SAS. Springer.
  • Brunner, E., Konietschke, F., Pauly, M., & Puri, M. L. (2017). Rank‐based procedures in factorial designs: hypotheses about non‐parametric treatment effects. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 79(5), 1463-1485.
  • Konietschke, F., Bathke, A. C., Hothorn, L. A., & Brunner, E. (2010). Testing and estimation of purely nonparametric effects in repeated measures designs. Computational Statistics & Data Analysis, 54(8), 1895-1905.
  • Konietschke, F., Hothorn, L. A., & Brunner, E. (2012). Rank-based multiple test procedures and simultaneous confidence intervals. Electronic Journal of Statistics, 6, 738-759.
  • Konietschke, F., Harrar, S. W., Lange, K., & Brunner, E. (2012). Ranking procedures for matched pairs with missing data—asymptotic theory and a small sample approximation. Computational Statistics & Data Analysis, 56(5), 1090-1102.

Transcript:

Alexander: You are listening to The Effective Statistician podcast, the weekly podcast with Alexander Schacht and Benjamin Piske, designed to help you reach your potential, lead great science, and serve patients without becoming overwhelmed by work.

Today's episode, number 82: non-parametric analyses – much more than just the Wilcoxon test. An interview with Frank Konietschke. So today we are talking with Frank Konietschke, who is a professor at the Charité, and he works a lot on non-parametric analyses. Benjamin and I actually have something in common with him, and you'll hear about it in the episode today.

Nonparametric analysis actually offers you lots of opportunities, and it's much more than the Wilcoxon test that you learn at university – which is basically the same as the Mann-Whitney U test – and that is just the two-sample case; there is much more you can do with it. Today we'll also talk about how you can describe treatment effects in these situations where you don't have the usual parameters, like means, to describe them. So stay tuned for this really nice interview with Frank. You'll learn a lot, especially if – like actually many other statisticians – you're not aware of what's going on in the non-parametric field. One of the problems with these innovative approaches – the non-parametric approaches, but also lots of other innovative approaches – is that statisticians very often face problems implementing them, because they can't persuade their colleagues to do these new things. There are lots of colleagues who are very conservative and just want to use the same thing over and over again, and that of course leads to organizations lagging behind, not reaching their full potential, not leveraging what's possible. Here, statisticians need to step up and lead their organizations – lead them not as supervisors, but by influencing people cross-functionally. In order to help you with that, Gary Sullivan and I have designed the leadership program. I have talked about it quite a lot on this podcast already, but currently you again have the opportunity to enroll, and the enrollment period will close quite soon. So, if you haven't done that yet, check out the information at theeffectivestatistician.com/course, where you will find all the details. You'll also find some help to talk with your supervisor about the program – not only you, but your organization will actually benefit from it. So, please go to theeffectivestatistician.com/course, where you will find all the information.

This podcast is created in association with PSI, a global member organization dedicated to leading and promoting best practice and industry initiatives. Join PSI today to further develop your statistical capabilities, with access to the video-on-demand content library, free registration to all PSI webinars, and much more. Visit the PSI website at psi.org to learn more about PSI activities and become a PSI member today.

Benjamin: Welcome to another episode of The Effective Statistician. First of all, I'm Benjamin, and I'm here with my co-host, Alexander. Hi Alexander.

Alexander: Hi Benjamin. How are you doing? 

Benjamin: Thanks, very well. And we have a special guest today who has come a long way – from Texas to Germany: Frank Konietschke. He's here with us today, and we're talking about nonparametrics. Hi Frank.

Frank: Hi. Good morning. Thank you so much for having me. I’m very excited. 

Benjamin: Yeah, it's an exciting topic actually. And we actually have a bit in common in our history, because we all come from Göttingen – Alexander and I started there, and you as well – but you are actually so young that we didn't meet in Göttingen.

Alexander: I'm always the old one – but, you know, to phrase it positively.

Benjamin: So we do have a common background, and we also worked with Professor Brunner on nonparametrics, so it's really, really good to have you here. Actually, my first question would have been to ask you to introduce yourself, so I've already just started – but maybe you can give a quick introduction to yourself and say a little bit about your history: where you're coming from, where you're going, and what you're doing now.

Frank: Okay, sure. So hi everybody, my name is Frank Konietschke. I'm a professor of statistics at the Charité in Berlin. The Charité is actually one of the largest university medical centers in Europe, and I lead a research group working on statistical methods for translational research, which is basically preclinical research. Let me say some words about my background. As was said, I studied mathematics at the University of Göttingen, and I also did my PhD in mathematics at the same university, specializing in statistics. After getting my PhD, I stayed at the same university to work on my habilitation – let me keep this short. After this, I got my first professor position at Ludwig Maximilian University Munich, where I stayed for a while; then I moved to Texas and worked as a professor at the University of Texas at Dallas. I recently moved back to Germany because I got a professor position at the Charité, with affiliations also to Humboldt University and Free University in Berlin. Now I'm teaching statistics here and working in clinical research. And yeah, so far so good with what I'm doing.

Benjamin: Excellent. Now, we are talking about nonparametric analysis, or nonparametric statistics – what actually is a nonparametric analysis?

Frank: You could say it means the following: when you are working in statistics, you're collecting data, and what you're doing is working with a model of the data distribution. Nonparametric means you don't postulate a specific distribution as the model of the data – you don't pre-specify that the data must come from a certain distribution, for example a normal one. So you relax this assumption and allow the data distribution to be completely arbitrary.

Benjamin: So there are the basic parametric and nonparametric statistics, but what then is semi-parametric? Just to distinguish between the three different types.

Frank: Semi-parametric means that you don't postulate a specific data distribution, but you allow and assume that certain parameters exist – for example, a mean. So when you have an analysis where you assume that a mean exists, but leave the data distribution itself unspecified, then you are in a semi-parametric framework.

Benjamin:  Okay, so basically that’s a compromise. 

Alexander: Yeah, and I think maybe the obvious question is: why should a mean not exist? I think that's one of the tricky questions we need to address here, because a mean does not exist if you can't define it – if the data doesn't have the right properties. If, for example, the data doesn't allow you to define a distance, you can't have a mean. Let's say you just have ordinal data, strictly ordinal in the sense that you can only say whether something is bigger or smaller, but not by how much. Then you can't define a mean. So that's one situation, for example.

Benjamin: Well, the mean doesn't make any sense there, basically.

Alexander: Yeah.

Frank: Or, for example, if the distribution is skewed. If you measure income, for example, the measured incomes are always very skewed. Then the mean might not be the best choice to use as the measure of interest.

Alexander: Yeah. But for income, at least you can say what the mean is – it may not be a good way to describe your distribution for certain questions, but that's not the situation I have in mind. For example, if you have a rating, let's say school grades: you know that A is better than B, B is better than C, and C is better than D, but it's really difficult to say whether the difference between A and B is the same as between B and C or between C and D, or whether the difference from A to C is exactly twice the difference from B to C. In these settings it's really difficult to define a mean from the get-go. But I think the other point is: for certain types of distributions, just looking at the mean doesn't make a lot of sense. So what alternative ways do we have to look at that?

Frank: In this nonparametric framework we usually work with ranking methods – ranking methods are a nonparametric way to run statistical analyses. In this ranking framework you can relax any distributional assumption: all of these methods work for any distribution and any data scale. They work for metric data, they work for ordinal data, as we said, and they even work for binary and dichotomous data. And let me say why we work with ranking methods: in many trials, sample sizes are very small. In my area here at the Charité, where we work in preclinical research, sample sizes are usually very small – we have, for example, eight or nine measurements per group. So we have a trial where we observe a few groups, and sample sizes are very small. In such a situation you cannot estimate the distribution of the data at all, so you cannot make any reliable guess about it. In all of these areas – at least for me – purely nonparametric methods are the method of choice for any analysis, and especially for the planning of the trial. You always have an issue when you plan a trial with very small sample sizes, like in animal trials, if the planning is based on a parametric method, because it's very likely that the data distribution won't satisfy the assumptions of the method on which you based the planning.

Benjamin: Yeah, that's true. It doesn't solve the question of power, but at least it solves the question of the distributional assumptions.

Alexander: Well, I think in that sense it's maybe a more conservative way to calculate the power, because you actually build in the model uncertainty, so to say. If you power based on the assumption that the data are normally distributed, then you might think: I'm okay here, because these additional assumptions gave me more power. But that's just perceived power, because you haven't built in any variability regarding your model. If you think about uncertainty in two different aspects – the uncertainty within a given model and the uncertainty about the model itself – the nonparametric approach encompasses, so to say, both things, because there are nearly no assumptions regarding the model, or at least only assumptions that are really easy to justify, for example that every subject follows the same distribution and that subjects are independent.

Frank: It's a very powerful tool, and I think it's fair to say you're always on the safe side when you run a nonparametric analysis.

Alexander: What are other situations where we should use ranks? We talked about small groups, about being unsure about the distribution, about ordinal data and skewed distributions. Can you think of other situations where it is obviously better to use ranks instead of a parametric approach?

Frank: For example, when you have many outliers, or some extreme outliers, in your data set, then ranking methods can be a very good way to analyze the data. It's fair to say at this point that it depends on what kind of outliers you have. Ranking just means you sort the data: the smallest observation gets rank one, and if you have n observations, the largest observation gets rank n. So it doesn't matter how far off any outlier is from the other observations, which makes it a very robust way to analyze data with outliers. But sometimes the outlier is very informative, and then ranking methods might not be the best choice.
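A tiny R illustration of this point (ours, not from the episode): ranks only record the ordering, so how extreme an outlier is has no influence.

```r
x <- c(1.2, 3.4, 2.8, 5.0)
rank(c(x, 6))     # 1 3 2 4 5 - the new value is simply the largest
rank(c(x, 6000))  # 1 3 2 4 5 - same ranks, no matter how far off the outlier is
```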

Alexander: In these settings where outliers are important, you would probably look at response variables in terms of outliers, things like that. And there's also the whole estimand framework, where we have these composite endpoints: you build new endpoints and say, okay, if a person discontinues for one reason, he gets this score; if he discontinues for another reason, he gets a different score; and if he dies, he gets the worst possible score. That's a composite endpoint that lends itself very nicely to ranks. The composite strategy within the estimand framework is one area where ranks can be applied very nicely, because there you also have an ordinal data set. I think that links to another episode that we recorded a couple of months ago about the composite strategy for the estimand approach, where we briefly touched on this topic. So, if you want to listen to that, just scroll back in your podcast player and find the episode. Okay, one of the problems, of course, is that we don't have means to describe our treatment effects – differences between means and these things. How can we describe a treatment effect?

Frank: When you work with ranking methods, one of the questions is what the treatment effect means – how you can describe the difference between two distributions in such a nonparametric framework, when you don't describe the difference based on means or any other parameter, because in our model we don't have any parameters at all. We do it this way: let's say we have two groups, and we ask, what is the probability that a randomly chosen observation from the first group is smaller than a randomly chosen observation from the second group? So we don't define a treatment effect based on a mean difference or any other parameter – you just ask in which of the groups the data tend to be larger, and use that as your treatment effect. If this probability is equal to 50%, then you can say that in neither of the two groups the data tend to be smaller or larger – there is no difference between the two distributions on this probability scale.
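To make this concrete, here is a minimal R sketch (illustrative; the function name relative_effect is ours, not from Frank's packages). It estimates the relative effect p = P(X < Y) + 0.5·P(X = Y) from the mid-ranks of the pooled sample:

```r
# Estimate the relative effect p = P(X < Y) + 0.5 * P(X = Y).
# rank() assigns mid-ranks, which handle ties in the right way here.
relative_effect <- function(x, y) {
  n1 <- length(x); n2 <- length(y)
  r  <- rank(c(x, y))                # mid-ranks of the pooled sample
  r2 <- mean(r[(n1 + 1):(n1 + n2)])  # mean rank of the second group
  (r2 - (n2 + 1) / 2) / n1           # classical rank-based estimator
}

set.seed(1)
x <- rnorm(20)
y <- rnorm(20, mean = 0.5)
relative_effect(x, y)  # > 0.5: observations in y tend to be larger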

Benjamin: Okay, I understand. I think that's one of the key points: maybe what's holding back the real breakthrough of nonparametrics in clinical trials in general is that it doesn't give you a measure of the quantity of the difference, because it's based on ranks and not on the actual observations.

Alexander: But what you can say is: let's say you have a relative treatment effect of 60%; then, if you take the new treatment, in 60% of the cases you will be better off than with the comparator treatment. Or you have an 80% chance, an 80-to-20 ratio. I think that's actually a nice way to understand your data. What's also nice is that there's a relationship to the parametric case. If you assume a normal distribution and assume that the standard deviation is the same in the two treatment groups, then the null hypotheses match each other: whenever you have a relative treatment effect of 50%, you will have zero difference in your means. And what's also really nice: even if you have the same mean but very different standard deviations in the two groups, you still have a relative treatment effect of 50%. So take a very extreme case: you have just three possible outcomes for the endpoint – 1, 2 and 3. If in treatment group one everybody has outcome 2, and in treatment group two half of them have 1 and half have 3, the relative effect will be 50%, because in 50% of the cases A is better than B, and in 50% of the cases treatment B is better than A. So for me it's a very intuitive way to describe a treatment effect in situations where you just can't quantify so easily how much better a treatment is – you can just say it's better.
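Alexander's three-outcome example can be checked directly with the relative_effect() sketch from above:

```r
a <- rep(2, 10)               # group 1: everybody scores 2
b <- rep(c(1, 3), each = 5)   # group 2: half score 1, half score 3
relative_effect(a, b)         # 0.5 - same mean, different spread, no effect
```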

Benjamin: Yeah, that's the limitation of the numbers.

Alexander: But this limitation is inherent in the data itself – it's not a limitation of the statistical method. By putting assumptions on top of the data, we maybe even fool ourselves into thinking about mean differences when these means don't actually make a lot of sense. What do you say, Frank?

Frank: Well, I think, first of all, you are correct about the treatment effect: it's defined on a probability scale, so some people say it's a little harder to interpret than comparing means – I guess that's true. On the other hand, you define this treatment effect just by asking under which of the conditions the data, or the outcome, tend to be larger. So for me, I don't think this effect is harder to interpret. It's measured on a probability scale, and based on the strength of this effect you can say: the larger the data in the second group tend to be, the larger the probability – by larger I mean closer to one.

Alexander: And I think the other nice point is, if you think about the binary case – and you touched earlier on the fact that you can also apply ranks in the binary case – this relative treatment effect corresponds one-to-one to the difference in response rates. So if you have response rates in the two groups, you can translate the difference between these response rates with a pretty simple formula directly into the relative treatment effect.

Frank: That's a major advantage of this effect: if you postulate any distribution that has certain parameters and then compute the relative effect, this relative effect is a function of the parameters of that distribution – and this holds for any distribution you might think of. As we said before, for the normal distribution the effect is a function of the standardized mean difference. If you have binary data, it's more or less nothing else than the difference between the two success probabilities. And you can continue this list for any distribution you might think of: whenever you compute this relative effect for a given distribution, you can express it in terms of the parameters of that distribution. So I think it's a very nice property of this effect.
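For readers who want the algebra behind these two special cases, here is a short sketch (standard calculations; equal variances assumed in the normal case), with the relative effect defined as p = P(X < Y) + ½·P(X = Y):

$$
X \sim N(\mu_1, \sigma^2),\ Y \sim N(\mu_2, \sigma^2): \quad p = P(X - Y < 0) = \Phi\!\left(\frac{\mu_2 - \mu_1}{\sigma\sqrt{2}}\right)
$$

$$
X \sim \mathrm{Bernoulli}(p_1),\ Y \sim \mathrm{Bernoulli}(p_2): \quad p = (1-p_1)p_2 + \tfrac{1}{2}\bigl[(1-p_1)(1-p_2) + p_1 p_2\bigr] = \tfrac{1}{2} + \tfrac{p_2 - p_1}{2}
$$

So in the normal case the relative effect is a monotone function of the standardized mean difference, and in the binary case it maps one-to-one to the difference in response rates.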

Alexander: So that's the other point: if you assume a normal distribution, you can also see, for a given treatment difference in terms of the means, how that relates to the relative effect in the nonparametric case – it's just a function of the standardized mean difference. That's also a pretty nice way to get a feeling for what a big treatment effect is, because you can basically translate it back to what it would mean in a normal distribution setting.

Benjamin: Okay, we talked a little bit about the treatment effects. But as for the results – how can you visualize the treatment effect? How can you best present the results of a nonparametric analysis?

Frank: What we do, and what we favor, are confidence intervals. We did a lot of research on their computation and derived formulas for confidence intervals for these effects. You can also compute these effects if you have a more complex model than just two groups – you can compute them for all group combinations. And I think confidence intervals are certainly one very nice way to visualize the treatment effects.

Alexander: By the way, as we are talking about that – I guess that is all described in your new book that is about to come out, isn't it?

Frank: Yeah

Alexander: So we'll put a link to that in the show notes. And if I'm not mistaken, your book also comes with some programming help?

Frank: Yes, we implemented many programs for applying these ranking methods. We implemented macros for SAS, and we implemented R packages called rankFD, nparLD and nparcomp. All of these packages focus on different application areas, so I guess they cover a very broad range of possible data models.
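As a starting point, here is a minimal R sketch (the argument names follow the CRAN documentation as far as we know – treat them as assumptions and check ?npar.t.test and ?rankFD):

```r
# install.packages(c("nparcomp", "rankFD", "nparLD"))
library(nparcomp)

set.seed(42)
d <- data.frame(
  y     = c(rnorm(15), rnorm(15, mean = 0.8)),
  group = factor(rep(c("control", "treatment"), each = 15))
)

# Two-sample relative effect with a confidence interval
npar.t.test(y ~ group, data = d, method = "t.app", info = FALSE)

# For factorial designs, rankFD::rankFD() estimates (pseudo-)rank-based
# relative effects with confidence intervals, e.g. (commented, untested here):
# rankFD::rankFD(y ~ group, data = d, effect = "unweighted")
```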

Alexander: Speaking of possible data models: so far we have mostly discussed the two-distribution case, and I think that's good for a first understanding of the relative treatment effect and these kinds of things. But there has been a lot of research over recent decades to extend that in many different directions: research on multiple treatment groups, on multiple time points, on incorporating covariates, on factorial designs. It's quite flexible now, isn't it?

Frank: Yes. Let me go back to the definition of the treatment effect when you have more than two groups. When you have more than two groups, one question arises: how do you define the treatment effect in such a case? What you need is a benchmark against which you define the effect. You define an effect for every group separately: you need a benchmark, which you define as an average distribution, and the effect for each group is then the probability that a randomly chosen observation from that group is greater than an observation from this average. We did a lot of research on the definition of this average. In the standard literature of previous decades, people defined the average as the weighted mean of the distributions, weighted by the sample sizes. We found that this is not the best way to define the effect: because the mean is weighted by sample sizes, your treatment effect will also depend on the sample sizes, which is not what you want – you want a fixed model constant as the treatment effect. So we kept going and defined the average as the unweighted mean of the distributions, which is a model constant. For the estimation, the ranking methods then change: you don't use ranks, but what we call pseudo-ranks – that's why our book is about rank and pseudo-rank methods. These approaches have been generalized to more advanced models: several samples, general factorial designs, longitudinal data. We are also working on methods to adjust the effects for covariates and baseline values. So there's a lot of research going on – in my group, in the group in Göttingen, in Dortmund with Markus Pauly, and by other people working in these areas. For me, it's a very interesting field where a lot of research is going on.
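To make the pseudo-rank idea concrete, here is a minimal base-R sketch (our own illustration, not code from the book; the rankFD and pseudorank packages on CRAN provide tested implementations). The pseudo-rank replaces the pooled empirical distribution, which weights groups by sample size, with the unweighted average of the group-wise mid-ECDFs:

```r
# Pseudo-ranks: N * G(x) + 1/2, where G is the *unweighted* average of the
# group-wise normalized (mid) empirical distribution functions.
pseudo_ranks <- function(x, g) {
  g      <- factor(g)
  N      <- length(x)
  groups <- split(x, g)
  G <- function(t) {
    mean(vapply(groups, function(xi) {
      0.5 * (mean(xi < t) + mean(xi <= t))  # mid-ECDF handles ties
    }, numeric(1)))
  }
  N * vapply(x, G, numeric(1)) + 0.5
}

x <- c(1, 2, 2, 3, 5, 8)
g <- c("a", "a", "b", "b", "b", "b")
pseudo_ranks(x, g)  # differs from rank(x) because group sizes are unequal
rank(x)             # ordinary mid-ranks for comparison
```

With equal group sizes the pseudo-ranks coincide with ordinary mid-ranks; they differ exactly when the allocation is unbalanced, which is the case the pseudo-rank construction is meant to fix.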

Alexander: Yeah, I can remember the discussions about pseudo-ranks. I think it started around the time I was in Göttingen, when we were looking into this treatment effect a little more closely. Up to that point, in the mid-90s, we always had the treatment effect defined relative to the weighted average across all treatment groups – the weighted average of the distributions. We found that it has some nice optimality properties for sample sizing, power and precision, but of course it has its downside in terms of interpretation: if you don't have a completely balanced design, your treatment effect really depends on the sample sizes and on differential sample sizes.

Frank: That's actually the direction of the research we published. When the treatment effect can depend on the sample size allocation, think about the power. We found – and you might be very surprised, because it's a little paradoxical; let me use the word "surprising" – that the power of these tests highly depends on the sample size allocation. What I mean is: with all the classical ranking methods, you can choose the sample size allocation in a way that you get either a significant or a non-significant result, so the allocation plays, or might play, a more important role than the difference between the distributions. That is a very big drawback of the classical ranking methods, and it can be repaired when you define the effects using pseudo-ranks.

Alexander: Okay, yeah, I can see that happening. But the interpretation of pseudo-ranks is basically more or less the same as for ranks: you just get relative treatment effects where the reference is the average over the other treatment groups. So if you have treatment groups defined by different doses of your treatment – say no dose, meaning placebo, low dose, middle dose and high dose – and you want to compare the high dose, you would compare the high dose versus the other three together. You can say: what is the probability that on the highest dose you get a better outcome than if you were randomly put on one of the other three doses? That's basically the interpretation, isn't it?

Frank: So you always compare each group to the mean of the other ones, and based on these probability values – let's say the value for group number four is the largest – you can say immediately that the outcome under dose four tends to be larger than under dose three and the others.

Alexander: And something similar works if we look at time courses, doesn't it? If we look at multiple time points, we not only have the four distributions of the four treatment arms but, let's say, also five visits, so overall we now have 20 distributions. Do you then look at each time point and compare it to all the other time points and all the other doses – so you compare one distribution versus the 19 others?

Frank: Right. But you also want this for data description: you want such an effect size measure for each time-and-dose combination, right? For that reason we define the effect by averaging over all the distributions under each time point and each dose, and then compare each cell – each dose-and-time combination – to this mean. Then you have a very intuitive and easily interpretable effect measure.
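For this longitudinal case, the nparLD package mentioned earlier estimates exactly these relative effects per group-by-time cell. A minimal sketch with toy data (the formula interface and the RTE component reflect our reading of the package documentation – check ?nparLD):

```r
library(nparLD)

set.seed(7)
d <- data.frame(
  id    = factor(rep(1:20, each = 3)),                  # 20 subjects
  time  = factor(rep(1:3, times = 20)),                 # 3 visits each
  group = factor(rep(c("placebo", "active"), each = 30)),
  y     = rnorm(60)
)

# One between-subject factor (group) and one within-subject factor (time)
fit <- nparLD(y ~ group * time, data = d, subject = "id", description = FALSE)
fit$RTE  # relative treatment effect for each group-by-time cell
```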

Alexander: Yeah, and from there you can very easily derive other things, like your average treatment effect across the time points, or your average time effect for a given dose, and these kinds of things, where you just average the relative treatment effects, isn't it?

Frank: Yes, that's absolutely correct. There are lots of ways.

Alexander: Using the unweighted average, isn't it?

Frank: Yes, so today we favor using the unweighted average of the distributions as the reference distribution when defining the treatment effects, and the estimation then naturally leads to the pseudo-ranks.

Benjamin: Yeah, I'm just wondering – we recently had a call on real-world evidence, real-world data. Isn't this a field that nonparametrics could get into, just based on the nature of the data itself? And how do you then handle missing data in a nonparametric context?

Frank: Well, first of all, these ranking methods are applicable in a broad range of areas – from my experience lecturing on these methods in many places, every place has a different field of application. So this is possible. Regarding missing values, we actually work on ways of incorporating them very effectively. What we do is define the methods based on all available cases. Imputation is not straightforward when you work in a purely nonparametric framework, so we try to derive methods based on all available cases and then, for example, use weights based on the information that you have. We published a few papers about missing values, and this is research we continue to pursue, so that hopefully one day we have very nice solutions for this issue.

Alexander: But basically you can apply lots of techniques similar to those you would use in a parametric analysis as well, can't you? You could use simple imputation methods, or you could derive multiple imputation methods as well, couldn't you?

Frank: Well, it depends on a few things. Imputation is usually done based on a model: you usually need a parametric model in the background, and then you estimate the missing data based on the information that you have – that usually goes hand in hand with having a certain statistical model. In a purely nonparametric framework, you don't have a model from which you can estimate the data. You might think of one simple approach: just replacing the missing value with, say, the average of the observed values. That's surely not the best way to do it – I think we all agree. It would induce a correlation with the imputed value, and there's also the question of how to estimate the rank, or the pseudo-rank, of the missing observation. It is possible, maybe, if you assume a very strict missing-value mechanism, like missing completely at random, plus some more assumptions – there are some works where people relax this assumption a little and try to impute, for example in the bivariate case. This is research that is actually going on, but whether there will ever be a very nice solution, I'm not sure. I think purely nonparametric methods have their limitations here.

Alexander: But is it really a limitation here? Or is it just that in the parametric world we make it easy for ourselves, because we just assume a parametric model and then we can work with that? And of course, you could first assume a parametric model to fill in your missing data and then move forward with a nonparametric approach.

Frank: You see, with these rankings – let's go back to the point where we ask where the ranks come from. The ranks are nothing other than plugging every observation into the empirical distribution function. So when you have missing values, the first question is: which mechanism caused the missing values? For the ranking, what you then need to do is estimate the conditional distribution function given a certain missing-value mechanism – and you might already see that this is getting very, very complicated. You can do this when you assume missing completely at random, but as soon as you relax this assumption a little, to missing at random, the estimation of the distribution function given the missing-value mechanism becomes very complicated. That's why I'm not sure how these things will go with even less strict assumptions on the missing-value mechanism than missing at random.

Alexander: Well, I think this is where the whole estimand discussion kicks in. You can say: okay, if you think all these missing data due to dropouts are actually treatment failures, you can assign a certain outcome – that's a composite strategy. And if you have a treatment-policy strategy, then you continue to collect the data, so you can work with that. So I think that is a problem you need to solve on the data side itself, and for me this kind of analysis approach is a second step.

Frank: There's a lot of research going on. One of my PhD students, Castile, is working on these pseudo-ranking methods with missing values. So the research continues, and we have to keep going, so that maybe one day we will have a very nice solution under realistic missing-value-mechanism assumptions.

Alexander: You mentioned a couple of groups working on this. Is there a working group, or a special interest group, that people could join who are interested in doing research on nonparametrics or in learning more about nonparametrics?

Frank: The background is the following: I worked on these purely nonparametric ranking methods, and then one day I wanted to explore better approximations, to get better results for small sample sizes. So I started to work on resampling. One day I got in touch with a research group from Düsseldorf University, where I met Markus Pauly, who was specializing in resampling and permutation testing. That's how the collaborative work began: he had the expertise in resampling, I had the expertise in nonparametric ranking methods, so we began to collaborate on resampling, permutation tests and other work in the ranking framework. And there are some other researchers in the United States and in Canada with whom I've also been working a lot. In this research, everybody usually brings a different field to it, depending on the applications or the areas where they work: for me it has always been preclinical trials, others might do psychological studies, and we try to connect in these areas. From my experience it's always fruitful, and I always learn a lot when I get in touch with new research groups and can see how to enhance a model that I study – you very likely encounter new problems when you generalize your model to settings you never thought about. I think that's how it usually goes.

Alexander: Okay, very good. And with that, I think we had a really nice discussion about nonparametrics. As Benjamin mentioned in the beginning, it's close to the hearts of all of us, because we thought about this and worked on it quite a lot very early in our careers – and Frank was one of the lucky ones who could continuously work on it. It's an awesome area of research; it has been going on for quite some time, and it's now at the stage where there's a complete theory you can draw from, as well as software solutions to directly implement things. So that's really good. As a statistician listening to this episode, think about where this can help you in your day-to-day work – in cases where you think that just assuming a normal distribution is maybe not the best approach and there are better ways to do it. I think this is one of the innovation areas where you can bring new things to your team, and where you can maybe have better discussions about treatment effects, about what you are actually measuring and what is really interesting about the outcomes. For me, especially the composite estimand approach is one of the key areas where we should apply this much more – especially if it's not just a binary approach but we want multiple categories, depending on why patients drop out. I think it's a very valuable approach. And the other areas are the outliers and these kinds of things. So thanks a lot, Frank, for this really nice interview. Everybody who's interested: check out the show notes, where you'll find a link to Frank's work and lots of further work on nonparametric statistics. Thanks a lot!

Frank: Thank you. Thank you so much. I appreciate it. Thank you. 

Alexander: This show was created in association with PSI. Thanks for listening. Just visit theeffectivestatistician.com to find the show notes we talked about in today's episode and to learn more about our podcast, to boost your career as a statistician in the health sector. And especially check out the homepage of the Effective Statistician leadership program: there you can find all the details you need to understand who benefits from it – actually both supervisors and non-supervisors – and what you can get out of it, as well as help to talk with your supervisor about it. So just go to theeffectivestatistician.com/course.

So with that, I'm ending as usual with: reach your potential, lead great science, and serve patients – just be an effective statistician.
