In this episode, I dive into some key questions:
How does luck influence our careers?
How do we stay humble while continually learning and growing?
What can we learn from someone who has shaped the field of statistics for decades? 

I’m thrilled to explore these topics with one of the most influential statisticians, Stephen Senn. Stephen shares his journey, revealing the pivotal decisions that guided his career and the insights he gained along the way.

Join me as we uncover the lessons that can inspire and guide statisticians at every stage of their careers.

More key points:
  • Role of Luck: Influence of luck in career development.
  • Humility: Importance of staying humble in professional growth.
  • Continuous Learning: Emphasis on lifelong learning and adapting.
  • Career Decisions: Pivotal choices that shaped Stephen Senn’s career.
  • Insights: Valuable lessons from decades of experience in statistics.
  • Influence: Stephen Senn’s impact on the field of statistics.
  • Inspiration: Guidance for statisticians at all career stages.

As we conclude this insightful episode, Stephen’s reflections remind us to embrace both luck and continuous learning in our careers. His experiences offer valuable lessons that can resonate with statisticians and professionals alike. I encourage you to listen to the full episode to dive deeper into these ideas and draw inspiration for your own journey.

If you found this conversation valuable, share it with your colleagues and friends. Let’s spread the wisdom and keep the conversation going on how we can all grow and succeed in our careers.

Transform Your Career at The Effective Statistician Conference 2024!

  • Exceptional Speakers: Insights from leaders in statistics.
  • Networking: Connect with peers and experts.
  • Interactive Workshops: Hands-on learning experiences with Q&A.
  • Free Access: Selected presentations and networking.
  • All Access Pass: Comprehensive experience with recordings and workshops.
Register now!

Never miss an episode!

Join thousands of your peers and subscribe to get our latest updates by email!

Get the show notes of our podcast episodes, plus tips and tricks to increase your impact at work and boost your career!

We won’t send you spam. Unsubscribe at any time.

Learn on demand

Click on the button to see our Teachable Inc. courses.


Stephen Senn

Statistical Consultant

Originally from Switzerland, Stephen Senn was head of the Competence Center for Methodology and Statistics at the Luxembourg Institute of Health in Luxembourg, from 2011-2018, Professor of Statistics at the University of Glasgow, from 2003 to 2011, and Professor of Pharmaceutical and Health Statistics at University College London from 1995-2003. He has also worked in the Swiss pharmaceutical industry, as a lecturer and senior lecturer in Dundee and for the National Health Service in England. He is the author of the monographs Cross-over Trials in Clinical Research (1993, 2002), Statistical Issues in Drug Development (1997, 2007, 2021), and Dicing with Death (2003, 2022) and over 300 scientific publications. In 2001 Stephen Senn was the first recipient of the George C Challis Award for Biostatistics of the University of Florida, in 2008 he gave the Bradford Hill lecture of the London School of Hygiene and Tropical Medicine and in 2009 was awarded the Bradford Hill Medal of the Royal Statistical Society. In 2017 he gave the Fisher Memorial Lecture. He is a Fellow of the Royal Society of Edinburgh and an honorary life member of Statisticians in the Pharmaceutical Industry (PSI) and the International Society for Clinical Biostatistics. He is an honorary professor at the University of Sheffield. He retired in 2018 but is still researching and consulting in statistics.

Additional references:

http://senns.uk/ 

Twitter


The Revenge of Time

The pages here are of four sorts:

1. Professional, covering statistical consultancy or research or blogs or my pharmaceutical statistics links or my books Cross-over Trials in Clinical Research, Statistical Issues in Drug Development, or Dicing with Death.

2. Recreational, covering my outdoor interests, in particular, skiing.

3. Guernsey McPearson’s pages.

4. Pedantry Getting Personal

Transcript

The Best Learnings From One of The Most Influential Statisticians

[00:00:00] Alexander: Welcome to another episode of The Effective Statistician. Today, I have a guest that needs no introduction. I always wanted to say that. Stephen Senn. Hi, Stephen, how are you doing? 

[00:00:13] Stephen: I’m fine, thank you. 

[00:00:15] Alexander: Very good. So, as we are recording this, summer [00:00:20] is finally starting, here in Germany and also in Scotland, where Stephen now lives.

[00:00:27] So thanks so much for agreeing to record this podcast. That has been on my list for quite some time, and it’s great that we are now [00:00:40] really getting together. And so my first question is really about your career. You have been in the industry for a very long time. You have worked at various companies and organizations.

[00:00:55] You have also published quite a lot, and [00:01:00] you have a pretty big impact in the academic setting, both in terms of papers and in terms of books. So if you look back over your career, what are your biggest learnings?

[00:01:18] Stephen: Well, I would say the biggest [00:01:20] learning is the importance of luck.

[00:01:22] No, this is actually a serious point, which I will maybe discuss later in the podcast anyway, but one shouldn’t underestimate the importance of luck. That’s not to say that, you know, everything is luck. That’s not true. But as we know from modeling in general, there’s a lot which remains unexplained, and [00:01:40] we just have to call that luck.

[00:01:41] In my case, I think I was lucky in a number of things. First of all, I came to Britain wanting to study mathematics, but my background in Switzerland was not enough to prepare me for the British idea, which was basically that you should have studied maths and pretty much nothing else from the age of [00:02:00] 16 to 18 if you intend to do mathematics at an English university.

[00:02:05] And so I looked for something else. And I found economics and statistics. And so I did that as an undergraduate degree. And I then found that I didn’t like economics. And so then I looked for something else to do because in the [00:02:20] meantime, at university in my third year, I’d met a first year student who is now my wife.

[00:02:24] And I wanted an excuse to stay on at the university. And the only course I could find, the only MSc course I could find for which I was remotely qualified, was an MSc in computing and statistics. And so I did the MSc in computing and statistics, and I discovered I don’t like computing. [00:02:40] So, by process of elimination, I thought, oh, maybe what I should be is a statistician. But I did not complete my MSc.

[00:02:49] I did not write up my project. And it was my later wife, as we were by then married, who insisted that I go back and write up my project. And so some years after I should have graduated with an MSc, I [00:03:00] did graduate with an MSc. And that later enabled me to do a PhD and so forth. And if those things had not happened, I would not have had a career as a statistician.

[00:03:09] And there have been other things along the way where, you know, I made a lucky change of career and so forth. I started out working in the health service, did that for three years, then was a lecturer for [00:03:20] nine years in Scotland, in which I learned a lot about a lot of things, but not in a very deep way, because you were always having to lecture on new courses and everything, and that was useful.

[00:03:30] And I enjoyed eight years, but I didn’t like the ninth year. I switched to the pharma industry, and I’d learnt my lesson, and after eight years, I switched again. [00:03:40] And I was a professor at University College London. And again, I learnt my lesson, and after eight years, I switched and went to Glasgow, where I was for eight years.

[00:03:49] And then I switched, and I went to Luxembourg, working in a research institute. There I ran out of time: after seven years, I hit 65. I had to retire. I had no choice. I didn’t make my [00:04:00] eight-year rule. But that’s basically the pattern I’ve observed. And I feel I’ve been lucky because I’ve had experience in lots of different areas and managed to learn from a lot of very interesting and clever people about statistics as I’ve gone along.

[00:04:13] So that’s also partly been luck. 

[00:04:16] Alexander: Yeah, luck is definitely an [00:04:20] important factor. I think that also calls for being a little bit more humble about things, and also giving a little bit of grace to yourself, yeah, if things don’t work out as fast or as well as you would like in your career.

[00:04:39] Well, [00:04:40] sometimes, yeah, you don’t have the luck. Now, I also think opportunities come to those with a prepared mind. So yes, luck is one thing, but if things change and you see an [00:05:00] opportunity, well, you also need to seize it. So what made you move, and why did you switch?

[00:05:11] Stephen: Well, I think partly because, in my first job in the health service,

[00:05:15] I learned some useful things, but I then came to the conclusion after three years that I wasn’t going [00:05:20] to progress or develop. And so I was looking for something completely different, and I went to be a lecturer at what is now the University of Abertay Dundee, but at the time it was a college of technology.

[00:05:32] So I suppose, to be unkind, one would have said it’s a second class university, but let’s say a university with an emphasis on practical matters in [00:05:40] particular. And I learned a lot of statistics from that, also from a colleague, an older colleague of mine who’s since died, Gilbert Rutherford, who taught me a lot.

[00:05:49] I shared an office with him, so that was also lucky. So yeah, one was prepared in a way to learn things. I sometimes found that I’d be told to give a course at short notice, and I [00:06:00] was then literally one week ahead of the students in learning the material. You know, I had some classes where the students were older than I was.

[00:06:12] Yeah. And of course, in many ways they were much more knowledgeable about many things than I was. So you also have to be very, very [00:06:20] careful not to try and appear to be the all-knowing, all-wise lecturer when that’s the case.

[00:06:27] Alexander: I love that point. I think we should all embrace

[00:06:31] more of that attitude of constantly learning and knowing that we don’t know everything. [00:06:40] And that’s another way to stay humble and to keep learning. So keep on learning and growing yourself. Was that one of the key reasons for you to switch between different opportunities?[00:07:00]

[00:07:00] Stephen: Yeah, I think so. That was one of the reasons. I think also, yeah, learning opportunities. I think I was lucky at a particular time of my life when I went to work for Ciba-Geigy, a forerunner company of Novartis, in Basel, which is actually my hometown in Switzerland, although I had not lived there as a child; but officially I’m a Basler.

[00:07:19] [00:07:20] That was a wonderful opportunity, and all sorts of practical problems presented themselves. But I was in a job where publishing methodological papers was not part of my job description. I used to say, unkindly, that Ciba-Geigy had two departments. One was full of people with [00:07:40] lots of things to do and no time to think.

[00:07:43] And the other was full of people with lots of time to think and nothing to do, but that’s slightly unfair on the methodology department. Anyway, I was not in the methodology department; I was in the clinical trials department. But there were so many wonderful problems that came up all the time. And I thought, what am I [00:08:00] going to do with this?

[00:08:00] And so I spent the evenings writing papers. The papers were not written in company time; they were actually written in my own time. But I had such wonderful examples to benefit from, and really I think it was lucky that I ended up in that situation, rather than in the methodology group, because having my nose [00:08:20] rubbed in practical matters all the time was really good for my development.

[00:08:23] I needed it. I needed to stop, you know, just imagining that somehow fancy mathematical work or whatever was really what it was about. It’s part of it, but it’s not really what it’s about.

[00:08:36] Alexander: Yeah. I love that: if you [00:08:40] actually do the work, the real challenges keep you from working on problems that are not as important or not as relevant, you know, kind of ivory-tower research. And I think this is super important.

[00:08:58] Now, [00:09:00] you’ve written lots of papers and also some books. So how did you do that? What are your recommendations for others who would also love to publish a lot?

[00:09:14] Stephen: Well, I mean, a lot of my early publications were short pieces, actually, funnily enough, commenting [00:09:20] on things that other people had written, which I disagreed with.

[00:09:22] In some cases, I think they were just technically wrong, although sometimes plausible. And I got into them simply because I was trying to repeat the work. I would see something and say, oh, that’s interesting, how does that work? And then I would play around with it, try this, try that, try the other.

[00:09:39] [00:09:40] And then suddenly something would click, and I would say, well, maybe this isn’t right after all. And then I would try and find counterexamples or, you know, see where the problem was. And so that got me into publishing thought pieces where really there was no reason for the journal to refuse to publish them because [00:10:00] basically I was saying, look, this is not right.

[00:10:03] And then I gradually progressed to doing longer pieces like that. And then it was my wife again who suggested I write a book, and I thought, well, what can I write a book about? You know, what am I an expert in? Am I an expert in anything? And she said, well, don’t you do a lot of crossover trials? And I said, oh yes, I do.

[00:10:18] And she said, well, why don’t you write a [00:10:20] book about that? So I did. So it partly depends. Well, I have to say I’m very, very lucky because although I’m Swiss, my mother was English, and English is my first language. And I sometimes get somewhat annoyed with my British colleagues because they [00:10:40] underestimate the sacrifices that people who are not native speakers have to make in order to make the same progress in publishing.

[00:10:48] So there’s a terrific advantage in being a native English speaker, in that sense. You know, I’ve always spoken English. For some of my life (I was born in Germany, in fact, in Saarbrücken), [00:11:00] I also spoke German as my other language, but I switched, and then for some of my life I spoke French as my other language. The net result was that I ended up thinking in English, and that has been an advantage for publishing, no question.

[00:11:13] Alexander: So when you actually write a paper, how do you start?

[00:11:18] Stephen: Well, a lot of it, a lot of [00:11:20] it would involve some mathematics and obviously to a certain extent one has to have solved the mathematical problems one wants to present. And so it will involve thinking about a particular problem. I’m working on a paper at the moment, actually with, with two other statisticians.

[00:11:38] And there, I [00:11:40] think it’s been a back and forth between us, but a lot of my papers have been single-author papers. And then, yes, you’ve got to have decided what particular problem you have solved, what you’re stating in this particular paper. And even if it’s not a mathematical problem, you’ve got to decide how to solve it.

[00:11:53] And the way in which I usually solve these things is I spend a lot of time thinking about them before I write down [00:12:00] anything. So you know, really to understand what’s involved. And then if I want to say something about analysis of covariance and what exactly are the key features of this and so forth, think about it and think about it.

[00:12:13] And then start writing. There has to be a stage at which you write. I mean, you write out [00:12:20] the mathematical bit that you want to do; you actually try and sketch out the ideas that you’re trying to present and get those sorted out. By the way, I also have a piece of software that I regularly use to help me solve mathematical problems.

[00:12:33] I’ve been using that since 1993, I think, or ’94. [00:12:40] And that is Mathcad, which is a mathematical software package used mainly by engineers. There are very few statisticians who use it. I think more statisticians might use Mathematica, or also MATLAB to a certain extent, which serves a slightly different function.

[00:12:58] And so that enables me to [00:13:00] do quite a lot of theoretical development. I often will create quite an elaborate Mathcad solution to a particular problem so I can see exactly how it goes. It’s very easy to read on the whole because the language, although it’s a programming language, looks just like mathematics.

So if you want to do an integral, [00:13:20] you actually write down the integral. It looks exactly as if it were in a paper. So that already helps in a certain way in developing a lot of stuff. And also, as regards problems, I always say that anybody who wants to be a serious statistician has to understand every solution in two ways [00:13:40] at least.

[00:13:41] One is the formal mathematical way, "this works because", and then you can work through a particular proof. But there’s also the intuitive way: if it doesn’t make any sense intuitively, what’s going wrong?

[00:13:55] Alexander: Yep, yep, yep. 

[00:13:57] Stephen: And sometimes it’s taken me very late in my career [00:14:00] to realize this.

[00:14:01] For example, I realized some years ago that a common thing that’s said about sequential analysis in the frequentist mode is not correct. Sequential analysis doesn’t in fact bias in the way that some people think it does if you don’t adjust, provided that you weight [00:14:20] according to the amount of information produced.

[00:14:22] So if you were doing a meta-analysis and you discovered that all the trials going into the meta-analysis were in fact sequential trials, some of them had finished early, some had gone to the end and so on, then, provided you weight them according to the amount of information, the trials that [00:14:40] stopped early would

[00:14:41] get less weight than the trials that came to the end. And in that case, you can show, and I have a paper in Pharmaceutical Statistics, that the biasing factors cancel out. They cancel out by weighting, because anything which is optimistically good is actually likely to have had less information, and is then down-weighted relative to the trials that went to the end.[00:15:00]

[00:15:00] And what people have forgotten is, if a trial goes to the end, then it can’t have been particularly good in the early phase, because otherwise it would have stopped early. And so once you mix the two types of trials together appropriately, as in a fixed-effect meta-analysis, it cancels out.

[00:15:18] But this took me, I don’t know why it took me [00:15:20] so long to realize. I suddenly realized I’d been thinking about sequential analysis incorrectly for all this time. And that’s an example where at the back of my mind I was worried about something, because it didn’t make intuitive sense.
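
Stephen’s cancellation argument can be sketched numerically. The following is an illustrative simulation of my own devising (not code from his Pharmaceutical Statistics paper): two-stage trials with a one-sided interim look, a true treatment effect of zero, each trial reporting the mean of whatever data it collected. The naive unweighted average across trials is biased upward, while the fixed-effect combination weighted by information (final sample size) is not.

```python
import numpy as np

# Illustrative sketch: two-stage trials, interim look after stage 1.
# True effect is 0, so any systematic deviation from 0 is bias.
rng = np.random.default_rng(42)
n_trials, n1, n_extra = 200_000, 50, 50                 # interim at n=50, full n=100
m1 = rng.normal(0.0, 1.0 / np.sqrt(n1), n_trials)       # stage-1 sample mean
m2 = rng.normal(0.0, 1.0 / np.sqrt(n_extra), n_trials)  # stage-2 sample mean
stop_early = m1 * np.sqrt(n1) > 1.96                    # stop if interim z > 1.96
# Each trial reports the mean of all the data it actually collected:
est = np.where(stop_early, m1, (n1 * m1 + n_extra * m2) / (n1 + n_extra))
info = np.where(stop_early, n1, n1 + n_extra)           # information = final n
naive = est.mean()                           # unweighted average: biased upward
weighted = (info * est).sum() / info.sum()   # fixed-effect (information) weights
print(naive, weighted)   # naive is clearly positive; weighted is close to zero
```

Trials that stop early carry less information and so get less weight, exactly the down-weighting Stephen describes: the optimistic early stoppers are offset by the trials that ran to the end.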

[00:15:38] Alexander: Yep. Yep. Yep.

[00:15:39] Stephen: And then in [00:15:40] the end I could look at it. I’ll give you one more example. In my book on crossover trials, even in the second edition, I write that trials in infertility are not suitable for crossover designs. And the reason I said that was that if a couple conceives on the first treatment, then obviously they’re not going to try the second one.

[00:15:57] So essentially, to the extent that a [00:16:00] crossover trial in infertility is successful, it will actually not provide the information for both treatments. But then suddenly it occurred to me, well, this is very odd, because what am I doing? It’s as if I had a parallel group trial.

[00:16:19] And then I [00:16:20] said, right, okay, that’s the end of the parallel group trial. Now, those of you for whom it didn’t work out, you can try the other treatment. Now, everybody says a parallel group trial is okay. So why should collecting extra information, once you have completed a parallel group trial, be completely unacceptable?

[00:16:39] That’s illogical. [00:16:40] It must be possible, allowable, to have the extra information. The challenge is how you would use the extra information. And then I had a PhD student whom I set to work on the [00:17:00] problem, and we have a paper together explaining exactly how you can do this. And now again, I tend to think, how could I have been so stupid as to think that this was true?

[00:17:05] Alexander: Yeah, it’s very nice to hear from someone like you that, first, it takes quite some time to overcome these challenges, and that we can correct ourselves over time. I think this [00:17:20] is really, really important. We have a development over our careers, and we definitely make mistakes early on.

[00:17:27] We have wrong assumptions or misperceptions about things. But that shouldn’t stop us from correcting ourselves later on. So, [00:17:40] you mentioned publishing to solve a specific problem. Now, when I think about these different problems, there are very often side problems, or ways to make the paper bigger. So how do you [00:18:00] limit a paper so that it doesn’t get too broad?

[00:18:06] Stephen: Oh yeah, I suppose that’s a problem. And most papers tend to end up being longer than I intended, it’s true. Yeah, it’s sort of annoying [00:18:20] sometimes, because also sometimes you find that you deliberately didn’t cover a development from a paper because you said, well, yes, that would happen, and it’s sort of obvious, and anybody who wants to can join the dots, and you don’t do it.

[00:18:36] And then sometime later you find that somebody said, yeah, but what they [00:18:40] overlooked was such and such. You didn’t overlook it; you actually just decided not to put it in the paper. But nevertheless, there is a limit, I think. Of course, most journals have some sort of formula, formal or informal, as to what they would accept.

[00:18:54] And sometimes you have to propose to have a part one and a part two paper or whatever, [00:19:00] and publish something later. Yes, it’s similar: although I can’t always say that I’ve kept my papers to a reasonable length, I do pride myself that I don’t go over the time allotted to me when I’m giving a lecture. [00:19:20] That’s a similar discipline, and I get really annoyed at conferences where the chairman says, oh, there’s just time for one quick question, when in actual fact there isn’t time, because the speaker has used up all the time that was allotted for the questions.

[00:19:34] Alexander: Yeah. Yeah. I find this also super annoying, 

[00:19:38] Stephen: Really, really [00:19:40] annoying. The speakers have to be disciplined in that way. And the same is true, I think, to a certain extent with papers. And certainly when speaking, there is a difference, which is something you have to understand, between what you want to say and what your audience can hear.

[00:19:59] Alexander: Can you elaborate a [00:20:00] little bit on that one?

[00:20:01] Stephen: Well, I think that you’ll find that some speakers choose to stuff their lectures with details which really won’t come across in a lecture in the time given. You’d have to have a super, super intelligent person, someone who was really up on the field, almost to the level that you [00:20:20] were, to understand, if you’re trying to get that across in 20 minutes or whatever you’ve been given, or 45 if you’re, you know, some sort of invited speaker.

[00:20:28] And you have to ask yourself, well, what can they understand? And the other thing that you have to think about is whether you really want to spend the time covering the [00:20:40] algebraic derivation of a particular result, or whether instead you want to show them a plot of some function which incorporates what the result is, and try to give them some intuitive understanding of, you know, why the result is the way it is.

[00:20:54] So there’s a tension there. And I think also in writing papers you can do a similar [00:21:00] thing by deciding to put difficult material into an appendix, which I often think is a wise thing to do, so that you don’t interrupt the main flow of the argument with intricate derivations, which you can leave to the appendix. But, you know, tastes will differ, I suppose, regarding [00:21:20] that.

[00:21:20] Alexander: I completely agree that whenever you communicate, you need to take care of the audience. And you need to adjust to the different situations. Yeah. If you give a presentation, then communication is synchronous. So [00:21:40] people need to follow you at the speed at which you’re giving the presentation.

[00:21:46] And if you’re too detailed, or if you’re too fast, it becomes nearly impossible for people to follow. And that is actually a disservice to the audience. And [00:22:00] for me, that’s basically a bad presentation. So providing an idea, an overall, intuitive understanding of what the solution looks like, is something that you can bring across

in a presentation. [00:22:20] For asynchronous communication like a paper, people can spend much more time. And then of course, the other nice thing about a paper is that it’s not necessarily sequential, you know, like a presentation, [00:22:40] which has a start, a middle, and an end. With a paper, you can flip around, you can read it and go back, you can go to the appendix, read another reference, discuss it with some colleagues.

[00:22:57] So all of that you [00:23:00] can do with a paper, you can’t do with a presentation. So always make sure that you adjust your communication to these different settings. I think that’s a great key point. Yeah. You have given lots of presentations over your career. What do you think [00:23:20] about the presentation skills of statisticians in general?

[00:23:25] Stephen: I’ve listened to a lot of good presentations as well. So I think, on the whole, statisticians are not too bad. I think there are some simple tricks that one can learn. I was fortunate that when I was at [00:23:40] the Dundee College of Technology, I went on a sort of course about the technology of presentation.

[00:23:47] But remember that this was the era before computer projection, when we were preparing stuff on acetates. You know, if we were going to a conference, it would be a pile of acetates [00:24:00] that you then used and presented. But nevertheless, there were some useful things that were taught to me, some small tricks that people overlook.

[00:24:08] One strange thing is the use of typeface. A lot of people imagine that using block capitals makes things easier to read. Oh no, it’s actually the opposite. And [00:24:20] they’re confusing two particular things. One is that if you write out something in handwriting, then usually you can read someone having written in block capitals quicker than you can read their own handwriting, because their handwriting is idiosyncratic and you have a much bigger interpretation task

[00:24:36] in the handwriting. But actually when it comes to print, it reverses: the [00:24:40] word shape is much, much clearer if you’re not using uppercase. And so you find people who’ve got a slide for the audience with everything on the slide in uppercase, and I think, why are you doing that?

[00:24:54] This is really stupid, you know? You should use lowercase; it’s quicker to read. And there’s been [00:25:00] quite a lot of research done on that. So this is something that was hammered home to me when I was a young lecturer in this institution, which took teaching quite seriously.

[00:25:10] And that’s one point. The other point is the importance of sticking to time, as I said. Another point, which I think statisticians are [00:25:20] quite good at, is that you should try to graph things as much as possible. So, you know, for example, there’s a typical error that people make, I think, in presenting simulation material.

[00:25:32] I also think there’s far too much simulation used, by the way. Not enough theory. There should be more theory, less simulation. But anyway. In presenting [00:25:40] simulation material, they give you a table which is full of figures. And they say, I appreciate that this is hard to read. Well, if you appreciate it’s hard to read, why did you put it up in the first place?

Yeah. What you should do instead is show some plots.

Yeah. Yeah. Show some plots of the [00:26:00] results or whatever, and then say a few things about them, you know, the key features. But giving people a great big table like that to digest during a lecture is just completely stupid.

Yeah. Yeah. So, you know, there are things like that. I think on the whole most of us do pretty well in avoiding [00:26:20] those particular errors. But occasionally you find some surprising mistakes.

[00:26:29] Alexander: Yeah, I think if you invest time in becoming a good teacher, that helps for lots [00:26:40] of different things.

[00:26:41] Because as a good teacher, you take care of the students, you take care of the audience. You always think about how you can do it in a way that’s best for the audience. So you think less about yourself and more about [00:27:00] the people that you’re talking to. Yes. That makes a huge difference.

[00:27:05] Stephen: Absolutely. I mean, as I often say to people, if you really want to try and think about what’s going on, then you have to be on the stage. Yes. But also in the hall: you have to think, what’s it [00:27:20] like out there, listening to me like this? Yeah. So you’re on the stage and in the hall; that’s how it has to be.

[00:27:29] Yes. You’re not the same as everybody else, because you’re the person who has to be talking, but nevertheless, you have to put yourself in that position and see what they are experiencing, what is [00:27:40] happening here.

[00:27:40] Alexander: Yeah. And that’s exactly the problem with the table: it’s the difference between a presentation and a report. In a report,

[00:27:51] you can look over the table and you can look where’s the biggest, where’s the smallest, what are the trends, [00:28:00] what exactly is the number, and can I compare it to something else? A table is great for looking up data, but it’s really, really bad for communicating what’s going on.

[00:28:13] You mentioned already a couple of misperceptions. What do you think are currently the biggest [00:28:20] mistakes that statisticians make and that really annoy you?

[00:28:25] Stephen: Well, I think that medical statisticians as a whole have one great big failure, and that is as regards measurement. We are not really getting through the [00:28:40] message as to what is sensible about the measures that people use in clinical research.

[00:28:44] I think that responder analysis, which is common everywhere, is an absolute crime. It’s unbelievable how awful it is. And it gets me somewhat annoyed when you compare it to a topic which is completely fashionable at the moment, still [00:29:00] fashionable, been going for some years: flexible designs. I’m not against flexible designs.

[00:29:03] But the contribution that flexible designs can make to improving efficiency is very minimal. It’s not worthless. Dynamic decision making is always better if you get it right, no question about that. And a flexible design is a way to help you do [00:29:20] that. But there’s an awful lot of absolute nonsense talked about how much they’re going to save the pharmaceutical industry and so forth.

[00:29:26] This is ridiculous. If you want to cut drug development costs, just completely ban all responder dichotomies tomorrow. We know that when you dichotomize data, [00:29:40] if you cut at the median, then you have to increase the sample size by about 60 percent or something like that.

[00:29:48] And if you cut somewhere else, you could double or treble the necessary sample size. But we’ve made absolutely no impact on this. I’ve written about this several times. It makes no difference. And I honestly think that now [00:30:00] is the time for action. What we need to do is insist on doing all sample size calculations two ways.

[00:30:06] On the continuous outcome, say forced expiratory volume in one second, and on the [00:30:20] responder dichotomy, the number of people who have an FEV1 which is 15 percent more than baseline. Let’s do the calculation both ways. And then the life science colleague says, yeah, but you know, the responder dichotomy is more clinically meaningful.

[00:30:32] And you say, yeah, but you know, clinically meaningful costs you 5 million euros. Is that how [00:30:40] you want to spend your money? Let’s present both figures to the board and see how we get on.
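Stephen’s “about 60 percent” figure is the standard efficiency result for dichotomizing a normally distributed outcome: relative to analysing the continuous mean, a dichotomy at cut-point c (in SD units) has efficiency φ(c)² / (Φ(c)(1 − Φ(c))), and the required sample-size inflation is its reciprocal. A minimal sketch of that calculation (illustrative only, not from the episode):

```python
import math

def dichotomy_efficiency(c: float) -> float:
    """Efficiency of a responder dichotomy at cut-point c (in SD units),
    relative to analysing the continuous normal outcome."""
    phi = math.exp(-c * c / 2) / math.sqrt(2 * math.pi)  # standard normal density
    Phi = 0.5 * (1 + math.erf(c / math.sqrt(2)))         # standard normal CDF
    return phi ** 2 / (Phi * (1 - Phi))

# Sample-size inflation is the reciprocal of the efficiency.
print(round(1 / dichotomy_efficiency(0.0), 2))  # median split: pi/2, i.e. roughly 60% more patients
print(round(1 / dichotomy_efficiency(1.0), 2))  # cut 1 SD from the mean: more than double
```

Moving the cut-point further from the median makes the inflation worse, which is the “double, treble” point above.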

[00:30:47] Alexander: So in terms of this, I see that this is very often kind of related to clinically meaningful differences.

[00:30:59] Yeah. [00:31:00] And then people say, well, if I do a responder analysis and I see a significant difference, [00:31:20] I have shown that one treatment is significantly better than the other. And I get really hung up about that part, because whenever you have a mean change

Yeah. And you dichotomize it, well, [00:31:40] this mean change, of course, always corresponds to some response difference.

[00:31:48] Stephen: Yes. As soon as you know the mean response, it just depends where you put the threshold what the difference will be. So you could have a complete shift, which has got nothing to do with anything like individual response.

[00:31:58] And you would see this. [00:32:00] It doesn’t have to be the explanation, but there’s no way you can prove it’s not the explanation. Unless you do something more: you have to be doing something like a crossover trial, repeated measures, or something else, to show that there is in fact a difference between people who respond and don’t respond.

[00:32:18] Otherwise, it’s just a shift thing. [00:32:20] That’s all. 
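The shift point is easy to demonstrate: if a treatment improves every patient’s outcome by exactly the same amount, the two arms still show different “responder rates”, with the size of the difference set purely by where the threshold sits. A hypothetical sketch (the effect size, SD, and cut-off below are made-up numbers):

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

delta, sd, threshold = 0.3, 1.0, 0.5  # hypothetical uniform treatment effect, SD, responder cut-off

# Every patient improves by exactly delta -- there is no "responder" subgroup at all.
p_control = 1 - norm_cdf(threshold / sd)
p_active = 1 - norm_cdf((threshold - delta) / sd)
print(round(p_control, 2), round(p_active, 2))  # yet the arms differ in "responder rate"
```

Observing a responder-rate difference therefore cannot, by itself, distinguish genuine patient-level heterogeneity from a uniform shift.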

[00:32:21] Alexander: Yeah. Yeah. And if it’s not a shift thing, then I think you need to look into it more closely anyway. 

[00:32:28] Stephen: I mean, what people don’t realize, again, is that they make the mistake, I think, of comparing the result in the active group to the average result in the control group.[00:32:40]

[00:32:41] They look at the people who are the worst in the active group and they say, well, these people were actually worse than the average in the control group, so it didn’t work for them. And I say, well, why are you comparing the worst patient in the active group with the average patient in the [00:33:00] control group?

[00:33:00] Shouldn’t you be comparing the worst patient in the active group with the worst patient in the control group? If you draw the two curves, you can often see that there’s a complete separation. And if you were to match them rank by rank, you would find something similar to a constant shift between the [00:33:20] two groups.
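The rank-by-rank matching Stephen describes can be sketched with simulated data: under a constant treatment shift, pairing the sorted outcomes of the two arms recovers roughly that shift at every rank, worst with worst, median with median. (The numbers below are simulated, not trial data.)

```python
import random

random.seed(1)
shift, n = 1.0, 10_000  # hypothetical constant treatment effect, patients per arm
control = sorted(random.gauss(0, 1) for _ in range(n))
active = sorted(random.gauss(shift, 1) for _ in range(n))

# Match rank by rank: worst vs worst, median vs median, best vs best.
diffs = [a - c for a, c in zip(active, control)]
print(round(sum(diffs) / n, 2))  # rank-matched differences average close to the constant shift
```

Comparing the worst active patient to the *average* control patient instead would make the treatment look harmful for that patient, which is exactly the misunderstanding described above.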

[00:33:21] So you’re failing to understand who that worst person in the treatment group would have been, had they been treated with the control treatment. That’s a complete misunderstanding. But also, people will say, well, the dichotomy is clinically meaningful, but then when you have a look, they’ve not been consistent.

[00:33:40] Different authors have used different

[00:33:41] Alexander: Yeah. Yeah. 

[00:33:42] Stephen: Different definitions of responders sometimes. But it’s just astonishingly ignorant and bad. And you know, I’ve seen papers by theoretical statisticians in which they’ve considered how you should [00:34:00] analyze binary data, but actually when you have a look, they’re not analyzing true binary data, they’re analyzing dichotomies.

[00:34:05] Alexander: Yeah. The other thing is, even if you have, you know, not normally distributed data, skewed data, well, that’s not an excuse to dichotomize it. [00:34:20]

[00:34:20] Stephen: No, not at all. I mean, try and find a transformation if you can, and say something on that particular scale. So, I mean, I think that’s one of the things.

[00:34:29] The other thing is that, in general, I think medical statisticians have not really [00:34:40] learned the lessons about random variation that statisticians who worked earlier in agricultural statistics and in industrial statistics have learned. So one of the lessons from industrial statistics, from quality control, is: don’t try and deal with non-assignable causes.

So [00:35:00] if you cannot identify what is causing a particular variability, then you have to live with it. And you have to deal with the system in a way which can accept the variability. But, as they found, if what you do is keep on stopping [00:35:20] the process and checking to see whether there’s some apparent drift, and then try to address the drift without knowing whether the drift is just random noise,

then what happens is you actually increase the variability in the process and you harm the product quality. And I think that sometimes what we’re trying to do is chase variability in [00:35:40] observed response which is simply inexplicable. We are not at a stage where we can explain why these things vary.
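This is the lesson of Deming’s famous funnel experiment from quality control: “correcting” a stable process for each random deviation does not tighten it, it roughly doubles its variance. A small simulation sketch (illustrative, with made-up noise):

```python
import random

def run(n: int, tamper: bool, seed: int = 0) -> list:
    """Simulate a stable process aimed at 0; optionally 'correct' the setting after each result."""
    rng = random.Random(seed)
    setting, results = 0.0, []
    for _ in range(n):
        y = setting + rng.gauss(0, 1)  # the deviation from target is pure noise
        results.append(y)
        if tamper:
            setting -= y  # adjust to cancel a deviation that has no assignable cause
    return results

def variance(xs: list) -> float:
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

n = 100_000
ratio = variance(run(n, tamper=True)) / variance(run(n, tamper=False))
print(round(ratio, 1))  # tampering roughly doubles the variance of the output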

[00:35:47] And yet we insist on saying, oh well, personalized medicine, or precision medicine, to use another phrase, will somehow find the answer to this. And maybe the answer is no, it won’t. Because all that you’ll be [00:36:00] doing is labeling particular people as being one sort of person when actually, if you were to come back and measure them another time, they wouldn’t behave like that at all.

[00:36:08] So that’s one of the things we failed to learn from industrial statistics. We should look much more closely at what the quality assurance movement has done and see how it would apply to [00:36:20] pharma development.

[00:36:21] Alexander: Yeah. So that is basically one of the big challenges that people often run into: doing subgroup analyses, modeling all the variation as much as possible, all kinds of different things.

And then later on, we find that [00:36:40] much of what is done there you can’t repeat in other datasets. That’s exactly the point, isn’t it?

[00:36:51] Stephen: Yes. You can’t repeat it. And I think another lesson from manufacturing statistics that we could maybe have [00:37:00] learned is how they made progress with complex factorial designs.

[00:37:04] So you might find in the manufacturing world that you have 10 particular factors that you could set at high or low levels. That gives you two to the power of 10, so in excess of a thousand possible experimental combinations that you can run. [00:37:20] But you actually have a look at what the budget will allow,

and you say, well, it can maybe run 64 runs. So how do you deal with that? Well, you deal with it by running a fractional factorial, which will maybe enable you to estimate all the main effects and possibly, depending on how things are, first-order [00:37:40] interactions. And what happens to the rest?

[00:37:42] Well, the rest you declare to be random. You just say, well, it could be that there’s a small effect at a third-, fourth-, fifth-level interaction or whatever, but basically it’s unlikely to be very important, so we just treat it as if it were random. And by treating it as if it’s random, experimentally [00:38:00] huge progress can be made in understanding which elements of the variation are actually addressable. So you deliberately ignore all these possibilities. And I think we’ve gone the other way with personalized medicine. Whenever you see a slightly strange result: oh, that must be [00:38:20] explicable in terms of the blah, blah, blah gene, or some omic, or whatever. And ultimately that’s just unscientific.
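Stephen’s 2^10-in-64-runs example is the standard fractional factorial construction: take a full factorial in 6 base factors (64 runs) and define the remaining 4 factors as products of base columns. The generators below are hypothetical choices for illustration; a real design would pick them to maximize resolution.

```python
from itertools import product

# Full 2^6 factorial in six base factors (64 runs), coded -1/+1.
base = list(product([-1, 1], repeat=6))

def generated(run: tuple, idx: tuple) -> int:
    """Extra factor defined as the product of the chosen base columns."""
    p = 1
    for i in idx:
        p *= run[i]
    return p

# Four extra factors from (hypothetical) generators, e.g. G=ABC, H=ABD, J=ACD, K=BCD.
generators = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
design = [run + tuple(generated(run, g) for g in generators) for run in base]

cols = [[row[j] for row in design] for j in range(10)]
print(len(design), len(design[0]))  # 64 runs, 10 factors

# Every main-effect column is balanced and orthogonal to every other column,
# so all 10 main effects remain estimable from only 64 of the 1024 runs.
assert all(sum(c) == 0 for c in cols)
assert all(sum(a * b for a, b in zip(cols[i], cols[j])) == 0
           for i in range(10) for j in range(i + 1, 10))
```

The price, exactly as described, is that high-order interactions are aliased with the effects you estimate and are deliberately declared to be noise.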

[00:38:29] Alexander: Yeah. And I think that leads to a lot of publications which [00:38:40] confuse more than they actually enlighten.

[00:38:43] Stephen: Well, yes. I also have to say I sometimes think there’s an enormous amount of work on prognostic scores where in the end you have to ask, well, what’s the point anyway?

[00:38:55] Alexander: Yeah, if you can’t change it. 

[00:38:56] Stephen: If you have a look at the progress in medicine, [00:39:00] I mean, we could diagnose diabetes 2000 years ago,

[00:39:04] but it took until something like 1920 or whatever to do anything serious about diabetes. So it doesn’t necessarily follow that just because you can diagnose a mass of things, it will automatically lead to therapy. In some cases I think the [00:39:20] diagnosis will be sort of stupid, but even where it is meaningful, it doesn’t necessarily lead to treatment.

[00:39:29] In short, that’s the current bandwagon: precision medicine.

[00:39:35] Alexander: That’s great. Love it. Now if we look into the future of [00:39:40] biostatistics, what do you see as the biggest challenges?

[00:39:43] Stephen: I don’t know. I mean, I think one practical challenge is that the whole world is so enamored with artificial intelligence that a [00:40:00] large number of exciting claims will be made, some of which will be true, many of which will not be true, regarding what is possible. And there may be pressure on statisticians to deliver all sorts of things.

Progress is good. I had an opinion piece in [00:40:20] Pharmaceutical Statistics about 15 years ago in which I said that we should spend more time looking at what we have done in the past, in the sense of looking at the databases in companies and saying, what can we learn from this? So now, suddenly, everybody thinks this is a good idea, and I’m pleased that that’s [00:40:40] happened. It was nothing to do with me; people came to the same realization independently.

[00:40:45] But I had written the piece because when I started consulting, having left pharma in 1995, the thing I was most often asked was, what can we do better? And I would say, study your own data; that’s the simplest thing to start with. And so I [00:41:00] think there was a lot to be learned there, in particular about what outcome measures are good, what covariates should be adjusted for, and so forth.

[00:41:09] There’s a lot to be learned there, and there’s much that we could do that we’re not already doing. What I worry about is that people will then go [00:41:20] beyond that and start saying, oh well, we can actually predict exactly who will respond and who will not respond. And all I can say is, well, if you can, that’s good.

[00:41:30] But let’s look at these claims very, very carefully before we get too excited. 

[00:41:34] Alexander: Yep. I think when it comes to looking at [00:41:40] your past data, one other really important thing is to look at your past decision making. Decision making needs to be really, really good, because ultimately we don’t run studies [00:42:00] for the sake of running studies. We run them to make a decision.

[00:42:04] Stephen: Yeah, that’s true. Although there’s a missing data problem. But that’s no excuse for not looking at what we have. The missing data problem is that we don’t know what would have happened to [00:42:20] projects we cancelled, had we continued with them.

[00:42:22] Alexander: Yep. 

[00:42:22] Stephen: When it comes to decision making, ultimately a lot of decision making in drug development, one way or another, is to kill or not to kill. At some stage you have to decide if you’re going to go to the next stage or not. And whenever you decide not to develop a project, you [00:42:40] nearly always don’t know what would have happened. Sometimes somebody picks it up elsewhere, but nearly always you don’t know.

[00:42:45] And so that’s a problem regarding trying to learn. But as regards projects that went forward, there have been cases where you can sort of suggest, well, it was pretty obvious that this was [00:43:00] going nowhere; one should have killed it earlier. It’s a whole field. For the last two years I worked for Ciba-Geigy,

[00:43:09] I was working, amongst other things, on a project where one of my important tasks was to work together with others on a method for deciding which projects to develop. And [00:43:20] I have a paper in Statistics in Medicine and another in the Drug Information Journal describing how you can try and rank projects. But you have to use the fact that you hold options at various stages, and the option you hold in particular for a project is to kill the project if necessary.

[00:43:37] So these options have to be taken into [00:43:40] account. This work, by the way, was later done in much greater depth and with greater intellectual clarity by Carl-Fredrik Burman, working at AstraZeneca. I collaborated with him for a while; he’s done some very good stuff.
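The value of the kill option can be illustrated with a toy expected-value calculation (the numbers are invented, and this is not the method from Stephen’s or Burman’s papers): a project you can abandon when interim results are bad is worth more than the same project developed regardless, because the downside is truncated.

```python
# Hypothetical net values of a project under four equally likely interim scenarios.
outcomes = [-50.0, -10.0, 20.0, 80.0]
prob = 0.25

# Develop regardless vs. exercise the option to kill when continuing has negative value.
no_option = sum(prob * v for v in outcomes)                   # plain expected value
with_option = sum(prob * max(v, 0.0) for v in outcomes)        # kill: truncate the downside at 0
print(no_option, with_option)  # prints 10.0 25.0
```

This is why ranking projects without accounting for the options held at each stage understates the value of projects whose bad outcomes can be stopped early.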

[00:43:54] Alexander: Okay. Awesome. If you were to recommend [00:44:00] that a statistician focus on one thing to have more impact, what would that be?

[00:44:10] Stephen: Well, if we’re talking about medical statisticians, and I think of medical statisticians working in drug development, then it is [00:44:20] communication with your fellow life scientists. You’re part of a team and you have to be able to communicate your point of view.

But you also have to understand, to a certain extent, what the others are about. And you have to learn from them. I learned a lot, not [00:44:40] in any great depth, but it’s been useful to me, about pharmacology, pharmacokinetics, pharmacodynamics and so forth, from being able to interact with people who were dealing with that side of the business.

[00:44:55] Although I was more in phase two, phase three, and they were [00:45:00] mainly in phase one, nevertheless, that was a contact that I had, and I think it was very beneficial for understanding what goes on. I think you have to be realistic. I used to say to statisticians working in my group that there is one question to which you have to be able to answer yes.

[00:45:18] And the question is, are you [00:45:20] interested in finding out about the effects of drugs? 

[00:45:23] Alexander: Yeah, yeah, 

[00:45:25] Stephen: that’s why you’re doing it. So you can have mathematical fun on the way, but it has to be But that’s what it’s about. And there’s a quote from Definetti, which I particularly like, he says, the mathematician abstracts from reality, falls in love with the [00:45:40] abstraction, and then blames reality for not conforming to it.

[00:45:44] So unfortunately, there’s a lot of “yeah, but” statistics out there, where you say to somebody, well, this model is fine, but these assumptions are not realistic, and they say, yeah, but it could be the case [00:46:00] that blah, blah, blah. And I sort of say, well, yes, that is a theoretical possibility.

[00:46:09] But the question you have to ask yourself is: is this likely to be useful in practice in particular?

[00:46:15] Alexander: Yep.

[00:46:16] To end the discussion today [00:46:20] what’s your favorite career advice?

[00:46:23] Stephen: Well, my favorite career advice is the following. I think you have two extreme choices. One is to develop your career, and the other is to develop yourself. There are some cases where developing your [00:46:40] career is the right thing to do. If you’re in a big organization, and there are obvious ways to go, ways that you can learn by following the career within that organization, then that’s fair enough.

[00:46:52] But that’s never worked out for me. It’s always come to a particular stage [00:47:00] at which I’ve had to say, well, where am I going next? And if the answer is nowhere, or at least nowhere within this particular career, then the thing is to switch in order to learn something new. It’s risky, [00:47:20] because you may be losing an opportunity by doing it, but on the other hand it means that you become a more interesting person.

[00:47:30] As I say, I switched every eight years, more or less. If you don’t switch before ten, then people will begin to [00:47:40] question whether you can switch at all.

[00:47:42] Alexander: Yeah. Yeah. It becomes definitely harder. 

[00:47:44] Stephen: If you, if you switch every five years and people may think, well, you know, less than five years, then do they ever actually do anything useful in a job before they’re looking to do something else?

[00:47:55] Or, you know, why are they switching? So for me, somewhere between five or 10 [00:48:00] years to switch, if things are not working out, think, okay I need a new challenge. Where can I get a new challenge? Yeah. Yeah. And then your CV will become more interesting in that way. Even if your CV doesn’t become more interesting, you will become more interesting because you will have learned new things and learning new things is a strength, basically.

[00:48:18] Alexander: I think [00:48:20] what I take from your response is that you need to be really careful about considering what a useful career actually means for you. Is it about getting titles and positions and maybe salary increases? [00:48:40] Or is it about something else? Getting really clear on that is really important.

[00:48:48] And it will definitely evolve over time. I chased promotions and all these kinds of things definitely for too long. And [00:49:00] I’m really happy that now there’s basically no promotion possible for me anymore, because I’m self-employed; I run my own company. Well, I could give myself a better title, yeah, but what would that mean?

[00:49:18] So [00:49:20] I think it’s really important to think about what kind of impact you want to have, and that is maybe more meaningful than getting some kind of title.

[00:49:35] Stephen: Yeah, I think that’s probably true. Nevertheless, I suppose that [00:49:40] part of what inspires people to write and publish is some sort of recognition.

[00:49:44] And I would be lying if I said that wasn’t part of the reason one did it. But in the end, what’s important is to enjoy what you’re doing as well. And I’ve been very fortunate in that. I’m now semi-retired, but I was [00:50:00] employed for more than 40 years as a statistician.

[00:50:06] And statistics is a thing I like doing. So it’s really wonderful from that point of view. You have to find satisfaction in that. But satisfaction is also increased by finding new challenges [00:50:20] within the particular field you decide to work in.

[00:50:23] Alexander: Yep. Yeah, I think taking on new challenges

[00:50:28] is inspiring and motivating. And if that is your passion, then follow it. Don’t go [00:50:40] somewhere where you get a great title and a great salary but you don’t love what you do. I think that’s really, really important. Thanks so much, Stephen, for this awesome discussion about your career, about the challenges that we [00:51:00] face, your learnings, and a couple of really interesting and thought-provoking challenges, especially the one about dichotomization, which definitely bugs me all the time as well.

[00:51:15] So thanks so much for speaking about all of this.

[00:51:19] Stephen: Okay. Thanks [00:51:20] Alexander. Yeah. Okay. Bye.

Join The Effective Statistician LinkedIn group

I want to help the community of statisticians, data scientists, programmers and other quantitative scientists to be more influential, innovative, and effective. I believe that as a community we can help our research, our regulatory and payer systems, and ultimately physicians and patients take better decisions based on better evidence.

I work to achieve a future in which everyone can access the right evidence in the right format at the right time to make sound decisions.

When my kids are sick, I want to have good evidence to discuss with the physician about the different therapy choices.

When my mother is sick, I want her to have access to the evidence and to be able to understand it.

When I get sick, I want to find evidence that I can trust and that helps me to have meaningful discussions with my healthcare professionals.

I want to live in a world, where the media reports correctly about medical evidence and in which society distinguishes between fake evidence and real evidence.

Let’s work together to achieve this.