Prof. Andrew Moore (2)
  • University of Oxford
  • 2015-01-07

About Prof. Andrew Moore


Director of Pain Research,
Nuffield Department of Anaesthetics,
University of Oxford.


Chairman of the International Association
for the Study of Pain's Systematic Review and Evidence Special Interest Group.


Founding editor of the evidence-based journal Bandolier;
author of 500+ scientific and clinical publications
and 200 systematic reviews, including over 80 Cochrane reviews.




Q11.

Can you tell me the strengths of a responder/non-responder approach, rather than using mean VAS scores, in pain research?


This is the difference between measurement and outcome. With the Visual Analogue Scale (VAS) pain intensity score, you measure pain.


What we've done in the past is to take the average change in pain scores from the beginning to the end of the trial. So if, on average, it was 6 at the beginning and 5 at the end, we would say there's an average 10 mm change on a 100 mm VAS. Okay? Fine. Are you average?


Q12.

I’m not sure.


I promise you, you're not. Nobody is average. When it comes to pain relief, what you typically see is that people either get good levels of pain relief or none. We argued that what we should be looking for is a level of pain relief that is likely to be useful to patients: you've got to reach this level of pain relief to be what we would regard as a responder. We set that as a 50% reduction in pain intensity. So, if you started at 8, you'd have to end below 4.


Now, for most people, that degree of benefit also means that you've only got mild pain, or even no pain, at the end of the trial, which is fantastic for people. If you ask patients, and we did a systematic review of studies asking patients, it's what they say they want.


The argument isn't about VAS versus responder rates, because you use the VAS to calculate responder rates. It's between average results and looking at it from the point of view of the individual patient. The most important thing is that this business of 50% pain intensity reduction, or having no worse than mild pain, doesn't just give you good pain relief; it also means sleep disturbances go away.


Sleep comes back to normal. Fatigue disappears. Depression disappears. Quality of life rebounds. People are able to reconnect with their families. They are able to function, in terms of either their paid employment or looking after their families. This is a major change. This is life transforming. And if they don't get that, there's almost nobody at 49%, almost nobody at 48%; most of them are down at less than 10% pain intensity improvement. Such low levels of pain relief are not helpful. But if you take the average, you get a result that nobody has. It's logically inconsistent.
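The contrast between average change and responder analysis can be sketched numerically. The following Python snippet uses purely hypothetical pain scores on a 0-10 scale (not data from any trial) to show how a mean change can describe a result that no individual patient actually had, while a responder rate, defined here as at least a 50% reduction in pain intensity, counts the patients who got a useful level of relief:

```python
# Hypothetical pain intensity scores (0-10 scale) for ten patients.
baseline = [8, 8, 8, 8, 8, 8, 8, 8, 8, 8]
# A typical bimodal outcome: a few patients respond well, most barely at all.
final    = [2, 3, 2, 7, 8, 7, 8, 7, 8, 7]

# Average change from baseline: the traditional way of reporting.
changes = [b - f for b, f in zip(baseline, final)]
mean_change = sum(changes) / len(changes)

# Responder: at least a 50% reduction in pain intensity
# (starting at 8, you would have to end below 4).
n_responders = sum(1 for b, f in zip(baseline, final) if (b - f) / b >= 0.5)
responder_rate = n_responders / len(baseline)

print(f"mean change: {mean_change:.1f} points")   # a value no patient here has
print(f"responder rate: {responder_rate:.0%}")
```

With these made-up numbers, the mean change is about 2 points, which no patient in the list actually experienced (the individual changes are 6, 5, or 6 points for responders and 0 or 1 point for everyone else), while the responder rate correctly reports that 3 of 10 patients got worthwhile relief.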


Q13.

But, why do the other researchers keep on using VAS?


Most of them are not, it's true. That's because most trials are done for regulatory purposes, and regulatory authorities are way, way behind the cutting edge of this.


Think of a boat travelling across a still lake, and you see the bow wave. The faster the boat goes, the sharper the wake. We're going pretty fast, and these guys are in a very slow boat indeed.


But we're working now with various groups within Cochrane, chronic pain groups, and others to establish these sorts of outcomes as the preferred outcomes from trials and systematic reviews.


Q14.

Do you have any comments on the claim that a typical RCT design is not suitable for detecting acupuncture’s efficacy?


I think you may well be right, but that's not just acupuncture; this is probably true of an awful lot of interventions. Let me explain why. When it comes to chronic pain, which is really what we're talking about here, the success rates, where success means achieving 50% pain relief, are at maximum about 30%. That means 30% get pain relief and 70% do not. And that would be for something like an NSAID in osteoarthritis, or pregabalin in diabetic neuropathy, or something like that.


When you drop down to something like low back pain or fibromyalgia, the success rates are only 10%. And when you look at the standard clinical trial design with success rates much below 10%, it's frankly impossible to tell the treatment from placebo unless you have an enormous trial, with many thousands of patients in it. We don't do that sort of trial. And it's because we're now looking at trials in this way that we can see it's gonna be a jolly difficult state of affairs. Particularly with acupuncture, where you've got many different things going on, and where you know that some people are gonna do well but you don't know who it's gonna be, it's very difficult to tell the difference from placebo. We have the same thing for some drugs. To give you an example, we recently wrote a Cochrane review on a drug in neuropathic pain, I can't remember its name, and the conclusion was that we couldn't tell the difference between placebo and this drug. And one of my colleagues phoned me up and said, "You know, I know this is right, but I just had two patients in clinic who had the most fantastic response on this drug that doesn't work."


My attitude is, you don't throw anything out the door, because your specialists will always want to try people on something when everything else has failed. There's a trial design that might help in some of these cases, called the 'Enriched Enrollment Randomised Withdrawal' design: everybody gets put onto a drug, and only those who respond by getting good pain relief keep taking it and go on. Then you randomise them to either carrying on with the drug at the level that's giving them pain relief, or placebo. Now, that's about three times more sensitive than the standard design. And it also tells you what proportion of people from a whole population are likely to get good pain relief. I believe that if we were clever, we could adapt that design to acupuncture. That's one way of doing it.
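The mechanics of an Enriched Enrollment Randomised Withdrawal trial can be illustrated with a toy simulation. Every number below (cohort size, response rate, relapse probabilities) is an invented assumption for illustration only, not data from any trial:

```python
import random

random.seed(0)

# Toy sketch of an EERW trial. All parameters are illustrative assumptions.
N = 1000
P_RESPONDER = 0.30  # assume ~30% achieve >=50% pain relief on the drug

# Open-label phase: everyone receives the drug; keep only the responders.
got_relief = [random.random() < P_RESPONDER for _ in range(N)]
responders = [i for i, r in enumerate(got_relief) if r]

# Randomised withdrawal: responders either continue the drug or switch
# to placebo, without knowing which.
random.shuffle(responders)
half = len(responders) // 2
drug_arm, placebo_arm = responders[:half], responders[half:]

# Assumed relapse probabilities after randomisation: responders mostly
# keep their relief on the drug and mostly lose it on placebo.
relapse_drug = sum(random.random() < 0.20 for _ in drug_arm)
relapse_placebo = sum(random.random() < 0.70 for _ in placebo_arm)

print(f"responders in open phase: {len(responders)}/{N}")
print(f"relapsed on drug:    {relapse_drug}/{len(drug_arm)}")
print(f"relapsed on placebo: {relapse_placebo}/{len(placebo_arm)}")
```

Because only responders are randomised, the contrast between the two arms is concentrated where the drug actually works, which is what makes the design more sensitive than randomising an unselected population; and the open-label phase itself reports what fraction of the whole population responds at all.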


There are other designs as well. One of them, for example, a modified version of that, would be a 'Clinical Effectiveness Design', and we have published on this in the past. It's basically a modification of a randomised controlled trial, but one in which what you're trying to do is see whether the treatment is working in the real world. The point is always that the outcome you're measuring is people having good levels of pain relief and continuing to use the medicine. It's things like that which make the difference. So, if I were an acupuncturist, and I wanted to prove that acupuncture worked, I would be…


Q15.

No, I don’t want to prove that acupuncture works.


In a funny sort of way, I do. Because there's what we know and there's what we believe, and what we've got to do is be careful about keeping those two things apart. I can tell you I know that there's no evidence that acupuncture works, but I believe…


Q16.

But, there are some.


I, as it happens, do not think so. But we will not argue about that right now. It’s very weak evidence at best.


Anyway, the point is, I think there are other ways of doing it. I actually think there would be a small proportion of people for whom acupuncture is a great benefit, and being able to demonstrate that would be really good. Because if we could do it for acupuncture, then we could do it for a lot of other treatments that will benefit maybe no more than 5% of people. For chronic pain, nothing works in more than a small proportion, and the benefits we're talking about are life-changing benefits. So being able to demonstrate that these other therapies exist, ones you might not want first line but that are there in the back room if you need them, is really important. Because I'll tell you, for chronic pain, there are never gonna be any magic bullets, with the possible exception of stem cell therapy. And that's not gonna be available anytime soon, and it's never gonna be cheap.


Q17.

Could you share some advice for young researchers in this field?


That’s a very interesting question.


Q18.

From your experience.


Well, yes, but that's my experience. And you have to remember, I am me. From my experience, the most important thing to do is to keep asking awkward questions.


I told you that when I was a biochemist, I was sent out to the wards. You go onto a ward, and there's the professor of medicine talking about something he knows all about. Medicine is very big. I mean, it's huge: in most westernised countries, you're talking about something that consumes 10% of GDP. It's enormous. We run it by having specialists who know an enormous amount about very little. But clinical biochemists like me have to know an awful lot about everything.


And if you go onto a ward and want to talk to a professor of medicine, how are you gonna deal with that? I was told by a very old, wise biochemist. He said to me, "Andrew, just ask the question 'Why?' three times." So somebody makes a statement; you say "Why?" and get an answer; "Why?" again; and by the time you get to the third "Why?", nobody knows. You're on a level playing field of ignorance, and then you can take it from there. You see what I mean? That's an awkward question.


And then the other thing, of course, is that, and again this is me, I spent too long when I was young thinking that I was stupid.


Because I didn't understand things. Now, there are two reasons for not understanding something. One is that you truly do not understand it; you do not have the mental capacity for it. The other is that what you've been told is wrong. You come to meetings like this one here in Buenos Aires, you go into the halls, and I promise you, an awful lot of what you're hearing is wrong. And somebody once said, I think it was the editor of the BMJ in an article I read, that something like 98% of papers in the medical science literature were wrong.


And other people have said this as well. There's a guy called John Ioannidis who writes these days, and he wrote a paper I hate him for, because it's one I would have loved to have written: 'Why Most Published Research Findings Are False'.


It's a brilliant paper, lovely. What can you say? I mean, most of it is false: studies which are too small, which use the wrong designs. I use some of these in my lecturing, where you read a paper's abstract and the conclusion says this drug is 'the best thing since sliced bread' (I don't know whether sliced bread is a thing where you are; it's one of our everyday staples). And you think, "Oh, that's interesting; I'd be a bit surprised by that." Then you read the paper, and actually the data presented in the paper demonstrate that the conclusion is wrong. And yet these papers get published in reputable journals.


So, young people: you can go with the flow, and you'll probably do very well, but life's gonna be boring. Ask awkward questions, ask "Why?", and just assume that 98% of what you are being told is wrong, and you won't go far wrong.


Q19.

It's very good advice for researchers, young researchers like me.


I doubt that it's good advice at all, actually.


Q20.

No. It’s very good. Okay. Thank you very much.


It's been a pleasure. And I hope one day I get an opportunity to visit Korea. It would be lovely.


Q21.

Yeah, sure. Sure.


Thank you.


Q22.

You should come.


Bye-bye


Q23.

Thank you very much.