Firstly, I would like to thank Roger Kerry, Associate Professor at the University of Nottingham (@RogerKerry1), for inspiring this post, whose content has been derived from his and his colleagues’ work.
I would like to explore what guides our (physiotherapists’) decision-making in the context of understanding concepts. One part of our decision-making process usually comes from some sort of evidence. Evidence is based upon the testing of a hypothesis, which in itself is grounded in a theory or concept. If we have difficulty clarifying the underlying concept, how can we substantiate a sound hypothesis? And if we cannot substantiate a sound hypothesis, how can we substantiate evidence?
In the physiotherapy community there appears to be a view that clinical research alone provides sufficient ‘evidence’.
I would like to summarise some of the tensions that exist within evidence-based practice (EBP) in the context of physiotherapy intervention. This has been fantastically demonstrated in this video with Roger Kerry and LJ Lee (@1LJLee). It was a keynote presentation at IFOMPT 2012 and sparked my interest in this topic. The next IFOMPT conference will be in Glasgow in 2016, and I would recommend that all who are interested in Neuromusculoskeletal Physiotherapy attend.
Evidence-based practice attempts to bring the theory of knowledge (epistemology) and the methodology of the natural sciences into medicine, healthcare and education.
“Evidence-based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence-based medicine means integrating individual clinical expertise with the best available clinical evidence from systematic research” (Sackett et al., 1996).
Where is the evidence (E) to show that evidence is actually evidence (E)?
EBP uses a hierarchy of evidence and represents positivist and realist approaches to epistemology, with the aim of reducing epistemic risk. Epistemic risk relates to how likely a form of enquiry is to reflect the truth in the formation of knowledge.
The EBP hierarchy triangle above presents the hierarchy of the value of evidence, with the most rigorous and influential level of enquiry (systematic reviews of randomised controlled trials) at the top and the least influential (expert opinion) at the bottom. What becomes immediately apparent is that the hierarchy itself comes from expert opinion – seen as the lowest form of evidence. Not a good start for the EBP hierarchy model.
Testing evidence by using randomised controlled trials (RCTs) could be seen as a paradoxical method of enquiry. In other words, we want to test the evidence (E) gained through an RCT by doing another RCT. Apart from being nonsensical, this is fraught with risk of bias and an increased likelihood of being false.
The purpose of a trial is to accept or reject a null hypothesis. A null hypothesis states that there is no difference between interventions. The p-value represents the statistical significance of an observation: by convention, for a treatment effect to be regarded as real and not down to chance, the p-value has to be lower than 0.05. The lower the p-value, the stronger the evidence against the null hypothesis. There are difficulties with p-values, as this excellent YouTube video shows; it also proposes answers to the problems of statistical power and the use of confidence intervals. However, what tends to happen, in my personal experience, is that therapists disregard studies based on the p-value alone, rejecting a treatment intervention immediately as not significant, sometimes without reading the study carefully. In addition, recruiting more people into a study may increase its statistical power but may not tell me how to treat an individual person. There is a real likelihood that research done on a large population may not reflect the patient in front of me!
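To make the relationship between sample size and "significance" concrete, here is a minimal Python sketch (not from the work discussed above; it uses a standard normal approximation to a two-sample test, with illustrative numbers). It shows how one and the same clinically trivial effect can cross the p < 0.05 threshold purely because more patients were recruited:

```python
import math

def two_sample_p_value(d, n):
    """Approximate two-sided p-value for a standardised mean
    difference d between two groups of n patients each,
    using a normal (z) approximation."""
    z = d * math.sqrt(n / 2)
    # Phi(z) via the error function: standard normal CDF
    phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return 2 * (1 - phi)

# A clinically trivial effect (d = 0.1) is "non-significant"
# in a small trial but "significant" in a large one.
for n in (50, 1000, 5000):
    print(n, round(two_sample_p_value(0.1, n), 4))
```

The effect itself never changes; only the sample size does. That is one reason a low p-value alone cannot tell us whether an intervention matters for the patient in front of us.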
Ioannidis (2005) compared sequential RCTs and showed through simulation that, for most study designs, published findings are more likely to be false than true. Kerry et al (2012) went one step further and evaluated the truth status of systematic reviews of RCTs in physiotherapy practice over time, to see whether repeating the studies made their conclusions more likely to be true. Staggeringly, the probability of future trials being true was between 2 and 5%.
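The core of Ioannidis's argument can be sketched with his positive predictive value formula: the chance that a "significant" finding is actually true depends on the pre-study odds of the hypothesis, the study's power, and the significance threshold. A minimal Python sketch, ignoring bias and with illustrative parameter values (the specific numbers below are my assumptions, not from the papers cited):

```python
def positive_predictive_value(prior_odds, power, alpha):
    """Probability that a statistically significant finding is true,
    following the framework of Ioannidis (2005), ignoring bias.
    prior_odds: pre-study odds R that the tested effect is real."""
    true_positives = power * prior_odds
    false_positives = alpha
    return true_positives / (true_positives + false_positives)

# Well-powered trial in a field where roughly 1 in 11 hypotheses is true:
print(round(positive_predictive_value(0.1, 0.8, 0.05), 2))   # above 0.5

# Underpowered trial testing a long-shot hypothesis:
print(round(positive_predictive_value(0.05, 0.2, 0.05), 2))  # below 0.5
```

When the result drops below 0.5, a positive finding is literally more likely to be false than true, which is the situation Ioannidis describes for many common study designs.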
They did find that randomising a study increased the likelihood of its findings being true. However, even a randomised study can be a victim of statistical chance. An example is the study by Leibovici (2001), which used large numbers and double blinding.
It examined the effect of remote, retroactive intercessory prayer on the outcome of bloodstream infection. It concluded that praying for someone who was sick with a bloodstream infection between 4 and 10 years ago reduces their length of stay and fever! If we are to believe the outcomes of high-quality RCTs according to the hierarchy of evidence-based medicine, then we must believe that this is true! Or, alternatively, we can accept that statistical errors can happen.
Another aspect of the EBP triangle is expert clinical practice. This appears to be a grey area, and some would argue that it is fraught with a lack of objectivity. How do we know if someone is a clinical expert? Studies have discussed this in relation to expert clinical reasoning, expert judgement, management of ambiguity, professionalism, time management, learning strategies and effective use of teamwork (Epstein and Hundert, 2002). Petty et al (2011) discuss the impact of musculoskeletal Masters education on the development of clinical expertise through grounded theory. They describe a direction of development towards clinical expertise involving three developmental aspects: a critical understanding of practice knowledge, which leads to patient-centred practice, and which in turn evolves into an understanding of, and improved capability for, learning in and from clinical practice. I would encourage you to read this work, and I can speak from personal experience of how helpful this study was for my own practice. Being comfortable doing structured observed clinical practice, and critical reflection, are areas that I personally feel are key to develop.
The area of patient values and experience also has to be considered, and with it conversations regarding patients’ beliefs, previous experiences and personal biases. Arguably, we should value the patient experience more than anything else, to deliver patient-centred care and guide patients on the road to recovery through collaboration. How can we collaborate if we do not consider their values or experience?
The exploration of clinical expertise and of patient values, and their contributions to EBP, warrant posts of their own and so have not been fully discussed here.
Not only does ‘E’ have ramifications for patient management, but it also has a wider impact on the commissioning of services and the future of healthcare within the NHS.
Where does this leave us?
To start, I believe that the way forwards is to create clarity of the concept or theory before moving on to the exploration of hypotheses and the provision of evidence. The theory of intervention centres on the philosophical perspective of causation. Are we comfortable discussing areas of philosophy? Should this be a part of our education? How do we interpret causation? Do we view causation as something that can be observed through regularity (e.g. A causes B again, and again, and again)? Can we only view it as something that can be proven through counter-causation (e.g. A causes B, but B could not have caused A)? Can we view causation through correlation? These are important questions that need to be thought through in order to move forwards.
An excellent book (that I am currently reading) is “Causation: A Very Short Introduction” by Stephen Mumford (@SDMumford) and Rani Lill Anjum (@ranilillanjum). It explains causation exceptionally well, even to a novice to the topic like myself, and it reveals the complexity of the subject in a way that is understandable. It describes dispositionalism, which Kerry et al (2012) use as a more useful way of interpreting evidence-based practice. However, my understanding of this approach is very superficial at this stage, and I intend to deepen it, as it appears to be a very compelling approach. I encourage you to read this paper.
This is my perspective on dispositionalism and its use in EBP. Rather than treating all elements as separate entities from a positivist and realist perspective, dispositionalism views EBP far more pragmatically. In this way, evidence can be viewed in terms of its attributes, characteristics and commonalities of outcomes, which lean towards or away from ‘truth’ status. It stands up to the challenges raised against EBP and becomes something meaningful for clinicians. It is inclusive of clinical research, patient values and clinical expertise, and contributes to wise decision-making. A dispositional view of evidence values individual studies as well as population studies, but recognises the limitations of their methodological designs and of conclusions made in combination with other areas of study. I aim to continue to explore this in more detail in the future.
In summary, I hope that I have successfully outlined some of the tensions that exist in EBP and have encouraged clinicians to take a step back and take a broader view. This is not an easy topic, but I think it is important for moving our profession forward in collaboration with our patients. Hopefully it has challenged some views and opinions. Thank you for reading.