Evidence-based practice (EBP) is firmly entrenched in the lexicon of physical therapist practice,1,2 but beliefs about how best to translate scientific evidence into clinical practice are far from settled. There are major gaps in our scientific knowledge; however, even more disturbing is the fact that an enormous amount of existing scientific knowledge remains unused in practice. As noted in the Institute of Medicine (IOM) report titled Crossing the Quality Chasm, “Between the health care we have and the care we could have lies not just a gap, but a chasm.”3
Thankfully, the infamous 264-year lag between the discovery that citrus prevents scurvy and the widespread use of citrus on British ships is no longer the norm.4 But the frequently quoted estimate of the lag between publication and adoption of research—that it takes an average of 17 years for only 14% of original research to be applied for the benefit of patient care5,6—is alarming enough. There is consensus that the transfer of evidence from proven health care discoveries to patient care is unpredictable and highly variable and needs to be accelerated.4,7,8
For those of us who want to speed the adoption of EBP in physical therapy and across health care more broadly, Naylor9 described 4 distinct phases or strategies that are instructive:
Phase 1, the “Era of Optimism,” is characterized by a belief in passive diffusion of scientific evidence into practice. In this (still-dominant) phase, students and clinicians are trained to critically appraise the scientific literature to identify valid new information that could be applied to practice.
Phase 2, the “Era of Innocence Lost and Regained,” acknowledges that much of clinical practice is not evidence based and that it is virtually impossible for clinicians to keep up with the explosion of medical literature. This understanding has led to the emergence of evidence-based clinical practice guidelines, in which the literature is systematically reviewed and summary recommendations are graded according to the strength of the supporting evidence. Guidelines are widely disseminated on the assumption that providers will read them and that practice will change accordingly.
Phase 3, the “Era of Industrialization,” is on the rise, as evidence mounts that the passive efforts of phases 1 and 2 fail to actually change practice. In this phase, aggressive strategies are implemented by regulatory entities or professions to improve care. These efforts frequently involve performance measurement and reporting,10 which are intended to encourage providers to become more accountable and more focused on quality improvement. Many professions have risen to this challenge and have developed their own approaches to changing patient management, as described by Naylor.9 APTA's Physical Therapy Outcomes Registry,11 an organized system for collecting data to evaluate patient function and other clinically relevant measures, is a phase 3 effort, with improving practice and fulfilling quality reporting requirements as 2 of its major goals.
Phase 4, the final phase, is the “Era of Information Technology and Systems Engineering,” which is driven by the belief that focusing on individual practitioners is not sufficient: bridging the wide gap between best evidence and common practice requires redesigning service delivery systems to address barriers and incentives. This phase calls for a different type of evidence base—one describing the most effective ways to change provider behavior.9,12 Hence the emergence of the relatively new field of implementation research.
Even though passive dissemination (phases 1 and 2) has been shown to be ineffective when used as the sole means of bringing evidence to practice, it remains the most often-used approach13–15—in large part because it requires few resources and is consistent with professional incentives that motivate scientists to disseminate their research findings primarily via peer-reviewed journals16 such as PTJ.
T1 and T2 Translational Research
Browman17 distinguishes between dissemination, in which practitioners are made aware of current scientific evidence and guidelines, and translation or implementation, in which active approaches are used to encourage uptake (phases 3 and 4). The National Institutes of Health (NIH) made translational research a priority with its Clinical and Translational Science Award (CTSA) program, launched in 2006.8 The CTSAs primarily support Type 1 (T1) translational research, which focuses on moving basic science discoveries into research that involves humans, that is, applying research done in cells or animals to a human problem that requires human participants—bench to bedside. T1 translational research has proven to be a powerful process that drives the clinical research engine in the United States and elsewhere.
Type 2 (T2) translational research, also incorporated into the CTSA program, is a fairly new categorization of research that arises from the concern that many clinical research discoveries in academic medical centers never find their way into commonly accepted medical practice. The addition of T2 translational research reflects not only a push to make discoveries in academia but also a push to get those discoveries accepted in practice so that they actually improve health. T2 translational research focuses on integration and implementation research strategies to get the community (practitioners or the public) to adopt new clinical research findings—bedside to practice. But as Woolf8 has pointed out, T1 research has overshadowed T2 research in the United States and attracts considerably more funding.
Implementation Research
Implementation research has been defined as “the scientific study of methods to promote the systematic uptake of clinical research findings and other evidence-based practices into routine practice, and hence to improve the quality and effectiveness of health care. It refers specifically to efforts to promote the uptake of evidence-based practices recommended by clinical practice guidelines.”18 Although implementation research is similar to quality improvement efforts, there is an important distinction to be made. Quality improvement research has been defined as “systematic, data-guided activities designed to bring about immediate improvement in health care delivery in particular settings.”19 Both types of research aim to improve quality of services, but implementation research is directed more toward producing generalizable knowledge, whereas quality improvement is focused on locally applicable knowledge.20
There is no consensus on the key components of implementation research; however, many implementation research models include the following steps:
Identify care gaps and the need for change.
Identify barriers to consistent use of evidence-based guidelines.
Review evidence on implementation interventions.
Tailor or develop intervention to improve performance.
Implement intervention.
Evaluate the process of implementation.
Evaluate outcomes of the intervention.
Types of implementation research interventions used to improve the adoption of EBP include but are not limited to:
Patient-mediated interventions: providing education to patients on best practice.
Audit and feedback: giving providers a summary of their clinical performance over a specified period of time.
Consensus processes: including all participating providers in the discussion to ensure that they agree that the chosen clinical problem is important and that the approach to managing it is appropriate.
Use of “positive deviants”: engaging local opinion leaders, nominated by their colleagues as “positive deviants” who are educationally influential.
Reminders and incentives: providing patient- or encounter-specific information to prompt a provider to implement evidence-based guidelines, and providing incentives to the provider when the guidelines are implemented.
In a 2015 example of implementation research in rehabilitation, Connell et al21 developed a behavioral intervention to increase the clinical use of upper limb exercise in stroke rehabilitation. The researchers noted that despite strong evidence to support the use of intensive repetitive task-oriented exercise training for upper extremity recovery after stroke, the intervention is not frequently used in clinical practice. Their behavioral intervention targeted clinicians through several components: establishing an intervention development group consisting of members of the research team and clinical staff; holding structured discussions with clinical staff to understand the problem and prioritize target behaviors; collaboratively designing theoretically underpinned intervention components; and piloting and refining the intervention. A definitive effectiveness trial is now underway to evaluate the impact of this behavioral intervention.
Several considerations guide the choice of implementation intervention: effectiveness, resources required, appropriateness to practice context, and overall cost. At a macro level, the published Standards for QUality Improvement Reporting Excellence (SQUIRE)22,23 are designed to promote knowledge-building for implementation research and quality improvement studies by standardizing how findings from these types of studies are reported.
If we are serious about speeding the rate of adoption of EBP in physical therapy and in rehabilitation more broadly, funding agencies and researchers need to direct more attention and resources toward designing and using active implementation strategies to consistently deliver to our patients what is already known to work. For its part, PTJ seeks to be a vehicle for “active dissemination,” and I strongly encourage the submission of these types of papers to PTJ.
© 2016 American Physical Therapy Association