Abstract
Background Curricular changes in physical therapist education programs in Canada emphasize evidence-based practice skills, including literature retrieval and evaluation. Do graduates use these skills in practice?
Objectives The aim of this study was to evaluate the use of research information in the clinical decision making of therapists with different years of experience and evidence-based practice preparation. Perceptions about evidence-based practice were explored qualitatively.
Design A cross-sectional study with 4 graduating cohorts was conducted.
Methods Eighty physical therapists representing 4 different graduating cohorts participated in interviews focused on 2 clinical scenarios. Participants had varying years of clinical experience (range=1–15 years) and academic knowledge of evidence-based practice skills. Therapists discussed the effectiveness of interventions related to the scenarios and identified the sources of information used to reach decisions. Participants also answered general questions related to evidence-based practice knowledge.
Results Recent graduates demonstrated better knowledge of evidence-based practice skills compared with therapists with 6 to 15 years of clinical experience. However, all groups used clinical experience most frequently as their source of information for clinical decisions. Research evidence was infrequently included in decision making.
Limitations This study used a convenience sample of therapists who agreed to volunteer for the study.
Conclusions The results suggest a knowledge-to-practice gap; graduates are not using their new skills to inform their practice. Tailoring academic evidence-based practice activities to the time constraints of clinical practice may help students apply evidence more successfully in practice. Academic programs need to do more to create and nurture environments, in both academic and clinical settings, that give students opportunities to practice evidence-based skills across settings.
In the last 10 to 15 years, the content of evidence-based practice knowledge and skills in physical therapy curricula has increased.1 The assumption was that this knowledge set would result in the increased use of evidence-based practice behaviors by graduates. However, both the medical2,3 and rehabilitation4–6 literature suggest that a knowledge-to-practice gap exists, with a lack of uptake of these new skills into clinical decision making by graduates. Factors affecting the use of research evidence in clinical decision making are broadly categorized into characteristics of the individual and the workplace.7 Specific factors include attitudes and beliefs about research,5,7–11 level of education,5,10,12,13 available time,5,10,13,14 access to the Internet,6 organizational culture and leadership,7,15,16 and availability of research articles.7 Diffusion of innovations theory17 would suggest that individual and organizational factors combine, in some manner, and result in people who are early adopters of a change (those who will be the first to try out a new way of doing things) and others who lag behind or never adopt a new intervention.
The studies evaluating the use of evidence-based practice skills in physical therapy have mostly evaluated clinicians' knowledge, skills, and attitudes toward evidence-based practice rather than their actual behaviors.5,8–11,14,18,19 For example, a 2003 survey of American physical therapists showed that younger therapists and those with fewer years of clinical experience were more likely to have formal training in evidence-based practice skills, such as critical appraisal and search strategies, and were more confident in those skills.5 An Australian study that assessed perceived skills and behaviors demonstrated that younger therapists rated their evidence-based practice skills higher than older therapists did. However, younger therapists did not report performing evidence-based practice tasks, such as searching the literature and reading research reports, any more often than older therapists.10 These studies used survey designs and were subject to the reporting biases inherent in that type of design.20 Mikhail and colleagues21 used a clinical vignette to elicit responses from therapists about interventions they would choose for clients with low back pain. They reported that therapists who chose interventions with high evidence of effectiveness were more likely to have practiced for less than 15 years. However, while 68% of therapists chose interventions with strong or moderate evidence, 90% also used interventions with limited or no research evidence.21 These studies suggest that graduates from contemporary programs have knowledge of evidence-based practice skills but have not integrated them into clinical decision making at the level of the individual client. There appears to be a “know-do” gap, as described by Bennett and Jessani22 and Bucknell23; graduates know how to search and evaluate the literature, but they are not doing so consistently in practice.
At the University of Alberta, the physical therapist clinical training program changed from a baccalaureate to a master's degree designation in 2003. Three components of evidence-based practice as defined by Sackett et al24—research evidence, client values, and clinical experience—are embedded within all courses in the revised curriculum, and the overarching curriculum models25 were designed to create an evidence-based culture. Students learn how to search for and critique research literature in many clinical areas. Active learning strategies using case studies and scenarios are incorporated into all clinical courses and provide students with many opportunities to use their new search and evaluation skills. Formal evaluations of these evidence-based skills confirm that the students know how to search and evaluate the literature and that they understand the importance of including clients' goals and values in decision making. They also know that in the absence of strong research evidence, they often will have to rely on their clinical experience when making decisions. While in school, they demonstrate evidence-based practice skills in their decision making. But how do they make decisions as practicing clinicians? Is there a gap between what they know (knowledge and skills) and what they actually do (behaviors)? Do students from the revised curriculum have more evidence-based knowledge than their older colleagues, and, more importantly, do they use this knowledge to make clinical decisions?
The overall aim of this cross-sectional mixed-methods study was to evaluate evidence-based behaviors and knowledge in therapists with differing years of experience and differing amounts of training in evidence-based practice. The specific objectives were: (1) to determine whether the use of research evidence in clinical decision making differed between therapists trained in curricula with differing emphasis on evidence-based practice, (2) to evaluate the influence of years of experience on therapists' use of research evidence, and (3) to evaluate and explore therapists' knowledge, perceptions, and attitudes toward evidence-based practice.
Method
Participants and Recruitment
Four cohorts of physical therapy graduates of the University of Alberta (graduating years 1996–2000, 2002–2005, 2005–2008, and 2009–2010) were recruited to represent the different degree designations (baccalaureate and master's) and years of experience. In 2005, there were 2 graduating classes from different cohorts, representing the final baccalaureate graduates in May and the first master's graduates in December. These 2 cohorts allowed evaluation of the effect of curriculum without the confounding influence of years of experience. To be included in the study, graduates had to be working in a physical therapist role, either part-time or full-time. Therapists were excluded if they had completed a PhD or if they were working in an academic setting.
Internal class lists were used to identify the total number of graduates in the specified time periods (N=1,035). E-mail and home addresses were obtained through both the university alumni lists and member lists at Physiotherapy Alberta. To identify graduates who worked outside Alberta, we searched publicly available lists of licensed physical therapists and placed advertisements in professional media. One hundred sixty-two graduates could not be found or were ineligible, leaving 873 eligible participants with contact information (Figure). Letters of invitation were sent by e-mail or post to eligible participants. In the letter of invitation, therapists were told that the study aimed to learn more about how therapists make clinical decisions. The term evidence-based practice was not used in the letter. A follow-up e-mail was sent if no response was received within a 2- to 3-week period. Therapists interested in participating contacted the project coordinator (A.V.N.) to receive more information. The response rate, calculated using the number of invitations sent, was 12%. This response rate is conservative, as the actual number of invitations that reached participants is not known.
Figure. Recruitment and sampling results.
Our target sample size was 20 practicing physical therapists in each cohort. This target provided a large enough sample to represent the various cohorts and was feasible within the time and financial constraints of the grant. The target sample was reached within a reasonable amount of time (4 months) for cohorts 1 and 4. Cohorts 2 and 3 were more challenging to recruit; potential participants in these cohorts reported especially busy lives with young children and jobs. The volunteer sample represented population demographics in terms of sex and geographic location, as described in the provincial licensing annual report.
Scenario Development
The scenarios developed for the study were designed to evaluate how therapists made decisions regarding intervention choices. Scenarios are a valid means of assessing physical therapist practice behaviors26 and have been used previously in physical therapy research related to clinical decision making.21,27,28 The scenario development process has been described previously.29 Briefly, teams of researchers and expert clinicians (called “content experts”) representing 3 main practice areas (neurology, cardiorespiratory, and musculoskeletal) searched the literature to identify the extent and quality of evidence available for possible interventions. They then developed a scenario that described a prevalent condition. A functional goal important to the client was embedded in the scenario description. In total, 21 scenarios were developed for use in this study. Scenarios were developed to reflect the proportional representation of therapists working in each main practice area (ie, 5 neurology, 5 cardiorespiratory, and 11 musculoskeletal scenarios were developed). The literature supporting the scenarios was updated 2 or 3 times during the course of data collection to identify new research literature in the topic areas and to confirm that the level of evidence assigned to the intervention was still appropriate. A scenario example is available in a previous publication.29
Procedure
Participants took part in one 60-minute scenario-based interview. They received 2 written clinical scenarios by e-mail at least 1 week before a scheduled telephone (n=51) or face-to-face (n=29) interview. Scenarios were matched to the therapist's self-identified clinical area, and therapists who indicated more general areas of practice received scenarios from 2 different areas of practice. In the week between scenario receipt and the interview, therapists were instructed to read the scenarios, make a decision about an intervention, and keep track of the resources they used to make their decisions. Each therapist received one “strong evidence” scenario and one “weak evidence” scenario. Strong evidence scenarios described a clinical problem that had strong evidence to support a specific intervention; for these scenarios, the therapists had to suggest an intervention and discuss how they chose it. Weak evidence scenarios described a clinical problem and provided an intervention that did not have strong evidence to support it. Therapists were asked to discuss the effectiveness of the intervention and whether they would support it. Weak evidence scenarios reflect the current state of the evidence for many accepted physical therapist intervention choices, so it was important to include them. The intervention was specified for the weak evidence scenarios so that therapists would not have to spend many hours evaluating an array of intervention choices with poor evidence; their decision making was limited to one intervention.
After informed consent was obtained, the digital recorder was turned on and the interview began. Interviewers followed the structured interview guide (Tab. 1). The questions related to the strong evidence scenario are listed first in the interview guide, but the order in which the scenarios were discussed was randomized. The general section was always asked after completion of both scenario sections. Participants were asked to discuss their decisions related to the interventions. Subsequently, standardized open-ended questions were used to identify: (1) all sources of information the therapists used to inform their decisions about the intervention (eg, literature search, clinical experience, colleagues, conferences), (2) the most important source of information they used, and (3) how they would know if their intervention was successful (Tab. 1). In addition to the scenarios, the interview had a third section designed to evaluate therapists' knowledge of the 3 components of Sackett and colleagues' definition of evidence-based practice,24 how to do an effective literature search and critically evaluate an article, and how to interpret the value of clinical guidelines. This section of the interview evaluated “what they knew” rather than “what they would do.”
Table 1. Interview Guide and Scoring Rubric
The interviewers were physical therapists (n=3) or physical therapy graduate students (n=3). Training was standardized in a 1-day workshop. Each interviewer then completed and scored 2 pilot interviews. The first pilot interview was completed face-to-face with a physical therapist who was not eligible to participate in the study. The second pilot interview was a mock interview, conducted by telephone, with one of the investigators. Standardized responses were used during the interviews, and the predetermined score associated with each standardized response served as the gold standard. Agreement between interviewer scores and the gold standard was at least 80% for all interviews (range=80%–93%).
Data Analysis
All interviews were digitally recorded and transcribed. The project coordinator checked all interviewer scores and read the transcripts to confirm the accuracy of the scores, adding descriptive information from the transcripts when necessary. Together, we reviewed 20 transcripts and questionnaires to confirm interview scoring, and we examined and resolved any issues or discrepancies in scoring identified by the project coordinator.
The separate scores given to the strong scenario, weak scenario, and general question sections of the interview were summed for a total score out of a possible 19 (Tab. 1). To receive full marks in the scenario sections, a therapist had to: (1) identify a literature search as the most important resource and (2) discuss that achievement of the client's functional goal was an important indicator of a successful intervention. For the general section, participants who were able to name 3 or more appropriate search terms and databases received full marks. Knowledge of at least 2 areas important for critical appraisal of an article (ie, sample size, study design, description of the sample), as well as the 3 components of evidence-based practice, also resulted in full marks.
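To make the aggregation concrete, the scoring logic can be sketched as follows. This is a minimal illustrative sketch in Python, not the actual rubric: the per-criterion point values, function names, and inputs are assumptions for illustration, because the full rubric resides in Table 1 and only the full-marks criteria and the total possible score of 19 are reported here.

```python
# Minimal sketch of the scoring rubric described above. Point values
# per criterion are assumptions; the paper reports only the criteria
# for full marks and the total possible score of 19.

def score_scenario(most_important_resource: str,
                   discussed_functional_goal: bool) -> int:
    """Score one scenario section (hypothetical 1 point per criterion)."""
    score = 0
    if most_important_resource == "literature search":
        score += 1  # identified a literature search as most important
    if discussed_functional_goal:
        score += 1  # tied success to the client's functional goal
    return score

def score_general(n_search_terms: int, n_databases: int,
                  n_appraisal_areas: int, n_components_named: int) -> int:
    """Score the general knowledge section (hypothetical point values)."""
    score = 0
    if n_search_terms >= 3 and n_databases >= 3:
        score += 1  # named 3+ appropriate search terms and databases
    if n_appraisal_areas >= 2:
        score += 1  # critical appraisal (eg, sample size, study design)
    if n_components_named == 3:
        score += 1  # Sackett's 3 components of evidence-based practice
    return score

# Total interview score = strong scenario + weak scenario + general.
total = (score_scenario("literature search", True)
         + score_scenario("clinical experience", True)
         + score_general(3, 3, 2, 3))
print(total)
```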
Analysis of variance (ANOVA) was used to determine if there were differences among cohorts on: (1) demographic variables and (2) scenario subscores (strong and weak), general knowledge subscores, and total scores. Alpha level was set at P<.01 to adjust for the 4 ANOVAs specific to the interview scores. Subsequently, frequencies were calculated related to what participants indicated as the most important source of information for making intervention decisions. A chi-square test was used to determine if there was a significant difference in the proportion of therapists graduating before and after the curriculum revision who were able to identify the 3 components of evidence-based practice.
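For readers who wish to reproduce this analytic approach, a minimal sketch follows (in Python, using scipy.stats). The cohort score lists and contingency counts below are placeholders for illustration, not the study data.

```python
# Sketch of the main quantitative analyses, assuming interview scores
# are held as one list per cohort. All values are placeholders.
from scipy import stats

cohort_scores = {
    1: [12, 14, 11, 13],
    2: [13, 12, 14, 12],
    3: [15, 14, 16, 13],
    4: [16, 15, 17, 14],
}

# One-way ANOVA across the 4 cohorts; alpha is set at .01 to adjust
# for the 4 ANOVAs run on the interview scores.
f_stat, p_value = stats.f_oneway(*cohort_scores.values())
print(f"F={f_stat:.3f}, P={p_value:.3f}, significant={p_value < .01}")

# Chi-square test on the proportion of therapists (pre- vs
# post-revision curriculum) naming all 3 components of evidence-based
# practice. Rows: curriculum; columns: named all 3 (yes, no).
contingency = [[7, 33],    # placeholder counts only
               [20, 20]]
chi2, p, dof, _expected = stats.chi2_contingency(contingency)
print(f"chi2={chi2:.2f}, df={dof}, P={p:.3f}")
```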
Participant responses to the general evidence-based practice question (ie, What does the term evidence-based practice mean to you?) were synthesized into one document. Both authors (P.J.M., J.D.) separately read the responses, identified units of information from the data,30 and independently developed preliminary themes related to those units.31 They then met to discuss and agree on themes arising from the data. Participants' quotes that exemplified the themes were identified.
Role of the Funding Source
This study was supported by a grant from the Teaching and Learning Enhancement Fund at the University of Alberta.
Results
Eighty physical therapists participated in the study (Tab. 2). More female than male therapists participated, reflecting the typical distribution by sex in physical therapist practice.32 The majority of therapists worked in private practice, in primarily orthopedic settings, which reflects the usual distribution of practice type in our area. Mean age differed significantly between all pairs of cohorts except cohorts 2 and 3. Years since graduation differed significantly among all cohorts.
Table 2. Participant Characteristics
Scenario scores and total scores did not differ across cohorts (Tab. 3). Scores on the general section differed significantly among cohorts (F=5.759, df=3, P=.001). Post hoc analysis showed that the scores of cohort 4 were significantly higher than those of cohorts 1 and 2. There were no differences in general scores between the 2 cohorts educated in the revised curriculum (cohorts 3 and 4). Across all cohorts, 45% of participants named clinical experience as their primary source of information for decision making (Tab. 4). The proportion of responses naming the most prevalent resource (clinical experience) did not differ between the 2 types of scenarios across cohorts (χ2=0.802, P=.414); thus, frequencies are displayed for strong and weak scenarios combined. Subsequently, a chi-square test showed no difference among the cohorts in the proportion of therapists who selected clinical experience as their most important source of information (χ2=5.634, P=.131). Only 12% of the participants identified research evidence gathered from a literature search as their most important resource (see “Total” column of Tab. 4). Other information sources (not shown), such as continuing education courses, practice guidelines, and theoretical justification, made up approximately 3% of responses. The absolute number of therapists who chose research evidence as their most important resource was similar across cohorts (range=3–6 therapists per cohort; Tab. 4). Of the 40 therapists educated in the revised curriculum, half were able to name the 3 components of evidence-based practice, compared with only 18% of graduates from the previous curriculum (χ2=12.69, P=.013).
Table 3. Interview Scores: Subsections and Total Score by Cohort
Table 4. Most Important Resource for Decision Making
The qualitative responses to the evidence-based practice question enhanced our understanding of therapist perceptions and attitudes related to evidence-based practice. One theme arose from the data: evidence-based practice is research. Discussions about research evidence dominated the participants' comments about evidence-based practice, and many therapists thought only of the use of research evidence when they reflected on evidence-based practice. Others did talk about the other components of evidence-based practice, but their comments showed that research evidence was viewed as separate from, and in some cases mutually exclusive with, the use of clinical experience and patient values in decision making. One therapist with 15 years of clinical experience stated, “I actually don't like evidence-based practice because what it says is research drives decision making, instead of saying the context of the patient and how the evidence applies to that patient in this time in space influences what decisions are made.” Another said, “So definitely we need to strive for evidence in our practice, but realizing that there is an individual, too, and not everyone fits into that.” Therapists generally seemed to have difficulty conceptualizing what the combination of evidence, clinical experience, and patient values would look like in everyday practice and thus were unsure that the combination was valuable. Clinical relevance was important to participants when they discussed the types of research they perceived as useful. One therapist stated, “To me, it is clinically relevant, so it's that you've done it and this is how it works clinically, because then it's real evidence, not just academic.” Although more recent graduates could name all components of evidence-based practice in the quantitative analysis, participant comments suggested that, across cohorts, participants had not internalized all components of evidence-based practice in their decision making.
Discussion
This investigation is the first to examine differences in evidence-based practice behaviors of therapists trained in different curricula within the same university program. Students taught in the revised curriculum, which had an increased and integrated focus on evidence-based practice, had significantly better knowledge of the use of evidence to inform practice. However, scenario-based discussions showed that therapists from all cohorts, regardless of curriculum or years of experience, approached clinical decision making in the same way, depending primarily on their clinical experience. Therapists in all cohorts used research evidence infrequently. Clinical experience has previously been identified as the most important source of information for decision making in physical therapy.8,33 Because clinical experience is one component of the evidence-based practice approach defined by Sackett and colleagues and widely adopted in physical therapy programs,1,24 its use should not be discouraged. However, an evidence-based practice approach also includes the judicious use of research evidence in decision making,34 and our findings are consistent with those of other researchers who reported that research evidence was used minimally.5,8,9,35 The gap between knowledge and practice continues.
This gap may be affected by 2 important environmental constructs: 1 in the university teaching program and 1 in the clinical context. How evidence-based practice skills are taught in the university program and how therapists must implement them in practice are not congruent. In early course work, students are typically given more than an hour to look up and appraise the literature and to discuss in seminar groups how to incorporate clinical experience, research evidence, and patient values into their decisions for a client. This time frame may be appropriate when students are initially learning evidence-based practice skills, but as their education program progresses, greater speed and efficiency need to be encouraged. By the end of students' university education, their practice of evidence-based processes should mirror the time constraints of clinical practice.36 In the same way, formative evaluation needs to better reflect the desired educational outcome. Objective structured clinical examinations, similar to those used to evaluate clinical skills, could be used to evaluate the ability to quickly and appropriately complete an evidence-based practice exercise from question to application.36 Students could demonstrate evidence-based skills, such as formulating a research question, retrieving literature, and respecting client goals, at a number of time-limited testing stations.
The workplace culture of students' clinical placements or a therapist's job is another factor that may affect the use of evidence in practice. Physical therapists generally indicate that, although they value evidence-based practice, they do not do it,5,10,16 so neither students nor new clinicians are immersed in an evidence-based culture. A disconnect exists between the emphasis on evidence-based practice at the university and in the workplace. Even though both students and clinical instructors recognize the need for better evidence-based practice role models in clinical practice,37 students report that they do not see evidence-based approaches modeled on a consistent basis.37,38 Both students and clinicians prioritize the acquisition of practical skills in clinical placements over the perceived “extra” of evidence-based practice, perhaps lessening the value and importance of evidence-based practice skills.37 Our participants' comments echo this previous work, suggesting that the use of research evidence is seen as an extra rather than an integral part of decision making alongside clinical experience and patient values. A more visible evidence-based practice approach in clinical placement experiences is a viable starting place and may help to narrow the knowledge-to-practice gap.
The challenge of widespread adoption of evidence-based practice behaviors also may be viewed through the lens of the diffusion of innovations literature, largely underpinned by the work of Rogers39 and reviewed broadly, with specific health care applications, by Greenhalgh.40 Understanding more about the characteristics of successfully adopted innovations, and of the individuals who adopt them, may help explain some of the challenges in adopting an evidence-based practice approach in clinical practice. Generally, the attributes of innovations that influence their adoption are: (1) relative advantage, (2) compatibility, (3) complexity, (4) trialability, (5) observability, and (6) reinvention (for detailed descriptions, see Rogers39 and Greenhalgh40). Our results suggest that both the relative advantage (over existing practices) of an evidence-based practice approach and its compatibility with existing practices are not valued in many workplaces. The broader adoption of an evidence-based practice approach may require reinvention (modification or changes during adoption) at the level of the individual or the organization, or both. Generally, there has been a move to focus more on organizational adoption, as opposed to an approach that focuses on the individual.40
There are several possible strategies that may help bridge the gap between academic and clinical contexts with respect to evidence-based practice. Rehabilitation clinics located in a university setting are one venue that can provide student clinical experiences and modeling of evidence-based practice behaviors. Role modeling of evidence-based practice in actual clinical settings may help change the perception that the use of evidence in practice is an extra rather than a core part of practice.37 Exploration of learning theories, such as workplace learning and situated learning, may help to increase our understanding of the transition from an academic to a clinical setting and the challenges that different contexts pose in the process of learning.41 Patton and colleagues41 recently explored learning theories and their potential applications in physical therapy clinical education, emphasizing the importance of the environment and its influence on student learning. We learn in context.
To increase the comfort level of community-based clinicians with evidence-based practice skills, academic programs can offer clinically integrated evidence-based medicine workshops. These types of workshops have been successful in medicine18,42 and may result in a more congruent evidence-based practice culture between academia and the workplace.18 Researchers can collaborate with clinicians to develop evidence summaries and practice guidelines that may make the use of research evidence in practice more mainstream and easier to incorporate as core practice. University teaching programs need to seek input from clinicians regarding their perceptions of barriers to evidence-based practice and how to more effectively incorporate evidence-based practice skills and behaviors into clinical placements. Lack of time is often cited as a reason why an evidence-based practice approach happens only minimally in clinical settings.5,10 Researchers and clinicians who value the evidence-based practice approach can work together to bring about workplace change around the provision of time for literature searching and evaluation. The implementation of evidence-based practice skills in the workplace requires a 2-way collaboration between academic and clinical educators.
The design of this investigation is one of its strengths. In contrast to other studies that have examined evidence-based practice skills and behaviors using surveys,4,5,8,11,15 we interviewed participants and elicited information about clinical behaviors related to scenarios. Although reacting to scenarios is different from interacting with an actual patient, therapists were asked whether the decision-making process they used for the scenarios differed from their usual process; the overwhelming majority indicated that the processes were similar. Our methods therefore reflect more about what therapists actually do in clinical practice than what they merely say they do, which can be influenced by social desirability bias. Our response rate was low but not unusual for this type of recruitment strategy.43,44 Nevertheless, there remains a possibility of responder bias; however, that bias, if present, would likely mean that the study attracted therapists committed to and knowledgeable about evidence-based practice. A limitation of the study is that the questionnaire was developed specifically for this study, and although it included modified questions from a well-validated questionnaire,28 we did not formally validate our questionnaire beyond content validation during pilot work.
In conclusion, among 4 cohorts of practicing clinicians with varying years of experience and exposure to evidence-based practice principles, therapists who received academic instruction in evidence-based skills demonstrated higher scores related to evidence-based practice knowledge and skills, but not behaviors. Changing the clinical behaviors of therapists requires tailoring academic activities to the time demands of clinical practice and creating environments, in both academic and clinical settings, that ensure students practice evidence-based skills across settings. Without some level of immersion in an evidence-based culture, either on a clinical placement or after graduation in a workplace, the expectation that graduates will apply evidence-based knowledge in clinical situations may be unrealistic.4,15 Activities designed to foster collaboration between academic programs and workplaces will create a more coherent evidence-based practice message for students.
Footnotes
Dr Manns and Dr Darrah provided concept/idea/research design. All authors provided writing, data collection, data analysis, and project management. Dr Manns provided fund procurement, facilities/equipment, and institutional liaisons. Ms Norton provided administrative support and consultation (including review of manuscript before submitting).
Ethics approval for this study was obtained through the Health Research Ethics Board at the University of Alberta.
Findings from the research project were presented at the Canadian Physiotherapy Association Congress; May 23–26, 2013; Montreal, Quebec, Canada, and the International Society for the Scholarship of Teaching and Learning Conference; October 2–5, 2013; Raleigh, North Carolina.
This study was supported by a grant from the Teaching and Learning Enhancement Fund at the University of Alberta.
Received September 26, 2013.
Accepted April 22, 2014.
© 2015 American Physical Therapy Association