Abstract
Background Many factors affect student learning throughout the clinical education (CE) component of professional (entry-level) physical therapist education curricula. Physical therapist education programs (PTEPs) manage CE, yet the material and human resources required to provide CE are generally overseen by community-based physical therapist practices.
Purpose The purposes of this systematic review were: (1) to examine how the construct of quality is defined in CE literature and (2) to determine the methodological rigor of the available evidence on quality in physical therapist CE.
Methods This study was a systematic review of English-language journals using the American Physical Therapy Association's Open Door Portal to Evidence-Based Practice as the computer search engine. The search was categorized using terms for physical therapy and quality and for CE pedagogy and models or roles. Summary findings were characterized by 5 primary themes and 14 subthemes using a qualitative-directed content analysis.
Results Fifty-four articles were included in the study. The primary quality themes were: CE framework, CE sites, structure of CE, assessment in CE, and CE faculty. The methodological rigor of the studies was critically appraised using a binary system based on the McMaster appraisal tools. Scores ranged from 3 to 14.
Limitations Publication bias and outcome reporting bias may be inherent limitations to the results.
Conclusion The review found inconclusive evidence about what constitutes quality or best practice for physical therapist CE. Five key constructs of CE were identified that, when aggregated, could define quality.
Clinical education (CE) in health profession programs is unique within higher education in the proportion of program contact hours spent outside of the classroom. Clinical education involves immersion of students in actual clinical practice, which is separate from the didactic components typically delivered in classrooms. Physical therapist education programs (PTEPs) devote 44.9% of professional (entry-level) physical therapist education curricula to CE.1 These programs utilize directors of clinical education (DCEs) to manage the CE component of the curriculum; DCEs aim to define, pursue, and influence the quality of the CE product. Yet, the material and human resources required to provide CE experiences for physical therapist students are generally managed by community-based physical therapist practices, in contrast to the didactic components, where the PTEP maintains direct control over the factors affecting the quality of the educational product. These physical therapist practices become CE sites when affiliated with PTEPs through written contractual agreements. Regardless of the contractual arrangements, many factors that affect student learning lie outside of a PTEP's control. However, the ultimate responsibility for the provision of high-quality education remains with the PTEP.2
Although the PTEP maintains responsibility for student learning and the outcomes of CE experiences, CE sites are given latitude to develop site-specific CE programs. Currently, the typical PTEP utilizes an average of 373 CE sites.1 Clinical educators—center coordinators of clinical education (CCCEs) and clinical instructors (CIs)—within these sites design and implement learning experiences to engage students in the management of patients common to each respective practice. The CE sites also may include exposure to practice administration, patient advocacy, or interdisciplinary care. Clinical educators are responsible for assessing student performance toward mastery of entry-level standards, although agreement on what constitutes entry-level standards varies.3,4 A national manual, adopted by the American Physical Therapy Association's (APTA) Board of Directors, Guidelines and Self-Assessments for Clinical Education,5 is available to guide CE sites in the design, implementation, and assessment of CE experiences for students; however, the frequency of its use is unknown.3 Additionally, the APTA Physical Therapist Clinical Education Principles document6 provides consensus standards for use in physical therapist CE, although its impact on quality outcomes also is unknown. These factors contribute to considerable variation among CE experiences, presenting a challenge for the PTEP in monitoring the overall quality of student clinical experiences.
How does a PTEP measure quality in physical therapist CE? Quality can be defined as a distinctive or essential characteristic or attribute, character with respect to grade of excellence, a personality or character trait, or accomplishment or attainment.7 In physical therapist education, the Commission on Accreditation of Physical Therapy Education (CAPTE) accredits programs that comply with standards that demand demonstration of quality and continuous improvement.2 Many evaluative criteria pertain to the CE component of the curriculum, including references to qualified faculty, environments conducive to learning, protection of rights and safety, sufficient resources to support the curriculum, and assessment of the CE program. Taken collectively, CAPTE standards may guide academic and clinical educators toward factors influencing high-quality CE, yet they do not identify evidence-based definitions or measures of quality.
Gwyer et al documented a historical overview of CE in US physical therapist education in 2003,8 framing their review by the categories of CE sites, structure, assessment, and faculty (Fig. 1). Although this summary of the historical roots of the profession documented the positive role CE research has made on the advancements in physical therapist education, it lacked critical appraisal of the methodological rigor of the literature. Two systematic reviews9,10 of physical therapist CE literature were conducted in the mid-2000s; however, both reviews had a narrow focus on CE models. Baldry Currens10 reviewed the advantages and disadvantages of the 2:1 student-to-CI model, whereas Lekkas et al9 took a broader approach in considering the breadth of CE models in the health care literature. Both reviews concluded that there was insufficient evidence and methodological rigor to support or favor one particular CE model. What is lacking from the literature to date is a broad and critically appraised review of the breadth of CE research. The purposes of this systematic review, therefore, were: (1) to examine how the construct of quality is defined in CE literature and (2) to determine the methodological rigor of the available evidence on quality in physical therapist CE.
Key constructs of quality in clinical education identified in this review. Dotted border indicates the construct was identified by Gwyer et al8 as a key foundational component in clinical education.
Method
Identification and Selection of Studies
Relevant search terms were developed and agreed upon by the researchers and sorted according to the following categories: CE pedagogy, CE models, CE roles, and descriptors of quality. CINAHL subject headings were added to the search term categories where appropriate to enhance search outcomes. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)11 Statement guided the selection of the search-identified studies (Fig. 2).
Flow diagram of systematic review.
Literature Search
The APTA's Open Door Portal to Evidence-Based Practice12 was used for the computer-based search on October 11, 2011, and repeated on July 19, 2012. The databases included MEDLINE, CINAHL, SPORTDiscus, Cochrane Database of Systematic Reviews, Cochrane Central Register of Controlled Trials, Database of Abstracts of Reviews of Effects, NHS Economic Evaluation Database, Health Technology Assessments, and Cochrane Methodology Register. The search terms and search strategy are presented in Table 1. A hand search and content expert consultation complemented the database search. All literature citations and full-text files were collected and organized using the Zotero research management tool13 (Zotero, Fairfax, Virginia).
Search Terms and Search Strategya
Inclusion and Exclusion of Studies
Two researchers (P.J.J., C.A.M.) reviewed each of the identified studies for inclusion. A third researcher (S.P.G.) was consulted as tie-breaker. Studies of any research design were included if full-text was available in the English language, they originated from peer-reviewed sources, and they addressed quality in physical therapist CE pedagogy as it related to the roles or models of physical therapist CE. Studies were excluded if they:
Did not address, describe, or measure the construct of quality as it related to CE pedagogy and roles or models.
Addressed only academic and didactic influences on CE.
Were exclusively from disciplines other than physical therapy.
Were dissertations, abstracts, or conference proceedings.
The role of the DCE was excluded from the search to focus the review on roles beyond the immediate control of the PTEP.
Data Analysis
Descriptive data were extracted for all articles. The data included the publication citation and date of publication, the country of research, the aim or purpose of the study, the description of participants, the summary of the intervention and methods, and the summary of the outcomes, results, or conclusions. All studies were classified according to research design and methods. Research designs were categorized as either quantitative or qualitative. Level of evidence classification was assigned hierarchically for quantitative study designs according to the Oxford Centre for Evidence-Based Medicine Levels of Evidence.14 This system was selected due to its wide acceptance in the classification of research rigor, including “expert opinion” as a level of evidence.14 Qualitative study designs were classified according to the categories of the McMaster University Occupational Therapy Evidence-Based Practice Research Group, which developed a valid and reliable protocol to critically review qualitative research articles.15 Both classification tools were used in a previous CE systematic review.9
The methodological rigor of each article was critically appraised using the binary system developed by Lekkas et al9 based on the McMaster appraisal tools.15 Studies were appraised and scored 1 point for adequate methodological rigor and 0 points for inadequate rigor for each of 14 areas, for a potential critical appraisal score (CAS) between 0 and 14. Two reviewers (S.M.G., C.A.M.) independently completed the initial data extraction and critical appraisal with a data extraction form developed using Google Docs (Mountain View, California).16 The 2 reviewers confirmed the abstracted results.
Data pooling for the purposes of meta-analysis was deemed inappropriate given the heterogeneous research designs and methods. Therefore, data were grouped, synthesized, and analyzed descriptively according to CE themes using a qualitative content analysis.17 Interobserver agreement of critical appraisal was assessed using kappa statistics.18 An adjusted kappa also was calculated because an unadjusted kappa could provide misleading results if the data were unbalanced, despite high observed agreement.19
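To illustrate why an adjusted kappa was reported alongside the unadjusted statistic, the following sketch (not the authors' code; function and variable names are hypothetical) computes Cohen's kappa and the prevalence- and bias-adjusted kappa (PABAK, one common adjustment for unbalanced data) from two raters' binary include/exclude decisions:

```python
def cohen_kappa(rater_a, rater_b):
    """Unadjusted Cohen's kappa for two raters' binary (0/1) decisions."""
    n = len(rater_a)
    # observed proportion of agreement
    p_observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    # chance agreement expected from each rater's marginal "include" rate
    pa = sum(rater_a) / n
    pb = sum(rater_b) / n
    p_expected = pa * pb + (1 - pa) * (1 - pb)
    return (p_observed - p_expected) / (1 - p_expected)


def adjusted_kappa(rater_a, rater_b):
    """Prevalence- and bias-adjusted kappa (PABAK): 2 * Po - 1."""
    n = len(rater_a)
    p_observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    return 2 * p_observed - 1


# Unbalanced decisions: 90% agreement, but nearly all items are "include"
screen_a = [1] * 9 + [0]
screen_b = [1] * 10
print(cohen_kappa(screen_a, screen_b))     # near 0 despite high agreement
print(adjusted_kappa(screen_a, screen_b))  # 0.8, reflecting the agreement
```

The unbalanced example shows the concern the review cites19: when nearly every item falls in one category, unadjusted kappa can approach zero despite 90% observed agreement, whereas the adjusted statistic tracks the observed agreement.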
Two reviewers (C.A.M., S.M.G.) reached consensus on the level of evidence considering the methodological design of articles, assigned CAS, and descriptive qualitative summary of the thematic outcomes. The strength of available evidence on the quality of physical therapist CE was assessed in summary for all articles.
Results
Selection of Studies
Initially, the search yielded 202 citations. One hundred eleven articles were excluded after title and abstract review. The remaining 91 articles received a full-text review, yielding 54 articles for inclusion. The articles originated from 7 countries: Australia (8), Canada (8), Finland (1), Sweden (1), United Kingdom (6), United States (29), and Zimbabwe (1). Four studies were published in the 1980s, 12 in the 1990s, 26 from 2000 to 2009, and 12 from 2010 to 2012.
Interrater Reliability
Review of titles and abstracts produced an adjusted interobserver agreement of kappa=.989 (CI=.925–.989). Full-text articles were reviewed in the same manner and produced an interobserver agreement of kappa=.703 (CI=.510–.825), which indicates substantial agreement.18 Disagreements arose primarily from differences in how the 2 reviewers interpreted quality issues.
Design and Rigor
Thirty-seven (68.5%) of the 54 articles were quantitative studies. The strength of the evidence as indicated by study design was variable, with most studies using low-level-of-evidence designs. Only 1 level 1B randomized controlled trial (RCT) (1.8%) and 6 level 2 studies (11.1%) were identified, including 2 level 2A systematic reviews of cohort studies. The largest share (n=22 [40.7%]) of the quantitative evidence was level 4 descriptive research with outcome measures. Two level 3 case-controlled studies (3.7%) and 6 level 5 expert reviews (11.1%) were included. The CAS for the quantitative studies ranged from 4 to 14.
Seventeen qualitative studies (31.5%) were included, the majority of which were descriptive designs (n=11 [20.3%]). Three grounded theory studies (5.5%) were identified, along with 1 case study (1.8%), 1 ethnographic study (1.8%), and 1 phenomenological study (1.8%). The CAS for the qualitative studies ranged from 3 to 14. Refer to eAppendix 1 for the CASs.
The 14-point CAS for methodological rigor was divided into tertiles20 to create a 3-level risk-of-bias scale, with each tertile containing a third of the studies identified in the systematic review. The top tertile was identified qualitatively as “high quality, low risk of bias”; the middle tertile as “moderate quality, moderate risk of bias”; and the lowest tertile as “low quality, high risk of bias.” The scores associated with each tertile were: top, 13 to 14 points; middle, 10 to 12 points; and lowest, 0 to 9 points. We used a distribution-based division because there is no current standard for risk-of-bias scoring in studies of educational interventions or CE, and we believe the distribution-based system most accurately identifies the studies that truly are at low or high risk of bias. Collectively, these articles may be considered “bench research” or “first principles” on the topic of quality in physical therapist CE.14,21,22
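The tertile cut points described above reduce to a simple mapping from CAS to risk-of-bias label. The sketch below is illustrative only (the function name is hypothetical, not part of the review's methods), using the cut points reported in the text:

```python
def risk_of_bias(cas):
    """Map a 0-14 critical appraisal score (CAS) to a tertile label.

    Cut points (13-14 / 10-12 / 0-9) are the distribution-based tertiles
    reported in the text, each holding roughly a third of the 54 studies.
    """
    if not 0 <= cas <= 14:
        raise ValueError("CAS must be between 0 and 14")
    if cas >= 13:       # top tertile
        return "high quality, low risk of bias"
    if cas >= 10:       # middle tertile
        return "moderate quality, moderate risk of bias"
    return "low quality, high risk of bias"  # lowest tertile


print(risk_of_bias(14))  # high quality, low risk of bias
print(risk_of_bias(9))   # low quality, high risk of bias
```

Note that because the division is distribution-based (each tertile holds a third of the included studies), the point cut offs would shift if recomputed over a different sample of studies.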
Summary Findings
Gwyer and colleagues' presentation of the structure and format of CE8 was used as a framework for organizing relational themes. Five primary themes and 14 subthemes emerged from the review. The primary themes were: (1) CE framework, (2) CE sites, (3) structure of CE, (4) assessment in CE, and (5) CE faculty. Refer to Figure 1 for the theme and subtheme pairings, to Table 2 for each study's design, thematic categorization, and CAS, and to eAppendix 2 for the overall summary of findings.
CE Framework
Two articles (3.7%) were categorized within the CE framework theme. The study designs were of low-level evidence. The CASs ranged from 3 to 4, indicating low quality, high risk of bias. Clinical education in professional PTEPs has evolved over the past century. Gwyer et al8 provided a level 5 expert review on historical events that shaped the development of physical therapist CE, detailing the growth of various aspects of physical therapist CE. Higgs'23 level 5 expert opinion presented a valuable paradigm for organizing a comprehensive analysis of CE by applying systems theory to CE programs.
CE Sites: Interprofessional, Practice and Productivity
Seven articles (13.0%) were categorized within the CE sites theme. The majority of study designs were low level. The CASs ranged from 5 to 14, with only 1 study emerging as high quality, low risk of bias. Four of the 7 studies investigated interprofessional or intraprofessional development as a result of training programs.24–27 The results revealed collaborative training among health professionals led to increased knowledge about other professional roles and facilitated development of collaborative relationships during work. One study assessed the physical therapist practice, which included the type of patients seen and interventions provided by students during clinical placements.28 The results showed that patients with musculoskeletal conditions (47%) represented the majority of those treated and that exercise (57%) was the primary intervention provided.
Productivity was used as an outcome measure in 2 studies. Dupont et al29 found a significant increase in the number of patients treated and the direct patient care provided during clinical placements for students in the Canadian health system. The number of patients seen by the student-CI team increased the most during second and third clinical placements (32%), and time in direct patient care increased the most during third and fourth rotations (36%). A 4% increase in CIs' workloads was calculated overall. Ladyshewsky30 found that overall productivity increased by 47% and direct patient care by 106% when using a 2:1 collaborative model. Clinical instructors spent the same amount of time in administrative tasks with students compared with baseline; however, students' productivity allowed CIs to perform more hours of didactic teaching and supervision.
Structure of CE: International, Models, Sequencing, Standards and Trends
Twenty-four articles (44.4%) were categorized within the structure of CE theme. Four studies were high level in design, including 1 RCT. The CASs ranged from 4 to 14, with 10 studies emerging as high quality, low risk of bias. Subthemes included articles on international CE, models of CE, sequencing of CE within a curriculum, and CE standards or trends. Common to articles under the structure of CE theme were descriptions of CE practices, including innovative approaches, changes, variation from the norm, and exploration of current systems.
Three articles (5.5%) drew upon international clinical experiences (ICEs). Pechak31 presented an overview of ICEs in US-based PTEPs. The results revealed 40.9% of the respondents offered ICEs, with a larger number reporting some availability of international experience for students that did not fit the definition of ICEs. Most ICE offerings were developed in the past decade and were offered in high- or upper-middle-income countries (eg, in Europe, Canada, and Australia). Length of rotations ranged from 6 to 8 weeks. Barriers to ICEs included faculty time, expense, and site coordination. Crawford et al32 reported on the outcomes involving Canadian physical therapist student experiences over a 10-year period. The results revealed that roughly 4% of students enrolled in physical therapist education programs participated in ICEs in more than 50 different countries. Programs limited international opportunities to students without academic concerns; other program-specific limits also were presented. Both studies reported that the benefits of ICEs included broadening student perspectives on global and community health issues. Finally, Rodger et al33 presented outcomes of collaborative efforts of 21 research-intensive universities across 12 countries. Elements of collaborative partnerships at the international level included national and local commitments to training of the health care workforce, support of clinical educators by all stakeholders, and the need for innovative models to prepare the workforce in emerging markets.
Ten articles addressed varying models of CE. Two level 2A systematic reviews discussed the 2:1 model and CE models broadly. Lekkas et al9 and Baldry Currens10 concluded there is inadequate evidence to conclusively support the use of 1 CE model over another, although lower-level and anecdotal evidence have clearly identified the advantages, disadvantages, and recommendations for each model according to various stakeholders. Both reviews encouraged further research into CE models with higher levels of methodological standardization and rigor. Kelly et al34 compared student learning on a collaborative “mock” clinical experience in a self-contained, pro bono campus clinic with that on a traditional clinical rotation. Results revealed minimal differences between the 2 models of CE on student outcomes. Seven articles sought to assess collaborative CE models—primarily the 2:1 and 4:2 models.35–41 Outcome measures varied among these studies, including stakeholder (student and CI) impressions, clinical productivity, and learning and teaching models. Barriers to using a collaborative model in CE include lack of funding, need for CI training, and student acceptance of the model.
Five articles addressed curricular sequencing involving either student or site outcomes. Graham et al42 found sites were most productive when student-CI teams worked together for longer rotations (5 weeks, full-time, 5 days/week) compared with shorter (1-week) or part-time (1 day/week) rotations. Student performance as measured by a clinical evaluation instrument was highest for students on full-time, longer rotations. Teaching scores for CIs also were highest for full-time, longer-duration clinical experiences. Kell and Owen43 examined the effects of placing CE in either the second or third year of a professional education program. Although the results were inconclusive, they suggested that student learning strategies depended on site characteristics such as the number of students per site and the number of CIs per student. In particular, increasing the student-to-educator or educator-to-student ratio may have detrimental effects during a 4-week clinical experience. Martorello44 found CCCEs preferred fewer weeks for a first rotation (X̅=7.3 weeks, SD=2.26) than for final clinical rotations (X̅=9.1 weeks, SD=2.09), which is similar to Sass and colleagues'4 findings that clinical educators preferred longer lengths (8–10 weeks) for final rotations. Weddle and Sellheim45 documented 1 program's outcomes for a curricular model that included integrated CE experiences, defined as one half day per week for multiple weeks, and multiweek (8-week) full-time experiences. Outcomes revealed students were prepared for practice, and National Physical Therapy Examination data revealed no difference in outcomes from those of students who completed the program using a different curricular design.
Finally, Watson et al,46 in a 2-parallel group, single-blinded, multicenter RCT, found students who participated in a simulated learning program for 25% of a 4-week clinical placement performed no worse than students who completed a traditional 4-week clinical immersion rotation. The results indicate CE offered in a simulated environment can successfully replace a portion of clinical time without compromising student learning outcomes.
Four studies emerged in relation to development of CE standards. Wetherbee et al3 and Sass et al4 reported on standardization of CE formats and lengths, breadth and depth of exposure to practice settings and patient types, competencies for entry-level status, and the need for mechanisms to credential clinical sites; however, no final agreement on best practice was identified. Strohschein,47 on the other hand, reported that CE is a process guided by 7 needs involving the profession, the site, and clinical educators, and outlined 10 models that share commonalities related to the process of CE, the roles established and relationships that develop, and the collaborative responsibility for the development of nontechnical standards, such as professional behaviors. Finally, Weddle and Sellheim48 reported on a curricular model grounded in educational theory that includes integrated CE experiences. Outcomes reinforced student motivation to learn and readiness to prepare for clinical practice.
Lastly, 2 articles within the structure of CE theme documented trends occurring within CE. Baldry Currens and Bithell49 found that being a CI is not a primary role for physical therapists, that CE is not a primary focus within physical therapist practice, and that standards do not exist for CE sites or PTEPs to determine clinical capacity for students. Scully and Shepard50 reported organizational and human factors influenced the type, quality, and quantity of student learning. Organizational factors included ground rules from the PTEP level, the health care institution, and the physical therapy department itself. Human factors included CI and student perspectives and teaching tools, such as coaching, as used by CIs.
Assessment in CE: Clinical Instruction, Site and Student
Ten articles (18.5%) were categorized within the assessment in CE theme. The majority of studies were high to moderate level in design. The CASs ranged from 4 to 14, with 5 studies emerging as high quality, low risk of bias. Common to the articles were the descriptions of practices and expectations or strategies targeting performance improvement of the CI, the site, or the student. These studies sought to promote best practice in CE. Although the subject areas varied, the studies relied on feedback from both students and CIs to define or advance various quality initiatives.
Five studies examined the effectiveness of activities adjunctive to CE experiences that added overall value to learning. First, critical reflection, as a program of learning, was found to contribute to learning among students and CIs by increasing a sense of validation, increasing empowerment, and broadening perspectives.51–53 Second, Low54 reported that utilizing a Web-based program that included reflective journaling and discussion boards provided a moderately positive learning experience that facilitated peer-to-peer and student-to-professor communications. Finally, Wright's55 study illustrated the value of student feedback to the CE site in the assessment and improvement of the CE experience and CI performance.
Two studies measured CIs' perspectives relative to entry-level practice. Jette et al56 explored the CIs' perception of student behaviors that comprise entry-level performance. They concluded with a model of decision making that involves not only the assessment of specific performance measures but also a subjective synthesis or “gut feeling” that integrates all observations in concluding whether the student has achieved entry level, defined by them as “mentored independence.” Hayes et al,57 on the other hand, examined clinical performance behaviors that led to unsafe or ineffective practice. This study brought attention to affective behaviors (poor communication, unprofessional behaviors) that were less likely to be addressed by CIs.
Finally, 3 studies examined outcomes of student performance within clinical practice. Solomon58 reported learning contracts (LCs) during CE were useful to focus student attention on internal strengths and weaknesses and in the development of objectives for a clinical experience. However, development of the LC was time-consuming, and clinicians perceived a decreased flexibility of caseload as a result of implementation. Housel and Gandy59 and Vendrely and Carter60 assessed the outcomes of training programs on rating of student performance in a clinical setting, finding minimally significant differences on student ratings of safety behaviors60 and overall student improvement from midterm to final ratings59 between noncredentialed and credentialed CIs.
CE Faculty: Demographics and Characteristics, and CI Education Needs
Eleven articles (20.3%) were categorized within the CE faculty theme. The majority of studies were moderate to low level in design. The CASs ranged from 7 to 14, with 3 studies emerging as high quality, low risk of bias. Commonly described CIs were typically female, held a bachelor of science degree, had between 6 and 8 years of clinical experience, had been a CI for 5 years, and instructed 1 to 4 students every 2 years.61–63 Trends for effective credentialed CIs showed they set clear goals for students and provided timely and thorough orientation.64 There was a negative correlation between credentialed and noncredentialed CIs for years of experience as a physical therapist and years as a CI.64 Positive teaching behaviors of CIs included using a line of questioning and coaching throughout a clinical experience.63,65,66 Hindering behaviors included intimidating questioning and correcting students in front of patients. Exemplary CIs were characterized as physical therapists who sought out continuing education about teaching and learning, involved themselves in the teaching and learning process, participated in reflective practice, encouraged student participation in the learning process, and provided supervision congruent with the level of the learner.67–69 Morren et al70 found no association between CI characteristics and student assessment of overall clinical experience.
Only 1 article was categorized within the CI education needs subtheme. Recker-Hughes et al71 reported that CIs do not believe professional development activities support their clinical teaching roles and that CIs desire more opportunities for continuing professional development and support from PTEPs.
Discussion
The purposes of this systematic review were: (1) to examine how the construct of quality is defined in CE literature and (2) to determine the methodological rigor of the evidence on quality in physical therapist CE. Clinical education research is particularly important because PTEPs are accountable for demonstrating quality for accreditation. Congruent with the findings of previous CE systematic reviews,9,10 the foci, methods, measures, outcomes, and rigor of CE research were variable. The volume and variability of educational research methods in the area of physical therapist CE may have prevented comprehensive and rigorous assessment of the literature prior to this review.
The use of reliable data is needed to support educational practice and policies, especially if the demand for CE continues to expand in the midst of limited resources. Studies need to be rigorously assessed before their outcomes are operationalized. The results of this systematic review reveal the methodological rigor of studies on quality in physical therapist CE varies, regardless of study design. The results of the lowest tertile studies, assessed to be of low rigor and high risk of bias (CAS ≤9), should be used with caution. The results of the highest tertile studies, assessed to be of high rigor and low risk of bias (CAS 13–14), could be used to generate benchmarks for best practice in physical therapist CE.
Our review found inconclusive evidence about what constitutes quality or best practice for physical therapist CE, yet it identified research in 5 key themes of CE that, when aggregated, could define quality. Many individual studies reported outcomes about a variety of components related to CE; however, heterogeneity of methods and measures prevented meta-analysis. We present a qualitative descriptive summary that highlights CE as a multidimensional, complex program.
Physical therapist CE involves an intricate process. Its overall intent is to provide a means for students to reach entry-level clinical competence in real-time clinical practice. This systematic review showed that CE is affected by various stakeholders, including PTEPs, CE sites, CIs, and students. Clinical education programs are offered both domestically and internationally. International clinical experiences are a small but growing part of CE. The sequencing of CE within PTEPs is variable. The variety of curricula includes integration of CE as part of clinical science courses, use of simulated experiences in place of traditional placements, and traditional full-time placements of variable lengths. No sequence of CE delivery was shown to be superior to another.
Variability also existed in the structure of CE at the site level. The evidence identified in this review and other reviews9,10 does not support or favor one particular model for CE over others. Clinical instructor descriptors were identified, and some comparisons aggregated across studies revealed similarities in sex, academic background, years of clinical experience, and years of experience as a CI; however, no comparisons linking descriptive characteristics of CIs to outcomes in CE were found. Some data are emerging about the benefits of CI training; however, results were inconclusive in this review. Clinical instruction at the site level is not viewed as a priority within clinical practice, and professional development opportunities to advance the culture of CE at both the site and instructor levels are needed. Evaluation of student experiences at the site level was identified in one study and was found to be of benefit. Two studies reported on the importance of assessing student clinical performance using cognitive, skill, and behavioral dimensions, which is reflected in the development and current use of the APTA Physical Therapist Clinical Performance Instrument (PT CPI) assessment tool.72
These findings are novel because this is the first time the literature about CE in professional PTEPs has been systematically compiled and critically appraised to define and summarize the quality themes and the strength of evidence about quality in CE. Often, individuals are left to make their own connections among the available data, and all too often not all of the pieces are addressed. This situation can leave decision makers with incomplete information and faulty recommendations for policy.73 Our systematic review compiles, critically appraises, and organizes this evidence. The summarized results lead us to believe there is much work to be done to build the body of evidence in physical therapist CE.
The question remains, however: are we asking the right questions to generate the research needed to move CE forward in a doctoring profession? The current model of physical therapist CE has been called at best vulnerable and at worst indefensible by some leaders in the profession.74 The belief that CE is broken is held by some clinicians, academicians, and students as well; however, what conclusive data exist that it is broken? On the one hand, the findings of this review do support the notion of a problematic system, evidenced by a paucity of CE research at high levels of methodological rigor. The breadth and heterogeneity of research methods, measures, and conclusions do not bring us closer to defining quality or best practice for physical therapist CE. On the other hand, recent calls for homogeneity and uniformity in CE cannot be defended as evidence based given the findings of this review. As examples, the 1-year internship, self-contained, health systems-based, clinician-paid, and required residency models have gained prominence in recent years, yet these designs lack their own evidence basis according to our literature searches. The current heterogeneity of CE may leave the profession vulnerable and indefensible. However, assimilating to homogeneous models and methods of CE may not be any more defensible unless or until the research is better able to define and measure quality and to determine what models of best practice should become standardized.
Although CAPTE defines a set of minimum educational standards, it is incumbent upon academic and clinical educators to design and conduct educational research oriented toward defining and measuring quality and best practice in CE. To this end, we propose the development of a national CE research agenda. Whether developed independently of the existing Education Research Questions in Ranked Priority Order75 or as a subagenda of that consensus document, defining CE best practice is a daunting goal that will require well-coordinated and intentional efforts. Such an agenda could be oriented to define the construct of quality for CE and to direct the development and validation of tools and methods by which to better measure constructs of quality. Future systematic reviews could assess each of the CE themes identified in this systematic review separately. Unfortunately, although CE is seen as a cornerstone for the viability and growth of physical therapy as a doctoring profession, the research supporting CE is not always so highly prioritized. Efforts to define and measure quality in CE face the same obstacles as other educational research in securing the time and attention, creative and intellectual investment, and grants and financial support commensurate with its presumed importance to the field; this challenge especially affects those who hold the role of DCE because of the disparate responsibilities of the position.76
Apart from the variable methods and conclusions of the reviewed literature, the methods of this review itself have their own inherent limitations. First, unlike other CE systematic reviews, our search excluded studies from professions other than physical therapy to make the review more manageable; however, this approach may have introduced publication bias. There may be findings in the CE literature of other health sciences that could help our own profession define and measure CE quality, and such a review is recommended as a future study. Similarly, APTA's Open Door search engine is not inclusive of all databases available to catalog physical therapist education research, although it did capture the most likely and relevant databases and journals. Second, use of McMaster University's critical appraisal framework,15 which was applied quantitatively by Lekkas et al,9 may have been subject to outcome reporting bias.77 Although the critical appraisal tool has been used in a previous systematic review,9 this scale has not been validated in the literature. As such, we agreed upon using a tertile distribution scale to assess rigor and risk of bias rather than consensus or the arbitrary selection of a cut point. Third, although we categorized each article into one theme based upon the primary goal of the study to assist with categorization, some study outcomes may actually reflect multiple themes. Although the CASs would have remained the same regardless of the theme placement, the qualitative descriptive summary may have expanded. Finally, although this study excluded the roles of the DCE because we sought to examine the variables of CE quality beyond the immediate control of a PTEP, inclusion of this role may have added to the rich discussion of the design, implementation, and assessment of a CE program and warrants further study.
Conclusion
The methodological rigor of the available evidence is not high enough to draw definitive conclusions about how quality in physical therapist CE programs should be defined or how it can be measured. Similar to previous reviews of CE, this study uncovered more questions than answers. This systematic review offers a summary of the broad and variable literature that addresses some facets of quality in CE and provides a starting point for identifying gaps in the literature. The development of a research agenda for defining quality in CE would be highly beneficial for directing methodologically rigorous research oriented toward CE best practice for the profession.
Footnotes
Dr McCallum, Dr Jacobson, Dr Gallivan, and Dr Giuffre provided concept/idea/research design. All authors provided writing and data collection. Dr McCallum, Dr Mosher, Dr Jacobson, and Dr Giuffre provided data analysis. Dr McCallum, Dr Mosher, and Dr Jacobson provided project management. Dr Gallivan provided institutional liaisons. Dr McCallum and Dr Gallivan provided consultation (including review of manuscript before submission).
- Received October 9, 2012.
- Accepted April 25, 2013.
- © 2013 American Physical Therapy Association