Quality in Physical Therapist Clinical Education: A Systematic Review

Christine A. McCallum, Peter D. Mosher, Peri J. Jacobson, Sean P. Gallivan, Suzanne M. Giuffre
DOI: 10.2522/ptj.20120410 Published 1 October 2013
Christine A. McCallum
C.A. McCallum, PT, PhD, Division of Physical Therapy, Walsh University, 2020 E Maple St, North Canton, OH 44720 (USA).
Peter D. Mosher
P.D. Mosher, PT, DPT, OCS, Department of Physical Therapy, College of Mount St Joseph, Cincinnati, Ohio.
Peri J. Jacobson
P.J. Jacobson, PT, DPT, MBA, Physical Therapy Program, Bellarmine University, Louisville, Kentucky.
Sean P. Gallivan
S.P. Gallivan, PT, MS, NCS, Doctor of Physical Therapy Program, University of Dayton, Dayton, Ohio.
Suzanne M. Giuffre
S.M. Giuffre, PT, EdD, Department of Physical Therapy, Youngstown State University, Youngstown, Ohio.

Abstract

Background Many factors affect student learning throughout the clinical education (CE) component of professional (entry-level) physical therapist education curricula. Physical therapist education programs (PTEPs) manage CE, yet the material and human resources required to provide CE are generally overseen by community-based physical therapist practices.

Purpose The purposes of this systematic review were: (1) to examine how the construct of quality is defined in CE literature and (2) to determine the methodological rigor of the available evidence on quality in physical therapist CE.

Methods This study was a systematic review of English-language journals using the American Physical Therapy Association's Open Door Portal to Evidence-Based Practice as the computer search engine. The search was categorized using terms for physical therapy and quality and for CE pedagogy and models or roles. Summary findings were characterized by 5 primary themes and 14 subthemes using a qualitative-directed content analysis.

Results Fifty-four articles were included in the study. The primary quality themes were: CE framework, CE sites, structure of CE, assessment in CE, and CE faculty. The methodological rigor of the studies was critically appraised using a binary system based on the McMaster appraisal tools. Scores ranged from 3 to 14.

Limitations Publication bias and outcome reporting bias may be inherent limitations to the results.

Conclusion The review found inconclusive evidence about what constitutes quality or best practice for physical therapist CE. Five key constructs of CE were identified that, when aggregated, could construe quality.

Clinical education (CE) in health profession programs is unique to higher education in the proportion of program contact hours spent outside of the classroom. Clinical education involves immersion of students in actual clinical practice, which is separate from the didactic components typically delivered in classrooms. Physical therapist education programs (PTEPs) devote 44.9% of professional (entry-level) physical therapist education curricula to CE.1 These programs utilize directors of clinical education (DCEs) to manage the CE component of the curriculum; DCEs aim to define, pursue, and influence the quality of the CE product. Yet, the material and human resources required to provide CE experiences for physical therapist students are generally managed by community-based physical therapist practices, which is in contrast to the didactic components where the PTEP maintains direct control over the factors affecting the quality of the educational product. These physical therapist practices become CE sites when affiliated with PTEPs through written contractual agreements. Regardless of the contractual arrangements, there are many factors that affect student learning and are outside of a PTEP's control. However, the ultimate responsibility for the provision of high-quality education remains with the PTEP.2

Although the PTEP maintains responsibility for student learning and the outcomes of CE experiences, CE sites are given latitude to develop site-specific CE programs. Currently, the typical PTEP utilizes an average of 373 CE sites.1 Clinical educators—center coordinators of clinical education (CCCEs) and clinical instructors (CIs)—within these sites design and implement learning experiences to engage students in the management of patients common to each respective practice. The CE sites also may include exposure to practice administration, patient advocacy, or interdisciplinary care. Clinical educators are responsible for assessing student performance toward mastery of entry-level standards, although agreement on what are considered entry-level standards is variable.3,4 A national manual, Guidelines and Self-Assessments for Clinical Education,5 adopted by the American Physical Therapy Association (APTA) Board of Directors, is available to guide CE sites in the design, implementation, and assessment of CE experiences for students; however, the frequency of its use is unknown.3 Additionally, the APTA Physical Therapist Clinical Education Principles document6 provides consensus standards for use in physical therapist CE, although its impact on quality outcomes also is unknown. These factors contribute to considerable variation among CE experiences, presenting a challenge for the PTEP in monitoring the overall quality of student clinical experiences.

How does a PTEP measure quality in physical therapist CE? Quality can be defined as a distinctive or essential characteristic or attribute, character with respect to grade of excellence, a personality or character trait, or accomplishment or attainment.7 In physical therapist education, the Commission on Accreditation of Physical Therapy Education (CAPTE) accredits programs that comply with standards that demand demonstration of quality and continuous improvement.2 Many evaluative criteria pertain to the CE component of the curriculum, including references to qualified faculty, environments conducive to learning, protection of rights and safety, sufficient resources to support the curriculum, and assessment of the CE program. Taken collectively, CAPTE standards may guide academic and clinical educators toward factors influencing high-quality CE, yet they do not identify evidence-based definitions or measures of quality.

Gwyer et al documented a historical overview of CE as it pertains to physical therapist education in the United States in 2003,8 framing their review by the categories of CE sites, structure, assessment, and faculty (Fig. 1). Although this summary of the historical roots of the profession documented the positive role CE research has played in the advancement of physical therapist education, it lacked critical appraisal of the methodological rigor of the literature. Two systematic reviews9,10 of physical therapist CE literature were conducted in the mid-2000s; however, both had a narrow focus on CE models. Baldry Currens10 reviewed the advantages and disadvantages of the 2:1 student-to-CI model, whereas Lekkas et al9 took a broader approach in considering the breadth of CE models in the health care literature. Both reviews concluded that there was insufficient evidence and methodological rigor to support or favor one particular CE model. What is lacking from the literature to date is a broad and critically appraised review of the breadth of CE research. The purposes of this systematic review, therefore, were: (1) to examine how the construct of quality is defined in CE literature and (2) to determine the methodological rigor of the available evidence on quality in physical therapist CE.

Figure 1.

Key constructs of quality in clinical education identified in this review. Dotted border indicates the construct was identified by Gwyer et al8 as a key foundational component in clinical education.

Method

Identification and Selection of Studies

Relevant search terms were developed and agreed upon by the researchers and sorted according to the following categories: CE pedagogy, CE models, CE roles, and descriptors of quality. CINAHL subject headings were added to the search term categories where appropriate to enhance search outcomes. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)11 Statement guided the selection of the search-identified studies (Fig. 2).

Figure 2.

Flow diagram of systematic review.

Literature Search

The APTA's Open Door Portal to Evidence-Based Practice12 was used for the computer-based search on October 11, 2011, and repeated on July 19, 2012. The databases included MEDLINE, CINAHL, SPORTDiscus, Cochrane Database of Systematic Reviews, Cochrane Central Register of Controlled Trials, Database of Abstracts of Reviews of Effects, NHS Economic Evaluation Database, Health Technology Assessments, and Cochrane Methodology Register. The search terms and search strategy are presented in Table 1. A hand search and content expert consultation complemented the database search. All literature citations and full-text files were collected and organized using the Zotero research management tool13 (Zotero, Fairfax, Virginia).

Table 1.

Search Terms and Search Strategya

Inclusion and Exclusion of Studies

Two researchers (P.J.J., C.A.M.) reviewed each of the identified studies for inclusion. A third researcher (S.P.G.) was consulted as a tie-breaker. Studies of any research design were included if the full text was available in English, they originated from peer-reviewed sources, and they addressed quality in physical therapist CE pedagogy as it related to the roles or models of physical therapist CE. Studies were excluded if they:

  • Did not address, describe, or measure the construct of quality as it related to CE pedagogy and roles or models.

  • Addressed only academic and didactic influences on CE.

  • Were exclusively from disciplines other than physical therapy.

  • Were dissertations, abstracts, or conference proceedings.

The role of the DCE was excluded from the search to focus the review on roles beyond the immediate control of the PTEP.

Data Analysis

Descriptive data were extracted for all articles. The data included the publication citation and date of publication, the country of research, the aim or purpose of the study, the description of participants, the summary of the intervention and methods, and the summary of the outcomes, results, or conclusions. All studies were classified according to research design and methods. Research designs were categorized as either quantitative or qualitative. Level of evidence classification was assigned hierarchically for quantitative study designs according to the Oxford Centre for Evidence-Based Medicine Levels of Evidence.14 The system was selected due to its wide acceptance in the classification of research rigor, including “expert opinion” as a level of evidence.11 Qualitative study designs were classified according to the categories of the McMaster University Occupational Therapy Evidence-Based Practice Research Group. This group developed a valid and reliable protocol to critically review quantitative research articles.15 Both classification tools were used in a previous CE systematic review.9

The methodological rigor of each article was critically appraised using the binary system developed by Lekkas et al9 based on the McMaster appraisal tools.15 Studies were appraised and scored 1 point for adequate methodological rigor and 0 points for inadequate rigor for each of 14 areas, for a potential critical appraisal score (CAS) between 0 and 14. Two reviewers (S.M.G., C.A.M.) independently completed the initial data extraction and critical appraisal with a data extraction form developed using Google Docs (Mountain View, California).16 The 2 reviewers confirmed the abstracted results.

Data pooling for the purposes of meta-analysis was deemed inappropriate given the heterogeneous research designs and methods. Therefore, data were grouped, synthesized, and analyzed descriptively according to CE themes using a qualitative content analysis.17 Interobserver agreement of critical appraisal was assessed using kappa statistics.18 An adjusted kappa also was calculated because an unadjusted kappa could provide misleading results if the data were unbalanced, despite high observed agreement.19
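The agreement analysis described above can be sketched in code. The snippet below computes an unadjusted Cohen's kappa and, as one plausible reading of the "adjusted kappa," the prevalence- and bias-adjusted kappa (PABAK); the article does not specify which adjustment was used, so PABAK is an assumption, and the reviewer decisions shown are illustrative rather than the study's data.

```python
# Hedged sketch: Cohen's kappa vs. an adjusted kappa (PABAK, assumed form)
# for 2 reviewers making binary include/exclude screening decisions.

def cohen_kappa(a, b):
    """Unadjusted Cohen's kappa for two raters' binary (0/1) labels."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n     # observed agreement
    p_yes = (sum(a) / n) * (sum(b) / n)            # chance both rate 1
    p_no = (1 - sum(a) / n) * (1 - sum(b) / n)     # chance both rate 0
    pe = p_yes + p_no                              # chance-expected agreement
    return (po - pe) / (1 - pe)

def pabak(a, b):
    """Prevalence- and bias-adjusted kappa: 2*Po - 1 for binary ratings.
    More stable than Cohen's kappa when one category dominates."""
    po = sum(x == y for x, y in zip(a, b)) / len(a)
    return 2 * po - 1

# Illustrative, unbalanced screening decisions (1 = include, 0 = exclude)
r1 = [1, 0, 0, 0, 0, 0, 0, 0, 0, 1]
r2 = [1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
print(round(cohen_kappa(r1, r2), 3))  # 0.737
print(round(pabak(r1, r2), 3))        # 0.8
```

With highly unbalanced data (mostly exclusions), the adjusted value can differ noticeably from the unadjusted kappa even at the same observed agreement, which is the concern the authors cite.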

Two reviewers (C.A.M., S.M.G.) reached consensus on the level of evidence considering the methodological design of articles, assigned CAS, and descriptive qualitative summary of the thematic outcomes. The strength of available evidence on the quality of physical therapist CE was assessed in summary for all articles.

Results

Selection of Studies

Initially, the search yielded 202 citations. One hundred eleven articles were excluded after title and abstract review. The remaining 91 articles received a full-text review, yielding 54 articles for inclusion. The articles originated from 7 countries: Australia (8), Canada (8), Finland (1), Sweden (1), United Kingdom (6), United States (29), and Zimbabwe (1). Four studies were published in the 1980s, 12 in the 1990s, 26 from 2000 to 2009, and 12 from 2010 to 2012.

Interrater Reliability

Review of titles and abstracts produced an adjusted interobserver agreement of kappa=.989 (CI=.925–.989). Full-text articles were reviewed in the same manner and produced an interobserver agreement of kappa=.703 (CI=.510–.825), which indicates substantial agreement.18 Disagreements occurred primarily because the 2 reviewers interpreted quality issues differently.

Design and Rigor

Thirty-seven (68.5%) of the 54 articles were quantitative studies. The strength of the evidence as indicated by study design was variable, with most studies using designs at a low level of evidence. Only 1 level 1B randomized controlled trial (RCT) (1.8%) and 6 level 2 studies (11.1%) were identified, including 2 level 2A systematic reviews of cohort studies. The majority (n=22 [40.7%]) of the quantitative evidence was level 4 descriptive research with outcome measures. Two level 3 case-control studies (3.7%) and 6 level 5 expert reviews (11.1%) were included. The CAS for the quantitative studies ranged from 4 to 14.

Seventeen qualitative studies (31.5%) were included, the majority being descriptive designs (n=11 [20.3%]). Three grounded theory studies (5.5%) were identified, along with 1 case study (1.8%), 1 ethnographic study (1.8%), and 1 phenomenological study (1.8%). The CAS for the qualitative studies ranged from 3 to 14. Refer to eAppendix 1 for the CASs.

The 14-point CAS for methodological rigor was divided into tertiles20, creating an ordered, 3-part risk-of-bias scale for the included studies, with each part containing a third of the studies identified in the systematic review. The top third (67%–100%) was identified qualitatively as “high quality, low risk of bias”; the middle third (34%–66%), as “moderate quality, moderate risk of bias”; and the lower third (0%–33%), as “low quality, high risk of bias.” The scores associated with each tertile were: top, 13 to 14 points; middle, 10 to 12 points; and lowest, 0 to 9 points. We used a distribution-based division because there is no current standard for risk-of-bias scoring in studies of educational interventions or CE. We believe the distribution-based system most accurately defines the studies that truly are at low or high risk of bias. Collectively, these articles may be considered “bench research” or “first principles” on the topic of quality in physical therapist CE.14,21,22
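The scoring and classification scheme can be sketched as follows; this is a minimal illustration assuming each of the 14 appraisal areas is recorded as a 0/1 judgment, with the cut points (13–14, 10–12, 0–9) taken from the text.

```python
# Hedged sketch of the binary CAS scoring and tertile-based
# risk-of-bias labels described in the review.

def cas(items):
    """Critical appraisal score: sum of 14 binary (0/1) rigor judgments."""
    if len(items) != 14 or any(i not in (0, 1) for i in items):
        raise ValueError("expected 14 binary item scores")
    return sum(items)

def risk_of_bias(score):
    """Map a CAS (0-14) to the review's tertile-based quality labels."""
    if not 0 <= score <= 14:
        raise ValueError("CAS must be between 0 and 14")
    if score >= 13:
        return "high quality, low risk of bias"
    if score >= 10:
        return "moderate quality, moderate risk of bias"
    return "low quality, high risk of bias"

# Illustrative appraisal: 11 of 14 areas judged methodologically adequate
example = [1] * 11 + [0] * 3
print(risk_of_bias(cas(example)))  # moderate quality, moderate risk of bias
```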

Summary Findings

Gwyer and colleagues' presentation of the structure and format of CE8 was used as a framework for organizing relational themes. Five primary themes and 14 subthemes emerged from the review. The primary themes were: (1) CE framework, (2) CE sites, (3) structure of CE, (4) assessment in CE, and (5) CE faculty. Refer to Figure 1 for the theme and subtheme pairings and to Table 2 for study designs, thematic categorization, and CASs. eAppendix 2 presents the overall summary of findings.

Table 2.

Study Design, Themes, and Critical Appraisal Score Sorted by Theme and Critical Appraisal Score (Using Oxford Centre for Evidence-Based Medicine and McMaster University Occupational Therapy Evidence-Based Practice Research Group by Law et al15 from Lekkas et al9)

CE Framework

Two articles (3.7%) were categorized within the CE framework theme. The study designs were of low-level evidence. The CASs ranged from 3 to 4, indicating low quality, high risk of bias. Clinical education in professional PTEPs has evolved over the past century. Gwyer et al8 provided a level 5 expert review on historical events that shaped the development of physical therapist CE, detailing the growth of various aspects of physical therapist CE. Higgs'23 level 5 expert opinion presented a valuable paradigm for organizing a comprehensive analysis of CE by applying systems theory to CE programs.

CE Sites: Interprofessional, Practice and Productivity

Seven articles (13.0%) were categorized within the CE sites theme. The majority of study designs were low level. The CASs ranged from 5 to 14, with only 1 study emerging as high quality, low risk of bias. Four of the 7 studies investigated interprofessional or intraprofessional development as a result of training programs.24–27 The results revealed that collaborative training among health professionals led to increased knowledge about other professional roles and facilitated the development of collaborative working relationships. One study assessed physical therapist practice, including the types of patients seen and the interventions provided by students during clinical placements.28 The results showed that patients with musculoskeletal conditions (47%) represented the majority of those treated and that exercise (57%) was the primary intervention provided.

Productivity was used as an outcome measure in 2 studies. Dupont et al29 found a significant increase in the number of patients treated and the direct patient care provided during clinical placements for students in the Canadian health system. The number of patients seen by the student-CI team increased the most during second and third clinical placements (32%), and time in direct patient care increased the most during third and fourth rotations (36%). A 4% increase in CIs' workloads was calculated overall. Ladyshewsky30 found that overall productivity increased by 47% and direct patient care by 106% when using a 2:1 collaborative model. Clinical instructors spent the same amount of time in administrative tasks with students compared with baseline; however, students' productivity allowed CIs to perform more hours of didactic teaching and supervision.

Structure of CE: International, Models, Sequencing, Standards and Trends

Twenty-four articles (44.4%) were categorized within the structure of CE theme. Four studies were high level in design, including 1 RCT. The CASs ranged from 4 to 14, with 10 studies emerging as high quality, low risk of bias. Subthemes included articles on international CE, models of CE, sequencing of CE within a curriculum, and CE standards or trends. Common to articles under the structure of CE theme were descriptions of CE practices, including innovative approaches, changes, variation from the norm, and exploration of current systems.

Three articles (5.5%) drew upon international clinical experiences (ICEs). Pechak31 presented an overview of ICEs in US-based PTEPs. The results revealed that 40.9% of the respondents offered ICEs, with a larger number reporting some availability of international experience for students that did not fit the definition of ICEs. Most ICE offerings were developed in the past decade and were offered in high-, upper-, or middle-income countries (Europe, Canada, and Australia). Length of rotations ranged from 6 to 8 weeks. Barriers to ICEs included faculty time, expense, and site coordination. Crawford et al32 reported on outcomes involving Canadian physical therapist student experiences over a 10-year period. The results revealed that roughly 4% of students enrolled in physical therapist education programs participated in ICEs in more than 50 different countries. Programs limited international opportunities to students without academic concerns; other program-specific limits also were presented. Both studies reported that the benefits of ICEs included broadening student perspectives on global and community health issues. Finally, Rodger et al33 presented outcomes of the collaborative efforts of 21 research-intensive universities across 12 countries. Elements of collaborative partnerships at the international level included national and local commitments to training the health care workforce, support of clinical educators by all stakeholders, and the need for innovative models to prepare the workforce in emerging markets.

Ten articles addressed varying models of CE. Two level 2A systematic reviews discussed the 2:1 model and CE models broadly. Lekkas et al9 and Baldry Currens10 concluded that there is inadequate evidence to conclusively support the use of 1 CE model over another, although lower-level and anecdotal evidence have clearly identified the advantages, disadvantages, and recommendations for each model according to various stakeholders. Both reviews encouraged further research into CE models with higher levels of methodological standardization and rigor. Kelly et al34 compared student learning on a collaborative “mock” clinical experience in a self-contained, pro bono campus clinic with that on a traditional clinical rotation. Results revealed minimal differences between the 2 models of CE on student outcomes. Seven articles sought to assess collaborative CE models, primarily the 2:1 and 4:2 models.35–41 Outcome measures varied among these studies, including stakeholder (student and CI) impressions, clinical productivity, and learning and teaching models. Barriers to using a collaborative model in CE included lack of funding, the need for CI training, and student acceptance of the model.

Five articles addressed curricular sequencing involving either student or site outcomes. Graham et al42 found sites were most productive when student-CI teams worked together for longer rotations (5 weeks/full-time, 5 days/week) compared with shorter overall lengths (1 week) or part-time schedules (1 day/week). Student performance as measured by a clinical evaluation instrument was highest for students on full-time, longer rotations. Teaching scores for CIs also were highest for full-time, longer-duration clinical experiences. Kell and Owen43 examined the effects of varying placement on student learning in CE in either the second or third year of a professional education program. Although the results were inconclusive, they suggested that student learning strategies depended on site characteristics such as the number of students per site and the number of CIs per student. In particular, increasing the student-to-educator or educator-to-student ratio may have detrimental effects during a 4-week clinical experience. Martorello44 found CCCEs preferred a shorter duration for a first rotation (X̅=7.3 weeks, SD=2.26) than for final clinical rotations (X̅=9.1 weeks, SD=2.09), which is similar to Sass and colleagues'4 findings that clinical educators preferred longer lengths (8–10 weeks) for final rotations. Weddle and Sellheim45 documented 1 program's outcomes for a curricular model that included integrated CE experiences, defined as one half day per week for multiple weeks, and multiweek (8-week) full-time experiences. Outcomes revealed students were prepared for practice, and National Physical Therapy Examination data revealed no difference in outcomes from those of students who completed the program using a different curricular design.
Finally, Watson et al,46 in a 2-parallel group, single-blinded, multicenter RCT, found students who participated in a simulated learning program for 25% of a 4-week clinical placement performed no worse than students who completed a traditional 4-week clinical immersion rotation. The results indicate CE offered in a simulated environment can successfully replace a portion of clinical time without compromising student learning outcomes.

Four studies emerged in relation to development of CE standards. Wetherbee et al3 and Sass et al4 reported on standardization of CE formats and lengths, breadth and depth of exposure to practice settings and patient types, competencies for entry-level status, and the need for mechanisms to credential clinical sites; however, no final agreement on best practice was identified. Strohschein,47 on the other hand, reported that CE is a process that is guided by 7 needs involving the profession, the site, and clinical educators. He outlined 10 models that share commonalities related to the process of CE, roles established and relationships that develop, and the collaborative responsibility and development of nontechnical standards, such as professional behaviors. Finally, Weddle and Sellheim48 reported on a curricular model grounded in educational theory that includes integrated CE experiences. Outcomes reinforced student motivation to learn and readiness to prepare for clinical practice.

Lastly, 2 articles within the structure of CE theme documented trends occurring within CE. Baldry Currens and Bithell49 found that being a CI is not a primary role for physical therapists, that CE is not a primary focus within physical therapist practice, and that standards for CE sites (CESs) or PTEPs to determine clinical capacity for students do not exist. Scully and Shepard50 reported that organizational and human factors influenced the type, quality, and quantity of student learning. Organizational factors included ground rules from the PTEP level, the health care institution, and the physical therapy department itself. Human factors included CI and student perspectives and teaching tools, such as coaching, as used by CIs.

Assessment in CE: Clinical Instruction, Site and Student

Ten articles (18.5%) were categorized within the assessment in CE theme. The majority of studies were high to moderate level in design. The CASs ranged from 4 to 14, with 5 studies emerging as high quality, low risk of bias. Common to the articles were the descriptions of practices and expectations or strategies targeting performance improvement of the CI, the site, or the student. These studies sought to promote best practice in CE. Although the subject areas varied, the studies relied on feedback from both students and CIs to define or advance various quality initiatives.

Five studies examined the effectiveness of activities adjunctive to CE experiences that added overall value to learning. First, critical reflection, as a program of learning, was found to contribute to learning for both students and CIs by increasing a sense of validation, increasing empowerment, and broadening perspectives.51–53 Second, Low54 reported that utilizing a Web-based program that included reflective journaling and discussion boards provided a moderately positive learning experience that facilitated peer-to-peer and student-to-professor communications. Finally, Wright's55 study illustrated the value of student feedback to the CES in the assessment and improvement of the CE experience and CI performance.

Two studies measured CIs' perspectives relative to entry-level practice. Jette et al56 explored the CIs' perception of student behaviors that comprise entry-level performance. They concluded with a model of decision making that involves not only the assessment of specific performance measures but also a subjective synthesis or “gut feeling” that integrates all observations in concluding whether the student has achieved entry level, defined by them as “mentored independence.” Hayes et al,57 on the other hand, examined clinical performance behaviors that led to unsafe or ineffective practice. This study brought attention to affective behaviors (poor communication, unprofessional behaviors) that were less likely to be addressed by CIs.

Finally, 3 studies examined outcomes of student performance within clinical practice. Solomon58 reported learning contracts (LCs) during CE were useful to focus student attention on internal strengths and weaknesses and in the development of objectives for a clinical experience. However, development of the LC was time-consuming, and clinicians perceived a decreased flexibility of caseload as a result of implementation. Housel and Gandy59 and Vendrely and Carter60 assessed the outcomes of training programs on rating of student performance in a clinical setting, finding minimally significant differences on student ratings of safety behaviors60 and overall student improvement from midterm to final ratings59 between noncredentialed and credentialed CIs.

CE Faculty: Demographics and Characteristics and CI Education Needs

Eleven articles (20.3%) were categorized within the CE faculty theme. The majority of studies were moderate to low level in design. The CASs ranged from 7 to 14, with 3 studies emerging as high quality, low risk of bias. Commonly described CIs were typically female, held a bachelor of science degree, had between 6 and 8 years of clinical experience, had been a CI for 5 years, and instructed 1 to 4 students every 2 years.61–63 Trends for effective credentialed CIs showed they set clear goals for students and provided timely and thorough orientation.64 There was a negative correlation between credentialed and noncredentialed CIs for years of experience as a physical therapist and years as a CI.64 Positive teaching behaviors of CIs included using a line of questioning and coaching throughout a clinical experience.63,65,66 Hindering behaviors included intimidating questioning and correcting students in front of patients. Exemplary CIs were characterized as physical therapists who sought out continuing education about teaching and learning, involved themselves in the teaching and learning process, participated in reflective practice, encouraged student participation in the learning process, and provided supervision congruent with the level of the learner.67–69 Morren et al70 found no association between CI characteristics and student assessment of the overall clinical experience.

Only 1 article was categorized within the CI education needs subtheme. Recker-Hughes et al71 reported that CIs do not believe professional development activities support their clinical teaching roles and that they desire more opportunities for continuing professional development and support from PTEPs.

Discussion

The purposes of this systematic review were: (1) to examine how the construct of quality is defined in CE literature and (2) to determine the methodological rigor of the evidence on quality in physical therapist CE. Clinical education research is particularly important because PTEPs are accountable for demonstrating quality for accreditation. Congruent with the findings of previous CE systematic reviews,9,10 the foci, methods, measures, outcomes, and rigor of CE research were variable. The volume and variability of educational research methods in the area of physical therapist CE may have prevented comprehensive and rigorous assessment of the literature prior to this review.

Reliable data are needed to support educational practice and policies, especially as the demand for CE continues to expand amid limited resources. Studies need to be rigorously assessed before their outcomes are operationalized. The results of this systematic review reveal that the methodological rigor of studies on quality in physical therapist CE varies, regardless of study design. The results of the lowest-tertile studies, assessed to be of low rigor and high risk of bias (CAS ≤9), should be used with caution. The results of the highest-tertile studies, assessed to be of high rigor and low risk of bias (CAS 13–14), could be used to generate benchmarks for best practice in physical therapist CE.
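The tertile split described above can be sketched in code. This is a minimal illustration only, not the authors' actual procedure; the CAS values and the rank-based cut points below are hypothetical (the review reported cut points of CAS ≤9 for the lowest tertile and CAS 13–14 for the highest).

```python
# Hedged sketch: labeling studies by critical appraisal score (CAS) tertile.
# The scores below are hypothetical, not the review's data.

def tertile_label(score, low_cut, high_cut):
    """Return a rigor/risk-of-bias band for one study's CAS."""
    if score <= low_cut:
        return "low rigor / high risk of bias"
    if score >= high_cut:
        return "high rigor / low risk of bias"
    return "moderate rigor"

cas_scores = [7, 8, 9, 10, 10, 11, 12, 12, 13, 14]  # hypothetical CASs
ranked = sorted(cas_scores)
n = len(ranked)
low_cut = ranked[n // 3 - 1]    # highest score in the lowest tertile
high_cut = ranked[2 * n // 3]   # lowest score in the highest tertile

labels = {s: tertile_label(s, low_cut, high_cut) for s in set(cas_scores)}
```

With these hypothetical scores, the split yields cut points of 9 and 12, so a study scoring 8 falls into the low-rigor band and one scoring 13 into the high-rigor band; only the latter group would inform benchmarks for best practice.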

Our review found inconclusive evidence about what constitutes quality or best practice for physical therapist CE, yet it identified research in 5 key themes of CE that, when aggregated, could characterize quality. Many individual studies reported outcomes on a variety of components related to CE; however, heterogeneity of methods and measures prevented meta-analysis. We therefore present a qualitative descriptive summary that highlights CE as a multidimensional, complex program.

Physical therapist CE is an intricate process whose overall intent is to provide a means for students to reach entry-level clinical competence in real-time clinical practice. This systematic review showed that CE is affected by various stakeholders, including PTEPs, CESs, CIs, and students. Clinical education programs are offered both in the PTEP's home country and internationally; international clinical experiences are a small but growing part of CE. The sequencing of CE within PTEPs is variable. Curricula include integration of CE as part of clinical science courses, use of simulated experiences in place of traditional placements, and traditional full-time placements of variable lengths. No sequence of CE delivery was shown to be superior to another.

Variability also existed in the structure of CE at the site level. The evidence identified in this review and in other reviews9,10 does not support or favor one particular model of CE over others. Clinical instructor descriptors were identified, and some comparisons aggregated across studies reveal similarities in sex, academic background, years of clinical experience, and years as a CI; however, no comparisons of descriptive CI characteristics with CE outcomes were found. Some data are emerging about the benefits of CI training; however, the results in this review were inconclusive. Clinical instruction at the site level is not viewed as a priority within clinical practice, and professional development opportunities are needed to advance the culture of CE at both the site and the instructor level. Evaluation of student experiences at the site level was identified in one study and was found to be of benefit. Two studies reported on the importance of assessing student clinical performance across cognitive, skill, and behavioral dimensions, an approach reflected in the development and current use of the APTA Physical Therapist Clinical Performance Instrument (PT CPI) assessment tool.72

These findings are novel because this is the first time the literature on CE in professional PTEPs has been systematically compiled and critically appraised to define and summarize the quality themes and the strength of evidence about quality in CE. Often, individual decision makers are left to draw their own connections among the available data, and all too often not all of the pieces are addressed. This situation can leave them with incomplete information and faulty recommendations for policy.73 Our systematic review compiles, critically appraises, and organizes this evidence. The summarized results lead us to believe there is much work to be done to build the body of evidence in physical therapist CE.

The question remains, however: are we asking the right questions to generate the research needed to move CE forward in a doctoring profession? The current model of physical therapist CE has been called at least vulnerable and at worst indefensible by some leaders in the profession.74 Some clinicians, academicians, and students likewise believe that CE is broken; however, what conclusive data exist that it is broken? On one hand, the findings of this review support the notion of a problematic system, evidenced by a paucity of CE research at high levels of methodological rigor. The breadth and heterogeneity of research methods, measures, and conclusions do not bring us closer to defining quality or best practice for physical therapist CE. At the same time, recent calls for homogeneity and uniformity in CE cannot be defended as evidence based given the findings of this review. For example, the 1-year internship, self-contained, health systems-based, clinician-paid, and required residency models have gained attention in recent years, yet these designs lack their own evidence base according to our literature searches. The current heterogeneity of CE may leave the profession vulnerable and indefensible; however, assimilating to homogeneous models and methods of CE may not be any more defensible unless or until research can better define and measure quality and determine which models of best practice should become standardized.

Although CAPTE defines a set of minimum educational standards, it is incumbent upon academic and clinical educators to design and conduct educational research oriented toward defining and measuring quality and best practice in CE. To this end, we propose the development of a national CE research agenda. Whether developed independently of the existing Education Research Questions in Ranked Priority Order75 or as a subagenda of that consensus document, defining CE best practice is a daunting goal that will require well-coordinated and intentional effort. Such an agenda could define the construct of quality for CE and direct the development and validation of tools and methods to better measure its components. Future systematic reviews could assess each of the CE themes identified in this systematic review separately. Unfortunately, although CE is seen as a cornerstone of the viability and growth of physical therapy as a doctoring profession, the research supporting CE is not always prioritized accordingly. Efforts to define and measure quality in CE face some of the same obstacles as other educational research in securing the time and attention, creative and intellectual investment, and grants and financial support commensurate with their presumed importance to the field; this challenge especially affects those who hold the role of DCE, given the disparate responsibilities of the position.76

Apart from the variable methods and conclusions of the reviewed literature, this review has its own inherent limitations. First, unlike other CE systematic reviews, our search excluded studies from professions other than physical therapy to keep the review manageable; however, this approach may have led to publication bias. The CE literature of other health sciences may contain findings that could help our profession define and measure CE quality; such a review is recommended as a future study. Similarly, APTA's Open Door search engine does not include all databases available to catalog physical therapist education research, although it did capture the most likely and relevant databases and journals. Second, use of McMaster University's critical appraisal framework,15 which was applied quantitatively by Lekkas et al,9 may have been subject to outcome reporting bias.77 Although the critical appraisal tool was used in a previous systematic review,9 the scale has not been validated in the literature. We therefore agreed to use a tertile distribution to assess rigor and risk of bias rather than consensus or an arbitrarily selected cut point. Next, although we categorized each article into a single theme based on the primary goal of the study, some study outcomes may actually reflect multiple themes. Although the CASs would have remained the same regardless of theme placement, the qualitative descriptive summary might have expanded. Finally, although this study excluded the roles of the DCE because we sought to examine the variables of CE quality beyond the immediate control of a PTEP, inclusion of this role may have added to the rich discussion of the design, implementation, and assessment of a CE program and warrants further study.

Conclusion

The methodological rigor of the available evidence is not high enough to draw definitive conclusions about how quality in physical therapist CE programs should be defined or how it can be measured. Like previous reviews of CE, this study uncovered more questions than answers. This systematic review offers a summary of the broad and variable literature that addresses some facets of quality in CE and provides a starting point for identifying gaps in the literature. A research agenda for defining quality in CE would be highly beneficial for directing methodologically rigorous research toward CE best practice for the profession.

Footnotes

  • Dr McCallum, Dr Jacobson, Dr Gallivan, and Dr Giuffre provided concept/idea/research design. All authors provided writing and data collection. Dr McCallum, Dr Mosher, Dr Jacobson, and Dr Giuffre provided data analysis. Dr McCallum, Dr Mosher, and Dr Jacobson provided project management. Dr Gallivan provided institutional liaisons. Dr McCallum and Dr Gallivan provided consultation (including review of manuscript before submission).

  • Received October 9, 2012.
  • Accepted April 25, 2013.
  • © 2013 American Physical Therapy Association

References

  1. Commission on Accreditation in Physical Therapy Education. 2011–2012 fact sheet: physical therapist education programs. Available at: http://www.capteonline.org/uploadedFiles/CAPTEorg/About_CAPTE/Resources/Aggregate_Program_Data/AggregateProgramData_PTPrograms.pdf. Accessed June 29, 2012.
  2. Commission on Accreditation in Physical Therapy Education, American Physical Therapy Association. Evaluative criteria for accreditation of education programs for the preparation of physical therapists. Available at: http://www.capteonline.org/uploadedFiles/CAPTEorg/About_CAPTE/Resources/Accreditation_Handbook/EvaluativeCriteria_PT.pdf. Updated March 1, 2013. Accessed September 25, 2012.
  3. Wetherbee E, Peatman N, Kenney D, et al. Standards for clinical education: a qualitative study. J Phys Ther Educ. 2010;24:35–43.
  4. Sass K, Frank L, Thiele A, et al. Physical therapy clinical educators' perspectives on students achieving entry-level clinical performance. J Phys Ther Educ. 2011;25:46–59.
  5. American Physical Therapy Association. Guidelines and self-assessments for clinical education. Available at: http://www.apta.org/Educators/Clinical/SiteDevelopment/.
  6. American Physical Therapy Association. Physical therapist clinical education principles. Available at: http://www.apta.org/PTClinicalEducationPrinciples/. Accessed September 25, 2012.
  7. Merriam-Webster Dictionary. Available at: http://www.merriam-webster.com/dictionary/quality. Accessed September 25, 2012.
  8. Gwyer J, Odom C, Gandy J. History of clinical education in physical therapy in the United States. J Phys Ther Educ. 2003;17:34–43.
  9. Lekkas P, Larsen T, Kumar S, et al. No model of clinical education for physiotherapy students is superior to another: a systematic review. Aust J Physiother. 2007;53:19–28.
  10. Baldry Currens J. The 2:1 clinical placement model: review. Physiotherapy. 2003;89:540–554.
  11. Moher D, Liberati A, Tetzlaff J, Altman D. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7).
  12. American Physical Therapy Association. Open Door: APTA's portal to evidence-based practice. Available at: http://www.apta.org/OpenDoor/. Accessed September 25, 2012.
  13. Roy Rosenzweig Center for History and New Media. Zotero. Available at: http://zotero.org. Accessed July 12, 2012.
  14. Oxford Centre for Evidence-Based Medicine. Levels of evidence. Available at: http://www.cebm.net/index.aspx?o=1025. Accessed September 25, 2012.
  15. Law M, Stewart D, Pollock N, et al. Occupational Therapy Evidence-Based Practice Research Group. Available at: http://www.musallamusf.com/resources/Qualitative-Lit-Analysis-pdf.pdf. Accessed July 11, 2012.
  16. Google Docs. Available at: https://accounts.google.com/ServiceLogin?service=writely&passive=1209600&continue=http://docs.google.com/%23&followup=http://docs.google.com/&ltmpl=homepage. Accessed July 25, 2012.
  17. Miles M, Huberman M. Qualitative Analysis: An Expanded Source Book. 2nd ed. Thousand Oaks, CA: Sage Publications; 1994.
  18. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159–174.
  19. Portney L, Watkins M. Foundations of Clinical Research: Applications to Practice. 2nd ed. Upper Saddle River, NJ: Prentice Hall; 2000.
  20. Wiktionary. Available at: http://en.wiktionary.org/wiki/tertile. Accessed September 25, 2012.
  21. Jewell D. What is evidence? In: Guide to Evidence-Based Physical Therapist Practice. 2nd ed. Sudbury, MA: Jones & Bartlett Learning; 2011:17–31.
  22. Sackett D, Straus S, Richardson W. Evidence-Based Medicine: How to Practice and Teach EBM. 2nd ed. Edinburgh, Scotland: Churchill Livingstone; 2000.
  23. Higgs J. Managing clinical education: the programme. Physiotherapy. 1993;79:239–246.
  24. Dubouloz C, Savard J, Burnett D, Guitard P. An interprofessional rehabilitation university clinic in primary health care: a collaborative learning model for physical therapist students in a clinical placement. J Phys Ther Educ. 2010;24:19–24.
  25. Hallin K, Kiessling A, Waldner A, Henriksson P. Active interprofessional education in a patient based setting increases perceived collaborative and professional competence. Med Teach. 2009;31:151–157.
  26. Jelley W, Larocque N, Patterson S. Intradisciplinary clinical education for physiotherapists and physiotherapist assistants: a pilot study. Physiother Can. 2010;62:75–80.
  27. Mostrom E, Ribesky C, Klukos M. Collaboration in clinical education: use of a 2:1 student physical therapist : student physical therapist assistant model. Phys Ther Case Rep. 1999;2:45–57.
  28. Wells PA, Lessard E. Survey of student clinical practice: implications for educational programs. Phys Ther. 1986;66:551–554.
  29. Dupont L, Gauthier-Gagnon C, Roy R, Lamoureux M. Group supervision and productivity: from myth to reality. J Phys Ther Educ. 1997;11:31–37.
  30. Ladyshewsky R. Enhancing service productivity in acute care inpatient settings using a collaborative clinical education model. Phys Ther. 1995;75:503–510.
  31. Pechak C. Survey of international clinical education in physical therapist education. J Phys Ther Educ. 2012;26:69–77.
  32. Crawford E, Biggar J, Leggett A, et al. Examining international clinical internships for Canadian physical therapy students from 1997 to 2007. Physiother Can. 2010;62:261–273.
  33. Rodger S, Webb G, Devitt L, et al. Clinical education and practice placements in the allied health professions: an international perspective. J Allied Health. 2008;37:53–62.
  34. Kelly D, Brown D, Perritt L, Gardner D. A descriptive study comparing achievement of clinical education objectives and clinical performance between students participating in traditional and mock clinics. J Phys Ther Educ. 1996;10:26–31.
  35. Baldry Currens J, Bithell CP. The 2:1 clinical placement model: perceptions of clinical educators and students. Physiotherapy. 2003;89:204–218.
  36. Ladyshewsky RK. Clinical teaching and the 2:1 student-to-clinical-instructor ratio. J Phys Ther Educ. 1993;7:31–35.
  37. Ladyshewsky RK, Barrie SC, Drake VM. A comparison of productivity and learning outcome in individual and cooperative physical therapy clinical education models. Phys Ther. 1998;78:1288–1301.
  38. Miller A, Pace T, Brooks D, Mori B. Physiotherapy internship: an alternative collaborative learning model. Physiother Can. 2006;58:157–166.
  39. Moore A, Morris J, Crouch V, Martin M. Evaluation of physiotherapy clinical educational models: comparing 1:1, 2:1 and 3:1 placements. Physiotherapy. 2003;89:489–501.
  40. Stiller K, Lynch E, Phillips AC, Lambert P. Clinical education of physiotherapy students in Australia: perceptions of current models. Aust J Physiother. 2004;50:243–247.
  41. Triggs Nemshick M, Shepard KF. Physical therapy clinical education in a 2:1 student-instructor education model [erratum in Phys Ther. 1996;76:1261]. Phys Ther. 1996;76:968–981.
  42. Graham C, Catlin P, Morgan J, Martin E. Comparison of 1-day-per-week, 1-week, and 5-week clinical education experiences. J Phys Ther Educ. 1991;5:18–23.
  43. Kell C, Owen G. Approaches to learning on placement: the students' perspective. Physiother Res Int. 2009;14:105–115.
  44. Martorello L. The optimal length of clinical internship experiences for entry-level physical therapist students as perceived by center coordinators of clinical education: a pilot study. J Phys Ther Educ. 2006;20:56–58.
  45. Weddle M, Sellheim D. An integrative curriculum model preparing physical therapists for Vision 2020 practice. J Phys Ther Educ. 2009;23:12–21.
  46. Watson K, Wright A, Morris N, et al. Can simulation replace part of clinical time? Two parallel randomised controlled trials. Med Educ. 2012;46:657–667.
  47. Strohschein J, Hagler P, May L. Assessing the need for change in clinical education practices. Phys Ther. 2002;82:160–172.
  48. Weddle M, Sellheim D. Linking the classroom and the clinic: a model of integrated clinical education for first-year physical therapist students. J Phys Ther Educ. 2011;25:68–80.
  49. Baldry Currens J, Bithell C. Clinical education: listening to different perspectives. Physiotherapy. 2000;86:645–653.
  50. Scully RM, Shepard KF. Clinical teaching in physical therapy education: an ethnographic study. Phys Ther. 1983;63:349–358.
  51. Delany C, Watkin D. A study of critical reflection in health professional education: "learning where others are coming from." Adv Health Sci Educ Theory Pract. 2009;14:411–429.
  52. Healey W. Physical therapist student approaches to learning during clinical education experiences: a qualitative study. J Phys Ther Educ. 2008;22:49–58.
  53. Sellars J, Clouder L. Impact of the accreditation of clinical educators scheme: reflections from one higher education institution. Physiotherapy. 2011;97:339–344.
  54. Low S. Supporting student learning during physical therapist student internships using online technology. J Phys Ther Educ. 2008;22:75–82.
  55. Wright B. Evaluation of a clinical education program in physical therapy. Clin Manage Phys Ther. 1984;4:46–47.
  56. Jette DU, Bertoni A, Coots R, et al. Clinical instructors' perceptions of behaviors that comprise entry-level clinical performance in physical therapist students: a qualitative study. Phys Ther. 2007;87:833–843.
  57. Hayes KW, Huber G, Rogers J, Sanders B. Behaviors that cause clinical instructors to question the clinical competence of physical therapist students. Phys Ther. 1999;79:653–671.
  58. Solomon P. Learning contracts in clinical education: evaluation by clinical supervisors. Med Teach. 1992;14:205–210.
  59. Housel N, Gandy J. Clinical instructor credentialing and its effect on student clinical performance outcomes. J Phys Ther Educ. 2008;22:43–51.
  60. Vendrely A, Carter R. The influence of training on the rating of physical therapist student performance in the clinical setting. J Allied Health. 2004;33:62–69.
  61. Buccieri K, Brown R. Evaluating the performance of the academic coordinator of clinical education in physical therapist education: determining appropriate criteria and assessors. J Phys Ther Educ. 2006;20:17–28.
  62. Giles S, Wetherbee E, Johnson S. Qualifications and credentials of clinical instructors supervising physical therapist students. J Phys Ther Educ. 2003;17:50–55.
  63. Laitinen-Väänänen S, Talvitie U, Luukka MR. Clinical supervision as an interaction between the clinical educator and the student. Physiother Theory Pract. 2007;23:95–103.
  64. Housel N, Gandy J, Edmondson D. Clinical instructor credentialing and student assessment of clinical instructor effectiveness. J Phys Ther Educ. 2010;24:26–34.
  65. Jarski RW, Kulig K, Olson RE. Allied health perceptions of effective clinical instruction. J Allied Health. 1989;18:469–478.
  66. Jarski RW, Kulig K, Olson RE. Clinical teaching in physical therapy: student and teacher perceptions. Phys Ther. 1990;70:173–178.
  67. Buccieri K, Brown R, Malta S. Evaluating the performance of the academic coordinator/director of clinical education: tools to solicit input from program directors, academic faculty, and students. J Phys Ther Educ. 2011;25:26–35.
  68. Kelly S. The exemplary clinical instructor: a qualitative case study. J Phys Ther Educ. 2007;21:63–69.
  69. M'kumbuzi V, Chinhengo T, Kaseke F. Perception of physiotherapy and occupational therapy students supervision of field attachment in Zimbabwe. Asia Pac Disabil Rehabil J. 2009;20:112–128.
  70. Morren K, Gordon S, Sawyer B. The relationship between clinical instructor characteristics and student perceptions of clinical instructor effectiveness. J Phys Ther Educ. 2008;22:52–63.
  71. Recker-Hughes C, Brooks G, Mowder-Tinney J, Pivko S. Clinical instructors' perspectives on professional development opportunities: availability, preferences, barriers, and supports. J Phys Ther Educ. 2010;24:19.
  72. Roach KE, Frost JS, Francis NJ, et al. Validation of the revised Physical Therapist Clinical Performance Instrument (PT CPI): version 2006. Phys Ther. 2012;92:416–428.
  73. Helfand M, Berg A, Flum D, et al. Patient-Centered Outcomes Research Institute draft methodology report: our questions, our decisions: standards for patient-centered outcomes research. 2012. Available at: http://www.pcori.org/assets/Methodology-Report-072312.pdf. Accessed September 25, 2012.
  74. Delitto A. We are what we do. Phys Ther. 2008;88:1219–1227.
  75. American Physical Therapy Association. Education research questions in ranked priority order. 2006. Available at: http://www.apta.org/Educators/Curriculum/APTA/ResearchQuestions/. Accessed September 25, 2012.
  76. Kaufman RR, Chevan J. The gender gap in peer reviewed publications by physical therapy faculty members: a productivity puzzle. Phys Ther. 2011;91:122–131.
  77. Millett D. Bias in systematic reviews? J Orthod. 2011;38:158–160.
Physical Therapy. Oct 2013;93(10):1298–1311. DOI: 10.2522/ptj.20120410