Those of you who know me know that my middle name should have been “research,” because I push the need for more investigators, more funding to conduct research, and more research reports. A common mantra is the need for sound theory, evidence of mechanisms that underpin interventions, and reports of efficacy or effectiveness. There is evidence that we are making progress in generating new knowledge and that some research findings have immediate relevance to clinical practice. For example, many of the articles that appear in PTJ and other scientific journals challenge the reader to define or redefine best practice. There also is sufficient research to begin to formulate clinical guidelines for some of our practice areas.
The results of these research efforts have trickled into the classroom. To adopt evidence-based practice as a framework for our curricula, we needed adequate evidence in the literature to generate discussion that leads to the choice of one intervention over another. We also needed studies that define the psychometric properties of our tools and studies that use those tools for diagnostic or outcomes assessment.
In this fall academic semester, for instance, students across the country are practicing examination, evaluation, and intervention skills and searching for evidence to justify a plan of care. They also are being introduced to a variety of tests to assist them in examination and evaluation and in assessment of outcomes. Faculty members in many academic programs have moved beyond handing students laundry lists of special tests to memorize. Instead, we have begun to guide students in the decision-making skills needed to select the test(s) most likely to change the post-test probability and improve their confidence in the physical therapist diagnosis. In a similar manner, faculty members emphasize the need to select the best outcome measure as a tool for documenting the effectiveness of the intervention. Although we have “miles to go before we sleep,” at least we now use the vocabulary of evidence-based practice and understand what has to be done to define best practice.
I am not sure why we aren’t asking similar “best practice” questions and using similar research methods to determine the most effective model for clinical education. Consider 2 of the papers that appear in this month's issue. APTA President Scott Ward reminded us that “the public expects our graduates to be prepared to skillfully manage their physical therapy care.”1(p1229) How do we demonstrate to the public that our graduates are prepared to “skillfully” manage? Is it sufficient to state that the new employee graduated from an accredited program and passed the licensure examination? Anthony Delitto, in the Thirty-Ninth Mary McMillan Lecture, asked us to “at least consider postprofessional, entry-level residencies as a clinical education model.”2(p1226) How do we engage in a discussion about the value of entry-level residencies or compare this model of clinical education to other models? What is the framework, and what are the tools?
We don’t seem to consider the need to evaluate the outcome of a clinical education experience in the same way that we have begun to quantify the outcomes of a clinical intervention. I am not dismissing the efforts to articulate the clinical performance standards,3 nor am I ignoring all of the effort to validate and standardize the Clinical Performance Instrument (CPI).4 Rather, I am arguing that we need another and very different tool that reports the bottom line.
Isn’t it important to know how many times—and with what types of patients—the student matched the history and impairments with the correct diagnosis?
Isn’t it important to know how many times—and with what types of patients—the student safely and efficiently provided the most effective intervention that led to clinically significant improvement in physical performance?
Answers to these questions require more than students’ and clinical instructors’ self-reports on a standardized form; more than graduation from an accredited program; more than performance on a licensure examination. Without information about the student's actual clinical competence, we have only one portion of the picture. Without this information, how do we know that a student actually delivers an effective plan of care and whether the patient responds? Is it possible to have an excellent clinical instructor who is not an excellent clinician, and, if so, what is the reference standard against which to compare student and clinical instructor self-report? It seems that we need a series of tools to examine the success of clinical education. Why isn’t this a research priority?
I believe that knowledge about how effectively students deliver care during a clinical education experience is essential to a meaningful conversation about the best model for clinical education. I also believe that we need to know how students perform so that we can compare their performance with that of novice and master clinicians, identify gaps in knowledge and skills, and improve the quality of care delivery. These types of studies are another source of information that should help us define best practice. Of course, my plea for performance measures associated with quality of care assumes that we have all agreed on and implemented standards for best practice. So we need one research agenda, and we need all of the researchers at the same table.
A systematic review by Choudhry et al5 serves as an example of the kind of discussion that we need to begin having in our profession. The investigators examined the relationship between the amount of physician clinical experience and the quality of care, identifying 59 papers that examined quality of care. Studies were categorized into 4 groups based on the outcome assessed: knowledge assessment; adherence to standards of care for diagnosis, screening, or prevention; adherence to standards of care for therapy; or health outcomes.
The conclusion was that physicians who practice longer might be at risk for providing lower-quality care. You can imagine the controversy raised by these findings, but at least the question could be asked! In September, Choudhry joined PTJ for a podcast conversation titled “Clinic-Level Factors Affect Quality of Care of Patients with Low Back Pain: What's the Next Step?” (www.ptjournal.org/misc/podcasts.dtl). In this discussion of a research report by Resnik et al,6 Choudhry offered insights from the larger health care arena. I urge you all to listen to it.
I am not alone in pleading for the development of meaningful clinical performance assessment. But as you read both Ward's and Delitto's addresses, ask yourself how we are going to address the challenges that they introduce. Tony, thank you for giving us a specific model to consider. Let's now determine whether it is, in fact, a better model.