Abstract
Background The Activity Measure for Post-Acute Care (AM-PAC) “6-Clicks” tools are functional measures used in acute care. No studies have examined therapists' reactions to and perceptions of implementing these measures.
Objectives The purpose of this study was to explore therapists' perceptions regarding the application and implementation of AM-PAC “6-Clicks” tools.
Design This study used a qualitative design with thematic analysis.
Methods A convenience sample of 13 physical therapists and occupational therapists participated in semistructured telephone interviews. Interviews were recorded, transcribed, and coded, after which thematic analysis was used to determine common themes.
Results Five themes were identified: (1) unclear purpose, (2) lack of confidence in scoring, (3) too simple for decision making or generalizing patient function, (4) no effect on clinical routine, and (5) potential for communicating patient function across disciplines.
Limitations Participants came from one health care system. A relatively small percentage of staff agreed to participate in this study, and additional interviews might have revealed new themes.
Conclusions As participants in this study implemented the AM-PAC “6-Clicks” tools, they considered the role of the measures, how they fit within the context of practice, and their value. They also were concerned with the accuracy and feasibility of the tools. The tools were accepted as potentially valuable to assist administrative decisions and research; however, they were not perceived as particularly useful for routine patient care. Participants lacked complete confidence in the reliability of their scoring and expressed concern that the scores might be substituted for their clinical decision making. They also felt that the tools were too simple to fully reflect patients' overall function and were not useful alone for discharge planning. Participants believed the tools had the potential to be used for communication among colleagues about patients' physical function.
Over the past 2 decades, use of standardized measures of patients' activity limitations and participation restrictions by rehabilitation professionals has become common practice. This practice in the United States can be attributed to changes within the health care environment; however, there is evidence that clinicians in other countries also are engaging in this process.1–5 In the United States, increases in demand for health care services, paired with strategies aimed at cost containment, have placed pressure on rehabilitation professionals to demonstrate their influence on improving individuals' health and participation in daily life activities.6 Standardized measures may be used to determine variation in patient care, assess the performance of individual clinicians and organizations, determine the most cost-effective setting for services, and identify potentially overused services.6 Some rehabilitation researchers have suggested that standardized rehabilitation measures that use common language, such as that used in the International Classification of Functioning, Disability and Health (ICF),7 may facilitate improved continuity of care during transition periods, assist clinicians in determining appropriate interventions, and identify those at risk for adverse outcomes.8
In 2011, as part of a broad institutional push to ensure high-quality services, Cleveland Clinic Health System implemented Activity Measure for Post-Acute Care (AM-PAC) “6-Clicks” short forms to assess patients' basic mobility and daily activity function in the acute care setting. The tools were introduced to physical therapy and occupational therapy staff at a 1-hour in-service meeting that provided them with the background, rationale, and instructions for completion of the new tools.
The AM-PAC is a validated measure based on the activity limitation domain of the ICF,7 originally designed to assess function in patients receiving post-acute care rehabilitation.9 To meet needs in the acute care setting, 2 new tools, AM-PAC “6-Clicks,” were developed using calibrated items from the original AM-PAC: one form to measure basic mobility and another to measure daily activity function.10 The tools were designed to be simple and quick to complete and to represent the types of patient functions that physical therapists and occupational therapists commonly must assess in the acute care setting. Each tool contains 6 items, such as ability to move in bed, walk short distances, feed oneself, and perform toileting. Each item is rated on a scale of 1 to 4, based on the amount of difficulty the patient has or the amount of help required to complete the functional activity. Therapists score each item by observing the patient's performance or, if they do not directly observe the patient complete the activity, by clinical judgment. The total score for each tool is the sum of its 6 item scores. In one study, intraclass correlation coefficients for interrater reliability were .85 and .78 for the basic mobility and daily activity forms, respectively.11
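The scoring arithmetic is simple enough to express in a few lines of code. The sketch below (in Python) computes a total score for a hypothetical basic mobility form; the item names paraphrase the description above rather than reproduce the official item wording, and the example assumes, as the participants' quotes later suggest, that higher ratings indicate more independence. Because each of the 6 items is rated 1 to 4, totals range from 6 to 24.

```python
# Illustrative only: item names paraphrase the text above and are not
# the official AM-PAC "6-Clicks" item wording.
BASIC_MOBILITY_ITEMS = [
    "moving_in_bed",
    "sit_to_stand",
    "bed_to_chair_transfer",
    "walking_in_room",
    "walking_short_distance",
    "climbing_3_to_5_steps",
]

def total_score(ratings: dict[str, int]) -> int:
    """Sum the 6 item ratings; each item is rated on a scale of 1 to 4."""
    if set(ratings) != set(BASIC_MOBILITY_ITEMS):
        raise ValueError("A rating is required for each of the 6 items.")
    if any(r not in (1, 2, 3, 4) for r in ratings.values()):
        raise ValueError("Each item must be rated 1, 2, 3, or 4.")
    return sum(ratings.values())  # possible totals: 6 (lowest) to 24 (highest)

# Example: a patient assumed to need a lot of help with most tasks (rating 2)
# but only a little help moving in bed (rating 3).
patient = dict.fromkeys(BASIC_MOBILITY_ITEMS, 2)
patient["moving_in_bed"] = 3
print(total_score(patient))  # 13
```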
Despite the potential value, standardized functional measures have not been commonly used in the treatment of patients in the acute care setting.12 The low use of these measures in acute care may be due to their perceived limitations. A systematic review of perceived barriers to using standardized measures in practice noted 4 themes: therapists' knowledge, education, and perceived value in completing the measures; organizational support for routine outcomes measurement; practicality of implementation (eg, time); and patient considerations (eg, relevance to practice).13
Few studies have attempted to describe and explain practitioners' perceptions of the implementation of new required processes for standardized measurement.1–4 These studies have suggested that the reactions and perceptions of rehabilitation professionals to such processes may include ambivalence and skepticism about the value of the initiative. One study suggested that standardized measures must be perceived by their users as fitting within the context in which they are used.4 If clinicians do not perceive a good fit between a measurement tool and the setting in which they practice, they may use only parts of it or selectively use it with only some patients, resulting in biased, invalid, or unusable data. Clinicians also may fail to participate at all when they perceive a lack of fit, particularly when they have no control or flexibility in how the assessment is done.1 We were unable to find studies using qualitative analyses of clinicians' perceptions related to implementation of new standardized measurement processes that included occupational therapists, and we found only one study that included physical therapists.1 The authors of the study including physical therapists noted a potential lack of rigor in their analytical methods. Physical therapists and occupational therapists are likely to be called upon to implement new standardized measurement processes with their patients. It seems important, therefore, to understand their perspectives in order to proactively address barriers, enhance measurement validity, and encourage participation. The purpose of this study was to explore physical therapists' and occupational therapists' reactions and perceptions regarding the implementation, application, and usefulness of AM-PAC “6-Clicks” in acute care settings of one large health system.
Method
Design
We used qualitative thematic analysis, as generally described by Braun and Clarke,14 to identify, summarize, and interpret therapists' perceptions related to the implementation of AM-PAC “6-Clicks” tools.
Participants
We used a convenience sample drawn from the 280 physical therapy and occupational therapy staff working in 11 acute care settings at Cleveland Clinic in January 2014. All staff were informed of the study and asked to participate. An email describing the study was sent through a distribution list that included all therapy staff. The email contained a link to an electronic demographic survey and a statement of consent that asked individuals to check a box indicating that they wished to participate in the study and were willing to be contacted by telephone. The survey comprised demographic questions and asked participants to provide a telephone number. Those who provided telephone numbers were contacted to arrange an interview at their convenience. Messages were left and follow-up calls were made to participants who did not answer. In an attempt to increase participation, a follow-up email was sent to the original email addresses approximately 4 weeks after the first contact. Thirty-four individuals completed the online demographic survey. Of these, 25 provided a telephone number. Ten physical therapists, 2 occupational therapists, and 1 occupational therapy assistant committed to an interview; 12 did not return telephone calls. By the completion of the 13 scheduled interviews, no language or ideas were being expressed that were not already present in several other transcripts. Because of this redundancy among transcripts, we conducted no additional participant recruitment. Participant demographics are described in the Table.
Table. Participant Demographic Information
Data Collection
A semistructured interview process was used to collect data. This approach allowed interviewers to explore the topic with participants by first using the main questions defined by our interview guide and then diverging from those questions to probe new ideas and explore responses in greater detail.15 The interview guide comprised 4 broad questions and some suggested follow-up questions that might be used to further the conversation (Appendix). The guiding questions were designed to elicit reflective answers that fully described participants' experiences with the implementation of AM-PAC “6-Clicks” tools. Each participant completed one semistructured telephone interview between January and March of 2014. Three investigators conducted interviews, and 2 interviewers were present during each interview. One investigator conversed on the telephone with the participant, and the other listened and took notes, commenting, for example, on tone of voice or pauses in the conversation that might inform the analysis of the transcripts. The interviews were digitally recorded using a handheld recorder. Each interview lasted approximately 30 minutes. Following each interview, the interviewer transcribed the dialogue verbatim. Transcribing the interviews facilitated familiarization with the data early in the process.
Data Analysis
We used an inductive, semantic approach with a goal of revealing themes in the data without a preconceived framework. In the first phase of analysis, as the interviews were completed and transcribed, we read the transcripts and began to assign codes. This process allowed us to become increasingly familiar with the topic and led to discussion of areas where additional probing was warranted or new questions might be explored in subsequent interviews.15 Weekly meetings were held to discuss perceptions of common and contrasting elements among the interviews and whether there were new, unique ideas or perceptions that we had not heard from previously interviewed participants. The second phase of analysis began upon completion of 13 interviews. We reread and coded each transcript, highlighting text and making notes, using our own words to label features of the participants' responses that seemed important and meaningful. These iterative discussions resulted in combining and sorting codes into potential themes. We then refined, described, and defined the themes through discussion and consensus. Once the common themes were agreed upon, quotes that exemplified them were drawn from the interviews. These quotes allowed us to further refine the descriptions of each theme. Relevant quotes were debated, discussed, and eventually agreed upon. We then reassessed the transcripts to ensure that relevant information had not been missed or misinterpreted during the thematic analysis process and that we understood the relationships between the themes. We also sought to ensure that the themes were all-inclusive and mutually exclusive. The themes were then assessed alongside the collected demographic information to determine whether themes varied based on participants' characteristics.
Methodological Rigor
To approach the interviews in a nonjudgmental manner and reduce bias,16(p121) we discussed among ourselves, prior to data collection, any preconceptions regarding the topic of implementing standardized measures. We retained all documents related to our decision-making processes and activities throughout the study. For example, during initial coding, we retained all copies of transcripts with our highlights and notes. To enhance trustworthiness and reliability, we held several meetings to discuss and compare our initial individual coding, with the goal of discovering agreements and disagreements and arriving at a consensus. Our collection of documents also included notes from those meetings to provide a record of our conceptualization and reconceptualization of the themes as they emerged from the data. We further sought to enhance methodological rigor by ensuring agreement among the researchers as to the meaning of the themes. To promote correct interpretation of ideas shared by participants, the interviewers reflected back to participants the answers they heard during the interview process.15 This approach to the interviews was used to reduce the possibility of interviewers leading the discussion in the direction of their own biases. Finally, we shared our themes with the participants and asked for their thoughts about whether the themes represented an accurate reflection of the information they had provided us.17(pp275–276)
Results
Five themes emerged that described the participants' perceptions of the application and implementation of AM-PAC “6-Clicks” in their practice setting.
Unclear Purpose
Participants felt that “the purpose of the ‘6-Clicks’ initially was not really explained” (participant 11). Nearly all responses indicated a degree of uncertainty about the actual use of the data obtained from the measures. For example, participant 5 described the data as “being helpful to someone.” Participant 12 indicated her perception that “they [management] weren't completely sure how they were going to use it,” and participant 13 noted her belief that “there are lots of things that they will be able to do with [the data] but nothing specifically.”
Participants had some ideas about the purpose of the new tools, but revealed differing perceptions. Some believed that the “primary purpose was for research” (participant 8). Participant 3 noted a similar perception: “[The tools were] something we could use to collect data on and use it down the line for research purposes.” Others thought the tools might be intended to play a role in clinical practice. These perceived clinical purposes included providing “a more objective measure to quantify a patient's function” (participant 3), “objectify[ing] why we are doing PT [physical therapy]” (participant 10), and “track[ing] the progress of the patient and also help[ing] other people who are looking at our notes to decide discharge planning” (participant 7).
Lack of Confidence in Scoring
Participants expressed a lack of confidence in choosing item scores and believed they had not received thorough instruction on the scoring criteria. As participant 7 stated, “They never gave us a full [detailed] explanation.” Participant 9 indicated, “There has never been any education on if [patients] are at this level [of assistance], then they score this.” Participant 2 described how this seeming lack of education had an impact on her use of the tools: “We kind of say, ‘do your best and do whatever you think,’ but I know there are definitely discrepancies between therapist to therapist.” Several issues related to rating patients' performance were described as unclear and were suggested as contributors to the perceived discrepancies in scoring. Participant 7 felt there was a lack of information, leaving questions in her mind such as “What [type of] assistance does [the score] mean? Just physical assistance? Physical assistance and the amount of cueing you have to give them?” Participant 10 was one of many who expressed concern regarding the accuracy of the scoring: “There is no definition of what constitutes a little help or a lot of help.” Even though participant 9 was fairly confident in the reliability of the scoring, she identified similar concerns: “I would say probably 75% of the time I feel confident that my peers would pick the same [score]…. The other 25% are usually when it's that [minimal to a little bit of assistance or moderate-assistance] patient where I wonder if my colleague would either say ‘a little’ or ‘a lot.’”
Too Simple
The tools were viewed as assessing very basic physical functions with a relatively imprecise scoring system. Participants noted that in order to thoroughly describe patients' functional abilities and make decisions about their care, they required more refined details about their patients' abilities and the environmental context in which they must function after discharge than those provided by AM-PAC “6-Clicks.” We identified 2 facets to this theme: (1) too simple to use for decision making and (2) too simple to use for generalizing function.
Too simple to use for decision making.
Participants felt the tools did not provide enough information to serve as the basis for the type of complex decision making in which they routinely engaged. Comorbidities, chronic disease, adherence, home structure, caregiver assistance, cognitive factors, and psychosocial issues were identified as important factors not taken into account by the tools. Participants stated that they mostly ignored scores during decision making: “It just gives you a number. It tells nothing about the background of the patient, the mentation of the patient, anything like that” (participant 1). Similarly, participant 9 stated she did not use the scores from the tools during discharge planning: “I base that [discharge planning] off the entire clinical picture of the patient and what their mobility level is, as well as their home setup and the support that they have at home.” Many participants expressed concern regarding “management's” potential use of the tools for making decisions about patient care, and they felt that their own clinical decision-making skills were far more inclusive than what the scores represented. Participant 10 explained her concern: “I think there is still some fear that [management] will try to take away from our clinical decision making by using that [score] to dictate our discharge recommendation, which, I think, would be a true shame.”
Too simple to use for generalizing function.
Participants felt that the tools reflected patients' abilities to perform the simple tasks identified by the items; however, they explained that the scores did not always represent patients' ability to accomplish activities and participate in a broader context. They felt that the tools could be used with all patients and that a tool was “a good start…gives a good range of performance, a range of potential…however, it needs to be used with other material” (participant 6). Participant 1 explained her belief that the tools could “show [patients'] progress, but, I think, it is just physical progress and nothing beyond that.” A score was not viewed as sufficient for describing patients' ability to participate in life situations. This concern was explained by participant 8, who identified an area in which the individual items were not seen as generalizable to function: “The 3 to 5 steps [item assessing a patient's ability to negotiate stairs] seems a little odd to me. I feel like it should be a flight…because most people have a full flight of stairs [at home].”
No Effect on Clinical Routine
Participants felt the tools were just another step in the documentation process and that completing the documentation of scores was quick and effortless. Scores were not something they reflected on during evaluation or treatment sessions. Participant 8 stated, “[The items on the tools reflect] what I do every day with my patients; and it really is nothing that I think about at all until I need to do ‘6-Clicks’ to get through it.” Participant 4 noted, “I just complete the tool because it is mandatory in our notes.” Although completion was mandatory, participants did not feel that the tools influenced them at all. As participant 2 stated, “I think most [coworkers] are indifferent…. It doesn't take long. It doesn't really affect us negatively in any sense, but it doesn't really add anything to our day either.” Since implementation, most participants “don't think it affects [treatment time and efficiency] at all” (participant 7). Participant 7 added, “I don't really see any difference. You just go through it when you do your notes, and it doesn't really change much of anything. It's pretty quick, which is nice.”
Potential for Communicating Patients' Function Across Disciplines
Our participants felt that patients' scores based on the tools could potentially be used to describe patients' function to other health professionals. As expressed by participant 8, however, participants noted that other health professionals had variable understanding of the tools and the meaning of scores: “Some people have no idea what a ‘6-Clicks’ score is, and certain case managers and social workers know exactly what it is because they learn about it in meetings and think it is important.” Participant 4 thought that scores could facilitate communication with nurses: “If I work with a patient who was evaluated yesterday, and yesterday they were a 19 [AM-PAC ‘6-Clicks’ score] and today they are a 14, it is like, ‘Why a decline in status?’ So we make sure the nurses know that this is a big decline. Yeah, in those cases, it promotes conversation.” Participant 2 stated, “I would like to see it more hospital-wide…. It would be beneficial so the nurses have a better idea of who they can safely mobilize.” Some participants also brought up how the tools could help in communicating with nurses and physicians about the fact that some patients seemed to be referred unnecessarily: “They're looking at the [score] spreads, they're looking at how many patients maxed out on the ‘6-Clicks’ to see if they have needs for therapy” (participant 6).
Discussion
The perceptions of participants regarding the implementation of the AM-PAC “6-Clicks” tools were characterized by 5 themes that are reflected to some degree in previous qualitative studies of speech-language therapists',2,4 mental health clinicians',3 and physical therapists'1 perceptions of implementation of standardized measurements. The themes also lend support to the theory of alignment proposed by Skeat and Perry4 to explain how outcome measurements were implemented by speech-language therapists in Australia. For example, our participants' responses seemed to support the proposition that clinicians implementing a standardized measurement process consider a tool's role, how it fits within the context of practice, and its value.
We heard different viewpoints regarding the purpose of the AM-PAC “6-Clicks” tools, suggesting that the implementation process did not result in a clear understanding among staff of the intended role. Previous research has suggested that sharing with clinicians how data are used is important for successful implementation.4 If clinicians do not know why a standardized measure is needed, or if a tool fails to meet an imagined essential purpose, the perceived value of the tool may be affected.4 It is possible that a lack of uniform understanding of the purpose of the tools contributed to our participants' somewhat indifferent attitudes. That indifference was reflected in participants' willingness to collect the data without actually using them in their care of patients. Participants also expressed concern that management might use the data from AM-PAC “6-Clicks” for determining resource allocation; however, they were supportive of the use of data for research. Similarly, other studies have reported clinicians' concerns about how management might use data from outcomes measures to reduce resources,1,3 as well as support for measurements that could be used in research.1
The theoretical framework proposed by Skeat and Perry4 also suggests that clinicians may seek to make standardized measures work within the context of their setting through accommodation, that is, by changing their practices and priorities. For example, within a year of implementing AM-PAC “6-Clicks” tools in the settings in which our participants practiced, the items were moved to the top of the online documentation system used by therapists. This change meant that AM-PAC “6-Clicks” item scores were the first data that clinicians entered after each patient encounter and the first information that another provider might encounter when reviewing therapists' documentation.
Another strategy to align standardized measurements with the practice context is to adapt the way they are used.4 For example, our participants noted the potential for using AM-PAC “6-Clicks” scores for communication with other health care professionals in the hospital when describing patients' function. This type of communication is needed in acute care given that a patient's functional status changes quickly and there are usually many providers working with a single individual, particularly around discharge planning. Application of data provided by the AM-PAC “6-Clicks” tools could improve the ability of team members to communicate about appropriate discharge settings for patients.
Skeat and Perry4 also identified suitability as an important aspect of a tool. Suitability implies that a tool yields an accurate assessment of what it purports to measure and that its application is feasible within the context of practice. Participants voiced concern about the reliability of the tools because they were confused in interpreting the score descriptors for the amount of assistance required or the difficulty patients might have completing an activity. Participant interviews and data analysis were conducted prior to publication of a study of the reliability of the tools.11 That study demonstrated very high reliability for total scores; however, levels of agreement for some individual item scores were small, to some extent supporting our participants' stated confusion about how to score items. Unclear instructions have been shown to be a barrier to measurement tool implementation in previous studies,5 and unease about reliability of scoring may have led to poor integration of the data into participants' clinical reasoning. Previous literature supports the need for adequate initial training and follow-up support after training to help users interpret and use the data from standardized measures.3 Training could include detailed instruction, example cases, adequate question-and-answer sessions, and user guides. Such training opportunities would give therapists a full understanding of the scoring and intent of a measure, which could minimize confusion and improve measurement reliability.
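For readers curious about what an interrater reliability check like the one cited above11 involves computationally, the following is a minimal sketch. It implements ICC(2,1) (two-way random effects, absolute agreement, single rater) from its standard ANOVA definition; the cited study's exact ICC variant is not specified in this text, so this is one common formulation rather than a reproduction of that analysis, and the total scores below are invented for the example.

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `scores` is an (n_patients x n_raters) matrix of total scores.
    """
    n, k = scores.shape
    grand_mean = scores.mean()
    # Partition the total sum of squares into patient, rater, and error terms.
    ss_total = ((scores - grand_mean) ** 2).sum()
    ss_patients = k * ((scores.mean(axis=1) - grand_mean) ** 2).sum()
    ss_raters = n * ((scores.mean(axis=0) - grand_mean) ** 2).sum()
    ss_error = ss_total - ss_patients - ss_raters

    ms_patients = ss_patients / (n - 1)
    ms_raters = ss_raters / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_patients - ms_error) / (
        ms_patients + (k - 1) * ms_error + k * (ms_raters - ms_error) / n
    )

# Invented total scores (range 6-24) from 2 raters scoring the same 6 patients.
scores = np.array([
    [18, 17],
    [9, 10],
    [22, 22],
    [14, 12],
    [6, 7],
    [20, 19],
])
print(round(icc_2_1(scores), 2))  # ~0.98 for these invented data
```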
Our participants found that implementation of AM-PAC “6-Clicks” tools was feasible; however, they believed that the tools were not comprehensive enough to provide meaningful information for making generalizations about patients' function or contributing to clinical decision making. Deutscher et al1 also reported that although the physical therapists participating in their study technically completed the standardized assessments, some failed to integrate the information provided by the measures into their clinical decision-making process. Although the AM-PAC “6-Clicks” tools were viewed as quick, easy, and convenient, the absence of items assessing social support and other home factors led participants to believe that the tools were too simple to guide discharge planning, an important role for clinicians in this environment. Similarly, other studies have reported that clinicians' sense of disconnect between their expectations of what should be measured and what they perceive a tool actually measures is a barrier to implementation of a standardized measurement process.1,3 Meehan et al3 noted that even clinicians who were in favor of standardized measurement questioned its validity when the measures were seen as reductionist and too brief to be useful. Although our participants seemed to want a more detailed tool that addressed various factors influencing functional ability, there is likely an inverse relationship between ease of use and detail of information. Fully inclusive tools that provide a breadth of information are likely to take a significant amount of time to complete, and lack of time has been reported as a significant barrier to implementation of standardized measurement tools.13
The universal use of AM-PAC “6-Clicks” by physical therapists and occupational therapists in acute care settings at Cleveland Clinic Health System was mandated by leaders at the institution. Although a mandate by management has been reported to be a barrier to implementation of standardized measurement tools,13 we perceived an overall positive attitude among study participants. There appeared to be cooperation among the therapists and support from management during the implementation process, and these factors have been reported to facilitate standardized data collection.13 As a whole, the interviews with our participants reflected a sentiment reported by speech-language therapists interviewed by Isaksen2: collection of outcomes measures is a necessary part of the clinician's management of patient care and makes sense for practice.
Limitations
Given the qualitative nature of the analysis, the results may have been influenced by the biases of the research team. We attempted to reduce bias by acknowledging and discussing our perceptions prior to the analysis and by questioning one another during the analysis. One of the research team members had conducted previous research on the use of standardized measurement tools by physical therapists. None of the team, however, had ever been involved in a similar implementation process. The 3 interviewers were novice clinicians and had not previously been involved in research; however, the fourth member of the research team had experience in both qualitative and quantitative research. We also conducted our analysis using an inductive approach, completing our literature review after the data collection and analyses were completed. A deductive approach might have allowed us to explore areas directly related to the alignment theory that we applied to our findings. The main questions outlined in the Appendix were used consistently in our interviews; however, the nature of the semistructured interview process is such that not all participants were asked the same follow-up questions in the same manner. Some follow-up questions may have biased responses as we attempted to obtain information about the therapists' full experience.
Our participants came from one health care system; thus, the context within which standardized measurement was implemented was similar for all participants. Given the relatively small percentage of staff who agreed to participate in this study, participants may have had different perspectives than those who did not volunteer, and additional interviews may have revealed new themes. The majority of participants had 5 years or less of experience and, therefore, likely graduated from professional programs with curricula that addressed evidence-based practice and introduced standardized measures of function. This fact may have made them more receptive to applying new standardized measures and the idea of collecting clinical data for research purposes. We did not observe how participants used the tool; therefore, analysis relied solely on the information provided in the interviews. Finally, our study examined perceptions related to application and implementation of AM-PAC “6-Clicks” tools only and results may or may not apply to perceptions of other standardized measures implemented in other settings.
Implications for Future Research
This study explored therapists' perceptions regarding the implementation of one specific set of standardized measures in one large health system. To expand the scope of the existing small body of knowledge, our findings might be used to design subsequent qualitative studies in other types of rehabilitation settings. Furthermore, quantitative research approaches might be based on the results of this study to explore the perceptions of a larger sample of therapists across various settings. New knowledge derived from such studies might serve to refine the theory of alignment proposed by Skeat and Perry.4
This study also only explored the perceptions of the staff who implemented standardized measures. It might be valuable to gather similar information from individuals who make decisions about how practices are managed and use patient data to judge practice performance. Coalescing data from staff and management perspectives may help to facilitate future implementation of standardized measurement processes in rehabilitation practices.
This study also examined therapists' perceptions of implementation of standardized measures at only one point in time. Based on the theory of alignment, it is possible that therapists adapt their thinking and practices related to standardized measurement over time. Therefore, a longitudinal study might provide additional information.
In conclusion, the study adds support for the theoretical framework describing implementation of a standardized measurement process proposed by Skeat and Perry.4 In implementing the AM-PAC “6-Clicks” tools, participants in our study considered the tools' role; how the tools fit within the context of their practice; and the value of the tools for themselves, their managers, and other clinicians with whom they worked. They also were concerned with the accuracy and feasibility of the tools. The AM-PAC “6-Clicks” tools were accepted as potentially valuable in facilitating administrative decisions and research; however, they were not perceived as useful for therapists' routine patient care. Participants generally lacked confidence in the reliability of their scoring, which suggested the need for complete and ongoing discussion, education, and support regarding the implementation of a new measure. Participants expressed concern that the AM-PAC “6-Clicks” scores might be substituted for their clinical decision making. They also felt that the tools did not fully reflect patients' overall function, nor were they particularly useful for discharge planning. Participants believed the AM-PAC “6-Clicks” tools had the potential to facilitate communication among colleagues about patients' physical function. The themes described in this article support previous research findings related to the implementation of new standardized measurement processes among various groups of health care professionals.
Appendix.
Interview Guide
Footnotes
Dr Jette provided concept/idea/research design, project management, and administrative support. All authors provided writing and data analysis. Dr Dewhirst, Dr Ellis, and Dr Mandara provided data collection. The authors thank their colleagues at the Cleveland Clinic for their participation and assistance.
The University of Vermont and Cleveland Clinic Health System institutional review boards approved the study.
Received January 21, 2015.
Accepted February 4, 2016.
© 2016 American Physical Therapy Association