Open Access Research article

Organizational readiness to change assessment (ORCA): Development of an instrument based on the Promoting Action on Research Implementation in Health Services (PARIHS) framework

Christian D Helfrich (1,2)*, Yu-Fang Li (1,3), Nancy D Sharp (1,2) and Anne E Sales (4)

Author Affiliations

1 Northwest HSR&D Center of Excellence, VA Puget Sound Healthcare System, Seattle, Washington, USA

2 Department of Health Services, University of Washington School of Public Health, Seattle, Washington, USA

3 Department of Biobehavioral Nursing and Health Systems, University of Washington, School of Nursing, Seattle, Washington, USA

4 Faculty of Nursing, University of Alberta, Edmonton, Alberta, Canada


Implementation Science 2009, 4:38  doi:10.1186/1748-5908-4-38

Published: 14 July 2009

Abstract

Background

The Promoting Action on Research Implementation in Health Services (PARIHS) framework is widely promoted as a guide to implementing evidence-based clinical practices. However, it has as yet no pool of validated measurement instruments that operationalize the constructs defined in the framework. The present article introduces the Organizational Readiness to Change Assessment (ORCA), an instrument organized according to the core elements and sub-elements of the PARIHS framework, and reports on its initial validation.

Methods

We conducted scale reliability and factor analyses on cross-sectional, secondary data from three quality improvement (QI) projects (n = 80) conducted in the Veterans Health Administration. In each project, identical 77-item ORCA instruments were administered to one or more staff members from each facility involved in the QI projects. Items were organized into 19 subscales and three primary scales corresponding to the core elements of the PARIHS framework: (1) strength and extent of evidence for the clinical practice changes represented by the QI program, assessed with four subscales; (2) quality of the organizational context for the QI program, assessed with six subscales; and (3) capacity for internal facilitation of the QI program, assessed with nine subscales.

Results

Cronbach's alpha coefficients for scale reliability were 0.74, 0.85, and 0.95 for the evidence, context, and facilitation scales, respectively. The evidence scale and its three constituent subscales failed to meet the conventional reliability threshold of 0.80, and three individual items were eliminated from the evidence subscales following reliability testing. In exploratory factor analysis, three factors were retained. Seven of the nine facilitation subscales loaded onto the first factor; five of the six context subscales loaded onto the second factor; and the three evidence subscales loaded onto the third factor. Two subscales failed to load significantly on any factor: one measuring general resources (from the context scale) and one measuring the clinical champion role (from the facilitation scale).
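Cronbach's alpha summarizes a scale's internal consistency by comparing the sum of the individual item variances to the variance of the summed scale score. A minimal sketch of the calculation, using hypothetical Likert-type responses rather than data from this study:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of respondent rows (each a list of item scores)."""
    k = len(items[0])                              # number of items in the scale
    cols = list(zip(*items))                       # responses grouped by item
    item_var_sum = sum(variance(c) for c in cols)  # sum of per-item variances
    total_var = variance([sum(row) for row in items])  # variance of total scores
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical 4-item subscale, six respondents, 5-point Likert responses
responses = [
    [4, 4, 5, 4],
    [3, 3, 3, 2],
    [5, 4, 5, 5],
    [2, 2, 1, 2],
    [4, 5, 4, 4],
    [3, 2, 3, 3],
]
alpha = cronbach_alpha(responses)  # ≈ 0.95 for these made-up data
```

Values near or above 0.80 are conventionally taken to indicate acceptable reliability, which is the threshold the evidence scale failed to meet here.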

Conclusion

We find general support for the reliability and factor structure of the ORCA. However, reliability was poor among the evidence measures, and factor analysis results for the measures of general resources and the clinical champion role did not conform to the PARIHS framework. Additional validation, including criterion validation, is needed.