
Measurement of a model of implementation for health care: toward a testable theory

Abstract

Background

Greenhalgh et al. used a considerable evidence base to develop a comprehensive model of the implementation of innovations in healthcare organizations [1]. However, these authors did not fully operationalize their model, making it difficult to test formally. The present paper represents a first step in operationalizing Greenhalgh et al.’s model by providing background, rationale, working definitions, and measurement of key constructs.

Methods

A systematic review of the literature was conducted for key words representing 53 separate sub-constructs from six of the model’s broad constructs. Using an iterative process, we reviewed existing measures and utilized or adapted items. Where no existing measure was deemed appropriate, we developed items to measure the constructs through team consensus.

Results

The review and iterative process of team consensus identified three types of data that can be used to operationalize the constructs in the model: survey items, interview questions, and administrative data. Specific examples of each are reported.

Conclusion

Despite limitations, the mixed-methods approach combining survey, interview, and administrative data can facilitate research on implementation by providing investigators with a measurement tool that captures most of the constructs identified by the Greenhalgh model. These measures are currently being used to collect data concerning the implementation of two evidence-based psychotherapies disseminated nationally within the Department of Veterans Affairs. Testing of psychometric properties and subsequent refinement should enhance the utility of the measures.


Background

There is currently a wide gap between the treatments found to be efficacious in randomized controlled trials and the treatments available in routine clinical care. One comprehensive theoretical model of dissemination and implementation of healthcare innovations intended to bridge this gap was developed by Greenhalgh et al.[1]. Derived from a systematic review of 13 distinct research traditions[2, 3], this model is both internally coherent and based largely on scientific evidence. It is consistent with findings from other systematic narrative reviews[4–6] regarding the factors related to implementation. In addition, it served as the starting point for development of the Consolidated Framework for Implementation Research[7].

As shown in Figure 1, implementation is viewed as a complex process organized under six broad constructs: innovation; adopter; communication and influence; system antecedents and readiness (inner organizational context); outer (inter-organizational) context; and implementation process. However, there are no explicit recommendations for operational definitions or items to measure most of the identified constructs. The authors recommend a structured, two-phase approach for capturing their model[1]. For phase one, they advised assessment of specific individual components of the model (e.g., perceived characteristics of the innovation, adopter characteristics). For the second phase, they proposed construction of a broad, unifying meta-narrative of how these components interact within the social, political, and organizational context[8].

Figure 1. Greenhalgh and colleagues’ (2004) model of implementation processes.

In order to advance toward a testable theory and thus benefit implementation science, an operationalization of key constructs and their measurement is needed. Articulation of this model may also aid the implementation process in other ways. For example, administrators or treatment developers may ask providers to complete these measures in order to understand individual and organizational barriers to implementation and to identify strengths that can help teams overcome these challenges. This information can then be used to inform the design of training, promote provider engagement in evidence-based innovations, assist in problem-solving around obstacles, and guide development of the implementation process.

Our research group set out to operationalize the constructs in Greenhalgh et al.’s[1] model for use in a quantitative survey and a semi-structured interview guide (a full copy of the survey can be found in Additional file 1 and a full copy of the semi-structured interview in Additional file 2). The present paper provides the background, rationale, working definitions, and measurement of constructs. This work was done in preparation for studying a national roll-out of two evidence-based psychotherapies for post-traumatic stress disorder (PTSD) within the Department of Veterans Affairs (VA)[9]. Although the questionnaire and interview guide were developed to assess factors influencing implementation of specific treatments for PTSD, they can likely be adapted for assessing the implementation of other innovations. This systematic effort represents a first step toward operationalizing constructs in the Greenhalgh model.

Methods

Construction of measures: systematic literature search and article selection process

Measure development began with a systematic literature search of keywords representing 53 separate sub-constructs from the six broad constructs (innovation, adopter, communication and influence, system antecedents and readiness, outer context, and implementation process) identified in Figure 1. Only those constructs that were both related to implementation of an existing innovation (rather than development of an innovation) and definable by our research group were included.1 Searches were conducted in two databases (PsycInfo and Medline) and were limited to empirical articles published between 1 January 1970 and 31 December 2010. Search terms included the 53 sub-constructs (e.g., relative advantage) combined with ‘measurement’ or ‘assessment’ or ‘implementation’ or ‘adoption’ or ‘adopter’ or ‘organization.’ After culling redundant articles and eliminating unpublished dissertations and articles not published in English, we reviewed the abstracts of the 6,000 remaining articles. From that pool, 3,555 citations were deemed appropriate for further review.
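As a concrete illustration of this search strategy, the sketch below programmatically crosses sub-construct terms with the methodology terms. The sub-construct list here is a small illustrative excerpt rather than the full set of 53, and actual PsycInfo and Medline queries would follow each database’s own field syntax.

```python
# Illustrative sketch of the Boolean search strategy described above.
# The sub-construct list is an excerpt (the study used 53 terms), and
# real PsycInfo/Medline queries would follow each database's syntax.

SUB_CONSTRUCTS = ["relative advantage", "compatibility", "trialability",
                  "observability", "opinion leader", "absorptive capacity"]
METHOD_TERMS = ["measurement", "assessment", "implementation",
                "adoption", "adopter", "organization"]

def build_queries(sub_constructs, method_terms):
    """Cross each sub-construct with the OR'd methodology terms."""
    method_clause = " OR ".join(f'"{t}"' for t in method_terms)
    return [f'"{c}" AND ({method_clause})' for c in sub_constructs]

for query in build_queries(SUB_CONSTRUCTS, METHOD_TERMS):
    print(query)
```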

Two members (CO, SD) of the investigative team conducted a preliminary review of titles and abstracts for possible inclusion. Articles were selected for further consideration if they proposed or discussed how to measure a key construct. Clear and explicit definitions of constructs were rarely provided in the literature, so we included articles with concepts that overlapped with one another. From the review of titles and abstracts, 270 articles were retrieved for full-text review. If the actual items from a measure were not provided in the paper, a further search was made using cited references. The investigative team also reviewed surveys that had been used in studies on health providers’ adoption of treatments[10–12] and organizational surveys related to implementation[13, 14].

We next developed a quantitative survey and semi-structured interview using an iterative process whereby the full investigative team reviewed potential items. For an item to be included in our measurement approach, all members of the team had to agree. The resulting pool of items was presented to 12 mental health professionals who offered feedback on item redundancy and response burden. Items were further revised by the team for clarity and consistency. In addition, our team concluded that it would be burdensome to participants if we included items reflecting every aspect of the model in the quantitative survey. Therefore, we made strategic decisions, described below, as to which items to retain in the survey versus the semi-structured interview. Certain constructs in the Greenhalgh model appear under more than one domain (e.g., social network appears under both adopter and communication and influence) or assess overlapping constructs (e.g., peer and opinion leaders). For certain constructs, the use of administrative data was deemed the most efficient means of assessment and served to augment survey or interview questions (e.g., incentives and mandates, environmental stability).

Results

Table 1 presents the constructs and working definitions, as well as a sample item for each. For each construct, an overview of relevant measures is provided, followed by an explanation of the measures that ultimately influenced our survey and semi-structured interview questionnaires or, for relevant constructs, the use of administrative data.

Table 1 Model constructs and examples of survey and interview questions and administrative data

Innovation

The five innovation attributes originally identified by Rogers[2] and included in the Greenhalgh et al. model are: relative advantage, compatibility, complexity, trialability, and observability. Additional perceived characteristics given less emphasis by Rogers but included by Greenhalgh et al. are potential for reinvention, risk, task issues, nature of the knowledge required for use, and augmentation/technical support.

Several investigators have attempted to operationalize Rogers’ innovation attributes[14–18]. The approach most theoretically consistent with Rogers was constructed by Moore and Benbasat[19], but this was not developed for application to a healthcare innovation[20, 21]. The 34- and 25-item versions of that scale have high content and construct validity and acceptable levels of reliability. Our group used several items from the Moore-Benbasat instrument that were deemed applicable to mental health practice (i.e., complexity, observability, trialability, compatibility) and reworded others to be more relevant to healthcare treatments (e.g., ‘The treatment [name] is more effective than the other therapies I have used’).

Others have also assessed Rogers’ innovation characteristics. A questionnaire by Steckler et al.[17] further informed the phrasing of our survey items for content and face validity. Content from additional sources[14, 18, 22, 23] was deemed not applicable because it examined socio-technical factors, deviated too far from the constructs, or did not map onto measurement of a healthcare practice.

Items concerning potential for reinvention were not taken from existing surveys, as most focused on identifying procedures specific to a particular intervention[24]. Thus, we were influenced by other discussions of reinvention as they applied more broadly across implementation efforts[25]. In particular, our items were constructed to assess providers’ reasons for making adaptations. As a perceived attribute of innovation, risk refers to uncertainty about possible detrimental effects. Existing tools for assessing risk focus on the adopter rather than the innovation[26, 27]. Thus, we reviewed these instruments for the adopter characteristics (presented below) and also used them to inform our items for risk.

The limited literature on nature of knowledge concerns how adopters use information instrumentally, both for problem solving and for strategic application[28]. However, Greenhalgh viewed nature of knowledge as whether an innovation is transferable or codifiable, which required us to craft our own items. Assessment of technical support is typically innovation specific, such as adequate support for a technology or practice guideline[29, 30]. Because the technical support needed to acquire proficiency likely differs across innovations (i.e., training support), we included items on the helpfulness of manuals and accompanying materials. Davis[31] developed a reliable and valid instrument to assess perceived usefulness (i.e., belief that the innovation enhances job performance). Although the construct has a different label, we judged it as nearly identical to Greenhalgh’s task issues. One item was borrowed from this scale to represent task issues.

All innovation attributes in the Greenhalgh model were represented in the quantitative survey. A few (e.g., technical support) were also included in the semi-structured interview.

Adopter characteristics

Greenhalgh et al.[8] suggested that a range of adopters’ psychological processes and personality traits might influence implementation. Items specifically identified in the model include adopter needs, motivation, values and goals, skills, learning style, and social networks[8]. Not all proposed adopter characteristics were depicted in the model figure; in the text, Greenhalgh[1] identified other potentially relevant adopter characteristics such as locus of control, tolerance of ambiguity, knowledge-seeking, tenure, and cosmopolitanism.

There was a lack of operational definitions in the literature regarding need; thus, we created our own. Assessment of this construct was informed by questions from the Texas Christian University Organizational Readiness to Change survey[12]. We included one item in our survey specific to need in the context of professional practice.

Assessment of motivation and ‘readiness for change’ has most often been based on the transtheoretical stages of change model[32–34]. One of the most widely used tools in this area is the University of Rhode Island Change Assessment Scale[33], which has items assessing pre-contemplation (not seeking change), contemplation (awareness of need for change and assessing how change might take place), action (seeking support and engaging in change), and maintenance (seeking resources to maintain changes made). We adapted items from this scale for our survey. Continued development of the stages of change model after construction of the Change Assessment Scale incorporated an additional preparation stage, which we represented in the qualitative interview as a question regarding providers’ interest in and attendance at trainings in evidence-based treatments.
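To illustrate how responses to such a stage-based measure are commonly scored, the sketch below averages Likert-type items within each stage subscale. The 1–5 response range and the item-to-subscale mapping are assumptions made for illustration, not the published scoring key of the Change Assessment Scale.

```python
# Minimal sketch of subscale scoring for a stage-of-change measure,
# assuming 1-5 Likert responses and a hypothetical item-to-subscale
# map (not the published URICA scoring key).

from statistics import mean

SUBSCALE_ITEMS = {
    "precontemplation": ["q1", "q5", "q9"],
    "contemplation":    ["q2", "q6", "q10"],
    "action":           ["q3", "q7", "q11"],
    "maintenance":      ["q4", "q8", "q12"],
}

def score_stages(responses):
    """Average the items belonging to each stage subscale."""
    return {stage: mean(responses[item] for item in items)
            for stage, items in SUBSCALE_ITEMS.items()}

# Hypothetical respondent: items q1..q12 answered on a 1-5 scale.
example = {f"q{i}": (i % 5) + 1 for i in range(1, 13)}
print(score_stages(example))
```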

Assessment of values and goals typically reflects estimation of personal traits/values (e.g., altruism) and terminal goals (e.g., inner peace)[34]. Funk et al.[35] devised a survey that included some adopter characteristics in relation to utilizing research-based innovations in healthcare settings. We used an item from their survey[35] as well as one from the Organizational Readiness to Change-Staff Version survey[12] to operationalize this construct.

The preferred means of assessing skills in healthcare practice is observational assessment as opposed to self-report[36, 37]. However, in order to capture some indication of skill, we simply added an ordinal item on level of training in the evidence-based treatment.

Greenhalgh et al.[1] provided no formal definition of learning style. We reviewed numerous learning style measures[38–45], but most had poor reliability and validity[46]. Others had attempted to revise and improve upon these instruments with limited success[47, 48]. More recently, an extensive survey of learning style was created[49]. Although we did not utilize these items because they did not reflect learning processes (e.g., auditory), we did follow the suggestion to word items directly about preferred instructional methods[49] (for reviews see[50, 51]). Given the potential complexity of this construct and the various ways to measure it, we included three diverse items, not expecting them to necessarily represent one scale, and also assessed this construct in the interview.

Measurement of some of the adopter traits has occurred in the larger context of personality research. For example, there are several measures of locus of control (LOC)[52–54]. After a review of these tools and discussion as to what was most applicable to the implementation of healthcare innovations, our group primarily borrowed items from Levenson’s[53] Multidimensional Locus of Control Inventory. The Levenson inventory includes three statistically independent scales that allow a multidimensional conceptualization of locus of control, unlike the widely used Rotter scale, which is unidimensional and conceptualizes locus of control as either internal or external. The Levenson scale has strong psychometric properties[53]. Unlike other LOC scales, it is not phrased to focus on health and therefore appeared more easily applied to measure LOC as a general personality factor. Similarly, numerous surveys of tolerance (and intolerance) for ambiguity have been developed[55–61]. After reviewing these measures, we chose to adapt items from McLain’s[59] Multiple Stimulus Types Ambiguity Scale due to its relevance to healthcare.

For knowledge-seeking, we adapted one additional question from the Organizational Readiness to Change-Staff Version survey[12] and devised two of our own. Tenure has consistently been measured as a temporal variable[62–64]. A clear distinction can be made between organizational and professional tenure. For the purposes of our survey, both organizational tenure[64] and professional tenure were included.

One means of assessing cosmopolitanism is by identifying membership in relevant groups[65]. Woodward and Skrbis’[66] assessment of cosmopolitanism informed the construction of our items. Pichler[65] differentiated between two conceptualizations of cosmopolitanism: ‘subjective/identity’ and ‘objective/orientation,’ where the former captures affiliations and the latter relevant attitudes. We followed a more ‘subjective/identity’ approach by including one survey item capturing how many professional meetings one attends per year[67].

Communication and influence

Communication and influence constructs in the Greenhalgh model included in the survey are: social networks, homophily, peer opinion (leader), marketing, expert opinion (leader), champions, boundary spanners, and change agent.

One of the most common measures of social networks is a name generator used to map interpersonal connections[68–70]. Relatedly, although there are several ways that peer opinion leaders have been assessed[3, 71], the most common is to ask respondents from whom they seek information and advice on a given topic. We included a name generator in the survey to identify social networks, as well as items asking about peer relationships. Similarly, we included one item assessing whether a provider had access to a peer opinion leader. This latter item is modeled after the Opinion Leadership scale, which has adequate reliability[72].
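To illustrate how name-generator responses are typically converted into a network for analysis, the sketch below builds a simple tie structure from hypothetical ‘from whom do you seek advice?’ answers. The respondent names, the use of undirected ties, and degree as a rough opinion-leader indicator are illustrative assumptions, not the authors’ analytic procedure.

```python
# Illustrative sketch: converting name-generator survey responses
# ("From whom do you seek information and advice about this treatment?")
# into a simple undirected network. Names are hypothetical.

from collections import defaultdict

responses = {
    "Provider A": ["Provider B", "Provider C"],
    "Provider B": ["Provider C"],
    "Provider D": ["Provider B"],
}

def build_network(name_generator_responses):
    """Map each person to the set of people they share a tie with."""
    network = defaultdict(set)
    for respondent, nominees in name_generator_responses.items():
        for nominee in nominees:
            network[respondent].add(nominee)
            network[nominee].add(respondent)
    return network

net = build_network(responses)
# Degree (number of ties) is one simple, rough indicator of who may
# function as a peer opinion leader within the program.
for person, ties in sorted(net.items(), key=lambda kv: -len(kv[1])):
    print(person, len(ties))
```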

Since there was no psychometrically sound measure of homophily in the literature[73], we chose to capture this construct from the interview data, focusing on the degree to which providers in a particular program had similar professional and educational backgrounds and theoretical orientations. Similarly, there was no identified measure of marketing, so we crafted one question for the interview.

While the terms expert opinion leader, change agent, and peer opinion leader are often used interchangeably and inconsistently[8], we were careful to create distinct definitions and measurements for each. For expert opinion leader, in the interview we assessed access to an expert consultant, and in the survey we asked whether providers themselves were consultants or trainers in the treatment.

Innovation champions play multiple roles in promotion (e.g., organizational maverick, network facilitator[1, 15, 74]). Our team assessed this construct in the interview by initiating a discussion of how the innovation was promoted and by whom.

The construct of boundary spanners has received minimal application in studies of implementation in healthcare settings[75]. Because there were no available tools for this construct, we modeled our items on the definition of boundary spanners: individuals who link their organization or practice with internal or external influences, helping various groups exchange information[76]. We also included one question capturing whether providers were affiliated with boundary spanners or were themselves boundary spanners.

The interview also included questions to identify the influence of a change agent by asking about decision-making responsibility in the organization as well as facilitation of internal implementation processes. Thus, while only a limited number of constructs within the communication and influence section were included in the survey, many of the concepts seemed best captured through dialogue and description and were therefore included in the interview.

System antecedents and readiness for innovation (inner context)

The constructs that comprise the inner and outer organizational context overlap considerably, making sharp distinctions difficult[6, 77]. Greenhalgh identified two constructs of inner context: system antecedents (i.e., conditions that make an organization more or less innovative) and system readiness (i.e., conditions that indicate preparedness and capacity for implementation).

As can be seen in Figure 1, system antecedents for innovation include several sub-constructs: organizational structure (size/maturity, formalization, differentiation, decentralization, slack resources); absorptive capacity for new knowledge (pre-existing knowledge/skills base, ability to interpret and integrate new knowledge, enablement of knowledge sharing); and receptive context for change (leadership and vision, good managerial relations, risk-taking climate, clear goals and priorities, high-quality data capture). In a review of organizational measures related to implementation in non-healthcare sectors, Kimberly and Cook[14] noted few standardized instruments.

Measurement of organizational structure has typically used simple counts of particular variables. Although this appears straightforward, providers may be limited in their knowledge of their organizational structure[14]. Thus, organizational structure and its sub-constructs were deemed best captured through the interview and administrative data sources. For our investigation of national roll-outs of two evidence-based psychotherapies, we were also able to integrate existing data routinely collected by the VA’s Northeast Program Evaluation Center (NEPEC). NEPEC systematically collects program-, provider-, and patient-level data from all specialized behavioral and mental health programs across the US[78, 79], allowing us to assess a number of organizational constructs.

Capitalizing on NEPEC administrative data, we were also able to capture size/maturity as program inception date, number of available beds, number of patients served in the past year, and number of full-time providers at various educational levels. Formalization was represented by program adherence to national patient admission, discharge, and readmission procedures, as well as through interview discussion regarding provider clarity around the organizational rules for decision-making and implementing changes. Differentiation, or division among units, was examined through providers’ descriptions on the structured interview of separations between staff from different backgrounds (e.g., psychology, nursing) as well as how different staff sectors communicated and shared practices (e.g., outpatient and residential).

Although there is no standardized measure of decentralization, we devised our own items regarding the dispersion of decision-making authority around the innovation. Additionally, there are no uniform instruments on slack resources. NEPEC data were used to capture staff-to-patient ratio and program capacity (including number of unique patients and number of visits).
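As a simple illustration of deriving such slack-resource indicators from administrative records, the sketch below computes a staff-to-patient ratio and visits per patient from a hypothetical program record. The field names and values are invented for illustration and are not actual NEPEC variables.

```python
# Illustrative sketch: deriving slack-resource indicators from
# administrative records. Field names and values are hypothetical,
# not actual NEPEC variables.

program_record = {
    "full_time_providers": 6,
    "unique_patients_past_year": 420,
    "total_visits_past_year": 3150,
}

def resource_indicators(record):
    """Compute simple capacity indicators for one program."""
    patients = record["unique_patients_past_year"]
    return {
        "staff_to_patient_ratio": record["full_time_providers"] / patients,
        "visits_per_patient": record["total_visits_past_year"] / patients,
    }

print(resource_indicators(program_record))
```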

For absorptive capacity for new knowledge, we devised items or questions for pre-existing knowledge/skill base, ability to learn and integrate new information, and enablement of knowledge sharing. Pre-existing knowledge/skill base was also included in the survey by identifying training level and tenure in the particular program as well as the organization. This was explored further through the interview when assessing overlapping skills-focused questions (see Adopter characteristics section). Ability to learn and integrate new information was assessed in the interview by asking about providers’ experience learning and using the innovation, and was felt to be adequately captured by interview questions regarding knowledge-seeking. Enablement of knowledge sharing was included in the survey and directly assessed communication patterns and exchange of knowledge.

Greenhalgh et al.’s construct of receptive context for change was judged to be somewhat similar to organizational readiness to change and organizational culture and climate. There are at least 43 organizational readiness for change measures, many of which have poor psychometric properties[80]. Although we considered a number of instruments[81–83], the one that most influenced the construction of our survey was the widely-used Texas Christian University Organizational Readiness for Change[12]. It has demonstrated good item agreement and strong overall reliability.

Similarly, although several tools exist for assessing culture and climate[84–86], most do not adequately capture Greenhalgh’s constructs, and so we developed new items to measure a number of these constructs. We reviewed the Organizational Social Context survey[87], but most of these items were also not representative of Greenhalgh’s constructs. Similarly, we reviewed the Organizational Culture Profile[88]. Although various items shared some commonality with Greenhalgh’s constructs (e.g., ‘being innovative’), we found most items to be relatively unspecific (e.g., ‘fitting in’).

We reviewed several questionnaires that specifically measured organizational leadership. One psychometrically sound measure, the Multifactor Leadership Questionnaire[89, 90], informed our survey item construction. Leadership items examined support for a new initiative at a variety of levels, including general mental health and program leaders. We devised an item to capture the presence and use of leadership vision.

More specifically, items from the Texas Christian University Organizational Readiness for Change[12] informed our survey items for managerial relations and risk-taking climate. We found no measures of clear goals and priorities or of high-quality data capture, so we constructed our own items to represent these constructs.

Similarly, no tools were available to capture system readiness for innovation, which includes tension for change, innovation-system fit, power balances (support versus advocacy), assessment of implications, dedicated time and resources (e.g., funding, time), and monitoring and feedback. Many of these constructs are not easily assessed with simple survey items and were therefore included in the interview.

We were only able to locate one relevant measure of tension for change[91], a rating system developed through interviews with organizational experts to identify factors that influence health system change. Unfortunately, the authors did not provide the specific items utilized, and thus we captured tension for change in the interview by asking providers about their existing work climate and the perceived need for new treatments. The constructs of innovation-system fit, power balances, assessment of implications, dedicated time and resources, and monitoring and feedback also had no standardized measures, and thus we devised our own questions.

Outer context

Outer context constructs include socio-political climate, incentives and mandates, interorganizational norm-setting and networks, and environmental stability. There are no standard tools to assess these domains, and only limited measures of socio-political climate[8]. We devised questions for the interview regarding environmental ‘pressure to adopt’ to tap into this construct.

Because there were no identified existing measures for incentives and mandates, secondary data sources were used, such as a review of national mandates in provider handbooks from VA Central Office and discussions with one of the co-authors (JR), who is in charge of one of the national evidence-based roll-outs. Likewise, for interorganizational norm-setting and networks, the team devised items to assess these constructs because no reliable existing measures were available. Environmental stability was derived from interview questions asking whether staffing changes had occurred and the perceived reasons for those changes (e.g., moves, policy changes). This construct overlaps with inner context (e.g., funding clearly translates into resources that are available within the inner context); however, environmental stability is assumed to be affected by external influences. Thus, our group devised survey items and interview questions and used administrative data to represent outer context constructs. While organizational written policies and procedures are likely accessible to most researchers, changes in budgets and funding may not be, particularly for researchers studying implementation from outside an organization. When possible, this type of information should be sought to support the understanding of outer context.

Implementation process

Factors proposed to represent the process of implementation include decision-making, hands-on approach by leader, human and dedicated resources, internal communication, external collaboration, reinvention/development, and feedback on progress. Consistent with Greenhalgh et al.’s two-phase approach, we primarily captured the implementation process through the interview.

Decision-making was assessed through the questions regarding decentralization described above. Because there are no established measures to assess hands-on approach by leader, or human resources issues and dedicated resources, these were developed by group consensus. For internal communication, we asked a question in the interview about whether a provider sought consultation from someone inside their setting regarding the innovation and its implementation. For external collaboration, we also asked a specific question regarding outside formal consultation. We captured the construct of reinvention/development with an interview question concerning how the two innovations were used and whether they had been modified (e.g., number and format of sessions). Because no formal measure of feedback existed, we utilized the interview questions for monitoring and feedback (see System antecedents and readiness) to capture both constructs. Even though Greenhalgh et al. outline a separate set of constructs for implementation process, these seem to overlap with the other five broad constructs.

Discussion

Greenhalgh et al.[1] developed a largely evidence-based, comprehensive model of diffusion, dissemination, and implementation that can assist in guiding implementation research as well as efforts to facilitate implementation. Despite the model’s numerous strengths, there had been no explicit recommendations for operational definitions or measurement for most of the six identified constructs. Through a systematic literature review of measures for associated constructs and an iterative process of team consensus, our group has taken a first step toward operationalizing, and thus testing, this model.

We are presently using a mixed-methods measurement approach, combining quantitative data from the survey and administrative sources with qualitative data from semi-structured interviews and other artifacts (e.g., review of policies), to examine the implementation of two evidence-based psychotherapies for PTSD nationally within VA[9]. Information from that study should assist in the refinement of the measures, such as examination of psychometric properties and identification of changes needed to better operationalize the constructs. It will be essential, of course, to test the Greenhalgh et al. model through the use of our mixed-methods approach and the resulting survey, interview, and administrative data in additional healthcare organizations and settings and with non-mental health interventions. Given the challenge of operationalizing such a saturated model, this work should be considered a first step in the advancement of a testable theory. A contextual approach should be taken to strategically determine which constructs are most applicable to an individual study or evaluation. Also, a more in-depth examination of several constructs may be a needed next step.

Limitations

Some variables potentially important in the process of implementation are not addressed in the Greenhalgh model. For example, several adopter characteristics and social cognition constructs are not included (e.g., intention for behavior change, self-efficacy, memory)[92–94]. Further, in times of increasing fiscal constraint, it is important to note that the model does not consider the cost of the innovation itself or the costs associated with its implementation, including investment, supply, and opportunity costs (as opposed to available resources from the inner setting)[7].

Other constructs receive mention in the model but likely warrant further refinement and elaboration. For example, while several constructs are similar to organizational culture and climate, concurrent use of other measurement tools may be warranted (e.g., [84–87]). Similarly, the concept of leadership received only minimal attention in the Greenhalgh model, even though mental health researchers[10] have found this construct to be influential in implementation. Because the validity of the transtheoretical stages of change model has been questioned[95], alternatives may be needed to capture this important construct.

Other constructs are complicated by overlap (e.g., cosmopolitan, social networks, and opinion leaders) or are similarly applied to more than one domain. One example is feedback on progress, which is listed under the domain implementation process, but the very similar construct monitoring and feedback is listed under the domain system readiness for innovation. Likewise, social networks are captured under both adopter and communication and influence domains. Our measurement process attempted to streamline questioning (both in the survey and interview) by crafting questions to account for redundancy in constructs (e.g., reinvention).

We also chose not to include every construct and sub-construct in the model because their assessment would be burdensome for providers.1 In addition, some of these constructs were viewed as best captured in a larger meta-narrative[8] (e.g., assimilation and linkage), mapping the storyline and the interplay of contextual or contradictory information. Like most measures based on participant responses, our survey and interview may be influenced by intentional false reporting, inattentive responding or memory limitations, or participant fatigue.

It is possible that our search terms may not have identified all the relevant measures. For example, there are several other search terms that may have captured the ‘implementation’ domain, such as uptake, adoption, and knowledge transfer. In addition, searching for the specific construct labels from this model assumes that there is consensus in the research community about the meaning of these terms and that no other terms are ever used to label these constructs.

Of course, operationalizing constructs is only one aspect of making a model testable. It also requires information about construct validity, a clear statement of the proposed relationships between elements in the model that would inform an analysis strategy, and a transparent articulation about the generalizability of the model and which contexts or factors might limit its applicability.

In sum, our work here represents a significant step toward measuring Greenhalgh et al.’s comprehensive and evidence-based model of implementation. This conceptual and measurement development now provides for a more explicit, transparent, and testable theory. Despite limitations, the survey and interview measures, as well as our use of administrative data described here, can enhance research on implementation by providing investigators with a broad measurement tool that includes, in a single questionnaire and interview, most of the many factors affecting implementation identified by the Greenhalgh model and other overarching theoretical formulations. One important next step will be to evaluate the psychometrics of this measure across various healthcare contexts and innovations, to examine whether the definitional and measurement boundaries are reliable and valid, and to further refine our measure. Empirical grounding of the process of implementation remains a work in progress.

Endnote

1 See Figure 1. Terms not included in our operationalized survey, by construct and sub-construct: System antecedents for innovation: absorptive capacity for new knowledge: ability to find, interpret, recodify, and integrate new knowledge. Linkage (design stage): shared meanings and mission, effective knowledge transfer, user involvement in specification, capture of user-led innovation. Linkage (implementation stage): communication and information, project management support. Assimilation: complex, nonlinear process, ‘soft periphery’ elements.

References

1. Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O: Diffusion of innovations in service organizations: systematic literature review and recommendations for future research. Milbank Quart. 2004, 82: 581-629. 10.1111/j.0887-378X.2004.00325.x.
2. Rogers EM: Diffusion of innovations. 1962, New York: Free Press.
3. Rogers EM: Diffusion of innovations. 2003, New York: Free Press, 5.
4. Fixsen DL, Naoom SF, Blase KA, Friedman RM, Wallace F: Implementation research: a synthesis of the literature. 2005, Tampa: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network.
5. Stith S, Pruitt I, Dees J, Fronce M, Green N, Som A, Link D: Implementing community-based prevention programming: a review of the literature. J Prim Prev. 2006, 27: 599-617. 10.1007/s10935-006-0062-8.
6. Durlak JA, DuPre EP: Implementation matters: a review of research on the influence of implementation on program outcomes and the factors affecting implementation. Am J Community Psychol. 2008, 41: 327-350. 10.1007/s10464-008-9165-0.
7. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC: Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009, 4: 50. 10.1186/1748-5908-4-50.
8. Greenhalgh T, Robert G, Bate P, Macfarlane F, Kyriakidou O: Diffusion of innovations in health service organizations: a systematic literature review. 2005, Oxford: Blackwell Publishing.
9. Karlin BE, Ruzek JI, Chard KM, Eftekhari A, Monson CM, Hembree EA, Resick PA, Foa EB: Dissemination of evidence-based psychological treatments for posttraumatic stress disorder in the Veterans Health Administration. J Trauma Stress. 2010, 23: 663-673. 10.1002/jts.20588.
10. Aarons GA: Mental health provider attitudes toward adoption of evidence-based practice: the Evidence-Based Practice Attitude Scale (EBPAS). Ment Health Serv Res. 2004, 6: 61-74.
11. Cook JM, Biyanova T, Coyne JC: Barriers to adoption of new treatments: an internet study of practicing community psychotherapists. Adm Policy Ment Health. 2009, 36: 83-90. 10.1007/s10488-008-0198-3.
12. Lehman WEK, Greener JM, Simpson DD: Assessing organizational readiness for change. J Subst Abuse Treat. 2002, 22: 197-209. 10.1016/S0740-5472(02)00233-7.
13. Damanpour F: Organizational innovation: a meta-analysis of effects of determinants and moderators. Acad Manage J. 1991, 34: 555-590. 10.2307/256406.
14. Kimberly J, Cook JM: Organizational measurement and the implementation of innovations in mental health services. Adm Policy Ment Health. 2008, 35: 11-20. 10.1007/s10488-007-0143-x.
15. Markham SK: A longitudinal examination of how champions influence others to support their projects. J Prod Innov Manage. 1998, 15: 490-504. 10.1016/S0737-6782(98)00031-9.
16. Lin HF, Lee GG: Effects of socio-technical factors on organizational intention to encourage knowledge sharing. Manage Decis. 2006, 44: 74-88.
17. Steckler A, Goodman RM, McLeroy KR: Measuring the diffusion of innovative health promotion programs. Am J Health Promot. 1992, 6: 214-224. 10.4278/0890-1171-6.3.214.
18. Voepel-Lewis T, Malviya S, Tait AR, Merkel S, Foster R, Krane EJ, Davis PJ: A comparison of the clinical utility of pain assessment tools for children with cognitive impairment. Anesth Analg. 2008, 106: 72-78. 10.1213/01.ane.0000287680.21212.d0.
19. Moore GC, Benbasat I: Development of an instrument to measure the perceptions of adopting an information technology innovation. Inf Syst Res. 1991, 2: 192-222.
20. Karahanna E, Straub DW, Chervany NL: Information technology adoption across time: a cross-sectional comparison of pre-adoption and post-adoption beliefs. MIS Quart. 1999, 23: 183-213. 10.2307/249751.
21. Yi MY, Fiedler KD, Park JS: Understanding the role of individual innovativeness in the acceptance of IT-based innovations: comparative analyses of models and measures. Decis Sci. 2006, 37: 393-426. 10.1111/j.1540-5414.2006.00132.x.
22. Ramamurthy K, Sen A, Sinha AP: An empirical investigation of the key determinants of data warehouse adoption. Decis Support Syst. 2008, 44: 817-841. 10.1016/j.dss.2007.10.006.
23. Vishwanath A, Goldhaber GM: An examination of the factors contributing to adoption decisions among late-diffused technology products. New Media Soc. 2003, 5: 547-572. 10.1177/146144480354005.
24. Pérez D, Lefèvre P, Castro M, Sánchez L, Toledo ME, Vanlerberghe V, Van der Stuyft P: Process-oriented fidelity research assists in evaluation, adjustment and scaling-up of community-based interventions. Health Policy Plan. 2010, 26: 413-422.
25. Rebchook GM, Kegeles SM, Huebner D, TRIP Research Team: Translating research into practice: the dissemination and initial implementation of an evidence-based HIV prevention program. AIDS Educ Prev. 2006, 18: 119-136. 10.1521/aeap.2006.18.supp.119.
26. Ingersoll GL, Kirsch JC, Merk SE, Lightfoot J: Relationship of organizational culture and readiness for change to employee commitment to the organization. J Nurs Adm. 2000, 30: 11-20. 10.1097/00005110-200001000-00004.
27. Rolison MR, Scherman A: College student risk-taking from three perspectives. Adolescence. 2003, 38: 689-704.
28. Blancquaert I: Managing partnerships and impact on decision-making: the example of health technology assessment in genetics. Community Genet. 2006, 9: 27-33. 10.1159/000090690.
29. Chung N, Kwon SJ: The effects of customers’ mobile experience and technical support on the intention to use mobile banking. Cyberpsychol Behav. 2009, 12: 539-543.
30. Raw M, Regan S, Rigotti NA, McNeill A: A survey of tobacco dependence treatment guidelines in 31 countries. Addiction. 2009, 104: 1243-1250. 10.1111/j.1360-0443.2009.02584.x.
31. Davis FD: Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quart. 1989, 13: 319-339. 10.2307/249008.
32. McConnaughy EA, DiClemente CC, Prochaska JO, Velicer WF: Stages of change in psychotherapy: a follow-up report. Psychother: Theor Res Pract. 1989, 26: 494-503.
33. McConnaughy EA, Prochaska JO, Velicer WF: Stages of change in psychotherapy: measurement and sample profiles. Psychother: Theor Res Pract. 1983, 20: 368-375.
34. Miller WR, Rollnick S: Motivational interviewing: preparing people for change. 2002, New York: Guilford Press, 2.
35. Funk SG, Champagne MT, Wiese RA, Tornquist EM: BARRIERS: the barriers to research utilization scale. Appl Nurs Res. 1991, 4: 39-45. 10.1016/S0897-1897(05)80052-7.
36. Martino S, Ball S, Nich C, Frankforter TL, Carroll KM: Correspondence of motivational enhancement treatment integrity ratings among therapists, supervisors, and observers. Psychother Res. 2009, 19: 181-193. 10.1080/10503300802688460.
37. Perepletchikova F, Treat TA, Kazdin AE: Treatment integrity in psychotherapy research: analysis of the studies and examination of the associated factors. J Consult Clin Psychol. 2007, 75: 829-841.
38. Mueller DJ, Wornhoff SA: Distinguishing personal and social values. Educ Psychol Meas. 1990, 50: 691-699. 10.1177/0013164490503027.
39. Curry L: Review of learning style, studying approach, and instructional preference research in medical education. International perspectives on individual differences: Vol. 4, Cognitive styles. Edited by: Riding RJ, Rayner SG. 2000, Stamford: Ablex.
40. Entwistle NJ, Hanley M, Hounsell D: Identifying distinctive approaches to studying. Higher Educ. 1979, 8: 365-380. 10.1007/BF01680525.
41. Entwistle NJ, Tait H: The Revised Approaches to Studying Inventory. 1995, Edinburgh: University of Edinburgh, Centre for Research on Learning and Instruction.
42. Felder RM, Silverman LK: Learning and teaching styles in engineering education. Eng Educ. 1988, 78: 674-681.
43. Kolb DA, Fry R: Towards an applied theory of experiential learning. Theories of group process. Edited by: Cooper CL. 1975, London: Wiley.
44. Kolb D: Learning style inventory (revised edition). 1985, Boston: McBer.
45. Vermunt JDHM: Learning styles and guidance of learning processes in higher education. 1992, Amsterdam: Lisse Swets and Zeitlinger.
46. Yuen CC, Lee SN: Applicability of the Learning Style Inventory in an Asian context and its predictive value. Educ Psychol Meas. 1994, 54: 541-549. 10.1177/0013164494054002029.
47. Honey P, Mumford A: The manual of learning styles: revised version. 1992, Maidenhead: Peter Honey.
48. Romero JE, Tepper BJ, Tetrault LA: Development and validation of new scales to measure Kolb’s (1985) learning style dimensions. Educ Psychol Meas. 1992, 52: 171-180. 10.1177/001316449205200122.
49. Towler AJ, Dipboye RL: Development of a learning style orientation measure. Organ Res Methods. 2003, 6: 216-235. 10.1177/1094428103251572.
50. DeBello TC: Comparison of eleven major learning styles models: variables, appropriate populations, validity of instrumentation, and the research behind them. Read Writ Learn Disabil. 1990, 6: 203-222. 10.1080/0748763900060302.
51. Pashler H, McDaniel M, Rohrer D, Bjork R: Learning styles: concepts and evidence. Psychol Sci Public Interest. 2009, 9: 105-119.
52. Duttweiler PC: The Internal Control Index: a newly developed measure of locus of control. Educ Psychol Meas. 1984, 44: 209-221. 10.1177/0013164484442004.
53. Levenson H, Miller J: Multidimensional locus of control in sociopolitical activists of conservative and liberal ideologies. J Pers Soc Psychol. 1976, 33: 199-208.
54. Wallston KA, Wallston BS, DeVellis R: Development of the Multidimensional Health Locus of Control (MHLC) scales. Health Educ Monogr. 1978, 6: 160-170.
55. Budner S: Intolerance of ambiguity as a personality variable. J Pers. 1962, 30: 29-50. 10.1111/j.1467-6494.1962.tb02303.x.
56. Geller G, Tambor ES, Chase GA, Holtzman NA: Measuring physicians’ tolerance for ambiguity and its relationship to their reported practices regarding genetic testing. Med Care. 1993, 31: 989-1001. 10.1097/00005650-199311000-00002.
57. Gerrity MS, DeVellis RF, Earp JA: Physicians’ reactions to uncertainty in patient care: a new measure and new insights. Med Care. 1990, 28: 724-736. 10.1097/00005650-199008000-00005.
58. MacDonald AP: Revised scale for ambiguity tolerance: reliability and validity. Psychol Rep. 1970, 26: 791-798. 10.2466/pr0.1970.26.3.791.
59. McLain DL: The MSTAT-I: a new measure of an individual’s tolerance for ambiguity. Educ Psychol Meas. 1993, 53: 183-189. 10.1177/0013164493053001020.
60. Norton RW: Measurement of ambiguity tolerance. J Pers Assess. 1975, 39: 607-619. 10.1207/s15327752jpa3906_11.
61. Nutt PC: The tolerance for ambiguity and decision making. The Ohio State University College of Business working paper series, WP 88-291. 1998, Columbus: The Ohio State University College of Business.
62. Bedeian AG, Pizzolatto AB, Long RG, Griffeth RW: The measurement and conceptualization of career stages. J Career Dev. 1991, 17: 153-166.
63. Martin SL, Boye MW: Using a conceptually-based predictor of tenure to select employees. J Bus Psychol. 1998, 13: 233-243. 10.1023/A:1022959007116.
64. Sturman MC: Searching for the inverted U-shaped relationship between time and performance: meta-analyses of the experience/performance, tenure/performance, and age/performance relationships. J Manage. 2003, 29: 609-640.
65. Pichler F: ‘Down-to-earth’ cosmopolitanism: subjective and objective measurements of cosmopolitanism in survey research. Curr Sociol. 2009, 57: 704-732. 10.1177/0011392109337653.
66. Woodward I, Skrbis Z, Bean C: Attitudes towards globalization and cosmopolitanism: cultural diversity, personal consumption and the national economy. Br J Sociol. 2008, 59: 207-226. 10.1111/j.1468-4446.2008.00190.x.
67. Coleman J, Katz E, Menzel H: The diffusion of an innovation among physicians. Sociometry. 1957, 20: 253-270. 10.2307/2785979.
68. Burt RS: Network items and the general social survey. Soc Networks. 1984, 6: 293-339. 10.1016/0378-8733(84)90007-8.
69. Hlebec V, Ferligoj A: Reliability of social network measurement instruments. Field Methods. 2002, 14: 288-306.
70. Klofstad CA, McClurg SD, Rolfe M: Measurement of political discussion networks: a comparison of two ‘name generator’ procedures. Public Opin Quart. 2009, 73: 462-483. 10.1093/poq/nfp032.
71. Doumit G, Gattellari M, Grimshaw J, O’Brien MA: Local opinion leaders: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2007, Issue 1: Art. No. CD000125.
72. King CW, Summers JO: Overlap of opinion leadership across consumer product categories. J Market Res. 1970, 7: 43-50. 10.2307/3149505.
73. McPherson M, Smith-Lovin L, Cook JM: Birds of a feather: homophily in social networks. Annu Rev Sociol. 2001, 27: 415-444. 10.1146/annurev.soc.27.1.415.
74. Shane S: Uncertainty avoidance and the preference for innovation championing roles. J Int Bus Stud. 1995, 26: 47-68. 10.1057/palgrave.jibs.8490165.
75. Daft RL: Organization theory and design. 1989, New York: West Publishing, 3.
76. Tushman ML: Special boundary roles in the innovation process. Adm Sci Quart. 1977, 22: 587-605.
77. Pettigrew AM, Woodman RW, Cameron KS: Studying organizational change and development: challenges for future research. Acad Manage J. 2001, 44: 697-713. 10.2307/3069411.
78. Fontana AF, Rosenheck RA, Spencer HS: The long journey home: the first progress report of the Department of Veterans Affairs PTSD clinical teams program. 1990, West Haven: Northeast Program Evaluation Center.
79. Desai R, Spencer H, Gray S, Pilver C: The long journey home XVIII: treatment of posttraumatic stress disorder in the Department of Veterans Affairs. 2010, West Haven: Northeast Program Evaluation Center.
80. Weiner BJ, Amick H, Lee SYD: Conceptualization and measurement of organizational readiness for change: a review of the literature in health services research and other fields. Med Care Res Rev. 2008, 65: 379-436. 10.1177/1077558708317802.
81. Helfrich CD, Li YF, Sharp ND, Sales AE: Organizational readiness to change assessment (ORCA): development of an instrument based on the Promoting Action on Research in Health Services (PARIHS) framework. Implement Sci. 2009, 4: 38.
82. Anderson NR, West MA: Measuring climate for work group innovation: development and validation of the team climate inventory. J Organ Behav. 1998, 19: 235-258.
83. Patterson MG, West MA, Shackleton VJ, Dawson JF, Lawthom R, Maitlis S, Robinson DL, Wallace AM: Validating the organizational climate measure: links to managerial practices, productivity and innovation. J Organ Behav. 2005, 26: 379-408. 10.1002/job.312.
84. Cameron KS, Quinn RE: Diagnosing and changing organizational culture: based on the competing values framework. 1999, Reading: Addison-Wesley.
85. Moos R: The work environment scale. 2008, Palo Alto: Mind Garden, 4.
86. Zammuto RF, Krakower JY: Quantitative and qualitative studies of organizational culture. Res Organ Change Dev. 1991, 5: 83-114.
87. Glisson C, Landsverk J, Schoenwald S, Kelleher K, Hoagwood KE, Mayberg S, Green P, and the Research Network on Youth Mental Health: Assessing the organizational social context (OSC) of mental health services: implications for research and practice. Adm Policy Ment Health. 2007, 35: 98-113.
88. O’Reilly CA, Chatman J, Caldwell DF: People and organizational culture: a profile comparison approach to assessing person-organization fit. Acad Manage J. 1991, 34: 487-516. 10.2307/256404.
89. Bass BM: Leadership and performance beyond expectations. 1985, New York: Free Press.
90. Bass BM, Avolio BJ: Full range leadership development: manual for the Multifactor Leadership Questionnaire. 1997, CA: Mind Garden.
91. Gustafson DH, Sainfort F, Eichler M, Adams L, Bisognano M, Steudel H: Developing and testing a model to predict outcomes of organizational change. Health Serv Res. 2003, 38: 751-776. 10.1111/1475-6773.00143.
92. Fishbein M, Triandis HC, Kanfer FH, Becker M, Middlestadt SE, Eichler A: Factors influencing behavior and behavior change. Handbook of health psychology. Edited by: Baum A, Revenson TA, Singer JE. 2001, Mahwah: Lawrence Erlbaum Associates, 727-746.
93. Bandura A: Exercise of personal agency through the self-efficacy mechanism. Self-efficacy: thought control of action. Edited by: Schwarzer R. 1992, Washington: Hemisphere, 3-38.
94. Michie S, Johnston M, Abraham C, Lawton RJ, Parker D, Walker A: Making psychological theory useful for implementing evidence based practice: a consensus approach. Qual Saf Health Care. 2005, 14: 26-33. 10.1136/qshc.2004.011155.
95. West R: Time for a change: putting the Transtheoretical (Stages of Change) Model to rest. Addiction. 2005, 100: 1036-1039.


Acknowledgements

The project described was supported by Award Numbers K01 MH070859 and RC1 MH088454 from the National Institute of Mental Health. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute of Mental Health or the National Institutes of Health.

Author information

Correspondence to Joan M Cook.

Additional information

Competing interests

The authors declare they have no competing interests.

Authors’ contributions

JMC, CO and SD conducted a systematic review of articles for possible inclusion and identified the key measurement constructs captured. JMC, CO, SD, JCC, JR and PS participated in the development and refinement of both the quantitative survey measure and the semi-structured interview guide. All authors contributed to the drafting, editing and final approval of the manuscript.


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Cook, J.M., O’Donnell, C., Dinnen, S. et al. Measurement of a model of implementation for health care: toward a testable theory. Implementation Sci 7, 59 (2012). https://doi.org/10.1186/1748-5908-7-59
