Integrating evidence-based practices for increasing cancer screenings in safety net health systems: a multiple case study using the Consolidated Framework for Implementation Research

An Erratum to this article was published on 23 September 2016

Abstract

Background

Implementing evidence-based practices (EBPs) to increase cancer screenings in safety net primary care systems has great potential for reducing cancer disparities. Yet there is a gap in understanding the factors and mechanisms that influence EBP implementation within these high-priority systems. Guided by the Consolidated Framework for Implementation Research (CFIR), our study aims to fill this gap with a multiple case study of health care safety net systems that were funded by an American Cancer Society (ACS) grants program to increase breast and colorectal cancer screening rates. The initiative funded 68 safety net systems to increase cancer screening through implementation of evidence-based provider and client-oriented strategies.

Methods

Data are from a mixed-methods evaluation with nine purposively selected safety net systems. Fifty-two interviews were conducted with project leaders, implementers, and ACS staff. Funded safety net systems were categorized into high-, medium-, and low-performing cases based on the level of EBP implementation. Within- and cross-case analyses were performed to identify CFIR constructs that influenced level of EBP implementation.

Results

Of 39 CFIR constructs examined, six distinguished levels of implementation. Two constructs were from the intervention characteristics domain: adaptability and trialability. Three were from the inner setting domain: leadership engagement, tension for change, and access to information and knowledge. Engaging formally appointed internal implementation leaders, from the process domain, also distinguished level of implementation. No constructs from the outer setting or individual characteristics domains differentiated systems by level of implementation.

Conclusions

Our study identified a number of influential CFIR constructs and illustrated how they impacted EBP implementation across a variety of safety net systems. Findings may inform future dissemination efforts of EBPs for increasing cancer screening in similar settings. Moreover, our analytic approach is similar to previous case studies using CFIR and hence could facilitate comparisons across studies.

Background

Although cancer mortality rates in the USA have been declining, significant disparities persist, especially by race/ethnicity and socio-economic status [1–3]. Unfortunately, disparities also exist in early detection through cancer screening [3–5]. The persistence of these disparities warrants targeted efforts to employ evidence-based practices (EBPs) within settings that reach populations experiencing a disproportionate share of the cancer burden. Evidence-based guidelines for promoting cancer screening include the Guide to Community Preventive Services, which recommends both client- and provider-oriented approaches to increase screening rates [6]. As in many areas of health care, however, there is a gap between evidence-based guidelines and actual practice [7]. In safety net health care systems, which include public hospitals, federally funded community health centers and Federally Qualified Health Centers (FQHCs), local health department clinics, and free clinics that provide care for low-income, uninsured, and vulnerable patients, this gap may be particularly wide. In community health centers, a centerpiece of the nation’s health care safety net, less than 35 % of patients aged 51 to 74 had been appropriately screened for colorectal cancer (CRC) in 2014; in contrast, 65.1 % of the general population in this age group was current on CRC screening in 2012 [4, 8].

Numerous theories and models have been proposed to inform and study the adoption and implementation of innovations, including evidence-based interventions such as recommendations to increase cancer screening [9–12]. The Consolidated Framework for Implementation Research (CFIR) attempts to advance our understanding of implementation across a range of settings and types of interventions by synthesizing and categorizing constructs across different theories and models [13]. The CFIR organizes 39 constructs and sub-constructs within five major domains: intervention characteristics, outer setting, inner setting, characteristics of the individuals involved, and the process of implementation. Much of the research using CFIR to date has been qualitative [14–21]. Some studies were intentionally designed to examine CFIR constructs [19] and others organized emerging themes using CFIR categories following data collection [22–24]. Damschroder and Lowery recently illustrated a methodology for using CFIR to qualitatively identify constructs that differentiate levels of implementation [14]. This or a similar methodology has been used to examine implementation in several recent studies [14, 25, 26].

Relatively few studies have examined implementation of EBPs to increase cancer screening. A recent study used CFIR before actual implementation to inform the adaptation of an evidence-based program promoting CRC screening in an FQHC [7]. A second qualitative study of factors influencing cancer prevention in FQHCs did not use a conceptual model per se but identified competing priorities (e.g., medical home transformation), lack of reimbursement, and insufficient patient insurance as barriers to screening [27]. Federal reporting requirements were seen as a facilitator to improved cancer prevention and control. Neither of these studies identified factors that distinguished levels of implementation of EBPs for cancer screening.

The current study aims to fill such gaps by using CFIR to conduct a secondary analysis of data collected from an initiative to increase cancer screening in safety net settings. Understanding barriers and facilitators to cancer screening in settings that serve patients with significant health disparities is an important step in identifying strategies to decrease these disparities. Given persistent health disparities in cancer and the importance of accelerating adoption of EBPs to promote cancer screening, the American Cancer Society (ACS) developed the Community Health Initiatives: Community Health Advocates Implementing Nationwide Grants for Empowerment and Equity (CHANGE grants) program. This program funds primary care systems, with an emphasis on safety net providers, to implement EBPs to increase breast and/or colorectal cancer screening. Many of the grantees were FQHCs or FQHC look-alikes; the latter are certified as meeting federal health center requirements but are not part of the same funding stream. Through a secondary analysis of data collected as part of an evaluation of the CHANGE grants program, the current study aims to identify factors that distinguished implementation performance across the funded safety net systems.

Methods

Primary evaluation

Program description

The ACS CHANGE Grants program promotes EBPs for cancer screening as recommended by the US Community Preventive Services Task Force [6] and the National Colorectal Cancer Roundtable [28]. Recommended client-oriented strategies include client reminders, small media, group education, one-on-one education, reduction of structural barriers, and reduction of out-of-pocket costs. Provider-oriented strategies include provider assessment and feedback, and provider reminder and recall systems. Grants in 2013 ranged from $40,000 to $80,000, with the majority funded at $50,000.

Study sample

Of the 68 grant recipients of the CHANGE program in 2013, nine health systems were selected for site visits. To maximize variation, systems were chosen based on level of implementation as indicated in interim progress reports, cancer site (breast vs. colorectal), priority population (e.g., race/ethnicity), geographic location, and corporate funders. Qualitative data were collected during site visits conducted 7–8 months into the 12-month grants. Semi-structured interviews were conducted with three to nine key informants per system and with ACS primary care managers. Leadership at each system identified all staff involved in project implementation and evaluators attempted to interview all identified staff. In some cases, interviews were conducted with two to three staff simultaneously, depending on staff availability, but the majority of interviews were conducted one-on-one. Key informants included executive directors, chief medical officers, information technology staff, quality improvement coordinators, medical assistants, nurses, community health workers, and patient navigators. A total of 52 interviews were conducted with 61 individuals for an average of six interviews per system.

Data collection instruments

Interview guides

Interview guides were developed by ACS evaluation staff, with tailoring for category of respondent (i.e., leadership, staff, and ACS primary care managers). The guides did not directly address CFIR constructs but focused on implementation, with both general and specific questions. Interview topics included intervention selection, start-up activities, implementation details (e.g., implementers, training, implementation processes), challenges and facilitators to implementation, policy and practice-level changes, staffing structure, partnerships, and sustainability. See Table 1 for a list of interview questions by topic. The interviews averaged 60 min in length and were audio recorded and transcribed verbatim.

Table 1 Key informant interview guide questions

Progress reports

Each system worked with local ACS support staff, to varying degrees, to develop annual goals and screening targets for their program; how such goals were developed varied. Some were based on baseline screening data; others had difficulty calculating their baseline screening rates and set their goals based on professional judgment. Each system submitted quarterly reports to ACS through an online tracking tool as well as a final report. Grantees provided data to regional ACS primary care managers, who then entered them into a centralized database. Data included screening information (screening targets, numbers screened) and intervention-specific data (number of contacts made through one-on-one education, outreach, and client reminders).

Secondary data analysis

Implementation level determination

We categorized the nine systems into high-, medium-, and low-performing systems based on data collected from the quarterly and final reports as well as the process and qualitative data collected during the site visits. Each system implemented a combination of patient- and provider-oriented strategies, with plans evolving as the grant year unfolded. To facilitate cross-system comparisons, we examined the extent to which client reminders were implemented, as client reminders were the only EBP implemented by all nine systems. We also examined the extent to which the systems attained their quantitative screening targets. Given the variable quality of reporting, we also included qualitative indicators of success in our analysis, including the potential sustainability of selected EBPs as described by the key informants at each site. Table 2 provides a brief description of each of the nine systems and summarizes how they performed on these dimensions.

Table 2 Description of systems and implementation outcomes by level of implementation

Qualitative data analysis

Each safety net system is considered a case in our analysis. We employed a largely deductive approach using a codebook based on the CFIR constructs and definitions. All analysts (N = 6) participated in testing the codebook by coding two transcripts from one system and through multiple research team meetings to refine and reach consensus on code definitions and use. Analysts were instructed to adhere strictly to the definitions and apply codes without making inferences from the data. After the codebook was finalized, the qualitative data analysis was conducted in three phases.

The goal of the first phase was to organize the data with CFIR codes and build the foundation for case-based analysis [29]. A pair of analysts coded each transcript independently with CFIR codes and then met to resolve discrepancies. After consensus was reached, final codes were applied to the transcripts using NVivo 10. For each case, an analyst was designated as a case “expert” who coded all transcripts of that case and reviewed its project proposal and evaluation reports. All analysts kept brief memos for each transcript, which were compiled by the expert for each case to facilitate future analyses [30].

The goal of phase two was to distill the data into brief summaries for each CFIR construct and for each case. We generated a report via NVivo containing all relevant text units for each construct from all transcripts within each case. From this report, we created summaries and applied ratings for each construct first at the individual transcript level and then at the case level. We applied a rating system with two dimensions: magnitude and valence. “Valence” refers to the construct’s influence on implementation of EBPs—positive, negative, mixed, or largely descriptive with no valence (see Table 3). At the individual transcript level, valence was determined by respondents’ accounts related to the specific construct. At the case level, we also took into account whether or not the respondents in different transcripts agreed with each other in terms of the construct’s influence on implementation. “Magnitude” refers to the extent to which the constructs were discussed. At the individual transcript level, magnitude was represented by the total number of mentions of a construct in a transcript and the proportion of text coded with that construct per transcript. At the case level, magnitude was determined using two complementary methods. It was first calculated by multiplying the number of excerpts coded with a construct within a case by the proportion of transcripts coded with that construct in the case. We also assessed magnitude based simply on the proportion of transcripts within a case that were coded with a construct (i.e., none = 0 %, few = 1–25 %, some = 26–50 %, many = 51–75 %, most = 76–99 %, all = 100 %).
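To make the two case-level magnitude calculations concrete, the sketch below restates them in Python. This is a minimal illustration, not part of the study protocol: the function names and example counts are invented, and the actual ratings were derived from NVivo coding reports as described above.

```python
# Illustrative restatement of the two case-level "magnitude" methods.
# Function names and example values are hypothetical; actual ratings
# came from NVivo coding reports, not from this script.

def magnitude_product(n_excerpts: int, n_coded: int, n_total: int) -> float:
    """Method 1: number of coded excerpts weighted by the proportion
    of transcripts in the case that mention the construct."""
    return n_excerpts * (n_coded / n_total)

def magnitude_bin(n_coded: int, n_total: int) -> str:
    """Method 2: bin the proportion of coded transcripts per case."""
    pct = 100 * n_coded / n_total
    if pct == 0:
        return "none"
    if pct <= 25:
        return "few"
    if pct <= 50:
        return "some"
    if pct <= 75:
        return "many"
    if pct < 100:
        return "most"
    return "all"

# Example: a construct coded in 14 excerpts across 4 of 6 transcripts.
print(magnitude_product(14, 4, 6))  # ~9.3
print(magnitude_bin(4, 6))          # "many" (~67 %)
```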

Table 3 Construct rating criteria

The case expert summarized the following information for each construct in a case-specific matrix with constructs as rows and transcripts as columns: (1) a brief summary of how this construct manifested in a particular transcript, (2) a magnitude and a valence rating of the construct’s influence on implementation of EBPs in a particular transcript. Once this step was completed for all transcripts within each case, the case expert reviewed the information across all the transcripts and aggregated them to the case level for each CFIR construct with a case-level summary and ratings. The case expert also composed a case narrative identifying the most salient constructs that affected implementation. Both the matrix and narrative for each case were reviewed by a secondary analyst. Discrepancies were noted by the second analyst and resolved through discussion.

The goal of phase three was to identify patterns of influence on implementation for each CFIR construct across all nine cases [29]. We sought to identify CFIR constructs that distinguished high-, medium-, and low-performing cases. To achieve this goal, the case summaries and ratings from phase two were imported into a case-ordered matrix in which cases were listed by level of implementation [31]. Two analysts independently reviewed the magnitude and the valence of all constructs across the nine systems to identify distinguishing patterns across high-, medium-, and low-performing systems. Patterns were then confirmed by a third analyst. Table 4 presents results from the intervention characteristics domain as an example of our approach (see additional tables for the other domains).
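For illustration only, the following sketch assembles a small case-ordered matrix of the kind described above, using pandas: constructs as rows, cases ordered by implementation level as columns, and each cell holding a magnitude/valence rating. All ratings shown are invented placeholders, not the study’s actual data.

```python
# Hypothetical case-ordered matrix for cross-case pattern review.
import pandas as pd

# Cases listed by level of implementation (high, medium, low).
case_order = ["A", "B", "C", "D", "E", "F", "G", "H", "I"]

# (construct, case) -> (magnitude, valence); values are placeholders.
ratings = {
    ("adaptability", "A"): ("high", "+"),
    ("adaptability", "G"): ("low", "+"),
    ("trialability", "A"): ("high", "+"),
    ("trialability", "I"): ("low", "0"),
    # ...one entry per construct and case from the phase-two summaries
}

constructs = sorted({construct for construct, _ in ratings})
matrix = pd.DataFrame(index=constructs, columns=case_order)
for (construct, case), (magnitude, valence) in ratings.items():
    matrix.loc[construct, case] = f"{magnitude}/{valence}"

# Analysts scan each row for gradients from high- to low-performing cases.
print(matrix)
```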

Table 4 System-ordered matrix of magnitude and valence of intervention characteristics constructs by level of implementation

Results

Table 2 briefly describes the nine systems and presents implementation outcomes by system. Six systems were FQHCs or FQHC look-alikes. The other three systems were large and complex health systems with varying primary care arrangements (e.g., FQHC partner, or affiliated primary care clinics). One of these was an urban community health system (system B), another was a large university-based health system (system C), and the third was a large regional health system (system G). Across the nine systems, four served over 40,000 patients (systems B, C, D, G), three served 15,000 to 40,000 patients (systems A, E, H), and two were relatively small with fewer than 15,000 patients (systems F and I). Four of the systems focused on EBPs for colorectal cancer screening (systems A, B, C, D), four on breast cancer screening (systems E, F, G, I), and one on both types of cancer (system H). The systems rated as having high implementation levels (systems A, B, C) completed more than 2000 screening tests in the funded grant year. Those rated as having medium implementation levels (systems D, E, F) reported more than 250 but fewer than 1500 screenings, combined with at least modest success in implementing client reminders. Systems with low levels of implementation (systems G, H, I) reported 250 screenings or fewer, and modest or few client reminders.
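As a rough restatement of the categorization just described, the sketch below encodes the screening counts and client-reminder criteria as a simple rule. This is a simplification under stated assumptions: the actual assignment also weighed qualitative indicators such as potential sustainability, and the function and its arguments are hypothetical.

```python
# Simplified, hypothetical version of the implementation-level rule.
# The real categorization triangulated quantitative and qualitative data.

def implementation_level(screenings: int, reminders: str) -> str:
    """Classify a system from completed screenings in the grant year and
    the degree of client-reminder implementation ('strong'/'modest'/'few')."""
    if screenings > 2000:
        return "high"    # e.g., systems A, B, C
    if screenings > 250 and reminders in ("strong", "modest"):
        return "medium"  # e.g., systems D, E, F
    return "low"         # e.g., systems G, H, I

print(implementation_level(2400, "strong"))  # high
print(implementation_level(600, "modest"))   # medium
print(implementation_level(150, "few"))      # low
```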

Intervention characteristic domain

Among the eight constructs within this domain, we observed distinguishing patterns for two constructs: adaptability and trialability.

Adaptability

Adaptability, or the degree to which an intervention can be tailored to better fit the organizational context, was described as a positive influence in eight systems. The magnitude of influence, however, was stronger in the high-performing systems relative to the low-performing systems (see Table 4). In the three higher-performing systems, adaptations were made to the EBP delivery models after initial implementation, based on lessons learned from early delivery challenges. For example, system A changed the implementer model (from receptionists to medical assistants) for carrying out patient reminders to increase efficiency and patient privacy. System B changed the frequency and timing of the mailings and phone calls used for patient reminders to decrease missed colonoscopy appointments, modified data-capturing tools, revamped the delivery of education on colonoscopy preparation, and eliminated extra steps for patients during the screening process:

“… at one point a patient would go to tell them on the sixth floor that they had arrived, but then go down to the second floor to register at the cashier, and then go back to the sixth floor to say, okay, I checked in, now what? It was – so we try to cut all the extra steps and just make one smooth flow.”

Two of the three medium-performing systems discussed adaptation. System E’s community health worker came up with creative ways to deliver patient education. In system F, comments about adaptation described how certain features of the intervention could be adapted, such as use of appropriate languages to educate patients and supplementing the current electronic medical record (EMR) system with other tracking tools. In contrast, there was less discussion of adaptation in the three low-performing systems. Respondents from systems G and I discussed patient education materials. A respondent from system H simply expressed appreciation for being able to adapt.

While adaptation manifested positively across systems, the main differences lay in the extent to which adaptation was discussed and the types of adaptations made. Detailed descriptions of adaptations were shared in high- and medium-performing systems, and most of these adaptations were made to the delivery of the intervention. The lower-performing systems described fewer changes, and when they did, the changes were primarily to educational materials, suggesting that adaptability, or the actual adaptation of EBPs, influences implementation outcomes.

Trialability

Trialability, or the ability to pilot test before full-scale implementation, was discussed in six systems; three systems did not refer to this construct. Four out of six accounts of this construct were positive and two were descriptive (Table 4). The high-performing systems engaged in more discussion of trialability than medium- and low-performing systems, as reflected in the higher magnitude scores. All high-performing systems tried at least one pilot to figure out the best model for their system. For example, system A selected a clinic location that was viewed as ideal for piloting and determining the best approach to engage and remind patients about the fecal immunochemical test (FIT):

“So it’s kind of a fertile ground to play things out … we’re big enough, so just a great place to have pilots.”

Two of the medium-performing systems also used pilots. System E pilot tested the role of outreach workers and learned it was a struggle to schedule outreach events at churches and local pharmacies. System F tried different workflow procedures, such as communicating with providers via EMR pop-ups. Only one system from the lower-performing group (system I) mentioned trialability, describing the entire CHANGE program as a pilot that presented many challenges while also helping them identify areas for improvement. Because the higher-performing systems described trialability in significant detail, whereas it was not discussed or only mentioned in general terms in the lower-performing systems, trialability appears to have an important influence on implementation. This construct was closely associated with adaptability, given that adaptations were often made after trying a particular approach to implementation.

Outer setting domain

A large majority of respondents across all nine cases discussed patient needs and resources and cosmopolitanism as important factors influencing the implementation of EBPs (see Additional file 1: Table S1). However, discussion of these and other outer setting constructs did not vary noticeably by level of implementation.

Inner setting domain

Three inner setting constructs clearly distinguished systems by implementation outcomes (Additional file 1: Table S2): leadership engagement, tension for change, and access to information and knowledge.

Leadership engagement

Leadership engagement, a sub-construct under readiness for implementation, was referenced in all systems. The accounts of this construct described how leaders were involved in the implementation of the EBPs, and included establishing program goals and individual roles, providing ongoing support and guidance, providing direct services for patients, and monitoring progress with feedback to staff. Eight out of nine systems described this factor as exerting a positive impact. However, the magnitude of influence appeared higher among the high-performing systems. In the high-performing systems, all respondents shared positive views. For example, in system A, both the front-line navigators and staff leaders thought that the program director played a key role in pulling things together to make the CHANGE program happen:

“So she (program director) was the main driver of the project, and …kind of showed us what the – how we would benefit from this, how important it is.”

System B described how executive-level leaders were at the table asking questions and ensuring accountability through reporting of outcomes. Leadership engagement was mentioned less frequently across the medium- and low-performing systems, with fewer concrete examples of how leaders were actively engaged.

These patterns were repeated in the initial phases of the implementation process. In all high-performing systems, both the leadership and key implementers were able to present the CHANGE program or the EBPs in a way that was relevant and useful, which helped them earn buy-in from providers and staff for actual implementation. Such communication typically occurred during staff or leadership meetings, in which project leaders presented the benefits and rationale for the program with supporting data and evidence. In contrast, two of the lower-performing systems expressed uncertainty about the project at the outset rather than enthusiasm. Because discussions of leadership engagement varied in both magnitude and depth across performance groups, combined with varying approaches to “selling” the program, leadership engagement appears to have an important influence on, and to distinguish, implementation outcomes.

Tension for change

Tension for change, or the degree to which stakeholders perceive the current situation as intolerable or needing change, was discussed in eight of the nine systems. Respondents often described this construct in the context of identifying gaps in quality measures (e.g., low cancer screening rates), deficiencies in reaching patients (e.g., existing outreach only reached a limited number of patients), and problems with completing cancer screenings (e.g., high no-show rates for colonoscopies or low return rates of FIT kits). Such gaps and problems were often identified through quality improvement efforts and were frequently reported as the rationale for participating in the CHANGE program. The construct was most commonly discussed (with higher magnitude) in the high-performing systems, with detailed and specific descriptions of why the program was needed. For example, a respondent from system B shared how the program addressed a specific need identified by providers:

“The oncologist and GI specialists approached us, this is a need that …. We’re having a lot of no-shows for colonoscopies. Can you guys help us? Usually when a specialist approaches and tells you, it’s real.”

Respondents from the medium-performing systems also discussed tension for change. System D respondents discussed challenges associated with getting accurate data on screening rates, given the need to get reports back from specialists and enter them into the EMR in a way that allows for report generation. System E discussed their low screening rates relative to neighboring FQHCs. In contrast, lower-performing systems either did not discuss any tension for change or made general observations about how the program provided an opportunity to address cancer. This pattern suggests that identifying specific deficiencies in measures and practice might have created more tension for change, leading to increased momentum for implementing the EBPs.

Access to information and knowledge

Access to information and knowledge refers to the ease of access to information about the intervention and how to incorporate it into organizational processes. Descriptions centered on training and educational resources available for the appointed implementers of the CHANGE program. Most accounts were positive among the high-performing systems, with adequate training for staff often cited as key to successful implementation. Training was discussed less frequently in medium- and lower-performing systems, and accounts tended to be descriptive. Negative perspectives referred to a lack of consensus on training needs and the absence of written plans or protocols to guide implementation. For example, a respondent from system H described:

“So there was no set process. There wasn’t a written down set process, which was – it made it more difficult … when you’re on your own, it can be kind of overwhelming sometimes to make sure that you hit everything that you need to do.”

Given the notable differences in access to information and availability of training in the higher-performing systems relative to the lower-performing systems, it appears that this construct facilitated successful implementation. This construct also appeared to be highly related to leadership engagement, given that the knowledge and information needed for implementation were most often initiated and provided by organizational leaders.

Individual characteristics domain

Constructs from the individual characteristics domain were not often discussed across the nine systems (Additional file 1: Table S3). Knowledge and beliefs about the intervention and personal attributes were discussed most commonly, but none of the constructs from this domain differentiated systems by levels of performance.

Implementation process domain

Respondents discussed the process of implementing the EBPs in depth, with substantive discussions focused on planning, executing, and reflecting and evaluating. Despite the salience of key constructs within this domain, only formally appointed internal implementation leaders varied by level of implementation (Additional file 1: Table S4).

Engaging formally appointed internal implementation leaders

Formally appointed internal implementation leaders, a sub-construct under engaging, was discussed by all nine systems. Implementation leaders were typically patient navigators, health educators, nurses, or medical assistants who carried out the major tasks of the CHANGE program in each system. High- and medium-performing systems tended to discuss the importance of these roles more than lower-performing systems, and the magnitude of influence of this construct was higher among high- and medium-performing systems. For example, in system A, the medical assistants were appointed as primary implementers of the CHANGE program to increase CRC screening, which improved workflow and required less direct involvement of the providers:

“We basically taught the MAs how to do the bulk of the work and then when they got to either a question or at that point at all, once – if they had a FIT test and it was obvious it was done in the last year, then the MA knew not to order it. But if there was no FIT test, the MA ordered it.”

Systems B and C also described the roles of implementation leaders in depth, with respondents from both systems describing the specific roles and responsibilities of patient navigators in EBP implementation. In contrast, respondents from system D did not discuss a single point person, and system E shared that no new staff were hired but that clinical leadership were very engaged in the entire intervention planning and implementation process. Lower-performing systems tended to describe implementation leaders less often and in neutral terms or, in the case of system G, acknowledged the challenges associated with not having a dedicated point person:

“But we knew going in from the beginning that it was something that was going to be a challenge because we didn’t have a dedicated staff member. We couldn’t take somebody out of clinical time to send them out to the community.”

These differences in the presence of a formally appointed implementation leader across performance levels, combined with the more frequent and detailed discussions in higher-performing systems, suggest that this construct differentiates levels of implementation.

Discussion

This study used CFIR to examine factors influencing implementation of EBPs to promote breast and colorectal cancer screening in safety net systems. One of our main findings was that leadership engagement clearly distinguished high-, medium-, and low-performing systems. Our findings are consistent with a number of studies that have identified leadership engagement as key to successful implementation [32–34]. Using similar methods in a study focused on implementation of a weight loss program in five VA systems, Damschroder and Lowery found that leadership engagement was strongly associated with successful implementation; leaders allocated staff time, solved problems, obtained needed resources, and kept the program visible [14]. In contrast, a recent study on implementation of internet-based patient-provider communication by Varsi and colleagues used similar methods and did not find leadership engagement to be consistently related to implementation [25]. Such variance in the influence of leadership across study settings suggests that while leadership engagement is generally considered key to success, how to maximize leadership’s positive influence on implementation might depend on the nature of the setting and the EBP.

In addition to leadership engagement, formally appointed implementation leaders strongly distinguished level of implementation in our study. Within the VA study, most of the systems had a strong coordinator, and thus, formal implementation leaders did not distinguish between systems due to lack of variability [14]. In the Varsi et al. study, formally appointed implementation leaders weakly distinguished level of implementation [25]. Our findings are consistent with a qualitative study by Martinez-Gutierrez et al., which found designation of non-physician staff to do specific prevention activities had a major impact on cancer screening rates in a single FQHC system [35]. Taken together, our findings on leadership engagement and formally appointed implementation leaders suggest that successful implementation of EBPs requires leaders at multiple levels to function with high capacity. Leaders at the executive level should be highly engaged in goal setting, establishing roles, getting buy-in from providers, providing training, and monitoring progress, while front-line implementation leaders are crucial in day-to-day operations and pilot testing, as well as identifying inefficiencies in current processes and suggesting changes for program delivery [36].

We identified four additional constructs that distinguished levels of implementation across systems; two of these were from the intervention characteristics domain: adaptability and trialability. Higher-performing systems in our study gave detailed accounts of how they adapted or tailored the intervention to fit their setting; systems that struggled with implementation tended to describe adaptations to educational materials rather than to clinic flow or processes. Higher-performing systems also conducted pilot studies of the EBPs to inform broader implementation, often using a quality improvement process. Interestingly, neither of these constructs was a distinguishing factor in the Damschroder and Lowery study, due to lack of variation across systems for adaptability and because none of the systems conducted a trial of the intervention. Trialability was a distinguishing factor in the Varsi et al. study [25]. Overall, our findings on the importance of various aspects of the intervention itself are consistent with the large body of research on diffusion of innovations and related studies that have examined how intervention characteristics influence implementation [11, 34, 37, 38].

Two additional inner setting constructs distinguished levels of implementation in our study: tension for change and access to information and knowledge. Similar to prior studies, systems with higher levels of implementation tended to describe a concrete need for the program [25]. In our study, needs tended to be identified through quality improvement efforts showing room for improvement in screening rates or challenges with existing screening approaches. Access to information and knowledge usually related to training opportunities provided by leadership, with high-performing systems describing trainings more often and in greater detail. Training was not available as part of the VA implementation project, and training was viewed as adequate in all of the systems in the Varsi et al. study [14, 25]. Other studies have similarly identified training of key staff as important for implementation [16, 33]. With respect to the other inner setting constructs, Sohng et al. found that communication and resource availability were positively associated with the likelihood of EBP implementation for cancer screening [39]. We did not find these constructs to distinguish levels of implementation, in large part because networks and communications appeared to have similar influence (in terms of magnitude and valence) across the nine systems.

Another notable finding from our study is that, despite variability among the nine safety net systems in terms of size, population served, and organizational infrastructure, none of these factors differentiated implementation levels. In a systematic review that included cancer screening efforts in both FQHCs and other health organizations, Anhang Price et al. identified organizational factors and processes that helped to increase cancer screening rates at each step in the cancer screening process [36]. Interestingly, similar to our findings, organizational structures, such as size and type, were not associated with screening rates. Rather, improvements in screening rates were largely driven by organizational strategies to (1) limit the number of interfaces across organizational boundaries; (2) recruit patients, promote referrals, and facilitate appointment scheduling; and (3) promote continuous patient care. These organizational processes were similar to those described in our study and correspond to the roles of leadership and formally appointed implementation leaders, which were both distinguishing factors for implementation success.

Our study employed a slightly different rating approach than the rating system proposed by Damschroder and Lowery and the CFIR guide [14]. The “valence” dimension in both approaches is essentially the same and characterizes the influence of CFIR constructs as positive, negative, mixed, or neutral/descriptive. The “strength” dimension in Damschroder and Lowery’s rating system is generally comparable to our “magnitude” dimension. However, their “strength” rating incorporated several distinct factors, including the level of agreement among participants, strength of language, and use of concrete examples. We used level of agreement among participants to inform our “valence” dimension (e.g., did participants agree on whether the construct had a positive or negative influence), and we found it challenging to assess strength of language in a systematic manner with multiple respondents within each case and multiple mentions of constructs within each transcript. Thus, we chose to assess magnitude as described above, which factored in the extent to which a particular construct was discussed across respondents from each site.

Limitations

Several limitations should be considered when assessing the performance of each system and our related findings. The first concerns the accuracy of the quantitative measures of completed screenings, which might in turn affect the accuracy of our implementation outcome categories. The participating systems used a variety of methods to specify their screening targets, and to some extent these were arbitrarily determined. Targets were not set using a standardized formula, and grantees were not given specific criteria for setting them. Some systems set goals using baseline data from previous years, but baseline data did not always reflect a system’s actual performance due to reporting and EMR system errors. These limitations reflect the reality of implementation in resource-constrained safety net health care settings; nevertheless, they have implications for the validity of our implementation outcome categories, despite our efforts to triangulate across measures. Second, the systems that performed better in terms of implementation outcomes tended to be addressing CRC rather than breast cancer. This may reflect that systems ready to take on CRC had higher capacity, given that addressing CRC within FQHCs is relatively new compared to efforts to increase breast cancer screening. Third, the interview guide was developed to answer ACS’s evaluation questions and was not informed by CFIR. As a result, some constructs that did not emerge as salient might have been had they been asked about explicitly. On the other hand, the interview guides were designed to identify barriers and facilitators to implementation with mostly general questions not specifically addressing CFIR constructs; the themes that emerged from responses to these general questions were therefore, in essence, the factors participants perceived as most salient.

Though the study was not originally designed to test CFIR, the fact that so many of the emergent factors identified in this post hoc, deductive fashion were consistent with CFIR domains and constructs could serve to validate the utility of CFIR [34]. As a meta-theoretical framework, CFIR was developed to serve as a comprehensive typology of factors influencing implementation, to enable comparison of results across contexts, and to identify new theoretical developments. A recent systematic review found that the majority of published studies used CFIR to guide data analysis alone [40]. Our study adds to this repository of studies that could facilitate cross-context comparisons. Future research could compare findings between studies using CFIR to guide data analysis alone and those using CFIR throughout the data collection and analysis process.

Conclusions

Overall, our study suggests that despite the resource-constrained environment, with moderate support, such as the grants and technical assistance from ACS, safety net health systems can successfully implement EBPs to increase cancer screening. While size, patient population, and organizational structure did not distinguish implementation success, the key factors for successful implementation may include strong leadership and front-line implementation leaders, adequate training and information, and organizational processes to identify deficiencies, pilot test new ideas, and make adaptations accordingly. Future interventions aiming to increase cancer screening or implement other EBPs in similar safety net settings could consider intervening on these factors when designing their efforts. Practitioners implementing EBPs could consider focusing their resources on implementation strategies that target these factors, such as training designated implementation leaders and setting up organizational processes to provide information and ongoing feedback to those involved in day-to-day implementation. Our study also contributes to the growing body of research on CFIR. As described by Damschroder and Lowery [14], contributing to a repository of findings will enable the field to begin to understand which constructs have predictive ability, which can be manipulated for better implementation outcomes, and the situations in which specific constructs are salient. Our study also provides suggestions for exploring relationships among CFIR constructs, which will aid future studies of construct validity.

Abbreviations

ACS: American Cancer Society

CFIR: Consolidated Framework for Implementation Research

CHANGE: Community Health Advocates Implementing Nationwide Grants for Empowerment and Equity

CRC: colorectal cancer

EBP: evidence-based practice

EMR: electronic medical record

FIT: fecal immunochemical test

FQHC: Federally Qualified Health Center

MA: medical assistant

References

  1. American Cancer Society. Cancer facts & figures 2015. Atlanta: American Cancer Society; 2015.

  2. Siegel RL, Miller KD, Jemal A. Cancer statistics, 2015. CA Cancer J Clin. 2015;65(1):5–29.

  3. National Cancer Institute. Cancer trends progress report. Bethesda, MD: National Institutes of Health, U.S. Department of Health and Human Services; 2015.

  4. Centers for Disease Control and Prevention. Vital signs: colorectal cancer screening test use—United States, 2012. MMWR. 2013;62(44):881–8.

  5. Centers for Disease Control and Prevention. Use of colorectal cancer tests—United States, 2002, 2004, and 2006. MMWR. 2008;57(10):253–8.

  6. Sabatino SA, Lawrence B, Elder R, Mercer SL, Wilson KM, DeVinney B, Melillo S, Carvalho M, Taplin S, Bastani R, et al. Effectiveness of interventions to increase screening for breast, cervical, and colorectal cancers: nine updated systematic reviews for the guide to community preventive services. Am J Prev Med. 2012;43(1):97–118.

  7. Cole AM, Esplin A, Baldwin LM. Adaptation of an evidence-based colorectal cancer screening program using the Consolidated Framework for Implementation Research. Prev Chronic Dis. 2015;12:E213.

  8. Health Resources and Services Administration. Health Center Program, 2014 national report, quality of care measures. http://bphc.hrsa.gov/uds/datacenter.aspx?q=t6b&year=2014&state=. Accessed 9 Jan 2016.

  9. Brownson RC, Baker EA, Leet TL, Gillespie KN, True WR. Evidence-based public health. 2nd ed. New York: Oxford University Press; 2010.

  10. Woolf SH. The meaning of translational research and why it matters. JAMA. 2008;299(2):211–3.

  11. Tabak RG, Khoong EC, Chambers DA, Brownson RC. Bridging research and practice: models for dissemination and implementation research. Am J Prev Med. 2012;43(3):337–50.

  12. Nilsen P. Making sense of implementation theories, models, and frameworks. Implement Sci. 2015;10:53.

  13. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.

  14. Damschroder LJ, Lowery JC. Evaluation of a large-scale weight management program using the consolidated framework for implementation research (CFIR). Implement Sci. 2013;8:51.

  15. Gould NJ, Lorencatto F, Stanworth SJ, Michie S, Prior ME, Glidewell L, Grimshaw JM, Francis JJ. Application of theory to enhance audit and feedback interventions to increase the uptake of evidence-based transfusion practice: an intervention development protocol. Implement Sci. 2014;9:92.

  16. Forman J, Harrod M, Robinson C, Annis-Emeott A, Ott J, Saffar D, Krein SL, Greenstone CL. First things first: foundational requirements for a medical home in an academic medical center. J Gen Intern Med. 2014;9:9.

  17. Luck J, Bowman C, York L, Midboe A, Taylor T, Gale R, Asch S. Multimethod evaluation of the VA’s peer-to-peer toolkit for patient-centered medical home implementation. J Gen Intern Med. 2014;29 Suppl 2:S572–578.

  18. Richardson JE, Abramson EL, Pfoh ER, Kaushal R. Bridging informatics and implementation science: evaluating a framework to assess electronic health record implementations in community settings. AMIA Annu Symp Proc. 2012;2012:770–8.

  19. Ramsey A, Lord S, Torrey J, Marsch L, Lardiere M. Paving the way to successful implementation: identifying key barriers to use of technology-based therapeutic tools for behavioral health care. J Behav Health Serv Res. 2016;43(1):54–70.

  20. Kalkan A, Roback K, Hallert E, Carlsson P. Factors influencing rheumatologists’ prescription of biological treatment in rheumatoid arthritis: an interview study. Implement Sci. 2014;9(1):153.

  21. Robins LS, Jackson JE, Green BB, Korngiebel D, Force RW, Baldwin LM. Barriers and facilitators to evidence-based blood pressure control in community practice. J Am Board Fam Med. 2013;26(5):539–57.

  22. Sanchez V, Steckler A, Nitirat P, Hallfors D, Cho H, Brodish P. Fidelity of implementation in a treatment effectiveness trial of reconnecting youth. Health Educ Res. 2007;22(1):95–107.

  23. Sherr K, Gimbel S, Rustagi A, Nduati R, Cuembelo F, Farquhar C, Wasserheit J, Gloyd S. Systems analysis and improvement to optimize pMTCT (SAIA): a cluster randomized trial. Implement Sci. 2014;9:55.

  24. Prior M, Elouafkaoui P, Elders A, Young L, Duncan EM, Newlands R, Clarkson JE, Ramsay CR. Evaluating an audit and feedback intervention for reducing antibiotic prescribing behaviour in general dental practice (the RAPiD trial): a partial factorial cluster randomised trial protocol. Implement Sci. 2014;9:50.

  25. Varsi C, Ekstedt M, Gammon D, Ruland CM. Using the Consolidated Framework for Implementation Research to identify barriers and facilitators for the implementation of an internet-based patient-provider communication service in five settings: a qualitative study. J Med Internet Res. 2015;17(11):e262.

  26. Gilmer TP, Katz ML, Stefancic A, Palinkas LA. Variation in the implementation of California’s Full Service Partnerships for persons with serious mental illness. Health Serv Res. 2013;48(6 Pt 2):2245–67.

  27. Allen CL, Harris JR, Hannon PA, Parrish AT, Hammerback K, Craft J, Gray B. Opportunities for improving cancer prevention at federally qualified health centers. J Cancer Educ. 2014;29(1):30–7.

  28. Steps for increasing colorectal cancer screening rates: a manual for community health centers. http://nccrt.org/wp-content/uploads/0305.60-Colorectal-Cancer-Manual_FULFILL.pdf. Accessed 9 Jan 2016.

  29. Yin R. Case study research: design and methods. Thousand Oaks, CA: Sage Publications; 2003.

  30. Patton M. Qualitative research and evaluation methods. 3rd ed. Thousand Oaks, CA: Sage Publications; 2002.

  31. Miles MB, Huberman AM. Qualitative data analysis: an expanded sourcebook. 2nd ed. Thousand Oaks: Sage Publications; 1994.

  32. Palacio A, Keller VF, Chen J, Tamariz L, Carrasquillo O, Tanio C. Can physicians deliver chronic medications at the point of care? Am J Med Qual. 2015. doi:10.1177/1062860614568646.

  33. Green CA, McCarty D, Mertens J, Lynch FL, Hilde A, Firemark A, Weisner CM, Pating D, Anderson BM. A qualitative study of the adoption of buprenorphine for opioid addiction treatment. J Subst Abuse Treat. 2014;46(3):390–401.

  34. Ilott I, Gerrish K, Booth A, Field B. Testing the Consolidated Framework for Implementation Research on health care innovations from South Yorkshire. J Eval Clin Pract. 2013;19(5):915–24.

  35. Martinez-Gutierrez J, Jhingan E, Angulo A, Jimenez R, Thompson B, Coronado GD, Sanchez J, Petrik A, Kapka T, et al. Cancer screening at a federally qualified health center: a qualitative study on organizational challenges in the era of the patient-centered medical home. J Immigr Minor Health. 2013;15(5):993–1000.

  36. Anhang Price R, Zapka J, Edwards H, Taplin SH. Organizational factors and the cancer screening process. J Natl Cancer Inst Monogr. 2010;40:38–57.

  37. Pankratz M, Hallfors D, Cho H. Measuring perceptions of innovation adoption: the diffusion of a federal drug prevention policy. Health Educ Res. 2002;17(3):315–26.

  38. Rogers E. Diffusion of innovations. 5th ed. New York: Free Press; 2003.

  39. Sohng HY, Kuniyuki A, Edelson J, Weir RC, Song H, Tu SP. Capability for change at community health centers serving Asian Pacific Islanders: an exploratory study of a cancer screening evidence-based intervention. Asian Pac J Cancer Prev. 2013;14(12):7451–7.

  40. Kirk M, Kelley C, Yankey N, Birken S, Abadie B, Damschroder L. A systematic review of the use of the Consolidated Framework for Implementation Research. Implement Sci. 2016;11(1):17.

Acknowledgements

This study was funded through a contract from the American Cancer Society. Dr. Kegler’s contribution was also supported by the Intervention Development, Dissemination, and Implementation developmental shared resource, a core supported by the Winship Cancer Institute of Emory University.

Availability of data and materials

Data will not be made available due to the qualitative nature of the data and the difficulty of removing all identifiers.

Authors’ contributions

SL led the analysis team, conducted analyses, and wrote sections of the manuscript. MK advised on the analysis plan, conducted analyses, and wrote sections of the manuscript. MC, EP, DB, AH, and RM analyzed data and edited the manuscript. JM, RM, and KR designed and conducted the initial evaluation study and edited the manuscript. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Consent for publication

Not applicable.

Ethics approval

The study protocol was reviewed and approved by the Morehouse School of Medicine Institutional Review Board.

Author information

Corresponding author

Correspondence to Michelle C. Kegler.

Additional information

An erratum to this article can be found at http://dx.doi.org/10.1186/s13012-016-0494-3.

Additional file

Additional file 1.

Table S1. Site-ordered matrix of magnitude and valence of outer setting constructs by level of implementation. Table S2. Site-ordered matrix of magnitude and valence of inner setting constructs by level of implementation. Table S3. Site-ordered matrix of magnitude and valence of individual characteristics constructs by level of implementation. Table S4. Site-ordered matrix of magnitude and valence of implementation process constructs by level of implementation. (DOCX 21 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Cite this article

Liang, S., Kegler, M.C., Cotter, M. et al. Integrating evidence-based practices for increasing cancer screenings in safety net health systems: a multiple case study using the Consolidated Framework for Implementation Research. Implementation Sci 11, 109 (2016). https://doi.org/10.1186/s13012-016-0477-4
