
A KT intervention including the evidence alert system to improve clinician’s evidence-based practice behavior—a cluster randomized controlled trial

Abstract

Background

It is difficult to foster research utilization among allied health professionals (AHPs). Tailored, multifaceted knowledge translation (KT) strategies are now recommended but are resource intensive to implement. Employers need effective KT solutions, but little is known about the impact and viability of multifaceted KT strategies that use an online KT tool, their effectiveness with AHPs, or their effect on evidence-based practice (EBP) decision-making behavior. The study aim was to measure the effectiveness of a multifaceted KT intervention, including a customized KT tool, in changing the EBP behavior, knowledge, and attitudes of AHPs.

Methods

This was an evaluator-blinded, cluster randomized controlled trial conducted in an Australian community-based cerebral palsy service. 135 AHPs (physiotherapists, occupational therapists, speech pathologists, psychologists and social workers) from four regions were cluster randomized (n = 4 clusters) to either the KT intervention group (n = 73 AHPs) or the control group (n = 62 AHPs), using computer-generated random numbers concealed in opaque envelopes by an independent officer. The KT intervention included a three-day skills training workshop and multifaceted workplace supports to redress barriers (paid EBP time, mentoring, system changes and access to an online research synthesis tool). The primary outcome (self- and peer-rated EBP behavior) was measured using the Goal Attainment Scale (individual level). Secondary outcomes (knowledge and attitudes) were measured using exams and the Evidence Based Practice Attitude Scale.

Results

The intervention group’s primary outcome scores improved relative to the control group; however, when clustering was taken into account, the findings were non-significant: self-rated EBP behavior [effect size 4.97 (95% CI -10.47, 20.41) (p = 0.52)]; peer-rated EBP behavior [effect size 5.86 (95% CI -17.77, 29.50) (p = 0.62)]. Statistically significant improvements in EBP knowledge were detected [effect size 2.97 (95% CI 1.97, 3.97) (p < 0.0001)]. Change in EBP attitudes was not statistically significant.

Conclusions

Improvement in EBP behavior was not statistically significant after adjusting for the cluster effect; however, similar improvements in peer ratings suggest behaviorally meaningful gains. The large variability in behavior observed between clusters suggests that barrier assessments, and the KT interventions built on them, may need to target subgroups within an organization.

Trial registration

Registered on the Australian New Zealand Clinical Trials Registry (ACTRN12611000529943).


Introduction

Cerebral palsy (CP) is the most common physical disability in childhood [1]. Of people with CP, three in four are in pain; one in two have an intellectual disability; one in three cannot walk; one in three have a hip displacement; one in four cannot talk; one in four have epilepsy; one in four have a behavior disorder; one in four have bladder control problems; one in five have a sleep disorder; one in five dribble; one in ten are blind; one in fifteen are tube fed; and one in twenty-five are deaf [2]. Allied health professionals (AHPs) who treat people with CP are therefore faced with complex clinical decision making. In addition, as in many other fields, new evidence-based CP treatments are rapidly emerging [3]. AHPs provide the majority of health services to these people and therefore need up-to-date knowledge and skills in providing evidence-based interventions. AHPs endorse providing evidence-based care [4, 5], but goodwill alone does not guarantee the latest research is translated and applied within practice [6, 7]. Survey research suggests that there is a significant gap between the best available evidence and the treatments actually offered to people with CP [8, 9]. Lack of time [10], lack of skill in searching and appraising research [11, 12], and lack of access to databases, compounded by the large volume of published research, are barriers to new knowledge being translated in a timely and efficient way [13].

Knowledge translation (KT) strategies including workshops [14], mentoring [15], outreach visits [16], audit and feedback [17], and reminders and memos [18] aim to embed research into practice and lead to small to moderate changes in health professionals’ behavior. Even though KT is an emergent science, it is known that KT strategies should be tailored to be context specific and planned in response to a thorough assessment of barriers and facilitators [19, 20]. Although there is no firm evidence that multifaceted strategies are more effective than single interventions, it is plausible that they would be more effective if each component and the overall strategy were designed in response to a barriers analysis [19]. In the field of CP, a tailored KT intervention was pilot tested with good results, but the lack of a controlled comparison group precludes certainty about the findings [7].

In addition to tailoring KT interventions, it is recommended that theory is used to guide the KT journey [21]. A number of KT frameworks that incorporate key theories suited to various target settings and professional groups have been proposed. One example is the knowledge-to-action process (KTA) [22] (Figure 1), which provides a comprehensive and flexible framework to guide and monitor a multifaceted KT intervention. Although the use of theory is recommended, there are few rigorous studies detailing the application of theory to a KT intervention [23].

Figure 1

Knowledge-to-Action (KTA) process. Source: Graham et al. (2006) [22].

Central to the KTA process, and indeed the basic unit of a KT intervention, is up-to-date research being available and accessible to the target group [19, 22]. The basis of a KT intervention is synthesis of research in the form of systematic reviews, evidence summaries or online KT tools. Although health professionals generally prefer systematic reviews to original research articles [24], they report that systematic reviews do not always answer their clinical questions [3, 25]. There is an increasing call for customized, easy-to-read summaries. Straus and Haynes (2009) describe the ‘5S’ model [3, 26] for organizing evidence-based information resources (Figure 2). The model is displayed as a pyramid with five levels (studies, syntheses, synopses, summaries, systems) that aim to be increasingly readable, reliable, and relevant as one moves up the pyramid. The top two levels (summaries and systems) may also be referred to as KT tools [19]. Straus and Haynes recommend a top-down approach for answering clinical questions.

Figure 2

The 5S pyramid model of evidence-based information resources. Adapted from Straus & Haynes (2009) [3].

Previous studies measuring the effectiveness of evidence-based information resources (5S pyramid level 3) detected a change in use but did not detect a change in evidence-based practice (EBP) behavior [27, 28]. Dobbins et al. [29] found that targeted messages (5S pyramid levels 3–4) were more effective than knowledge brokering and access to research evidence for incorporating evidence into public health policies and programs. Although evidence-based information resources are available for AHPs (PEDro, OTseeker, speechBITE), they sit at 5S pyramid level 3 (synopses), and no studies have rigorously evaluated the usefulness of these tools. No KT tools (5S pyramid levels 4 or 5) specifically targeting AHPs working with people with CP were found in the literature.

A KT tool presenting up-to-date research in a user-friendly way is, however, only one piece of a KT strategy. Changing EBP behavior is complex because a range of behaviors is required to be an ‘evidence-based AHP.’ Previous studies have either used self-developed measures [30–33] or have measured only a narrow domain of EBP behavior [34, 35]. KT research measuring EBP behavior across a range of allied health professions is also absent from our evidence base [36, 37].

The primary aim of this cluster randomized controlled trial (RCT) was to evaluate the effectiveness of a multifaceted KT intervention for improving the EBP behavior of AHPs. The central element of the KT intervention was an online evidence-based information resource called the Evidence Alert System (EAS). The EAS contained actionable messages (5S pyramid levels 4 and 5) and clinical decision-making tools, and used the ‘top-down’ approach [3]. The other elements of the multifaceted intervention (workshop, mentoring and documentation changes) reinforced, educated and supported the approach set out in the EAS, ensuring that the decision-making tools were embedded into participants’ workflow. The secondary aims were to measure the effect of the KT intervention on EBP knowledge and attitudes. Our study sought to address key gaps in the current KT evidence by: using an RCT to measure the effect of a multi-component KT intervention centered around the EAS; measuring a wide range of EBP behaviors; and sampling a wide range of AHPs. Aims were measured at the individual participant level. Findings are reported according to the updated CONSORT statement for cluster randomized trials [38] (see Additional file 1).

Methods

Trial design and study setting

A multi-site, evaluator-blinded, cluster RCT was conducted in a community-based CP service in New South Wales (NSW), Australia. NSW is Australia’s most populous state, with approximately 7.25 million people (32% of the national total). The CP service had 16 sites across NSW, organized into four geographically distinct regions, where AHP services were provided. Each region had centralized management for the sites within its boundaries, including clinical seniors, professional development activities and mentoring, and was thus considered a natural cluster grouping. An independent officer not associated with the trial used computer-generated random numbers to create four opaque envelopes based upon simple randomization. The four geographically distinct clusters were randomized to the intervention or control group. Cluster randomization was chosen to reduce the risk of contamination that may have occurred if individuals working at the same site were randomized to different interventions. Individual participants were consented after randomization for pragmatic reasons. The first author (LC) obtained participants’ written consent, and data collection took place before and after the workshops, at worksites or nearby locations, between June 2009 and August 2009.

Ethics

The project was approved by the National Health and Medical Research Council Human Research Ethics Committee at Cerebral Palsy Alliance (Approval number: 2009-05-01) and the University of Notre Dame Ethics Committee. The study was registered with the Australian New Zealand Clinical Trials Registry (ACTRN12611000529943).

Participants

Eligible participants were AHPs employed at the study site providing direct clinical services to people with CP and their families. Figure 3 shows the flow of participants through the study. Exclusion criteria were: managers (non-clinical staff); staff without university qualifications; and staff who were not scheduled to work on the day of the workshops.

Figure 3

Participant flow diagram for the RCT, from randomization to primary analysis.

Intervention

Theoretical model

The theoretical model underpinning the project was the KTA process (Figure 1) developed by KT field leaders [22]. The KTA process involves, first, knowledge creation (i.e., production of research syntheses) and, second, knowledge application (i.e., identification of the research-practice gap; adaptation of the research syntheses to the local context; identification of utilization barriers; selection of tailored KT strategies to redress barriers; and monitoring, evaluating and sustaining knowledge use). Emerging evidence suggests that KT interventions underpinned by theory may be superior to those that are not theory-informed, although more research is needed to confirm this [37]. The advantage of theory-informed KT interventions is that they offer a generalizable framework for other researchers and organizations and provide guidance for designing KT interventions to overcome known barriers [37].

Assessment of barriers and facilitators

A comprehensive assessment of barriers and facilitators was conducted over a one-year period. This took the form of meetings between managers, policy makers, researchers, senior clinicians, and knowledge brokers, as well as observation of clinical staff. As there is no firm evidence regarding the superiority of one KT intervention over another [19], researchers and knowledge brokers jointly designed the KT intervention based on whether each barrier was modifiable by a pragmatically feasible intervention. Modifiable barriers included lack of skill, time and knowledge. Partially modifiable or non-modifiable barriers were: that evidence was considered not clinically relevant; that staff did not have access to full electronic databases; and that some staff had negative attitudes towards EBP. Modifiable barriers, theoretical underpinnings and strategies for the KT intervention are detailed in Table 1. Details of how the components of our multifaceted intervention correspond to the KTA process are shown in Table 2.

Table 1 Theoretical basis and strategies to address modifiable barriers
Table 2 KT intervention with corresponding KTA phases

Development of multifaceted intervention

Strategic planning meetings were held every six weeks in the year leading up to baseline and included researchers, knowledge brokers, policy makers, and managers. Knowledge brokers were senior staff with allied health backgrounds (one per discipline, employed in the most senior role for that discipline). Policy makers were the senior executive staff and managers involved in the direct management of AHPs in the organization. Goals around EBP behaviors were set, and strategies to achieve these goals were jointly selected based on the barriers literature and assessment of the study site. The EAS formed the basis of our KT intervention and was developed by research staff and knowledge brokers using the freely available software MediaWiki (Figure 4). The EAS included succinct summaries of all the CP research evidence about intervention, prognosis and outcome measurement. Intervention evidence was labeled using the traffic light system [7], where each intervention was given a traffic light color with an actionable message attached: green = ‘go’, where high-quality evidence supports the effectiveness of the intervention; yellow = ‘measure’, where low-quality or conflicting evidence supports the effectiveness of the intervention, therefore measure the outcomes to ensure the goal is met; and red = ‘stop’, where high-quality evidence demonstrates the intervention is ineffective, therefore do not use this approach. Decision-making algorithms with embedded evidence summaries were also available on the EAS. Each section of the EAS included abstracts of research articles, descriptions of the intervention or assessment, and a hyperlink to access the full article.
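In effect, the traffic light labeling is a simple decision rule. The following minimal sketch captures the mapping just described; the category labels and message wording are illustrative assumptions, not the EAS’s actual implementation:

    def traffic_light(evidence_quality: str, effective: bool) -> tuple:
        """Map evidence about an intervention to a color and actionable message.

        evidence_quality: 'high' or 'low/conflicting' (assumed categories).
        effective: whether the evidence favors the intervention.
        """
        if evidence_quality == "high" and effective:
            return ("green", "Go: high-quality evidence supports this intervention.")
        if evidence_quality == "high" and not effective:
            return ("red", "Stop: high-quality evidence shows this intervention "
                           "is ineffective; do not use it.")
        return ("yellow", "Measure: low-quality or conflicting evidence; "
                          "measure outcomes to ensure the goal is met.")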

Figure 4

Evidence Alert System infogram.

Experimental group intervention

The intervention group (total n = 73; region A = 39; region B = 34) received a multifaceted KT intervention. A three-day skills training workshop included:

  1. Part one (two days) of the interactive workshop provided training to apply the EAS to decision-making within daily clinical work. A series of clinical examples was explored using the EAS interface, with training about evidence levels, clinical decision-making algorithms, and the use of two psychometrically sound, cross-disciplinary outcome measures.

  2. Part two (one day) of the workshop, held eight weeks later, involved participants presenting a case study detailing how they had used the EAS to inform their clinical decision making with a real patient. This was followed by discussion with a small group of colleagues, designed to help participants demonstrate the integration of their learning into their own clinical work.

Investigators and each senior clinician [39] led the workshops using knowledge brokering strategies [40]. The workshops used a mix of instructional techniques, including didactic, interactive, role-playing and reflective approaches, with collaboration within and between professional groups.

On day one of the three-day workshop, participants were informed that they had access to the EAS and that there were policy changes, including: paid, quarantined EBP time; changes to client documentation, including reminders to use the EAS; embedding of outcome measurement within the workflow; and mentoring by knowledge brokers.

The KT intervention was directed at the cluster level (workshop part one, access to the EAS, and policy changes) and the individual level (mentoring, and workshop part two). Details of the KT intervention are shown in Table 2.

Control group

The control group (total n = 62; region C = 29, region D = 33) received an equal-intensity intervention about communication skills, with no EBP content and no use of the EAS: a three-day workshop about AHP-client communication skills plus workplace supports (paid communication time, strategic planning, mentoring) related to communication skills. To minimize the risk of contamination, the control group was not informed about the EAS, paid EBP time, knowledge brokers, or mentoring until the end of the trial. The changes to documentation were not implemented in the control group clusters until the end of the RCT.

Outcome measures

Primary outcome

The primary endpoint was change in self- and peer-rated EBP behavior from baseline to eight weeks (individual and cluster level), measured using Goal Attainment Scaling (GAS) [41]. Participants rated themselves against the self-GAS scales and then, to limit measurement bias, a well-acquainted peer rated their performance on the peer-GAS scales in a separate environment. Selection of the GAS instrument increased study rigor because it overcame known instrumentation limitations in the KT literature surrounding EBP behavior measurement, including: responsivity (GAS has established validity, reliability, and exquisite responsivity to change, whereas systematic review evidence indicates that for nearly all valid and reliable EBP instruments, test responsivity is unknown [42]); tailoring (GAS is an individualized measure of change, so progress towards any target behavior, including health professional behaviors [43], could be validly, reliably, and sensitively measured, including tailored EBP behaviors unique to the study site, e.g., notifications to the CP Register); comprehensive measurement (because GAS is individualized, we could comprehensively measure all desired EBP behaviors, whereas systematic review evidence indicates that other psychometrically sound EBP instruments measure knowledge instead of behavior, or measure only one discrete aspect of EBP behavior [29, 42, 44–46]); and the lack of a gold-standard tool (accurate, flawless measurement of EBP behavior is not yet established in the literature [47]). Even though direct observation of EBP behavior (such as simulated patients or video/audio recordings of practice) is perceived as methodologically preferable to indirect (proxy) reports of EBP behavior (such as chart audit, patient report, self-report, or peer-report), systematic review evidence indicates that direct measures often fail validity testing [47]; relying on them could have introduced other flaws to our clinical trial. Moreover, collecting direct measures throughout NSW, a state-wide service, would have introduced prohibitive trial costs (NSW’s landmass is 3.25 times larger than the United Kingdom’s, and is larger than California and New Mexico combined) when weighed against the cost-benefit of a potentially invalid measure. Even though self-report proxy measures are an imperfect measure of actual behavior [47], leading KT agencies, such as the Canadian Institutes of Health Research, advocate for self-report because the process of self-reflection plays a critical role in initiating behavioral changes within organizations. In light of current EBP behavior measurement limitations, GAS offered the best way forward: it was psychometrically sound, comprehensively measured EBP behavior, was practical across an entire state, and could be tailored to the study site.

The GAS scales were devised by a multidisciplinary panel of experts familiar with the EBP behaviors of the eligible AHPs, as per literature recommendations for scale development. Twenty-five goal scales were developed, one-half relating to EBP behaviors and the other one-half relating to communication behavior for the control group. The scales measured EBP behaviors such as: use of gold-standard goal-setting tools to plan services; use of CP classification systems to accurately prognosticate; use of evidence (e.g., via the EAS) to quickly choose evidence-based classification systems, interventions and outcome measures; and use of gold-standard outcome measures to routinely evaluate services. The GAS scales are available from the corresponding author on request. As per the test manual, raw scores were converted to GAS T-scores, enabling inferential statistical analysis of continuous data.
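For readers unfamiliar with GAS scoring, the conventional Kiresuk-Sherman conversion [41] from attainment levels to a T-score takes the form below; the rho = 0.3 convention is the usual default and is an assumption here, as the trial followed its own test manual:

    T = 50 + \frac{10 \sum_i w_i x_i}{\sqrt{(1 - \rho)\sum_i w_i^2 + \rho \left(\sum_i w_i\right)^2}}, \qquad \rho = 0.3

where x_i in {-2, -1, 0, +1, +2} is the attainment level on goal i, w_i is its weight, and rho is the assumed inter-goal correlation. A score of 50 corresponds to goals attained exactly as expected, and 10 points correspond to one standard deviation, which is why the sample size calculation below treats a 10-point gain as a 1 sd improvement.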

Secondary outcomes

Self- and peer-rated attitude change was measured using subscales three and four of the Evidence-Based Practice Attitude Scale (EBPAS) [48], which have acceptable psychometric properties. EBP knowledge was measured via open-ended exam questions with right/wrong answers pre-defined by the panel of experts and derived from published evidence.

EAS utilization was measured by the number of web page hits, collected via a software program that tracked cluster-specific IP addresses in batches. Web hit data collection was concealed from participants, minimizing the likelihood of observer bias affecting EAS use.
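The tracking software itself is not named in the paper. Purely as an illustration, hit counts could be aggregated by cluster from a standard web access log along these lines; the IP-prefix-to-cluster mapping and function name are hypothetical:

    from collections import Counter

    # Hypothetical mapping of site IP prefixes to the four cluster regions.
    CLUSTER_BY_PREFIX = {"10.1.": "cluster_1", "10.2.": "cluster_2",
                         "10.3.": "cluster_3", "10.4.": "cluster_4"}

    def hits_per_cluster(log_lines):
        """Count page hits per cluster from access-log lines (IP is the first field)."""
        counts = Counter()
        for line in log_lines:
            ip = line.split()[0]
            for prefix, cluster in CLUSTER_BY_PREFIX.items():
                if ip.startswith(prefix):
                    counts[cluster] += 1
                    break
        return counts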

Adverse events

An adverse event log was not required because the intervention was educational in nature and therefore posed no risk.

Blinding

Blinding was judiciously applied wherever pragmatically possible, resulting in a single-blinded trial. This included: independent evaluator blinding to group allocation and phase of the trial when scoring outcome data; and partial participant and facilitator blinding to the specific EBP behavior of interest to the investigators. Participants and workshop facilitators were clearly aware of the content of the workshops, however were not aware of which intervention (KT intervention or communication skills) was of specific interest to the researchers. Fidelity of the evaluator blinding was not formally investigated.

Sample size

We sought to test the efficacy of an organizational KT intervention and therefore conducted the study within one agency, the largest of its kind in Australia. This methodological decision imposed pragmatic limitations on the obtainable sampling frame. We successfully recruited 88% of the available sampling frame; however, the total number of employees at the agency was less than the number of participants required to reach statistical power if correlation of outcome variables within sites (intra-cluster correlation) was observed. A sample size calculation identified the probability of detecting an effect size of 1 at an alpha level of 0.05 (one-tailed) and a power of 90%. For Goal Attainment Scaling [mean T-score = 50, standard deviation (sd) = 10], an improvement of 10 points or more in the KT intervention group relative to the control group was sought (an improvement of 1 sd). The expert panel agreed that a 10-point increase in GAS T-scores equated to significant clinical improvement in EBP behaviors. The calculation assumed a 20% non-consent rate and a 20% attrition rate, indicating a sample size requirement of 72 (38 per group) for a non-cluster trial. We enrolled 135 professionals (n = 73 intervention and n = 62 control) at four sites. Based on an estimated intra-cluster correlation coefficient (ICC) of 0.1, we calculated that the study was underpowered to demonstrate an improvement of 10 points between groups if a cluster effect of this size was observed (variance inflation factor = 4.3).
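As a back-of-the-envelope check, the reported variance inflation follows from the standard design-effect formula for cluster trials, DE = 1 + (m - 1) x ICC, where m is the average cluster size. The 135-participant, 4-cluster, ICC = 0.1, and 72-participant figures below are the paper’s own; the arithmetic is a sketch:

    import math

    n_total, n_clusters, icc = 135, 4, 0.1
    m = n_total / n_clusters               # average cluster size, ~33.75
    design_effect = 1 + (m - 1) * icc      # ~4.3, matching the reported figure

    # Inflating the non-clustered requirement of 72 participants:
    n_required = math.ceil(72 * design_effect)   # ~308, far above the 135 available
    print(design_effect, n_required)

This makes the power problem concrete: a cluster effect of 0.1 multiplies the required sample roughly fourfold, well beyond the agency’s workforce.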

Statistical analysis

All statistical analyses were carried out with individual participants as the unit of analysis, on an intention-to-treat basis, using SPSS for Windows 19.0.0 (SPSS Inc., Chicago, IL) and SAS 9.3 (SAS Institute, Cary, NC).

We conducted generalized linear regression analyses for the primary and secondary endpoints, using post-intervention GAS T-score as the outcome variable and adjusting for potential confounding variables (baseline GAS T-score, profession, group allocation, grade level, and years in the disability field). Effect sizes with 95% confidence intervals (CIs) were calculated, and significance was set at 0.05. Because these estimates would underestimate the standard errors and confidence intervals for the effect size if participant outcomes were correlated within cluster sites, mixed effects models with cluster included as a random effect were used to adjust for a cluster effect when calculating the effect size for each outcome [49]. The ICC was calculated from the mixed effects model, and bootstrapping (1,000 samples generated) was performed to calculate 95% confidence intervals for the ICC.
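The paper reports using SPSS and SAS; purely to illustrate the structure of this analysis, a minimal Python/statsmodels sketch of a mixed model with a cluster random intercept and the derived ICC might look like the following. The column names and synthetic data are hypothetical stand-ins, not the trial data:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-in data: one row per participant.
    rng = np.random.default_rng(42)
    n = 135
    df = pd.DataFrame({
        "cluster": rng.integers(0, 4, size=n),
        "base_gas": rng.normal(50, 10, size=n),
    })
    df["group"] = (df["cluster"] < 2).astype(int)   # clusters 0-1 = intervention
    df["post_gas"] = df["base_gas"] + 5 * df["group"] + rng.normal(0, 8, size=n)

    # Mixed effects model with cluster as a random intercept, as in the paper.
    fit = smf.mixedlm("post_gas ~ base_gas + group", df, groups=df["cluster"]).fit()
    print(fit.params["group"])                    # cluster-adjusted effect size

    # ICC = between-cluster variance / (between + within variance)
    var_between = float(fit.cov_re.iloc[0, 0])    # random-intercept variance
    var_within = float(fit.scale)                 # residual variance
    print(var_between / (var_between + var_within))

The paper’s 95% CIs for the ICC came from 1,000 bootstrap resamples; in this framing, each resample would redraw clusters with replacement and refit the same model.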

Results

A total of 135 AHPs (n = 73 intervention and n = 62 control) were recruited (see Figure 3), representing 88% of the available sampling frame. At baseline, participant attributes were mostly comparable between groups, the exception being prior EBP education attendance (88% for the intervention group compared to 66% for controls) (Table 3). To account for this baseline difference, prior EBP education was treated as a covariate in the regression model. Included professionals were physiotherapists (24%), speech pathologists (26%), occupational therapists (37%), psychologists (6%), and social workers (7%). 64% of participants had over five years’ experience working with people with disabilities, although 63% of the cohort had worked at the study site for less than five years. 94% of the sample had English as their first language. Return rates for the GAS and EBPAS ratings were between 60% and 82% (see Figure 3), with the primary endpoint having more missing data. The KT intervention group had 19/73 (31%) eight-week GAS forms missing, compared to 17/62 (30%) in the control group; this difference was not statistically significant (chi square p = 0.95).

Table 3 Baseline characteristics of participants

Clustering effect

The ICCs for the primary endpoints were 0.33 (95% CI 0.16, 0.69) for self-rated GAS T-scores and 0.64 (95% CI 0.36, 0.80) for peer-rated GAS T-scores (Table 4). That is, 33% of the total variation observed in self-rated GAS T-scores, and 64% of that observed in peer-rated GAS T-scores, can be attributed to differences between sites rather than differences between individuals within each site. These results demonstrate that the correlation of GAS T-scores within sites was very large, with large variation in scores between sites. This cluster effect substantially depleted the study power (because participant scores within each site cannot be regarded as independent). ICCs were smaller for the secondary outcomes (Table 4).
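In variance-component terms, the ICC quoted here is the standard partition used for cluster trials:

    \rho = \frac{\sigma^2_{\text{between}}}{\sigma^2_{\text{between}} + \sigma^2_{\text{within}}}

so rho = 0.33 means between-site variance accounts for a third of the total variance in self-rated GAS T-scores, and rho = 0.64 means nearly two-thirds for peer ratings.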

Table 4 Primary and secondary outcomes

Effectiveness of KT intervention

Primary outcome—EBP behaviors

Self-rated GAS T-scores increased more in the intervention group than in controls; however, this difference was not statistically significant after adjusting for the cluster effect: effect size 4.43 [95% CI -10.63 to 19.49 (p = 0.56)] (Table 4). Baseline self-rated GAS T-score was a predictor in the model [effect size 0.71 (95% CI 0.52–0.90) (p < 0.0001)], indicating that lower performers improved but remained lower performers, and higher performers improved and remained leading performers. No other covariates were significantly predictive of outcome.

Peer-rated GAS T-scores of the intervention group also increased compared to controls, but the difference was not statistically significant after adjusting for the cluster effect: effect size 6.75 [95% CI -16.95 to 30.44 (p = 0.57)] (Table 4). As with the self-rated GAS T-scores, the final peer-rated GAS T-score was predicted by the baseline peer-rated GAS T-score [effect size 0.30 (95% CI 0.15–0.45) (p < 0.0001)]. No other covariates were significantly predictive of peer-rated GAS T-scores. The peer-rated GAS T-scores for each cluster mirrored the self-rated GAS cluster T-scores, suggesting the observed study effects were behaviorally meaningful, despite low study power to demonstrate a statistically significant difference.

Secondary outcomes—EBP knowledge and attitudes

EBP knowledge scores increased in the intervention group compared to controls, with a statistically significant effect size of 2.97 (95% CI 1.97, 3.97, p < 0.0001). The ICC for this outcome was zero, and the effect remained statistically significant after adjusting for the cluster effect: 2.97 (95% CI 1.97, 3.97, p < 0.0001). Baseline score (p < 0.0001) and professional category (p = 0.03) were also predictors in the model. There was minimal to no correlation between participants within sites for self- or peer-rated EBP attitudes; however, we did not demonstrate a statistically significant intervention effect (Table 4). The intervention group accessed the EAS more than the control group (KT intervention group 6,123 total hits; control group 1,677 hits).

Secondary analyses examining mean outcome scores for each cluster revealed that both clusters in the KT intervention group improved their self- and peer-rated GAS T-scores as expected (Table 5). One of the control group clusters (cluster 3) also responded as expected, with very minimal increases in self- and peer-rated GAS T-scores from baseline to eight weeks (self-rated T-score change = 0.22; peer-rated T-score change = 2.27). The other control group cluster (cluster 4) had high baseline scores (self-rated GAS T-score = 66.41; peer-rated GAS T-score = 73.32) and further improved by 10.15 points over the eight-week study period, despite not receiving the KT intervention (Table 5). We performed post hoc Spearman’s correlation tests to assess for correlation between knowledge and attitude scores (at baseline, at eight weeks, and as change scores) overall, by treatment group, and within individual clusters. No statistically significant positive correlations were found.

Table 5 Mean outcome scores for each cluster

Discussion

We conducted a cluster RCT to evaluate whether a multifaceted KT strategy changed AHPs’ EBP behaviors. Both clusters in the KT intervention group improved within the study period, but not statistically significantly more than the control group. We consider this null finding to be a probable type II error, because our study was underpowered: the number of participants required to account for clustering of EBP behaviors within sites exceeded the number of employees available. Our study demonstrated increased use of our evidence-based resource (the EAS); however, we were unable to confirm that this translated to a statistically significant change in EBP behavior. This finding is in line with previous research involving evidence-based resources [27, 28]. Owing to the type II error, we remain unsure of the true effect of our KT intervention, but we discovered a number of potentially important findings that may contribute to future KT endeavours and the body of research.

The high ICCs (ranging from 0.33 to 0.64) for the EBP behavior measures indicated substantial correlation of behaviors within clusters and differences in behaviors between clusters. When we examined the mean change scores for each cluster, one of the four clusters (cluster 4), which was randomly allocated to the control group, was an obvious outlier, with high baseline GAS T-scores, high baseline knowledge scores, and increased self- and peer-rated GAS T-scores over the study period.

Variability between natural groupings (such as clinical, departmental or regional groups) has been noted in the KT literature previously [29, 32]. Perhaps the high baseline EBP scores for cluster 4 reflected a positive EBP culture and practices attributable to cluster 4’s manager [32, 50, 51]. The notion that a manager can strongly influence research culture is by no means new [29, 52], as some opinion leaders are known to strongly influence EBP behavior [50, 53]. Cluster 4’s manager was active in promoting EBP behavior among staff, and a large range of KT interventions were in place in cluster 4 prior to this study, including audit and feedback, financial incentives, workshops and mentoring. It is conceivable that cluster 4 therefore had better readiness and receptivity to EBP supports, having essentially been engaging in active KT for a longer period than the other clusters [32]. That said, positive EBP culture is considered to be related to positive EBP attitudes [52], and EBPAS scores measuring attitude change in cluster 4 were no different from the other clusters at baseline or eight weeks. This may reflect measurement error, or may indicate that positive attitudes in cluster 4 were not necessary because mandatory policies within that cluster were the driving force behind the higher GAS scores.

Secondary outcomes

Our hypothesis that the KT intervention would improve knowledge was supported: the KT intervention group’s knowledge exam scores showed a statistically significant improvement compared to the control group. Combined with the absence of a statistically significant behavior change, this finding supports previous research suggesting that knowledge change alone does not consistently translate into behavior change [7, 32, 37, 54]. Interestingly, change in knowledge scores was not affected by the cluster effect, suggesting that knowledge is not as susceptible to peer influences as behavior.

We found no correlation between behavior, knowledge, and attitude change scores within or between clusters. Attitudes remained unchanged. We hypothesize that the lack of change in EBP attitudes in our study may be explained by: (1) high baseline EBP attitudes and a conceivable ceiling effect on the EBPAS. This is plausible because EBP had been a focus in the organization for some time prior to the RCT; in this case, positive attitudes at baseline, increased knowledge scores and policy changes may together have produced the behaviorally meaningful changes observed. There is, however, no normative EBPAS data for AHPs, so it is difficult to say whether baseline attitudes were high compared to AHPs in other organizations; (2) the EBPAS subscales potentially not being sensitive enough to detect attitude change, the psychometrics for sensitivity in this population being unknown; (3) the EBPAS being an accurate, sensitive measure, and attitudes genuinely not improving from the KT intervention. This third possibility supports the notion that improved knowledge was not adequate to lead to statistically significant behavior change and that a shift in attitudes was also needed [55]. Conversely, the behaviorally meaningful change that was observed potentially bypassed the need for attitude change through strategies such as mandatory use of documentation and outcome measures; and (4) EBP attitudes taking longer than knowledge to change, with the eight-week trial too short to detect change.

Strengths and limitations

The study had a number of strengths, including its rigorous design and broad, robust behavior measurement. Our chosen measurement instrument (GAS) was sensitive to change [56, 57] and appeared accurate, as self- and peer-rated scores mirrored each other. A distinguishing feature of our study was that we measured a wide set of behaviors among AHPs working with people with CP. The mix of AHPs in our sample is fairly representative of other community-based disability organizations, increasing external validity. This is the first RCT in the KT literature involving social workers, psychologists or occupational therapists [37]. The KT intervention itself was a study strength: it was based on a solid theoretical model [21–23], designed in response to a comprehensive barriers assessment, had clearly defined desired outcomes, and included a range of interventions, not only educational ones [37].

There are a number of study limitations. First and foremost, the pragmatic constraints that limited the number of available clusters and participants led to low statistical power, causing a probable type II error. Second, the large differences observed between clusters suggest that we should perhaps have tailored the KT intervention to each cluster rather than to the whole organization. Third, the evidence that proxy behavior measures represent actual behavior is not firmly established, but the preferred rival direct measures also lack validity and reliability [41, 58]; moreover, direct measurement was not affordable in our study given the geography involved, so indirect measurement tools were used [43, 59]. To minimize measurement bias, systematic review recommendations regarding indirect measures were followed, including using: acceptable indirect measures [41, 59] (such as self- and peer-rated behavior triangulated with unbiased web hit data) [42]; measurement tools with strong psychometric properties [47]; more than one tool to measure behavior change [47]; and a sound theoretical model as the basis of the intervention [21]. Fourth, the time frame of the trial was short, considering that many EBP behaviors and system/organizational changes (such as documenting client goals and mentoring) take time to develop [60]; a follow-up study is needed to measure whether the EBP behaviors were sustained [14]. Finally, the return rate of the GAS exam form and EBPAS was imperfect (60-82%), with the eight-week data having more missing data.

Conclusions

The KT literature recommends tailoring KT interventions to overcome known barriers within organizations [19, 20]; our findings suggest that this may need to go further, with KT interventions designed for subgroups within an organization. Differences in workplace culture may mean that dramatically different barriers need different KT interventions to be effective [32]. Considering the importance of management-led change, targeting policy makers and managers may be beneficial. This has been done in the public health sector [29]; however, no studies customizing KT to policy makers/management were found in the allied health literature. Our study provides rich pilot data for planning and conducting an adequately powered cluster RCT in future.

Our study highlighted the methodological challenges of conducting empirical research in a community-based organization with fixed cluster and participant numbers. Whether RCTs are a feasible option in community organizations is debatable, and other research designs may be more appropriate [29, 61]. Researchers, policy makers, and clients need to collaborate effectively to ensure that reliable, relevant research becomes embedded into everyday care in a timely way. Considering that the cornerstone of KT is access to reliable research, the authors plan to make the EAS publicly available.

References

  1. Reddihough DS, Collins KJ: The epidemiology and causes of cerebral palsy. Aust J Physiother. 2003, 49: 7-14.

  2. Novak I, Hines M, Goldsmith S, Barclay R: Clinical prognostic messages from a systematic review on cerebral palsy. Pediatrics. 2012, 130: e1285-e1312. 10.1542/peds.2012-0924.

  3. Straus S, Haynes R: Managing evidence-based knowledge: the need for reliable, relevant and readable resources. Can Med Assoc J. 2009, 180: 942. 10.1503/cmaj.081697.

  4. Heiwe S, Kajermo KN, Tyni-Lenné R, Guidetti S, Samuelsson M, Andersson IL, Wengstrom Y: Evidence-based practice: attitudes, knowledge and behaviour among allied health care professionals. Int J Qual Health Care. 2011, 23 (2): 198-209. 10.1093/intqhc/mzq083.

  5. Stevenson K, Lewis M, Hay E: Do physiotherapists' attitudes towards evidence-based practice change as a result of an evidence-based educational programme?. J Eval Clin Pract. 2004, 10: 207-217. 10.1111/j.1365-2753.2003.00479.x.

  6. Davis D: Continuing education, guideline implementation, and the emerging transdisciplinary field of knowledge translation. J Contin Educ Health Prof. 2006, 26: 5-12. 10.1002/chp.46.

  7. Novak I, McIntyre S: The effect of education with workplace supports on practitioners' evidence-based practice knowledge and implementation behaviours. Aust Occup Ther J. 2010, 57 (6): 386-393.

  8. Saleh M, Korner-Bitensky N, Snider L, Malouin F, Mazer B, Kennedy E, Roy MA: Actual vs. best practices for young children with cerebral palsy: a survey of paediatric occupational therapists and physical therapists in Quebec, Canada. Dev Neurorehabil. 2008, 11: 60-80. 10.1080/17518420701544230.

  9. Hanna SE, Russell DJ, Bartlett DJ, Kertoy M, Rosenbaum PL, Wynn K: Measurement practices in pediatric rehabilitation: a survey of physical therapists, occupational therapists, and speech-language pathologists in Ontario. Phys Occup Ther Pediatr. 2007, 27: 25-42.

  10. O'Connor S, Pettigrew C: The barriers perceived to prevent the successful implementation of evidence based practice by speech and language therapists. Int J Lang Commun Disord. 2009, 44: 1018-1035.

  11. McCluskey A: Occupational therapists report a low level of knowledge, skill and involvement in evidence-based practice. Aust Occup Ther J. 2003, 50: 3-12. 10.1046/j.1440-1630.2003.00303.x.

  12. Salbach N, Jaglal S, Korner-Bitensky N, Rappolt S, Davis D: Practitioner and organizational barriers to evidence-based practice of physical therapists for people with stroke. Phys Ther. 2007, 87: 1284. 10.2522/ptj.20070040.

  13. Glasziou P, Ogrinc G, Goodman S: Can evidence-based medicine and clinical quality improvement learn from each other?. BMJ Qual Saf. 2011, 20: i13. 10.1136/bmjqs.2010.046524.

  14. Forsetlund L, Bjorndal A, Rashidian A, Jamtvedt G, O'Brien MA, Wolf F, Davis D, Odgaard-Jensen J, Oxman AD: Continuing education meetings and workshops: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2009, 2: CD003030.

  15. Parmelli E, Flodgren G, Beyer F, Baillie N, Schaafsma ME, Eccles MP: The effectiveness of strategies to change organisational culture to improve healthcare performance: a systematic review. Implement Sci. 2011, 6: 33. 10.1186/1748-5908-6-33.

  16. O'Brien MA, Rogers S, Jamtvedt G, Oxman AD, Odgaard-Jensen J, Kristoffersen DT, Forsetlund L, Bainbridge D, Freemantle N, Davis DA: Educational outreach visits: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2007, 4: CD000409.

  17. Jamtvedt G, Young JM, Kristoffersen DT, O'Brien MA, Oxman AD: Audit and feedback: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2006, 2: CD000259.

  18. Shojania KG, Jennings A, Mayhew A, Ramsay CR, Eccles MP, Grimshaw J: The effects of on-screen, point of care computer reminders on processes and outcomes of care. Cochrane Database Syst Rev. 2009, 3: CD001096.

  19. Grimshaw JM, Eccles MP, Lavis JN, Hill SJ, Squires JE: Knowledge translation of research findings. Implement Sci. 2012, 7: 50. 10.1186/1748-5908-7-50.

  20. Davies HTO, Powell AE, Rushmer RK: Healthcare professionals' views on clinician engagement in quality improvement. 2007, Available at: http://www.health.org.uk/publications/engaging-clinicians-report/ (Accessed: 28/07/2011).

  21. Davies P, Walker AE, Grimshaw JM: A systematic review of the use of theory in the design of guideline dissemination and implementation strategies and interpretation of the results of rigorous evaluations. Implement Sci. 2010, 5: 14. 10.1186/1748-5908-5-14.

  22. Graham ID, Logan J, Harrison MB, Straus SE, Tetroe J, Caswell W, Robinson N: Lost in knowledge translation: time for a map?. J Contin Educ Health Prof. 2006, 26: 13-24. 10.1002/chp.47.

  23. Grol RP, Bosch MC, Hulscher ME, Eccles MP, Wensing M: Planning and studying improvement in patient care: the use of theoretical perspectives. Milbank Q. 2007, 85: 93-138. 10.1111/j.1468-0009.2007.00478.x.

  24. McKinlay RJ, Cotoi C, Wilczynski NL, Haynes RB: Systematic reviews and original articles differ in relevance, novelty, and use in an evidence-based service for physicians: PLUS project. J Clin Epidemiol. 2008, 61: 449-454. 10.1016/j.jclinepi.2007.10.016.

  25. Badgett R: Why would physicians undervalue reviews by the Cochrane Collaboration?. J Clin Epidemiol. 2008, 61: 419-421. 10.1016/j.jclinepi.2007.11.022.

  26. Chambers D, Wilson PM, Thompson CA, Hanbury A, Farley K, Light K: Maximizing the impact of systematic reviews in health care decision making: a systematic scoping review of knowledge translation resources. Milbank Q. 2011, 89: 131-156. 10.1111/j.1468-0009.2011.00622.x.

  27. Gülmezoglu A, Langer A, Piaggio G, Lumbiganon P, Villar J, Grimshaw J: Cluster randomised trial of an active, multifaceted educational intervention based on the WHO Reproductive Health Library to improve obstetric practices. BJOG: Int J Obstet Gynaecol. 2007, 114: 16-23.

  28. Haynes R, Holland J, Cotoi C, McKinlay R, Wilczynski N, Walters L, Jedras D, Parrish R, McKibbon K: McMaster PLUS: a cluster randomized clinical trial of an intervention to accelerate clinical use of evidence-based information from digital libraries. J Am Med Inform Assoc. 2006, 13: 593-600. 10.1197/jamia.M2158.

  29. Dobbins M, Hanna S, Ciliska D, Manske S, Cameron R, Mercer S, O'Mara L, DeCorby K, Robeson P: A randomized controlled trial evaluating the impact of knowledge translation and exchange strategies. Implement Sci. 2009, 4: 61. 10.1186/1748-5908-4-61.

  30. Bekkering G, Hendriks H, Van Tulder M, Knol D, Hoeijenbos M, Oostendorp R, Bouter L: Effect on the process of care of an active strategy to implement clinical guidelines on physiotherapy for low back pain: a cluster randomised controlled trial. Qual Saf Health Care. 2005, 14: 107-112.

  31. Rebbeck T, Maher C, Refshauge K: Evaluating two implementation strategies for whiplash guidelines in physiotherapy: a cluster-randomised trial. Aust J Physiother. 2006, 52: 165. 10.1016/S0004-9514(06)70025-3.

  32. Pennington L, Roddam H, Burton C, Russell I, Russell D: Promoting research use in speech and language therapy: a cluster randomized controlled trial to compare the clinical effectiveness and costs of two training strategies. Clin Rehabil. 2005, 19: 387. 10.1191/0269215505cr878oa.

  33. Stevenson K, Lewis M, Hay E: Does physiotherapy management of low back pain change as a result of an evidence-based educational programme?. J Eval Clin Pract. 2006, 12: 365-375. 10.1111/j.1365-2753.2006.00565.x.

  34. Fritsche L, Greenhalgh T, Falck-Ytter Y, Neumayer H, Kunz R: Do short courses in evidence based medicine improve knowledge and skills? Validation of Berlin questionnaire and before and after study of courses in evidence based medicine. BMJ. 2002, 325: 1338. 10.1136/bmj.325.7376.1338.

  35. Ramos KD, Schafer S, Tracz SM: Validation of the Fresno test of competence in evidence based medicine. BMJ. 2003, 326: 319. 10.1136/bmj.326.7384.319.

  36. Dizon JM, Grimmer-Somers KA, Kumar S: Current evidence on evidence-based practice training in allied health: a systematic review of the literature. Int J Evid Based Healthc. 2012, 10: 347-360. 10.1111/j.1744-1609.2012.00295.x.

  37. Scott SD, Albrecht L, O'Leary K, Ball GDC, Hartling L, Hofmeyer A, Jones CA, Klassen TP, Burns KK, Newton AS: Systematic review of knowledge translation strategies in the allied health professions. Implement Sci. 2012, 7: 70. 10.1186/1748-5908-7-70.

  38. Campbell MK, Piaggio G, Elbourne DR, Altman DG: Consort 2010 statement: extension to cluster randomised trials. BMJ. 2012, 345: e5661. 10.1136/bmj.e5661.

  39. Flodgren G, Parmelli E, Doumit G, Gattellari M, O'Brien MA, Grimshaw J, Eccles MP: Local opinion leaders: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2011, 8.

  40. Russell D, Rivard L, Walter S, Rosenbaum P, Roxborough L, Cameron D, Darrah J, Bartlett D, Hanna S, Avery L: Using knowledge brokers to facilitate the uptake of pediatric measurement tools into clinical practice: a before-after intervention study. Implement Sci. 2010, 5: 92. 10.1186/1748-5908-5-92.

  41. Kiresuk T, Sherman R: Goal attainment scaling: a general method for evaluating comprehensive community mental health programs. Community Ment Health J. 1968, 4: 443-453. 10.1007/BF01530764.

  42. Shaneyfelt T, Baum K, Bell D, Feldstein D, Houston T, Kaatz S, Whelan C, Green M: Instruments for evaluating education in evidence-based practice: a systematic review. JAMA. 2006, 296: 1116. 10.1001/jama.296.9.1116.

  43. Cusick A, Ottenbacher K: Goal attainment scaling: continuing education evaluation tool. J Contin Educ Health Prof. 1994, 14: 141-154. 10.1002/chp.4750140303.

  44. Curran JA, Grimshaw JM, Hayden JA, Campbell B: Knowledge translation research: the science of moving research into policy and practice. J Contin Educ Health Prof. 2011, 31: 174-180. 10.1002/chp.20124.

  45. Straus SE, Ball C, Balcombe N, Sheldon J, McAlister FA: Teaching evidence based medicine skills can change practice in a community hospital. J Gen Intern Med. 2005, 20: 340-343. 10.1111/j.1525-1497.2005.04045.x.

  46. Lucas BP, Evans AT, Reilly BM, Khodakov YV, Perumal K, Rohr LG, Akamah JA, Alausa TM, Smith CA, Smith JP: The impact of evidence on physicians' inpatient treatment decisions. J Gen Intern Med. 2004, 19: 402-409. 10.1111/j.1525-1497.2004.30306.x.

  47. Hrisos S, Eccles M, Francis J, Dickinson H, Kaner E, Beyer F, Johnston M: Are there valid proxy measures of clinical behaviour? A systematic review. Implement Sci. 2009, 4: 37. 10.1186/1748-5908-4-37.

  48. Aarons G: Mental health provider attitudes toward adoption of evidence-based practice: the evidence-based practice attitude scale (EBPAS). Ment Health Serv Res. 2004, 6: 61-74.

  49. Donner A, Klar N: Design and analysis of cluster randomization trials in health research. 2000, London: Arnold.

  50. Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O: Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 2004, 82: 581-629. 10.1111/j.0887-378X.2004.00325.x.

  51. Rogers EM: Diffusion of innovations. 1995, New York: Free Press.

  52. Aarons G, Sawitzky A: Organizational climate partially mediates the effect of culture on work attitudes and staff turnover in mental health services. Adm Policy Ment Health Ment Health Serv Res. 2006, 33: 289-301. 10.1007/s10488-006-0039-1.

  53. French B, Thomas L, Baker P, Burton C, Pennington L, Roddam H: What can management theories offer evidence-based practice? A comparative analysis of measurement tools for organisational context. Implement Sci. 2009, 4: 28. 10.1186/1748-5908-4-28.

  54. McCluskey A, Lovarini M: Providing education on evidence-based practice improved knowledge but did not change behaviour: a before and after study. BMC Med Educ. 2005, 5: 40. 10.1186/1472-6920-5-40.

  55. Graham ID, Bick D, Tetroe J, Straus SE, Harrison MB: Measuring outcomes of evidence-based practice: distinguishing between knowledge use and its impact. Eval Impact Implement Evidence-Based Pract. 2010, 1: 18.

  56. Steenbeek D, Gorter JW, Ketelaar M, Galama K, Lindeman E: Responsiveness of goal attainment scaling in comparison to two standardized measures in outcome evaluation of children with cerebral palsy. Clin Rehabil. 2011, 25: 1128-1139. 10.1177/0269215511407220.

  57. Flodgren G, Eccles M, Shepperd S, Scott A, Parmelli E, Beyer F: An overview of reviews evaluating the effectiveness of financial incentives in changing healthcare professional behaviours and patient outcomes. Cochrane Database Syst Rev. 2011, 7: CD009255.

  58. Dickinson HO, Hrisos S, Eccles MP, Francis J, Johnston M: Statistical considerations in a systematic review of proxy measures of clinical behaviour. Implement Sci. 2010, 5: 20. 10.1186/1748-5908-5-20.

  59. Eccles M, Hrisos S, Francis J, Kaner E, Dickinson H, Beyer F, Johnston M: Do self-reported intentions predict clinicians' behaviour: a systematic review. Implement Sci. 2006, 1: 28. 10.1186/1748-5908-1-28.

  60. Thomson O'Brien MA, Freemantle N, Oxman A, Wolf F, Davis D, Herrin J: Continuing education meetings and workshops: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2001, 1: CD003030.

  61. Walshe K: Understanding what works, and why, in quality improvement: the need for theory-driven evaluation. Int J Qual Health Care. 2007, 19: 57-59. 10.1093/intqhc/mzm004.


Acknowledgements

The authors would like to thank Cerebral Palsy Alliance for their support of this study, for understanding the importance of EBP and adopting systemic changes. The authors also wish to acknowledge the clinical consultants (Cathy Morgan, Salli-Ann Craik, Natalie Morton, Leigha Dark and Elise Stumbles) and research staff at Cerebral Palsy Alliance for their leadership, contributions and assistance, and most importantly we would like to thank staff for their participation in the study.

Author information


Corresponding author

Correspondence to Lanie Campbell.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contribution

The study was carried out as part of a Doctor of Philosophy candidature by LC. IN, SM and staff at the Research Institute of Cerebral Palsy Alliance (the study site) assisted in study design and in developing the Evidence Alert System (searching databases for articles, synthesizing results, and converting the information to electronic format). LC, IN, LM and staff from the Research Institute and senior staff at Cerebral Palsy Alliance facilitated the workshops that formed part of the KT interventions (experimental and control groups). The participants of the study were all staff at the Cerebral Palsy Alliance. All authors had full access to all of the data, including statistical reports and tables, and take responsibility for the integrity of the data and the accuracy of the data analysis. All authors read and approved the final manuscript.

Electronic supplementary material


Additional file 1: Table S1: CONSORT 2010 checklist of information to include when reporting a cluster randomised trial. Table S2: Extension of CONSORT for abstracts to reports of cluster randomised trials. (DOCX 33 KB)


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Campbell, L., Novak, I., McIntyre, S. et al. A KT intervention including the evidence alert system to improve clinician’s evidence-based practice behavior—a cluster randomized controlled trial. Implementation Sci 8, 132 (2013). https://doi.org/10.1186/1748-5908-8-132
