Designing a rapid response program to support evidence-informed decision-making in the Americas region: using the best available evidence and case studies

Abstract

Background

The objective of this work was to inform the design of a rapid response program to support evidence-informed decision-making in health policy and practice for the Americas region. Specifically, we focus on the following: (1) What are the best methodological approaches for rapid reviews of the research evidence? (2) What other strategies are needed to facilitate evidence-informed decision-making in health policy and practice? and (3) How best to operationalize a rapid response program?

Methods

The evidence used to inform the design of a rapid response program included (i) two rapid reviews of methodological approaches for rapid reviews of the research evidence and strategies to facilitate evidence-informed decision-making, (ii) supplementary literature in relation to the “shortcuts” that could be considered to reduce the time needed to complete rapid reviews, (iii) four case studies, and (iv) supplementary literature to identify additional operational issues for the design of the program.

Results

There is no agreed definition of rapid reviews in the literature and no agreed methodology for conducting them. Better reporting of rapid review methods is needed. The literature found in relation to shortcuts will be helpful in choosing shortcuts that maximize timeliness while minimizing the impact on quality. Evidence for other strategies that can be used concurrently to facilitate the uptake of research evidence, including evidence drawn from rapid reviews, is presented. Operational issues that need to be considered in designing a rapid response program include the implications of a “user-pays” model, the importance of recruiting staff with the right mix of skills and qualifications, and ensuring that the impact of the model on research use in decision-making is formally evaluated.

Conclusions

When designing a new rapid response program, greater attention needs to be given to specifying the rapid review methods and reporting these in sufficient detail to allow a quality assessment. It will also be important to engage in other strategies to facilitate the uptake of the rapid reviews and to evaluate the chosen model in order to make refinements and add to the evidence base for evidence-informed decision-making.

Background

While research evidence is only one of many inputs into decision-making when it comes to health policy [1], it is important to try to maximize its usefulness and uptake. A range of programs and efforts already exist to promote the uptake of research evidence into policy and practice. These include efforts to conduct systematic reviews of the evidence (e.g., the Cochrane Collaboration) as well as efforts to package research evidence, including systematic reviews to inform policy and practice (e.g., Evidence-Informed Policy Network (EVIPNet), Health Technology Assessment agencies). However, while extremely useful, these programs are often not able to provide access to research quickly nor answer specific policy questions in a timely way.

There is a wide literature describing the barriers and facilitators to the consideration of research evidence in decision-making that can inform the design of a rapid response program to address this gap [2–6]. The most frequently reported barriers to evidence uptake are poor access to good quality relevant research and the lack of timely and relevant research output [5, 7]. The most frequently reported facilitators are collaboration between researchers and policymakers, improved relationships and skills [5], and research that accords with the beliefs, values, interests or practical goals, and strategies of decision-makers [8].

What is a rapid response program and what types of programs already exist?

A rapid response program that provides rapid reviews of high-quality research evidence, contextualized and targeted to the needs of decision-makers, delivered with a fast turn-around time, and involving interaction between researchers and decision-makers could overcome some of these barriers and facilitate the uptake of research into policy and practice.

When describing and designing a “rapid response program,” it is important to consider (1) the product offered; (2) the strategies utilized to facilitate the uptake of the product into decision-making; and (3) how the program is operationalized to ensure that it runs smoothly. These three things together will determine if the rapid response program meets its intended purpose, i.e., to facilitate the use of high-quality research in health decision-making.

The number of rapid response programs offering such a service is increasing [9, 10]. The authors of a recent study surveyed 29 rapid response programs internationally [10]. Within and across these programs, there was wide variation in program objectives, types of questions answered, and processes and methods used [10]. The primary objectives for producing rapid reviews (the main product of rapid response programs) reported by respondents were to inform decision-making with regard to funding health care technologies, services, and policy and program development [10]. The rapid reviews answered many types of questions—clinical effectiveness (55.2 %), clinical efficacy (41.4 %), cost-effectiveness and/or cost savings (41.4 %), and safety (31.0 %)—and were used to support clinical practice guideline preparation (17.2 %) for either a health care technology or service. Some rapid response programs focused exclusively on questions centered on health system interventions, health services delivery, health care policy, coverage of a technology, operational efficiency, and quality improvement. Two organizations focused on specific health topics [10].

What are the main products of rapid response programs?

The types of products produced by rapid response programs are widely varied [10, 11] but were categorized in a recent study based on the extent of synthesis [11]. Four product types were described: (1) inventories, which simply list what evidence is available; (2) rapid responses, which present the best available evidence but with no formal synthesis; (3) rapid reviews, which synthesize the quality of and findings from the evidence; and (4) automated approaches, which generate meta-analyses in response to user-defined queries [11].

We focused our research on products that include a synthesis of the evidence because this is the most widely researched product and most comparable to a standard systematic review and because we consider it to be the most useful for increasing the use of research in health decision-making. This type of rapid product has many names, including rapid review, evidence summary, brief review, rapid systematic review, and rapid health technology assessment. In this paper, we have used the term “rapid review” as it is the most widely used term in the published literature [12, 13].

We understand a rapid review to be “a type of systematic review in which components of the systematic review process are simplified, omitted or made more efficient in order to produce information in a shorter period of time, preferably with minimal impact on quality. Further, rapid reviews typically involve a close relationship with the end-user and are conducted with the needs of the decision-maker in mind” (Haby MM, Chapman E, Clark R, Barreto J, Reveiz L, Lavis JN.: What are the best methodologies for rapid reviews of the research evidence for evidence-informed decision making in health policy and practice: a rapid review, submitted).

Objective of this research

The objective of this work was to inform the design of a rapid response program to support evidence-informed decision-making in the Americas region. Because the design of the program could vary depending on the level of decision-making (e.g., country, state, municipality), the setting, the resources available (financial and workforce), and the needs of decision-makers, we offer guidance on factors that would need to be taken into account in setting up a rapid response program in the Americas region. In practice, the work is also likely to be relevant to other regions of the world. Specifically, we set out to answer the following three questions:

  1. What are the best methodological approaches for rapid reviews of the research evidence (the product)?

  2. What other strategies are needed to facilitate evidence-informed decision-making in health policy and practice?

  3. How best to operationalize the program?

Methods

The evidence used to inform the design of a rapid response program included the following: (i) two rapid reviews of the best available research evidence (for questions 1 and 2), (ii) supplementary literature in relation to the “shortcuts” that could be considered to reduce the time needed to complete rapid reviews (question 1), and (iii) four case studies and supplementary literature to identify operational issues for the design of the program (question 3).

Rapid reviews

We conducted two rapid reviews of the research evidence to answer the questions:

  1. What are the best methodologies to enable a rapid review of research evidence for evidence-informed decision-making in health policy and practice?

  2. What are the best strategies to facilitate evidence-informed decision-making in health policy and practice?

Both reviews utilized systematic review methods and were conducted according to a pre-defined protocol, including clear inclusion criteria [14]. Comprehensive search strategies were used, covering published and gray literature written in English, French, Portuguese, or Spanish, from 2004 onwards. Eleven databases and two websites were searched. Two review authors independently applied eligibility criteria. Data extraction was done by one reviewer and checked by a second reviewer. The methodological quality of included studies was assessed independently by two reviewers. A narrative summary of the results is presented. Full details of the methods, including the inclusion criteria, can be found in an associated report [15] and paper (Haby MM, Chapman E, Clark R, Barreto J, Reveiz L, Lavis JN.: What are the best methodologies for rapid reviews of the research evidence for evidence-informed decision making in health policy and practice: a rapid review, submitted) for question 1 and in Additional files 1 and 2 for question 2. We have labelled these reviews as rapid reviews because they were conducted in a limited timeframe and with the needs of the decision-makers in mind.

Identifying supplementary evidence for potential shortcuts

In relation to areas where “shortcuts” could be considered to reduce the time needed to complete the rapid reviews, we used the systematic reviews included in the previously mentioned review of methodologies of rapid reviews and the A MeaSurement Tool to Assess systematic Reviews (AMSTAR) questions [16] to make a list of all possible areas (shown in column 2, Table 1). To determine the potential impact of the shortcut on the validity of the results, we used the primary studies included in the reviews by Ganann and colleagues [17] and Cameron and colleagues [18] and complemented them with studies cited in the Cochrane Handbook [19] or found during the search process for the rapid review. It is important to note that this was not a systematic search for evidence, though the majority of the references come from the systematic review by Ganann and colleagues.

Table 1 Areas where “shortcuts” could be considered to reduce time to completion of rapid reviews

Case studies of existing rapid response programs

We developed case studies of existing rapid response programs to highlight operational issues that were not addressed in the systematic reviews included in the review of methodologies for rapid reviews [9, 17, 18, 20–23]. Case studies are useful for identifying program processes, barriers, and facilitators and can alert the practitioner to the existence of otherwise unexplored or unusual phenomena [24]. The sampling frame for the selection of case studies was programs known to the authors and/or identified during the search process for the two rapid reviews. To select the four programs for case studies, we used purposeful sampling to identify four unique cases [25]. We aimed for maximum variation in terms of their reach (state level, national, or global) and stage of development (just starting or long history) and to ensure representation of both developed and developing countries. These case studies are not meant to be representative of all rapid response programs. The four programs chosen and the main reason/s for their selection are shown in Table 2.

Table 2 Rapid response programs selected for case studies

Case studies were developed based on a structured interview with a key informant from the program, conducted using Skype. This information was supplemented with information on the program website and from the published literature. The questions used in the structured interview are included in Additional file 3. These were sent to the key informant prior to the interview. Key informants were also offered the chance to review the draft case study before publication. Our first request for participation was made to the most senior and knowledgeable representative of the organization offering the program. In two cases, we were referred to an alternative knowledgeable contact (McMaster Health Forum and Cochrane Response).

For one case study (Regional East African Community Health (REACH) Policy Initiative), the key informant agreed to participate in the interview but then did not make any further contact, despite various attempts on our part, including a request to review the draft case study. As an alternative informant could not be identified, we decided to develop the case study based on the information obtained from the published literature [26] and websites only. All three interviews were conducted in the first two weeks of June 2015. One key informant (from the Sax Institute) provided written answers to most of the questions prior to the interview, and this information was supplemented in the interview.

The answers to the interview questions were summarized in a one-page format (see Additional file 3). The general structure used was a two- to three-paragraph description of the program, including its scope, who it services, the length of time it has been operating, governance and operational arrangements, staffing, products, and process. This was followed by sections on documents available to guide the reviews, strengths of the program, challenges, and future work. The source of information is noted at the end. Where the program offered more than one product type, we focused on the product closest to a rapid review. Given the variation in set-up and conduct of the four programs, it was not possible to be too prescriptive about how the relevant elements were reported. The draft case study was checked by the key informant (with the exception of the REACH Policy Initiative as no response was received from them) and appropriate revisions made. For Cochrane Innovations, the CEO was also contacted, at the suggestion of the key informant, to comment on the case study but no comments were received.

Supplementary literature to inform the design and operationalization of the program

In the process of conducting the rapid reviews, we identified a recent study by Hartling and colleagues that provides information gathered from interviews with key informants of 20 different rapid response programs [11]. As part of this study, they describe contextual factors that influenced rapid review methods. Other insights into how a rapid response program could be operationalized are presented by Wilson and colleagues based on findings from an issue brief and stakeholder dialogue conducted with health system decision-makers to inform the development of a rapid response program for Canada [27, 28].

Results

Question 1—methodologies for rapid reviews

Key findings from the rapid review

While five systematic reviews of methods for rapid reviews were found, none of these were of sufficient quality to allow firm conclusions to be made. Thus, the findings need to be treated with caution. There is no agreed definition of rapid reviews in the literature and no agreed methodology for conducting rapid reviews [9, 17, 18, 20–23]. However, the systematic reviews included in this review are consistent in stating that a rapid review is generally conducted in a shorter timeframe and may have a reduced scope. A wide range of “shortcuts” are used to conduct rapid reviews more quickly than a full systematic review. While authors of the included systematic reviews tend to agree that changes to scope or timeframe can introduce biases (e.g., selection bias, publication bias, and language of publication bias), they found little empirical evidence to support or refute that claim [9, 17, 18, 20, 22]. Further, there are few comparisons available in the literature of full and rapid reviews to be able to determine the impact of these “shortcuts.” There is some evidence from a good quality randomized controlled trial with low risk of bias that rapid reviews may improve clarity and accessibility of research evidence for decision-makers when compared to a systematic review alone [29].

The included systematic reviews covered a variety of rapid products, including rapid systematic reviews, rapid health technology assessments, and rapid overviews of systematic reviews. However, no examples of rapid evidence briefs for policy were included in the reviews.

The authors of the published systematic reviews of rapid review methods suggest that, rather than focusing on developing a formalized methodology, which may not be appropriate, researchers and users should focus on increasing the transparency of the methods used for each review [17, 18, 23]. The authors of the most recent systematic review also suggest that: “the similarity of rapid products lies in their close relationship with the end user to meet decision-making needs in a limited timeframe” ([9], p. vii).

Key findings from supplementary literature in relation to “shortcuts”

The areas where “shortcuts” could be considered to reduce the time needed to complete the reviews are shown in Table 1. As can be seen from column 3 of the table, the supporting literature for the potential impact of shortcuts in the review process is quite limited and not always conclusive. It is important to note that not all shortcuts necessarily lead to a reduction in the AMSTAR score (e.g., limiting the scope of the review question) and some shortcuts can have significant implications for the AMSTAR score but not necessarily save a significant amount of time (e.g., omitting key methodological details from the report).

Question 2—strategies to facilitate evidence-informed decision-making

Key findings from the rapid review

Forty systematic reviews of strategy effectiveness met the inclusion criteria for this question; data were extracted from 27 of these (see Additional files 1 and 2). Using the domains of the linking research to action framework [7, 30], the majority of the interventions evaluated in the included systematic reviews focused on the practice rather than the policy environment and, within the former, on “push,” “facilitating pull,” and “pull” activities. Examples of interventions with significant impact include dissemination of printed educational materials, including systematic reviews; clinical librarian services; education in evidence-based practice; local opinion leaders; and tailored and targeted messaging (Additional file 1). For linkage and exchange, knowledge brokers and interaction between users and producers of research are the only interventions evaluated, but the included studies do not provide any evidence of effectiveness for these interventions.

In regard to rapid reviews, no primary studies of the effectiveness of strategies to facilitate their uptake were found by the systematic reviews’ authors. Further, there are no good quality evaluations of the impact on research use of packaging of systematic reviews as overviews of systematic reviews or as evidence briefs based on systematic reviews [31, 32]. However, there is evidence that dissemination of printed educational materials [33] and summaries of systematic reviews [32, 34] are effective at improving awareness and/or clinical practice (Additional file 1).

Question 3—how best to operationalize the program

Key findings from the case studies

The case studies are presented in Additional file 3. The models presented in the case studies vary in terms of when they started, the reach of the service, how the service is funded, whether reviews undergo external review, and whether reviews are made publicly available (Table 3). Most of the models include a lag period before publication of the review to allow the commissioning agency time to prepare for any resulting publicity or to allow for journal publications to be submitted.

Table 3 Features of the four rapid response models developed as case studies

While all programs have some documentation to guide the process and methods, the actual methods used can vary between reviews conducted within the same program—depending on the question and needs of the requestor. The reporting templates used in the models vary in their prescriptiveness but none include minimum methodological standards or reporting requirements to enable an accurate quality assessment. For all models, the process involved in conducting the review requires regular interaction and communication between researchers and decision-makers. None of the models have needed to use priority setting to determine which reviews to undertake—all reviews that are within the scope of the service have been accepted.

Each of the models is unique in some way. The McMaster model is the only one that reported using other knowledge translation strategies associated with the conduct of the review, i.e., wide dissemination and promotion of the completed review, including on social media, and the offer of a presentation of the review’s findings to the requestor. The Sax Institute model is the only one that brokers the reviews out to research groups selected from their database of researchers or identified through other means. This model has the advantages of having researchers with specific expertise in the topic conduct the review and of building skills in writing for policy makers in a wider range of researchers. A potential disadvantage of this approach is the extra work required of Sax Institute staff to ensure that review deadlines are met and to support new researchers (currently done through informal one-on-one mentoring). The Cochrane Innovations model is unique in having the support of a large group of highly skilled reviewers, though only one review has been conducted to date. The REACH-PI model is the first known example in a low-income country and is the only one with some published evaluation results [35]. Operational issues highlighted by the key informants or in the published evaluation [26] that need to be considered by developers of new programs are reported in Table 4.

Table 4 Operational issues highlighted by the case studies

Key findings from supplementary literature

In relation to how rapid response programs are operationalized to ensure that they run smoothly and meet their intended purpose, the available literature is limited. The study by Hartling and colleagues describes contextual factors, identified by thematic analysis of the interviews, that influenced rapid review methods [11]. Two of these contextual factors relate to how the program is operationalized:

  • A continuous, close relationship with a specific end user, maintained iteratively throughout the work, to ensure that the product meets the end user’s needs

  • A high reliance on maintaining highly trained staff who can conduct the reviews in a short time frame and who understand the type of product that might meet the needs of the decision-maker [11]

Other insights into how a rapid response program could be operationalized are presented by Wilson and colleagues based on findings from their issue brief and stakeholder dialogue [27, 28]. Wilson and colleagues were unable to find any systematic reviews that addressed how a rapid review program could be organized and therefore based their insights on examples of existing rapid review programs [28]. In their issue brief, they presented four organizational features and possible approaches to operationalizing each feature in Canada: governance, management and staffing, program resources, and collaboration [27].

The following suggestions and issues raised by Wilson and colleagues for a Canadian program warrant consideration in setting up a rapid response program in other jurisdictions:

  • Include high-level representation from all relevant stakeholders, including policymakers and researchers, on a steering committee to govern the service

  • Implement minimum training standards and provide ongoing mentorship for staff contributing to the program

  • Secure long-term funding that covers both program delivery and ongoing evaluation of the program

  • Operationalize priority setting (if needed) and the administrative process at the institutional level [27, 28]

Two key challenges noted by the stakeholder dialogue participants when discussing the issue brief were securing stable, long-term funding and finding a way to effectively and equitably manage the expected demand [27].

Insights for designing a rapid response program for the Americas region

The product — rapid reviews

The systematic reviews published on this topic advise against developing a formalized methodology but advocate for greater transparency regarding the methods used for each review [17, 18, 23]. The four case studies also showed that current models do not use a formalized, one-size-fits-all methodology; this variation is likely due to differing user requirements. Further, the rapid reviews produced by the four programs (and available on their websites) do not consistently report the methods used in sufficient detail to enable an accurate quality assessment.

None of the included systematic reviews or case studies explicitly included evidence briefs for policy as a rapid product. However, rapid reviews produced as part of a rapid response program could be used to inform evidence briefs for policy, and there is potential for evidence briefs for policy themselves to be produced quickly as part of such a program, though the evidence base supporting this product is limited.

The ideal methods for evidence syntheses continue to be high quality systematic review methods. Thus, it is important that the authors of the rapid reviews are transparent in reporting the methods they used so that the “shortcuts” taken can be clearly seen and the quality of the resulting product evaluated. The current, or modified, PRISMA Guidelines are a good starting point for this [36]. This will also enable researchers to evaluate the impact of methodological choices on the results of the reviews, whether rapid or full systematic reviews or overviews of systematic reviews.

Given that rapid reviews are often (though not always) written up in a concise fashion to meet the needs of busy decision-makers, a checklist could be developed that lists the key aspects of systematic review methodology and requires a simple tick or cross to indicate whether or not it was done (e.g., two reviewers screened titles and abstracts). For some aspects, a short answer would be appropriate, e.g., which databases were searched. Table 1 will be a very useful reference for this. If preferred, the methods section could be included at the end of the review, as an appendix to the review or placed online. Further, it is appropriate to evaluate rapid reviews using the same criteria as systematic reviews, e.g., using the AMSTAR criteria [16], which will also allow better comparisons with systematic reviews on issues of validity of the results. We suggest that the AMSTAR criteria could be used to assess the quality of reviews of systematic reviews (also known as overviews) as well as reviews of primary studies.
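To make the checklist idea concrete, the sketch below (ours, not drawn from any of the included studies or programs) shows one way such a tick/cross record could be structured in Python. All item wordings and example answers, including the databases named, are hypothetical placeholders; a real instrument would be built from Table 1 and the AMSTAR criteria [16].

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the tick/cross methods checklist proposed above.
# Item wordings and example answers are illustrative only.

@dataclass
class ChecklistItem:
    question: str
    done: Optional[bool] = None   # True = tick, False = cross
    detail: Optional[str] = None  # short answer, where a tick/cross is not enough

checklist = [
    ChecklistItem("Two reviewers independently screened titles and abstracts", done=True),
    ChecklistItem("Databases searched", detail="MEDLINE, Embase, LILACS (example only)"),
    ChecklistItem("Quality of included studies assessed", done=False),
    ChecklistItem("Gray literature searched", done=False),
]

# Render the checklist in the concise form that could sit at the end of the
# review, in an appendix, or online.
for item in checklist:
    mark = {True: "✓", False: "✗", None: "—"}[item.done]
    line = f"[{mark}] {item.question}"
    if item.detail:
        line += f": {item.detail}"
    print(line)
```

Completed this way, the record stays short enough for a busy reader while still exposing every “shortcut” taken, which is the transparency the published systematic reviews call for [17, 18, 23].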

It is possible that the greater the number of “shortcuts” taken to reduce the time needed to complete the review, the greater the risk that biases will be introduced. Therefore, an appropriate balance needs to be found between quality and timeliness when deciding what methods to use, whether a rapid (vs full systematic) review is needed, and how quickly it really is needed.

Consideration also needs to be given to which “shortcuts” are taken to ensure maximum quality. Some shortcuts may be seen as more important by users in terms of their perceived impact on risk of bias of the results. We suggest that omission of the quality assessment of included studies could be one such shortcut, as implied by participants’ ranking of six rapid review approaches in the study by Tricco and colleagues [13, Table 5], where the top-ranked approach included a quality assessment. Its omission will also lead to a loss of two points on the AMSTAR score (Table 1). In contrast, limiting the number of databases searched to two to three and only including the last 10 years of literature may have limited impact on perceived risk of bias for many topics and no impact on the AMSTAR score.
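To illustrate this balancing exercise, the following sketch (our construction, not a published method) assigns each candidate shortcut an AMSTAR-point cost and computes the best score still achievable. Only the two-point cost of omitting the quality assessment is grounded in the text above; the other costs are hypothetical placeholders for the entries in Table 1.

```python
# Hypothetical sketch: weigh candidate shortcuts by their AMSTAR cost.
# Only the 2-point cost of omitting the quality assessment is taken from
# the text above; the other values are illustrative placeholders.

AMSTAR_MAX = 11  # AMSTAR comprises 11 items [16]

SHORTCUT_COST = {
    "omit quality assessment of included studies": 2,
    "single reviewer screens titles and abstracts": 1,  # placeholder value
    "limit search to two to three databases": 0,
    "limit search to last 10 years of literature": 0,
}

def best_achievable_score(shortcuts):
    """Upper bound on the AMSTAR score once the chosen shortcuts are taken."""
    return AMSTAR_MAX - sum(SHORTCUT_COST[s] for s in shortcuts)

# Database and date limits save search time at no cost to the AMSTAR score.
print(best_achievable_score([
    "limit search to two to three databases",
    "limit search to last 10 years of literature",
]))  # -> 11

# Dropping the quality assessment caps the score at 9 before anything else.
print(best_achievable_score(["omit quality assessment of included studies"]))  # -> 9
```

Filled in with the actual impacts from Table 1, such a tally would let a program rank candidate shortcut sets by time saved per AMSTAR point forgone.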

As well as the shortcuts identified in Table 1, other approaches exist that can be used to make a rapid review faster than a systematic review. These should not have an impact on the validity of the results and include [20]:

  • Making the process more efficient, e.g., using specialized software for the reviews such as DistillerSR®

  • Using a larger, highly skilled staff, who are part of a reserve capacity

  • Updating an existing high quality review

Strategies to facilitate evidence-informed decision-making

Given that there is no support for the effectiveness of rapid reviews alone in promoting the uptake of research evidence into policy and practice (Additional file 1), it will be extremely important to also engage concurrently in other knowledge translation strategies as part of the model. Examples include the use of tailored and targeted messaging in the rapid reviews, dissemination of the rapid reviews, and training of policy makers in the appraisal and use of research (Additional file 1). It will also be important to evaluate the impact of these strategies to improve the evidence base for future decision-making regarding the design of rapid response programs.

It is important to remember that rapid reviews are typically conducted at the request of a decision-maker, who has an important role in setting the question, the parameters of the review, and the timeline, and who may also provide feedback along the way. Unlike traditional systematic reviews, rapid reviews are usually written with the context of the decision-maker in mind, e.g., the policy or practice question that needs an answer [9]. We suggest that these features are likely to increase the chance that rapid reviews are utilized in decision-making, but this needs to be tested.

Operationalization of the rapid response program

In regard to the organizational arrangements for the rapid response program, consideration will need to be given to the issues raised in other related literature presented in this paper [11, 27, 28] (see the “Results, Question 3—how best to operationalize the program” section) or in the case studies (Table 4).

Discussion

The research presented in this paper will help to inform the design of a rapid response program for health policy and practice that can be applied to the Americas region. The rapid review of methodologies for rapid reviews and additional evidence found in relation to shortcuts that could be considered to reduce the time needed to complete the reviews (Table 1) are relevant to the design of the main product of the rapid response program—the rapid review. The rapid review of strategies to facilitate evidence-informed decision-making found a number of strategies that had a significant impact and which could be incorporated into the program. Finally, the case studies of four current models of rapid response identified additional operational issues that need to be considered in designing new programs (Table 4), as did the supplementary literature [11, 27, 28].

Implications for future research

Priority for future research should be given to research into the effect of shortcuts on the conclusions of reviews in general (rapid and systematic)—thus filling the gaps in Table 1, column 3. Further, we suggest that research that compares the conclusions of systematic reviews and rapid reviews for the same topic, controlling for review quality, should also be prioritized.

We agree with Polisena and colleagues that the impact of rapid reviews used to inform health decision-making should be tested, as should the feasibility and desirability of an extension to the PRISMA guidelines to incorporate rapid review methodologies, including the different types of reports needed [10]. We suggest that the content of revised PRISMA guidelines should be based heavily on the current guidelines but allow for different ways to present the information, including in appendices or using checklists. They should also cover aspects that are not routinely part of systematic reviews, such as the extent of involvement of the review funders in the conduct of the review, as this could affect quality (e.g., potential conflict of interest) but could also facilitate the use of the rapid review in decision-making.

Current reviews of rapid review programs suggest that one of the defining features of rapid reviews (and other rapid products) is the close relationship with the end user [9], with personal contact between researchers and decision-makers being a known facilitator of the uptake of evidence [2, 5, 37]. The process can also potentially build skills [5] and trust [37] between researchers and decision-makers. Research is required to identify which elements of this relationship increase the use of the review in decision-making (if they do). This research should also be extended to test the impact of these strategies in increasing the uptake of research in general, including systematic reviews, in decision-making.

Strengths and limitations

A key strength of this research was the use of high quality systematic review methods to guide the development of a rapid response program, as well as case studies and further literature searching to help fill knowledge gaps. The case studies were able to verify the large variation in existing programs [9–11, 13, 18, 23] and identify operational issues that had not been previously identified. A limitation of our research is the small number of case studies included, but this was supplemented with the findings of more comprehensive surveys of current rapid response programs [9–11, 13] (see the “Background” section) and other relevant work in this area [27, 28].

Another possible limitation of our research and insights offered for a rapid response program is the reliance on AMSTAR for assessing and comparing the quality of rapid reviews and systematic reviews. AMSTAR was created to assess the quality of systematic reviews of randomized controlled trials, and thus, it could be argued that it is not appropriate for other types of reviews, e.g., reviews of reviews, reviews of non-randomized studies, and realist reviews. At the present time, however, the AMSTAR tool is the best tool available to our knowledge to assess and compare the quality of review methods for reviews of effectiveness and considers the major potential sources for bias in reviews of the literature [16, 38]. As other tools are developed and more widely used, such as RAMESES for realist reviews [39], variations of AMSTAR for non-randomized studies [40], and the new ROBIS tool to assess the risk of bias in systematic reviews [41], they can also be used for rapid response programs to assess the quality of their products.

Conclusions

There is no one clear method or process for rapid response suggested by the research evidence found in the rapid reviews, related literature, or case studies—and a variety of approaches may be needed. The ideal methods for evidence synthesis continue to be high quality systematic review methods. However, the literature, and our experience, tells us that policy and practice decisions need to be made and cannot always wait until the best evidence is available. In these cases, there is a very real risk that no research evidence or poor quality research evidence will be used. Thus, it is better that researchers try to accommodate the needs of busy decision-makers by producing rapid reviews that optimally balance timeliness and quality and, where possible, to evaluate later how the results differ from full reviews. In this paper, we have highlighted factors that need to be taken into account in setting up a rapid response program. We also advocate for greater attention to be given to the reporting of methods used in rapid reviews to allow a quality assessment. Finally, it will be important to evaluate the rapid response program’s chosen model in order to make refinements and to add to the evidence base for evidence-informed decision-making.

References

  1. Wiseman J. Chapter 1. Lost in translation: knowledge, policy, politics and power. In: Carey G, Landvogt K, Barraket J, editors. Creating and implementing public policy: cross-sectoral debates. London: Routledge, Taylor and Francis Group; 2015. Policy Press.

  2. Lavis J, Davies H, Oxman A, Denis JL, Golden-Biddle K, Ferlie E. Towards systematic reviews that inform health care management and policy-making. J Health Serv Res Policy. 2005;10 Suppl 1:35–48.

  3. Liverani M, Hawkins B, Parkhurst JO. Political and institutional influences on the use of evidence in public health policy. A systematic review. PLoS One. 2013;8:e77404.

  4. Nutley S. Bridging the policy/research divide. Reflections and lessons from the UK. Keynote paper presented at “Facing the future: Engaging stakeholders and citizens in developing public policy”. National Institute of Governance Conference, Canberra, Australia 23/24 April 2003. http://www.treasury.govt.nz/publications/media-speeches/guestlectures/nutley-apr03. 2003. Accessed 19 Jul 2016

  5. Oliver K, Innvar S, Lorenc T, Woodman J, Thomas J. A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Serv Res. 2014;14:2.

  6. Orton L, Lloyd-Williams F, Taylor-Robinson D, O’Flaherty M, Capewell S. The use of research evidence in public health decision making processes: systematic review. PLoS One. 2011;6:e21704.

  7. WHO. Knowledge translation framework for ageing and health. Geneva: Department of Ageing and Life-Course, World Health Organization; 2012.

  8. Lavis JN, Hammill AC, Gildiner A, McDonagh RJ, Wilson MG, Ross SE, et al. A systematic review of the factors that influence the use of research evidence by public policymakers. Final report submitted to the Canadian Population Health Initiative. Hamilton: McMaster University Program in Policy Decision-Making; 2005.

  9. Hartling L, Guise JM, Kato E, Anderson J, Aronson N, Belinson S, et al. EPC Methods: An Exploration of Methods and Context for the Production of Rapid Reviews. Rockville: AHRQ Publication No. 15-EHC008-EF; 2015. Research White Paper. (Prepared by the Scientific Resource Center under Contract No. 290-2012-00004-C.).

  10. Polisena J, Garrity C, Kamel C, Stevens A, Abou-Setta AM. Rapid review programs to support health care and policy decision making: a descriptive analysis of processes and methods. Syst Rev. 2015;4:26.

  11. Hartling L, Guise JM, Kato E, Anderson J, Belinson S, Berliner E, et al. A taxonomy of rapid reviews links report types and methods to specific decision-making contexts. J Clin Epidemiol. 2015;68:1451–62.

  12. Tricco AC, Antony J, Zarin W, Strifler L, Ghassemi M, Ivory J, et al. A scoping review of rapid review methods. BMC Med. 2015;13:224.

  13. Tricco AC, Zarin W, Antony J, Hutton B, Moher D, Sherifali D, et al. An international survey and modified Delphi approach revealed numerous rapid review methods. J Clin Epidemiol. 2015;70:61–7.

  14. Haby M, Chapman E, Reveiz L, Barreto J, Clark R. Methodologies for rapid response for evidence-informed decision making in health policy and practice: an overview of systematic reviews and primary studies (Protocol). 2015. PROSPERO: CRD42015015998.

  15. Haby MM, Chapman E, Reveiz L, Barreto J, Clark R, Lavis JN. Methodologies for rapid response for evidence-informed decision making in health policy and practice: a rapid overview. Report submitted to the World Health Organization, Brazil. 2015.

  16. Shea BJ, Grimshaw JM, Wells GA, Boers M, Andersson N, Hamel C, et al. Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews. BMC Med Res Methodol. 2007;7:10.

  17. Ganann R, Ciliska D, Thomas H. Expediting systematic reviews: methods and implications of rapid reviews. Implement Sci. 2010;5:56.

  18. Cameron A, Watt A, Lathlean T, Sturm L. Rapid versus full systematic reviews: an inventory of current methods and practice in Health Technology Assessment. ASERNIP-S Report No. 60. Adelaide: ASERNIP-S, Royal Australasian College of Surgeons; 2007.

  19. Higgins JPT, Green S, editors. Cochrane handbook for systematic reviews of interventions Version 5.1.0 [updated March 2011]. The Cochrane Collaboration; 2011. Available from http://handbook.cochrane.org/ Accessed 19 Jul 2016.

  20. Abrami PC, Borokhovski E, Bernard RM, Wade CA, Tamim R, Persson T, et al. Issues in conducting and disseminating brief reviews of evidence. Evid Policy. 2010;6:371–89.

  21. Featherstone RM, Dryden DM, Foisy M, Guise JM, Mitchell MD, Paynter RA, et al. Advancing knowledge of rapid reviews: an analysis of results, conclusions and recommendations from published review articles examining rapid reviews. Syst Rev. 2015;4:50.

  22. Harker J, Kleijnen J. What is a rapid review? A methodological exploration of rapid reviews in Health Technology Assessments. Int J Evid Based Healthc. 2012;10:397–410.

  23. Watt A, Cameron A, Sturm L, Lathlean T, Babidge W, Blamey S, et al. Rapid reviews versus full systematic reviews: an inventory of current methods and practice in health technology assessment. Int J Technol Assess Health Care. 2008;24:133–9.

  24. Daly J, Willis K, Small R, Green J, Welch N, Kealy M, et al. A hierarchy of evidence for assessing qualitative health research. J Clin Epidemiol. 2007;60:43–9.

  25. Patton MQ. Qualitative research and evaluation methods: integrating theory and practice. 4th ed. Los Angeles: Sage Publications; 2015. 806.

  26. Mijumbi RM, Oxman AD, Panisset U, Sewankambo NK. Feasibility of a rapid response mechanism to meet policymakers’ urgent needs for research evidence about health systems in a low income country: a case study. Implement Sci. 2014;9:114.

  27. Wilson MG, Lavis JN, Gauvin FP. Developing a rapid-response program for health system decision-makers in Canada: findings from an issue brief and stakeholder dialogue. Syst Rev. 2015;4:25.

  28. Wilson MG, Lavis JN, Gauvin FP. Issue brief: developing a ‘rapid-response’ program for health system decision-makers in Canada. Hamilton: McMaster Health Forum; 7 March 2014.

  29. Opiyo N, Shepperd S, Musila N, Allen E, Nyamai R, Fretheim A, et al. Comparison of alternative evidence summary and presentation formats in clinical guideline development: a mixed-method study. PLoS One. 2013;8:e55067.

  30. Lavis JN, Lomas J, Hamid M, Sewankambo NK. Assessing country-level efforts to link research to action. Bull World Health Organ. 2006;84:620–8.

  31. Chambers D, Wilson PM, Thompson CA, Hanbury A, Farley K, Light K. Maximizing the impact of systematic reviews in health care decision making: a systematic scoping review of knowledge-translation resources. Milbank Q. 2011;89:131–56.

  32. Murthy L, Shepperd S, Clarke MJ, Garner SE, Lavis JN, Perrier L, et al. Interventions to improve the use of systematic reviews in decision-making by health system managers, policy makers and clinicians. Cochrane Database Syst Rev. 2012;Issue 9:Art. No. CD009401.

  33. Giguere A, Legare F, Grimshaw J, Turcotte S, Fiander M, Grudniewicz A, et al. Printed educational materials: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012;Issue 10:Art. No. CD004398.

  34. Wallace J, Byrne C, Clarke M. Improving the uptake of systematic reviews: a systematic review of intervention effectiveness and relevance. BMJ Open. 2014;4:e005834.

  35. Mijumbi RM, Oxman AD, Panisset U, Sewankambo NK. Feasibility of a rapid response mechanism to meet policymakers’ urgent needs for research evidence about health systems in a low income country: a case study. Implement Sci. 2014;9:114.

  36. Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6:e1000097.

  37. Innvaer S, Vist G, Trommald M, Oxman A. Health policy-makers’ perceptions of their use of evidence: a systematic review. J Health Serv Res Policy. 2002;7:239–44.

  38. Shea BJ, Hamel C, Wells GA, Bouter LM, Kristjansson E, Grimshaw J, et al. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol. 2009;62:1013–20.

  39. Wong G, Greenhalgh T, Westhrop G, Pawson R. Quality standards for realist syntheses and meta-narrative reviews. London: Nuffield Department of Primary Care Health Sciences, University of Oxford. http://www.ramesesproject.org/Standards_and_Training_materials.php.

  40. Shea B. AMSTAR - Developments. Ottawa: AMSTAR; 2015. Available from: http://amstar.ca/Developments.php. Accessed 19 Jul 2016.

  41. Whiting P, Savovic J, Higgins JP, Caldwell DM, Reeves BC, Shea B, et al. ROBIS: a new tool to assess risk of bias in systematic reviews was developed. J Clin Epidemiol. 2016;69:225–34.

  42. Edwards P, Clarke M, DiGuiseppi C, Pratap S, Roberts I, Wentz R. Identification of randomized controlled trials in systematic reviews: accuracy and reliability of screening records. Stat Med. 2002;21:1635–40.

  43. Buscemi N, Hartling L, Vandermeer B, Tjosvold L, Klassen TP. Single data extraction generated more errors than double data extraction in systematic reviews. J Clin Epidemiol. 2006;59:697–703.

  44. Gotzsche PC, Hrobjartsson A, Maric K, Tendal B. Data extraction errors in meta-analyses that use standardized mean differences. JAMA. 2007;298:430–7.

  45. Jones AP, Remmington T, Williamson PR, Ashby D, Smyth RL. High prevalence but low impact of data extraction and reporting errors were found in Cochrane systematic reviews. J Clin Epidemiol. 2005;58:741–2.

  46. Royle P, Milne R. Literature searching for randomized controlled trials used in Cochrane reviews: rapid versus exhaustive searches. Int J Technol Assess Health Care. 2003;19:591–603.

  47. Sampson M, Barrowman NJ, Moher D, Klassen TP, Pham B, Platt R, et al. Should meta-analysts search Embase in addition to Medline? J Clin Epidemiol. 2003;56:943–55.

  48. Royle P, Waugh N. Literature searching for clinical and cost-effectiveness studies used in health technology assessment reports carried out for the National Institute for Clinical Excellence appraisal system. Health Technol Assess. 2003;7:iii, ix-x, 1-51.

  49. Topfer LA, Parada A, Menon D, Noorani H, Perras C, Serra-Prat M. Comparison of literature searches on quality and costs for health technology assessment using the MEDLINE and EMBASE databases. Int J Technol Assess Health Care. 1999;15:297–303.

  50. Egger M, Juni P, Bartlett C, Holenstein F, Sterne J. How important are comprehensive literature searches and the assessment of trial quality in systematic reviews? Empirical study. Health Technol Assess. 2003;7:1–76.

  51. Oxman AD, Schunemann HJ, Fretheim A. Improving the use of research evidence in guideline development: 8. Synthesis and presentation of evidence. Health Res Policy Syst. 2006;4:20.

  52. Doust JA, Pietrzak E, Sanders S, Glasziou PP. Identifying studies for systematic reviews of diagnostic tests was difficult due to the poor sensitivity and precision of methodologic filters and the lack of information in the abstract. J Clin Epidemiol. 2005;58:444–9.

  53. Gartlehner G, West SL, Lohr KN, Kahwati L, Johnson JG, Harris RP, et al. Assessing the need to update prevention guidelines: a comparison of two methods. Int J Qual Health Care. 2004;16:399–406.

  54. Hopewell S, Clarke M, Lefebvre C, Scherer R. Handsearching versus electronic searching to identify reports of randomized trials. Cochrane Database Syst Rev. 2007;Issue 2:Art. No. MR000001.

  55. Langham J, Thompson E, Rowan K. Identification of randomized controlled trials from the emergency medicine literature: comparison of hand searching versus MEDLINE searching. Ann Emerg Med. 1999;34:25–34.

  56. McManus RJ, Wilson S, Delaney BC, Fitzmaurice DA, Hyde CJ, Tobias RS, et al. Review of the usefulness of contacting other experts when conducting a literature search for systematic reviews. BMJ. 1998;317:1562–3.

  57. Savoie I, Helmer D, Green CJ, Kazanjian A. Beyond Medline: reducing bias through extended systematic review search. Int J Technol Assess Health Care. 2003;19:168–78.

  58. Hopewell S, McDonald S, Clarke M, Egger M. Grey literature in meta-analyses of randomized trials of health care interventions. Cochrane Database Syst Rev. 2007;Issue 2:Art. No. MR000010.

  59. McAuley L, Pham B, Tugwell P, Moher D. Does the inclusion of grey literature influence estimates of intervention effectiveness reported in meta-analyses? Lancet. 2000;356:1228–31.

  60. Sterne JA, Gavaghan D, Egger M. Publication and related bias in meta-analysis: power of statistical tests and prevalence in the literature. J Clin Epidemiol. 2000;53:1119–29.

  61. Egger M, Zellweger-Zahner T, Schneider M, Junker C, Lengeler C, Antes G. Language bias in randomised controlled trials published in English and German. Lancet. 1997;350:326–9.

  62. Moher D, Pham B, Klassen TP, Schulz KF, Berlin JA, Jadad AR, et al. What contributions do languages other than English make on the results of meta-analyses? J Clin Epidemiol. 2000;53:964–72.

  63. Moher D, Pham B, Lawson ML, Klassen TP. The inclusion of reports of randomised trials published in languages other than English in systematic reviews. Health Technol Assess. 2003;7:1–90.

  64. Juni P, Holenstein F, Sterne J, Bartlett C, Egger M. Direction and impact of language bias in meta-analyses of controlled trials: empirical study. Int J Epidemiol. 2002;31:115–23.

  65. Lawson ML, Pham B, Klassen TP, Moher D. Systematic reviews involving complementary and alternative medicine interventions had higher quality of reporting than conventional medicine reviews. J Clin Epidemiol. 2005;58:777–84.

  66. Pham B, Klassen TP, Lawson ML, Moher D. Language of publication restrictions in systematic reviews gave different results depending on whether the intervention was conventional or complementary. J Clin Epidemiol. 2005;58:769–76.

  67. Moher D, Fortin P, Jadad AR, Juni P, Klassen T, Le Lorier J, et al. Completeness of reporting of trials published in languages other than English: implications for conduct and reporting of systematic reviews. Lancet. 1996;347:363–6.

  68. Vickers A, Goyal N, Harland R, Rees R. Do certain countries produce only positive results? A systematic review of controlled trials. Control Clin Trials. 1998;19:159–66.

  69. Becker LA, Oxman AD. Chapter 22: Overviews of reviews. In: Higgins JPT, Green S, editors. Cochrane handbook for systematic reviews of interventions. Version 5.1.0 (updated March 2011). The Cochrane Collaboration; 2011. Available from http://handbook.cochrane.org/. Accessed 19 Jul 2016.

  70. The Joanna Briggs Institute. Joanna Briggs Institute reviewers’ manual: 2014 edition/supplement. Adelaide: The Joanna Briggs Institute; 2014.

  71. Lee E, Dobbins M, Decorby K, McRae L, Tirilis D, Husson H. An optimal search filter for retrieving systematic reviews and meta-analyses. BMC Med Res Methodol. 2012;12:51.

  72. Westphal A, Kriston L, Holzel LP, Harter M, von Wolff A. Efficiency and contribution of strategies for finding randomized controlled trials: a case study from a systematic review on therapeutic interventions of chronic depression. J Public Health Res. 2014;3:177.

  73. Juni P, Altman DG, Egger M. Systematic reviews in health care: assessing the quality of controlled clinical trials. BMJ. 2001;323:42–6.

  74. Wilson MG, Lavis JN, Gauvin FP. Dialogue summary: developing a ‘rapid-response’ program for health system decision-makers in Canada. Hamilton: McMaster Health Forum; 7 March 2014.

  75. Campbell D, Donald B, Moore G, Frew D. Evidence check: knowledge brokering to commission research reviews for policy. Evid Policy. 2011;7:97–107.

  76. Redman S, Jorm L, Haines M. Increasing the use of research in health policy: the Sax Institute model. Australas Epidemiol. 2008;15:15–8.

Acknowledgements

We thank the key informants of the case studies for making time to do the interview and to check the completed case studies for accuracy: Chantelle Garritty for Cochrane Response; Mike Wilson for McMaster Rapid Response; and Sian Rudge for Sax Institute Evidence Check.

Funding

This work was developed and funded under the cooperation agreement # 47 between the Department of Science and Technology of the Ministry of Health of Brazil and the Pan American Health Organization. The funders of this study set the terms of reference for the project but, apart from the input of JB, EC, and LR to the conduct of the study, did not significantly influence the work. Manuscript preparation was funded by the Ministry of Health Brazil, through an EVIPNet Brazil project with the Bireme/PAHO.

Authors’ contributions

EC and JB had the original idea for the research and obtained funding; MH was responsible for developing the case studies and analysis of other relevant literature and drafted the manuscript; EC and JB contributed to the development of questions and selection of programs for the case studies; all authors contributed ideas to, provided commentary on, and approved the final manuscript.

Competing interests

Neither the Ministry of Health of Brazil nor the Pan American Health Organization (PAHO) has a vested interest in this research—though they do have a professional interest in increasing the uptake of research evidence in decision making. Two of the review authors (EC and LR) are employees of PAHO and JB was an employee of the Ministry of Health of Brazil at the time of the study. However, the views and opinions expressed herein are those of the overview authors and do not necessarily reflect the views of the Ministry of Health of Brazil or the Pan American Health Organization. In relation to the case studies, MH, as part of her previous employment with an Australian state government Department of Health, was responsible for commissioning and using rapid reviews to inform decision making and has commissioned rapid reviews from the Sax Institute Evidence Check program. JNL has had some involvement with all four rapid response programs that were used in the case studies but did not participate in the selection or conduct of the case studies. The authors declare that they have no competing interests.

Author information

Corresponding author

Correspondence to Michelle M. Haby.

Additional files

Additional file 1:

What are the best strategies to facilitate evidence-informed decision-making: a rapid overview of systematic reviews. (DOCX 323 kb)

Additional file 2:

Appendices to the rapid overview of strategies to facilitate evidence-informed decision-making. (DOCX 187 kb)

Additional file 3:

Case studies. (DOCX 40 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Haby, M.M., Chapman, E., Clark, R. et al. Designing a rapid response program to support evidence-informed decision-making in the Americas region: using the best available evidence and case studies. Implementation Sci 11, 117 (2016). https://doi.org/10.1186/s13012-016-0472-9
