
Development of two shortened systematic review formats for clinicians

Abstract

Background

Systematic reviews provide evidence for clinical questions; however, the literature suggests they are not used regularly by physicians for decision-making. A shortened systematic review format is proposed as one possible solution to address barriers, such as lack of time, experienced by busy clinicians. The purpose of this paper is to describe the development process of two shortened formats for a systematic review intended for use by primary care physicians as an information tool for clinical decision-making.

Methods

We developed prototypes for two formats (case-based and evidence-expertise) that represent a summary of a full-length systematic review before seeking input from end-users. The process was composed of the following four phases: 1) selection of a systematic review and creation of initial prototypes that represent a shortened version of the systematic review; 2) a mapping exercise to identify obstacles described by clinicians in using clinical evidence in decision-making; 3) a heuristic evaluation (a usability inspection method); and 4) a review of the clinical content in the prototypes.

Results

After the initial prototypes were created (Phase 1), the mapping exercise (Phase 2) identified components that prompted modifications. Similarly, the heuristic evaluation and the clinical content review (Phase 3 and Phase 4) uncovered necessary changes. Revisions were made to the prototypes based on the results.

Conclusions

Documentation of the processes for developing products or tools provides essential information about how they are tailored for the intended user. We have described one step that we hope will increase the usability and uptake of these documents by end-users.


Background and significance

Systematic reviews are one tool available to clinicians that provides the current best evidence. Ideally, authors of systematic reviews employ rigorous methods to select credible and relevant information to generate summative reports [1–3]. Although systematic reviews are identified as providing the best evidence for a clinical question [2, 3], the literature indicates that they are not being used regularly for healthcare decision making [4, 5]. One proposed solution is to create filtered resources, where the included original studies and reviews have been subject to explicitly formulated methodological criteria [6]. An example of this is ACP Journal Club (acpjc.acponline.org). This allows information to be validated and refined to facilitate rapid reading [7] by clinicians, whose time constraints are a significant challenge in keeping up to date with current research [8].

Several clinical information tools currently exist that present information from systematic reviews in a shortened or summarized manner (such as the BMJ PICO abridged research articles). We completed two comprehensive reviews of the literature that examined the impact of interventions for seeking, appraising, and applying evidence from systematic reviews in decision-making by clinicians or policymakers [9, 10] and specifically screened for studies that evaluated different strategies for presenting a systematic review. We located two trials using GRADE (Grading of Recommendations Assessment, Development and Evaluation) by Rosenbaum and colleagues [11, 12], who examined a ‘summary of findings’ table added to Cochrane systematic reviews. They reported that participants found it easier to locate results for important outcomes, were more likely to correctly answer questions regarding results, and spent less time finding key information. However, these results must be interpreted with care, as small samples were used and participants were drawn from a convenience sample, including those who had an affiliation with the Cochrane Collaboration. Aside from these two trials, which have considerable limitations related to study quality, the studies we screened and reviewed revealed no literature either guiding the creation of different formats or rigorously evaluating their impact on end-users.

The development of the shortened systematic review formats is informed by the Knowledge-to-Action Cycle (Figure 1) proposed by Graham and colleagues [13]. At the centre of the Knowledge-to-Action cycle is the ‘knowledge funnel,’ which focuses on the process through which knowledge is refined, distilled and tailored to the needs of end-users such as healthcare professionals. Knowledge tools and products are identified as ‘third-generation knowledge’ and consist of knowledge synopses that present knowledge in a clear, concise and user-friendly format. An over-arching component of the knowledge funnel is tailoring knowledge, and this process begins well before seeking input from end-users. Of key importance is the rigorous and methodical process taken beforehand, which uses evidence and conventional standards to create knowledge tools rather than relying exclusively on methods such as consulting colleagues and experts for opinions on the inclusion of content. Documenting the development of products or tools with this evidence-based approach gives critical information about the process of tailoring tools for the intended user. Providing such details supports the development of interventions in a rigorous, thoughtful manner before implementation, and allows key concepts, plans and processes to be captured and shared concisely.

Figure 1. Knowledge to action (KTA) framework.

Objective

To describe the development process of two shortened formats for a systematic review intended for use by primary care physicians as an information tool for clinical decision-making.

Methods

We employed a series of strategies to create two shortened formats, case-based and evidence-expertise, to aid clinicians in decision-making, before seeking input from users on their preferences. The components of the process included:

  1. selection of a systematic review and creation of initial prototypes that represent a shortened version of the systematic review;

  2. a mapping exercise to identify obstacles described by clinicians in using clinical evidence in decision making;

  3. a heuristic evaluation (a usability inspection method); and

  4. a review of the clinical content in the prototypes.

Phase 1: selecting a systematic review and creation of initial prototypes

We chose a full-length systematic review to be used for developing prototypes by having four generalist clinicians select from a list of systematic reviews drawn from 120 medical journals published in the previous five years on topics relevant to primary care [14]. These physicians were chosen from a pool of more than 4,000 physicians who have received formal training in rating articles to identify those that are pertinent to practice, as part of a larger program in evidence-based health informatics at McMaster University, Canada [14]. The clinicians were asked to rate the articles that they believed would be important to practicing primary care clinicians using the McMaster PLUS (Premium Literature Service) 7-point Likert scale, where 1 indicates that the article is definitely not relevant and 7 indicates that it is directly and highly relevant. The PLUS scale is used by the Health Information Research Unit at McMaster University to identify articles for inclusion in a secondary journal (ACP Journal Club) and BMJ Updates [14]. The Health Information Research Unit supplied a list of 927 systematic reviews that scored 6 or better (out of 7) on the McMaster PLUS scale. Initially, two physicians (one internal medicine physician and one family physician) reviewed all the systematic reviews supplied and independently voted on the three most relevant to generalist physicians. The final review was then selected independently by a third family physician. The systematic review selected for this study was: ‘Systematic review of rosacea treatments.’ van Zuuren EJ, Gupta AK, Gover MD, Graber M, Hollis S. Journal of the American Academy of Dermatology. 2007 Jan;56(1):107–15.
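
As a rough illustration of this screening step, the sketch below filters a list of candidate reviews by PLUS relevance score using the ≥6 threshold described above; the titles, scores, and field names are hypothetical, not the actual Health Information Research Unit data.

```python
# Hypothetical sketch of the screening step: keep candidate systematic reviews
# that scored 6 or better on the 7-point McMaster PLUS relevance scale.
# Titles and scores are invented for illustration only.
candidate_reviews = [
    {"title": "Systematic review of rosacea treatments", "plus_score": 7},
    {"title": "Review of interventions for low back pain", "plus_score": 5},
    {"title": "Review of hypertension management in primary care", "plus_score": 6},
]

shortlist = [r for r in candidate_reviews if r["plus_score"] >= 6]

for review in shortlist:
    print(f"{review['plus_score']}/7  {review['title']}")
```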

Two shortened formats were developed in collaboration with a human factors engineer using the selected systematic review. Human factors is the application of what is known about human capabilities and limitations to the design of tools in order to enable more effective use [15]. Guiding principles for user-centered design, which focus on making tools that are usable, useful and accessible, were employed in developing the prototypes [16]. The initial prototypes were designed to be one page in length (front and back), giving them the flexibility to be viewed online (as a PDF document) or printed. Once the prototypes are finalized, our future plans include optimizing them for handheld environments. The decision for this sequencing was based on the lack of evidence that increased availability and advances in electronic health technology affect the use of evidence in practice [17].

The first format used a case study to present contextualized information (case-based format), and the second format integrated evidence and clinical expertise (evidence-expertise format). The case-based format was designed to provide evidence within the context of a specific situation, presenting a real-world example of how the evidence could be used in decision-making. This format was chosen because text is easier to understand when it has personalized elements, including examples such as case studies [18–22]. Personalized texts prompt readers to recall more information [21, 22] and allow instructions and information to be embedded more succinctly [23].

The evidence-expertise format was guided by David Sackett’s definition of evidence-based medicine, highlighting the integration of clinical expertise and the best external evidence [24]. El Dib and colleagues [25] analyzed 1,016 randomly selected systematic reviews covering a wide variety of clinical topics and found that approximately half reported results that neither supported nor refuted the intervention tested. Similarly, fewer than 5% of 2,535 Cochrane systematic reviews explicitly state that no more research is needed or feasible [26]. When describing their preferences for the presentation of evidence, primary care physicians expressed the need for an explicit statement about where evidence was absent and how clinical expertise could bridge this gap [27]. These findings indicate that supplementing the review with clinical expertise may be useful, since finding a systematic review relating to a clinical question does not assure guidance for a clinical decision. Content was developed for the case study in the case-based format, and expert interpretation was obtained specifically for the evidence-expertise format. All other information presented in the shortened formats was drawn directly from the original full-text systematic review.

Phase 2: mapping exercise

The aims of the mapping exercise were to identify the intrinsic obstacles (i.e., specific to the information tool or document) to answering doctors’ questions about patient care with evidence, and to identify at least one attribute within each shortened systematic review format (case-based and evidence-expertise) that addresses these obstacles. The mapping exercise was not intended to guarantee that each obstacle had been eliminated from the prototypes; rather, it served as a methodical inspection of the documents, with the intrinsic obstacles as guidance for identifying at least one instance where each had been addressed.

Identifying intrinsic obstacles

Ely and colleagues extensively studied the information needs of family physicians [8, 28–34]. They used this work to develop a taxonomy of 59 obstacles encountered while searching for evidence-based answers to doctors’ questions (Additional file 1) [8]. With regard to information tools or documents, the 59 obstacles cover both extrinsic factors (e.g., a physician does not have a computer in his or her office to search for information) and intrinsic factors (e.g., the wording of a clinical practice guideline is too vague). In our study, two people (LP, NP), an information specialist and a family physician, independently reviewed each obstacle and identified whether it was an intrinsic or an extrinsic factor of an information tool or document. The intrinsic obstacles are the elements that have the potential to be addressed in the development of an information tool or document. Discrepancies were resolved by discussion until consensus was reached.

Linking items in shortened reviews that address intrinsic obstacles

We reviewed both formats (case-based and evidence-expertise) to determine if they addressed obstacles identified as intrinsic factors. If intrinsic obstacles were not addressed, we changed the documents. For example, if the obstacle ‘resource not authoritative or trusted’ was not addressed, the citation (including authors and journal name) for the systematic review would be added.

Phase 3: completing a heuristic evaluation

Heuristic evaluation is a usability inspection method that involves examining prototypes against recognized usability principles (the ‘heuristics’) [16]. It is used to identify major usability problems of a product or tool in a timely manner and at reasonable cost [35–38]. Using more heuristic evaluators identifies more usability problems; however, it is recommended that a cost-benefit consideration be employed to determine the number of evaluators appropriate for an individual project [39]. Since the prototypes for this study were undergoing a multi-step development and evaluation process, we decided to use one heuristic evaluator. The consultant who carried out the evaluation had no involvement in the study. She was selected to conduct this phase as she has a PhD in mechanical and industrial engineering and conducts research related to the science and technologies of human factors [40]. A modified set of heuristics applicable to the analysis of printed materials (based on the tool provided by Nielsen [41]) was used for the heuristic evaluation (Additional file 2). As per Nielsen’s heuristic evaluation methodology [42], errors are first identified and then classified by severity (cosmetic, minor, moderate, major, or catastrophic). The severity estimates are based on the frequency, impact, and persistence of errors.
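
As a minimal sketch of how such severity judgements might be recorded, the following Python snippet represents each finding with frequency, impact, and persistence ratings and maps an illustrative combined score onto Nielsen-style labels; the scoring rule and the example finding are our own assumptions, not Nielsen's exact procedure or data from this study.

```python
from dataclasses import dataclass

SEVERITY_LABELS = ["cosmetic", "minor", "moderate", "major", "catastrophic"]

@dataclass
class Finding:
    """One usability problem noted during a heuristic evaluation (illustrative)."""
    description: str
    frequency: int    # 0 (rare) .. 4 (affects every use)
    impact: int       # 0 (trivial) .. 4 (blocks the task)
    persistence: int  # 0 (one-time annoyance) .. 4 (recurs every time)

    def severity(self) -> str:
        # Illustrative rule only: average the three dimensions, round,
        # and map onto Nielsen-style severity labels.
        score = round((self.frequency + self.impact + self.persistence) / 3)
        return SEVERITY_LABELS[score]

finding = Finding("Evidence rating appears in different table columns", 2, 3, 2)
print(finding.severity())  # -> "moderate"
```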

Phase 4: reviewing the clinical content

Clinical content was reviewed by a family physician (NP). The role of the clinical content reviewer was to ensure that the information was transferred from the original document to the shortened versions accurately (and not to evaluate the accuracy or quality of the information) [43]. The clinical content reviewer is an independently licensed and active family physician with three years of clinical experience. He was selected based on clinical knowledge and willingness to volunteer time to this study. One reviewer was sufficient, as the function of the exercise was to identify obvious errors, and this was done with the knowledge that the next step in development would be to assess the prototypes using iterative focus groups with end-users (not described in this paper).

Results

Phase 1: selecting a systematic review and creation of initial prototypes

As reported in the Methods section, we selected a systematic review of rosacea treatments and developed and refined summaries in two formats, case-based and evidence-expertise, which address many of the obstacles clinicians encounter while searching for evidence-based answers to questions.

Phase 2: mapping exercise

Identifying intrinsic obstacles

Thirty-two of the 59 factors from Ely’s framework were identified as intrinsic to an information tool. The strength of agreement between the two reviewers (LP, NP) was very good (kappa statistic of 0.82; CI: 0.687 to 0.972) [44]. Ely and colleagues organized the obstacles into five categories [8]. The majority of the intrinsic obstacles (26 of 32; 81%) in our study fell under the third category, ‘searching for relevant information to answer a question about patient care.’ Four of the 32 obstacles were categorized as being relevant to ‘formulating an answer,’ and the final two were relevant to ‘using the answer to direct patient care.’
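
For readers unfamiliar with the kappa statistic reported above, the sketch below computes Cohen's kappa for two raters classifying 59 items into two categories; the example ratings are hypothetical and chosen only to show the calculation, not the study's actual data.

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters labelling the same items."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)

    # Observed agreement: proportion of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n

    # Expected agreement by chance, from each rater's marginal proportions.
    p_e = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in categories
    )
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings for 59 obstacles ("I" = intrinsic, "E" = extrinsic),
# not the study's actual data: the two raters disagree on five items.
rater_1 = ["I"] * 32 + ["E"] * 27
rater_2 = ["I"] * 27 + ["E"] * 32
print(round(cohens_kappa(rater_1, rater_2), 2))  # -> 0.83, in the range reported above
```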

Linking items in shortened reviews that address intrinsic obstacles

Eight items from Ely’s framework could not be addressed, as they were not applicable to the mapping. Of the 24 items that were applicable, five were identified as being absent from one of the formats. For instance, neither format addressed the obstacle ‘failure to define important terms.’ This prompted the addition of a definition of odds ratio to the case-based format, given the supporting evidence that statistics commonly found in medical journals are not readily understood by clinicians [45–48]. The decision was made to add this to only one shortened format, since the next tool development step will be to run focus groups to gain input from end-users. The focus groups will provide the opportunity to determine whether users perceive ‘odds ratio’ as an important term or as an unnecessary feature. All other intrinsic obstacles centered on the information being up-to-date, relevant, and authoritative or trustworthy. These issues were resolved by adding the full citation, along with the objectives of the study, to the evidence-expertise format.

For some of the intrinsic obstacles identified, it was not possible to find evidence to support how the prototype could be changed to address the obstacle. For example, the obstacle ‘resource not authoritative or trusted’ can be addressed explicitly in the review by including the citation for the original publication. In contrast, for the obstacle ‘resource is poorly organized,’ we searched the literature for a systematic review that offered evidence on designing informational text to make linkages with the best evidence available; however, none was found. For these obstacles, it was only possible to identify single studies and present these as supportive evidence from the literature. For example, the obstacle ‘resource is poorly organized’ was addressed by using titles and headings, and identifying literature that links this to better recall and comprehension for users [49–52]. Table 1 indicates whether each intrinsic obstacle was addressed, as well as identifying items that were not applicable. Additional file 3 provides comprehensive descriptions of how obstacles were addressed where relevant, and the actions taken when obstacles were not initially addressed in the prototypes.
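
To make the bookkeeping of the mapping exercise concrete, the following sketch represents a few obstacle-to-item links as a simple Python dictionary, flagging gaps that would prompt a revision; the entries paraphrase examples from the text and Table 1, and the structure itself is purely illustrative.

```python
# Illustrative bookkeeping for the mapping exercise: each intrinsic obstacle
# points to the prototype feature intended to address it (None = gap found,
# prompting a revision). Entries paraphrase examples given in the text.
obstacle_mapping = {
    "Resource not authoritative or trusted": "Full citation (authors, journal) included",
    "Resource is not current": "Full citation, including publication year, included",
    "Resource is poorly organized": "Titles and headings used throughout",
    "Failure to define important terms": None,  # gap: definition of odds ratio added later
}

gaps = [obstacle for obstacle, item in obstacle_mapping.items() if item is None]
print(f"{len(gaps)} obstacle(s) not yet addressed: {gaps}")
```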

Table 1 Mapping of intrinsic obstacles to items on prototypes

Phase 3: heuristic evaluation

The heuristic evaluation indicated that there were no major usability problems. Several moderate usability issues were identified, including wording that could confuse readers, inconsistent placement of information (e.g., an evidence rating appearing in different columns of tables), and omissions that could confuse readers (e.g., no evidence ratings for some treatments). Minor issues concerned the small text size and the layout of the case-based format. We used all feedback to modify the prototype formats.

Phase 4: clinical content review

The clinical content review revealed that the evidence-expertise format accurately reflected the information in the full-length review. One issue was detected in the case-based prototype, and the reviewer recommended modifications to the case that included not focusing on iatrogenic rosacea, removing references to prednicarbate, and using the term ‘family physician’ instead of ‘general practitioner.’ All of these changes were made to the case-based prototype. Additional file 4 and Additional file 5 provide the prototypes before and after the mapping exercise, heuristic evaluation, and clinical content review.

Discussion

We have described the components of the development process for two shortened formats of systematic reviews. Aside from the first phase of selecting the review and creating the initial prototypes, each component of the development process stimulated alterations within the two formats. The second phase mapped items within the prototypes to the obstacles doctors reported encountering while searching for evidence-based information, as described by Ely and colleagues [8]. Most obstacles were already addressed within the prototypes, but some changes were prompted, such as adding the citation in order to address the obstacle ‘resource is not current.’ The heuristic evaluation and clinical content review stimulated additional modifications. None were significant, and the clinical content review prompted amending the content of the case study offered in one of the prototypes.

Although shortened formats may be familiar and currently available to clinicians, no formal evaluations of these formats have been published. This was confirmed when the 8,104 relevant records of published and gray literature from our systematic reviews were also examined for studies describing alternate formats [9, 10]. Our literature review found no alternate formats for presenting systematic reviews that had been developed, tested, and evaluated in a rigorous manner for healthcare professionals.

Limitations

The development process for the prototypes described in this paper needs to be considered within the context of certain limitations. It may not be possible for all groups to collaborate directly with a human factors engineer when developing information tools. One consideration is to hire consultants for this expertise and include this cost in the budget of research grants. Alternatively, online resources can be used to provide guidance [16]. A single reviewer was used for both the heuristic evaluation and the clinical content review. Although using more than one reviewer has the potential to identify more problems, a pragmatic approach was taken, and cost-benefit considerations guided this decision. For the clinical content review, using one reviewer was also influenced by the fact that the full-length systematic review came from a peer-reviewed journal, meaning the clinical content had already undergone peer review. We made these decisions with the knowledge that this process was the first step in a multi-step strategy in which the prototypes will be tested by end-users in a series of focus groups.

Conclusions

Reporting these steps and their outcomes has made the process for developing the two prototypes transparent for users and publishers. As well, it encompasses one step of developing a viable document, which we hope will increase its usability and uptake by end-users.

Future development

In the next step in the development of these prototypes, we plan to conduct focus groups with primary care physicians to gain their input on the format, presentation and layout of the revised prototypes now that all revisions have been made. The purpose of the focus groups is to generate essential components of the shortened systematic reviews, and to seek reactions to these prototypes and their potential for clinical decision making. This activity will provide the opportunity to hear from users about their requirements when using such tools, as well as to make changes and correct problems as they emerge. Iterative focus groups allow results to be incorporated quickly into the new design. This is an important step of the Knowledge-to-Action cycle that facilitates the tailoring of information tools to the needs of potential users [13]. Following this, we will complete usability testing. Finally, we will test the prototypes in a randomized trial to determine their impact on knowledge and ability to apply the evidence to clinical scenarios.

References

  1. Straus SE, Richardson WS, Glasziou P, Haynes RB: Evidence-based medicine: How to practice and teach EBM. 3rd edition. 2005, Edinburgh; New York: Elsevier/Churchill Livingstone

  2. Cochrane Collaboration. About Cochrane Reviews. Available at: http://www.cochrane.org/cochrane-reviews. Accessed March 3, 2013

  3. Alper BS, Hand JA, Elliott SG, Kinkade S, Hauan MJ, Onion DK, Sklar BM: How much effort is needed to keep up with the literature relevant for primary care?. J Med Libr Assoc. 2004, 92 (4): 429-437.

  4. Laupacis A, Straus S: Systematic reviews: time to address clinical and policy relevance as well as methodological rigor. Ann Intern Med. 2007, 147 (4): 273-274. 10.7326/0003-4819-147-4-200708210-00180.

  5. De Vito C, Nobile CG, Furnari G, Pavia M, De Giusti M, Angelillo IF, Villari P: Physicians’ knowledge, attitudes and professional use of RCTs and meta-analyses: a cross-sectional survey. Eur J Public Health. 2009, 19 (3): 297-302. 10.1093/eurpub/ckn134.

  6. Coumou HC, Meijman FJ: How do primary care physicians seek answers to clinical questions? A literature review. J Med Libr Assoc. 2006, 94 (1): 55-60.

  7. Grandage KK, Slawson DC, Shaughnessy AF: When less is more: a practical approach to searching for evidence-based answers. J Med Libr Assoc. 2002, 90 (3): 298-304.

  8. Ely JW, Osheroff JA, Ebell MH, Chambliss ML, Vinson DC, Stevermer JJ, Pifer EA: Obstacles to answering doctors’ questions about patient care with evidence: qualitative study. BMJ. 2002, 324 (7339): 710-10.1136/bmj.324.7339.710.

  9. Perrier L, Mrklas K, Lavis JN, Straus SE: Interventions encouraging the use of systematic reviews by health policymakers and managers: a systematic review. Implement Sci. 2011, 6: 43-10.1186/1748-5908-6-43.

  10. Perrier L, Mrklas K, Shepperd S, Dobbins M, McKibbon KA, Straus SE: Interventions encouraging the use of systematic reviews in clinical decision-making: a systematic review. J Gen Intern Med. 2011, 26 (4): 419-426. 10.1007/s11606-010-1506-7.

  11. Rosenbaum SE, Glenton C, Nylund HK, Oxman AD: User testing and stakeholder feedback contributed to the development of understandable and useful summary of findings tables for Cochrane reviews. J Clin Epidemiol. 2010, 63 (6): 607-619. 10.1016/j.jclinepi.2009.12.013.

  12. Rosenbaum SE, Glenton C, Oxman AD: Summary-of-findings tables in Cochrane reviews improved understanding and rapid retrieval of key information. J Clin Epidemiol. 2010, 63 (6): 620-626. 10.1016/j.jclinepi.2009.12.014.

  13. Graham ID, Logan J, Harrison MB, Straus SE, Tetroe J, Caswell W, Robinson N: Lost in knowledge translation: time for a map?. J Contin Educ Health Prof. 2006, 26 (1): 13-24. 10.1002/chp.47.

  14. Health Information Research Unit. McMaster PLUS. Available at: http://hiru.mcmaster.ca/hiru/HIRU_McMaster_PLUS_projects.aspx. Accessed February 15, 2013

  15. Licht DM, Polzella DJ, Boff K: Human Factors, Ergonomics, and Human Factors Engineering: An Analysis of Definitions. CSERIAC-89-01. 1989, CSERIAC: Wright Patterson AFB, Dayton, OH

  16. U.S. Department of Health & Human Services: HHS Web Communications and New Media Division. Available at: http://www.usability.gov. Accessed February 15, 2013

  17. McGowan J, Grad R, Pluye P, Hannes K, Deane K, Labrecque M, Welch V, Tugwell P: Electronic retrieval of health information by healthcare providers to improve practice and patient care. Cochrane Database of Systematic Reviews. 2009, 3

  18. Atkinson RA, Derry SJ, Renkl A, Wortham D: Learning from examples: Instructional principles from worked examples research. Rev Educ Res. 2000, 70 (2): 181-214. 10.3102/00346543070002181.

  19. Lee AY, Hutchison L: Improving learning from examples through reflection. J Exp Psychol Appl. 1998, 4 (3): 187-210.

  20. Robertson I, Kahney H: The use of examples in expository texts: Outline of an interpretation theory for text analysis. Instruc Sci. 1996, 24: 93-123. 10.1007/BF00120485.

  21. Moreno R, Mayer RE: Engaging students in active learning: The case for personalized multi-media messages. J Educ Psych. 2000, 92 (4): 724-733.

  22. Czuchry M, Dansereau DF: The generation and recall of personally relevant information. J Exp Educ. 1998, 66 (4): 293-315.

  23. Thomas JC: Story-based mechanisms of tacit knowledge transfer. 2002, ECSCW 2001 Workshop on Managing Tacit Knowledge

  24. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS: Evidence based medicine: what it is and what it isn’t. BMJ. 1996, 312 (7023): 71-72. 10.1136/bmj.312.7023.71.

  25. El Dib RP, Atallah AN, Andriolo RB: Mapping the Cochrane evidence for decision making in health care. J Eval Clin Pract. 2007, 13 (4): 689-692. 10.1111/j.1365-2753.2007.00886.x.

  26. Clarke L, Clarke M, Clarke T: How useful are Cochrane reviews in identifying research needs?. J Health Serv Res Policy. 2007, 12 (2): 101-103. 10.1258/135581907780279648.

  27. Lottridge DM, Chignell M, Danicic-Mizdrak R, Pavlovic NJ, Kushniruk A, Straus SE: Group differences in physician responses to handheld presentation of clinical evidence: a verbal protocol analysis. BMC Med Inform Decis Mak. 2007, 7: 22-10.1186/1472-6947-7-22.

  28. Ely JW, Osheroff JA, Ebell MH, Bergus GR, Levy BT, Chambliss ML, Evans ER: Analysis of questions asked by family physicians regarding patient care. West J Med. 2000, 172 (5): 315-319. 10.1136/ewjm.172.5.315.

  29. Ely JW, Osheroff JA, Chambliss ML, Ebell MH, Rosenbaum ME: Answering physicians’ clinical questions: obstacles and potential solutions. J Am Med Inform Assoc. 2005, 12 (2): 217-224.

  30. Ely JW, Osheroff JA, Ebell MH, Bergus GR, Levy BT, Chambliss ML, Evans ER: Analysis of questions asked by family doctors regarding patient care. BMJ. 1999, 319 (7206): 358-361.

  31. Ely JW, Levy BT, Hartz A: What clinical information resources are available in family physicians’ offices?. J Fam Pract. 1999, 48 (2): 135-139.

  32. Ely JW, Yankowitz J, Bowdler NC: Evaluation of pregnant women exposed to respiratory viruses. Am Fam Physician. 2000, 61 (10): 3065-3074.

  33. Ely JW, Osheroff JA, Gorman PN, Ebell MH, Chambliss ML, Pifer EA, Stavri PZ: A taxonomy of generic clinical questions: classification study. BMJ. 2000, 321 (7258): 429-432.

  34. Ely JW: Why can’t we answer our questions?. J Fam Pract. 2001, 50 (11): 974-975.

  35. Nielsen J, Molich R: Heuristic evaluation of user interfaces. Proceedings of the ACM CHI 90 Human Factors in Computing Systems Conference 1990. Edited by: Carrasco J, Whiteside J. 1990, Seattle, Washington,USA, 249-256.

  36. Nielsen J: Finding usability problems through heuristic evaluation. Proceedings of the ACM CHI 92 Human Factors in Computing Systems Conference June 3-7, 1992. Edited by: Bauersfeld P, Bennett J, Lynch G. 1992, Monterey, California, 373-380.

  37. Usability inspection methods. Edited by: Nielsen J, Mack R. 1994, New York: Wiley

  38. Nielsen J: Usability engineering. 1994, Boston: AP Professional

  39. Nielsen J: Determining the number of evaluators. Available at: http://www.useit.com/papers/heuristic/heuristic_evaluation.html#evaluatornumber. Accessed March 3, 2013

  40. Jovicic A, Chignell M, Wu R, Straus SE: Is Web-only self-care education sufficient for heart failure patients?. AMIA Annu Symp Proc. 2009, 2009: 296-300.

  41. Nielsen J: Heuristic evaluation. Usability inspection methods. Edited by: Nielsen J, Mack RL. 1994, New York: John Wiley & Sons

  42. Nielsen J: How to conduct a usability evaluation. Available at: http://www.useit.com/papers/heuristic/heuristic_evaluation.html. Accessed March 3, 2013

  43. van Zuuren EJ, Gupta AK, Gover MD, Graber M, Hollis S: Systematic review of rosacea treatments. J Am Acad Dermatol. 2007, 56 (1): 107-115. 10.1016/j.jaad.2006.04.084.

  44. Altman DG: Practical Statistics for Medical Research. 1991, London, England: Chapman and Hall, 404-

  45. Cranney M, Walley T: Same information, different decisions: the influence of evidence on the management of hypertension in the elderly. Br J Gen Pract. 1996, 46 (412): 661-663.

  46. Young JM, Glasziou P, Ward JE: General practitioners’ self ratings of skills in evidence based medicine: validation study. BMJ. 2002, 324 (7343): 950-951. 10.1136/bmj.324.7343.950.

  47. O’Donnell CA: Attitudes and knowledge of primary care professionals towards evidence-based practice: a postal survey. J Eval Clin Pract. 2004, 10 (2): 197-205. 10.1111/j.1365-2753.2003.00458.x.

  48. Allen M, MacLeod T, Handfield-Jones R, Sinclair D, Fleming M: Presentation of evidence in continuing medical education programs: a mixed methods study. J Contin Educ Health Prof. 2010, 30 (4): 221-228.

  49. Hartley J: Designing instructional and informational text. Handbook of research on educational communications and technology. Edited by: Jonassen D. 2004, Mahwah, NJ: Lawrence Erlbaum Associates, Publishers, 917-948. 2

  50. Wilhite SC: Headings as memory facilitators: the importance of prior knowledge. J Educ Psych. 1989, 81: 115-117.

  51. Niegemann HM: Influences of titles on the recall of instructional texts. Discourse processing. Edited by: Flammer A, Kintsch W. 1982, Amsterdam: North-Holland

  52. Sadoski M, Goetz ET, Rodriguez M: Engaging texts: effects of concreteness on comprehensibility, interest, and recall in four text types. J Educ Psych. 2000, 92 (1): 85-95.

  53. Lakoff G: Hedges: a study in meaning criteria and the logic of fuzzy concepts. Journal of Philosophical Logic. 1973, 2: 458-508.

  54. Riggle KB: Using the active and passive voice appropriately in on-the-job writing. J Tech Writ Commun. 1998, 28 (1): 85-117.


Acknowledgements

We are grateful to Andreas Laupacis, Li Ka Shing Knowledge Institute, St. Michael’s Hospital, for his guidance and advice on the methods of the study. We would like to express our appreciation to Aleksandra Jovicic for her participation in the heuristic evaluation.

Funding source

Canadian Institutes of Health Research. The funding source had no role in the study design, collection, analysis and interpretation of results, in the writing of the report, or in the decision to submit the paper for publication.

Author information

Corresponding author

Correspondence to Laure Perrier.

Additional information

Competing interests

Sharon E. Straus is an Associate Editor for Implementation Science. All editorial decisions regarding this manuscript were made independently by other editors. None disclosed for all other authors.

Authors’ contributions

SES conceived of the idea. JG and KAM provided guidance and advice on the methods of the study. LP, MK and NP participated in the mapping exercise. NP conducted the clinical content review. LP and AK prepared the original prototypes. LP wrote the manuscript, and all authors provided editorial advice. All authors read and approved the final manuscript.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Perrier, L., Persaud, N., Ko, A. et al. Development of two shortened systematic review formats for clinicians. Implementation Sci 8, 68 (2013). https://doi.org/10.1186/1748-5908-8-68
