Citation indexes can be used to identify studies that are similar to a study report of interest, as it is probable that other reports citing or cited by a study will contain similar or related content. Search appropriate national, regional and subject-specific bibliographic databases.
Databases relevant to the review topic should be covered. Initiatives to provide access to ongoing studies and unpublished data constitute a fast-moving field (Isojarvi et al). It is important to identify ongoing studies, so that when a review is updated these can be assessed for possible inclusion.
Awareness of the existence of a possibly relevant ongoing study and its expected completion date might affect not only decisions with respect to when to update a specific review, but also when to aim to complete a review. Even when studies are completed, some are never published. Finding out about unpublished studies, and including their results in a systematic review when eligible and appropriate (Cook et al), is important for minimizing bias.
Several studies and other articles addressing issues around identifying unpublished studies have been published (Easterbrook et al, Weber et al, Manheimer and Anderson, MacLean et al, Lee et al, Chan, Bero, Schroll et al, Chapman et al, Kreis et al, Scherer et al, Hwang et al, Lampert et al). There is no easy and reliable single way to obtain information about studies that have been completed but never published.
There have, however, been several important initiatives resulting in better access to studies and their results from sources other than the main bibliographic databases and journals. These include trials registers and trials results registers (see Section 4). A recent study (Halfpenny et al) assessed the value and usability for systematic reviews and network meta-analyses of data from trials registers, CSRs and regulatory authorities, and concluded that data from these sources have the potential to influence systematic review results.
A Cochrane Methodology Review examined studies assessing methods for obtaining unpublished data and concluded that those carrying out systematic reviews should continue to contact authors for missing data, and that email contact was more successful than other methods (Young and Hopewell). An annotated bibliography of published studies addressing searching for unpublished studies and obtaining access to unpublished data is also available (Arber et al). One particular study focused on the contribution of unpublished studies, including dissertations, and studies in languages other than English, to the results of meta-analyses in reviews relevant to children (Hartling et al). They found that, in their sample, unpublished studies and studies in languages other than English rarely had any impact on the results and conclusions of the review.
Correspondence can be an important source of information about unpublished studies. It is highly desirable for authors of Cochrane Reviews of interventions to contact relevant individuals and organizations for information about unpublished or ongoing studies (see MECIR Box 4).
Letters of request for information can be used to identify completed but unpublished studies. One way of doing this is to send a comprehensive list of relevant articles along with the eligibility criteria for the review to the first author of reports of included studies, asking if they know of any additional studies (ongoing or completed; published or unpublished) that might be relevant.
This approach may be especially useful in areas where there are few trials or a limited number of active research groups. It may also be desirable to send the same letter to other experts and pharmaceutical companies or others with an interest in the area.
Some review teams set up websites for systematic review projects, listing the studies identified to date and inviting submission of information on studies not already listed. A recent study assessed the value of contacting trial authors and concluded that data supplied by authors modified the outcomes of some systematic reviews, but this was poorly reported in the reviews (Meursinge Reynders et al). Another case study of a Cochrane Methodology Review reported that making contact with clinical trials units and trial methodologists provided data for six of the 38 RCTs included in the review, which had not been identified through other search methods (Brueton et al).

C31: Searching by contacting relevant individuals and organizations (Highly desirable).
Contact relevant individuals and organizations for information about unpublished or ongoing studies. It is important to identify ongoing studies, so that these can be assessed for possible inclusion when a review is updated. Asking researchers for information about completed but never published studies has not always been found to be fruitful (Hetherington et al, Horton), though some researchers have reported that this is an important method for retrieving studies for systematic reviews (Royle and Milne, Greenhalgh and Peacock, Reveiz et al). A recent study reported successful outcomes of a digital media strategy to obtain unpublished data from trial authors (Godard-Sebillotte et al). A study assessed the value of requesting information from drug manufacturers for systematic reviews and concluded that this helped to reduce reporting and publication bias and helped to fill important gaps, sometimes leading to new or altered conclusions, primarily where no other evidence existed (McDonagh et al). The RIAT (Restoring Invisible and Abandoned Trials) initiative (Doshi et al) aims to address the problems outlined above by offering a methodology that allows others to re-publish mis-reported trials and to publish unreported trials.
Anyone who can access the trial data and document trial abandonment can use this methodology. The RIAT Support Centre offers free-of-charge support and competitive funding to researchers interested in this approach. It has also been suggested that legislation such as Freedom of Information Acts in various countries might be used to gain access to information about unpublished trials (Bennett and Jull, MacLean et al). A recent study suggested that trials registers are an important source for identifying additional randomized trials (Baudard et al). A recent audit by Cochrane investigators showed that the majority of Cochrane Reviews do comply with this standard (Berber et al). Although there are many other trials registers, ClinicalTrials.gov and the WHO ICTRP portal are the most important to search.
Research has shown that even though ClinicalTrials.gov records are included in the ICTRP, searching the ICTRP alone may not retrieve all of them. The extent to which this might still be the case with the new ICTRP interface, released in its final version in June (see online Technical Supplement), remains to be ascertained. Therefore, the current guidance that it is not sufficient to search the ICTRP alone still stands, pending further research.
Guidance for searching these and other trials registers is provided in the online Technical Supplement. In addition to Cochrane, other organizations also advocate searching trials registers. There has been an increasing acceptance by investigators of the importance of registering trials at inception and providing access to their trials results. Despite perceptions and even assertions to the contrary, however, there is no global, universal legal requirement to register clinical trials at inception or at any other stage in the process, although some countries are beginning to introduce such legislation (Viergever and Li). Efforts have been made by a number of organizations, including organizations representing the pharmaceutical industry and individual pharmaceutical companies, to begin to provide central access to ongoing trials and in some cases trial results on completion, either on a national or international basis.
Increasingly, as already noted, trials registers such as ClinicalTrials.gov also provide access to trial results. Search trials registers and repositories of results, where relevant to the topic, through ClinicalTrials.gov and the WHO ICTRP portal. Although ClinicalTrials.gov is among the registers covered by the ICTRP, it should also be searched directly. A number of organizations, including Cochrane, recommend searching regulatory agency sources and clinical study reports. Details of these are provided in the online Technical Supplement. Clinical study reports (CSRs) are the reports of clinical trials providing detailed information on the methods and results of clinical trials submitted in support of marketing authorization applications.
Further details of this and other resources are available in the online Technical Supplement. A recent study by Jefferson and colleagues (Jefferson et al), which looked at the use of regulatory documents in Cochrane Reviews, found that understanding within the Cochrane community was limited, and that guidance and support would be required if review authors were to engage with regulatory documents as a source of evidence.
Specifically, guidance on how to use data from regulatory sources is needed. The online Technical Supplement describes several other important sources of reports of studies. Review authors may also consider searching the internet, handsearching journals and searching full texts of journals electronically where available see online Technical Supplement for details.
They should examine previous reviews on the same topic and check reference lists of included studies and relevant systematic reviews (see MECIR Box 4). Search relevant grey literature sources such as reports, dissertations, theses and conference abstracts. Check reference lists in included studies and any relevant systematic reviews identified.
This section highlights some of the issues to consider when designing search strategies. Designing search strategies can be complex and the section does not fully address the many complexities in this area. Many of the issues highlighted relate to both the subject aspects of the search (e.g. the PICO concepts) and the study design aspects. For a search to be robust, both aspects require attention to be sure that relevant records are not missed.
Further evidence-based information about designing search strategies can be found on the SuRe Info portal, which is updated twice per year. If the review has specific eligibility criteria around study design to address adverse effects, economic issues or qualitative research questions, undertake searches to address them.
Sometimes a review will address questions about adverse effects, economic issues or qualitative research using a different set of eligibility criteria from the main effectiveness component. In such situations, the searches for evidence must be suitable to identify relevant study designs for these questions. Different searches may need to be conducted for different types of evidence. The starting point for developing a search strategy is to consider the main concepts being examined in a review.
For a Cochrane Review, the review objective should provide the PICO concepts, and the eligibility criteria for studies to be included will further assist in the selection of appropriate subject headings and text words for the search strategy. The structure of search strategies in bibliographic databases should be informed by the main concepts of the review (see Chapter 3), using appropriate elements from PICO and study design (see MECIR Box 4).
Although a research question may specify particular comparators or outcomes, these concepts may not be well described in the title or abstract of an article and are often not well indexed with controlled vocabulary terms. Therefore, in general databases, such as MEDLINE, a search strategy will typically have three sets of terms: (i) terms to search for the health condition of interest, i.e. the population; (ii) terms to search for the intervention(s) evaluated; and (iii) terms to search for the types of study design to be included.
Typically, a broad set of search terms will be gathered for each concept and combined with the OR Boolean operator to achieve sensitivity within concepts. The results for each concept are then combined using the AND Boolean operator, to ensure each concept is represented in the final search results. It is important to consider the structure of the search strategy on a question-by-question basis. In some cases it is possible and reasonable to search for the comparator, for example if the comparator is explicitly placebo; in other cases the outcomes may be particularly well defined and consistently reported in abstracts.
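The OR-within-concepts, AND-across-concepts structure described above can be sketched in a few lines of code. This is an illustrative sketch only: the concept term lists below are invented examples rather than a validated strategy, and each database interface has its own query syntax.

```python
# Sketch: combining concept term sets with Boolean operators.
# The terms below are invented examples, not a validated search strategy.

def or_group(terms):
    """Combine synonyms for one concept with OR (sensitivity within a concept)."""
    return "(" + " OR ".join(terms) + ")"

def build_query(*concepts):
    """Combine concept groups with AND so every concept must be represented."""
    return " AND ".join(or_group(c) for c in concepts)

population = ["asthma", "wheez*"]
intervention = ["salmeterol", "beta agonist*"]
design = ["randomized controlled trial", "random*"]

query = build_query(population, intervention, design)
print(query)
# (asthma OR wheez*) AND (salmeterol OR beta agonist*) AND (randomized controlled trial OR random*)
```

In a real strategy each concept group would mix controlled vocabulary and free-text terms, but the Boolean skeleton is the same.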
The advice on whether or not to search for outcomes differs for adverse effects from the advice given above (see the chapter on adverse effects). Inform the structure of search strategies in bibliographic databases around the main concepts of the review, using appropriate elements from PICO and study design. In structuring the search, maximize sensitivity whilst striving for reasonable precision.
Inappropriate or inadequate search strategies may fail to identify records that are included in bibliographic databases. The structure of a search strategy should be based on the main concepts being examined in a review. In general databases, such as MEDLINE, a search strategy to identify studies for a Cochrane Review will typically have three sets of terms: (i) terms to search for the health condition of interest, i.e. the population; (ii) terms to search for the intervention(s) evaluated; and (iii) terms to search for the types of study design to be included.
There are exceptions, however. For instance, for reviews of complex interventions, it may be necessary to search only for the population or the intervention. Some search strategies may not easily divide into the structure suggested, particularly for reviews addressing complex or unknown interventions, or diagnostic tests (Huang et al, Irvin and Hayden, Petticrew and Roberts, de Vet et al, Booth), or using specific approaches such as realist reviews, which may require iterative searches and multiple search strategies (Booth et al). Cochrane Reviews of public health interventions and of qualitative data may adopt very different search approaches to those described here (Lorenc et al, Booth); see Chapter 17 on intervention complexity, and Chapter 21 on qualitative evidence.
Searches for systematic reviews aim to be as extensive as possible in order to ensure that as many of the relevant studies as possible are included in the review. It is, however, necessary to strike a balance between striving for comprehensiveness and maintaining relevance when developing a search strategy.
Sensitivity is defined as the number of relevant reports identified divided by the total number of relevant reports in the resource. Precision is defined as the number of relevant reports identified divided by the total number of reports identified.
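These two definitions can be expressed directly as a calculation. The counts below are invented for illustration; in practice the total number of relevant reports in a resource is rarely known exactly.

```python
# Sketch: sensitivity and precision of a search, per the definitions above.
# All counts are invented for illustration.

def sensitivity(relevant_retrieved, total_relevant):
    """Relevant reports identified / total relevant reports in the resource."""
    return relevant_retrieved / total_relevant

def precision(relevant_retrieved, total_retrieved):
    """Relevant reports identified / total reports identified."""
    return relevant_retrieved / total_retrieved

# A search retrieves 2,000 records; 40 are relevant; 50 relevant reports exist.
print(sensitivity(40, 50))   # 0.8
print(precision(40, 2000))   # 0.02
```

The example illustrates the trade-off discussed next: a highly sensitive search (0.8 here) typically has very low precision (0.02 here).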
Increasing the comprehensiveness or sensitivity of a search will reduce its precision and will usually retrieve more non-relevant reports. Article abstracts identified through a database search can usually be screened very quickly to ascertain potential relevance. At a conservatively estimated reading rate of one or two abstracts per minute, the results of a database search can be screened at the rate of 60-120 per hour (or approximately 500-1000 over an 8-hour period), so the high yield and low precision associated with systematic review searching may not be as daunting as it might at first appear in comparison with the total time to be invested in the review.
Table 4. This section should be read in conjunction with Section 3. Search strategies typically combine two types of search terms. One is based on text words, that is, terms occurring in the title, abstract or other relevant fields available in the database. The other is based on standardized subject terms assigned to the references either by indexers (specialists who appraise the articles and describe their topics by assigning terms from a specific thesaurus or controlled vocabulary) or automatically, using automated indexing approaches.
Searches for Cochrane Reviews should use an appropriate combination of these two approaches, i.e. both text words and controlled vocabulary terms. Approaches for identifying text words and controlled vocabulary to combine appropriately within a search strategy, including text mining approaches, are presented in the online Technical Supplement.
C33: Developing search strategies for bibliographic databases (Mandatory). Identify appropriate controlled vocabulary (e.g. MeSH, Emtree, including ‘exploded’ terms) and free-text terms (considering, for example, spelling variants, synonyms, acronyms, truncation and proximity operators).
Search strategies need to be customized for each database. The same principle applies to Emtree when searching Embase and also to a number of other databases. In order to be as comprehensive as possible, it is necessary to include a wide range of free-text terms for each of the concepts selected.
This might include the use of truncation and wildcards. Developing a search strategy is an iterative process in which the terms that are used are modified, based on what has already been retrieved. Searches should capture as many studies as possible that meet the eligibility criteria, ensuring that relevant time periods and sources are covered and that searches are not restricted by language or publication status (see MECIR Box 4). Review authors should justify the use of any restrictions in the search strategy on publication date and publication format (see MECIR Box 4).
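As a rough illustration of what truncation and single-character wildcards do, the sketch below mimics them with regular expressions. The `*` and `?` symbols and the example terms are assumptions for illustration; actual truncation and wildcard syntax varies between database interfaces and should be checked in each system's documentation.

```python
import re

# Sketch: mimicking database truncation (*) and a single-character wildcard (?)
# with regular expressions. Illustrative only; real syntax varies by interface.

def term_to_regex(term):
    """Translate a truncated/wildcarded search term into a whole-word regex."""
    pattern = re.escape(term).replace(r"\*", r"\w*").replace(r"\?", r"\w")
    return re.compile(r"^" + pattern + r"$", re.IGNORECASE)

rx = term_to_regex("random*")
print(bool(rx.match("randomized")))     # True
print(bool(rx.match("randomisation")))  # True

wc = term_to_regex("wom?n")
print(bool(wc.match("woman")), bool(wc.match("women")))  # True True
```

The second example shows why wildcards help with spelling variants (woman/women, randomize/randomise) that would otherwise each need their own term.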
To reduce the risk of introducing bias, searches should not be restricted by language. Recommendations for rapid review searches to limit publication language to English and add other languages only when justified (Garritty et al) are supported by evidence that excluding non-English studies does not change the conclusions of most systematic reviews (Morrison et al, Jiao et al, Hartling et al, Nussbaumer-Streit et al). However, exceptions where non-English studies do influence review findings have been observed for complementary and alternative medicine (Moher et al, Pham et al, Wu et al) and for psychiatry, rheumatology and orthopaedics (Egger et al). Additionally, when searches are limited to English or to databases containing only English-language articles, there is a risk that eligible studies may be missed from countries where a particular intervention of interest is more common.
For further discussion of these issues see Chapter. Particularly when resources and time are available, the inclusion of non-English studies in systematic reviews is recommended to minimize the risk of language bias (Egger et al, Pilkington et al, Morrison et al). It has also been argued that, when language restrictions are justified, these should not be imposed by limiting the search but by including language as an eligibility criterion during study selection (Pieper and Puljak). Further use of a supportive narrative may help explain why a particular date restriction was applied (Craven and Levay, Cooper et al b).
For example, a date restriction running from the approval date of the first device to the current date is justified for a review of nurse-led community training of epinephrine autoinjectors (Center for Drug Evaluation and Research). Conversely, arbitrary date restrictions intended simply to reduce search yield are not justified. Caution should be exercised when designing database search strategies with date restrictions.
Information specialists should be aware of the various date fields available from database providers (e.g. date of publication versus date of entry into the database). It may be necessary to search additional sources or datafiles to ensure adequate coverage of the date period of interest for the review. Inconsistent publication dates in database records (e.g. differing print and online publication dates) should also be taken into account. As any information about an eligible study may contain valuable details for analysis, document format restrictions should not be applied to systematic review searches.
For example, excluding letters is not recommended because letters may contain important additional information relating to an earlier trial report, or new information about a trial not reported elsewhere (Iansavichene et al). As with comments and letters, preprints (versions of scientific articles that precede formal peer review and publication in a journal) should also be considered a potentially relevant source of study evidence.
Recent and widespread availability of preprints has resulted from an urgent demand for emerging evidence during the COVID-19 pandemic (Gianola et al, Kirkham et al, Callaway, Fraser et al). As study data are often reported in multiple publications and may be reported differently in each (Oikonomidi et al), efforts to identify all reports for eligible studies, regardless of publication format, are necessary to support subsequent stages of the review process to select, assess and analyse complete study data.
Justify the use of any restrictions in the search strategy on publication date and publication format. Date restrictions in the search should only be used when there are date restrictions in the eligibility criteria for studies. They should be applied only if it is known that relevant studies could only have been reported during a specific time period, for example if the intervention was only available after a certain time point. Searches for updates to reviews might naturally be restricted by date of entry into the database rather than date of publication to avoid duplication of effort.
Publication format restrictions (e.g. excluding letters) should not be applied. When considering the eligibility of studies for inclusion in a Cochrane Review, it is important to be aware that some studies may have been found to contain errors or to be fraudulent, or may, for other reasons, have been corrected or retracted since publication.
For review updates, it is important to search MEDLINE and Embase for the latest version of the citations to the records for the previously included studies, in case they have since been corrected or retracted. Errata are published to correct unintended errors (accepted as errors by the author(s)) that do not invalidate the conclusions of the article. Including data from studies that are fraudulent or studies that include errors can have an impact on the overall estimates in systematic reviews.
There is an increasing awareness of the importance of not including retracted studies or those with significant errata in systematic reviews, and of how best to avoid this (Royle and Waugh, Wright and McDaid, Decullier et al). A recent study, however, showed that even when review authors suspect research misconduct, including data falsification, in the trials that they are considering including in their systematic reviews, they do not always report it (Elia et al). Details of how to identify fraudulent studies, other retracted publications, errata and comments are described in the online Technical Supplement.
Some studies may have been found to be fraudulent or may have been retracted since publication for other reasons. Errata can reveal important limitations, or even fatal flaws, in included studies. All of these may lead to the potential exclusion of a study from a review or meta-analysis. Care should be taken to ensure that this information is retrieved in all database searches by downloading the appropriate fields, together with the citation data.
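As a sketch of how such a check might be automated once the appropriate fields have been downloaded, the snippet below flags records whose publication-type field suggests a retraction or correction. The record structure is hypothetical; the publication-type labels used are those assigned in MEDLINE.

```python
# Sketch: flagging downloaded records whose publication types indicate a
# retraction, erratum or comment. Record dictionaries here are hypothetical;
# the publication-type labels are MEDLINE's.

FLAG_TYPES = {
    "Retracted Publication",
    "Retraction of Publication",
    "Published Erratum",
    "Comment",
}

def needs_review(record):
    """True if the record's publication types warrant a second look."""
    return bool(FLAG_TYPES & set(record.get("publication_types", [])))

records = [
    {"pmid": "111", "publication_types": ["Journal Article"]},
    {"pmid": "222", "publication_types": ["Journal Article", "Retracted Publication"]},
]
flagged = [r["pmid"] for r in records if needs_review(r)]
print(flagged)  # ['222']
```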
Search filters are search strategies that are designed to retrieve specific types of records, such as those of a particular methodological design.
When searching for randomized trials in humans, a validated filter should be used to identify studies with the appropriate design (see MECIR Box 4). Published collections of filters include, amongst others, filters for identifying systematic reviews, randomized and non-randomized studies, and qualitative research in a range of databases and across a range of service providers (Glanville et al). For further discussion around the design and use of search filters, see the online Technical Supplement.
Use specially designed and tested search filters where appropriate, including the Cochrane Highly Sensitive Search Strategies for identifying randomized trials in MEDLINE, but do not use filters in pre-filtered databases. Search filters should be used with caution. They should be assessed not only for the reliability of their development and reported performance, but also for their current accuracy, relevance and effectiveness, given the frequent interface and indexing changes affecting databases.
It is strongly recommended that search strategies should be peer reviewed before the searches are run. Peer review of search strategies is increasingly recognized as a necessary step in designing and executing high-quality search strategies to identify studies for possible inclusion in systematic reviews.
Studies have shown that errors occur in the search strategies underpinning systematic reviews and that search strategies are not always conducted or reported to a high standard (Mullins et al, Layton, Salvador-Olivan et al). This has also been shown to be the case within some Cochrane Reviews (Franco et al). Research has shown that peer review using a specially designed checklist can improve the quality of searches both in systematic reviews (Relevo and Paynter, Spry et al) and in rapid reviews (Spry et al, Spry and Mierzwinski-Urban). The PRESS checklist covers not only the technical accuracy of the strategy (line numbers, spellings, etc.) but also how well the strategy reflects the research question.
It is recommended that authors provide information on the search strategy development and peer review processes. For Cochrane Reviews, the names, credentials, and institutions of the peer reviewers of the search strategies should be noted in the review (with their permission) in the Acknowledgments section.

In practice, alerts are based on a previously developed search strategy, which is saved in a personal account on the database platform.
These saved strategies filter the content as the database is being updated with new information. The account owner is notified usually via email when new publications meeting their specified search parameters are added to the database.
In the case of PubMed, the alert can be set up to be delivered weekly or monthly, or in real-time and can comprise email or RSS feeds. For review authors, alerts are a useful tool to help monitor what is being published in their review topic after the original search has been conducted. Authors should consider setting up alerts so that the review can be as current as possible at the time of publication. Another way of attempting to stay current with the literature as it emerges is by using alerts based on journal tables of contents TOCs.
These usually cannot be specifically tailored to the information needs in the same way as search strategies developed to cover a specific topic. They can, however, be a good way of trying to keep up to date on a more general level by monitoring what is currently being published in journals of interest. Many journals, even those that are available by subscription only, offer TOC alert services free of charge.
In addition, a number of publishers and organizations offer TOC services (see online Technical Supplement). Use of TOCs is not proposed as a single alternative to the various other methods of study identification necessary for undertaking systematic reviews, but rather as a supplementary method. See also Chapter 22. Alerts should also be considered for sources beyond databases and journal TOCs, such as trials register resources and regulatory information.
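For PubMed specifically, an alert-style query can also be approximated programmatically via the NCBI E-utilities ESearch service, by restricting to records added recently. The sketch below only constructs the request URL (no request is made); the search term is an invented example, and E-utilities usage policies should be checked before automating requests.

```python
from urllib.parse import urlencode

# Sketch: building a PubMed E-utilities ESearch URL that mimics a weekly alert
# by limiting to records with an Entrez date in the last 7 days.
# The search term is illustrative only; no network request is made here.

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def alert_url(term, days=7):
    """Return an ESearch URL for records added within the last `days` days."""
    params = {
        "db": "pubmed",
        "term": term,
        "reldate": days,       # relative date window
        "datetype": "edat",    # Entrez (record-added) date
        "retmode": "json",
    }
    return BASE + "?" + urlencode(params)

url = alert_url("asthma AND salmeterol")
print(url)
```

Fetching this URL on a schedule (and diffing against previously seen record IDs) approximates the saved-search alerts described above.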
The published review should be as up to date as possible. Searches for all the relevant databases should be rerun prior to publication if the initial search date is more than 12 months (preferably six months) from the intended publication date (see MECIR Box 4). This is also good practice for searches of non-database sources. The results should also be screened to identify potentially eligible studies. Ideally, the studies should be incorporated fully in the review. Rerun or update searches for all relevant sources within 12 months before publication of the review or review update, and screen the results for potentially eligible studies.
The search must be rerun close to publication if the initial search date is more than 12 months (preferably six months) from the intended publication date, and the results screened for potentially eligible studies. Fully incorporate any studies identified in the rerun or update of the search within 12 months before publication of the review or review update. After the rerun of the search, the decision whether to incorporate any new studies fully into the review will need to be balanced against the delay in publication.
Developing a search is often an iterative and exploratory process. It involves exploring trade-offs between search terms and assessing their overall impact on the sensitivity and precision of the search. It is often difficult to decide in a scientific or objective way when a search is complete and search strategy development can stop. The ability to decide when to stop typically develops through experience of developing many strategies. Suggestions for stopping rules have been made around the retrieval of new records, for example to stop if adding a series of new terms to a database search strategy yields no new relevant records, or if precision falls below a particular cut-off point (Chilcott et al). Stopping might also be appropriate when the removal of terms or concepts results in missing relevant records.
Another consideration is the amount of evidence that has already accrued: in topics where evidence is scarce, authors might need to be more cautious about deciding when to stop searching. Although many methods have been described to assist with deciding when to stop developing the search, there has been little formal evaluation of the approaches (Booth, Arber and Wood). At a basic level, investigation is needed as to whether a strategy is performing adequately. One simple check is whether the strategy retrieves the relevant records that are already known to the review team.
It is not enough, however, for the strategy to find only those records; otherwise this might be a sign that the strategy is biased towards known studies and other relevant records might be being missed. In addition, citation searches can be used as a check (see online Technical Supplement, Section 1). If those additional methods are finding documents that the searches have already retrieved, but that the team did not necessarily know about in advance, then this is one sign that the strategy might be performing adequately.
If some of the PRESS dimensions seem to be missing without adequate explanation, or arouse concerns, then the search may not yet be complete. Statistical techniques can be used to assess performance, such as capture-recapture (Spoor et al, Ferrante di Ruffano et al; also known as capture-mark-recapture: Kastner et al, Lane et al), or the relative recall technique (Sampson et al, Sampson and McGowan). Kastner suggests the capture-mark-recapture technique merits further investigation, since it could be used to estimate the number of studies in a literature prospectively and to determine where to stop searches once suitable cut-off levels have been identified.
This would potentially entail an iterative search and selection process. Capture-recapture needs results from at least two searches to estimate the number of missed studies. Further investigation of published prospective techniques seems warranted to learn more about their potential benefits. Relative recall (Sampson et al; Sampson and McGowan) requires a range of searches to have been conducted, so that the set of relevant studies has been built up by a series of sensitive searches. The performance of the individual searches can then be assessed in each database by determining how many of the studies that were deemed eligible for the evidence synthesis, and were indexed within that database, can be found by the database search used to populate the synthesis.
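As a toy illustration of the capture-recapture idea, the Chapman-adjusted Lincoln-Petersen estimator can project the total number of relevant studies from the overlap between two independent searches. This is a minimal sketch under the usual closed-population assumptions; the numbers and function name are invented for illustration, not from the Handbook.

```python
# Illustrative capture-recapture (Chapman-adjusted Lincoln-Petersen)
# estimate of the total number of relevant studies, given two
# independent searches of the same literature.

def estimate_total_relevant(n1: int, n2: int, overlap: int) -> float:
    """Estimate the total number of relevant studies.

    n1, n2  -- relevant records found by search 1 and search 2
    overlap -- relevant records found by both searches
    """
    # Chapman's adjustment avoids division by zero and reduces
    # small-sample bias compared with the raw n1 * n2 / overlap form.
    return (n1 + 1) * (n2 + 1) / (overlap + 1) - 1

# Example: search 1 finds 40 relevant records, search 2 finds 30,
# and 20 records are found by both (so 50 unique records in hand).
total = estimate_total_relevant(40, 30, 20)
missed = total - (40 + 30 - 20)
print(round(total, 1), round(missed, 1))  # → 59.5 9.5
```

A large estimated number of missed studies suggests the searches are not yet sensitive enough and further strategy development is warranted.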
If a search in a database did not perform well and missed many studies, then that search strategy is likely to have been suboptimal. If the search strategy found most of the studies that were available to be found in the database, then it was likely to have been a sensitive strategy.
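The relative recall calculation described above can be sketched directly: for each database, recall is the fraction of eligible, database-indexed studies that the database's own search strategy actually retrieved. The study identifiers below are invented for illustration.

```python
# Minimal sketch of relative recall for a single database search.
# "Gold set" = studies deemed eligible for the synthesis; recall is
# assessed against the subset of that set indexed in this database.

def relative_recall(found_by_search: set, eligible_indexed: set) -> float:
    """Fraction of eligible, database-indexed studies that the
    database's search strategy retrieved."""
    if not eligible_indexed:
        raise ValueError("no eligible studies indexed in this database")
    return len(found_by_search & eligible_indexed) / len(eligible_indexed)

eligible_in_db = {"study01", "study02", "study03", "study04"}
retrieved = {"study01", "study02", "study04", "study09"}  # study09 ineligible

print(relative_recall(retrieved, eligible_in_db))  # 3 of 4 found → 0.75
```

A low value for one database flags that strategy as likely suboptimal; a value near 1.0 suggests a sensitive strategy, as the text describes.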
Assessments of precision could also be made, but these mostly inform future search approaches, since they cannot affect the searches and record assessment already undertaken. Relative recall may be most useful at the end of the search process, since it relies on the achievement of several searches to make judgements about the overall performance of the strategies. In evidence synthesis involving qualitative data, searching is often more organic and intertwined with the analysis, such that searching stops when new information ceases to be identified (Booth). The reasons for stopping need to be documented, and it is suggested that explanations or justifications for stopping may centre around saturation (Booth). Further information on searches for qualitative evidence can be found in the chapter on qualitative evidence. Review authors should document the search process in enough detail to ensure that it can be reported correctly in the review (see MECIR Box 4).
The searches of all the databases should be reproducible to the extent that this is possible. By documenting the search process, we refer to internal record-keeping, which is distinct from reporting the search process in the review (discussed in online Chapter III). Document the search process in enough detail to ensure that it can be reported correctly in the review.
The search process (including the sources searched, when, by whom, and using which terms) needs to be documented in enough detail throughout the process to ensure that it can be reported correctly in the review, to the extent that all the searches of all the databases are reproducible. Suboptimal reporting of systematic review search activities and methods has been observed (Sampson et al; Roundtree et al; Niederstadt and Droste). Research has also shown a lack of compliance with guidance in the Handbook with respect to search strategy description in published Cochrane Reviews (Sampson and McGowan; Yoshii et al; Franco et al). The lack of consensus regarding optimal reporting has been a challenge with respect to the values of transparency and reproducibility.
These recommendations may influence record keeping practices of searchers. For Cochrane Reviews, the bibliographic database search strategies should be copied and pasted into an appendix exactly as run and in full, together with the search set numbers and the total number of records retrieved by each search strategy.
The search strategies should not be re-typed, because this can introduce errors. The same process is also good practice for searches of trials registers and other sources, where the interface used (such as introductory or advanced) should also be specified. Creating a report of the search process can be accomplished through methodical documentation of the steps taken by the searcher. This need not be onerous if suitable record-keeping is performed during the search, but it can be nearly impossible to recreate post hoc.
Many database interfaces have facilities for search strategies to be saved online or emailed; an offline copy in text format should also be saved. For some databases, taking and saving a screenshot of the search may be the most practical approach (Rader et al). Documenting the searching of sources other than databases, including the search terms used, is also required if searches are to be reproducible (Atkinson et al; Chow; Witkowski and Aldhouse). Details about contacting experts or manufacturers, searching reference lists, scanning websites, and decisions about search iterations can be produced as an appendix in the final document and used for future updates.
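The internal record-keeping described above can be kept as structured data rather than free text, which makes it easy to reuse for the review report and any update. A minimal sketch using a JSON log; the field names and file paths are illustrative, not a Cochrane-mandated schema.

```python
# One lightweight way to keep the internal search log: one structured
# record per search, saved alongside the exported strategy files.
import json
from dataclasses import dataclass, asdict

@dataclass
class SearchLogEntry:
    source: str            # database, register, website, or expert contact
    interface: str         # e.g. Ovid, basic or advanced web interface
    date_run: str          # ISO date the search was executed
    run_by: str
    strategy_file: str     # path to the exact strategy as run
    records_retrieved: int
    notes: str = ""        # decisions, correspondence, iteration rationale

log = [
    SearchLogEntry("MEDLINE", "Ovid", "2024-03-01", "AB",
                   "strategies/medline_ovid.txt", 1432),
    SearchLogEntry("ClinicalTrials.gov", "advanced", "2024-03-02", "AB",
                   "strategies/ctgov.txt", 87,
                   notes="date limit applied after pilot search"),
]

with open("search_log.json", "w") as f:
    json.dump([asdict(e) for e in log], f, indent=2)
```

A spreadsheet serves the same purpose; the point is that every search is captured at the time it is run, with enough detail to reproduce it later.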
The purpose of search documentation is transparency, internal assessment, and reference for any future update. It is important to plan how to record the searching of sources other than databases, since some activities (contacting experts, reference list searching, and forward citation searching) will occur later in the review process, after the database results have been screened (Rader et al). The searcher should record any correspondence on key decisions and report a summary of this correspondence alongside the search strategy in a search narrative.
The narrative describes the major decisions that shaped the strategy and can give a peer reviewer an insight into the rationale for the search approach (Craven and Levay). A worked example of a search narrative is available (Cooper et al b). Local copies should be stored in a structured way to allow retrieval when needed. There are also web-based tools that archive webpage content for future reference, such as WebCite (Eysenbach and Trudel). The results of web searches will not be reproducible to the same extent as bibliographic database searches, because web content and search engine algorithms frequently change, and search results can differ between users due to a general move towards localization and personalization (Cooper et al b).
It is still important, however, to document the search process to ensure that the methods used can be transparently reported (Briscoe). In cases where a search engine retrieves more results than it is practical to screen in full (it is rarely practical to search thousands of web results, as the precision of web searches is likely to be relatively low), the number of results that are documented and reported should be the number that were screened rather than the total number (Dellavalle et al; Bramer). Decisions should be documented for all records identified by the search.
Numbers of records are sufficient for exclusions based on initial screening of titles and abstracts. Broad categorizations are sufficient for records classed as potentially eligible during an initial screen of the full text.
Authors will need to decide for each review when to map records to studies, if multiple records refer to one study. The flow diagram initially records the total number of records retrieved from the various sources, and then the total number of studies to which these records relate. Review authors need to match the various records to the various studies in order to complete the flow diagram correctly.
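The record-to-study mapping can be sketched as a simple grouping step: once each record has been manually linked to a study identifier during screening, both counts for the flow diagram fall out directly. The identifiers below are invented for illustration.

```python
# Sketch of mapping retrieved records to studies for the flow diagram.
# Assumes screeners have already linked each record to a study.
from collections import defaultdict

records_to_study = {
    "rec_pubmed_001": "StudyA",
    "rec_embase_044": "StudyA",   # duplicate report of StudyA
    "rec_pubmed_002": "StudyB",
    "rec_ctgov_107":  "StudyB",   # registry entry for StudyB
    "rec_embase_051": "StudyC",
}

studies = defaultdict(list)
for record, study in records_to_study.items():
    studies[study].append(record)

# The flow diagram reports both totals; inclusion/exclusion lists are
# then based on the studies, not the individual records.
print(len(records_to_study), "records map to", len(studies), "studies")
# → 5 records map to 3 studies
```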
Lists of included and excluded studies must be based on studies rather than records (see also Section 4). A Cochrane Review is a review of studies that meet pre-specified eligibility criteria. Since each study may have been reported in several articles, abstracts or other reports, an extensive search for studies for the review may identify many reports for each potentially relevant study. Two distinct processes are therefore required to determine which studies can be included in the review.
One is to link together multiple reports of the same study; and the other is to use the information available in the various reports to determine which studies are eligible for inclusion. Although sometimes there is a single report for each study, it should never be assumed that this is the case.
As well as the studies that inform the systematic review, other studies will also be identified, and these should be recorded or tagged as they are encountered so that they can be listed in the relevant tables in the review. Duplicate publication can take various forms, ranging from identical manuscripts to reports describing different outcomes of the study or results at different time points (von Elm et al). The number of participants may differ in the different publications.
Where uncertainties remain after considering these and other factors, it may be necessary to correspond with the authors of the reports. Multiple reports of the same study should be collated, so that each study, rather than each report, is the unit of interest in the review (see MECIR Box 4).
Review authors will need to choose and justify which report (the primary report) to use as a source for study results, particularly if two reports include conflicting results. They should not discard other (secondary) reports, since they may contain additional outcome measures and valuable information about the design and conduct of the study. Collate multiple reports of the same study, so that each study, rather than each report, is the unit of interest in the review.
It is wrong to consider multiple reports of the same study as if they are multiple studies. Secondary reports of a study should not be discarded, however, since they may contain valuable information about the design and conduct. Review authors must choose and justify which report to use as a source for study results.
A typical process for selecting studies for inclusion in a review is as follows (the process should be detailed in the protocol for the review). Note that studies should not be omitted from a review solely on the basis of measured outcome data not being reported (see MECIR Box 4). Systematic reviews typically should seek to include all relevant participants who have been included in eligible study designs of the relevant interventions and had the outcomes of interest measured.
Reviews must not exclude studies solely on the basis of reporting of the outcome data, since this may introduce bias due to selective outcome reporting and risk undermining the systematic review process. While such studies cannot be included in meta-analyses, the implications of their omission should be considered.
Note that studies may legitimately be excluded because outcomes were not measured. Furthermore, issues may be different for adverse effects outcomes, since the pool of studies may be much larger and it can be difficult to assess whether such outcomes were measured. Decisions about which studies to include in a review are among the most influential decisions that are made in the review process and they involve judgement. Use at least two people working independently to determine whether each study meets the eligibility criteria.
Ideally, screening of titles and abstracts to remove irrelevant reports should also be done in duplicate by two people working independently although it is acceptable that this initial screening of titles and abstracts is undertaken by only one person.
Use at least two people working independently to determine whether each study meets the eligibility criteria, and define in advance the process for resolving disagreements. The inclusion decisions should be based on the full texts of potentially eligible studies when possible, usually after an initial screen of titles and abstracts.
It is desirable, but not mandatory, that two people undertake this initial screening, working independently. It has been shown that using at least two authors may reduce the possibility that relevant reports will be discarded (Edwards et al; Waffenschmidt et al; Gartlehner et al), although other case reports have suggested that single-screening approaches may be adequate (Doust et al; Shemilt et al). Opportunities for screening efficiencies seem likely to become available through promising developments in single human screening in combination with machine learning approaches (O'Mara-Eves et al). Experts in a particular area frequently have pre-formed opinions that can bias their assessment of both the relevance and validity of articles (Cooper and Ribble; Oxman and Guyatt). Thus, while it is important that at least one author is knowledgeable in the area under review, it may be an advantage to have a second author who is not a content expert.
Disagreements about whether a study should be included can generally be resolved by discussion. Often the cause of disagreement is a simple oversight on the part of one of the review authors. When the disagreement is due to a difference in interpretation, this may require arbitration by another person.
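Where two people screen independently, the overall level of disagreement can be quantified before arbitration. Cohen's kappa is one common chance-corrected agreement statistic for this; it is not prescribed by the text, so this from-scratch version is an illustrative sketch only.

```python
# Cohen's kappa for two raters making include/exclude decisions on the
# same set of records (True observed agreement, corrected for chance).

def cohens_kappa(rater1, rater2):
    assert len(rater1) == len(rater2)
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    labels = set(rater1) | set(rater2)
    # Expected chance agreement from each rater's marginal frequencies.
    expected = sum(
        (rater1.count(lab) / n) * (rater2.count(lab) / n) for lab in labels
    )
    return (observed - expected) / (1 - expected)

r1 = ["inc", "exc", "exc", "inc", "exc", "exc", "exc", "inc"]
r2 = ["inc", "exc", "inc", "inc", "exc", "exc", "exc", "exc"]
print(round(cohens_kappa(r1, r2), 2))  # → 0.47
```

A low kappa can be a prompt to revisit the eligibility criteria or retrain screeners before proceeding, rather than simply arbitrating each disagreement.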
Occasionally, it will not be possible to resolve disagreements about whether to include a study without additional information. In these cases, authors may choose to categorize the study in their review as one that is awaiting assessment until the additional information is obtained from the study authors. A single failed eligibility criterion is sufficient for a study to be excluded from a review.
The order in which eligibility criteria are applied may differ between reviews; it does not always need to be the same. For most reviews it will be worthwhile to pilot-test the eligibility criteria on a sample of reports (say, six to eight articles, including ones that are thought to be definitely eligible, definitely not eligible, and doubtful).
The pilot test can be used to refine and clarify the eligibility criteria, train the people who will be applying them and ensure that the criteria can be applied consistently by more than one person.
During the selection process it is crucial to keep track of the number of references and subsequently the number of studies so that a flow diagram can be constructed. The decision and reasons for exclusion can be tracked using reference management software, a simple document or spreadsheet, or using specialist systematic review software see Section 4.
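The spreadsheet-style tracking suggested above can be as simple as a CSV of one decision per record plus a tally of exclusion reasons, which later populates the flow diagram. The column names and reasons here are illustrative.

```python
# Tracking screening decisions and exclusion reasons in a simple CSV.
import csv
from collections import Counter

decisions = [
    ("rec001", "exclude", "wrong study design"),
    ("rec002", "include", ""),
    ("rec003", "exclude", "wrong population"),
    ("rec004", "exclude", "wrong study design"),
    ("rec005", "awaiting", "full text not yet obtained"),
]

with open("screening_decisions.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["record_id", "decision", "reason"])
    writer.writerows(decisions)

# Tally the primary reasons for exclusion for the flow diagram.
reasons = Counter(r for _, d, r in decisions if d == "exclude")
print(reasons.most_common())
```

Specialist systematic review software automates this, but even this minimal log satisfies the requirement that every record's decision, and at least one explicit reason for each exclusion, is captured.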
Broad categorizations are sufficient for records classed as potentially eligible during an initial screen. At least one explicit reason for exclusion must be documented for each study listed as excluded. Lists of included and excluded studies must be based on studies rather than records. The list of excluded studies covers all studies that may, on the surface, appear to meet the eligibility criteria but which, on further inspection, do not.
It also covers those that do not meet all of the criteria but are well known and likely to be thought relevant by some readers. By listing such studies as excluded and giving the primary reason for exclusion, the review authors can show that consideration has been given to these studies.
The list of excluded studies should be as brief as possible. It should not list all of the reports that were identified by an extensive search. In particular, it should not list studies that are obviously not randomized if the review includes only randomized trials. An extensive search for eligible studies in a systematic review can often identify thousands of records that need to be manually screened.
Selecting studies from within these records can be a particularly time-consuming, laborious and logistically challenging aspect of conducting a systematic review. Software to support the selection process, along with other stages of a systematic review, including text mining tools, can be identified using the Systematic Review Toolbox.
The SR Toolbox is a community-driven, web-based catalogue of tools that provide support for systematic reviews (Marshall and Brereton). Managing the selection process can be challenging, particularly in a large-scale systematic review that involves multiple reviewers.
Basic productivity tools can help (such as word processors, spreadsheets, and reference management software), and several purpose-built systems that support multiple concurrent users are also available and offer support for the study selection process.
Software for managing the selection process can be identified using the Systematic Review Toolbox mentioned above. Compatibility with other software tools used in the review process (such as RevMan) may be a consideration when selecting a tool to support study selection. Should specialist software not be available, Bramer and colleagues have developed a method for using the widely available software EndNote X7 for managing the screening process (Bramer et al). Research into automating the study selection process through machine learning and text mining has received considerable attention over recent years, resulting in the development of various tools and techniques for reviewers to consider.
The use of automated tools has the potential to reduce the workload involved with selecting studies significantly (Thomas et al). Cochrane has also implemented a screening workflow called Screen4Me. Cochrane author teams conducting intervention reviews that incorporate RCTs can access this workflow via the Cochrane Register of Studies. To date (January ), Screen4Me has been used in over 50 Cochrane intervention reviews. Workload reduction in terms of screening burden varies depending on the prevalence of RCTs in the domain area and the sensitivity of the searches conducted.
In addition to learning from large datasets such as those generated by Cochrane Crowd, it is also possible for machine learning models to learn how to apply eligibility criteria for individual reviews. It is difficult for authors to determine in advance when it is safe to stop screening and allow some records to be eliminated automatically without manual assessment. Recent work has suggested that this barrier is not insurmountable, and that it is possible to estimate how many relevant records remain to be found based on the sample already screened (Sneyd and Stevenson; Callaghan and Muller-Hansen; Li and Kanoulas). The automatic elimination of records using this approach has not been recommended for use in Cochrane Reviews at the time of writing.
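As a deliberately simple illustration of this estimation idea (a toy heuristic, not one of the published estimators cited above): when records are screened in relevance-ranked order, the recent inclusion rate gives a rough, and typically optimistic, projection of how many relevant records remain among those unscreened.

```python
# Toy projection of remaining relevant records during prioritized
# screening, based on the inclusion rate in a recent window of decisions.

def estimate_remaining(recent_decisions, n_unscreened, window=100):
    """Project remaining relevant records from the inclusion rate in
    the last `window` screening decisions (True = included)."""
    recent = recent_decisions[-window:]
    # Integer arithmetic first, so the example result is exact.
    return sum(recent) * n_unscreened / len(recent)

# 2 inclusions in the last 100 records screened, 1500 left unscreened.
decisions = [False] * 98 + [True, False, True]
print(estimate_remaining(decisions, 1500))  # → 30.0
```

In practice the inclusion rate declines as prioritized screening proceeds, so a falling estimate across successive windows is the signal of interest; any real stopping decision should rely on the validated methods cited in the text.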
This active learning process can still be useful, however, since by prioritizing records for screening in order of relevance, it enables authors to identify the studies that are most likely to be included much earlier in the screening process than would otherwise be possible. Finally, tools are available that use natural language processing to highlight sentences and key phrases automatically (e.g. PICO elements, trial characteristics, details of randomization) to support the reviewer whilst screening (Tsafnat et al).
Many of the sources listed in this chapter and the accompanying online Technical Supplement have been brought to our attention by a variety of people over the years, and we should like to acknowledge this.
Evidence Based Library and Information Practice; 14.
Agency for Healthcare Research and Quality. Methods guide for effectiveness and comparative effectiveness reviews. AHRQ publication no.
Annotated bibliography of published studies addressing searching for unpublished studies and obtaining access to unpublished data.
Arber M, Wood H. Search strategy development [webpage].
Reporting standards for literature searches and report inclusion criteria: making research syntheses more transparent and easy to replicate. Research Synthesis Methods; 6.
Alimentary Pharmacology and Therapeutics; 26 (author reply).
Impact of searching clinical trial registries in systematic reviews of pharmaceutical treatments: methodological systematic review and reanalysis of meta-analyses. BMJ; j.
Bennett DA, Jull A. FDA: untapped source of unpublished trials. Lancet.
A cross-sectional audit showed that most Cochrane intervention reviews searched trial registers. Journal of Clinical Epidemiology.
Bero L. Searching for unpublished trials using trials registers and trials web sites and obtaining unpublished trial data and corresponding trial protocols from regulatory agencies.
Booth A. How much searching is enough? Comprehensive versus optimal retrieval for technology assessments.
Searching for qualitative research for inclusion in systematic reviews: a structured methodological review. Systematic Reviews; 5.
The "realist search": a systematic scoping review of current practice and reporting. Research Synthesis Methods; 11.
Bramer WM. Variation in number of hits for complex searches in Google Scholar. Journal of the Medical Library Association.
Reviewing retrieved references for inclusion in systematic reviews using EndNote.
Challenges in systematic reviews: synthesis of topics related to the delivery, organization, and financing of health care. Annals of Internal Medicine.
Briscoe S. A review of the reporting of web searching to identify studies for Cochrane systematic reviews. Research Synthesis Methods; 9.
Identifying additional studies for a systematic review of retention strategies in randomised controlled trials: making contact with trials units and trial methodologists. Systematic Reviews; 6.
Statistical stopping criteria for automated screening in systematic reviews. Systematic Reviews; 9.
Callaway J. Journal of Health Information and Libraries Australasia; 2.
Center for Drug Evaluation and Research. Auto Injector.
Centre for Reviews and Dissemination. Systematic Reviews: CRD's guidance for undertaking reviews in health care. York: University of York.
Chan AW. Out of sight but not out of mind: how to search for unpublished clinical trial evidence. BMJ; d.
Discontinuation and non-publication of surgical randomised controlled trials: observational study. BMJ; g.
The role of modelling in prioritising and planning clinical trials.