SUPPORT Tools for evidence-informed health Policymaking (STP) 17: Dealing with insufficient research evidence

Abstract

This article is part of a series written for people responsible for making decisions about health policies and programmes and for those who support these decision makers.

In this article, we address the issue of decision making in situations in which there is insufficient evidence at hand. Policymakers often have insufficient evidence to know with certainty what the impacts of a health policy or programme option will be, but they must still make decisions. We suggest four questions that can be considered when there may be insufficient evidence to be confident about the impacts of implementing an option. These are: 1. Is there a systematic review of the impacts of the option? 2. Has inconclusive evidence been misinterpreted as evidence of no effect? 3. Is it possible to be confident about a decision despite a lack of evidence? 4. Is the option potentially harmful, ineffective or not worth the cost?

About STP

This article is part of a series written for people responsible for making decisions about health policies and programmes and for those who support these decision makers. The series is intended to help such people ensure that their decisions are well informed by the best available research evidence. The SUPPORT tools and the ways in which they can be used are described in more detail in the Introduction to this series [1]. A glossary for the entire series is attached to each article (see Additional File 1). Links to Spanish, Portuguese, French and Chinese translations of this series can be found on the SUPPORT website http://www.support-collaboration.org. Feedback about how to improve the tools in this series is welcome and should be sent to: STP@nokc.no.

Scenario

The Ministry of Health is considering strategies to recruit and retain health professionals in underserved rural areas. You have been asked to advise the Minister of Health about these strategies. You have found many articles describing strategies that have been used in other settings, but no reliable evaluations of the impacts of such strategies [2].

Background

In this article, we present four questions that policymakers and those who support them can ask when there may be insufficient evidence to inform judgements about the impacts of policy and programme options.

It is unrealistic to assume that one can predict the impacts of a health policy or programme with certainty. Many governance, financial and delivery arrangements have not been rigorously evaluated. Neither have many of the programmes, services and drugs that these arrangements support. But policymakers must still make decisions regardless of the availability (or paucity) of evidence to inform such decisions.

In this article, we focus on decision making in situations in which there is insufficient evidence to know whether an option will have its intended impacts, or whether it may have unintended (and undesirable) impacts. Common mistakes made when there is insufficient evidence at hand include making assumptions about the evidence without referring to a systematic review, confusing a lack of evidence with evidence of no effect, assuming that insufficient evidence necessarily implies uncertainty about a decision, and assuming that it is politically expedient to feign certainty. The four questions presented in this article can help to avoid these mistakes.

Questions to consider

If there is insufficient evidence at hand to allow one to be confident about the impacts of implementing a policy or programme option, the following questions can be considered:

  1. Is there a systematic review of the impacts of the option?

  2. Has inconclusive evidence been misinterpreted as evidence of no effect?

  3. Is it possible to be confident about a decision despite a lack of evidence?

  4. Is the option potentially harmful, ineffective or not worth the cost?

1. Is there a systematic review of the impacts of the option?

The first step in addressing a perceived lack of evidence is to find out what evidence is available. It is risky to make assumptions about the availability of evidence without referring to systematic reviews. Considerations related to finding and critically appraising systematic reviews are addressed in Articles 7 and 8 in this series [3, 4].
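As a purely illustrative first pass at this step, the short script below queries PubMed's E-utilities for systematic reviews related to the scenario described above (recruiting and retaining health professionals in underserved areas). The search term is an assumption invented for this example rather than a validated search strategy, and a quick query of this kind is no substitute for the structured searching of multiple sources described in Article 7 [3].

```python
# Illustrative sketch only: a quick check of what synthesised evidence exists in PubMed
# before assuming there is "no evidence". The search term below is an assumption for this
# scenario, not a validated strategy; 'systematic[sb]' restricts results to PubMed's
# systematic reviews subset.
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
params = {
    "db": "pubmed",
    "term": "(health personnel) AND (rural OR underserved) AND "
            "(recruitment OR retention) AND systematic[sb]",
    "retmode": "json",
    "retmax": 20,
}

response = requests.get(ESEARCH_URL, params=params, timeout=30)
response.raise_for_status()
result = response.json()["esearchresult"]

print(f"Records found: {result['count']}")
for pmid in result["idlist"]:
    print(f"https://pubmed.ncbi.nlm.nih.gov/{pmid}/")
```

Even if such a search returns few or no reviews, this may reflect a gap in synthesis rather than a genuine absence of primary evidence, as discussed below.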

For many questions related to health systems it is not possible to find relevant and up-to-date systematic reviews. There is widespread recognition, for example, that health workers are critical to achieving the Millennium Development Goals (MDGs) and other health goals. Yet despite this, an overview of systematic reviews of options to address human resources for health found only a small amount of high-quality, synthesised research evidence regarding the effects of a few options for the improvement of human resources for health [5]. Other overviews of reviews have found similar gaps [e.g. [6]]. A lack of systematic reviews may not necessarily reflect a lack of evidence. But under such circumstances it is difficult for policymakers to know what evidence is available (see Table 1, for example).

Table 1 An independent inquiry into inequalities in health - an example of the need for up-to-date systematic reviews to know what evidence there is

Rapid assessments may need to be undertaken when time or resources are limited. These assessments should be transparent about the methods used, as well as any important methodological limitations or related uncertainties. They should also address the need for, and urgency of, undertaking a full systematic review at a later date [7]. Consideration should also be given to commissioning a new review whenever a relevant, up-to-date review of good quality is unavailable, using appropriate processes such as setting priorities for systematic reviews [8]. Building and strengthening international collaborations, such as the Cochrane Collaboration http://www.cochrane.org, can help to avoid unnecessary duplication of effort in producing systematic reviews and help to ensure that up-to-date reviews are more readily available.

2. Has inconclusive evidence been misinterpreted as evidence of no effect?

Another common mistake when evidence is inconclusive is to confuse a lack of evidence of an effect with 'evidence of no effect' [9]. It is wrong to claim that inconclusive evidence shows that a policy or programme has had 'no effect'. 'Statistical significance' should not be confused with importance.

When results are not 'statistically significant' it cannot be assumed that there was no impact. Typically a cut-off of 5% is used to indicate statistical significance. This means that the results are considered 'statistically non-significant' if the analysis shows that differences as large as (or larger than) the observed difference would be expected to occur by chance more than one out of twenty times (p ≥ 0.05). There are, however, two problems with equating statistical non-significance with 'no effect'. Firstly, the cut-off point of 5% is arbitrary. Secondly, 'statistically non-significant' results (often mislabelled as 'negative') may or may not be inconclusive. Table 2 contains a further discussion of this point and Figure 1 illustrates how the use of the term 'statistically non-significant' or 'negative' can be misleading.

Table 2 'Statistical non-significance'
Figure 1

Two problems with classifying results as 'statistically non-significant' or 'negative'. The blue dots in the Figure above indicate the estimated effect for each study and the horizontal lines indicate the 95% confidence intervals. A 95% confidence interval means that we can be 95% confident that the true size of the effect is between the lower and upper confidence limit (the two ends of the horizontal lines). Conversely, there is a 5% chance that the true effect is outside this range.

Trends that are 'positive' (i.e. in favour of an option) but 'statistically non-significant' are often described as 'promising' and this can also be misleading. 'Negative' trends of the same magnitude, in contrast, are not typically described as 'warning signs'.

Policymakers should be aware that researchers commonly make these mistakes, and should be watchful for misinterpretations of statistical significance in order to avoid being misled.
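To make the second problem concrete, the minimal sketch below works through a hypothetical example (the trial numbers are invented for illustration and are not drawn from any study cited here): it computes a risk difference with a Wald 95% confidence interval and shows how a 'statistically non-significant' result can still be compatible with effects large enough to matter.

```python
# Illustrative only: hypothetical numbers, not taken from this article or any study.
# Shows why a "statistically non-significant" result can still be inconclusive:
# the 95% confidence interval may span both an important benefit and an important harm.
from math import sqrt
from statistics import NormalDist

def risk_difference_ci(events_a, n_a, events_b, n_b, alpha=0.05):
    """Risk difference (group A minus group B) with a Wald confidence interval
    and a two-sided p-value based on the normal approximation."""
    p_a, p_b = events_a / n_a, events_b / n_b
    diff = p_a - p_b
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    z = diff / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    return diff, (diff - z_crit * se, diff + z_crit * se), p_value

# Hypothetical trial: 30/100 adverse events with the option versus 40/100 without it.
diff, (low, high), p = risk_difference_ci(30, 100, 40, 100)
print(f"Risk difference: {diff:.2f}; 95% CI: {low:.2f} to {high:.2f}; p = {p:.2f}")
# Prints roughly: Risk difference: -0.10; 95% CI: -0.23 to 0.03; p = 0.14
# p >= 0.05, yet the interval is compatible with anything from a 23-percentage-point
# reduction in events to a 3-point increase: inconclusive, not 'evidence of no effect'.
```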

3. Is it possible to be confident about a decision despite a lack of evidence?

Some policymakers may agree with Charlie Brown, who claimed: "I am always certain if it is a matter of opinion." But most would agree that high-quality evidence provides a better basis for being confident about decisions. Nevertheless, there may be good reasons for being confident about a decision even when there is a lack of evidence. There is very low-quality evidence, for example, that giving aspirin to children with influenza or chicken pox may cause Reye's syndrome (a rare but deadly condition) [10]. Despite the limitations of this evidence, the US Surgeon General and others have confidently advised against the use of aspirin in these circumstances. This is because paracetamol (acetaminophen) is available as an equally effective and inexpensive alternative, so children need not be put at risk even though the actual level of that risk is uncertain. Conversely, it may be reasonable to be confident that policies or programmes with high costs and potentially serious adverse effects should not be rolled out without a rigorous impact evaluation.

4. Is the option potentially harmful, ineffective or not worth the cost?

"Professional good intentions and plausible theories are insufficient for selecting policies and practices for protecting, promoting and restoring health. Humility and uncertainty are preconditions for unbiased assessments of the effects of the prescriptions and proscriptions of policy makers and practitioners for other people. We will serve the public more responsibly and ethically when research designed to reduce the likelihood that we will be misled by bias and the play of chance has become an expected element of professional and policy making practice, not an optional add-on." (Iain Chalmers, Editor, the James Lind Library, presentation at the Norwegian Directorate for Health and Social Welfare, 1 September 2003. For a more detailed discussion of these comments see Reference [11])

It is risky not to acknowledge uncertainty for the sake of political expediency. As we noted in Article 1 in this series [12], acknowledging that there is imperfect information to inform policies can reduce political risk because it allows policymakers to set in motion ways to alter course if policies do not work as expected.

As the quote above suggests, good intentions and plausible theories are insufficient when selecting policies and practices. This is true for health systems as well as clinical interventions. Examples of clinical interventions that were widely used and initially believed to be beneficial, but were later found to be relatively ineffective or harmful, include:

  • High instead of low osmolar rehydration solutions for children with diarrhoea [13]

  • Diazepam or phenytoin instead of magnesium sulphate for women with eclampsia [14, 15]

  • Six or more antenatal care visits instead of four [16]

  • Corticosteroids for patients with severe head trauma [17]

  • Albumin instead of salt water for resuscitation in critically ill patients [18]

  • Hormone replacement therapy to reduce the risk of coronary heart disease and stroke in women [19]

  • Electronic mosquito repellents for preventing mosquito bites and malaria infection [20]

All of the above interventions were based on underlying theories, indirect evidence, surrogate outcomes or observational studies, and randomised trials subsequently disproved the underlying assumptions. This supports the assertion quoted above: the public is served more responsibly and ethically when rigorous evaluation is an expected element of decisions about clinical interventions, rather than an optional add-on.

These same concerns apply to health systems and public health interventions. Examples of health systems and public health interventions that have been widely used and advocated, but which may be ineffective and do more harm than good, include the following:

  • Educational and community interventions to reduce the risk of teenage pregnancy [21]

  • Directly observed therapy for tuberculosis [22]

  • User fees for essential medicines [23]

  • For-profit instead of not-for-profit private hospitals [24]

  • Reducing maldistribution by requiring doctors to spend a minimum number of years in an underserved area before allowing them to specialise [2]

  • Some forms of results-based financing or pay-for-performance [25]

  • Contracting with the private sector to provide health services [26]

Substantial caution is required before investing scarce resources in policy or programme options that require large investments that cannot be recouped [27]. If there is important uncertainty about the impacts of such options, a rigorous evaluation (such as a pilot study) can prevent resources from being wasted. And while such undertakings may appear to cause unnecessary delays, Julio Frenk, the former Minister of Health of Mexico, has noted: "Both politically, in terms of being accountable to those who fund the system, and also ethically, in terms of making sure that you make the best use possible of available resources, evaluation is absolutely critical" [28]. When there is insufficient evidence, decisions in favour of an option and decisions against it may be equally likely to have undesirable consequences (see Table 3 for an example and further explanation). Informing policymaking by testing a proposed option within a well-designed impact evaluation offers a better approach.

Table 3 The consequences of saying "no" or "yes" instead of "only in the context of an evaluation"

When judgements about the effects of options are based on theories, surrogate outcomes, limited observational studies, inadequate impact evaluations, anecdotal experience or analogies, policymakers should be cautious about implementing them (see example in Table 4) [29].

Table 4 An example of a potentially ineffective or harmful intervention that has been widely promoted based on insufficient evidence

And even if there is little uncertainty about the benefits of an option, there may still be important uncertainty about other potentially important consequences, including unintended effects (harms) and costs (see example in Table 5). Policies or programmes with compelling rationales can, in fact, cause harm.

Table 5 An example of important uncertainties about potentially important harms

For an option that is promising, but for which there is insufficient evidence to be confident about whether it is potentially harmful, ineffective, or not worth the cost, consideration should be given to requiring a well-designed impact evaluation, undertaken either before the policy or programme is rolled out or integrated as part of the rollout, as sketched below. We address further considerations regarding monitoring and evaluation in Article 18 of this series [29].
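Where an option is going to be rolled out in phases anyway, randomising the order of the rollout is one simple way of building an impact evaluation into implementation. The sketch below is a hypothetical illustration, not a recommended design: the district identifiers, group sizes and seed are placeholders, and a real evaluation would also require sample-size calculations, baseline data collection and a pre-specified analysis plan.

```python
# Hypothetical illustration of a randomised phased rollout: half of the districts
# (placeholder names) implement the option in the first phase, the rest serve as a
# comparison group and implement later, allowing the impact to be estimated.
import random

districts = [f"district_{i:02d}" for i in range(1, 21)]  # placeholder identifiers

random.seed(2009)           # fixed seed so the allocation is reproducible and auditable
random.shuffle(districts)   # random allocation, not selection by convenience or politics

early_rollout = sorted(districts[:10])    # implement the option in phase 1
delayed_rollout = sorted(districts[10:])  # comparison group; implement in phase 2

print("Early rollout: ", ", ".join(early_rollout))
print("Delayed rollout:", ", ".join(delayed_rollout))
```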

Conclusion

Most health policies and programmes are complex and they are likely to have multiple effects. Some evidence will almost always be available based on experience with similar policies or programmes in other settings. However, as addressed in Articles 8 and 9 in this series, it is important for policymakers to consider how much confidence to place in such evidence and to assess the applicability of the findings to their own setting [4, 30]. Typically, there will be uncertainty about the impacts of policies and programmes on important outcomes. When there is important uncertainty, common mistakes such as those described in this article should be avoided.

Resources

Useful documents and further reading

  • Chalkidou K, Hoy A, Littlejohns P. Making a decision to wait for more evidence: When the National Institute for Health and Clinical Excellence recommends a technology only in the context of research. J R Soc Med 2007; 100:453-60. http://jrsm.rsmjournals.com/cgi/content/full/100/10/453

  • Oxman AD, Bjørndal A, Becerra F, Gonzalez Block MA, Haines A, Hooker Odom C, et al. Helping to ensure well-informed public policy decisions: a framework for mandatory impact evaluation. Lancet. In press.

References

  1. Lavis JN, Oxman AD, Lewin S, Fretheim A: SUPPORT Tools for evidence-informed health Policymaking (STP). Introduction. Health Res Policy Syst. 2009, 7 (Suppl 1): I1-10.1186/1478-4505-7-S1-I1.

  2. Grobler LA, Marais BJ, Mabunda S, Marindi P, Reuter H, Volmink J: Interventions for increasing the proportion of health professionals practising in underserved communities. Cochrane Database Syst Rev. 2009, 1: CD005314-

  3. Lavis JN, Oxman AD, Grimshaw J, Johansen M, Boyko JA, Lewin S, Fretheim A: SUPPORT Tools for evidence-informed health Policymaking (STP). 7. Finding systematic reviews. Health Res Policy. 2009, 7 (Suppl 1): S7-10.1186/1478-4505-7-S1-S7.

  4. Lewin S, Oxman AD, Lavis JN, Fretheim A: SUPPORT Tools for evidence-informed health Policymaking (STP). 8. Deciding how much confidence to place in a systematic review. Health Res Policy Syst. 2009, 7 (Suppl 1): S8-10.1186/1478-4505-7-S1-S8.

  5. Chopra M, Munro S, Lavis JN, Vist G, Bennett S: Effects of policy options for human resources for health: an analysis of systematic reviews. Lancet. 2008, 371: 668-74. 10.1016/S0140-6736(08)60305-0.

  6. Lewin S, Lavis JN, Oxman AD, Bastias G, Chopra M, Ciapponi A, Flottorp S, Marti SG, Pantoja T, Rada G: Supporting the delivery of cost-effective interventions in primary health-care systems in low-income and middle-income countries: an overview of systematic reviews. Lancet. 2008, 372: 928-39. 10.1016/S0140-6736(08)61403-8.

  7. Oxman AD, Schunemann HJ, Fretheim A: Improving the use of research evidence in guideline development: 8. Synthesis and presentation of evidence. Health Res Policy Syst. 2006, 4: 20-10.1186/1478-4505-4-20.

  8. Lavis JN, Oxman AD, Lewin S, Fretheim A: SUPPORT Tools for evidence-informed health Policymaking (STP). 3. Setting priorities for supporting evidence-informed policymaking. Health Res Policy Syst. 2009, 7 (Suppl 1): S3-10.1186/1478-4505-7-S1-S3.

  9. Alderson P, Chalmers I: Survey of claims of no effect in abstracts of Cochrane reviews. BMJ. 2003, 326: 475-10.1136/bmj.326.7387.475.

  10. Centers for Disease Control (CDC): Surgeon General's advisory on the use of salicylates and Reye syndrome. MMWR Morb Mortal Wkly Rep. 1982, 31: 289-90.

  11. Chalmers I: If evidence-informed policy works in practice, does it matter if it doesn't work in theory?. Evidence & Policy. 2005, 1 (2): 227-42.

  12. Oxman AD, Lavis JN, Lewin S, Fretheim A: SUPPORT Tools for evidence-informed health Policymaking (STP). 1. What is evidence-informed policymaking. Health Res Policy Syst. 2009, 7 (Suppl 1): S1-10.1186/1478-4505-7-S1-S1.

  13. Hahn S, Kim S, Garner P: Reduced osmolarity oral rehydration solution for treating dehydration caused by acute diarrhoea in children. Cochrane Database Syst Rev. 2002, 1: CD002847-

  14. Duley L, Henderson-Smart D: Magnesium sulphate versus diazepam for eclampsia. Cochrane Database Syst Rev. 2003, 4: CD000127-

  15. Duley L, Henderson-Smart D: Magnesium sulphate versus phenytoin for eclampsia. Cochrane Database Syst Rev. 2003, 4: CD000128-

  16. Villar J, Carroli G, Khan-Neelofur D, Piaggio G, Gulmezoglu M: Patterns of routine antenatal care for low-risk pregnancy. Cochrane Database Syst Rev. 2001, 4: CD000934-

  17. Alderson P, Roberts I: Corticosteroids for acute traumatic brain injury. Cochrane Database Syst Rev. 2005, 1: CD000196-

  18. Liberati A, Moja L, Moschetti I, Gensini GF, Gusinu R: Human albumin solution for resuscitation and volume expansion in critically ill patients. Intern Emerg Med. 2006, 1: 243-5. 10.1007/BF02934748.

  19. Farquhar C, Marjoribanks J, Lethaby A, Suckling JA, Lamberts Q: Long term hormone therapy for perimenopausal and postmenopausal women. Cochrane Database Syst Rev. 2009, 2: CD004143-

  20. Enayati AA, Hemingway J, Garner P: Electronic mosquito repellents for preventing mosquito bites and malaria infection. Cochrane Database Syst Rev. 2007, 2: CD005434-

  21. Guyatt GH, DiCenso A, Farewell V, Willan A, Griffith L: Randomized trials versus observational studies in adolescent pregnancy prevention. J Clin Epidemiol. 2000, 53: 167-74. 10.1016/S0895-4356(99)00160-2.

  22. Volmink J, Garner P: Directly observed therapy for treating tuberculosis. Cochrane Database Syst Rev. 2007, 4: CD003343-

  23. Austvoll-Dahlgren A, Aaserud M, Vist G, Ramsay C, Oxman AD, Sturm H, Kosters JP, Vernby A: Pharmaceutical policies: effects of cap and co-payment on rational drug use. Cochrane Database Syst Rev. 2008, 1: CD007017-

  24. Devereaux PJ, Choi PT, Lacchetti C, Weaver B, Schunemann HJ, Haines T, Lavis JN, Grant BJ, Haslam DR, Bhandari M: A systematic review and meta-analysis of studies comparing mortality rates of private for-profit and private not-for-profit hospitals. CMAJ. 2002, 166: 1399-406.

  25. Oxman AD, Fretheim A: An overview of research on the effects of results-based financing. Report Nr 16-2008. 2008, Oslo: Nasjonalt kunnskapssenter for helsetjenesten, [http://www.kunnskapssenteret.no/Publikasjoner/3219.cms]

  26. Lagarde M, Palmer N: The impact of contracting out on health outcomes and use of health services in low and middle-income countries. Cochrane Database Syst Rev. 2009, 4: CD008133-

  27. Chalkidou K, Hoy A, Littlejohns P: Making a decision to wait for more evidence: when the National Institute for Health and Clinical Excellence recommends a technology only in the context of research. J R Soc Med. 2007, 100: 453-60. 10.1258/jrsm.100.10.453.

  28. Moynihan R, Oxman A, Lavis JN, Paulsen E: Evidence-Informed Health Policy: Using Research to Make Health Systems Healthier. Rapport nr. 1-2008. 2008, Oslo: Nasjonalt kunnskapssenter for helsetjenesten, [http://www.kunnskapssenteret.no/Publikasjoner/469.cms]

  29. Fretheim A, Oxman AD, Lavis JN, Lewin S: SUPPORT Tools for evidence-informed health Policymaking (STP). 18. Planning monitoring and evaluation of policies. Health Res Policy Syst. 2009, 7 (Suppl 1): S18-10.1186/1478-4505-7-S1-S18.

  30. Lavis JN, Oxman AD, Souza NM, Lewin S, Gruen RL, Fretheim A: SUPPORT Tools for evidence-informed health Policymaking (STP). 9. Assessing the applicability of the findings of a systematic review. Health Res Policy Syst. 2009, 7 (Suppl 1): S9-10.1186/1478-4505-7-S1-S9.

  31. UK Cabinet Office: Modernising government: presented to parliament by the prime minister and the minister for the Cabinet Office by command of Her Majesty. Cm 4310. 1999, London: Stationary Office, [http://www.archive.official-documents.co.uk/document/cm43/4310/4310-00.htm]

  32. Macintyre S, Chalmers I, Horton R, Smith R: Using evidence to inform health policy: case study. BMJ. 2001, 322: 222-5. 10.1136/bmj.322.7280.222.

Acknowledgements

Please see the Introduction to this series for acknowledgements of funders and contributors. In addition, we would like to acknowledge Iain Chalmers and Malcolm Maclure for helpful comments on an earlier version of this article.

This article has been published as part of Health Research Policy and Systems Volume 7 Supplement 1, 2009: SUPPORT Tools for evidence-informed health Policymaking (STP). The full contents of the supplement are available online at http://www.health-policy-systems.com/content/7/S1.

Author information

Corresponding author

Correspondence to Andrew D Oxman.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

ADO prepared the first draft of this article. JNL, AF and SL contributed to drafting and revising it.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Oxman, A.D., Lavis, J.N., Fretheim, A. et al. SUPPORT Tools for evidence-informed health Policymaking (STP) 17: Dealing with insufficient research evidence. Health Res Policy Sys 7 (Suppl 1), S17 (2009). https://doi.org/10.1186/1478-4505-7-S1-S17
