
The discordance between evidence and health policy in the United States: the science of translational research and the critical role of diverse stakeholders

Abstract

Background

There is often a discordance between health research evidence and public health policies implemented by the United States federal government. In the process of developing health policy, discordance can arise through subjective and objective factors that are unrelated to the value of the evidence itself, and can inhibit the use of research evidence. We explore two common types of discordance through four illustrative examples and then propose a potential means of addressing discordance.

Discussion

In Discordance 1, public health authorities make recommendations for policy action, yet these are not based on high quality, rigorously synthesised research evidence. In Discordance 2, evidence-based public health recommendations are ignored or discounted in developing United States federal government policy. Both types of discordance can create serious risks of harm to public health and to individual patients.

We suggest that, to mitigate risks associated with these discordances, public health practitioners, health policy-makers, health advocates and other key stakeholders should take the opportunity to learn or expand their knowledge regarding current research methods, as well as improve their skills for appropriately considering the strengths and limitations of research evidence. This could help stakeholders to adopt a more nuanced approach to developing health policy. Stakeholders should also have a more insightful contextual awareness of these discordances and understand their potential harms. In Discordance 1, public health organisations and authorities need to acknowledge their own historical roles in making public health recommendations with insufficient evidence for improving health outcomes. In Discordance 2, policy-makers should recognise the larger impact of their decision-making based on minimal or flawed evidence, including the potential for poor health outcomes at population level and the waste of substantial public funds. In both types of discordance, stakeholders need to consider the impact of their own unconscious biases in championing evidence that may not be valid or conclusive.

Conclusion

Public health policy needs to provide evidence-based solutions to public health problems, but this is not always done. We discuss some of the factors inhibiting evidence-based decision-making in United States federal government public health policy and suggest ways these could be addressed.


Background

Rigorously synthesised health research evidence should inform clinical practice, guideline development and health policy [1]. Although many gaps and barriers to its use remain, policy-makers, clinicians and patients increasingly rely on synthesised research evidence in healthcare decision-making [2]. This is reflected in the substantial annual increase in systematic review production [3], the emergence of organisations and initiatives promoting evidence-based healthcare decisions [4,5,6,7,8], policies established by funding agencies that mandate use and dissemination of research findings, and generally increased attention by the public and the media to the role of scientific evidence in health policy formulation.

Many lifesaving medical practices across numerous disciplines have been established in recent years, based on evidence derived from rigorous syntheses of the scientific literature. Some noteworthy examples include the use of statins for preventing and treating cardiovascular disease [9], adult male circumcision for preventing HIV in sub-Saharan Africa and other high-burden settings [10, 11], and routine immunisation against myriad communicable diseases [12, 13]. Scientific evidence has also influenced legislation and policy in various arenas. For example, research evidence informed laws to lower blood alcohol limits for motor vehicle drivers [14] and to restrict lead in paint and reduce it in gasoline [15]. Evidence also informed United States Food and Drug Administration regulations regarding the use of industrially produced trans fatty acids from partially hydrogenated oils in food [16, 17]. These laws and regulations have in turn saved countless lives.

However, many competing social values may drive public policy, and better health is only one of them. Health guideline methodologists have recognised that financial or other resource constraints, trade-offs between desirable and undesirable outcomes, feasibility of intervention implementation, variability in stakeholder values and preferences, and uncertainty about the stability of effect estimates are all important factors in determining whether research evidence is translated to health policy [18, 19]. Adding further complexity, any consideration of stakeholder values and preferences must include not only deeply held cultural beliefs, including beliefs about the appropriate role of government, but also other subjective forces such as social stigma. Partisan politics, agendas promoted by interest groups, and donations to policy-makers from industries threatened by new public health regulations, among other factors, may all have an impact on implemented health policy. Above this storm of competing subjective forces, proponents of research evidence may struggle to be heard.

The policy-making process may require judgment calls by a variety of stakeholders. Although their intentions may be good, the judgment of health policy stakeholders can be influenced by several unconscious biases, which may not be obvious or apparent to them. These may include ‘irrational escalation’ (the tendency to justify actions that are already taken or to make irrational decisions based upon past rational decisions) [20], ‘status quo bias’ (the preference to keep things relatively the same) [21], ‘confirmation bias’ (the tendency to search for or interpret knowledge in a way that confirms preconceptions) [22], and ‘observer-expectancy effect’ (the unconscious tendency to manipulate or misinterpret facts in order to support one’s viewpoint about a given expected outcome) [23]. Disentangling how these biases may interrupt the health policy development process, or their effects on the final outcome, can be very challenging, but every day they persist may mean worse outcomes for many people, as well as wasted money.

As one of its core functions, government is charged with improving the nation’s health and protecting its citizens from harms caused by natural disasters, environmental threats, and people motivated by self-serving interests. In many circumstances, the government is the only entity with the authority and capacity to protect the public good against competing ideological, economic and other interests [24]. Although there are numerous examples where research evidence was used to formulate health policy and practice to accomplish the common good, there are also many examples where there is a discordance between the available evidence and its application.

We provide four examples to illustrate these discordances and then discuss some reasons (i.e. barriers or interrupting factors) for why research evidence may not end up being reflected in health policy recommendations or in United States federal health policy. We also propose a way to resolve these discordances. We wish to emphasise that this paper is not meant to be a comprehensive review of interventions related to the public health areas discussed here, nor does it provide an exhaustive list of factors and barriers that could interfere with the process of translating the best research evidence into health policy.

Main text

Working definitions and conceptual framework

The following working definitions and conceptual framework (Fig. 1) help to situate the role of evidence within the context of competing factors in policy development.

Fig. 1 Conceptual framework: evidence-based policy-making process and ‘unwanted’ factors influencing discordance. (a) Ideal process; (b) and (c) Discordance 1; (d) Discordance 2

Evidence-based public health, epidemiology, systematic reviews

Evidence-based public health has been defined as “the process of integrating science-based interventions with community preferences to improve the health of populations” [25]. An important cornerstone of public health is epidemiology, which analyses the causes and determinants of health and illness in populations, the characteristics of public health problems, and the effectiveness of public health interventions. Rigorous epidemiologic methods should be used to synthesise the evidence base for public health policy decisions [1]. High-quality systematic reviews (such as Cochrane reviews) use globally agreed-upon standards and rigorous methods and are widely acknowledged to be the gold standard in approaches to collecting, analysing and critically appraising aggregated research data to inform healthcare and public health decision-making [26].

Evidence

There are at least four types of public health evidence that may inform public health policy and practice:

1) Disease burden: data about incidence, prevalence and severity of a specific health condition in a specific population and setting. In a rational decision-making process, this type of evidence is used to decide if a health condition’s current or potential burden is sufficiently serious that it merits consideration for health policy development, including establishing regulations that may reduce or eliminate disease risk factors.

2) Intervention efficacy: data about how well interventions work to prevent or treat diseases or health conditions. This type of evidence is essential to inform prevention and treatment policies. Ideally, only interventions shown efficacious through rigorous evaluation in a systematic review would be promoted for policy development.

3) Intervention effectiveness (versus efficacy): data about circumstances under which an intervention that is proven to be efficacious in the research setting would also work in real-world practice [27]. Examples of such data include intervention delivery modality, data regarding the quality of the programme provided in different intervention settings, and differences in local infrastructure and feasibility of implementation.

4) Intervention cost and cost-effectiveness: data about the cost of providing an intervention and its cost-effectiveness, as well as the level of population health improvements gained in relation to its cost. Given that public health resources are traditionally limited, promotion of cost-effective interventions results in greater health benefit for the money invested, compared to other options.
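To make the cost-effectiveness idea concrete, the sketch below computes an incremental cost-effectiveness ratio (ICER) for a hypothetical programme. All costs, health gains and the willingness-to-pay threshold are illustrative assumptions, not estimates for any intervention discussed in this paper.

```python
# Minimal illustration of an incremental cost-effectiveness ratio (ICER).
# All numbers are hypothetical and chosen only to show the arithmetic.

def icer(cost_new, cost_old, qalys_new, qalys_old):
    """Incremental cost per quality-adjusted life-year (QALY) gained."""
    return (cost_new - cost_old) / (qalys_new - qalys_old)

# Hypothetical comparison: a new prevention programme versus standard practice,
# with costs and QALYs expressed per 1,000 people in the target population.
ratio = icer(cost_new=750_000, cost_old=500_000, qalys_new=120, qalys_old=100)
print(f"ICER: ${ratio:,.0f} per QALY gained")

# An illustrative decision rule: fund the programme if the ICER falls below a
# willingness-to-pay threshold, here assumed to be $50,000 per QALY.
threshold = 50_000
print("Cost-effective at threshold" if ratio <= threshold else "Not cost-effective at threshold")
```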

Translation of evidence to policy: conceptual framework (Fig. 1)

Several conceptual frameworks and theoretical models have been proposed to portray the lifecycle of research and policy development [28,29,30], and the discordance that can arise [29, 31].

To break down the steps required in the translation of evidence into policy, we adapted the ‘Policy Process’ framework developed by the Centers for Disease Control and Prevention (CDC) [32] and others [33]. For simplicity, we considered the ideal process (Fig. 1, panel a) of integration of research evidence into health policy as a linear, continuous process, with three major steps:

1. Data synthesis (from primary studies to the body of evidence): This is the process through which data from primary studies are collected and analysed via a systematic and transparent process designed to minimise risk of bias and enhance internal validity and precision (a simplified pooling example is sketched after this list). Following this process, recommendations can be made in favour of or against an intervention as alternative policy options, with different levels of strength or conditionality regarding the utility of adopting programmes with sufficient evidence.

2. Policy analysis (from the body of evidence to recommended policy): This is a process to examine available options using quantitative and qualitative methods to respond to a public health problem. Several frameworks and checklists have been proposed to achieve the goals of public health policy analyses [32, 34, 35]. In addition to technical criteria, such as intervention effectiveness and cost, there may be cultural, feasibility, equity and political criteria to consider in informing policy development.

3. Policy development (from the recommended policy to the enacted policy): This is a process for identifying strategies for improving policy adoption and implementation. This process may include development of strategies to engage stakeholders in policy uptake to optimally inform law, regulation or other executive action.
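As a concrete, simplified illustration of the data synthesis step (item 1 above), the sketch below pools hypothetical effect estimates from four primary studies using an inverse-variance fixed-effect meta-analysis. The study data are invented for illustration; a real synthesis would also involve a registered protocol, systematic searching, risk-of-bias assessment and heterogeneity checks.

```python
# A minimal sketch of the data-synthesis step: pooling effect estimates from
# primary studies with an inverse-variance fixed-effect meta-analysis.
# The study data below are hypothetical and for illustration only.
import math

# Each tuple: (log risk ratio, standard error) from a hypothetical trial.
studies = [(-0.22, 0.10), (-0.05, 0.15), (-0.30, 0.20), (0.04, 0.12)]

weights = [1 / se**2 for _, se in studies]                      # inverse-variance weights
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se   # 95% CI on the log scale
print(f"Pooled risk ratio: {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
```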

The intention of this framework (Fig. 1) is to facilitate understanding of an evidence-based decision-making model in the context of public health interventions and how knowledge of evidence synthesis (or lack of this knowledge) may influence decision-making. The framework is not intended to be comprehensive nor to replace existing theoretical models. It presupposes the existence of public health problems that should be solved, though there may be a lack of consensus regarding the best course of action.

The discordance in evidence to policy

If we were to consider an intervention for improving health, there would ideally be concordance between the government’s proposed health policy and guidance (recommendations) given by leading public health agencies about that intervention. Additionally, recommendations for or against the intervention would be made based on high-quality evidence for achieving the desired health outcome, and decision-makers would be motivated to incorporate those recommendations when creating new policies and programmes or making the necessary changes to existing ones. In the process of evidence to policy translation, however, at least two main scenarios can result in a discordance.

Discordance 1

This happens when research evidence does not support the use of an intervention, but public health authorities recommend the intervention nonetheless (Fig. 1, panel b and panel c). For example, it may occur when an intervention to address a public health problem is characterised by authorities as effective, despite low quality evidence, or even when evidence demonstrates that it does not work. This can lead to funding for programmes that are less effective than claimed. At minimum, this results in a waste of financial resources that could have been used for programmes that really do work. Worse, this may result in increased morbidity and associated costs. This type of discordance may arise in the process of data synthesis (Fig. 1, panel b) or policy analysis (Fig. 1, panel c).

  • The data synthesis step (Fig. 1, panel b), the process of arriving at a body of evidence from primary studies, can be flawed in several ways. For instance, the results of primary (empirical) studies assessing the effect of interventions may be biased due to weak methodology or investigators’ conflicts of interest, especially when studies are funded by industry [36] or other vested interest groups. In other cases, harms and adverse effects are minimised or completely omitted in scientific literature concerned with a given intervention [37,38,39]. Study findings, including both benefits and harms of interventions, are often reported in complex and confusing ways. Further, substandard methods may be used to gather, synthesise and interpret the findings of primary studies. It is no longer appropriate simply to assert, as it once may have been, that systematic review evidence is good evidence [40]. Even when systematic review authors believe they are using rigorous methods to examine intervention effects, their methods may in fact be poor, resulting in untrustworthy findings [3]. For instance, review authors may ‘cherry-pick’ favourable outcomes to create an impression of efficacy, either intentionally, or through poor understanding of methods or unconscious bias.

  • The policy analysis step (Fig. 1, panel c), the process of arriving at a policy recommendation based on a body of research evidence and other considerations, can also be adversely influenced. This issue may arise when the review authors themselves are directly or indirectly affected by the implications of the policy analysis. In that case, they may selectively focus on favourable outcomes of certain interventions or even ‘spin’ review evidence to promote an agenda [41, 42]. This selective reporting and outcome-spinning may arise through a sort of altruistic bias associated with unconsciously wanting an intervention to work (i.e. due to confirmation bias or observer-expectancy effect), but there could also be subtle and perhaps borderline conflicts of interest associated with expectations of future funding [43]. Finally, these issues may also arise when public health authorities are pressured to come up with solutions to societal problems that politicians and society want solved. In that case, they may recommend a policy despite a lack of solid, high quality evidence.

Although closely engaged in the policy development process, policy-makers and many other stakeholders may not have sufficient epidemiologic insight to appraise and understand these nuances and the several types of bias involved. All these issues can have a very direct and dynamic bearing on the extent to which research evidence should be believed. Ironically, this may also cause policy-makers to hesitate to rely on evidence.

An ability to understand research evidence is essential to improving health at the population level, but it is only one piece of the puzzle in developing and implementing evidence-based health policy. As discussed below, other interruptive factors and barriers can stop a truly effective intervention from making its way to policy.

Discordance 2

This discordance (Fig. 1d) often occurs in two contexts. First, public health authorities may recommend an intervention that is well supported by research evidence, but policy-makers reject it. Despite even high quality and conclusive evidence supporting its efficacy and cost-effectiveness, the intervention is a ‘hot potato’ that policy-makers would rather drop. In some cases, policy-makers may not consider such interventions at all, or only consider them with partial coverage, limited resources or with a delay in implementation. Second, an intervention may be supported by only inconclusive evidence, yet still be approved and implemented. This issue could arise when constituents or special interest groups pressure politicians around election time to fix a problem. The politician might then rush into taking actions that would serve short-term political gains, at the expense of giving policy options adequate scrutiny. In both scenarios, the discordance arises through social, cultural and other external considerations (e.g. the influence of special interests) that compete on equal (or even stronger) terms with research evidence [44].

There are other legitimate considerations beyond evidence for an intervention’s efficacy and cost-effectiveness. These may include costs, feasibility of implementation, market dominance and other factors. External considerations, such as sociocultural factors, political influence, interpersonal dynamics and action based on shared misunderstanding, may also come into play as strategic tools for affecting policy adoption. These factors are part of what can be described as an ‘ecosystem’ of health policy development [44]. In an ideal world, this latter group of considerations should not influence the policy itself; health policy should promote an intervention with the highest health impact at the lowest reasonable cost. However, in the real ecosystem of health policy development, even very efficacious interventions with potential for high impact may be shunted aside and then ‘die’ in a legislative committee. Interventions shown to be efficacious in well-controlled study conditions may have very different effects in real-world settings or in populations with different cultural values and norms. Important harms may also arise; the intervention could become ineffective over the long term, or there could be other kinds of undesirable effects. Depending on these variables, it may not always be detrimental that an efficacious intervention is not implemented immediately in all settings. These are all legitimate considerations in a rational decision-making process and in such instances may argue for a longer timeline to fully achieve policy implementation, especially if evidence and data are not available regarding all populations that will be impacted by the policy.

Depending on the specific context, the views of interest groups with fixed ideas about what should be done may prevail over research evidence in the policy-making process. Deeply held ideologies and philosophical positions of deep-pocketed political donors may also be a force. The interests of corporations or even whole industries may be an imposing shadow that looms behind policy decisions [45,46,47]. General aversion to change, political expediency and unstated conflicts of interest may also serve to exclude research evidence from policy enactment and implementation [48]. Politicians are often unwilling to invest in programmes with long-term returns due to the realities of their short-term election cycles. In the United States federal health policy context, all of these variables are often in play and the result is a fractured and somewhat incoherent health policy landscape.

Illustrative examples

To contextualise the idea of discordance in the real-life arena of public health policy in the United States, we selected and analysed four diverse public health problems with a relatively high public health burden and/or a significant individual and societal cost. Of the four topics, two reflect discordance type 1 (adolescent pregnancy and adult breast cancer) and two mainly reflect discordance type 2 (childhood obesity and HIV infection in injection drug users). We purposefully selected these four cases because they offer concrete examples for at least one discordance type that was known to us. Given the complexity of the policy-making process, health policy in these areas may be affected by more than one discordance type.

Table 1 provides a summary of the status of the evidence-based interventions addressing these public health issues, current national-level United States policies relevant to these interventions, recommendations made by key United States public health and medical agencies and organisations, examples of related evidence-to-policy discordance, potential human life and financial losses attributable to the discordances, and interruptive factors and barriers impeding translation of evidence to policy.

Table 1 Four illustrative examples of evidence-to-policy discordance in the context of health policy in the United States

Discordance 1: Current recommendations promote interventions that do not work

Example 1 – Interventions to prevent pregnancy in adolescents

Public health issue

Despite a substantial decrease in adolescent pregnancy over the past two decades, nearly 230,000 babies were born to women aged 15–19 years in 2015 [49]. Teenage pregnancy and childbearing are associated with massive economic and social costs [50, 51]. Teenage pregnancy has serious short- and long-term impacts on the lives of teen mothers, the parents of these girls, and their children [50].

Evidence for recommended interventions

A wide range of interventions have been designed and implemented to address the teen pregnancy problem worldwide and in the United States [52,53,54]. Among others, these include educational and behavioural interventions focusing on increasing adolescents’ knowledge about the risk of pregnancy, delaying the age of sexual debut, building contraceptive use skills, promoting consistent use of birth control methods, and providing birth control methods. Among all the existing interventions, findings of high-quality systematic reviews only support the promotion of contraceptive use combined with education as a means to reduce unintended pregnancy over a medium- to long-term period [55]. Although there are many randomised and observational studies, there is a paucity of evidence supporting population-level impact on pregnancy rates of behavioural sexual risk reduction interventions for adolescents [52]. Lack of evidence is in part due to the use of biased methods, indirectly assessed outcomes (e.g. evaluation of commonly used proxy outcomes, such as change in knowledge and behaviour, instead of pregnancy itself), inapplicability of content, non-fidelity in replication, and the heterogeneous modalities in which interventions are delivered [52, 54, 56].

Policy response

As an example of federal-level policy response to teen pregnancy, we focus on the Teen Pregnancy Prevention Program of the Office of Adolescent Health (OAH). The United States Congress authorised over $101 million each year of the initial programme period (fiscal years 2010–2015) for OAH to make “competitive contracts and grants to public and private entities to fund medically accurate and age appropriate programs that reduce teen pregnancy” [57]. After administration and other costs, 75% of the funds were allocated to replicate programmes “proven” [sic] to be effective in reducing teen pregnancy, and the other 25% to “innovative” programmes [58]. Since 2015, Congress has continued to fund the Teen Pregnancy Prevention Program by allocating $61 million to replicate “effective” programmes administered through 270 cooperative agreements, each ranging from $200,000 to $500,000 per year. For innovative programmes in 2018, $22 million will flow through up to 75 cooperative agreements ranging from $250,000 to $375,000 per year [59,60,61].

Discordance between evidence and recommendation

Despite lack of evidence based on globally accepted standards and practices, behavioural sexual risk reduction interventions are characterised as evidence based and promoted by the OAH in the United States Department of Health and Human Services as a way to prevent teen pregnancy. We argue that, since OAH has used obsolete and arguably flawed methods for synthesising the body of evidence for its pregnancy prevention programmes, it is unacceptable to characterise them as ‘evidence based’ and they should not be recommended for policy. To better understand our rationale, it is important to understand the process and methods that OAH used to evaluate such programmes.

OAH created the Teen Pregnancy Prevention Evidence Review (TPPER) in response to the 2010 Consolidated Appropriations Act [58], which mandated that pregnancy prevention programmes must be “proven effective through rigorous evaluation to reduce teenage pregnancy, behavioral risk factors underlying teenage pregnancy, or other associated risk factors” [62]. The congressional appropriations document nowhere defines what the United States Congress understands rigorous evaluation to mean [55]. It is reasonable to assume that Congress intended the use of methods that would minimise threats to the credibility of the evidence.

In 2016, and again in 2018, OAH released summaries of the results of the TPPER evaluation of 25 programmes. At first glance, the TPPER report appears to be part of an evidence-based decision-making process, but careful examination of the process for generating the summary evaluation leads us to question its rigour. To name a few issues, although some federal agencies (e.g. the Agency for Healthcare Research and Quality (AHRQ)) conduct systematic reviews based on global standard methods, OAH’s programmes are assessed with the simplistic, obsolete methodology developed by the What Works Clearinghouse (WWC) at the United States Department of Education more than 15 years ago [63]. Since 2007, WWC’s systematic review methodology has been revised by Mathematica Policy Research, a private company, but even with those updates their methods fall short. Among other critiques of WWC’s methodology and Mathematica Policy Research’s errors in assessment [64,65,66], particularly strong criticism came in two reports from an organisation called the National Institute for Direct Instruction [67, 68]. The latter critiques identify major concerns in WWC systematic reviews, including “misinterpretation of study findings, inclusion of studies where programs were not fully implemented, exclusion of relevant studies from review, inappropriate inclusion of studies, concerns over WWC policies and procedures, incorrect information about a program developer and/or publisher, and the classification of programs” [67]. To further expand on the shortcomings of WWC’s methods, we focus on two aspects and provide examples.

A critical shortcoming of the WWC evaluation method is its low threshold for characterising a programme as evidence based. WWC’s practice is out of alignment with global standards for assessing evidence quality, and there is no indication that WWC’s methods and episteme are uniquely superior. According to WWC, interventions are evidence based if they “demonstrate evidence of a positive, statistically significant impact on at least one of the following outcomes: sexual activity (initiation; frequency; rates of vaginal, oral and/or anal sex); number of sexual partners; contraceptive use (consistency of use or one-time use, for either condoms or another contraceptive method); STIs; pregnancy” [69].

In other words, an intervention tested in a study with a finding of one favourable outcome among several neutral or even unfavourable outcomes will be deemed evidence based. For instance, if significantly more 14- to 16-year-old teens at 1-month follow-up report that they have had fewer sexual partners than they reported at baseline, the intervention is deemed evidence based even if 17- to 18-year-old teens reported more partners or if every other outcome of the study was null or negative.
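The weakness of this threshold can be shown with a little arithmetic. Assuming, purely for illustration, that an ineffective programme is evaluated on several independent outcomes and subgroups, each tested at the conventional alpha of 0.05, the chance of at least one spurious 'significant' result grows quickly:

```python
# A rough illustration of why a "one favourable outcome is enough" threshold is
# weak: if an ineffective programme is tested on several outcomes and subgroups,
# the chance that at least one comparison is "statistically significant" purely
# by chance grows quickly. Assumes independent tests at alpha = 0.05 for simplicity.

alpha = 0.05
for n_tests in (1, 5, 10, 20):
    p_any = 1 - (1 - alpha) ** n_tests   # P(at least one false positive)
    print(f"{n_tests:2d} outcome/subgroup comparisons -> "
          f"{p_any:.0%} chance of at least one spurious 'significant' result")
```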

Another shortcoming is really a cluster of concerns with regard to study selection and risk of bias. WWC systematic review methods are idiosyncratic and are not aligned with the rigorous, global standard methods used by the AHRQ and other federal agencies [70]. In determining eligibility of studies for inclusion in WWC systematic reviews, reviewers rate studies according to an algorithm. For example, if a study population was randomised, reviewers next assess whether attrition was high or low. If it was high, they check to see whether study arms were comparable at baseline. If they were not, the study is excluded from the review (deemed “does not meet WWC standards”). Had attrition been low but reviewers then discerned unadjusted confounders, the trial would similarly have been excluded [71]. While the evidence from these studies would likely have been of poor quality, reviews conducted according to global standards (e.g. AHRQ reviews) would never exclude poorly conducted studies that, in other aspects of population, intervention, comparison and outcome (PICO) and design, met inclusion criteria [70, 72]. Rather, rigorous reviews would assess the risk of bias in each study and report it transparently. It is quite acceptable to exclude studies at high risk of bias from quantitative meta-analyses with studies of similar PICO and design, but WWC excludes these studies entirely from the review without comment [71]. Compounding the problem, WWC reviewers do not formally assess the risk of bias in individual studies. If participants were reported to have been randomised, it does not matter to WWC how well or poorly this was done. Bias associated with lack of blinding and any deficiencies in outcome assessment are also not explicitly considered [73]. There are other serious shortcomings in WWC’s study selection process that are beyond the scope of this paper.
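The contrast between this exclusion-based screen and a risk-of-bias approach can be sketched roughly as follows. This is our simplified paraphrase of the decision rule described above, not the actual WWC tool or the AHRQ/Cochrane instruments; the function names and inputs are hypothetical.

```python
# A simplified sketch of the study-screening logic described above -- not the
# actual WWC tool. It contrasts an exclude-on-flaws rule with the global-standard
# approach of including the study and grading its risk of bias transparently.

def exclusion_based_screen(randomised, high_attrition, baseline_equivalent, unadjusted_confounders):
    """Return whether a study would survive an exclusion-based screen."""
    if not randomised:
        return "does not meet standards"
    if high_attrition and not baseline_equivalent:
        return "does not meet standards"          # dropped from the review entirely
    if not high_attrition and unadjusted_confounders:
        return "does not meet standards"
    return "included"

def risk_of_bias_approach(high_attrition, baseline_equivalent, unadjusted_confounders):
    """Include the study, but grade and report its risk of bias."""
    flaws = sum([high_attrition and not baseline_equivalent, unadjusted_confounders])
    return "included, high risk of bias" if flaws else "included, lower risk of bias"

# Hypothetical trial: randomised, high attrition, non-equivalent arms at baseline.
print(exclusion_based_screen(True, True, False, False))   # silently dropped
print(risk_of_bias_approach(True, False, False))          # retained and flagged
```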

It is not possible to know with certainty the ways in which these problems manifest themselves in OAH’s 2016 summary report [74] or its similar 2018 ‘summary of findings’ [75]. It is also beyond the scope of this paper to explain in detail the differences between OAH criteria and global standards. It may suffice to say that, from the perspective of methods used by AHRQ, Cochrane Collaboration and other leading agencies in the evidence-based public health domain for assessing evidence quality, WWC’s methods seem to be poor [72, 76].

Example 2 – Mammography screening for early diagnosis of breast cancer

Public health issue

Although mortality attributable to breast cancer has declined substantially since its peak in the 1970s, breast cancer is the most common cancer in women in the United States, regardless of race or ethnicity [77]. In 2014, nearly 230,000 women were diagnosed with breast cancer and approximately 40,000 women died from breast cancer in the United States [78]. The overall risk of breast cancer for women in the United States has not changed in the last decade, though it has increased for some ethnic minorities.

Evidence for recommended intervention

Breast cancer screening is generally considered to be part of the standard of care in the battle against the high burden of disease associated with breast cancer among women [79]. The goal of breast cancer screening through mammography is to identify tumours before there are visible signs or symptoms of the disease and to treat cancer early, when chances for cure are higher.

A Cochrane systematic review [80] included seven randomised controlled trials in which women aged 39–70 years (n = 600,000) were assigned to receive mammograms or no mammograms. In trials with a low risk of bias, the breast cancer mortality rate was similar in both groups. In trials with a high risk of bias, there were 15% fewer deaths in women receiving mammograms. The reviewers estimated a 30% risk of breast cancer overdiagnosis. Overdiagnosis of breast cancer can lead to unnecessary psychological harm as well as to unnecessary biopsies, mastectomies and deaths. The review’s lead author subsequently published a paper titled “Mammography screening is harmful and should be abandoned” [81]. Subsequent studies showed a benefit of screening for women older than 50 years; however, the best available evidence suggests no benefit for screening average-risk women aged 40–49 years.

Discordance between evidence and recommendation

Currently in the United States, recommendations for when women should receive mammograms are heterogeneous and vary by organisation or agency issuing the recommendation (Table 2) [82].

Table 2 Recommendations about mammography: Women aged 40 to 49 with average riska [82]

Screening for a disease at the population level may be appropriate when, among other conditions, the disease burden is very high, screening tests are reasonably accurate (in terms of both sensitivity and specificity, analysed together), the risk of adverse events is low and costs are low; it may not be appropriate in other contexts. Breast cancer screening is associated with several common and important adverse effects, as follows: (1) a 5–50% risk over a 20-year period of receiving false positive results, leading to more tests that are costly, time-consuming and may cause anxiety [83, 84]; (2) overdiagnosis and overtreatment, namely finding and treating a tumour that would not have gotten worse had it not been detected (overtreatment can have severe side effects, including invasive unnecessary biopsy and mastectomy, radiation therapy, anxiety and even death); (3) procedures; and (4) potential risk of developing new cancers associated with repeated exposure to x-rays [82].
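The cumulative false-positive risk cited above follows from simple compounding. The per-screen false-positive probabilities in the sketch below are illustrative assumptions, not estimates from the cited studies; the point is only that modest per-screen rates accumulate substantially over 10 or 20 rounds of screening.

```python
# Back-of-the-envelope illustration of how modest per-screen false-positive
# rates compound over repeated mammograms. The per-screen probabilities below
# are assumptions for illustration, not estimates from any specific study.

def cumulative_false_positive(per_screen_rate, n_screens):
    """P(at least one false positive) over n independent screening rounds."""
    return 1 - (1 - per_screen_rate) ** n_screens

for per_screen_rate in (0.02, 0.05, 0.10):
    for n_screens in (10, 20):
        risk = cumulative_false_positive(per_screen_rate, n_screens)
        print(f"per-screen rate {per_screen_rate:.0%}, {n_screens} screens -> "
              f"cumulative risk {risk:.0%}")
```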

A systematic review of 59 reviews published between 2000 and 2015 on the benefits and harms of mammography concluded that “the specific expertise and competing interests of the authors influenced the conclusions of systematic reviews” [36]. The authors reported that systematic reviews conducted by clinicians were significantly more likely to report conclusions favouring mammography than those conducted by non-clinicians.

The American Breast Cancer Foundation seemingly ignores these risks and harms, instead making such recommendations as the following: “Women should begin scheduling their annual mammograms at the age of 40” and “Mammography can help to reduce the number of deaths from breast cancer among women ages 40–70”, referencing CDC SEER surveillance data for 2002–2008 [85]. Other breast cancer advocacy organisations also downplay or fail to accurately communicate the risks associated with mammography.

The two examples presented above for Discordance 1 serve to illustrate how lack of knowledge, unconscious biases, vested interests and other factors can have an impact on the validity and reliability of synthesised research evidence underpinning public health recommendations. With that in mind, we now turn our attention to two additional illustrative examples for Discordance 2, which show the types of pressures that often compete with research evidence in health policy development.

Example 3 – Interventions to prevent obesity in children

Public health issue

Approximately 30% of children and adolescents in the United States are clinically obese (body mass index ≥ 30) or clinically overweight (body mass index 25 to < 30). Children with obesity are at higher risk of developing asthma, type 2 diabetes, bone and joint problems, sleep apnoea, and of becoming obese as adults [86]. This increased rate of obesity has been attributed to increased consumption of sugar-sweetened beverages (SSB), increased consumption of junk food and other unhealthy foods (high in fat, salt and sugar), decreased physical activity and other factors [87].

Evidence for recommended interventions

Several recent systematic reviews provide compelling evidence about a growing number of interventions to prevent childhood obesity. Prevention interventions are diverse in terms of programming (diet, physical activity or both in combination) and setting (home, school, community, child care, primary care or combinations of these settings). A 2013 systematic review concluded that “physical activity interventions in a school-based setting with a family component or diet and physical activity interventions in a school-based setting with home and community components have the most evidence for effectiveness” [88]. Another systematic review conducted by the Robert Wood Johnson Foundation identified 12 discrete physical activity strategies and 13 nutritional interventions that could be implemented through health policy or in environmental designs of schools and communities. These include environmental modifications in schools, neighbourhoods and communities that could potentially encourage greater physical activity, as well as prompts for children to begin physical activity [89, 90]. However, to tackle childhood obesity at the population level, a multi-pronged, comprehensive and cohesive set of policies is necessary to address the root causes of the epidemic. Table 3 shows numerous structural [91] approaches that could potentially be deployed in a coordinated fashion to reduce childhood obesity, ranging from interventions through changes in laws and regulations, to those operating by means of environmental changes, and others designed to influence social norms.

Table 3 Potential interventions to reduce childhood obesity [91]

Previous policy recommendations, such as after-school physical activity programmes, taxation of SSBs, and bans on fast-food TV advertising targeting children, have been studied via microsimulation analysis. Of these, the single most effective strategy was increased taxation of SSBs [92].
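For readers unfamiliar with the approach, a microsimulation of an SSB tax can be sketched in highly simplified form as follows. Every parameter (price increase, price elasticity, baseline consumption, calories per serving) is a placeholder assumption rather than a published estimate, and a real model would also translate calorie changes into body mass index trajectories.

```python
# A toy microsimulation sketch of the kind referenced above: how an excise tax
# on sugar-sweetened beverages (SSB) might propagate to calorie intake across a
# simulated cohort. Every parameter here is a hypothetical placeholder, not a
# published estimate.
import random

random.seed(0)
N = 10_000                     # simulated children
PRICE_INCREASE = 0.20          # 20% price rise from the tax (assumption)
ELASTICITY = -1.2              # % change in consumption per % price change (assumption)
KCAL_PER_SERVING = 140         # approximate calories per SSB serving

baseline_servings = [max(0.0, random.gauss(1.5, 0.8)) for _ in range(N)]  # servings/day
change = ELASTICITY * PRICE_INCREASE                                      # -24% consumption
post_tax_servings = [s * (1 + change) for s in baseline_servings]

avg_kcal_reduction = (
    sum(b - p for b, p in zip(baseline_servings, post_tax_servings)) / N * KCAL_PER_SERVING
)
print(f"Average daily calorie reduction per child: {avg_kcal_reduction:.0f} kcal")
```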

Policy response

Childhood obesity has received substantial public health attention in the past decade. By 2013, 30 states in the United States had enacted legislation to create or expand obesity-prevention efforts in children. Currently, the most important federal law with indirect implications for childhood obesity is the Healthy, Hunger-Free Kids Act (2010). This legislation includes six large nationwide programmes [93], but the focus of the legislation is good nutrition for low-income mothers and children, not obesity prevention.

Discordance between evidence and recommendation

Children are directly targeted by the fast-food industry, which uses advertising and marketing strategies designed to capture children’s attention. Strategies include the use of cartoon characters, movie stars and sports figures in marketing, as well as offering complimentary toys with a child’s meal and special play areas at restaurants. Regulating advertisements directly targeting children could be a promising approach to preventing childhood obesity. There is compelling evidence that the food industry creates obesogenic environments to influence children’s preferences for and consumption of foods that contribute to obesity [94]. In 2005, the Institute of Medicine recommended that food-industry advertising targeting children should be eliminated, but little progress has been made since then [13]. Despite a large body of evidence supporting the effectiveness of several interventions, we did not identify any comprehensive legislation that addresses the magnitude of the United States childhood obesity epidemic. A 2015 congressional bill, the Stop Obesity in Schools Act of 2015, would direct the Department of Health and Human Services “to develop a national strategy to reduce childhood obesity that: (1) provides for the reduction of childhood obesity rates by 10% by the year 2020; (2) addresses short-term and long-term solutions; (3) identifies how the federal government can work effectively with entities to implement the strategy; and (4) includes measures to identify and overcome obstacles” [95]. The last available record indicates that the bill was referred to the Subcommittee on Early Childhood, Elementary, and Secondary Education on March 23, 2016. As of this writing, no further action has been taken.

Example 4 – Interventions to prevent HIV in injection drug users

Public health issue

The HIV epidemic remains a major public health challenge in the United States and globally [96]. People who inject drugs (PWID) may share drug paraphernalia or engage in high-risk sexual behaviour, putting them at increased risk of blood-borne infections such as HIV and hepatitis C virus (HCV) [97, 98]. In 2013, over 103,000 men and nearly 70,000 women in the United States were living with HIV, with their acquisition of the virus attributed to injecting drug use [96]. Although in the past few years the rate of HIV diagnoses in PWID has declined by nearly half [99], there has been an increase in the numbers of new heroin injectors each year, notably in the Appalachian region [100], who are now at risk for blood-borne infections through high-risk practices [101], and who also have poor access to HIV and HCV prevention and treatment programmes [102].

Evidence for recommended intervention

Along with methadone maintenance treatment, the use of syringe service programmes (SSPs) is an effective strategy to prevent the spread of blood-borne infections in PWID [103]. Provision of clean needles and syringes prevents PWID from sharing these items and decreases the risk of HIV and HCV as well as other adverse outcomes [104]. Even 20 years ago, many developed and developing countries had already implemented SSPs at large scale, but the United States had not done so [105]. There is a large body of evidence generated from empirical and modelling studies supporting the effectiveness of SSPs [104, 106, 107] and their cost-effectiveness in the United States [108, 109].

Policy response

In 1988, the Department of Health and Human Services forbade the use of any federal funds to support SSPs until they were proven to be safe and effective [110]. Since as early as 1995, CDC and most other public health organisations involved in responding to the HIV epidemic have recommended the provision of free needles [111]. Some states changed their laws to permit syringe exchange programmes in 1990, many years after other countries [112], but until 2015, the United States federal government maintained its total ban on SSP funding. With the recent emergence of an epidemic of new heroin injectors [100], the federal government changed its position to permit funding of SSPs in 2016. However, this funding cannot be used to purchase syringes or other drug paraphernalia [113].

Discordance between policy recommendation and enacted policy

Socially conservative members of the United States Congress have disregarded evidence supporting the provision of SSPs for PWID for over two decades. In the absence of randomised controlled trials, opponents of the intervention argued that there was no proof that it was effective and safe. They often referred to early studies in Canada (Montreal, Quebec and Vancouver, British Columbia) showing no difference in HIV incidence between needle exchange groups and control groups, and suggested that provision of free needles may increase drug use and injection [114]. However, systematic reviews of numerous subsequent domestic and international empirical studies, as well as modelling and cost-effectiveness analyses, have shown these concerns to be without merit [104, 106,107,108,109].

Some time lag between the production of science and its translation into policy and programmes is reasonable and should even be encouraged. In the context of public health programmes such as SSPs, potential harms of the intervention should be given just as much attention as the benefits. However, more than 20 years of research evidence shows that SSPs have minimal harms, while providing significant health and economic benefits.

External considerations beyond the realm of scientific evidence have driven SSP policy decision-making [114, 115]. The historical context helps to explain why this is the case. The early HIV epidemic among PWID coincided with the emergence of the United States ‘War on Drugs’ policy in the 1980s [116]. The War on Drugs was a punitive approach to law enforcement and justice that saw all illicit drug users as criminals, rather than as patients in need of care [117]. It is still very much a part of United States public policy, notwithstanding the recent decriminalisation of marijuana in some states. Many conservative policy-makers (and their constituents) see PWID as criminal addicts, mere ‘junkies’ who engage in lifestyle choices that they can and should control [118]. This attitude is one reason that many legislators have ignored countervailing research evidence about the efficacy of SSPs in reducing the risk of HIV infection.

Key interruptive factors and barriers in the health policy process, in the context of four discordance examples

For simplicity and illustrative purposes, we designate each health policy example to demonstrate one of the discordance types and related interruptive factors or barriers. However, even among these examples, there could still be other interruptive factors that also interfere with the translation of evidence to the policy process, causing Discordance 1 and/or Discordance 2. There are many other potential factors that we cannot explore here; however, we suggest that perhaps the three most prominent barriers and interruptive factors in the context of our examples are the lack of knowledge about the principles of evidence-based medicine, unconscious biases, and vested interests and beliefs.

Knowledge of the principles of evidence-based health policy is a cornerstone of evidence-to-policy translation. We have already discussed how the lack of such knowledge among stakeholders may lead to Discordance 1, wherein an intervention may be characterised as evidence based when, in fact, it is not, and even an intervention that results in net harms may be promoted. This may also lead to or facilitate Discordance 2 situations. For instance, we can safely assume that most legislative staffers involved in the process of policy development have not been trained in the principles of evidence-based medicine to the extent that they could properly understand, appraise and interpret health research evidence. Given the complexity of the evidence base in childhood obesity (third example), even well-trained persons might have difficulty in appreciating nuances of evidence quality. With the fourth example, one needs a relatively strong knowledge base of epidemiologic bias to proceed with good discernment in decision-making.

Unconscious bias may inform our decisions in ways beyond those we have already discussed, and can lead to both Discordance 1 and Discordance 2. For example, it is plausible that some stakeholders involved at different stages of policy development might generally prefer to keep things the way they are (status quo bias). Perhaps unaware of global standards, those reviewing the evidence for OAH may prefer to continue using the same biased but familiar methodology. Advocacy groups promoting access to mammography may prefer the existing pro-mammography guidelines, given that struggles to increase health coverage have been especially relevant to women. Policy-makers may feel the issue of childhood obesity is too complex to tackle; they may feel less pressure to scale up SSPs, compared with the current, urgent discussion about the prescription opioid epidemic. They also may not be willing to change their previous public positions, as constituents may see this as flip-flopping on a matter of moral importance. Other types of unconscious biases (e.g. observer-expectancy bias, confirmation bias) can potentially be discerned in any number of health policy examples beyond those that we discuss.

Finally, perhaps the most widely acknowledged interruptive factor in evidence-based policy-making is the influence of vested interests and beliefs. This can creep into every aspect of health policy development. Although decision-makers are typically required to disclose all potential financial conflicts of interest or to affirm that they have none, it is still fair to inquire about potential conflicts of interest, perhaps not strictly financial in nature, when cherry-picked data are used to suggest that a given behavioural intervention works while evidence for alternative approaches is ignored. What would happen to a professional organisation that ran intensive campaigns to enroll women over age 40 for mammograms because they purportedly save lives, if that same organisation later had to admit to these women that it had made a mistake? This professional call to action may have saved lives in only a small minority of women while failing to inform women of potential harms, such as the increased incidence of false positive results that likely led to overtreatment. In another controversial area, how could politicians whose constituents perceive injection drug use to be morally degenerate behaviour develop the political willpower to scale up SSPs in the United States? Careful consideration of the types of evidence and data that exist to support or refute existing arguments may be helpful to bring to the public for their education.

Discussion

We identified two types of discordances between evidence and policy implementation and illustrated their impact with examples. In addition to being wasteful of scarce financial resources, both Discordance 1 (the discordance between research evidence and policy recommendations) and Discordance 2 (discordance between policy recommendations and actual policy) can lead to large negative impacts on human health and wellbeing.

Harms associated with the discordances

The first harm reflects missed opportunities to save lives by implementing policies driven by our core values and supported by high-quality evidence. When there is discordance between policy recommendations and enacted policy, we fail to implement interventions that have been shown to work. Amid conflicts about best approaches and the pressure of other agendas competing against research evidence, important public health problems as prominent as the HIV/AIDS epidemic may be neglected [119]. Policies that go completely against research evidence, such as the war on drugs, may also be enacted [120, 121], which can result in outright societal and individual harms [122, 123].

The second type of harm is reflected in opportunity costs. Public funding is usually limited and there are always competing (e.g. obesity, diabetes) and emerging (e.g. Zika, opioid addiction) public health problems and crises. Thus, for instance, the $1.3 billion in public funds spent annually on mammograms for younger (age 40–49) women at only average risk for breast cancer is money that might be better spent on interventions that actually save women’s lives. Rather than competing over scarce resources and debating which dreaded disease is more important, the bottom line is the urgent need to make policy decisions based on the best available evidence. Continuing failure to do so supports political narratives that government is the problem: some perceive that it does not make wise investments, while others perceive that government should not play a role in this type of societal investment at all.

Physical or psychological harms are a third type of problem. Many medical procedures and public health interventions are accompanied by unintended negative consequences. Even procedures or interventions generally believed to be harmless may have an important negative impact, in part because these harms may be downplayed or ignored by investigators [39, 124, 125]. In many cases, harms are not thoroughly assessed and not initially detected in research studies that test an intervention under controlled conditions, but they may emerge when the intervention is delivered at large scale under real-world conditions. For example, licensed drugs believed to be highly effective as treatments, with minimal harms, may later be recalled due to serious side effects [126].

Finally, the discordance between research evidence and policy recommendations can discredit scientific efforts and the research community in the eyes of the general public and policy-makers. The scientific community has always had to fight against ideologies and anti-scientific dogma, especially in the context of public policy. In addition, special-interest groups may actively deploy tactics to discredit scientific findings that run counter to their vested interests or beliefs [127,128,129]. The integrity of the scientific community and of science itself plays an important role in keeping public opinion aligned with scientific fact instead of with hype, spin, conspiracy theory, ideology and other so-called ‘alternative facts’. If those involved in the production, analysis and interpretation of evidence fail to maintain an unbiased view, or even put their own vested interests in the forefront, science and scientific evidence will suffer in the public eye. Further, if scientists overstate their findings, fail to fully interpret results, or fail to convey that science is the result of centuries of constant learning that has evolved over time, the argument for relying on evidence is diminished.

Role of key health policy stakeholders

There are many stakeholders in health policy development. Besides policy-makers, they may include healthcare providers, insurance companies, government and regulatory agencies, professional disease-specific associations (e.g. American Heart Association), community and grassroots organisations, individual patients and their families, national-level foundations (e.g. Robert Wood Johnson Foundation), lobbyists, product manufacturers, patent holders, the media, and other interest groups. If scientific evidence is used appropriately in the process of policy development, it can potentially have a large impact on population health. This is reflected in laws and regulations that set standards for the quality and safety of our food, the water we drink and the air we breathe, as well as in the quality and quantity of healthcare services we receive.

Within this milieu, health advocacy groups and those engaged in the development and translation of scientific evidence for policy recommendations have a critical role to play in advancing an agenda for improving health. Health advocates are expected to make positive changes by improving access to quality care and preserving patients’ rights at the structural level (i.e. in policies, laws and regulations). Their role is even more crucial for underserved and marginalised communities (e.g. working poor, undocumented immigrants, substance users, homeless populations) whose voices may not be heard in the absence of active advocacy, as well as for topics that may be stigmatised or controversial, such as SSPs and abortion. However, as with other types of stakeholders, unconscious biases (e.g. confirmation bias) or natural human altruism may also play a role – we want an intervention to work, even when the evidence does not show this conclusively.

Despite this important role, health advocacy has only recently been recognised as a distinct discipline within the domain of public health [130]. Without a background in epidemiology, however, advocates may not have a nuanced appreciation of how synthesised research evidence can increase their effectiveness in promoting health.

Some policy-makers and other stakeholders may have training in epidemiology yet lack the sophistication to recognise bias in research or to interpret systematic review findings adequately. This matters because even systematic reviews may promote certain agendas [3, 131] or reflect the biases of programme implementers [41, 42]. When told that an intervention is evidence based, policy-makers may not closely scrutinise the underlying research. Even when these stakeholders are fully versed in all public health dimensions of a given policy decision, their judgement of whether the research evidence is strong may be distorted by prior ideological commitments, organisational influence and even self-interest.

Future implications

As a first overarching step, the research community should work with other stakeholders involved in health policy development to build a climate in which research evidence is highly valued but also tested rigorously. Too often, efforts are made to discredit science, or regulatory and public health safety agencies are pressured to downplay scientific evidence. Funding agencies, academic institutions, researchers, scientific journals and others directly engaged in producing and disseminating evidence can play a critical role in overcoming some of these challenges. Among other potential approaches, they could prioritise the quality over the quantity of research studies, improve peer review at both ends of the evidence production pipeline (i.e. research proposals and eventual manuscripts), and train more experts in the field of evidence-based public health. In the example of behavioural interventions to prevent teen pregnancy, for instance, there are too many studies of small programmes with short-term follow-up that measured only changes in participants’ knowledge and self-reported behaviours rather than actual pregnancy outcomes. Emphasis should instead be placed on change in actual pregnancy outcomes; observed effects on self-reported sexual behaviour are unreliable and provide only indirect evidence. Training experts in the field of knowledge transfer may also narrow the gap in translating evidence into policy. Knowledge transfer is an emerging field that aims to optimise the exchange of the latest research evidence and stakeholder perspectives, with the goal of improving health outcomes [2]. Health researchers can also help protect themselves against such challenges by practising high standards of scientific integrity, actively engaging in health advocacy and in interactions with diverse stakeholders and policy-makers to communicate their findings, and understanding the culture and values surrounding the problem and policy in question so that they can communicate the scientific evidence more effectively to audiences beyond the scientific community.

Further, the scientific community must do a better job of educating the public that science is an evolving process and therefore subject to change; today’s best evidence may be tomorrow’s old news. Problem solving and policy development are iterative, often requiring several cycles of planned data gathering and evaluation to achieve progressively better outcomes. Policy-makers often want to declare ‘problem solved’. Scientists should be willing to say, ‘Let the evidence indicate whether we are on the right path, and let us be ready to learn and continue to improve upon our outcomes’, and to keep testing the applicability of the evidence in the field.

It can also be difficult to identify appropriate and applicable research evidence for policy development. With respect to childhood obesity, for instance, there is a mismatch between the complex, multi-factorial nature of the problem (i.e. caused by interlinked cultural, economic, health, literacy and other barriers among underserved populations in real-life settings) and the existing evidence (i.e. largely based on single biomedical interventions delivered to selective populations under controlled conditions).

Another overarching issue is the explosion of research production, which makes it even harder to distinguish high-quality, relevant evidence from the rest for decision-making. Thousands of new medical and public health articles are indexed in PubMed alone each week. Workers in the field of knowledge transfer use synthesised findings from high-quality systematic reviews to deliver the best evidence to policy-makers and other key stakeholders. This evidence may be packaged as ‘evidence briefs’ or in other formats [44, 132], and customised to meet the needs of specific stakeholder groups.

To minimise harms associated with the discordance between evidence and policy, we propose that those engaged in the policy-making process, in particular those who translate evidence into policy recommendations and those who act as health advocates, should learn to appraise the evidence informing their policy agenda (e.g. through inclusion of this type of analytic framework and science in health policy and advocacy curricula), or work closely with those who have these skills and can represent the public interest in the policy arena being debated. Health advocates often simplify research findings in order to communicate the essence of an analysis; trusted advocates should work with the researchers involved to ensure that their statements of the findings remain accurate.

We suggest that a minimum grounding of stakeholders in the core principles of evidence-based public health, as well as in the science of communicating research findings, may build a bridge between the latest scientific evidence and public health policy. This may be especially useful when the evidence is not clear-cut, as with pregnancy prevention interventions or mammography for breast cancer. A more nuanced understanding of the degree to which research findings can be trusted, including consideration of epidemiologic biases, applicability to the policy question and the role of industry or interest group funding, could help to make United States health policy more evidence based and improve health at the population level.

In the end, enhanced training can address only the part of the problem that is attributable to a lack of adequate knowledge of evidence-based public health. We also need to recognise the importance of building competency in other areas, as outlined below.

We also propose that more attention be given to improving policy-makers’ capacity to frame evidence for different audiences, particularly segments that may discount the validity of evidence that runs contrary to their belief systems or sense of morality, as with SSPs for HIV prevention. It is particularly crucial that these groups be engaged, especially if they question the value of societal investment in particular issues, for example, segments that discount the role of government in developing and implementing programmes.

Competencies should also be built for identifying common ground across different audience segments, framing results in ways that respond to the concerns of groups focused on particular societal outcomes or on the costs associated with those outcomes. Stakeholders must also confront their own ‘unconscious bias’ in championing evidence that may not actually be valid. In the mammography example, for instance, different healthcare provider and patient advocacy groups perceive and act upon the same evidence in different ways to push their respective agendas forward.

Analysis limitations

There are some limitations to our analysis. It is not a comprehensive analysis of all possible forms of discordance; indeed, we acknowledge that the types of discordance we examine here are only two prominent ones. We selected our examples purposively, and there may have been better exemplars. Other possibilities include screening for depression and routine primary care check-ups. Depression screening often leads to overdiagnosis and, in most cases, to diagnosed patients initiating antidepressant regimens [133,134,135], despite many significant known harms of antidepressant medications [38, 136], including an increased risk of suicide and violence [39]. Primary care check-ups have been shown to have limited impact on reducing the risk of morbidity or mortality in patients without other serious health risks [137]. However, given the face validity of such interventions and their popularity among patients and advocacy groups, policy-makers may feel compelled to endorse policies that ignore this evidence.

There are likely many other factors causing or mediating Discordances 1 and 2 in the context of our examples. For each example, we focused our analysis on the single Discordance that was most visible to us and most apparent in the literature; the complementary Discordance may or may not also operate in those contexts. In the adolescent pregnancy prevention example, for instance, we reflect only on Discordance 1 (e.g. the use of inferior data synthesis methods and cherry-picking of favourable findings to characterise behavioural interventions as evidence based). However, it is also likely that this Discordance exists because of a macro-level political and cultural climate in which the alternative approaches (i.e. promotion of contraceptive and abortion services, which are truly evidence based) are morally unacceptable to some policy-makers and their constituents, and are thus removed from or downplayed in the policy agenda (Discordance 2).

Further, in the context of the examples provided, there could be discordance between evidence and policy with respect to other interventions that stakeholders inappropriately promote or demote. For example, in addition to behavioural interventions, OAH has also been promoting certain abstinence-based programmes to prevent teen pregnancy [138, 139], despite a lack of evidence of efficacy [52, 55].

To properly tease out the actual factors causing or mediating Discordance 1 and Discordance 2 in the context of our examples, we would need to survey all of the stakeholders involved (e.g. OAH, the American Breast Cancer Foundation, policy-makers). Such a study would allow us to collect and analyse primary data on stakeholders’ knowledge of evidence-based medicine, their deeply held values and beliefs in the subject area, subtle or indirect conflicts of interest that may bear on their decisions, and other issues that may interfere with the process of rigorously translating evidence into policy. Without such data, we cannot obtain an accurate and comprehensive picture of the real issues from the existing published literature, and our analysis may gravitate towards speculation.

Conclusion

Public health policy should provide evidence-based solutions to public health problems. National and local policy-makers may face barriers to using research evidence when allocating resources [140], and their priorities may be based on obsolete or incomplete evidence, or on factors other than research evidence.

In both types of discordance discussed here, there is a risk of increased population morbidity and/or mortality. It may not be feasible, or indeed possible, to change many of the context-specific barriers to the use of health research evidence in public health policy and programming. However, we can still mitigate the risks to population health if all stakeholders involved in guiding, developing and implementing public health policy have at least foundational skills in assessing evidence quality, as well as in communicating about it in nuanced ways. This could increase the use of the best evidence, which in turn could result in better population health. Even where research evidence and evidence-based recommendations are used only selectively, health policy decisions will at least be evidence informed, if not always evidence based. Given the intractable or systemic nature of many discordances, even this partial uptake could help to reduce harm and improve health at the population level.

Research evidence is often shunted aside in health policy development, but it need not be this way. If research evidence were developed by scientists who followed global best practices in conducting and reporting their research [18, 26, 72, 141,142,143,144], if health recommendations were made by public health authorities who truly understood the evidentiary strengths and limitations of the research under examination, and if policy-makers could, in turn, do the same, public health and clinical care in the United States would be far more likely to improve significantly.

Abbreviations

AHRQ: Agency for Healthcare Research and Quality

CDC: Centers for Disease Control and Prevention

HCV: Hepatitis C virus

OAH: Office of Adolescent Health

PWID: People who inject drugs

SSB: Sugar-sweetened beverages

SSP: Syringe service programmes

TPPER: Teen Pregnancy Prevention Evidence Review

WWC: What Works Clearinghouse

References

  1. Chalmers I. Trying to do more good than harm in policy and practice: the role of rigorous, transparent, up-to-date evaluations. Annal Amer Acad Polit Soc Sci. 2003;589(1):22–40.

  2. Grimshaw JM, Eccles MP, Lavis JN, Hill SJ, Squires JE. Knowledge translation of research findings. Implementation Sci. 2012;7(1):50.

  3. Ioannidis J. The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses. Milbank Q. 2016;94(3):485–514.

  4. International Network of Agencies for Health Technology Assessment. http://www.inahta.org. Accessed 7 July 2017.

  5. The Cochrane Collaboration. http://www.cochrane.org. Accessed 7 July 2017.

  6. Community Preventive Services Task Force. The Community Guide. https://www.thecommunityguide.org. Accessed 7 July 2017.

  7. Canadian Task Force on Preventive Health Care. http://canadiantaskforce.ca. Accessed 7 July 2017.

  8. US Preventive Services Task Force. https://www.uspreventiveservicestaskforce.org. Accessed 7 July 2017.

  9. Chou R, Dana T, Blazina I, Daeges M, Bougatsos C, Grusing S, et al. Statin use for the prevention of cardiovascular disease in adults: a systematic review for the U.S. Preventive Services Task Force. Rockville, MD: USPSTF; 2016.

  10. Kahn JG, Marseille E, Auvert B. Cost-effectiveness of male circumcision for HIV prevention in a South African setting. PLoS Med. 2006;3(12):e517.

  11. Mills E, Cooper C, Anema A, Guyatt G. Male circumcision for the prevention of heterosexually acquired HIV infection: a meta-analysis of randomized trials involving 11,050 men. HIV Med. 2008;9(6):332–5.

  12. Whitney CG, Zhou F, Singleton J, Schuchat A, Centers for Disease Control and Prevention. Benefits from immunization during the vaccines for children program era - United States, 1994–2013. MMWR Morb Mortal Wkly Rep. 2014;63(16):352–5.

  13. Zhou F, Santoli J, Messonnier ML, Yusuf HR, Shefer A, Chu SY, et al. Economic evaluation of the 7-vaccine routine childhood immunization schedule in the United States, 2001. Arch Pediatr Adolesc Med. 2005;159(12):1136–44.

  14. Zaza S, Briss P, Harris K. The Guide to Community Preventive Services: What Works to Promote Health? Chapter 8: Motor Vehicle Occupant Injury. Oxford University Press; 2005. https://www.thecommunityguide.org/sites/default/files/publications/Front-Matter.pdf. Accessed 7 July 2017.

  15. Lewis J. Lead Poisoning: A Historical Perspective. Environmental Protection Agency Journal; 1985. https://archive.epa.gov/epa/aboutepa/lead-poisoning-historical-perspective.html. Accessed 7 July 2017.

  16. US Food and Drug Administration. Trans Fat Now Listed with Saturated Fat and Cholesterol (2006). https://wayback.archive-it.org/7993/20180125051942/, https://www.fda.gov/Food/LabelingNutrition/ucm274590.htm. Accessed 25 June 2017.

  17. US Federal Register. Final Determination Regarding Partially Hydrogenated Oils: A Notice by the Food and Drug Administration on 06/17/2015. Docket No. FDA–2013–N–1317: Government Publishing Office. https://www.gpo.gov/fdsys/pkg/FR-2015-06-17/pdf/2015-14883.pdf. Accessed 25 June 2017.

  18. Andrews JC, Schünemann HJ, Oxman AD, Pottie K, Meerpohl JJ, Coello PA, et al. GRADE guidelines: 15. Going from evidence to recommendation—determinants of a recommendation’s direction and strength. J Clin Epidemiol. 2013;66(7):726–35.

  19. World Health Organization. WHO Handbook for Guideline Development – 2nd edition 2014. http://www.who.int/publications/guidelines/handbook_2nd_ed.pdf. Accessed 25 June 2017.

  20. Karlsson N, Gärling T, Bonini N. Escalation of commitment with transparent future outcomes. Experimental Psychol. 2005;52(1):67–73.

  21. Samuelson W, Zeckhauser R. Status quo bias in decision making. J Risk Uncertainty. 1988;1(1):7–59.

  22. Nickerson RS. Confirmation bias: a ubiquitous phenomenon in many guises. Rev Gen Psychol. 1998;2(2):175.

  23. Psychology Concepts. http://www.psychologyconcepts.com/observer-expectancy-effect-or-experimenter-expectancy-effect/. Accessed 7 July 2017.

  24. Frieden TR. Government’s role in protecting health and safety. New Engl J Med. 2013;368(20):1857–9.

  25. Kohatsu ND, Robinson JG, Torner JC. Evidence-based public health - An evolving concept. Am J Prevent Med. 2004;27(5):417–21.

  26. Smith R. The Cochrane Collaboration at 20. BMJ. 2013;347:f7383.

  27. Flay BR, Biglan A, Boruch RF, Castro FG, Gottfredson D, Kellam S, et al. Standards of evidence: criteria for efficacy, effectiveness and dissemination. Prev Sci. 2005;6(3):151–75.

  28. Bowen S, Zwi AB. Pathways to “evidence-informed” policy and practice: a framework for action. PLoS Med. 2005;2(7):e166.

  29. Brownson RC, Kreuter MW, Arrington BA, True WR. Translating scientific discoveries into public health action: how can schools of public health move us forward? Public Health Rep. 2006;121(1):97–103.

  30. Brownson RC, Chriqui JF, Stamatakis KA. Understanding evidence-based public health policy. Am J Public Health. 2009;99(9):1576–83.

  31. Jansen MW, van Oers HA, Kok G, de Vries NK. Public health: disconnections between policy, practice and research. Health Res Policy Syst. 2010;8:37.

  32. Centers for Disease Control and Prevention. CDC Policy Process: US Department of Health & Human Services; 2015. https://www.cdc.gov/policy/analysis/process/index.html. Accessed 7 July 2017.

  33. The William T. Grant Foundation. From Data to Evidence to Policy: Recommendations for the Commission on Evidence-Based Policymaking. 2016. http://wtgrantfoundation.org/. Accessed 15 Jul 2018.

  34. Collins T. Health policy analysis: a simple tool for policy makers. Public Health. 2005;119(3):192–6.

  35. Coveney J. Analyzing public health policy: three approaches. Health Promot Pract. 2010;11(4):515–21.

  36. Raichand S, Dunn AG, Ong M-S, Bourgeois FT, Coiera E, Mandl KD. Conclusions in systematic reviews of mammography for breast cancer screening and associations with review design and author characteristics. Systematic Rev. 2017;6(1):105.

  37. Gotzsche PC. Patients should have free and immediate access to all information related to clinical trials. BMJ. 2017;356:j1221.

  38. Gotzsche PC. Why I think antidepressants cause more harm than good. Lancet Psychiatry. 2014;1(2):104–6.

  39. Sharma T, Guski LS, Freund N, Gøtzsche PC. Suicidality and aggression during antidepressant treatment: systematic review and meta-analyses based on clinical study reports. BMJ. 2016;352:i65.

  40. Altman DG, Simera I. A History of the Evolution of Guidelines for Reporting Medical Research: The Long Road to the EQUATOR Network. JLL Bulletin: Commentaries on the History of Treatment Evaluation. 2015. http://www.jameslindlibrary.org/articles/a-history-of-the-evolution-of-guidelines-for-reporting-medical-research-the-long-road-to-the-equator-network/. Accessed 14 Jul 2018.

  41. Flynn AB, Falco M, Hocini S. Independent evaluation of middle school-based drug prevention curricula: a systematic review. JAMA Pediatr. 2015;169(11):1046–52.

  42. Yavchitz A, Ravaud P, Altman DG, Moher D, Hrobjartsson A, Lasserson T, et al. A new classification of spin in systematic reviews and meta-analyses was developed and ranked according to the severity. J Clin Epidemiol. 2016;75:56–65.

  43. Poskanzer SG. Higher Education Law: The Faculty. Baltimore: JHU Press; 2002.

  44. Lavis JN, Oxman AD, Lewin S, Fretheim A. SUPPORT Tools for evidence-informed health Policymaking (STP). Health Res Policy Syst. 2009;7(Suppl 1):I1.

  45. Ong EK, Glantz SA. Tobacco industry efforts subverting International Agency for Research on Cancer’s second-hand smoke study. Lancet. 2000;355(9211):1253–9.

  46. Dearlove JV, Bialous SA, Glantz SA. Tobacco industry manipulation of the hospitality industry to maintain smoking in public places. Tobacco Control. 2002;11(2):94–104.

  47. Kearns CE, Glantz SA, Schmidt LA. Sugar industry influence on the scientific agenda of the National Institute of Dental Research’s 1971 National Caries Program: a historical analysis of internal documents. PLoS Medicine. 2015;12(3):e1001798.

  48. Cabana MD, Rand CS, Powe NR, Wu AW, Wilson MH, Abboud P-AC, et al. Why don’t physicians follow clinical practice guidelines?: A framework for improvement. JAMA. 1999;282(15):1458–65.

  49. Centers for Disease Control and Prevention. About Teen Pregnancy: Pregnancy in the United States 2017. https://www.cdc.gov/teenpregnancy/about/index.htm. Accessed 4 Jul 2017.

  50. Hoffman SD, Maynard RA. Kids Having Kids: Economic Costs & Social Consequences of Teen Pregnancy. Washington, DC: The Urban Institute Press; 2008.

  51. The National Campaign to Prevent Teen and Unplanned Pregnancy. Counting It Up: The Public Costs of Teen Childbearing 2010. http://thenationalcampaign.org/why-it-matters/public-cost. Accessed 5 July 2017.

  52. Marseille EMA, Biggs MA, Miller A, Horvath H, Lightfoot M, Malekinejad M, Kahn JG. Effectiveness of School-Based Teen Pregnancy Prevention Programs in the USA: a Systematic Review and Meta-Analysis. Prevention Sci. 2018;19(4):468–89.

  53. Oberlander SE, Trivits LC. Building the evidence to prevent adolescent pregnancy: contents of the volume. Am J Public Health. 2016;106(S1):S6.

  54. Tolli MV. Effectiveness of peer education interventions for HIV prevention, adolescent pregnancy prevention and sexual health promotion for young people: a systematic review of European studies. Health Educ Res. 2012;27(5):904–13.

  55. Oringanje C, Meremikwu MM, Eko H, Esu E, Meremikwu A, Ehiri JE. Interventions for preventing unintended pregnancies among adolescents. Cochrane Database Syst Rev. 2016;2:CD005215.

  56. Brittain AW, Williams JR, Zapata LB, Pazol K, Romero LM, Weik TS. Youth-friendly family planning services for young people: a systematic review. Am J Prev Med. 2015;49(2 Suppl 1):S73–84.

  57. Library of Congress. HR 1625: Consolidated Appropriations Act, 2018.

  58. Office of Adolescent Health. Grantee Evaluations FY2010–2014: Department of Health and Human Services; 2016. https://www.hhs.gov/ash/oah/evaluation-and-research/grantee-led-evaluation/2010-2014-grantees/index.html Accessed 5 July 2017.

  59. Office of Adolescent Health. TPP Program Grantees (FY2010–2014) 2016. https://www.hhs.gov/ash/oah/grant-programs/teen-pregnancy-prevention-program-tpp/about/tpp-cohort-1/index.html. Accessed 26 May 2018.

  60. Office of Adolescent Health. About the Teen Pregnancy Prevention Program 2018. https://www.hhs.gov/ash/oah/grant-programs/teen-pregnancy-prevention-program-tpp/about/index.html. Accessed 26 May 2018.

  61. US Department of Health and Human Services. Fact Sheet: FY 2018 Funding Opportunity Announcements for Teen Pregnancy Prevention Program. 2018. https://www.hhs.gov/ash/about-ash/news/2018/fy-2018-funding-opportunity-announcements-tpp-factsheet.html. Accessed 26 May 2018.

  62. Office of Adolescent Health. Teen Pregnancy Prevention Program: Department of Health and Human Services; 2016. https://www.hhs.gov/ash/oah/adolescent-development/reproductive-health-and-teen-pregnancy/teen-pregnancy-and-childbearing/teen-pregnancy-prevention-program/index.html. Accessed 5 July 2017.

  63. US Department of Education. What Works Clearinghouse, Procedures and Standards Handbooks, Standards Version 3.0, March 2014. 2014. https://ies.ed.gov/ncee/wwc/Handbooks. Accessed 5 July 2017.

  64. Horne CS. Assessing and strengthening evidence-based program registries’ usefulness for social service program replication and adaptation. Eval Rev. 2017;41(5):407–35.

  65. Munter CP, Shekell C. The role of program theory in evaluation research: a consideration of the What Works Clearinghouse standards in the case of mathematics education. Am J Eval. 2016;37(1):7–26.

  66. Stockard JWT. The threshold and inclusive approaches to determining “best available evidence”: an empirical analysis. Am J Eval. 2017;38(4):471–92.

  67. Wood TW. National Institute for Direct Instruction. Does the What Works Clearinghouse Really Work?: Investigations into Issues of Policy, Practice, and Transparency 2017. https://www.nifdi.org. Accessed 26 May 2018.

  68. Wood TW. National Institute for Direct Instruction. Inaccuracies in WWC Reports: Findings from a FOIA Request. NIFDI Technical Report 2014–2015. https://www.nifdi.org. Accessed 26 May 2018.

  69. Office of Adolescent Health. Teen Pregnancy Program Evidence Review: Frequently Asked Questions. 2017. https://tppevidencereview.aspe.hhs.gov/FAQ.aspx#tenth. Accessed 26 May 2018.

  70. Agency for Healthcare Research and Quality. Methods Guide for Effectiveness and Comparative Effectiveness Reviews. https://effectivehealthcare.ahrq.gov/topics/cer-methods-guide/overview. Accessed 29 May 2018.

  71. Institute of Education Sciences: What Works Clearinghouse. How the WWC Rates a Study: Rating Group Designs. 2017. https://ies.ed.gov/ncee/wwc/Docs/referenceresources/wwc_info_rates_061015.pdf. Accessed 29 May 2018.

  72. Higgins JPT, Green S. Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0. The Cochrane Collaboration; 2011. http://handbook-5-1.cochrane.org. Accessed 15 Jul 2018.

  73. Mathematica Policy Research. Identifying Programs that Impact Teen Pregnancy, Sexually Transmitted Infections and Associated Sexual Risk Behaviors: Review Protocol version 5.0. 2016. https://tppevidencereview.aspe.hhs.gov/pdfs/TPPER_Review%20Protocol_v5.pdf. Accessed 29 May 2018.

  74. Lugo-Gil J LA, Vohra D, Adamek K, Lacoe J, Goesling B. Updated findings from the HHS Teen Pregnancy Prevention Evidence Review: July 2014 through August 2015 [updated version]: Office of Adolescent Health; 2016. https://tppevidencereview.aspe.hhs.gov/pdfs/Summary_of_findings_2015.pdf. Accessed 26 May 2018.

  75. Lugo-Gil J LA, Vohra D, Adamek K, Lacoe J, Goesling B. Updated findings from the HHS Teen Pregnancy Prevention Evidence Review: August 2015 through October 2016: Office of Adolescent Health; 2018. https://tppevidencereview.aspe.hhs.gov/pdfs/Summary_of_findings_2016-2017.pdf. Accessed 26 May 2018.

  76. Schünemann H, Brożek J, Guyatt G, Oxman A. GRADE Handbook for Grading Quality of Evidence and Strength of Recommendations. The GRADE Working Group; 2013. http://www.guidelinedevelopment.org/handbook/. Accessed 15 Jul 2018.

  77. Ryerson AB, Eheman CR, Altekruse SF, Ward JW, Jemal A, Sherman RL, et al. Annual report to the nation on the status of cancer, 1975–2012, featuring the increasing incidence of liver cancer. Cancer. 2016;122(9):1312–37.

  78. Centers for Disease Control and Prevention. Breast Cancer Statistics. 2014. https://www.cdc.gov/cancer/breast/statistics/index.htm. Accessed 4 July 2017.

  79. Oeffinger KC, Fontham ET, Etzioni R, Herzig A, Michaelson JS, Y-CT S, et al. Breast cancer screening for women at average risk: 2015 guideline update from the American Cancer Society. JAMA. 2015;314(15):1599–614.

  80. Gotzsche PC, Jorgensen KJ. Screening for breast cancer with mammography. Cochrane Database Syst Rev. 2013;(6):CD001877.

  81. Gotzsche PC. Mammography screening is harmful and should be abandoned. J R Soc Med. 2015;108(9):341–5.

  82. Centers for Disease Control and Prevention. Breast Cancer Screening Guidelines for Women 2016. https://www.cdc.gov/cancer/breast/pdf/BreastCancerScreeningGuidelines.pdf. Accessed 4 July 2017.

  83. Jorgensen KJ. Mammography screening is not as good as we hoped. Maturitas. 2010;65(1):1–2.

  84. Jorgensen KJ, Gotzsche PC. Overdiagnosis in publicly organized mammography screening programmes: systematic review of incidence trends. BMJ. 2009;339:b2587.

  85. American Breast Cancer Foundation. Mammogram Myths. http://www.abcf.org/think-pink-education/mammograms/mammogram-myths. Accessed 5 July 2017.

  86. Centers for Disease Control and Prevention. Childhood Obesity Facts: Prevalence of Childhood Obesity in the United States, 2011–2014. 2017. https://www.cdc.gov/obesity/data/childhood.html. Accessed 5 July 2017.

  87. Khan LK, Sobush K, Keener D, Goodman K, Lowry A, Kakietek J, et al. Recommended community strategies and measurements to prevent obesity in the United States. MMWR: Recommendations and Reports. 2009;58(7):1–29.

  88. Wang Y, Wu Y, Wilson RF, Bleich S, Cheskin L, Weston C, et al. Childhood Obesity Prevention Programs: Comparative Effectiveness Review and Meta-analysis. Prepared by the Johns Hopkins University Evidence-based Practice Center under Contract No. 290–2007-10061-I. AHRQ Publication No. 13-EHC081-EF. Rockville, MD: Agency for Healthcare Research and Quality. 2013. www.effectivehealthcare.ahrq.gov/reports/final.cfm.

  89. Brennan L, Castro S, Brownson RC, Claus J, Orleans CT. Accelerating evidence reviews and broadening evidence standards to identify effective, promising, and emerging policy and environmental strategies for prevention of childhood obesity. Ann Rev Public Health. 2011;32:199–223.

  90. Brennan LK, Brownson RC, Orleans CT. Childhood obesity policy research and practice: evidence for policy and environmental strategies. Am J Prev Med. 2014;46(1):e1–16.

  91. Blankenship KM, Friedman SR, Dworkin S, Mantell JE. Structural interventions: concepts, challenges and opportunities for research. J Urban Health. 2006;83(1):59–72.

  92. Kristensen AH, Flottemesch TJ, Maciosek MV, Jenson J, Barclay G, Ashe M, et al. Reducing childhood obesity through US federal policy: a microsimulation analysis. Am J Prev Med. 2014;47(5):604–12.

  93. National Conference of State Legislatures. Healthy, Hunger-Free Kids Act of 2010 (P.L. 111–296) Summary 2011. http://www.ncsl.org/research/human-services/healthy-hunger-free-kids-act-of-2010-summary.aspx. Accessed 5 July 2017.

  94. Sonntag D, Schneider S, Mdege N, Ali S, Schmidt B. Beyond food promotion: a systematic review on the influence of the food industry on obesity-related dietary behaviour among children. Nutrients. 2015;7(10):8565–76.

  95. House of Representatives: Energy and Commerce; Education and the Workforce Subcommittee. H.R. 3772, Stop Obesity in Schools Act of 2015: Library of Congress; 2016. https://www.congress.gov/bill/114th-congress/house-bill/3772. Accessed 5 July 2017.

  96. Centers for Disease Control and Prevention. HIV/AIDS: Basic Statistics. 2017. https://www.cdc.gov/hiv/basics/statistics.html. Accessed 1 June 2018.

  97. Nelson PK, Mathers BM, Cowie B, Hagan H, Des Jarlais D, Horyniak D, et al. Global epidemiology of hepatitis B and hepatitis C in people who inject drugs: results of systematic reviews. Lancet. 2011;378(9791):571–83.

  98. Prejean J, Song R, Hernandez A, Ziebell R, Green T, Walker F, et al. Estimated HIV incidence in the United States, 2006–2009. PloS one. 2011;6(8):e17502.

  99. Centers for Disease Control and Prevention. HIV and Injection Drug Use 2017. https://www.cdc.gov/hiv/risk/idu.html. Accessed 6 July 2017.

  100. Young AM, Havens JR. Transition from first illicit drug use to first injection drug use among rural Appalachian drug users: a cross-sectional comparison and retrospective survival analysis. Addiction. 2012;107(3):587–96.

  101. Havens JR, Lofwall MR, Frost SD, Oser CB, Leukefeld CG, Crosby RA. Individual and network factors associated with prevalent hepatitis C infection among rural Appalachian injection drug users. Am J Public Health. 2013;103(1):e44–52.

  102. Strathdee SA, Beyrer C. Threading the needle—how to stop the HIV outbreak in rural Indiana. New Engl J Med. 2015;373(5):397–9.

  103. MacArthur GJ, van Velzen E, Palmateer N, Kimber J, Pharris A, Hope V, et al. Interventions to prevent HIV and hepatitis C in people who inject drugs: a review of reviews to assess evidence of effectiveness. Int J Drug Policy. 2014;25(1):34–52.

  104. Abdul-Quader AS, Feelemyer J, Modi S, Stein ES, Briceno A, Semaan S, et al. Effectiveness of structural-level needle/syringe programs to reduce HCV and HIV infection among people who inject drugs: a systematic review. AIDS Behavior. 2013;17(9):2878.

  105. Riley D, O’Hare P. Harm reduction: History, definition, and practice. Harm reduction: National and international perspectives: Sage; 2000. https://doi.org/10.4135/9781452220680.n1.

  106. Aspinall EJ, Nambiar D, Goldberg DJ, Hickman M, Weir A, Van Velzen E, et al. Are needle and syringe programmes associated with a reduction in HIV transmission among people who inject drugs: a systematic review and meta-analysis. Int J Epidemiol. 2013;43(1):235–48.

  107. Wodak A, Cooney A. Do needle syringe programs reduce HIV infection among injecting drug users: a comprehensive review of the international evidence. Substance Use Misuse. 2006;41(6–7):777–813.

  108. Laufer FN. Cost-effectiveness of syringe exchange as an HIV prevention strategy. J Acquir Immune Defic Syndr. 2001;28(3):273–8.

  109. Nguyen TQ, Weir BW, Des Jarlais DC, Pinkerton SD, Holtgrave DR. Syringe exchange in the United States: a national level economic evaluation of hypothetical increases in investment. AIDS Behavior. 2014;18(11):2144.

  110. Clark PA, Fadus M. Federal funding for needle exchange programs. Med Sci Monitor. 2009;16(1):PH1–PH13.

  111. Lurie P. When science and politics collide: the federal response to needle-exchange programs. Bull New York Acad Med. 1995;72(2):380.

  112. Bramson H, Des Jarlais DC, Arasteh K, Nugent A, Guardino V, Feelemyer J, et al. State laws, syringe exchange, and HIV among persons who inject drugs in the United States: History and effectiveness. J Public Health Policy. 2015;36(2):212–30.

  113. Centers for Disease Control and Prevention. Department of Health and Human Services Implementation Guidance to Support Certain Components of Syringe Services Programs, 2016. 2016. https://www.hiv.gov/sites/default/files/hhs-ssp-guidance.pdf. Accessed 4 July 2017.

  114. Singer M. Needle exchange and AIDS prevention: controversies, policies and research. Med Anthropol. 1997;18(1):1–12.

  115. Buchanan D, Shaw S, Ford A, Singer M. Empirical science meets moral panic: An analysis of the politics of needle exchange. J Public Health Policy. 2003;24(3–4):427–44.

  116. Drug Policy Alliance. A Brief History of the Drug War. http://www.drugpolicy.org/facts/new-solutions-drug-policy/brief-history-drug-war-0. Accessed 6 July 2017.

  117. Bluthenthal RN, Kral AH, Erringer EA, Edlin BR. Drug paraphernalia laws and injection-related infectious disease risk among drug injectors. J Drug Issues. 1999;29(1):1–16.

  118. Acker CJ. Creating the American Junkie: Addiction Research in the Classic Era of Narcotic Control. Baltimore: JHU Press; 2002.

  119. Lee PR, Arno PS. The federal response to the AIDS epidemic. Health Policy. 1986;6(3):259–67.

  120. Nadelmann EA. Drug prohibition in the United States: Costs, consequences, and alternatives. Science. 1989;245(4921):939–47.

  121. Baum D. Smoke and Mirrors: The War on Drugs and the Politics of Failure. Boston: Little, Brown; 1996.

  122. Werb D, Rowell G, Guyatt G, Kerr T, Montaner J, Wood E. Effect of drug law enforcement on drug market violence: A systematic review. Int J Drug Policy. 2011;22(2):87–94.

  123. DeBeck K, Cheng T, Montaner JS, Beyrer C, Elliott R, Sherman S, et al. HIV and the criminalisation of drug use among people who inject drugs: a systematic review. Lancet HIV. 2017;4(8):E357–E74.

  124. Ioannidis JP, Evans SJ, Gotzsche PC, O’Neill RT, Altman DG, Schulz K, et al. Better reporting of harms in randomized trials: an extension of the CONSORT statement. Ann Intern Med. 2004;141(10):781–8.

  125. Stamatakis E, Weiler R, Ioannidis JP. Undue industry influences that distort healthcare research, strategy, expenditure and practice: a review. Eur J Clin Invest. 2013;43(5):469–75.

  126. Saluja S, Woolhandler S, Himmelstein DU, Bor D, McCormick D. Unsafe drugs were prescribed more than one hundred million times in the United States before being recalled. Int J Health Serv. 2016;46(3):523–30.

  127. Francis JA, Shea AK, Samet JM. Challenging the epidemiologic evidence on passive smoking: tactics of tobacco industry expert witnesses. Tobacco Control. 2006;15(Suppl 4):iv68–76.

  128. Tong EK, Glantz SA. Tobacco industry efforts undermining evidence linking secondhand smoke with cardiovascular disease. Circulation. 2007;116(16):1845–54.

  129. Lexchin J. Those who have the gold make the evidence: how the pharmaceutical industry biases the outcomes of clinical trials of medications. Sci Eng Ethics. 2012;18(2):247–61.

  130. Gardner A, Brindis C. Advocacy and Policy Change Evaluation: Theory and Practice. Stanford: Stanford University Press; 2017.

  131. Ebrahim S, Bance S, Athale A, Malachowski C, Ioannidis JP. Meta-analyses with industry involvement are massively published and report no caveats for antidepressants. J Clin Epidemiol. 2016;70:155–63.

  132. Lavis JN, Lomas J, Hamid M, Sewankambo NK. Assessing country-level efforts to link research to action. Bull World Health Organ. 2006;84(8):620–8.

  133. Thombs BD, Ziegelstein RC. Depression screening in primary care: why the Canadian task force on preventive health care did the right thing. Can J Psychiatry. 2013;58(12):692–6.

  134. Roseman M, Kloda LA, Saadat N, Riehm KE, Ickowicz A, Baltzer F, et al. Accuracy of depression screening tools to detect major depression in children and adolescents: a systematic review. Can J Psychiatry. 2016;61(12):746–57.

  135. Mojtabai R. Universal depression screening to improve depression outcomes in primary care: sounds good, but where is the evidence? Psychiatr Serv. 2017;68(7):724–6.

  136. Gotzsche PC. Antidepressants are addictive and increase the risk of relapse. BMJ. 2016;352:i574.

  137. Krogsbøll LT, Jørgensen KJ, Larsen CG, Gøtzsche PC. General health checks in adults for reducing morbidity and mortality from disease: Cochrane systematic review and meta-analysis. BMJ. 2012;345:e7191.

  138. Badgley AM, Musselman C, Casale T, Badgley-Raymond S. Heritage Keepers Abstinence Education. Teen Pregnancy Prevention Evidence Review. Washington, DC: US Department of Health and Human Services. https://tppevidencereview.aspe.hhs.gov/document.aspx?rid=3&sid=74. Accessed 1 June 2018

  139. Jemmott LS, Jemmott JB III, K MC. Making a Difference!. Teen Pregnancy Prevention Evidence Review. Washington, DC: US Department of Health and Human Services. https://tppevidencereview.aspe.hhs.gov/document.aspx?rid=3&sid=101. Accessed 1 June 2018

  140. Andermann A, Pang T, Newton JN, Davis A, Panisset U. Evidence for Health II: Overcoming barriers to using evidence in policy and practice. Health Res Policy Syst. 2016;14(1):17.

  141. Schulz KF, Altman DG, Moher D, Group CONSORT. CONSORT 2010 statement: updated guidelines for reporting parallel group randomized trials. BMJ. 2010;340:c332.

  142. von Elm E, Altman DG, Egger M, Pocock SJ, Gotzsche PC, Vandenbroucke JP, et al. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Lancet. 2007;370(9596):1453–7.

  143. Vandenbroucke JP, von Elm E, Altman DG, Gotzsche PC, Mulrow CD, Pocock SJ, et al. Strengthening the Reporting of Observational Studies in Epidemiology (STROBE): explanation and elaboration. Epidemiology. 2007;18(6):805–35.

  144. Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4:1.

  145. Nelson HD, Fu R, Cantor A, Pappas M, Daeges M, Humphrey L. Effectiveness of Breast Cancer Screening: Systematic Review and Meta-analysis to Update the 2009 U.S. Preventive Services Task Force Recommendation. Ann Intern Med. 2016;164(4):244–55.

  146. Harm Reduction Coalition. Government Reports: Syringe Access. http://harmreduction.org/issues/syringe-access/overview/government-reports/. Accessed 15 July 2018.

  147. Bassler SE. The history of needle exchange programs in the United States. Master’s and Doctoral Project, Document #275. The University of Toledo Digital Repository: The University of Toledo; 2007.

  148. Frieden TR, Dietz W, Collins J. Reducing childhood obesity through policy change: acting now to prevent obesity. Health Affairs. 2010;29(3):357–63.

Acknowledgements

We would like to extend our gratitude to Dr Elliot Marseille at Health Strategies International for his critical review of this manuscript.

Funding

The authors would like to acknowledge support from the discretionary fund of the Philip R. Lee Institute for Health Policy Studies at the University of California, San Francisco, as well as the Caldwell B. Eselystyn Chair in Health Policy Fund at UCSF.

Availability of data and materials

All data and materials were identified using openly accessible citations.

Author information

Contributions

All four authors contributed to initial conceptual ideas. MM and HH conducted the literature search, summarised findings from cited studies and developed the first draft of the manuscript. HS and CB critically reviewed and revised the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Mohsen Malekinejad.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Malekinejad, M., Horvath, H., Snyder, H. et al. The discordance between evidence and health policy in the United States: the science of translational research and the critical role of diverse stakeholders. Health Res Policy Sys 16, 81 (2018). https://doi.org/10.1186/s12961-018-0336-7
