This article has Open Peer Review reports available.
Evidence for Health I: Producing evidence for improving health and reducing inequities
© Andermann et al. 2016
Received: 13 February 2015
Accepted: 16 February 2016
Published: 14 March 2016
In an ideal world, researchers and decision-makers would be involved from the outset in co-producing evidence, with local health needs assessments informing the research agenda and research evidence informing the actions taken to improve health. The first step in improving the health of individuals and populations is therefore gaining a better understanding of what the main health problems are and, of these, which are the most urgent priorities, using both quantitative data to develop a health portrait and qualitative data to better understand why the local population thinks that addressing certain health challenges should be prioritized in their context. Understanding the causes of these health problems often involves analytical research, such as case-control and cohort studies, or qualitative studies to better understand how more complex exposures lead to specific health problems (e.g. interviews with local teenagers revealing that watching teachers smoke in the school yard, peer pressure and media influence all shape smoking initiation among youth). Such research helps to develop a logic model that maps out the proximal and distal causes of poor health and to determine potential pathways for intervening to improve health outcomes. Rarely is there a single ‘cure’ or stand-alone intervention; rather, a continuum of strategies is needed, from diagnosis and treatment of patients already affected, to disease prevention, health promotion and addressing the upstream social determinants of health. Research for developing and testing more upstream interventions must often go beyond randomized controlled trials, which are expensive, less amenable to more complex interventions and can be associated with certain ethical challenges.
Indeed, a much neglected area of the research cycle is implementation and evaluation research, which often involves quasi-experimental research study designs as well as qualitative research, to better understand how to derive the greatest benefit from existing interventions and ways of maximizing health improvements in specific local contexts. There is therefore a need to alter current incentive structures within the research enterprise to place greater emphasis on implementation and evaluation research conducted in collaboration with knowledge users who are in a position to use the findings in practice to improve health.
“Even if the cure for HIV was one glass of clean water, we wouldn’t be able to cure the world.” – Technical Officer at the World Health Organization, Geneva, Switzerland
To help people make better-informed decisions about improving health and reducing health inequities, an important question is: what evidence is needed to support these decisions? There is a large body of biomedical research evidence that looks at single diseases and considers randomized controlled trials (RCTs) to be the gold standard in determining whether a given medicine or device will benefit a specific patient group as compared to no treatment (i.e. placebo) or the current standard of care. However, in the field of public health, where the aim is to improve the health of entire populations, a more complex arsenal of research study designs is needed that better addresses the complexity and contextual nuances involved, while also ensuring that research evidence is co-produced with knowledge users who are able to implement changes that will, in practice, lead to improved health outcomes. Even for diseases with a known prevention or cure, people are still dying from these conditions because we lack knowledge on how to make these treatments work in practice in a variety of contexts. The purpose of this article series is therefore to describe how to produce evidence for improving the health of populations and how to ensure that this evidence is then used to make better-informed decisions for health. The first article in this series focuses on the different kinds of study designs and approaches that can be used, beyond the traditional focus on RCTs, to produce evidence that can help improve population health and reduce health inequities.
Defining health priorities
The first step in improving the health of individuals and populations is better understanding what the main health problems are, and of these, which are the most urgent priorities and why. According to the PRECEDE-PROCEED model, quantitative data can be used to create a health portrait of the frequency and severity of context-specific health problems (i.e. the ‘objective health needs’), but qualitative data is also needed to explore the perceptions of whether these health problems are considered by the local population to be a priority and why (i.e. the ‘subjective health needs’). For instance, the US National Health and Nutrition Examination Survey, and similar surveys in other countries, ask about disease prevalence and may also include direct assessments of the health condition (e.g. identifying diabetes by doing blood sugar tests). However, in addition to knowing how many people have the health problem and how many new cases develop each year, it is also important to know the severity of the health problem. This, in turn, has important implications for the health system, especially in relation to non-communicable diseases, including mental health conditions, addictions, gender-based violence, child maltreatment and other chronic problems that can cause prolonged suffering, greatly impacting quality of life and increasing the need for care over long periods of time, even if there may not be a significant impact on mortality. To better understand in what ways health problems actually affect people, it is necessary to ask people directly. Therefore, qualitative research can be used to tease out how a health problem impacts people’s lives and what kind of support would be most helpful.
When prioritizing which health problems should be the focus of further research (i.e. moving to step 2 in the research cycle), it is not sufficient to simply rank the health conditions that result in the largest number of deaths or disability-adjusted life years, or that cost the most money. According to Green and Kreuter’s PRECEDE-PROCEED model for health planning, in addition to the ‘objective needs assessment’ based on surveillance data and descriptive surveys, there should also be a ‘subjective needs assessment’ that considers the viewpoint of the local population. People want to be involved, and their voices should be heard to ensure a fair process, since these decisions will ultimately affect them. Qualitative research is more participatory and inclusive, using purposive sampling to obtain a wide range of perspectives, including those in the minority who may be more marginalized. Thus, it is an important way of involving various populations or target groups in providing their own views, empowering them to determine their own health priorities and identify their preferred solutions.
Understanding the causes of the health problem
Once the major health priorities are identified, the next step is to better understand the causes of the health problems as a basis for identifying effective interventions. Epidemiological studies, such as case-control and cohort studies, can demonstrate whether there is an association between an exposure (such as smoking) and an outcome (such as lung cancer) [9, 10].
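As a minimal illustration of how a case-control study quantifies such an association, the odds ratio can be computed from a 2×2 table of exposure counts among cases and controls. The counts below are invented purely for illustration and are not taken from the Doll and Hill studies:

```python
# Odds ratio from a hypothetical case-control study of smoking and lung cancer.
# All counts are invented for illustration only.
exposed_cases = 120      # smokers with lung cancer
unexposed_cases = 30     # non-smokers with lung cancer
exposed_controls = 60    # smokers without lung cancer
unexposed_controls = 90  # non-smokers without lung cancer

# Odds of exposure among cases, divided by odds of exposure among controls.
odds_ratio = (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)
print(round(odds_ratio, 1))  # 6.0
```

An odds ratio above 1 suggests the exposure is associated with the outcome, although association alone does not establish causation; study design and control of confounding determine how strong a causal inference can be drawn.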
The next step in the research cycle is determining what works to improve health. This involves developing new interventions or identifying existing interventions that act on the causes of poor health, and then conducting further research to assess which of these interventions actually makes a difference in improving health outcomes.
Developing interventions to improve health outcomes
In developing and testing interventions, we want to know whether the intervention works, how well it works, whether there are any unwanted negative consequences, whether the benefits of the intervention outweigh the harms, and how much this will cost per incremental improvement in health. While RCTs have long been the gold standard for determining the efficacy of an intervention, these studies can nonetheless have certain methodological challenges that can affect the internal and external validity, and hence the usefulness of the results. This led to the development of the CONSORT reporting standards to at least be able to better judge these shortcomings and determine the utility of the data for decision-making. However, even beyond issues relating to validity of results, RCTs can be extremely expensive, less amenable to studying more complex interventions at the health system or population level, and are also subject to important ethical considerations [16, 17], which are beyond the scope of this article. Therefore, alternative study designs are important and increasingly being used, and include pre-post studies where the population serves as its own control group and stepped-wedge designs with sequential roll-out of interventions over time, both of which offer certain advantages, but like all research studies, also have their limitations.
Implementing and evaluating research in a real-world context
The final steps in the research cycle are traditionally the implementation and evaluation of the intervention in ‘real world’ settings rather than controlled research settings. Nevertheless, in population health research, where interventions are often complex and where it can be difficult to find ‘controlled settings’, the boundaries between the development of interventions, and their implementation and evaluation, can be blurred. Ultimately, what we really want to know is whether an intervention has improved the health of the population and has reduced health inequities.
Commonly used ways of assessing whether there has been a positive change in population health are quasi-experimental studies such as pre-post studies, natural experiments and stepped-wedge designs. If an intervention has been shown to produce a positive impact on health, policymakers will also want to know how much the actual implementation of the intervention will cost, and what the incremental cost per additional health benefit produced would be. Economic evaluations can attempt to provide this information, though they often rely on modelling based on a variety of assumptions that may or may not reflect reality in a given context. Moreover, demonstrating that an intervention is inexpensive and able to produce a health benefit in a controlled research setting is very different from ensuring that the health benefit can be realized within a given budget when the intervention is implemented on a larger scale in a ‘real world’ setting.
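The ‘incremental cost per additional health benefit’ is usually summarized in economic evaluations as an incremental cost-effectiveness ratio (ICER): the difference in cost between two options divided by the difference in health gained. A minimal sketch, with entirely hypothetical costs and effects:

```python
# Incremental cost-effectiveness ratio (ICER) for a hypothetical new
# intervention versus the current standard of care.
# All figures are invented for illustration only.
cost_new, cost_standard = 500_000.0, 300_000.0   # total programme costs
qalys_new, qalys_standard = 120.0, 100.0         # quality-adjusted life years gained

# Extra cost incurred per extra QALY gained by switching to the new intervention.
icer = (cost_new - cost_standard) / (qalys_new - qalys_standard)
print(icer)  # 10000.0 (cost units per QALY)
```

A decision-maker would then compare this ratio against a willingness-to-pay threshold for their context; the same caveat from the text applies, since the cost and effect estimates feeding the ratio are themselves model outputs resting on assumptions.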
The tail end of the research cycle which deals with implementation and evaluation is a grey zone where research blends into practice. In research, the purpose is to generate new knowledge and to develop and test hypotheses. This entails using the various research study designs described above, and it also requires ethical approval to protect research participants (regardless of the study design chosen, since all studies pose certain ethical challenges that should not be overlooked). In contrast, the purpose of implementation and evaluation is to improve the effectiveness and efficiency of programs and policies by modifying, adapting and adjusting these in accordance with lessons learnt from actually using them (and studying how they work) in practice.
According to WHO, most research to date has focused on the development of new interventions rather than optimizing the delivery of existing interventions. There is therefore a call for more research that “focuses on studying how research outcomes can be translated into practice”. Of course, the type of research that succeeds in being funded reflects the priorities established by funding agencies, which tend to be overly concerned with developing new technologies and securing intellectual property agreements rather than optimizing delivery and utilization in local contexts. According to Leroy et al., “ninety-seven percent of grants were for developing new technologies, which could reduce child mortality by 22%. This reduction is one third of what could be achieved if existing technologies were fully utilized”. Indeed, many evidence-based innovations fail to generate the expected health impact when transferred to communities in the global South, largely because their implementation is untested, unsuitable or incomplete. If the goal of research is improving population health and saving lives, then funding agencies need to rethink whether they are investing in the right places.
In an attempt to maximize the health impact of research by optimizing delivery and utilization of existing technologies, WHO developed an implementation research platform to better understand the challenges of generalizing research findings in the real world and contextualizing interventions for implementation in specific settings. Similarly, evaluation research is intended to assist decision-makers in making better informed choices about whether to continue, modify or discontinue a certain policy or program. Not only is this important for performance management by demonstrating accountability, transparency and the judicious use of public funds, but ultimately, evaluation research is important to ensure that the interventions implemented are indeed improving the health and well-being of individuals and populations. Thus, greater investment and infrastructure are required to ensure that such research takes place, since far too many programs and policies are put into place without much attention to the underlying evidence base, and are then left in place for years or even decades, with little or no continuous quality improvement to ensure that they are producing the outcomes initially intended.
There are many different types of research studies that can help to answer a wide variety of research questions. However, in practice, certain types of studies generally prevail, whereas other types are few and far between. For instance, until recently, there was relatively little work in the area of implementation and evaluation research, as most research focused on earlier stages of the research cycle. Indeed, researchers would develop research protocols, apply for funding, conduct their research studies to measure disease or understand causes or test simple disease-specific interventions, prepare manuscripts for publication in high-impact peer-reviewed journals often concluding that “more research is needed”, and then start the process all over again, essentially bypassing the implementation and evaluation stages. Therefore, organizations which support research must acknowledge the importance of implementation and evaluation research and provide the necessary resources to develop research capacity and fund proposals that strengthen the knowledge base in these fields.
It is also increasingly being recognized that research evidence will have very little effect if it does not reach the local knowledge users who are in a position to apply this information to motivate change. Ideally, according to Parry et al., these knowledge users and decision-makers should be engaged in the research process from the very outset, to help inform the key knowledge gaps that need to be addressed and to then ‘translate’ the evidence into policy and practice. This increased emphasis on ‘integrated knowledge translation’, also known as co-production of research evidence, certainly requires more time and effort to build up the required interdisciplinary and intersectoral partnerships, but it also increases the chances that the research findings will be applied and used in practice and will yield tangible results in the long run. Integrated knowledge translation implies that researchers must play an important role in helping knowledge users frame health priorities in a way which can be addressed by the different kinds of research study designs available, often requiring mixed methods approaches to tease out complex issues.
Even as researchers are increasingly encouraged to think about how research findings can be applied in practice, and can now apply for a growing number of knowledge dissemination grants, there nonetheless remain perverse incentives in the way research is funded that lead to some types of research being prioritized over others, not because they are more important or will lead to more significant health gains, but because of the way in which the research enterprise is structured. In this regard, researchers, especially in academic settings, tend to focus on ‘publications’, ‘professorships’ and ‘patents’, rather than ‘policy’, ‘practice’ and ‘people’. Indeed, the evidence base is hugely biased towards basic science and clinical research (e.g. the effect of anti-hypertensive medications on blood pressure) rather than population research (e.g. the impact of grassroots community development and social norm modification on the incidence of family violence and child maltreatment). Pratt and Loff further argue that research legislation and policies used in high-income countries have increasingly led these countries to invest in health research aimed at boosting national economic competitiveness rather than reducing health inequities, and that this ‘gadget health’ approach “diverts funding away from research that is needed to implement existing interventions and to strengthen health systems, i.e. health policy and systems research”.
To ensure that we do not lose sight of the true goals of health research, it is important to look at the big picture and not be blinded by academic or commercial interests, such as the ‘publish or perish’ imperative or the hype surrounding new technologies. There are no ‘magic bullets’ or easy cures for the world’s health problems, which are largely a reflection of underlying economic, social, cultural and political problems. Further, there is little point in producing all of this research evidence if it is not used to make better-informed decisions and policies to improve health. Incentive systems are therefore required, such as greater availability of funding mechanisms and research awards tailored to this area, and which recognize the importance of applying research in practice. Beyond the number of publications produced, what if researchers were instead judged on their efforts to inform the development, implementation and evaluation of policies and programs that prevent human suffering, save lives and reduce inequities? Perhaps then we really would see the benefits of research in practice.
AA is funded through a Clinician Scholar Award by the Quebec Health Research Fund (Fonds de Recherche du Québec – Sante) and the Quebec Federation of Medical Specialists (Fédération des Médecins Spécialistes du Québec).
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
- Andermann A. Evidence for health: from patient choice to global policy. Cambridge: Cambridge University Press; 2013.
- World Health Organization. World Health Assembly Resolution: WHO’s Role and Responsibilities in Health Research (WHA63.21). Geneva: WHO; 2010. http://apps.who.int/gb/ebwha/pdf_files/WHA63/A63_R21-en.pdf. Accessed 11 February 2016.
- Gielen AC, Green LW. The impact of policy, environmental, and educational interventions: a synthesis of the evidence from two public health success stories. Health Educ Behav. 2015;42(1 Suppl):20S–34.
- Liu S, Wang W, Zhang J, He Y, Yao C, Zeng Z, et al. Prevalence of diabetes and impaired fasting glucose in Chinese adults, China National Nutrition and Health Survey, 2002. Prev Chronic Dis. 2011;8(1):A13.
- Green LW, Kreuter MW. Health Program Planning: An Educational and Ecological Approach. 4th ed. New York: McGraw-Hill; 2005.
- Theodorou M, Samara K, Pavlakis A, Middleton N, Polyzos N, Maniadakis N. The public’s and doctors’ perceived role in participation in setting health care priorities in Greece. Hellenic J Cardiol. 2010;51(3):200–8.
- Gruskin S, Daniels N. Process is the point: justice and human rights: priority setting and fair deliberative process. Am J Public Health. 2008;98(9):1573–7.
- Smith AM, Adams R, Bushell F. Qualitative health needs assessment of a former mining community. Community Pract. 2010;83(2):27–30.
- Doll R, Hill A. Smoking and carcinoma of the lung. Preliminary report. Br Med J. 1950;2(4682):739–48.
- Doll R, Peto R, Boreham J, Sutherland I. Mortality in relation to smoking: 50 years’ observations on male British doctors. BMJ. 2004;328:1519–28.
- Towns S, DiFranza JR, Jayasuriya G, Marshall T, Shah S. Smoking cessation in adolescents: targeted approaches that work. Paediatr Respir Rev. 2015. Ahead of print.
- Dhala A, Pinsker K, Prezant DJ. Respiratory health consequences of environmental tobacco smoke. Clin Occup Environ Med. 2006;5(1):139–56.
- Hopkins DP, Briss PA, Ricard CJ, Husten CG, Carande-Kulis VG, Fielding JE, et al. Reviews of evidence regarding interventions to reduce tobacco use and exposure to environmental tobacco smoke. Am J Prev Med. 2001;20(2 Suppl):16–66.
- Guyatt GH, Sackett DL, Cook DJ, on behalf of the Evidence-Based Medicine Working Group. Users’ guides to the medical literature. II. How to use an article about therapy or prevention. A. Are the results of the study valid? JAMA. 1993;270(21):2598–601.
- Consolidated Standards of Reporting Trials (CONSORT). http://www.consort-statement.org/. Accessed 11 February 2016.
- Declaration of Helsinki. Ethical principles for medical research involving human subjects. Geneva: World Medical Association; 2008. http://www.wma.net/en/30publications/10policies/b3/index.html. Accessed 11 February 2016.
- Tri-Council Policy Statement. Ethical Conduct for Research Involving Humans. Ottawa: Canadian Institutes of Health Research, Natural Sciences and Engineering Research Council of Canada, Social Sciences and Humanities Research Council of Canada; 2010. http://www.pre.ethics.gc.ca/pdf/eng/tcps2/TCPS_2_FINAL_Web.pdf. Accessed 11 February 2016.
- Brown CA, Lilford RJ. The stepped wedge trial design: a systematic review. BMC Med Res Methodol. 2006;6:54.
- Hargreaves JR, Copas AJ, Beard E, Osrin D, Lewis JJ, Davey C, et al. Five questions to consider before conducting a stepped wedge trial. Trials. 2015;16:350.
- Pollock K. Procedure versus process: ethical paradigms and the conduct of qualitative research. BMC Med Ethics. 2012;13:25.
- Health Systems Research and Implementation Research. Geneva: Special Programme for Research and Training in Tropical Diseases (TDR), World Health Organization; 2011. http://apps.who.int/tdr/svc/topics/health-systems-implementation-research. Accessed 11 February 2016.
- Leroy JL, Habicht JP, Pelto G, Bertozzi SM. Current priorities in health research funding and lack of impact on the number of child deaths per year. Am J Public Health. 2007;97(2):219–23.
- Madon T, Hofman KJ, Kupfer L, Glass RI. Public health. Implementation science. Science. 2007;318(5857):1728–9.
- Alliance for Health Policy and Systems Research. Implementation Research Platform. Geneva: World Health Organization; 2011. http://www.who.int/alliance-hpsr/projects/implementationresearch/en/index.html. Accessed 11 February 2016.
- Blalock A. Evaluation research and the performance management movement: from estrangement to useful integration? Evaluation. 1999;5(2):117–49.
- Parry D, Salsberg J, Macaulay A. A guide to researcher and knowledge-user collaboration in health research [online course]. Ottawa: Canadian Institutes of Health Research; 2009.
- Pang T. Filling the gap between knowing and doing. Nature. 2003;426(6965):383.
- Pratt B, Loff B. Health research systems: promoting health equity or economic competitiveness? Bull World Health Organ. 2012;90:55–62.
- Evans P, Meslin E, Marteau T, Caulfield T. Deflating the genomic bubble. Science. 2011;331(6019):861–2.