


Strengthening and measuring research impact in global health: lessons from applying the FAIT framework



Background

To date, efforts to measure impact have largely focused on health research in high-income countries, reflecting where the majority of health research funding is spent. Nevertheless, there is a growing body of health and medical research being undertaken in low- and middle-income countries (LMICs), supported by both development aid and established research funders. The Framework to Assess the Impact of Translational health research (FAIT) combines three approaches to measuring research impact (Payback, economic assessment and case study narrative). Its aim is to strengthen the focus on translation and impact measurement in health research. FAIT has been used by several Australian research initiatives; however, it has not been used in LMICs. Our aim was to apply FAIT in an LMIC context and evaluate its utility.


Methods

We retrospectively applied all three FAIT methods to two LMIC studies using available data, supplemented with group discussion and further economic analyses. Results were presented in a scorecard format.


Results

FAIT helped clarify pathways of impact for the projects and provided new knowledge on areas of impact in several domains, including capacity-building for research, policy development and economic impact. However, there were constraints, particularly associated with calculating the return on investment in the LMIC context. The case study narrative provided a layperson’s summary of the research that helped to explain outcomes and succinctly communicate lessons learnt.


Conclusions

Use of FAIT to assess the impact of LMIC research was both feasible and useful. We make recommendations related to prospective use, identification of metrics to support use of the Payback framework, and simplification of the economic assessment, which may facilitate further application in LMIC environments.



Background

There is a growing interest among both research funders [1,2,3] and academics [4,5,6,7] in identifying and measuring the social, environmental and economic benefits of research. Calls to better describe ‘impact’ are driven by the need to improve accountability, ensure relevance and inform funding [7,8,9]. Whether research outcomes can be ‘translated’ or applied in the real world is seen as one important way of assessing benefit [4, 5, 10]. Further, identifying pathways to impact during the design of research programmes can improve the quality and integrity of research by clarifying purpose and end-users [10, 11]. Interest in the benefits of health and biomedical research has been prominent in broader discussions on research impact [7, 9] due to the large amount of public funding it attracts [5] and the importance of tailoring outputs to the needs of clinicians and patients [5, 12].

Calls for evidence of impact have in turn catalysed work on how to measure it, and a range of approaches have been developed globally [8, 12,13,14]. One of the earliest and most widely used [8, 9] approaches is the Payback model, introduced by Buxton and Hanney [15]. It aims to capture benefits in a range of areas such as knowledge generation, health services improvement and policy development, and has been adapted or modified a number of times [16]. Economic assessment (i.e. monetising research impacts) is also widely used, though typically at high levels, for example, aggregating research benefit nationally or in specific programmes over decades [17,18,19]. Project-specific approaches to measuring economic impact are emerging [8], though they have been critiqued for over-reliance on modelling and questionable assumptions [7, 17]. Narratives are a third, validated approach to describing impact, providing a summary of the research process and outcomes, and have been the basis of the Research Excellence Framework in the United Kingdom. Narratives have the advantage of being able to explain the complex (and often multi-directional) process through which impact occurs [7, 10, 11].

To date, efforts to measure impact have largely focussed on health research in high-income countries (HICs) [7, 8], reflecting where the majority of health research funding is spent as well as the limited infrastructure and capacity for health research in low- and middle-income countries (LMICs) [20, 21]. In 1990, the Commission on Health Research for Development, a consortium of global health agencies, researchers and development partners, identified the ‘10/90 gap’, i.e. that less than 10% of global health research spending is devoted to diseases or conditions that account for 90% of the global disease burden. This led to calls for a more equitable and systematic approach to prioritising health research investments [22, 23] and for development partners to devote 5% of official development assistance (aid) for health to research, as well as for LMICs to increase their own health research spending [24].

These calls have in part been answered – there is now more research in and on the health needs of low-income countries [25, 26] and a growing number of development partners are active in health research [27], including the United Kingdom’s Department for International Development, the United States Agency for International Development, and the Bill and Melinda Gates Foundation; these three agencies acknowledge the need to monitor the impact of their research investment [28,29,30]. Though none have standalone research impact frameworks, their policy documents refer to the need for research investments to create new knowledge and inform decision-making [28], build capacity for research in LMICs [28, 29] and facilitate local adaptation of evidence-based approaches (e.g. through implementation research) [29, 30] – all common elements of the ‘Payback’ model. Similarly, major funders of domestic health research in the United Kingdom, the United States and Canada now also fund research in LMICs directly [31,32,33,34] and through international collaborations [35]. These agencies do not appear to have standalone impact frameworks specific to their international collaborations (the United Kingdom Medical Research Council uses the Department for International Development framework); however, their domestic research impact models highlight knowledge generation and influence on policy and practice [36], economic growth (measured through links with business) and long-term health and environmental impact [2].

Thus, despite vast differences in health needs and research capacities in HIC and LMICs, there are similar expectations of what health research should achieve and how its impact should be measured in both contexts. As research investment in LMIC environments continues to grow, a better understanding of the challenges associated with research translation and measuring impact in LMIC contexts is likely to be useful. This study applies a research impact framework developed in a HIC (Australia) to research carried out in the Pacific and Indonesia. Our aim was to evaluate applicability, identify strengths and weaknesses, and make recommendations to support further use.


Methods

We carried out a rapid search for health research impact frameworks and selected the Hunter Medical Research Institute’s (HMRI) Framework to Assess the Impact of Translational health research (FAIT). Based on an extensive review of existing impact frameworks and with input from potential users, FAIT combines the three most commonly used approaches to impact assessment [10]. The first, based on the ‘Payback’ model, identifies ‘domains of benefit’. While each domain of FAIT is based on an existing approach to research impact, the combination of these approaches into a single tool is novel. Domains can be adapted to the research project under review, but suggestions proposed by FAIT include knowledge generation, impacts on policy, clinical practice, health services or population health, and economic benefits. The second method comprises a cost–benefit analysis that compares costs (of the research itself and of implementing research recommendations) to the social, environmental and economic benefits (expressed in monetary terms) that flow from implementation. Again, categories of benefit are flexible and left to the discretion of those completing the assessment. The third section is a short narrative that provides a summary of “how translation occurred and how research impact was generated” [10]. The text is structured around common sub-headings (need, research response, outcome, impact, lessons) and its purpose is to contextualise quantitative findings and explain outcomes.

We selected FAIT for three reasons. First, its mixed-methods design, combining the three main approaches to measuring research impact, provided an opportunity to test a range of impact measurement methods in the LMIC context. Second, the framework emphasises translational health research, and is therefore well suited to the research projects we sought to review, which aimed to influence policy and practice. Third, FAIT can be applied to a range of research methods, from qualitative studies to implementation research to clinical trials, and so has potential for wide application. FAIT is currently being applied to five projects within an Australian Centre for Research Excellence [37] but has not yet been applied in an LMIC context. HMRI colleagues agreed to engage in our study, adding value by reviewing our use of the FAIT tool.

Between March and September 2018, we applied FAIT to two recently completed research projects, namely (1) a programme to reduce salt consumption in two Pacific Island countries, Samoa and Fiji, and (2) the introduction of a digital health tool, SMARThealth, to improve the quality of diagnosis and treatment of cardiovascular diseases in East Java, Indonesia.

We chose these projects because good documentation was available from which data could be extracted. For the Pacific Salt project, there was a study protocol, process evaluations, impact evaluations and intervention costings for each Pacific country [38, 39]. For SMARThealth, an end-of-project completion report had been prepared for the funder, which included clinical results and a cost-effectiveness analysis. While FAIT is designed to be applied prospectively to encourage research translation and ensure the required evidence of impact is collected along the way, retrospective application represented a feasible approach to determine the applicability of FAIT in the LMIC context.

Our study was carried out in two stages. In stage one, we completed a first draft of the impact framework (presented in the FAIT scorecard format). We drew on existing documents to source the majority of data required, supplementing this with additional discussion and data mining where needed. The process was led by RD, who was not part of the research teams responsible for the two chosen projects, with support from BA to complete the economic analysis. The leads of each research team provided relevant documents and reviewed and amended RD’s first draft. In stage two, the lead author of the FAIT framework (AS) and the person leading its application and translation (SR) provided feedback and comment, which was critical to the process of refinement, including identifying additional areas of impact. While previous applications of FAIT have focussed on gathering impact data for the project under review, this study also considered the framework itself, and its applicability to a developing country context.


Results

Tables 1 and 2 present populated impact scorecards for the Pacific Salt project and the SMARThealth project. We found application of the ‘domains of benefit’ section to be feasible and useful – it helped to generate evidence of impact, including new data, in a range of areas not documented in existing project evaluations. For the Pacific Salt project, this included impacts on knowledge advancement and capacity-building, as well as indirect, positive impacts on the Samoan and Fijian economies through generating employment (in the research team) and spending of project funds (Table 1). For example, in relation to policy development, use of the framework drew attention to networks established with policy-makers, and the learning generated on the political economy of working with the food industry in Fiji, which research project leaders were aware of but had not previously documented. Support and prompting from HMRI was critical to the identification of these domains, and to the process of describing and quantifying specific benefits within them.

Table 1 Impact scorecard for the Pacific Salt project intervention, Fiji and Samoa, Pacific
Table 2 Impact scorecard for SMARThealth intervention, Malang, East Java, Indonesia

For the SMARThealth project, use of the ‘domains of benefit’ section helped to identify previously unrecognised areas of impact on knowledge generation and capacity-building of in-country partners. In addition, it prompted additional work to quantify recognised (but previously unreported) positive impacts on the health system; these included numbers of health workers trained, improvements to the medications supply, and better collection and sharing of patient data (Table 2). While these aspects were mentioned in the project evaluation, they had not been explicitly measured or identified as project benefits. For both projects, the narrative text provided a useful summary of the project and its impacts and provided an opportunity to reflect on lessons learnt.

We found generating data on the return on investment to be the most challenging aspect of FAIT, for a number of reasons. First, it required specialist input from a health economist (not available to all project teams). Second, data needed to model economic returns are often not easily available for the LMIC context, and in the case of the Pacific Salt project, were not collected during project implementation. This meant that areas of benefit identified retrospectively, such as the increased earning potential of staff in partner countries who gained skills through being involved in the project, could not be calculated. In Australia, standard pay scales for most professions are available and could be used to model such a benefit, but this is not the case for the majority of LMICs. Similarly, health gains and life years saved are commonly used measures of impact of interventions in HICs. However, they are more difficult to calculate in LMICs given the paucity of data.

Third, we found the broader context of poverty and health systems development had an impact on efficiency and hence economic return. For instance, SMARThealth provided support to allow a clinical task normally provided by a physician to be delivered by a lower-cost community health worker. In circumstances where physicians do provide this support, such a ‘shift’ would be cost saving. However, this was not the case in Indonesia, where the counterfactual was ‘no care’, so introducing the intervention represented a net cost to the health system, which in turn diminished the final social return on investment. Recognising these challenges, we attempted to monetise the identified benefits of SMARThealth, drawing on the literature for both methods and estimates on which to base assumptions. Benefits of the intervention were modelled using estimates of the number of cardiovascular disease (CVD) events avoided as a result of the intervention. In summary, our assumptions were:

  • A relative risk of 0.80 for CVD events (ischaemic heart disease, myocardial infarction or stroke) for every 10 mmHg reduction in systolic blood pressure, based on a recent systematic review and meta-analysis of randomised clinical trials [40].

  • One hospitalisation per CVD event.

  • Disability weights for people with CVD were adopted from the Global Burden of Disease Study, using an estimated weighted average of the myocardial infarction and moderate to severe stroke weights, resulting in a disability-adjusted life year (DALY) weight of 0.39. A disability weight of 1 was assigned to the dead health state and used to calculate years of life lost [41].

  • Death rates resulting from CVD events were estimated using results from the literature for middle-income nations [42].

Using these assumptions, we then calculated:

  • The reduction in CVD events resulting from the reductions in blood pressure found through the trial, projected over a 5-year period.

  • The savings to the health system, based on the average cost of hospitalisations for CVD events in Indonesia.

  • The health gains for the population resulting from the intervention in terms of DALYs averted. Using estimates from the literature [43], we estimated each healthy life year gained to represent productivity gains of 6–12 months of per capita Gross National Income. We believed this to be a conservative estimate, given the average age of the cohort targeted by the intervention was 59 years, average life expectancy in Indonesia is 69 years [44], unemployment is relatively low at 6.9% [45], and almost two-thirds work in the informal sector where there is no mandatory retirement age [46].

  • Indirect and non-medical cost savings; using estimates from the literature, indirect benefits were valued as half the Gross National Income of Indonesia per capita per healthy life year gained as a result of the intervention.

These estimated benefits were then summed, divided by the combined cost of the research and of delivering the intervention, and used to determine the estimated social return on investment (Table 2). Our approach was similar to one that might be used in a HIC environment, yet the resulting calculations were less robust given our reliance on estimates from outside Indonesia. We were unable to complete a similar set of calculations for the Fiji Salt Project due to lack of data collected during the project itself, though we did outline an approach for doing so (Table 1). Cost–benefit approaches are rarely used in LMICs, in part due to the types of constraints we encountered, including lack of a standardised approach and lack of data on which to base assumptions.
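The steps above can be expressed as a short calculation. The sketch below is illustrative only: the function names are our own, all numeric inputs in the usage example are hypothetical placeholders rather than the actual SMARThealth figures, and compounding the relative risk per 10 mmHg is one plausible reading of assumption [40].

```python
# Illustrative sketch of the social return-on-investment (SROI) calculation
# described in the text. All inputs below are hypothetical placeholders.

RR_PER_10MMHG = 0.80     # relative risk of a CVD event per 10 mmHg SBP reduction [40]
DALY_WEIGHT_CVD = 0.39   # weighted disability weight for non-fatal CVD [41]

def events_avoided(baseline_events, sbp_reduction_mmhg):
    """CVD events avoided, compounding the relative risk per 10 mmHg."""
    relative_risk = RR_PER_10MMHG ** (sbp_reduction_mmhg / 10)
    return baseline_events * (1 - relative_risk)

def sroi(baseline_events, sbp_reduction_mmhg, cost_per_hospitalisation,
         case_fatality, years_lost_per_death, gni_per_capita,
         productivity_share, indirect_share, total_cost):
    avoided = events_avoided(baseline_events, sbp_reduction_mmhg)
    # One hospitalisation per event avoided -> direct health-system savings
    hospital_savings = avoided * cost_per_hospitalisation
    # DALYs averted: years of life lost (weight 1) plus years lived with disability
    deaths_avoided = avoided * case_fatality
    dalys_averted = (deaths_avoided * years_lost_per_death
                     + (avoided - deaths_avoided) * DALY_WEIGHT_CVD)
    # Productivity gains: a share of GNI per capita per healthy life year gained
    productivity = dalys_averted * gni_per_capita * productivity_share
    # Indirect/non-medical savings: half of GNI per capita per healthy life year
    indirect = dalys_averted * gni_per_capita * indirect_share
    total_benefit = hospital_savings + productivity + indirect
    return total_benefit / total_cost

# Hypothetical example: 200 projected events over 5 years, 8 mmHg mean reduction
ratio = sroi(baseline_events=200, sbp_reduction_mmhg=8,
             cost_per_hospitalisation=2000, case_fatality=0.25,
             years_lost_per_death=10, gni_per_capita=3900,
             productivity_share=0.75, indirect_share=0.5,
             total_cost=500_000)
print(f"Estimated social return per dollar invested: {ratio:.2f}")
```

A ratio above 1 would indicate that modelled benefits exceed the combined research and delivery costs; as noted above, reliable local inputs for each parameter are often the binding constraint in LMIC settings.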


Discussion

Good practice in the delivery of aid and development assistance, including aid for health, has long emphasised principles of local ownership, effectiveness and sustainability [47, 48]. Accordingly, many research projects designed and implemented in low-income contexts intuitively emphasise engagement of local stakeholders, use of local systems and measurement of meaningful results (beyond academic outputs), suggesting that research impact models should be a ‘natural fit’ with LMIC-based health research.

There is recognised tension between the linear approach to impact implied by impact models, and the understanding that interactions between researchers and end-users are complex and iterative [10, 11, 49,50,51]. Indeed, reviews in HICs suggest the use of research impact models has favoured quantitative, empirical studies that can describe a clear, unambiguous outcome [52] and where economic returns are likely to be high [9]. In LMIC contexts, weak health governance and implementation environments [53, 54] mean the role and influence of research ‘evidence’ is even more problematic, and therefore measurement of impact even more challenging. If, as HIC reviews suggest, the use of impact models is cementing an existing bias in research funding towards statistical measures [5] at the expense of experimental or qualitative research design [52], this may be detrimental to LMIC research, where research infrastructure is often lacking, and qualitative methods are particularly needed to explain the poorly understood [53] governance environment in which research occurs.

Nevertheless, our experience suggests that impact models can play a useful role in hypothesising pathways through which impact is expected to occur, which can in turn prompt consideration of which stakeholders need to be engaged, what advocacy work (alongside research) may be required, and what vested interests could act as a barrier to uptake. This may be particularly relevant in LMIC environments and points to the need to consider research impact pathways prospectively, during the design of the research process, rather than retrospectively as we have done in this study.

Calculating economic benefit and return on investment

The challenge of valuing human life and calculating economic return on health investments is recognised in HIC contexts [8]. Our experiences suggest this challenge is exacerbated in the LMIC environment and, consequently, research projects in LMICs may struggle to show a positive return on investment. Issues include the following:

  • Low wages, high levels of informal sector employment and/or unemployment mean that productivity gains (as commonly measured, in terms of income) associated with extending healthy life are difficult to estimate.

  • Poor levels of population health and low life expectancy (relative to HICs) may obscure gains in healthy life.

  • Poor coverage of essential health services means that introducing a new service, however essential and cost-effective in and of itself, may represent a net cost for the system (as no service was previously provided) diminishing the level of return.

  • The dearth of studies from LMICs on non-medical and indirect costs, such as transport to health facilities [55], makes these costs difficult to estimate, though they are often considerable [56, 57].

  • Overarching all of these issues is the fundamental challenge of poor-quality health data in many LMICs [58] and the likely low statistical accuracy in LMIC contexts of globally standardised measures such as the DALY, on which estimates of cost savings and economic benefits rely [59].

Cost-effectiveness analyses, whereby the value of interventions is assessed in natural units (for example, cost per CVD event avoided), and cost–utility analyses, which assess interventions in terms of utility gained (for example, DALYs averted), are more common in the health sector than cost–benefit analyses (where all benefits are monetised), including in LMIC contexts [60]. In addition to being more straightforward to estimate, cost-effectiveness allows a consideration of the relative value of an intervention, which can be used to inform a ‘business case’ on whether or not to implement the intervention more widely; such data are especially important in LMIC contexts, where resources are often scarce.
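The distinction can be made concrete with a minimal sketch, using hypothetical figures: a cost-effectiveness ratio keeps the denominator in natural units, whereas a cost–benefit summary requires every benefit to be monetised first.

```python
# Contrast between the two summary measures discussed above.
# All numbers are hypothetical, for illustration only.

def icer(extra_cost, dalys_averted):
    """Incremental cost-effectiveness ratio: cost per DALY averted
    (denominator stays in natural units)."""
    return extra_cost / dalys_averted

def benefit_cost_ratio(monetised_benefits, total_cost):
    """Cost-benefit summary: requires every benefit expressed in money."""
    return monetised_benefits / total_cost

# Hypothetical intervention: $120,000 net cost, 80 DALYs averted,
# $300,000 in monetised benefits
print(icer(120_000, 80))                     # 1500.0 (cost per DALY averted)
print(benefit_cost_ratio(300_000, 120_000))  # 2.5 (return per dollar spent)
```

The first measure supports a decision rule such as "fund if cost per DALY averted is below a local threshold"; the second requires the additional, and often fragile, step of monetising health gains.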

Cost-effectiveness analyses are therefore a critical component of determining the broader societal ‘return’ on research investment (as FAIT attempts to do). Nevertheless, practical challenges remain in performing a full cost–benefit analysis in LMIC contexts, as we have demonstrated. Equally, in qualitative studies where research benefits cannot be monetised, such as a change in perceptions or attitudes, a cost–consequence analysis may be more applicable.

Finally, it is worth acknowledging that, while the Payback and narrative components of FAIT consider impact retrospectively, based on empirical evidence, the social return on investment models projected economic returns into the future. This may appear an anomaly; however, it is common practice for economic analysis to contain an element of forecasting given the challenge of demonstrating economic impact within the short time frame of a research project.

Suggestions for application of FAIT in LMICs

Prospective use with programme logic model

We applied the FAIT framework retrospectively, yielding important insights and new knowledge on study impacts. However, greater benefit is likely to come from applying the framework prospectively and in combination with a programme logic model, as intended by FAIT’s authors [10]. For example, prospective application of FAIT can help ensure relevant data is captured in monitoring frameworks. In the Fiji example, we were unable to complete a social return on investment due to lack of data – a prospective application of FAIT would have indicated these data gaps. Equally, prospective application of FAIT aids consideration of potential positive and negative programme externalities. In Malang, for example, prospective application may have highlighted the potential impact of the intervention on the workload of community health workers, and led to monitoring of any adverse impact on other health tasks they performed, e.g. in maternal and child health.

Programme logic models, also called ‘theory of change’ models, are commonly used in the design of development assistance (aid) programmes [61] to identify areas of potential impact and understand the process through which change occurs. Many LMIC studies bridge the disciplines of research and development, and are thus well suited to use of programme logic methodologies. The FAIT programme logic model, in line with models commonly used in aid programmes, identifies the need or issue to be addressed, activities, expected outputs, end-users of those outputs and anticipated impact [10]. Additional file 1 published with this paper provides two examples of the application of the FAIT modified programme logic model to current LMIC research projects, demonstrating its feasibility.

Menu of metrics

The ‘domains of benefit’ section is a key strength of FAIT, allowing identification of a range of benefits beyond the intervention/process that is the subject of the study. This is particularly useful in an LMIC context, where the ‘process of doing research’ may itself have positive externalities, for example, related to capacity-building for research, supporting local policy-makers, building the skills of the health workforce, or the economic impact of research project spending. However, our experience suggests that users of FAIT may need guidance (prompting) to identify and capture such benefits. The initial list of potential domains provided by FAIT is helpful; however, further suggestions on possible metrics linked to each domain would be useful, for example, on specific areas of potential economic benefit and how to calculate these, or how to measure the sustainability and impact of a knowledge network established during a study. This would help to ensure appropriate data collection is built into study design (e.g. on salary scales), in turn facilitating calculation of return on investment.

Calculating economic returns

We found calculating cost–benefit to be challenging and the results to have weak validity given issues of data quality and reliance on assumptions. Furthermore, we believe that there are many contexts where it will not be possible or meaningful to undertake cost–benefit analyses due to lack of data.

Nevertheless, as interest in measuring research impact grows, and is inevitably applied to LMIC contexts, further research on how to approach this challenge is likely to be useful.

Where cost–benefit analysis is not possible, cost-effectiveness analysis may provide a practical alternative. Especially where the focus of research relates to an intervention or service delivery change that can be costed, cost-effectiveness analysis should be routinely done as part of the intervention evaluation process. Such data can contribute to a business case to scale up interventions trialled during research by projecting future returns and can also inform future research funding investment. More broadly, incorporating any form of economic analysis into research impact assessment provides a valuable perspective, re-emphasising the imperative to ensure all spending choices deliver value for money – particularly important in LMIC contexts. As discussed above, conducting prospective analysis is important to identify (and make arrangements to collect) data required to conduct economic analysis.

Strengths and limitations

A key purpose of FAIT is to encourage research translation. To this end, it is designed for use throughout the implementation of research projects. We did not use the tool in this way – rather, we applied it retrospectively. Even so, we found it yielded useful findings. Further, we did not validate our impact claims through additional project evaluation as required by some impact templates [52]. However, an independent researcher led the process and HMRI’s involvement provided a level of external scrutiny. Indeed, our pragmatic approach to assessing impact was a strength of this study as it responds to a common critique of impact frameworks, namely that they take too long to complete and are too expensive to implement [11]. This approach was facilitated by the fact that the projects reviewed already included a significant focus on research translation through involvement of stakeholders (end users), and in the case of the Pacific Salt project, a comprehensive process evaluation [38, 39]. This points to a further limitation of our study, namely that we focussed exclusively on implementation research. Applying FAIT to other types of research project designs in LMICs will allow broader assumptions to be made about its applicability within the LMIC context.


Conclusions

Though developed to measure impact in Australian health systems, FAIT can be applied to research projects in LMICs. We found the mixed-methods approach to assessing impact to be a key benefit of FAIT. While we encountered challenges calculating return on investment, the use of the FAIT framework helped illuminate data gaps and highlighted the importance of considering affordability. We make suggestions that support further applications of FAIT in LMICs and, we hope, will contribute to an emerging conversation on how best to measure research impact in LMICs. In this context, future research that tests the applicability of other high-income research frameworks in low-income environments may be useful. Capacity-building for any staff using the framework is likely to be a worthwhile investment.



Abbreviations

CVD: cardiovascular disease
DALY: disability-adjusted life year
FAIT: Framework to Assess the Impact of Translational health research
HIC: high-income country
HMRI: Hunter Medical Research Institute
LMICs: low- and middle-income countries




Acknowledgements

Thanks to Anushka Patel and D Praveen of The George Institute for Global Health for support completing the SmartHealth impact scorecard.


Funding

No specific funding was received for this study.

Availability of data and materials

Not applicable.

Author information




Authors' contributions

RD and JW conceived the study. RD led the writing and DP, JW and SR provided substantive input to drafts. RD developed first drafts of Tables 1 and 2, with support from BA on the economic analysis; SR and AS reviewed these tables and provided feedback. All authors reviewed and approved the final manuscript.

Corresponding author

Correspondence to Rebecca Dodd.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

AS led the team that developed the FAIT tool reviewed in this study. Since 2016, SR has been engaged as a Post-Doctoral Fellow in Impact Assessment, with a key role in applying FAIT. JW is Director of the World Health Organization Collaborating Centre on Salt Reduction. The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional file

Additional file 1:

Examples of application of the FAIT programme logic model to the health workforce study (India) and to the intervention to reduce salt intake (Pacific). (DOCX 92 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Dodd, R., Ramanathan, S., Angell, B. et al. Strengthening and measuring research impact in global health: lessons from applying the FAIT framework. Health Res Policy Sys 17, 48 (2019).



Keywords

  • Research impact
  • Translation
  • Economic impact
  • Low-income countries