Looking both ways: a review of methods for assessing research impacts on policy and the policy utilisation of research

Abstract

Background

Measuring the policy and practice impacts of research is becoming increasingly important. Policy impacts can be measured from two directions – tracing forward from research and tracing backwards from a policy outcome. In this review, we compare these approaches and document the characteristics of studies assessing research impacts on policy and the policy utilisation of research.

Methods

Keyword searches of electronic databases were conducted in December 2016. Included studies were published between 1995 and 2016 in English and reported methods and findings of studies measuring policy impacts of specified health research, or research use in relation to a specified health policy outcome, and reviews reporting methods of research impact assessment. Using an iterative data extraction process, we developed a framework to define the key elements of empirical studies (assessment reason, assessment direction, assessment starting point, unit of analysis, assessment methods, assessment endpoint and outcomes assessed) and then documented the characteristics of included empirical studies according to this framework.

Results

We identified 144 empirical studies and 19 literature reviews. Empirical studies were derived from two parallel streams of research of equal size, which we termed ‘research impact assessments’ and ‘research use assessments’. Both streams provided insights about the influence of research on policy and utilised similar assessment methods, but approached measurement from opposite directions. Research impact assessments predominantly utilised forward tracing approaches while the converse was true for research use assessments. Within each stream, assessments focussed on narrow or broader research/policy units of analysis as the starting point for assessment, each with associated strengths and limitations. The two streams differed in terms of their relative focus on the contributions made by specific research (research impact assessments) versus research more generally (research use assessments) and the emphasis placed on research and the activities of researchers in comparison to other factors and actors as influencers of change.

Conclusions

The Framework presented in this paper provides a mechanism for comparing studies within this broad field of research enquiry. Forward and backward tracing approaches, with their different ways of ‘looking’, each tell a different story of research-based policy change. Combining approaches may provide the best way forward in terms of linking outcomes to specific research, as well as providing a realistic picture of research influence.

Background

Research evidence has the potential to improve health policy and programme effectiveness, help build more efficient health services and ultimately achieve better population health outcomes [1]. The translation of research evidence into health policy, programmes and services is an ongoing and commonly reported challenge [2]. If research is not translated, it means that extensive investments in research and development are potentially going to waste [3]. In response to this issue, researchers and funding bodies are being asked to demonstrate that funded research represents value for money, not only through the generation of new knowledge but also by contributing to health and economic outcomes [4, 5]. Pressures for accountability have also led to a greater focus on evidence-informed policy-making, which calls for policy-makers to make greater use of research in policy decisions so that policies and programmes are more likely to improve population health outcomes [1].

Consequently, there has been an increasing emphasis on measuring the wider impacts of research [6] – “an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia” ([7], p. 4) – as well as on understanding how research is used in decision-making processes [1, 8,9,10,11,12,13]. This literature review focuses specifically on methods for measuring the impacts of research on public policy, where policy impacts are considered intermediary outcomes between research outputs and longer-term impacts such as population health and socioeconomic changes [1]. Health policy impacts can be defined variously, but encompass indirect or direct contributions of research processes or outputs to the development of new health policy or revisions of existing health policy at various levels of governance [14]. It is proposed that the use of research to inform public policy leads to desired outcomes such as health gains [1]. Policy impacts, however, can be more easily measured and attributed to research than impacts that are further ‘downstream’ from research outputs [1, 15].

Measuring the policy impacts of research can be approached from two directions – tracing forward from research to identify its impacts on policy and other outcomes, and tracing backwards from a policy outcome (e.g. policy change or document) to identify whether and how research has been utilised [1, 11, 16, 17]. Several reviews have considered conceptual approaches and methods for assessing research impacts [5, 6, 16,17,18,19,20,21,22] and research utilisation in health policy-making [1, 11]. These reviews identify elements that characterise and differentiate assessment processes (Box 1). Examples of the empirical application of forward tracing research impact assessments are more commonly discussed in existing reviews than backward tracing approaches.

In addition, existing reviews have only addressed the relative advantages and disadvantages of forward and backward tracing approaches to a limited degree [1, 11, 16, 17]. Forward tracing approaches are reported to be more common because they allow a more precise focus on specific research, which is important for funding bodies seeking to account for research expenditure [1, 16, 17]. However, this focus creates challenges in attributing any observed changes to the specific research under study [16], because research is usually only one factor amongst many at play during policy decisions [25]. Furthermore, where research is influential, policy decisions are usually based on the synthesis of a broad spectrum of knowledge, rather than the findings of individual studies or a specific programme of work [26]. In addition, it can be difficult to establish what would have occurred in the absence of the research under study (the counterfactual) [27]; there is no ‘control’ state against which to compare outcomes [18]. Examining the context in which policy change occurs therefore becomes important [27, 28]; however, forward tracing assessments have been criticised for failing to address the complexities involved in policy decision-making [17]. Forward tracing assessments are also subject to limitations associated with the timing of assessment, because research impacts can take a long time to occur [25]. On the other hand, backward tracing approaches are said to be better suited to understanding the extent and processes through which knowledge, including research, influences policy decisions [11], but they cannot always identify the influence of specific research, or the relative degree of influence of a particular study, and their other potential limitations as measures of research use are not well documented [23, 24, 29].

In this review of the literature, our aim was to document the extent and nature of studies measuring the impacts of health research on policy and compare forward and backward tracing approaches to assessment. Firstly, we documented the characteristics of empirical studies drawn from two streams of empirical research, namely studies measuring the impacts of health research on policy and studies examining research utilisation in health policy decisions. Secondly, a descriptive framework (Fig. 1) was developed to allow structured comparisons between assessments to be made. This framework incorporated both the key elements identified in studies described in previous reviews (Box 1) and those emerging from an iterative analysis of studies included in the current review. Thirdly, based on reported strengths and limitations of the approaches described, we considered what may be gained or lost where different approaches were chosen, and particularly how the direction of assessment may influence the assessment findings. Finally, we sought to identify gaps in the existing literature and areas that warranted further investigation. To our knowledge, this paper is the first to systematically analyse these two streams of research in relation to each other.

Fig. 1

Descriptive framework for research impact and research use assessments

Methods

This review of the literature was completed in December 2016, and examines peer-reviewed empirical studies published between 1995 and 2016 in English that measured the impacts of health research on policy and research use in health policy decisions. We also examined existing reviews on these topics. Our review questions were as follows:

  • What are the core elements of empirical research impact or research use assessments?

  • What is the extent and nature of empirical peer-reviewed research in this area of study?

  • What are the advantages and disadvantages of different approaches to assessment?

  • Where do the gaps in the existing literature lie and which areas warrant further investigation?

Search strategy

The review utilised an iterative process that included several steps. We initially searched electronic databases (Medline, CINAHL, EBM reviews, Embase, Google Scholar) using keyword search terms derived from research impact assessment reviews and empirical studies known to the authors (e.g. research impact, impact assessment, investment return, research payback, payback model, payback framework, societal impact, policy impact, research benefit, health research). Based on the abstracts from this search, we compiled all empirical studies that reported policy impacts in relation to health research, or research use in relation to health policy outcomes, and reviews reporting methods of research impact assessment.

After completing the above process, it was clear that the initial search had identified papers starting with research and measuring its impacts, but had been less successful in identifying papers starting with policy outcomes and measuring research use. Another search of key databases was therefore conducted using ‘research use’ search terms derived from the studies already identified on this topic (e.g. research use, research translation, evidence use, research utilisation, research evidence, evidence-based policy, knowledge utilisation, health policy). This resulted in further relevant studies being added to our list.

The reference lists of all included studies were then scanned to identify other relevant papers not found during the database search. The full texts of included studies were read to ensure they met the inclusion/exclusion criteria for the review. The search process is shown in Fig. 2.

Fig. 2

Flow diagram of literature search process

Inclusion criteria

In relation to our analysis of empirical studies, we only included studies where the research or health policy outcome under study was clearly defined. We excluded studies that did not report on health research or a health policy outcome. Studies that did not report methods in conjunction with results of impact or research use assessments were also excluded. In addition, we excluded studies reporting opinions about research impact or use in general, rather than measuring the impact of specific research or research use in relation to specific policy outcomes. Finally, we excluded studies examining strategies or interventions to improve research translation. As our aim was to define and report key characteristics of studies rather than synthesise study findings, studies were not included/excluded based on study quality.

Data extraction, development of the descriptive framework and categorisation of empirical studies

To analyse the included studies, we prepared a data extraction tool incorporating the key elements described in existing reviews (Box 1). The initial categories were progressively refined during the data extraction and analysis process and integrated into a comprehensive ‘research impact and research use’ assessment framework, the elements of which are described in the results below. Data extraction was thus iterative, continuing until information from all the empirical studies had been documented against the final framework. Studies were categorised according to the key elements of the framework based on statements made by the study authors, where possible. Where judgements were required, categorisations were discussed by the authors of this paper until consensus was reached.
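The categorisation logic described above can be sketched as a structured record. This is an illustrative sketch only: the field names and example values below are hypothetical paraphrases of the seven framework elements, not the review's actual extraction tool.

```python
from dataclasses import dataclass

@dataclass
class StudyRecord:
    """One extracted study, mirroring the seven framework elements."""
    assessment_reason: str   # e.g. "understand policy processes"
    direction: str           # "forward" or "backward"
    starting_point: str      # "research" or "policy outcome"
    unit_of_analysis: str    # e.g. "research project", "policy process"
    methods: list            # e.g. ["interviews", "document analysis"]
    endpoint: str            # "policy impact" or "research use"
    outcomes: list           # e.g. ["conceptual use"]

def classify_stream(record: StudyRecord) -> str:
    """Categorise a study by its starting point, as the review does."""
    if record.starting_point == "research":
        return "research impact assessment"
    if record.starting_point == "policy outcome":
        return "research use assessment"
    return "intersecting study"

example = StudyRecord(
    assessment_reason="understand policy processes",
    direction="backward",
    starting_point="policy outcome",
    unit_of_analysis="policy process",
    methods=["interviews", "document analysis"],
    endpoint="research use",
    outcomes=["conceptual use"],
)
print(classify_stream(example))  # research use assessment
```

Encoding each study this way makes the review's central design choice explicit: the starting point alone determines the stream, while the remaining elements describe how the assessment was conducted.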

Results

Literature search

An initial review of abstracts identified through the electronic databases against the inclusion criteria yielded 137 papers, 34 of which were excluded after full text review. Searches of the reference lists of included papers identified a further 60 studies (Fig. 2). The final number of papers included in this review was 163; 144 were empirical studies reporting methods and findings of research use or research impact assessments (included in the results that follow) and 19 were reviews of the literature. A full list of the included empirical studies is provided in Additional file 1. To help readers identify studies cited as examples in the results section, the numbers given in subscripted brackets match the reference numbers in Additional file 1.

Analysis of empirical studies (n = 144)

Overview of the descriptive framework and included studies

Figure 1 provides a descriptive representation of the empirical studies included in this review. It depicts the two parallel streams of research, namely studies concerned with measuring and understanding the ‘impacts of research’ (research impact assessments) and those concerned with measuring and understanding ‘research use’ in policy decisions (research use assessments). The study starting point defined whether a study was categorised as research impact or research use – research impact assessments usually started with research and traced forward to identify the benefits arising from that research; conversely, research use assessments usually started with a policy outcome and traced backwards to understand whether and how research had been used. There was a small group of ‘intersecting studies’ that drew on elements from both streams of research, and where, occasionally, research impact assessments used backward tracing approaches and research use assessments used forward tracing approaches. Assessments in both streams were based on similar theoretical concepts, utilised similar methods and had similar assessment end-points (i.e. they reported on similar outcomes). However, outcomes were reported from different perspectives depending on the direction of assessment chosen. The unit of analysis utilised in assessments varied across studies overall, ranging from a narrow focus on specific research projects or policy outputs to a broader focus on larger programmes of research or policy processes.

Below, we describe the number and nature of the included studies according to the key elements of the framework. Table 1 provides the number of studies categorised by type of assessment, direction of assessment, unit of analysis and methods of assessment. Illustrative examples of the different types of assessments are provided in Table 2. Overall, we identified a similar number of research impact (n = 68; Table 1) and research use assessments (n = 67; Table 1), as well as a small group of intersecting studies, drawing on elements of both streams of research (n = 9; Table 1).

Table 1 Descriptive characteristics of included studies (n = 144)
Table 2 Illustrative examples of forward and backward tracing assessments, and assessments utilising both approaches

The studies originated from 44 different countries. Three quarters (76%; n = 109) were from high-income countries, predominantly the United Kingdom (n = 38), the United States of America (n = 16), Australia (n = 15) and Canada (n = 10). In low- and middle-income countries, more research use studies than research impact studies were completed (n = 22 vs. n = 7). Most studies (81%; n = 116) were published in the last decade (2006–2016). A wide variety of research types and policy decisions were studied, as summarised in Boxes 2 and 3.

Core elements of the descriptive framework

Key drivers and reasons for assessment

The two streams of research were driven by different factors and conducted for different but related reasons. Research impact assessments were primarily driven by pressures to demonstrate that spending money on research is an appropriate use of scarce resources, while research use assessments were primarily driven by a desire to understand and improve the use of research in policy decisions so that health outcomes could be improved. Research impact assessments were most commonly conducted to demonstrate the value of research beyond the academic setting, to identify factors associated with research impact and develop impact assessment methods (Table 3; Fig. 1). Research use assessments were most commonly conducted to understand policy processes and the use of research within them (Table 3; Fig. 1). Intersecting studies were influenced by factors consistent with both streams of research.

Table 3 Key drivers and reasons for assessments

Direction of assessment

As depicted in Fig. 1, research impact assessments most commonly used forward tracing approaches (n = 61, Table 1; Examples A, F, Table 2), while research use assessments most commonly used backward tracing approaches (n = 63; Examples I–Q, Table 2). However, several groups of studies deviated from this pattern. Firstly, a few research impact assessments used a backward tracing approach (n = 7; Table 1). For example, some started with a group of related policy documents and traced backwards from these to identify the use of specific research outputs as an indication of research impact [26, 64, 110], while others traced the origins of research (country of origin, research funder, type of research) cited in clinical guidelines to identify research that had been impactful [41, 70, 76, 77] (Example H, Table 2). These backward tracing studies involved a systematic analysis of a group of policy documents from a given policy area, rather than examining single policy documents to corroborate claimed impacts, as was common for forward tracing research impact assessments.

Secondly, there were a few studies where the reasons for assessment were more consistent with the research use group, but a forward tracing approach was used. These studies traced forward from specific research outputs but assessed whether and how these had been used by a specific policy community that had commissioned, or was mandated to consider, the research under study (n = 4, Table 1; Example G, Table 2) [20, 22, 23, 30]. Individual research user and agency characteristics associated with research use were assessed, as well as the characteristics of the research itself. Only policy-makers were interviewed or surveyed, which was unusual for forward tracing assessments, and some assessments involved an element of evaluation or audit of the policy-makers’ responsibilities to consider evidence.

Finally, there was a group of studies sitting at the intersection of the two streams of research that utilised a combination of forward and backward tracing approaches (n = 9; Table 1). In some cases, the study authors were explicit about their intention to use both approaches, aiming to triangulate the resulting data into an overall picture of research impact. For example, one study traced forward from a programme of research to identify impacts while also analysing a group of policy documents to identify how the programme had influenced policy [11]; another traced forward from the activities of researchers to identify impacts while also analysing a policy process linked to the research [88] (Examples R, S, Table 2). These studies drew mainly on elements consistent with the research impact literature. Other intersecting studies were more difficult to classify, as they focussed on the interface between a specific research output and a specific policy outcome, examining both the research production and dissemination process and the policy decision-making process (Example T, Table 2) [19, 31, 63, 127].

Unit of analysis

The unit of analysis for studies starting with research ranged from discrete research projects with a defined start, end-point and limited set of findings, to increasingly larger programmes of work, representing multiple research studies linked through the researcher, research topic area or research funder. Thus, we classified studies (Fig. 1; Table 4) in terms of whether the unit of analysis was a research project (Examples A, B, Table 2), programme of research (Example D, Table 2), research centre (Example E, Table 2), or portfolio of research (Example F, Table 2). Research projects were the most common unit of analysis (n = 52; Table 1).

The unit of analysis for assessments starting with a policy outcome included (Fig. 1; Table 4) a group of policy documents or process for developing a specific document/s (Examples H–J, Table 2), decision-making committees where the committee itself and the decisions made over a period of time were under study (Examples K–L, Table 2), and policy processes where a series of events and debate over time, culminating in a decision to implement or reject a course of policy action, was studied (Examples M–Q, Table 2). Policy processes were the most common unit of analysis (n = 49; Table 1).

Table 4 Units of analyses for included studies

Several studies compared the impacts of different types of research grants (e.g. project, fellowship, research centre grants) and thus included more than one unit of analysis [14, 55, 141]. The same was true for studies adopting both forward and backwards tracing approaches, where the starting point for assessment was both a specific research project or programme and a specific policy outcome or process [11, 19, 31, 63, 88, 127].

Theories and conceptual models underpinning assessments

It was common for studies in our sample to draw on existing models and theories of research use and policy-making [1, 11, 12]. These were used to form conceptual frameworks and organise assessments and discussions around the nature of research use or impacts identified in assessments. As well as drawing on this broad base of literature, the studies often utilised a specific framework to structure data collection, analysis and to facilitate comparisons between cases (Fig. 1). Specific frameworks were more often utilised in research impact assessments than research use assessments (n = 46 vs. n = 23, respectively; Table 1).

The frameworks in the set of research impact assessments most commonly provided a structure for examining multiple categories or types of impact, and sometimes included more detailed case studies of how and why impacts occurred. The Payback Framework [30] was the most commonly used framework of this nature (n = 23). The elements and categories of the Payback Framework seek to capture the diverse ways in which impacts arise, including the interactions between researchers and end-users across different stages of the research process and feedback loops connecting stages [6, 20]. Other similar frameworks included the Research Impact Framework [31], the Canadian Academy of Health Sciences impact framework [32] or frameworks that combined these existing approaches [11, 16, 85]. In addition, some studies used frameworks based on a logic model approach to describe the intended outputs, outcomes and impacts of a specific portfolio of research, sometimes including multiple categories of impact [44, 78, 98, 114] or focussing on policy impacts alone [99, 100]. Finally, there were several examples of studies utilising frameworks based on contribution analysis, an approach to exploring cause and effect by assessing the contribution a programme is making to observed results [33]. Such frameworks emphasise the networks and relationships associated with research production and focus on the processes and pathways that lead to impact rather than outcomes [27, 33]. 
Examples included frameworks that prompted the evaluation of research dissemination activities to measure changes in awareness, knowledge, attitudes and behaviours of target audiences as precursors to impact [88]; models that focussed on actor scenarios or productive interactions prompting the examination of the pathways through which actors linked to research, and distal to it, took up the research findings to describe impact (Contribution Mapping [34]) [57, 69, 87]; and frameworks that prompted an analysis of network interactions and flows of knowledge between knowledge producers and users [84]. Most frameworks utilised in research impact assessments depicted a linear relationship between research outputs and impacts (that is, simple or direct links from research to policy), albeit with feedback loops between policy change and knowledge production included. Research impact studies rarely utilised frameworks depicting the relationship between contextual factors and research use [84, 133].

By contrast, contextual factors featured strongly in the models and frameworks utilised in the research use assessments examined. Research use frameworks most commonly provided a mechanism for understanding how issues entered the policy agenda or how policy decisions were made. Dynamic and multidirectional interactions occurring between the policy context, actors and the evidence were emphasised, thus providing a structure for examining the factors that were influential in the policy process. Many examples were utilised, including Kingdon’s Multiple Streams Theory [35], Walt and Gilson’s Health Policy Analysis Triangle [36], Dobrow’s framework for context-based evidence-based decision-making [37], Lomas’s framework for contextual influences on the decision-making process [26], and the Overseas Development Institute’s Research and Policy in Development (RAPID) framework [38]. In addition, models provided a structure for analysis of different stages of the policy process [2, 61, 122] or according to different types of research use (e.g. conceptual, symbolic, instrumental research use [19], research use continuum [11]) [4, 61]. Finally, evidence typologies were sometimes used to structure assessments, so the use of research evidence could be compared to the use of information from other sources [9, 143].

Intersecting studies utilised frameworks that focussed on the research–policy interface depicting the links or interactions occurring between researchers and policy-makers during research and policy development [19, 124, 129]. There were also examples of models depicting channels for knowledge diffusion [63, 127].

Methods of assessment

Data sources

We found that similar data sources were used in both research impact and research use assessments (Fig. 1), including interviews, surveys, policy documents, focus groups/discussion groups, expert panels, literature reviews, media sources, direct observation and bibliometric data. Research impact assessments also utilised healthcare administrative data [64, 78, 119] and routinely collected research impact data (e.g. ResearchFish [39] [43], Altmetrics [40] [12], UK Research Excellence Framework case studies [41] [42]).

Triangulation of data and case studies

Most studies triangulated data from multiple sources, often in the form of case studies (Table 1). Research use assessments were more likely to describe single case studies, while multiple case study designs were more common in research impact assessments (Table 1). Data were most commonly sourced from a combination of interviews, surveys and document analysis for research impact assessments, and from interviews and document analysis for research use assessments. Research impact assessments often combined a larger survey with a smaller number of case studies to obtain breadth as well as depth of information. Surveys were rarely used in research use assessments [22, 23, 33, 74, 135].

Cases for both research impact and research use studies were most often purposively selected – on the basis of likely impacts for research impact assessments, and on the basis of known research use or the ability to illustrate a point (e.g. delay in research uptake, influence of various actors) for research use assessments. Exceptions included assessments that adopted a whole-of-sample [16, 85, 95, 115] or stratified sampling approach [37, 54, 66, 74, 107, 140].

Scoring of impacts and research use

In some research impact and research use assessments, a process for scoring impacts or research use was utilised, usually to compare cases [2, 7, 10, 14, 16, 17, 37, 42, 50, 54, 62, 64, 79, 90, 91, 97, 107, 113, 115, 117, 140, 141]. Examples of the scoring criteria used for each group are provided in Table 5.
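A scoring process of this kind can be sketched as an ordinal scale applied to each case. The levels and scores below are invented for illustration and are not drawn from Table 5 or any included study; they show only the general mechanism of comparing cases by their highest documented level of impact or use.

```python
# Hypothetical ordinal levels of policy impact/research use, for illustration.
IMPACT_LEVELS = {
    "no documented impact": 0,
    "research cited in policy document": 1,
    "research informed policy debate": 2,
    "research directly shaped policy decision": 3,
}

def score_case(observations: list) -> int:
    """Score a case by the highest level of impact/use documented for it."""
    return max((IMPACT_LEVELS[obs] for obs in observations), default=0)

case_a = ["research cited in policy document", "research informed policy debate"]
case_b = ["research directly shaped policy decision"]
print(score_case(case_a), score_case(case_b))  # 2 3
```

Scoring the highest documented level, rather than summing observations, avoids inflating the score of cases that were simply documented in more detail; either choice is a defensible design decision that a real scoring scheme would need to state explicitly.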

Table 5 Scoring criteria utilised in research impact and research use assessments

Study participants

Where respondents were surveyed or interviewed, research impact assessments tended to focus on the perspectives of researchers (Table 1), most commonly questioning researchers about the impacts of their own research and end-users directly linked to the research or researchers under study. Research use assessments tended to draw on the views of a wider range of stakeholders (Table 1), and where researchers were interviewed, they were often interviewed as experts/advisors rather than about the role played by their own research.

Data analysis methods used

As most of the data collected in both research impact and research use studies was qualitative in nature, qualitative methods of analysis or basic descriptive statistics were most commonly used. However, some studies employed more complex statistical analyses of quantitative data; for example, logistic or linear regression analyses to determine which variables were associated with research impact [73, 140], research use by policy-makers [20, 22, 23] or policy decision-making [17, 79]. In addition, one study used network analysis to explore the nature and structure of interactions and relationships between the actors involved in policy networks [118].
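The simplest form of such a network analysis can be sketched as degree centrality over a set of actor ties. The actors and ties below are invented for illustration and do not come from the cited study; the sketch shows only how counting connections identifies the most connected actor in a policy network.

```python
from collections import defaultdict

# Hypothetical ties between actors in a policy network (illustration only).
ties = [
    ("research institute", "health ministry"),
    ("research institute", "advocacy group"),
    ("health ministry", "advocacy group"),
    ("health ministry", "donor agency"),
]

def degree_centrality(edges):
    """Count ties per actor: a crude measure of centrality in the network."""
    degree = defaultdict(int)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return dict(degree)

central = degree_centrality(ties)
print(max(central, key=central.get))  # health ministry
```

Real policy network analyses typically go further (e.g. betweenness centrality, tie strength, directionality of knowledge flows), but all build on this basic representation of actors and relationships.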

Retrospective versus prospective data collection

Most assessments collected data retrospectively – that is, some time (2 to 20 years) after the research findings were available for forward tracing assessments, or after a policy outcome had occurred for backward tracing assessments. Prospective data collection – during and immediately after research completion for forward tracing assessments, or during policy development for backward tracing assessments – was rare [19, 49, 103, 137].

End-point for assessment

Depending on the starting point for assessment, the end-point of assessment was to describe either policy impact or research use (Fig. 1). Intersecting studies reported how specific research was used in relation to a specific policy outcome. Definitions of what constituted a ‘policy impact’ or ‘research’ in the assessment differed between studies.

Definitions of policy impact

For studies commencing with research, not all studies explicitly defined what was considered a policy impact; instead, they described any changes attributable to the research. Where definitions were provided, some required evidence of the explicit application of research in policy decisions; that is, the research directly influenced the policy outcome in some way [16]. There were also examples where incremental steps on the pathway to policy impact, such as changes in policy-makers’ awareness and knowledge, or process measures (e.g. interaction (participation on an advisory committee) or dissemination (presentation of research findings to policy-makers)) [88], were counted as impacts. Here, such process measures were seen as “a more practical way of looking at how research interacts with other drivers to create change” rather than looking “for examples of types of outputs or benefits to specific sectors” ([42], p. 12). In addition, a shift in language and focus from ‘attribution’ to ‘contribution’ was promoted by some authors to acknowledge that research was only one factor amongst many influencing outcomes [57, 69, 87, 88]. Some studies reported policy impacts alone, while others reported multiple categories of impact. Where multiple categories were reported, impacts were not always categorised in the same way, so that what was considered a policy impact in one study would have fallen under a different category in another (e.g. policy impact vs. health services impact) [16, 114, 140].

Definitions of research

Conversely, for studies commencing with a policy outcome, not all studies provided a definition of what constituted ‘research’ in the assessment; some instead summarised the relevant scientific literature to provide an overview of the research available to policy-makers [8, 13, 18]. Where definitions were provided, some studies used narrower definitions of research, such as ‘citable’ academic research only [74], as opposed to broader definitions in which ‘any data’ that played a role in shaping or driving policy change counted as research [4]. Other authors defined a specific type of research to be identified in the assessment (e.g. economic analyses [49, 103, 137], research on patient preferences [130], evidence of effect and efficiency [37, 101]). Most authors of research use studies explicitly recognised that research was only one source of information considered by policy-makers. Some studies explored the use of various types of information (e.g. contextual socio-political information, expert knowledge and opinion, policy audits, syntheses, reviews, economic analyses [9]), as well as research (e.g. scientific literature [9]). In addition, some studies included research in a broader definition of ‘evidence’ alongside other information sources (e.g. counting research study results, findings from monitoring and evaluation studies and population-based surveys, Ministry of Health reports, community complaints and clinical observations as ‘evidence’ used during policy-making [90]). Finally, there were examples of research being distinguished in terms of local and international sources [8, 61].

Common outcomes reported

Despite their differing trajectories, research impact and research use assessments reported similar types of outcomes (Fig. 1), although the discussion was framed in different ways. For example, qualitative methods were utilised in both research impact and research use assessments to describe the impacts that occurred or how research had been used. Authors from both groups described outcomes in terms of conceptual, symbolic or instrumental uses of research [4, 19, 20, 44, 61, 72, 74, 129, 133, 143], direct/explicit and indirect impacts/uses of research [42, 74, 87], or research use according to the stage of the policy process at which the use occurred [61, 75, 85, 100, 122] (Box 4). Other assessments adopted a quantitative approach, summing impacts or research use across units of analysis to produce an overall measure of impact for a research portfolio or area of research [50, 55, 73, 141], or for policy domains as a benchmark of research use for that policy area [40, 68, 143].

In tackling the question of what is needed to facilitate research utilisation and research impact, both research impact and research use studies reported on the processes and pathways through which research was utilised and the factors associated with research use. Studies from both groups also focussed on the roles played by various actors in the policy process. Research impact assessments tended to focus on research and researchers as facilitators of impact, commonly examining the dissemination and engagement activities, networks and other characteristics of specific researchers when considering impact pathways and factors associated with impact. Study participants were usually linked in some way to the research or researchers under study, and commonly provided a perspective on the context surrounding the uptake of that research, rather than being asked about the policy context more broadly (e.g. sociocultural, political and economic factors, and other information sources influencing the policy process).

In contrast, research use assessments generally examined the roles played by a wide range of actors in the policy process (e.g. politicians, policy-makers, service providers, donors, interest groups, communities, researchers) and links across the research–policy interface (e.g. networks, information exchange activities, capacity-building activities, research dissemination activities, partnerships). Variables associated with policy-makers and policy organisations (e.g. culture, ideologies, interests, beliefs, experience), as well as with the research and researchers, were examined. In addition, these assessments tended to adopt a broader approach when examining the policy context, considering research alongside a range of other influencing factors.

Discussion

In this paper, we provide a framework for categorising the key elements of two parallel and sometimes intersecting streams of research – studies assessing the policy impacts of research and studies assessing research use in policy processes. Within the studies examined, research impact assessments were primarily conducted to demonstrate the value of research in terms of producing impacts beyond the academic setting. This information was important for grant funding bodies seeking to account for research expenditure. As such, research impact assessments focussed on research, identifying impacts that could be attributed to specific research projects or programmes and the mechanisms or factors associated with achieving these impacts. Such studies predominantly used forward tracing approaches, where research projects (the most common unit of grant funding) were the unit of analysis. Research use assessments, on the other hand, were conducted with a view to improving policy outcomes by identifying ways in which research use could be enhanced. Here, the assessments most commonly focussed on understanding policy processes, whether and how research was used, and the mechanisms and factors that facilitated research use. Thus, backward tracing approaches predominated, starting with a specific policy outcome and utilising a policy analysis frame to consider the influence of research alongside other factors. The approaches to assessment influenced the nature of the findings, so their respective strengths and limitations should be considered.

Strengths and limitations of approaches

The main difference between the research impact and research use studies we considered was the relative focus on the influence of ‘specific research’ in relation to a policy outcome. Research impact assessments focused on specific pieces or bodies of research so that observed effects could be linked to grant funding, researchers or research groups [17]. While research projects were most commonly assessed, we encountered examples where the unit of analysis was broadened to larger programmes of research, partly to overcome the problems of attributing impacts to single projects within a researcher’s larger body of work. However, this did not overcome the problem of separating the influence of this research from that conducted by others in the same field [46, 47]. Broadening the unit of analysis also created problems in defining the scope of assessment, in terms of where the programme of research started and ended, as research generally builds on earlier work, both a group’s own and that of others [46, 47]. In addition, the larger the programme of research under study, the more diffuse its impacts became, making them more difficult to identify and attribute to individuals or groups of researchers, let alone to funding bodies [48, 49, 50].

The research use assessments, on the other hand, tended to examine the role played by research in more general terms rather than attempting to determine the contribution made by specific research projects or programmes. Indeed, such assessments often highlighted the relationships between related or conflicting programmes of research, local and international research, and other sources of information (e.g. expert opinion, practice-based knowledge). There were also examples of research use assessments that examined the use of ‘evidence’ without separating the influence of research from other information sources (e.g. scientific research, population surveys, administrative data and reports, community complaints, clinical/expert opinion). These differences raise the question of whether a single research project is a valid unit of analysis [17, 26] and, if not, what unit of analysis is most appropriate. While it might be useful to focus on specific research for research accountability purposes and ease of measurement, the use of information assimilated from multiple sources is consistently reported as closer to the reality of how knowledge enters the policy debate and contributes to policy outcomes [45].

Different approaches to assessment will also give rise to a differential emphasis on the role of research in policy decisions and the relevance of context [27, 42]. The research impact assessments we examined tended to focus on why impacts occurred (or did not occur) and the contextual factors associated with research uptake, rather than adopting a wider frame to examine other factors and information sources that may have been influential. Focusing on research uptake may mean that details of the policy story are missed and the influence of the research overstated [17], whereas research use assessments commonly sought to understand the relationship between the various factors involved in decision-making and the role played by research within this mix. Tracing backwards to examine the policy process in this way is likely to provide a more accurate picture of research influence [11]. However, this depended on the unit of policy analysis chosen for assessment. As policy decisions often build on previous policy decisions, which in turn may have been influenced by research [29], focussing on a narrow aspect of the policy process as the unit of analysis may not capture all of the research considered in reaching an outcome, or the full range of factors that may have influenced the policy decision [51]. In particular, policy documents represent the outputs of policy discussions, or the policy position at a single point in time, so examining research use at this level may mean that research use is missed, or that undue emphasis is placed on the influence of cited research [51].

As well as the relative emphasis placed on research, the assessment approach itself may determine the type and nature of the impacts or research use identified. For example, it was common for the research impact assessments we examined to seek evidence linking the research in question to the policy outcome (e.g. corroborating testimony from policy-makers or evidence in policy documents). Studies also sometimes sought to quantify the strength of this relationship, or the relative contribution of the research in relation to other factors, by subjectively scoring the extent of research influence on the policy outcome. This focus on measurable links between research and policy that can be proven meant that such assessments were more likely to identify instances where research had been directly applied in policy discussions (instrumental uses) [52]. In addition, the research impact assessments we examined most commonly utilised frameworks suggesting direct and linear links between research and policy (albeit with feedback loops included), thus potentially overlooking indirect or conceptual uses. Finding evidence for indirect influences, such as changes in awareness and understanding of an issue, may be challenging [27]. To better capture indirect as well as direct impacts, some authors propose that research impact should be measured in terms of processes (e.g. interactions, dissemination activities) and the stages of research adoption amongst end-users/stakeholders resulting from these processes (e.g. changes in awareness, understanding, attitudes/perceptions), rather than through outcome-based modes of impact evaluation [27, 34, 42]. This way of thinking about impact helps to identify changes that occur early in the impact pathway and can establish clear links between the research and the contribution it has made [42]; however, it may emphasise ‘potential’ rather than actual impact. It can be argued that actual impact only occurs if a stakeholder uses or applies the research results within a policy debate (e.g. to inform or to encourage/discourage change); that is, if there has been a behavioural change because of the knowledge gained [53].

For research use assessments, the nature of research use reported may vary depending on the type of policy process considered [29]. The studies we examined that assessed specific and discrete policy decisions (for example, committee decisions focussed on making recommendations for practice) tended to emphasise instrumental research use, as there was a requirement or mandate for research to be directly applied in the decision-making process. Studies considering broader policy processes, where events over time were examined, had the potential to identify the many ways in which research could be utilised. The conceptual models adopted in these assessments provided a mechanism for considering how issues entered the policy agenda or how policy decisions were made, without a presumption that research had made a direct contribution to the policy outcome. However, assessments of this nature highlighted the difficulties of determining the influence of research on tacit knowledge, where research use lies within other types of information (e.g. expert knowledge) and stakeholder positions [29]. For example, the research use assessments we examined commonly investigated the influence of other information sources and stakeholders’ positions on policy decisions, but stopped short of investigating whether these sources of influence were themselves informed by research [29]. Identifying hidden or unconscious uses of research will always be challenging for both research use and research impact assessments.

Not only does the overall choice of approach influence the assessment findings, but so do specific methodological choices. Some methodological issues were common to both research impact and research use assessments, for example, the timing of assessment to best capture research impacts or use. In addition, purposeful sampling and the number of case studies conducted influenced how predictive or transferable the assessment findings were [17, 24, 54]. There were also tensions within both streams between the value of utilising the most comprehensive and robust methods of assessment possible and the resources these methods require. Case studies, including interviews with study participants, were considered the gold standard method of assessment, but are resource intensive to conduct [55]. Policy case studies were particularly time and resource intensive, requiring careful consideration of historical and contextual influences, hence the predominance of single policy case studies amongst the research use assessments we examined [24]. On the research impact side, methods utilising automated data extraction from policy documents and electronic surveys of researchers have been introduced [6, 56]. Such methods are less resource intensive and offer greater potential for implementation on a wide scale, but there is still limited information available about their validity and reliability [5, 6, 57, 58]. There were also instances where methodological choices differed between the two streams of research, influencing the outcomes of assessments from each group. For example, researchers or end-users directly associated with the research project or programme under study were most commonly interviewed or surveyed in the research impact assessments, whereas the research use assessments we examined often involved a broader cross-section of policy actors and researchers as study participants. These differences provide different perspectives on the role played by research; thus, the method influences the findings.

In essence, the differences between forward and backward tracing assessments highlighted above illustrate how the choices made in assessments shape the phenomenon they aim to examine. In this, they resemble other types of evaluation: the assessment process illuminates a particular pathway, perspective or outcome, while a different assessment process would reveal a different one.

Possibilities for further research

It is likely that the pathways to impact, and the degree to which research will be utilised, differ for different types of research and policy areas [1, 29]. Understanding these differences may help researchers and policy-makers to set appropriate goals in terms of research impact and use, as well as to identify the most appropriate pathways through which translation could be achieved. However, we identified only a small number of studies comparing the impacts of different types of research (and then only biomedical versus clinical research) or differences in research use according to policy area. Further studies adopting cross-case comparison approaches to investigate these issues would be useful.

In this review, we encountered a lack of consistency in the definitions and terminology applied across the included studies. This was the case for describing the type of research being assessed and what constituted a policy impact in research impact assessments, as well as for defining and categorising forms of evidence and types of policies in research use assessments. Different conclusions about the extent to which policy-making is informed by research may arise from different views about what constitutes research in research use assessments or, conversely, policy impact in research impact assessments [29]. Moving towards the application of consistent definitions across this area of study would therefore be beneficial [14]. It is also important that authors of future studies are clear about the definitions and ways of thinking about research impact/use applied in their assessments, so that comparisons between study findings can be made and limitations made explicit [14].

The two streams of research discussed in this review have developed separately over a similar timeframe. More recently, some studies have drawn on elements from both streams. Some of these studies are exemplary in many ways, tracing forward from research and backwards from policy to produce case studies that address common limitations in novel and rigorous ways. There is scope for more research impact assessments to borrow from backwards tracing approaches in this way. In addition, very few studies utilising network analyses or applying systems-based theories were identified in this review. Such approaches may also provide a means of exploring these issues [52].

Most of the studies included in this review appeared to be initiated by researchers for researchers, or by research funding bodies. Researchers are now being asked to routinely track the impacts of their own research [6]. This focus on research and researchers places a one-sided emphasis on the role of researchers in getting research into policy. Reducing waste from research also requires action from policy-makers, yet very few studies investigated the degree to which the decision-making environment supported research use. To address this imbalance, there is scope for policy agencies to develop mechanisms to assess their own requirements and practices for considering research during policy deliberations, as well as to investigate ways of routinely monitoring research use.

Finally, there were very few examples of prospective approaches being utilised in either stream of research examined in this review. These approaches have disadvantages: they may not be practical in terms of the resources required to trace research or policy processes over extended periods, it can be difficult to obtain permission to directly observe policy processes, and respondents may not be as forthcoming about factors of influence at the time they are occurring (e.g. political debates) [15]. However, prospective approaches to assessment may prompt researchers and end-users to think about research translation from the outset of a research project or policy process, and provide opportunities for appropriate and tailored translational interventions to be embedded into work processes [59]. Routine data collection and, in particular, process metrics related to research translation activities could be used to provide feedback about areas requiring attention in order to improve research uptake [59]. With the advent of routine data collection systems, the potential advantages of this approach could be explored in future studies.

Limitations of this review

This review only included English language publications, and therefore studies from non-English speaking countries will be under-represented. This may partly explain the high proportion of included studies conducted in high-income countries. The studies included in this review are likely to be broadly representative of the types of studies conducted to date. However, due to our exclusion criteria, we may have missed examples of studies published only in the grey literature or methodological approaches that have not been empirically tested. For example, we identified only a small number of peer-reviewed publications where a programme of research was the unit of analysis. The preparation of case studies based on a researcher’s programme of research was adopted in both the Australian Research Quality Framework [60] and, more recently, the UK Research Excellence Framework [41]; reports describing the application and findings of this approach are available in the grey literature [41, 61]. Finally, the first author managed the literature search and inclusion process, as well as extracting primary data from the included articles. This may have introduced some bias, although the other authors of this review were consulted and agreement was reached on ambiguous cases. Study authors did not always explicitly describe their studies in terms of the characteristics included in our descriptive framework, and some studies required judgements to be made regarding classification. Our findings in terms of the number of studies within each category should therefore be considered indicative. However, this issue highlights the need for a framework, such as the one we propose, to facilitate clearer communication about what, in fact, studies were seeking to achieve and how they did it.

Conclusions

Herein, we have defined the key characteristics of two research streams with the aim of facilitating structured comparisons between studies. In many ways, the separate and distinct development of these two research streams, and their different approach to examining the issues, reflect the much-discussed separation of the two domains of research and policy. The descriptive framework introduced and discussed in this paper provides a ‘missing link’, showing how these two streams intersect, compare and differ. Our framework offers an integrated perspective and analysis, and can be used by researchers to identify where their own research fits within this field of study and to more clearly communicate what is being assessed, how this is done and the limitations of these choices.

We have shown that the approach to assessment can determine the perceived influence of research on policy, the nature of this influence and our understanding of the relationship between research and policy. As such, the two approaches, forward and backward tracing, essentially tell different stories about how (if at all) research-based policy change happens. In some ways, the assessments construct the phenomenon they aim to measure. For example, forward tracing research impact assessments, with their focus on specific research and the activities of researchers, may emphasise direct influences of research on policy and overstate the influence of research in policy processes. Conversely, research use assessments utilising backwards tracing analyses tend to paint a more complex picture of assimilated knowledge contributing to policy outcomes alongside other influential factors. Combining aspects of the two approaches may provide the best way forward, linking outcomes to specific research while also providing a realistic picture of research influence.

References

1. Hanney SR, Gonzalez-Block MA, Buxton MJ, Kogan M. The utilisation of health research in policy-making: concepts, examples and method of assessment. Health Res Policy Syst. 2003;1:2.

2. Mitton C, Adair CE, McKenzie E, Patten SB, Waye Perry B. Knowledge transfer and exchange: review and synthesis of the literature. Milbank Q. 2007;85:729–68.

3. Chalmers I. Biomedical research: are we getting value for money? Significance. 2006;3:172–5.

4. Martin BR. The Research Excellence Framework and the ‘impact agenda’: are we creating a Frankenstein monster? Res Eval. 2011;20:247–54.

5. Bornmann L. What is societal impact of research and how can it be assessed? A literature survey. J Am Soc Inf Sci Technol. 2013;64:217–33.

6. Greenhalgh T, Raftery J, Hanney S, Glover M. Research impact: a narrative review. BMC Med. 2016;14:78.

7. REF 2014 Key Facts. http://www.ref.ac.uk/2014/media/ref/content/pub/REF%20Brief%20Guide%202014.pdf. Accessed 15 Dec 2016.

8. Orton L, Lloyd-Williams F, Taylor-Robinson D, O'Flaherty M, Capewell S. The use of research evidence in public health decision making processes: systematic review. PLoS One. 2011;6:e21704.

9. Oliver K, Innvar S, Lorenc T, Woodman J, Thomas J. A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Serv Res. 2014;14:2.

10. Innvaer S, Vist G, Trommald M, Oxman A. Health policy-makers' perceptions of their use of evidence: a systematic review. J Health Serv Res Policy. 2002;7:239–44.

11. Nutley S, Walter I, Davies H. Using Evidence: How Research Can Inform Public Services. Bristol: Policy Press at the University of Bristol; 2007.

12. Almeida C, Bascolo E. Use of research results in policy decision-making, formulation, and implementation: a review of the literature. Cad Saude Publica. 2006;22(Suppl):S7–19; discussion S20–33.

13. Liverani M, Hawkins B, Parkhurst JO. Political and institutional influences on the use of evidence in public health policy: a systematic review. PLoS One. 2013;8:e77404.

14. Alla K, Hall WD, Whiteford HA, Head BW, Meurk CS. How do we define the policy impact of public health research? A systematic review. Health Res Policy Syst. 2017;15:84.

15. Lavis J, Ross S, McLeod C, Gildiner A. Measuring the impact of health research. J Health Serv Res Policy. 2003;8:165–70.

16. Boaz A, Fitzpatrick S, Shaw B. Assessing the impact of research on policy: a literature review. Sci Public Policy. 2009;36:255–70.

17. Hanney S, Buxton M, Green C, Coulson D, Raftery J. An assessment of the impact of the NHS Health Technology Assessment Programme. Health Technol Assess. 2007;11(53):1–180.

18. Milat AJ, Bauman AE, Redman S. A narrative review of research impact assessment models and methods. Health Res Policy Syst. 2015;13:18.

19. Raftery J, Hanney S, Greenhalgh T, Glover M, Blatch-Jones A. Models and applications for measuring the impact of health research: update of a systematic review for the Health Technology Assessment programme. Health Technol Assess. 2016;20(76):1–254.

20. Banzi R, Moja L, Pistotti V, Facchini A, Liberati A. Conceptual frameworks and empirical approaches used to assess the impact of health research: an overview of reviews. Health Res Policy Syst. 2011;9:26.

21. Penfield T, Baker MJ, Scoble R, Wykes MC. Assessment, evaluations, and definitions of research impact: a review. Res Eval. 2013;23(1):21–32.

22. Thonon F, Boulkedid R, Delory T, Rousseau S, Saghatchian M, Van Harten W, O'Neill C, Alberti C. Measuring the outcome of biomedical research: a systematic literature review. PLoS One. 2015;10(4):e0122239.

23. Gilson L, Raphaely N. The terrain of health policy analysis in low and middle income countries: a review of published literature 1994–2007. Health Policy Plan. 2008;23:294–307.

24. Walt G, Shiffman J, Schneider H, Murray SF, Brugha R, Gilson L. 'Doing' health policy analysis: methodological and conceptual reflections and challenges. Health Policy Plan. 2008;23:308–17.

25. Frank C, Nason E. Health research: measuring the social, health and economic benefits. Can Med Assoc J. 2009;180:528–34.

26. Lomas J. Connecting research and policy. Can J Policy Res. 2000;1:140–44.

27. Molas-Gallart J, Tang P, Morrow S. Assessing the non-academic impact of grant-funded socio-economic research: results from a pilot study. Res Eval. 2000;9:171–82.

28. Morton S. Creating research impact: the roles of research users in interactive research mobilisation. Evid Policy J Res Debate Pract. 2015;11:35–55.

29. Lavis JN, Ross SE, Hurley JE. Examining the role of health services research in public policymaking. Milbank Q. 2002;80:125–54.

30. Buxton M, Hanney S. How can payback from health services research be assessed? J Health Serv Res Policy. 1996;1:35–43.

31. Kuruvilla S, Mays N, Walt G. Describing the impact of health services and policy research. J Health Serv Res Policy. 2007;12(Suppl 1):23–31.

32. Canadian Academy of Health Sciences. Making an Impact: A Preferred Framework and Indicators to Measure Returns on Investment in Health Research. Ottawa: Panel on Return on Investment in Health Research, Canadian Academy of Health Sciences; 2009.

33. Riley BL, Kernoghan A, Stockton L, Montague S, Yessis J, Willis CD. Using contribution analysis to evaluate the impacts of research on policy: getting to ‘good enough’. Res Eval. 2018;27:16–27.

34. Kok MO, Schuit AJ. Contribution mapping: a method for mapping the contribution of research to enhance its impact. Health Res Policy Syst. 2012;10:21.

35. Kingdon JW. Agendas, Alternatives, and Public Policies. 2nd ed. New York: Longman; 2003.

36. Walt G, Gilson L. Reforming the health sector in developing countries: the central role of policy analysis. Health Policy Plan. 1994;9:353–70.

37. Dobrow MJ, Goel V, Upshur RE. Evidence-based health policy: context and utilisation. Soc Sci Med. 2004;58:207–17.

38. ODI. Briefing Paper: Bridging Research and Policy in International Development – An Analytical and Practical Framework. https://www.odi.org/sites/odi.org.uk/files/odi-assets/publications-opinion-files/198.pdf. Accessed Dec 2016.

39. ResearchFish. www.researchfish.net. Accessed 15 Dec 2016.

40. Altmetric. www.altmetric.com. Accessed 15 Dec 2016.

41. UK REF2014 Case Studies. http://impact.ref.ac.uk/CaseStudies/. Accessed 15 Dec 2016.

42. Morton S. Progressing research impact assessment: a ‘contributions’ approach. Res Eval. 2015;24:405–19.

43. Meagher L, Lyall C, Nutley S. Flows of knowledge, expertise and influence: a method for assessing policy and practice impacts from social science research. Res Eval. 2008;17:163–73.

44. Gilson L, McIntyre D. The interface between research and policy: experience from South Africa. Soc Sci Med. 2008;67:748–59.

45. Smith KE, Katikireddi SV. A glossary of theories for understanding policymaking. J Epidemiol Community Health. 2013;67:198–202.

46. Hanney SR, Home PD, Frame I, Grant J, Green P, Buxton MJ. Identifying the impact of diabetes research. Diabet Med. 2006;23:176–84.

47. Hanney S, Mugford M, Grant J, Buxton M. Assessing the benefits of health research: lessons from research into the use of antenatal corticosteroids for the prevention of neonatal respiratory distress syndrome. Soc Sci Med. 2005;60:937–47.

48. Hanney S, Packwood T, Buxton M. Evaluating the benefits from health research and development centres: a categorization, a model and examples of application. Evaluation. 2000;6:137–60.

49. Orians CE, Abed J, Drew CH, Rose SW, Cohen JH, Phelps J. Scientific and public health impacts of the NIEHS Extramural Asthma Research Program: insights from primary data. Res Eval. 2009;18:375–85.

50. Ottoson JM, Ramirez AG, Green LW, Gallion KJ. Exploring potential research contributions to policy. Am J Prev Med. 2013;44:S282–9.

51. Bunn F, Kendall S. Does nursing research impact on policy? A case study of health visiting research and UK health policy. J Res Nurs. 2011;16:169–91.

52. Greenhalgh T, Fahy N. Research impact in the community-based health sciences: an analysis of 162 case studies from the 2014 UK Research Excellence Framework. BMC Med. 2015;13:232.

53. Samuel GN, Derrick GE. Societal impact evaluation: exploring evaluator perceptions of the characterization of impact under the REF2014. Res Eval. 2015;24:229–41.

54. Tulloch O, Mayaud P, Adu-Sarkodie Y, Opoku BK, Lithur NO, Sickle E, Delany-Moretlwe S, Wambura M, Changalucha J, Theobald S. Using research to influence sexual and reproductive health practice and implementation in sub-Saharan Africa: a case-study analysis. Health Res Policy Syst. 2011;9(Suppl 1):S10.

55. Cohen G, Schroeder J, Newson R, King L, Rychetnik L, Milat AJ, Bauman AE, Redman S, Chapman S. Does health intervention research have real world policy and practice impacts: testing a new impact assessment tool. Health Res Policy Syst. 2015;13:3.

56. Guthrie S, Bienkowska-Gibbs T, Manville C, Pollitt A, Kirtley A, Wooding S. The impact of the National Institute for Health Research Health Technology Assessment programme, 2003–13: a multimethod evaluation. Health Technol Assess. 2015;19:1–291.

57. Drew CH, Pettibone KG, Finch FO, Giles D, Jordan P. Automated research impact assessment: a new bibliometrics approach. Scientometrics. 2016;106:987–1005.

  58. 58.

    Bornmann L, Haunschild R, Marx W. Policy documents as sources for measuring societal impact: how often is climate change research mentioned in policy-related documents? Scientometrics. 2016;109:1477–95.

  59. 59.

    Searles A, Doran C, Attia J, Knight D, Wiggers J, Deeming S, Mattes J, Webb B, Hannan S, Ling R, et al. An approach to measuring and encouraging research translation and research impact. Health Res Policy Syst. 2016;14:60.

  60. 60.

    Donovan C. The Australian Research Quality Framework: A live experiment in capturing the social, economic, environmental, and cultural returns of publicly funded research. N Dir Eval. 2008;2008:47–60.

  61. 61.

    Excellence in Innovation for Australia (EIA) Trial.https://go8.edu.au/programs-and-fellowships/excellence-innovation-australia-eia-trial. Accessed 15 Dec 2016.

  62. 62.

    Ritter A, Lancaster K. Measuring research influence on drug policy: A case example of two epidemiological monitoring systems. Int J Drug policy. 2013;24:30–7.

Download references

Funding

This work was supported by funding from the National Health and Medical Research Council of Australia (Grant #1024291).

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Author information

All authors were involved in the conception and design of this review. RN was responsible for the search strategy design, study retrieval and data extraction. All authors contributed to the analysis and interpretation of the data. All authors contributed to the preparation of the final text of the article and approved the final manuscript.

Correspondence to Robyn Newson.

Ethics declarations

Ethics approval and consent to participate

This study received approval from the Human Research Ethics Committee, University of Sydney (2016/268).

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional file

Additional file 1:

List of included empirical studies. (DOCX 40 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Keywords

  • Research impact assessment
  • Research impact
  • Research payback
  • Policy impact
  • Research utilisation
  • Research use
  • Health policy
  • Health research
  • Evidence-informed policy