
Policy and practice impacts of applied research: a case study analysis of the New South Wales Health Promotion Demonstration Research Grants Scheme 2000–2006

Abstract

Background

Intervention research provides important information regarding feasible and effective interventions for health policy makers, but few empirical studies have explored the mechanisms by which these studies influence policy and practice. This study provides an exploratory case series analysis of the policy, practice and other related impacts of the 15 research projects funded through the New South Wales Health Promotion Demonstration Research Grants Scheme during the period 2000 to 2006, and explores the factors mediating these impacts.

Methods

Data collection included semi-structured interviews with the chief investigators (n = 17) and end-users (n = 29) of each of the 15 projects to explore if, how and under what circumstances the findings had been used, as well as bibliometric analysis and verification using documentary evidence. Data analysis involved thematic coding of interview data and triangulation with other data sources to produce case summaries of impacts for each project. Case summaries were then individually assessed against four impact criteria and discussed at a verification panel meeting where final group assessments of the impact of research projects were made and key influences of research impact identified.

Results

Funded projects had variable impacts on policy and practice. Project findings were used for agenda setting (raising awareness of issues), identifying areas and target groups for interventions, informing new policies, and supporting and justifying existing policies and programs across sectors. Reported factors influencing the use of findings were: i) nature of the intervention; ii) leadership and champions; iii) research quality; iv) effective partnerships; v) dissemination strategies used; and, vi) contextual factors.

Conclusions

The case series analysis provides new insights into how and under what circumstances intervention research is used to influence real world policy and practice. The findings highlight that intervention research projects can achieve the greatest policy and practice impacts if they address proximal needs of the policy context by engaging end-users from the inception of projects, utilizing existing policy networks and structures, and disseminating findings through a range of strategies that go beyond traditional peer review publications.


Background

Public funds are expended on health research in large part to lead to improvements in policy [1–3], practice, resource allocation, and ultimately, the health of the community [4, 5]. However, the transfer of new knowledge from research into practice continues to be far from optimal [2, 6, 7]. It is widely recognized that increasing the impact of research on policy and practice is likely to require many different strategies, including the development of research-policy partnerships, better summaries of evidence and more research-receptive policy and funding agencies [8, 9]. It is also increasingly acknowledged that studies designed to evaluate the impact of interventions to improve health (intervention research) can inform subsequent intervention-specific policy and practice [10]. However, only a relatively small proportion (between 10% and 23%) of primary research funded by public health agencies or published in the peer reviewed literature is intervention research [11, 12].

Little is known about the nature and mechanisms that underlie the influence of intervention research on health policy or practice. In fact, there are no agreed systematic approaches for measuring such impacts [13]. Traditional indices of research productivity relate to numbers of papers, impact factors of journals and citations. These metrics are widely used by research granting bodies, although they do not always relate well to the ultimate goals of applied health and medical research [14–16]. The emerging literature on research impact [17–19] highlights its complex, non-linear, unpredictable nature, and the propensity, to date, to count what can be easily measured, rather than measuring what “counts” in terms of significant, enduring changes [14].

A recent systematic review of approaches to assessing research impacts by Banzi et al. [13] identified 22 reports included in four systematic reviews and 14 primary studies. These publications described several theoretical frameworks and methodological approaches (for example, bibliometrics, econometrics, interviews, ad hoc case studies) to measuring research impacts, with the “payback model” as the most frequently used conceptual framework [19]. Based on this review of existing models, Banzi et al. differentiated five broad categories of research impacts: i) advancing knowledge; ii) capacity building; iii) informing decision-making; iv) health benefits; and, v) broad socio-economic benefits.

To date, most primary studies of research impacts (‘impacts research’) have been small-scale case studies, and there has been no comprehensive assessment of impacts and their mediators across any single applied research funding scheme. The New South Wales (NSW) Health Promotion Demonstration Research Grants Scheme (HPDRGS) was designed by the NSW Ministry of Health, Australia, in response to a paucity of evidence on large-scale intervention effectiveness across prevention policy priorities. The specific aims of the scheme are to fund applied research that builds the evidence base for health promotion policy and practice, and to develop partnerships and build capacity for health promotion research between health districts, universities and organizations outside of the health sector. This paper reports on an exploratory case series analysis of all 15 projects funded under the HPDRGS during the period 2000 to 2006, to determine their subsequent policy and practice impacts (the ‘what’) and to explore the forces and factors influencing these impacts (the ‘how’ and ‘why’).

Methods

At the commencement of the study in January 2012, 15 projects funded during the period 2000 to 2006 had been completed for at least twenty-four months and most (n = 12) for longer than four years. This period was selected to balance the time required for evidence of impact to become manifest, against the potential accuracy of recall by respondents. A case study approach was used to explore if, and in what ways, research projects were used to influence policy and practice, and to identify the key factors (how and why) which influenced their use. Case study methods are appropriate for answering ‘how’ and ‘why’ questions when the phenomenon of interest (in this case, applied research) is embedded within a real-life context (policy and practice environment) [20]. Due to the diversity of projects under consideration, this case series included a number of different methods (Figure 1). The study was approved by the University of Sydney Human Research Ethics Committee and all participants gave written informed consent to take part in the study.

Figure 1 Overview of study methods and key steps in the research process

Step 1 Research scoping

After considering the ‘research impact’ literature, we adapted the conceptual framework by Banzi et al. [13] as its domains aligned very closely with the objectives of the HPDRGS; with the five broad impact domains collapsed into four, as follows: i) Advancing knowledge and research related impacts (peer review articles, impact on research methods, better targeting for future research); ii) Capacity building (development of research capacity of staff, students, others); iii) Informing policies and product development (policy, guidelines, products, intervention development); and, iv) Health, societal and economic impacts (health status, social benefits, shift in knowledge, attitudes, behaviors, social capital, macroeconomic impacts, etc.). We combined categories four and five of Banzi’s framework (health benefits and broad socio-economic benefits) as pilot testing suggested they were both distal and inter-related and would be more efficiently described together.

Interview protocols for chief investigators (CI) and nominated end-users were guided by the adapted Banzi impact categories, lessons from the ‘research impacts’ literature scan and questions arising from a preliminary review of available documentation for the 15 projects. The interviews were then piloted with a CI and an end-user of two intervention research projects of commensurate size not funded through the HPDRGS.

Step 2 Data collection

Semi-structured telephone interviews

The CIs were invited by email to participate in the study, with non-responders sent a reminder email after one week and then followed-up by telephone up to three times. Participating CIs were asked to nominate up to three end-users, defined as individuals who could provide a perspective on how the project had influenced policy, practice, organizational development, further research or in applications such as guidelines or teaching materials. CIs were encouraged to identify end-users from a range of sectors in which impacts occurred. These end-users were approached by email, using the same contact and follow-up procedure as CIs, to participate in an interview exploring how the project and its findings had been used from their perspective. CIs were typically university-based academics or health service managers/researchers with joint appointments, while end-users were most frequently current or former policy makers, health service managers and practitioners.

Telephone interviews were conducted by an experienced research officer (RN) who had a good working knowledge of disease prevention, intervention research, and related policy and practice contexts, and who was independent of the CIs and end-users. Interviews were digitally recorded with participants’ permission. Both CI and end-user telephone interviews explored perspectives on the overall impacts of individual projects, asked about specific impacts in relation to each of the four categories, and identified factors contributing to such impacts, or lack thereof. The following list outlines a summary of the telephone interview topic guides for CIs and end-users.

Table 1 Project characteristics, key implications and dissemination methods used for HPDRG projects 2000-2006

Semi-structured telephone interview topic guide: Investigators and End users

  • Recall of research aims, key finding and implications

  • Dissemination process (how, factors influencing the dissemination process)

  • Interface with end users – how research team worked with potential end users (investigators only)

  • Interface with researchers – how were end users involved in the research project, how did they hear about the findings (end users only)

  • Overall impact – how have the findings been used

  • Specific impacts – capacity building, partnerships, policy and product development, health and other sector impacts, societal and economic impacts

  • Circumstances surrounding the use of the findings, or limited impact of the findings

  • Evidence of impacts – documentary sources

  • Nomination of end users (investigators only)

Bibliometric analysis

A bibliometric analysis was also undertaken in Scopus in April-June 2012 to examine the total and mean number of citations (excluding self-citations) for all peer review publications arising from each project. Project reports were located and examined to document key project findings. Respondents were also asked to provide copies of additional documentary sources as evidence of how project findings had been used, such as policy documents, briefs, reports and curriculum materials. Additional searches of the grey literature were undertaken to corroborate documentary evidence of impacts reported in the interviews. Documentary evidence was compiled by the research officer (RN) and checked by two other authors (AJM and JB).
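The per-project citation summary described above can be sketched in a few lines. This is an illustrative sketch, not the study's actual tooling: project names and citation figures are hypothetical, and self-citations are assumed to have already been excluded from the counts (as was done via Scopus).

```python
# Illustrative sketch of the bibliometric summary: per-project total and mean
# citations across peer-reviewed papers. All names and counts are hypothetical;
# self-citations are assumed already excluded from each paper's count.

def summarise(citations_per_paper):
    """Return (n_papers, total_citations, mean_citations_per_paper)."""
    n = len(citations_per_paper)
    total = sum(citations_per_paper)
    return n, total, (total / n if n else 0.0)

# Hypothetical non-self citation counts, one entry per published paper
projects = {
    "Project A": [12, 3, 0],
    "Project B": [7],
    "Project C": [],  # project produced no peer-reviewed papers
}

for name, cites in projects.items():
    n, total, avg = summarise(cites)
    print(f"{name}: {n} papers, {total} citations, mean {avg:.1f} per paper")
```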

Step 3 Impact Assessment

Data synthesis and verification panel

Interview and document data were collated and triangulated into ‘case summaries’ by two authors (AJM and JB) and reviewed for accuracy by the research officer who conducted the interviews (RN). Case summaries for each project included: i) key research findings and implications; ii) the perspectives of CIs and end-users on how project findings had been used and key factors influencing use, including illustrative quotations; iii) bibliometric analysis; iv) documentary evidence of impacts; and, v) notes and observations made during CI and end-user interviews. The coding framework for analyzing these case summaries was based on impact domains, contextual information and key factors influencing research use.

A verification panel was established to review and assess the collated case study material, and provide an overall assessment of the policy and practice impact of each of the 15 projects. Our approach was adapted from the RAND/UCLA (University of California, Los Angeles, USA) appropriateness method [21, 22]. This systematic consensus method has been widely used to derive expert consensus on clinical indications, quality improvement and assessing effectiveness of health networks [23, 24].

The verification panel was made up of eight members of the research team: a mix of senior academics and policy makers, including international experts in the field of applied population health research. Case summaries of each project were independently assessed by panel members across the four impact domains and overall impact. Assessments were made using a nine-point scale: 1 to 3 ‘limited impact’; 4 to 6 ‘moderate impact’; and 7 to 9 ‘high impact’. Judgments of overall impact took into account the four impact domains as well as: size of the project and level of funding; time since project completion; potential sustainability of the impact; and, research and implementation challenges that were addressed in creating the impact. Individual ratings were compiled and discussed at a verification panel meeting held in August 2012, where consensus was reached on overall impact assessments for all 15 studies. The panel also identified a number of key influences on policy and practice impacts across projects, which were further explored by a final analysis of the data to describe ‘how’ and ‘why’ projects were impactful or not.
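The rating scheme above can be expressed as a minimal sketch. The banding (1 to 3 limited, 4 to 6 moderate, 7 to 9 high) follows the text; aggregating independent ratings by their median is an assumption borrowed from the RAND/UCLA appropriateness method, whereas in the study itself the final group assessment was agreed by discussion at the panel meeting.

```python
# Minimal sketch of the nine-point impact scale. Banding follows the study;
# median aggregation of independent ratings is an assumed simplification
# (the study reached final assessments by panel discussion, not by formula).
from statistics import median

def impact_band(rating):
    """Map a 1-9 rating onto the study's impact bands."""
    if not 1 <= rating <= 9:
        raise ValueError("rating must be between 1 and 9")
    if rating <= 3:
        return "limited impact"
    if rating <= 6:
        return "moderate impact"
    return "high impact"

def panel_assessment(ratings):
    """Summarise a panel's independent ratings via their median rating."""
    m = median(ratings)
    return m, impact_band(round(m))

# Hypothetical ratings from an eight-member panel
print(panel_assessment([7, 8, 6, 7, 9, 7, 8, 7]))
```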

Results

Project characteristics

Between 2000 and 2006, fifteen projects were funded across a broad range of topics, using a range of study designs, most commonly RCTs (n = 7), mixed methods (n = 5) and quasi-experimental designs (n = 2) (Table 1). Most projects employed a mix of qualitative and quantitative methods (n = 13). Funding ranged from A$10,000 to A$300,000 per project. Projects were most commonly implemented in community (n = 9) and health services (n = 5) settings in both rural (n = 8) and metropolitan areas (n = 7).

Table 2 Interview sample, research outputs, means of independent assessment and panel assessment of overall impact for projects funded between 2000 and 2006

Semi-structured interviews and panel impact assessments

A total of 46 interviews were conducted (Table 2), with CI interviews (mean duration: 53.3 mins; range: 38 to 97 mins) lasting longer than end-user interviews (mean duration: 40.0 mins; range: 19 to 81 mins). The response rate was 70.8% for CIs and 74.4% for end-users.

Table 3 How projects and their findings informed policy and practice

There was limited variation between panel members in their assessments of the overall impact of each project, and consensus on the final group overall impact assessment was achieved easily. Three studies were considered to possess ‘high’ overall impact (Tai Chi, Mental Health First Aid and Nicotine Dependent Inpatients), eight ‘moderate’ overall impact (Rural Hearing Conservation, Smoking Cessation in Indigenous Communities, Pedometers in Cardiac Rehab, Exercise to Prevent Falls after Stroke, Walk-to-School, Reducing Falls Injuries within Aged Care, Reducing Smoking in Mental Health, Cycling Infrastructure), while four were rated as ‘low’ overall impact (Nutrition Practices in Youth Housing, Secondary Prevention in Patients with CVD, Safer Streetscapes, Making Connections). Impact ratings across the adapted Banzi categories, as well as overall impact assessments for each project, are shown in Table 2.

Table 4 Factors influencing impacts of research on policy and practice (across case studies)

Advancing knowledge

Projects sought to advance knowledge using a variety of dissemination methods including reports, peer-reviewed papers, conference presentations, theses, presentations to stakeholder groups, political advocacy, training, websites and the media (Table 1). The number of peer-reviewed papers generated per project ranged from 0 to 7, with a mean of 10 citations per paper (range: 0 to 73); 96% of these citations came from six projects, all of which were rated as high or moderate impact studies. The two projects independently rated as having a ‘high impact’ were the Tai Chi and Mental Health First Aid projects. The studies with the highest citations were effective interventions which provided novel results for the field of interest. High and moderately impactful projects were all managed by experienced researchers, and high quality publications were produced despite equivocal findings in some instances. All of the studies with low impact on advancing knowledge had null study results, no publications, and were mostly led by inexperienced researchers and practitioners.

Capacity building impacts

Both CIs and end-users indicated that capacity building occurred through staff development, partnership building and follow-on research funding. For many end-users, projects provided opportunities to develop their research skills and partnerships with researchers. Researchers of the two projects with high capacity building impact, Tai Chi and Mental Health First Aid, consistently stated that projects helped them to build their own research capacity and partnership networks, enabling them to build enduring connections to policy and practitioner networks from which a body of research emerged. A number of CIs and end-users of high and moderately impactful projects spoke of projects as a place where future research and service ‘leaders’ were trained.

Policy and practice impacts

In terms of policy impacts, end-user respondents from high and moderate impact studies reported using research to inform agenda setting and policy debates. Project findings also informed policy planning and, in some cases, underpinned elements of new policies in health services. At the practice level, high and moderate impact projects were reported as being used to inform program planning across a range of sectors. In the health sector a number of projects (Treatment of Nicotine Dependent Inpatients, Reducing Smoking in Mental Health, Smoking Cessation in Indigenous Communities) resulted in substantial practice changes in the provision of smoking cessation advice and nicotine replacement therapy in health services. A number of projects also informed organizational development, where interventions were integrated into the core business of health services. One such study (Tai Chi) led to a much more standardized provision of falls prevention interventions in community settings across large parts of the state of NSW.

In some cases, high impact research provided retrospective support and rationale for existing health promotion programs, such as the NSW Rural Hearing Conservation Program. Overall, practice impacts appeared to largely flow from policy impacts. For example, the policy focus on tobacco control in hospital settings contributed to the development of new practice resources and professional development for smoking brief intervention in hospitals, as well as in mental health units. A summary of how projects and their findings influenced policy and practice, with illustrative quotes derived from interviews, is provided in Table 3.

Broader health, economic and societal impacts

None of the projects were independently assessed as ‘high impact’ in the health, societal and economic impacts domain, with a mean ‘moderate’ rating being the highest achieved for the Rural Hearing Conservation Program, Tai Chi, Mental Health First Aid, Treatment of Nicotine Dependent Inpatients, Falls in Aged Care and Smoking Cessation in Indigenous Communities programs.

Factors influencing policy and practice impacts: the ‘how’ and ‘why’

Examination of patterns differentiating high, moderate and low (overall) impact intervention research at the verification panel and further thematic analysis of interview transcripts identified six key factors that particularly contributed to these impacts. A summary of these factors and illustrative quotes derived from the interviews are collated in Table 4.

Nature of the intervention

All of the studies considered to have high policy and practice impacts (Tai Chi, Mental Health First Aid and Treatment of Nicotine Dependent Inpatients) also had moderate to high ratings for advancing knowledge and strong research outputs. However, a number of studies that achieved moderate to high ratings in advancing knowledge and demonstrated strong research outputs (journal papers and citations) failed to achieve high levels of real world policy and practice impacts, namely the Walk to School and Cycling Infrastructure programs. Data suggest that these projects lacked definitive results and a clear agency with policy responsibility through which policy makers could advocate for their replication and expansion. In addition, the complex, inter-sectoral nature of these interventions, which required environmental and cultural change to achieve their intended outcomes, made them difficult to replicate or scale up.

Further examination of studies with low and moderate impact also highlighted a number of barriers to applying findings, including the absence of clear results indicating effective action, interventions and outcomes that were hard to explain, and no consideration of how effective interventions could be scaled up for population-level implementation. The majority of high impact projects effectively packaged intervention materials and tapped into a readily available workforce to expand program reach. To illustrate, in the space of nine years the Mental Health First Aid program has been scaled up using a ‘train the trainer’ model to the point where it has reached 1% of the Australian population [25].

It is interesting to note that though high and moderately impactful projects generally received larger amounts of funding, this alone was not always related to impact. The Rural Hearing Conservation project evaluated an existing program with minimal resources (A$17,670), providing a high return on investment in terms of policy and practice impacts. Also, a number of the least impactful projects received large amounts of funding.

Leadership and champions

Highly impactful projects all displayed strong networks of leaders and champions who advocated for further adoption of interventions into policy and practice. These individuals were found to promote the benefits of the intervention across a variety of stakeholder groups including politicians, media, policy makers and the general public, as well as relevant professional and academic networks. Champions included CIs, end-users and chief executive officers of organizations within which interventions were trialed, as well as intervention service providers who had a commercial interest in expanded program delivery.

Effective partnerships

For the majority of high and moderately impactful studies, partnerships between end-user groups and the CIs existed from the inception of the projects. The analysis showed that in many cases ongoing relationships provided the continuity and mechanisms for project findings to be disseminated and considered, and for end-user groups to become engaged in formulating the key policy recommendations and wider dissemination processes. These partnerships also allowed researchers to tap into prevailing policy priorities and were considered an important contributor to their capacity to undertake further priority-driven research in partnership with end-users.

Dissemination approaches

Impactful projects consistently used active dissemination strategies, such as discussion of findings at workshops between researchers and end-users, as well as dissemination of findings through established policy and practitioner networks. These projects also developed ‘knowledge transfer’ products, such as short reports highlighting key findings and recommendations and packaged project resources/materials, making them available on websites for broader use. Some high impact studies intentionally published findings in open access journals, as a way of disseminating project findings to a broader audience of end-users. Analysis of low impact studies indicated that they, for the most part, gave little consideration to dissemination processes and, in a number of instances, offered no analysis of broader policy implications of project findings.

Perceived research quality

Research quality was consistently cited by a number of end-users of high impact projects as an important consideration in their use of research findings. However, end-users also stated that decisions to change or modify policy or practice were informed by the ‘body of evidence’, rather than findings of single studies.

Contextual factors

Among the numerous contextual factors identified as potential facilitators to the application of research findings, one of the most influential was the prevailing policy ‘zeitgeist’. CIs and end-users of high impact projects spoke of a study’s ability to provide a potential solution to a pressing policy problem. So much so that some projects gained momentum through external factors, such as societal events and parliamentary inquiries, that focused community and political attention on issues for which research could provide a response (such as Mental Health First Aid). For low impact studies, key impediments to applying the findings included circumstances where researchers lacked the capacity to establish and maintain links with policy makers, or lacked alignment with current policy priorities.

Discussion

While a growing number of studies have examined impacts of research [13] and research funding [26], this is the first study to document the impacts of a policy-driven applied research funding scheme. This analysis of research impacts indicates that some, but not all, of the intervention research funded through the HPDRGS achieved a wide range of tangible impacts across most domains. It is clear that the three projects with the highest overall impact ratings in this study had substantial impacts on advancing knowledge and capacity building, as well as policy and practice. However, some projects with substantial research impacts (papers and citations) yielded only minimal policy and practice impacts. This reinforces that traditional indices of research impact and researchers’ track record on publications and grants are not always an accurate guide to the policy and practice impacts of their research.

This case study analysis demonstrates the positive impact that intervention research funding can have on a range of policy and practice decisions, with findings used as a policy advocacy tool (to attract attention and funding to an issue), for priority setting (identifying areas and target groups for intervention), and to support and justify existing programs/approaches or identify the need for alternatives. In a number of instances project findings informed the early stages of policy development, when there had previously been a lack of definitive evidence about effective intervention approaches. We also found that research findings were used to directly underpin key elements of existing policies for falls prevention in older people and for tobacco control. In addition, findings were used to improve understanding of issues associated with implementing and assessing new interventions such as travel guides and other promotion of active transport.

In many instances the use of project findings by practitioners reflected a need to act on state-wide policy imperatives. The introduction of the NSW policies on smoke-free hospitals and falls prevention saw many practitioners tasked with developing local responses using relevant HPDRGS project resources. This highlights the value of having research funding aligned with state-wide policy.

It is clear from this analysis that many factors influence public health policy and practice, with evidence from an effective intervention study in itself generally not enough to shift the current approaches. Consistent with previous research [5], we found that findings of a single study were usually considered alongside a broader body of evidence about effective intervention approaches, as well as a consideration of the local context and timing requirements. In a seeming paradox, some studies that had null or equivocal results still achieved moderate policy and practice impact. This suggests that adoption of project findings into policy and practice are influenced by factors other than evidence of effectiveness. Closer review of these projects revealed that the introduction of state-wide policies and programs meant that practitioners adopted the available project materials, meeting an immediate practice need, even though studies were demonstrably not effective.

Further, we found that a range of contextual factors were critical in facilitating the use of project findings, which is in agreement with previous studies [27–31]. In particular, supportive policy contexts encouraged partnerships between researchers and end-users from the inception of projects and, where possible, the use of existing structures (policy and practitioner networks, etc.) for communication. Tapping into existing policy and practitioner networks and processes appeared to enable researchers to build partnerships and trust with practice and policy ‘users’ and allowed better utilization of policy ‘windows of opportunity’. In some instances a confluence of events provided the right conditions for an intervention to be widely adopted into policy and practice. One such tragic event was the Virginia Tech massacre in the United States, which highlighted the importance of mental health literacy and was considered by the CI of this project to be a critical factor driving the early expansion of Mental Health First Aid in North America. All of the high impact projects were characterized by simple interventions that were well implemented, high quality research, champions to advocate and disseminate for adoption, and supportive contextual factors. The review of materials by the verification panel identified that an intervention’s capacity to be packaged, with change ‘agents’ trained in its delivery, was particularly important, as evidenced by the rapid expansion of Mental Health First Aid and Tai Chi across their respective practice settings.

This study supports a growing body of evidence about the importance of embedding and linking research with broader strategic policy contexts. In a systematic review of 24 studies of the use of evidence by health policy makers, Innvaer and colleagues [28] found that personal contact, timeliness and relevance were the most commonly reported facilitators of research use. In our current study, impactful projects appeared to effectively engage key end-user groups, to ensure that projects were aligned to the interests and needs of such groups, and to promote ownership of the findings, thereby increasing commitment to action. Most of the low impact studies had no such clear links with end-users or existing policy and practice networks.

Findings of this and other recent studies [31] also highlight the importance of producing a range of dissemination products, such as short reports, fact sheets and project resources, making them available through websites, and publishing in open access journals to facilitate the use of findings by end-users. There is increasing emphasis from funding agencies on making research evidence readily available [32, 33]. Yet recent studies of public health research suggest that most dissemination activity rarely goes beyond publishing academic papers, appears to be undertaken in an ad hoc, unfunded fashion, and that access to dissemination advice and support for researchers from funding agencies and academic institutions is lacking [34, 35]. This study highlights the value of funding and systematically supporting a wide range of dissemination activities.

The excellent return on investment from the Rural Hearing Conservation Program Evaluation highlights what can be achieved with limited resources when research funding is well targeted, and there appears to be merit in funding high quality evaluations of existing policies and programs. Increasingly, funding agencies require investigators to detail how their research impacts on policy and practice [36, 37], and a growing number of theoretical frameworks for assessing impact have been proposed [16, 38–41]. This study demonstrates the utility of the scoring and panel verification methods used for identifying and measuring proximal research impacts (advancing knowledge, capacity building, and policy and practice impacts). The findings of this and other recent studies [31] suggest, however, that the longer-term impacts (societal, health and economic) of a single study can be difficult to discern and attribute. The CIs, end-users and verification panelists all reported difficulty in identifying and assessing these impacts for any single study. This is understandable, as such impacts almost always result from a complex interplay of contributing factors, and there remains a need for alternative ways of conceptualizing and measuring longer-term research ‘impacts’.

This study has a number of strengths and limitations. Its strengths are that impacts were assessed using multiple methods, including bibliometric analysis, interviews with researchers and end-users, and documentary checks. These data were triangulated and distilled into case summaries, which were used in a rigorous verification process involving independent assessments of impacts and a group panel assessment. The documentary checks lend confidence that the perspectives of the chief investigators and end-users were credible, while the verification panel process provides a well-established and tested methodology for reaching expert consensus and minimizing the subjectivity of assessments. The end-users were purposively sampled on the basis of having knowledge and/or experience of how the project findings had been used; while this ensured they contributed relevant information, it created the potential for some degree of social response bias, as some end-users may have been inclined to report positive impacts or over-inflate those impacts. We attempted to reduce this bias by having researchers not previously involved in funded HPDRGS projects conduct the interviews and undertake the analysis. The recall of impacts was somewhat uncertain for some projects from the early funding rounds, as these were conducted 10 to 12 years ago, and for one of the projects the end-users could not be identified. It is therefore possible that the impacts of some of these older projects were underestimated.

Conclusions

This HPDRGS case series analysis provides new methods and insights into how intervention research projects influence policy and practice. Funded projects had variable impacts on policy and practice. Where impacts occurred, they included raising awareness of health interventions, identifying priority issues and target groups for interventions, underpinning new policies, and supporting and justifying existing policies and programs. The success of high impact projects was perceived to be due in large part to the nature and quality of the intervention itself (simple to understand, with built-in mechanisms for training and delivery), high quality research, champions who advocated for adoption, and active dissemination strategies. Our findings also highlight the need for strong partnerships between researchers and policy makers/practitioners to increase ownership of the findings and commitment to action.

Abbreviations

CI:

Chief investigator

HPDRGS:

Health Promotion Demonstration Research Grants Scheme

References

  1. Anderson W, Papadakis E: Research to improve health practice and policy. Med J Aust. 2009, 191: 646-647.

  2. Cooksey D: A Review of UK Health Research Funding. 2006, London: HMSO.

  3. Health and Medical Research Strategic Review Committee: The Virtuous Cycle: Working Together for Health and Medical Research. 1998, Canberra: Commonwealth of Australia.

  4. National Health and Medical Research Council Public Health Advisory Committee: Report of the Review of Public Health Research Funding in Australia. 2008, Canberra: NHMRC.

  5. Campbell DM: Increasing the use of evidence in health policy: practice and views of policy makers and researchers. Australia and New Zealand Health Policy. 2009, 6: 21. 10.1186/1743-8462-6-21.

  6. Banks G: Evidence-Based Policy-Making: What is it? How do we get it?. 2009, Canberra: Productivity Commission.

  7. Hanney SR, Gonzalez-Block MA, Buxton MJ, Kogan M: The utilisation of health research in policy-making: concepts, examples and methods of assessment. Health Res Policy Syst. 2003, 1: 2-29. 10.1186/1478-4505-1-2.

  8. Redman S, Jorm L, Haines M: Increasing the use of research in health policy: the Sax Institute model. Australasian Epidemiologist. 2008, 15: 15-18.

  9. Lomas J: Improving Research Dissemination and Uptake in the Health Sector: Beyond the Sound of One Hand Clapping. 1997, Hamilton: McMaster University Centre for Health Economics and Policy Analysis.

  10. Kilbourne AM, Neumann MS, Pincus HA, Bauer MS, Stall R: Implementing evidence-based interventions in health care: application of the replicating effective programs framework. Implement Sci. 2007, 2: 42. 10.1186/1748-5908-2-42.

  11. Sanson-Fisher RW, Campbell EM, Htun AT, Bailey LJ, Miller CJ: We are what we do: research outputs of public health. Am J Prev Med. 2008, 35: 380-385. 10.1016/j.amepre.2008.06.039.

  12. Milat AJ, Bauman A, Redman S, Curac N: Public health research outputs from efficacy to dissemination: a bibliometric analysis. BMC Public Health. 2011, 11: 934. 10.1186/1471-2458-11-934.

  13. Banzi R, Moja L, Pistotti V, Facchini A, Liberati A: Conceptual frameworks and empirical approaches used to assess the impact of health research: an overview of reviews. Health Res Policy Syst. 2011, 9: 26. 10.1186/1478-4505-9-26.

  14. Wells R, Whitworth JA: Assessing outcomes of health and medical research: do we measure what counts or count what we can measure?. Aust New Zealand Health Policy. 2007, 4: 14. 10.1186/1743-8462-4-14.

  15. Kuruvilla S, Mays N, Walt G: Describing the impact of health services and policy research. J Health Serv Res Policy. 2007, 12 (1): 23-31.

  16. Weiss AP: Measuring the impact of medical research: moving from outputs to outcomes. Am J Psychiatry. 2007, 164: 206-214. 10.1176/appi.ajp.164.2.206.

  17. Smith R: Measuring the social impact of research - difficult but necessary. BMJ. 2001, 323: 528. 10.1136/bmj.323.7312.528.

  18. Davies H, Nutley SM, Walter I: Assessing the Impact of Social Science Research: Conceptual, Methodological and Practical Issues. A Background Discussion Paper for ESRC Symposium on Assessing Non-academic Impact of Research. 2005, St Andrews: Research Unit for Research Utilisation, School of Management, University of St Andrews.

  19. Kalucy EC, Jackson-Bowers E, McIntyre E, Reed R: The feasibility of determining the impact of primary health care research projects using the Payback Framework. Health Res Policy Syst. 2009, 7: 11. 10.1186/1478-4505-7-11.

  20. Yin R: Case Study Research: Design and Methods. 2003, London: Sage.

  21. Brook RH, Chassin MR, Fink A, Solomon DH, Kosecoff J, Park RE: A method for the detailed assessment of the appropriateness of medical technologies. Int J Technol Assess Health Care. 1986, 2: 53-63. 10.1017/S0266462300002774.

  22. Shekelle P: The appropriateness method. Medical Decision Making. 2004, 24: 228-231. 10.1177/0272989X04264212.

  23. Haines M, Brown B, Craig J, D'Este C, Elliott E, Klineberg E, McInnes E, Middleton S, Paul C, Redman S, Yano EM, on behalf of Clinical Networks Research: Determinants of successful clinical networks: the conceptual framework and study protocol. Implement Sci. 2012, 7: 16. 10.1186/1748-5908-7-16.

  24. McGory ML, Kao KK, Shekelle PG, Rubenstein LZ, Leonardi MJ, Parikh JA, Fink A, Ko CY: Developing quality indicators for elderly surgical patients. Ann Surg. 2009, 250: 338-347. 10.1097/SLA.0b013e3181ae575a.

  25. Jorm AF, Kitchener BA: Noting a landmark achievement: Mental health first aid training reaches 1% of Australian adults. Aust N Z J Psychiatry. 2011, 45: 808-813. 10.3109/00048674.2011.594785.

  26. Kingwell BA, Anderson GP, Duckett SJ, Hoole EA, Jackson-Pulver LR, Khachigian LM, Morris ME, Roder DM, Rothwell-Short J, Wilson AJ: Evaluation of NHMRC funded research completed in 1992, 1997 and 2003: gains in knowledge, health and wealth. Med J Aust. 2006, 184: 282-286.

  27. Belkhodja O, Amara N, Landry R, Ouimet M: The extent and organizational determinants of research utilization in Canadian health services organizations. Sci Commun. 2007, 28 (23): 337-417.

  28. Innvær S, Vist G, Trommald M, Oxman A: Health policy-makers' perceptions of their use of evidence: a systematic review. J Health Serv Res Policy. 2002, 7 (4): 239-244. 10.1258/135581902320432778.

  29. Moore G, Redman S, Haines M, Todd A: What works to increase the use of research in population health policies and programmes: a review. Evidence & Policy. 2011, 7: 277-305. 10.1332/174426411X579199.

  30. Rispel LC, Doherty J: Research in support of health systems transformation in South Africa: the experience of the Centre for Health Policy. J Pub Health Policy. 2011, 32: 10-29.

  31. Laws R, King L, Hardy L, Milat AJ, Rissel C, Newson R, Rychetnik L, Bauman AE: Utilization of a population health survey in policy and practice: a case study. Health Res Policy Syst. 2013, 11: 4. 10.1186/1478-4505-11-4.

  32. Canadian Institutes of Health Research: Roadmap: Creating Innovative Research for Better Health and Health Care. CIHR Strategic Plan 2009/10–2013/14. 2009, Ottawa: Canadian Institutes of Health Research.

  33. National Health and Medical Research Council: National Health and Medical Research Council (NHMRC) Strategic Plan 2007–2009. 2007, Canberra: NHMRC.

  34. Tetroe JM, Graham ID, Foy R, Robinson N, Eccles MP, Wensing M, Durieux P, Légaré F, Nielson CP, Adily A, Ward JE, Porter C, Shea B, Grimshaw JM: Health research funding agencies' support and promotion of knowledge translation: an international study. Milbank Q. 2008, 86: 125-155. 10.1111/j.1468-0009.2007.00515.x.

  35. Wilson PM, Petticrew M, Calnan MW, Nazareth I: Does dissemination extend beyond publication: a survey of a cross section of public funded research in the UK. Implement Sci. 2010, 5: 61. 10.1186/1748-5908-5-61.

  36. Canadian Institutes of Health Research Knowledge Synthesis Grant. http://www.researchnet-recherchenet.ca/rnr16/viewOpportunityDetails.do?progCd=10219&org=CIHR

  37. National Health and Medical Research Council Partnership Project Grants. http://www.nhmrc.gov.au/_files_nhmrc/file/grants/apply/strategic/funding_rules_partnership_projects_120418.pdf

  38. Landry R, Amara N, Lamari M: Climbing the ladder of research utilization: evidence from social science research. Sci Commun. 2001, 22 (4): 396-422. 10.1177/1075547001022004003.

  39. Lavis J, Ross S, McLeod C, Gildiner A: Measuring the impact of health research. J Health Serv Res Policy. 2003, 8 (3): 165-170. 10.1258/135581903322029520.

  40. Wooding S, Hanney S, Buxton M, Grant J: Payback arising from research funding: evaluation of the Arthritis Research Campaign. Rheumatology. 2005, 44 (9): 1145-1156. 10.1093/rheumatology/keh708.

  41. Australian Society for Medical Research: Exceptional Returns: The Value of Investing in Health R&D in Australia. 2008, Canberra: Australian Society for Medical Research.

Acknowledgements

This work was supported by a Capacity Building Infrastructure Grant from the NSW Ministry of Health, Australia. The authors would like to thank the study participants for their time and contribution to this research. Finally, we would like to thank the NSW Ministry of Health for its ongoing commitment to funding applied research through the NSW Health Promotion Demonstration Research Grants Scheme.

Author information

Corresponding author

Correspondence to Andrew J Milat.

Additional information

Competing interests

To minimize potential conflicts of interest and response bias in the interviews, an independent research officer (RN) conducted the interviews with all of the chief investigators and end-users. This was particularly important as one of the authors (AJM) had previously administered the HPDRGS, and two others (AB, CR) had been past recipients of its grants. Those involved in funded projects (AB, CR) declared their potential conflicts of interest at the verification panel meeting prior to any group assessment of impacts; they did not assess the impacts of their own projects and did not contribute to final group assessments of their work.

Authors’ contributions

AJM conceived the study, and AJM, RL, LK and CR designed the methods. RN was responsible for conducting the interviews and collecting documentary sources. AJM and JB analyzed the case study data and produced the case summaries. AJM drafted the manuscript. All authors contributed to data interpretation and have read and approved the final manuscript.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Milat, A.J., Laws, R., King, L. et al. Policy and practice impacts of applied research: a case study analysis of the New South Wales Health Promotion Demonstration Research Grants Scheme 2000–2006. Health Res Policy Sys 11, 5 (2013). https://doi.org/10.1186/1478-4505-11-5

Keywords

  • Government
  • Health promotion
  • Intervention research
  • Policy