What can we learn from interventions that aim to increase policy-makers’ capacity to use research? A realist scoping review

Background: Health policy-making can benefit from more effective use of research. In many policy settings there is scope to increase capacity for using research individually and organisationally, but little is known about which strategies work best in which circumstances. This review addresses the question: what causal mechanisms can best explain the observed outcomes of interventions that aim to increase policy-makers' capacity to use research in their work?

Methods: Articles were identified from three available reviews and two databases (PAIS and WoS; 1999–2016). Using a realist approach, articles were reviewed for information about contexts, outcomes (including process effects) and possible causal mechanisms. Strategy + Context + Mechanism = Outcomes (SCMO) configurations were developed, drawing on theory and findings from other studies to develop tentative hypotheses that might be applicable across a range of intervention sites.

Results: We found 22 studies spanning 18 countries. There were two dominant design strategies (needs-based tailoring and multi-component design) and 18 intervention strategies targeting four domains of capacity, namely access to research, skills improvement, systems improvement and interaction. Many potential mechanisms were identified, as well as some enduring contextual characteristics that all interventions should consider. The evidence was variable, but the SCMO analysis suggested that tailored interactive workshops supported by goal-focused mentoring, and genuine collaboration, seem particularly promising. Systems supports and platforms for cross-sector collaboration are likely to play crucial roles. Gaps in the literature are discussed.

Conclusion: This exploratory review tentatively posits causal mechanisms that might explain how intervention strategies work in different contexts to build capacity for using research in policy-making.
Electronic supplementary material The online version of this article (10.1186/s12961-018-0277-1) contains supplementary material, which is available to authorized users.

Study 1.
Intervention study evaluating policymakers' perceptions of the relevance and potential impact of a long-term Policy Liaison Initiative (PLI) aimed at supporting the use of Cochrane systematic reviews in policy work.
Intervention strategies included: a community of practice to increase awareness and support knowledge sharing, seminars, skills workshops, a tailored website and review summaries.
Domain: Access, Skills improvement and Interaction Targeted participants: Policymakers at managerial and lower levels in the federal Department of Health.

Country: Australia
Individual interviews with participants: n=10/38 managers (who were randomly selected and sent personalised invitations). Seven group interviews: n=33/5000 staff across all levels of the Department (who were sent general invitations). Plus participation data.
Outcomes: use and awareness of systematic reviews; awareness and relevance of the PLI; individual-, unit- and organisation-level capability to assess, interpret and apply research; links with researchers and other external experts.
Referenced literature regarding the complexity of policymaking (e.g. [2,3]), the need for accessible research, and the value of high-quality systematic reviews as efficient decision-making aids (e.g. [4,5]). The selection of study outcomes was informed by arguments that distal research use cannot be wholly attributed to capacity-building (e.g. [6]). Data analysis was guided by the theoretical domains framework [7] and built on themes in previous studies (e.g. [8]).
Despite >565 occasions of attendance at forums and 294 members, most interviewees were not aware of the PLI. They used reviews/syntheses, but most did not distinguish between these and systematic reviews. Some did not understand the scope of systematic reviews. Access was impeded for those who found the Cochrane Library hard to navigate. Links with researchers bolstered capacity to access and use research. Policy relevance, applicability and accessibility were key needs. Managers were more confident than general staff that the Department had the skills to acquire, assess and interpret research.

Study 2.
Brownson et al. 2011 [9] Experimental study to identify the factors that influence whether state policymakers would find evidence briefs about mammography screening understandable, credible and useful.
States were stratified and participants randomised to four groups, each receiving one type of brief: (1) data-focused brief with state-level data, (2) data-focused brief with local-level data, (3) story-focused brief with state-level data, (4) story-focused brief with local-level data.
Domain: Access Three groups of state-level policymakers from six states: state legislators (elected officials), legislative staff (those serving the legislators), and health executive branch administrators (civil servants).

Participants n=840
Country: USA
Questionnaire-based study. Post-intervention survey responses n=291, an overall response rate of 35%, but a 47% response rate from executive branch administrators (the group of interest in this review).
Outcomes: whether the brief was understandable, credible, likely to be used, and likely to be shared.
Described the contradictory and overwhelming volume of information policymakers receive, and their preference for concise, relevant syntheses (e.g. [10,11]). Noted the power of narrative in policy communication and the composition of effective policy briefs (e.g. [12]). Stories were crafted as per Kreuter et al.'s [13] framework. Data collection and analysis referenced personal and professional influences on policymakers' information engagement (e.g. [14,15]).
All recipients found the briefs understandable and credible. 67% of executive policymakers reported the briefs contained an appropriate amount of information, but 20% wanted more. This group was more likely to use and to share data-focused than story-focused briefs. The same was true for legislators, but not for staffers, who were most likely to use story-focused briefs. Participants favoured state-level rather than local data, but they all operated at state level, so a regional policymaker cohort may have responded differently.

Study 3.
Campbell et al. 2011 [16] Observational evaluation of policymakers' satisfaction with the process and outcomes of Evidence Check, a program that helps policymakers commission high-quality rapid reviews of research in 6-8 weeks.
Outcomes: relevance and policy impacts of the review product; relevance and accuracy of reviews.
Like study 1 (above), barriers to research use and the value of concise syntheses were identified, but the emphasis here was on the limitations of formal systematic reviews (e.g. [17]); policymakers' need for timely, accessible and applicable answers to specific questions; the benefit of linkage with researchers; and the use of knowledge brokers as expert boundary spanners who can facilitate communication and enable the production of better-targeted syntheses (e.g. [18][19][20][21]).
Participants reported high levels of satisfaction with the knowledge brokering process and the reviews produced. Knowledge brokering helped to refine research questions, shape project parameters (e.g. scope, budget, timeframe) and facilitate communication with researchers. The reviews were seen as useful, with mostly indirect impacts, e.g. informing policy deliberations and identifying evidence gaps. Independent researchers assessed the reviews as accurately reflecting the current body of evidence.

Study 4.
The authors note the trend towards models of knowledge translation that emphasise complexity and policymaker-researcher interactions [23]. Seminal research use typologies are referenced (e.g. [24,25]). A framework of categories that support research use is identified from the researchers' previous studies and guides the data analysis (although not cited, this framework has much in common with the diffusion of innovations framework developed by Greenhalgh et al. [26]).
Few participants read the research documents. Dissemination workshops had the greatest impact as they bypassed the need for skills in reading, appraising and interpreting research and did not contribute to information overload. However, workshop attendance was uneven. It was hard to get Ministry staff, especially those at senior levels, to participate in the intervention or evaluation and there were no discernible impacts at national level. Some regional policymakers were unaware of the research findings, but others used them instrumentally, conceptually and tactically.

Study 5.
Dobbins et al. 2001 [27, 28] Intervention study that tested the extent to which health decision-makers used policy-relevant systematic reviews that were provided by the research team.
24-month trial with five systematic reviews (on topics of current policy relevance) disseminated once.
Outcomes: extent to which policymakers used the reviews in decision-making; what characteristics predicted use at the levels of the review, the individual policymaker, the organisation and/or the environment.
This study was framed by diffusion of innovations theories (e.g. [29]). It drew links between research use in practice and policy in relation to the impact of multiple forms of evidence, the power of personal attributes and experience, and the complex processes whereby new initiatives are adopted (e.g. [30][31][32]). An unpublished (and undescribed) framework guided the study. Survey instruments derived from previous studies, two of which combined concepts from multiple studies, mostly in nursing (e.g. [32][33][34][35]).
63% of respondents said they had used at least one systematic review to make a decision. Reviews were most useful for program justification and planning, but had little impact on evaluation decisions. Predictors of use were: organisational position (managers and directors were significantly more likely to use a review than clinicians), expecting to use a review in the future, and perceptions that the reviews were easy to use and compensated for limited critical appraisal skills. Their impact was rated more highly in agencies with higher existing levels of support for research use. (The two articles cited in relation to this study are complementary articles about the same study, so aspects of both are synthesised here. Public health units are municipal-level agencies with legislative responsibility for research-informed program planning and evaluation; see Kothari [36].)

Study 6.
(See also Traynor et al. 2014, study 17 below.) Experimental (randomised controlled) trial comparing: (a) access to an online registry of systematic reviews, or (b) access to the registry plus tailored weekly messages, or (c) access to the registry plus tailored messages and knowledge brokering.
Health departments were stratified and randomly allocated to the three groups. Over the 12-month trial there was ongoing access to the registries. Tailored messages were sent weekly (×7). Knowledge brokers communicated more than once monthly, made a site visit of 1-2 days, and hosted workshops and webinars.

Domain: Access and systems improvement
Policymakers and program managers in regional and local public health departments. Barriers to research use were identified, including time constraints, research availability, and policymakers' limited capacity to appraise and translate studies (e.g. [19,33]). References the value of tailored and targeted messaging, and of knowledge brokers, in improving the use of systematic reviews (e.g. [37,38]). The study is guided by a framework that integrates concepts from diffusion of innovations [29] about the stages of adoption of new initiatives, plus concepts from the authors' own work about the characteristics that mediate the uptake of research.
In most policy areas, the intervention had no significant effect on evidence-informed decision-making, with no significant difference between the three intervention groups in the extent to which research was used. In public health there was a significant between-group difference in research use only when access to both systematic reviews and tailored messages were combined [36]. Having access to an online registry of research appeared to have no impact at all. Knowledge brokering also appeared to be ineffective, but may trend toward a positive effect when organisational research culture is perceived as low.

Study 7.
Dwan et al. 2015 [39] Observational study of a facilitated engagement strategy that enables researchers to present contextualised findings to policymakers. Included validation of an evaluation instrument.
The study is framed by literature focusing on the complexity of getting research into policy [40]. The goal is conceptualised as research mobilisation (rather than transfer or translation [41]) and as linkage between researchers and policymakers [42] that counters the two-communities divide [43]. The need for tailored information and the situated nature of research usefulness are emphasised [44,45]. Forums were based on exchanges in previous studies (e.g. [46]).
Participants indicated that the forums had broadened their knowledge and stimulated thinking. Over three quarters indicated that the forums' content was directly applicable to their work and that they might be able to use it. The content of roundtables was more applicable than that of seminars, but was no more effective in stimulating thinking or broadening participants' knowledge. International speakers were rated as especially effective. Nearly 90% had used research in the past 12 months and said they would use it more if it were easily available.

The interplay of evidence and politics is noted [48,49], as are the capabilities required to use research effectively in this complex environment [50], which are often lacking in LMICs [51]. Capacity is conceptualised at multiple levels as per the UK Department for International Development [52]. Institutional capacity is regarded as especially critical for sustained research-informed policymaking. The authors cite Ward et al. [53] on the role of interaction as an explanatory feature in research transfer models, and frame the results using categories of capacity described by Moore et al. [54].

ACCs were conceptualised as boundary organisations [56] intended to support cross-sector collaboration that would, in turn, foster Research→Policy [57].
The study's program theory (which was programmatic rather than theoretical) guided data collection and analysis. Findings build on previous ACC studies [58], including a study that took an interpretive hermeneutic approach [59].
The ACC provided a platform for dialogue and interaction, but project collaborations did not extend into enduring partnerships. Most committees functioned well but thematic groups were less successful due to lack of support from managers. Overall, policymakers were less involved than researchers and practitioners. New research proposals were written but non-researcher involvement was limited and traditional research designs were used. The number of projects and participants increased over time, but the structure and density of networks was unchanged.
2005 [60] Experimental case-controlled study that tested whether policymakers were more likely to use a research report if they were involved in its production.
12-month trial with ongoing feedback and one presentation for the three 'involved' units. Both they and the three comparison units received a copy of the final report. The authors hypothesise that formal policymaker-researcher linkage and exchange [42] will create shared agendas, solutions, practices, lexicons and goals that bridge the two communities [43] and counter static research transfer models. The study was guided by a conceptual model of stages and types of research use ([61] and [49]).
Staff within units that were involved in the production of a research report were more likely to receive the report, to understand it better and to value it more than units that were not involved, but actual use was not affected. Both involved and comparison units used the research findings to confirm that their program activities were consistent with evidence, and to compare their program performance relative to other units.

Gaps between knowledge and practice [23] are tied to disconnects between researchers and knowledge-users [43]. Collaboration in research development and dissemination is described and advocated for [63][64][65][66]. The authors argue that, despite recent reviews [67,68], collaboration remains a 'black box' and greater understanding of partnerships is needed [69]. The PreVAiL network was based on public health approaches to violence. Collaborative development of the questionnaire and indicators is described elsewhere [45]. Thematic data analysis was used [70].
Participation rates varied from 11% to 79%. The network was seen as beneficial for individuals and organisations. 75% of PIQ respondents felt their contributions were valued. Partners used the network as a source of synthesised information, but tended to contact the same researchers. Some partners functioned as an 'information conduit' to their own organisation. There were collaborations in writing papers, winning grants and speaking at events, but a desire for greater collaboration on grants, research proposals and advocacy.

Outcomes: identify any lessons learnt about the process and impact of the Policy BUDDIES strategy.
Focuses on the need for demand-driven research [73] and the importance of organisational culture in fostering research use [74]. Intervention design was based on studies highlighting the centrality of partnership approaches and trusting relationships, alongside the need to ensure that the research used is robust and valid (e.g. [75]).
Data collection drew on Walt and Gilson's policy analysis framework [76].
Buddying helped policymakers to value and use research evidence, but also built the capacity of researchers to understand policy needs and provide useful support. Buddies were perceived as more objective than other experts. Interactions were necessarily iterative and required equality and trust. Institutional support and incentives for using research were important barriers/facilitators to policymakers' involvement in generating and using evidence. Champions drove policymakers' ownership of the initiative.

Study 13.
Pappaioanou et al. 2003 [77] Intervention study that tested an intervention to strengthen the capacity of policy staff to collect, analyse, report and use epidemiological data. The study drew on literature [78] and on the need to involve users in systems design [79]. It aimed to reduce barriers including: the failure of researchers to produce high-quality, timely, accessible research and their lack of participation in interpretation [80]; poor systems for accessing policy-relevant information; and the need for policymakers to understand and trust health data [81].
All countries trained policy staff (a) to use data and (b) to train others to use it. Participants reported that the training taught them how to work as part of a public health team, empowered them to use data to identify critical community health problems, helped them understand their local decision-making environment, and helped them set achievable outcome-oriented goals and formulate and implement plans to tackle them. Quantitative skills assessment data are not reported. The intervention was found to improve data-informed public health in all countries. Some country-specific impacts are identified.


Study 14.
Peirson et al. 2012 [82] Observational evaluation of the implementation and impacts of a strategic plan for using research in decision-making. The intervention was developed using strategies from outside healthcare identified in an earlier study, based on the hypotheses that research-informed policymaking requires a culture of critical inquiry, staff capacity and tools for research use, and improved organisational knowledge management [83]. Collaborative data collection and analysis was informed by key texts in organisational change (e.g. [26,29,84,85]), knowledge exchange [23] and implementation [86].

Study 15.
[87] Intervention evaluation of a year-long program to build the capacity of government decision-makers to use HIV data strategically.
The intervention comprised block weeks of training in: HIV interventions and situational analysis; descriptive and analytic epidemiology; HIV surveillance; and evaluation. Regional teams were mentored by researchers to complete a practical project that they presented for assessment. 92% of participants felt the course met their expectations and all said it was relevant to their work. Self-reported skills improved: trainees could collect, analyse and interpret data effectively, use the findings, and carry out work tasks confidently. The expert panel judged that trainees had learned core skills in using data but needed to refine their analyses and correct some errors. Some trainees went on to train their colleagues. Retention increased in subsequent cohorts (from 65% to 87% and then 92%) after program improvements, e.g. a shorter, more intensive course and the addition of mentors from outside academia.
Mentorship was hypothesised to be the critical mechanism of change.
Stakeholders agreed the course contributed to skills capacity in Ethiopia.


Study 16.
Shroff et al. 2015 [90] Observational evaluation of five projects resulting from a WHO initiative to "catalyse" the use of health research in policy via push, pull and exchange activities.
Interventions were specified locally but could include: platforms to produce and communicate research; training programs; establishing data usage units within ministries of health; developing and using evidence briefs; and hosting policy dialogues or other forums for connecting researchers and policymakers. There was considerable variation in intervention activities and their intensity.
In more successful projects the use of research was aided by a combination of: enthusiastic policymakers in research-orientated ministries; research topics that were policy priorities and also interested the researchers; the availability of reliable, easy-to-understand research; positive research/policy relationships; clear expected outcomes; thorough dissemination of findings; and strong project leadership. The use of multiple strategies targeting different domains was thought to be beneficial. The practice of establishing research centres in ministries is suggested.

Identifies the gaps between research and policy [105], the lack of health agenda ownership by policymakers in low- and middle-income countries (LMICs) [106], and the non-linear, non-rational process of policymaking [107]. The use and design of workshops was founded on work by Poulos et al. [108]. Statistical data analysis used methods developed for LMICs [109], and phenomenological analysis of focus group data followed Giorgi [110].
Attendance rate was 84%. Of the 81 participants, 64% were policymakers. Pre-forum, these participants supported the intervention goals and believed that research could provide sound and relevant guidance for more effective, efficient and sustainable health systems.
Post-forum, participants reported they had greater understanding of how to: access research and assess its policy relevance; synthesise and present research; transform research into policy; and amplify the impact of research in policy.

Study 19.
Uneke et al. 2015a [111] Intervention [5]. The authors recognise the politicisation of policymaking and the need to incorporate different stakeholder perspectives in policy options [112]. Intervention design draws on studies that emphasise the benefits of training workshops and mentoring [113]. The intervention is modelled on HPACs in other countries and premised on the assertion that regular interaction between policymakers and researchers can address the gaps between them [112,116]. It follows Choi et al. [117], who suggest that collaboration can increase policymakers' capacity to apply a "science lens" to policymaking, and researchers' capacity to be "policy sensitive". It aims for a systematic and transparent appraisal of research within policy processes [5]. The evaluation drew on qualitative methods within a case study approach [101,118].
Participants reported: increased understanding of practical knowledge translation, including how to access and use research; markedly reduced distrust between policymakers and researchers; and greater ability to promote research-informed policymaking within the Ministry of Health. The evidence brief produced by the HPAC has been published [119] and is under consideration by the MoH. HPAC members called for performance measurements and institutional support to ensure continuation and independence. The authors note the need for continual training and interaction if HPAC productivity is to be sustained.

Study 21.
Waqa et al. 2013 [120] Observational process evaluation of a tailored intervention to build policymakers' capacity to produce evidence briefs. The authors hypothesised that increased researcher-policymaker interactions, facilitated by knowledge brokers [36], promote research use in policymaking [8,26]. Intervention strategies were informed by previous studies, including: the use of an advisory panel; gaining high-level organisational buy-in via 'concept papers' [121]; targeted skills development [82]; supported development and presentation of evidence briefs [122]; and assessment of existing skills and support for using research [123]. Use of process diaries was based on Waters et al. [124].
55% of participants completed the 12-18-month intervention, 63% of these produced one or more briefs (n=20), and 5 organisations developed templates for constructing future briefs. The knowledge brokering team spent an average of 30 hours per participant.
Organisations with higher levels of internal support for using research developed more briefs. The program's success was built on partnerships with high-level policy staff in each organisation, which were formalised and resulted in strong organisational commitment to the project, but it was undermined by high staff turnover.

Cited literature includes [8,42,126,127]. The lack of knowledge about how interaction contributes to research use is noted [63,126]. The study questions the hypothesis that structural support for interaction is sufficient to facilitate meaningful communication and connection [128]. Information about the intervention design is not available in English. No literature is cited as informing the data collection or analysis.
Goals were undermined by differences between partners in views, values and expectations. In the first presentation, policymakers perceived the researchers as poor communicators who were too focused on methodology, and found the results inaccessible and lacking policy usefulness. Results were repackaged using scenarios to highlight policy relevance, and a carefully managed public forum was held. This was perceived as successful in presenting scientifically robust findings that were also accessible and applicable.
Findings have influenced problem definition and agenda-setting, and paved the way for further research-policy collaborations.