The role of evidence in policy
Research into the role of evidence in policy-making constitutes a field of considerable breadth. Interestingly, a large proportion of empirical work on the science–policy interface has taken little notice of the historical dimensions of the phenomenon sketched in the preceding section. Barriers to and facilitators of knowledge transfer are a well-recognized research topic [62, p. 735] with a set of established results. The bulk of the work on knowledge transfer pertains to barriers and facilitators with respect to evidence-based policy-making [51, 62, 64, 67]. Academics have strong motivations to give policy advice, in terms both of demonstrating “impact” to funders and of making a difference to society [66]. Boswell (2008, 2009) identifies three functions of expert knowledge in policy-making (cited after Holm and Ploug [45, p. 15]): (1) as an instrument to achieve a (given) aim (instrumental function), (2) to confer epistemic authority and thereby legitimacy (legitimating function) and (3) to substantiate already formed policy preferences (symbolic function). To illustrate, Christensen [15, p. 293] points out that at least since the end of the Second World War, economics has attracted particular interest from policy-makers and has come to confer more legitimacy on policy advice than other forms of knowledge. This has been variously attributed to the (supposed) role of economics in ensuring prosperity and to economics bestowing an aura of rationality on decision-making (ibid.).
Even though there is a vast body of literature studying impact, the relative importance of different factors has not been established [40]. With few exceptions, the literature on the topic fails to clearly define or analyse the fundamental concepts (“research uptake”, “impact”, “policy advice”) and the problems that are at stake. Notable counterexamples include works by Weiss [87], Mitton et al. [62]—who build upon Weiss’s model—and Blewden et al. [4], whereas Cairney, along with various collaborators, has published extensively on the latter issue [8, 10, 66, 88]. The respective literature can be grouped as follows:
(1) Problem diagnosis: empirical evidence for the evidence–policy gap (mostly qualitative, i.e. interviews and surveys)
(2) Problem solving: recommendations for researchers on how to “bridge the evidence–policy gap”
(3) Critique: the diagnosis of an evidence–policy gap results from a normative problem definition and an analysis based on a rudimentary (common-sense) understanding of policy processes.
Here, we focus on the first and second groups to distil the common denominator of the empirical work on why research uptake does (not) work. As Oliver and Cairney [66] point out, the advice offered to academics wishing to engage with policy-makers is frequently inconsistent. We would like to add the observation that analyses of the science–policy interface are, if not inconsistent, then frequently uninformative. In fact, while there is now a large body of work documenting barriers to the use of scientific evidence in policy processes, taken at face value, most of the advice for overcoming barriers to uptake appears commonsensical and generic. Most of it uncritically assumes a gap between academia and policy, otherwise known as the evidence–policy gap [19], that “needs bridging” [71], often going so far as “using the exact phrasing” [66, p. 3] to suggest (rather than demonstrate) that the advice offered will help foster research uptake. This literature further assumes that policy is rarely based on data and that greater use of evidence will produce better outcomes, an assumption that remains empirically untested [66]. At best, studies recommend process-related improvements (e.g. increasing transparency) [19]. Empirical work is frequently case-based, without contextualization of the multifaceted processes underlying policy development [64, 67, p. 4].
Key factors in research uptake: relationships, resources and research skills
Debates about the nature of the problems have spawned a varied literature engaging with policy advice from empirical and theoretical standpoints. In drafting this section, we predominantly relied on four narrative reviews of barriers and facilitators for the use of evidence by policy-makers [10, 54, 64, 67], which we supplemented with empirical studies. Contacts and relationships (social capital) are reported throughout the literature as major facilitators of evidence use [64, 67, p. 7]. According to Oliver et al. [64, 67, p. 4], timing and opportunity are the most important factors, along with (dis)trust and mutual (dis)respect. Policy-makers seek information that is timely, relevant, credible and available [42, p. 7]. Organizational factors such as (lack of) access to scientific results, (lack of) material and personnel resources and managerial support, and inflexible and nontransparent policy processes are mentioned frequently (ibid. 4 f.). Quality, relevance and reliability of research as well as presentation formats act as facilitators (ibid. 6). However, accessible communication of research involves trade-offs: clear writing makes research more digestible, but at increased cost for researchers [60]. Respondents value researchers who exhibit competence (pragmatism, reputation), integrity (faithful representation of research), independence (more important to politicians) and benevolence/commitment [65, p. 122]. For research to be effective in policy-making, a fundamental requirement is effective communication (e.g. [17]), a responsibility ascribed to researchers (e.g. [41]). Lack of understanding/awareness of research on the part of policy-makers was reported as a barrier (ibid. 6), as were (lack of) personal experience, values and judgements. Respondents scarcely attribute lack of uptake to the policy process itself [10]. Indeed, the literature often bemoans a general lack of reflection on policy processes (ibid.). Research uptake is further enabled/hampered by organizational constraints, the influence of fads and trends on the policy process [87], corruption and ideology, as well as cultural beliefs [39]. The most frequent organizational barriers to research uptake were limited resources (financial or personnel), time constraints (to make decisions or participate in training), high staff turnover and institutional resistance to change [20]. On the other hand, decision-makers’ willingness to create a culture of knowledge translation and to invest resources was mentioned as a facilitator.
In what follows, we identify key factors that the literature holds (not) to be conducive to research uptake: (i) quality of relationships and informants, (ii) resources and access to research, (iii) communication formats and policy-makers’ research skills, and (iv) the policy context and discrepancies in values and goals. As we will demonstrate, the value proposition of open science relates directly or indirectly to several of these factors, which suggests that the genericity of the analysis of barriers carries over to the proposition that open science will enhance research uptake.
(i) Relationship quality and quality of informants Relationship quality is a well-recognized research area. Collaboration between researchers and policy-makers, relationships and skills are the most frequently reported facilitators of research uptake [64, 67]. (Long-term) collaboration starting in the early stages of knowledge production is favoured by researchers and policy-makers alike [14]. Mutual mistrust is a well-researched barrier [13, 23, 34, 35, 41]. Researchers are advised to build better communication channels and relationships with policy-makers [31, 34, 38, 42]. While policy-makers worry about bias in research, researchers describe policy processes as biased [35]. Positivism is thereby an artefact of requests for unbiased truth. The strategies employed by researchers to influence policy are likewise value-laden and cannot be understood solely as evidence-based [9]. Because researchers and policy-makers belong to different communities [55], the role of the knowledge broker has gained importance [28] in facilitating knowledge transfer [32]. Sustained dialogue between researchers and policy-makers is essential for the development of researchers’ perspectives, in-depth knowledge of the policy process, and credibility [31]. This aspect of the problem mirrors the constraints posed by differences in timescales [4, 11, 13, 18, 22]. The prevalence of informal contacts entails that science–policy interactions lack transparency [44]. Policy-makers treat scientific input as an internal concern, with the effect that recommendations by committees remain invisible. Oliver et al. [64, 67] document a growing body of research stressing the serendipitous nature of the policy process, which gives primacy to informal contacts. In these environments, formalized advice through contract research does not promote transparency, but shifting to research programmes has boosted transparency regarding beneficiary institutions, funding amounts, topics and publication of results [44].
Policy-makers’ advisors are located either inside or outside public bodies [15, p. 295]. The current knowledge transfer landscape includes a set of (more or less) formalized roles [63, p. 3]. Policy-makers trust government sources as well as advocacy, industry and lobby groups, and experts [17, p. 844]. For information, policy-makers trust their networks and personal contacts most [65]; academics are rarely represented in these [65, p. 122]. As their research awareness is low, policy-makers prefer opinion leaders as information sources. Few academics participate directly in the decision-making process (ibid.). Policy-makers prefer local experts, governmental agencies and websites to academic publications. They predominantly seek (quantitative) data and statistics [17, p. 842], but also use other information that they consider relevant and timely [64, 67].
(ii) Organizational factors and access to academic resources Lack of resources is a frequent barrier to academic policy advice [10, p. 400]. Resources are invested to the extent that knowledge exchange is deemed profitable [16, p. 462]. Researchers tend to expect knowledge transfer to produce immediate results [4]. However, the temporal structure of policy-making is ill-attuned to academic influence [48, p. 205], as the timescales of policy-making are shorter than those of academia [42]. Time constraints keep policy-makers from engaging directly with research. Timely access to good-quality research is conducive to uptake; poor access and a lack of timely research output are frequent barriers [64, 67], as is the short-term nature of research funding [23, p. 467]. Knowledge transfer is deeply embedded in organizational, institutional and policy contexts [16, p. 468], which influence how relationships between academia and government evolve [42, p. 7], but it is not featured in tenure/promotion criteria [35]. Patterns of evidence use and management vary across domains and organizational types [43].
Access to information is important for research uptake [64, 67]. Policy-makers need relevant research to make well-informed decisions [12]. Costs associated with access inhibit research uptake, and public servants use their university affiliations to circumvent this [63]. Research needs to be both accessible (to potential users) and acceptable (in terms of the evidence provided) [60, p. 303]. Accessibility enables timely use of evidence; acceptability can mean scientific acceptability (valid methods, unbiased results, modelling assumptions), institutional acceptability (evidence meets the institutional needs of the decision-maker) or ethical acceptability [60]. There are trade-offs between the accessibility and the acceptability of research findings: the use of a statistical apparatus might improve the acceptability of a certain evidence base, but only at the cost of its accessibility to nonexperts. External funding may similarly increase accessibility but harm scientific acceptability [60]. The propensity of organizations for research uptake depends on formal and informal structures for organizational learning [1]. Translation via up-to-date research syntheses that are easier to consume and less likely to be biased could help [38]. Systematic reviews are regarded as fundamental in transferring evidence from medical and health research to health policy-making [1, 58, 83], but even systematic reviews require translation [39], which underlines the importance of intermediaries [34]. Formal structures within research-performing institutions, along with mechanisms to make syntheses available, could facilitate research uptake [1, 34]. Given policy-makers’ preference for personal contacts, the availability and accessibility of scholarly publications is of secondary concern [65].
(iii) Communication formats and research skills Scholarly communication via peer-reviewed publications is ill-attuned to the needs of policy-makers, who prefer personal contacts [41]. Potential experts are identified on the basis of their engagement with the literature, through conferences, personal networks and reputation (e.g. past committee memberships), through media presence, and sometimes through self-identification [17, 41]. Oral forms of communication are more commonly used than written material; the ability to communicate clearly and concisely is highly sought after. Policy-makers prefer personal contacts; formal procedures to identify experts are rare [65].
Policy-making involves such heterogeneous actors as politicians, public servants, administrators, lobbyists and interest groups [76]. Evidence helps decision-makers reduce uncertainty, but policy-makers rely on beliefs and emotions in choosing a problem interpretation [10]. Policy-makers’ abilities in finding and making sense of evidence facilitate research uptake [12, 64, 67]. Policy-makers struggle with knowledge management and have difficulties appraising research [18; 42, p. 7], in addition to lacking financial resources, knowledge, attitudes and skills [18]. Because uptake depends on skills in data interpretation and analysis, mere access to data and other research outputs (systematic reviews, individual studies, grey literature) is not sufficient [53].
(iv) Policy context and discrepancies in norms and goals The policy context is fundamental for the use of evidence [4, 16, 31, 49]. Policy-making is an unpredictable, long-term, multilevel process involving networks of policy-makers, paradigms and norms, with priorities shifting in quick succession [10, p. 400; 11, p. 544]. The inclusion of academics and interest groups in the policy process is subject to cultural differences [44]. Research needs to be policy-relevant in the first place to be considered by policy-makers [74], but this is only a necessary (not a sufficient) condition. Policy-making and academia have different goals and success criteria [45, p. 8]. Policy is not driven by neutral scientific evidence. Policy-makers are motivated by factors other than research evidence [43, p. 474]. The policy process is inherently normative, involving interests and power relations and necessarily depending on policy-makers’ preferences, goals and values [48, p. 204; 43, p. 473]. These deliberative aspects are difficult to account for in problem-centred analyses of knowledge transfer [23, p. 467]. Evidence pertains to ends and means [43, p. 473], and needs to be embedded in action proposals [16, p. 459]. Researchers work with small, clearly defined problems, whereas policy-makers address problems holistically [48, p. 204], discrepancies that make collaboration prone to conflict (e.g. [13]). Collaboration is not neutral; it works best when research goals match policy aims [6]. Discrepancies in norms and values influence how the potential for research uptake is perceived [59]; the internal validity of information does not by itself determine whether it is used [16, p. 457]. Research uptake therefore depends on relevance to a given policy context [87], the legitimacy of knowledge producers and accessibility [16, p. 460]. Even where the impact of scientific evidence on policy advice is evident (e.g. [15]), it is not clear whether changes in the culture of policy advice have an impact on policies. The same can be said for research more generally. In addition to having to answer questions of implementation, policy-makers need to worry about being re-elected and about striking compromises between competing groups. All these factors limit the extent to which policies can be evidence-based.