Open science at the science–policy interface: bringing in the evidence?

Abstract

Part of the current enthusiasm about open science stems from its promises to reform scientific practice in service of the common good, to ensure that scientific outputs will be found and reused more easily, and to enhance scientific impact on policy and society. With this article, we question this optimism by analysing the potential for open science practices to enhance research uptake at the science–policy interface. Science advice is critical to help policy-makers make informed decisions. Likewise, some interpretations of open science hold that making research processes and outputs more transparent and accessible will also enhance the uptake of results by policy and society at large. However, we argue that this hope is based on an unjustifiably simplistic understanding of the science–policy interface that leaves key terms (“impact”, “uptake”) undefined. We show that this understanding—based upon linear models of research uptake—likewise grounds the influential “evidence–policy gap” diagnosis which holds that to improve research uptake, communication and interaction between researchers and policy-makers need to be improved. The overall normative stance of both discussions has sidelined empirical description of the science–policy interface, ignoring questions about the underlying differences between the policy domain and academia. Importantly, both open science and literature on closing the evidence–policy gap recommend improving communication (in terms of either the content or the means) as a viable strategy. To correct some of these views, we combine insights from policy theory with a narrative review of the literature on the evidence–policy gap in the health domain and find that removing barriers to access by itself will not be enough to foster research uptake.

Introduction: the role of science in policy-making

Whereas in earlier decades, public servants played a dominant role in policy advice, they are now expected to consult external sources (academia, stakeholder organizations, think tanks, political organizations) [42]. At least in industrialized countries, governments increasingly rely on external advice and (scientific) evidence as a way to improve governance [44]. This has led to increased interest in understanding the interlinkages between policy-making and expertise [56]. The goal of research in government is to find information that will help to solve specific, predefined policy problems in real time [31]. While the evidence-based policy (EBP) movement was instrumental in promoting rigorous analysis of policy options and programmes [43], a salient issue in the literature concerns the fact that topical scientific expertise often goes unused in the policy decisions it pertains to, despite its availability [36, 39], a problem that has been labelled the “evidence–policy gap” [39] and has been singled out as a major obstacle to reaching the United Nations’ Sustainable Development Goals in public health [69].

In this paper, we investigate literature addressing processes of knowledge integration in the policy domain, focusing on the claim that open science is an effective way to enhance the uptake of scientific knowledge by policy-makers. In fact, utilization of publicly available scientific results and data by policy-makers has been claimed to be one of the benefits of open science [63, 82]. Likewise, a variety of measures have been proposed for researchers who wish to make an impact on policy-makers. Like advocates of open science, analysts of the evidence–policy gap recommend improving communication and other forms of interaction between researchers and policy-makers to close the gap. We argue that inquiring into the role of open science in policy-making can be regarded as a variant of the more general issue of research utilization or research uptake in policy-making. Open science thereby acts as a lens through which the science–policy interface can be studied by asking: How would scientific policy advice be improved if we improved scholarly communication? To this end, we reviewed literature studying the ways in which knowledge enters the policy process as well as literature documenting the reforms to scholarly communication and research practice envisioned by open science.

To the best of our knowledge, science advice has been most comprehensively problematized in the health policy domain, under the heading of an evidence–policy gap [7, 10, 14, 17, 21, 35, 40, 65]. This literature identifies concrete barriers to research utilization and offers suggestions to researchers as to how those barriers could be overcome. Yet, the relationship between policy-makers and researchers at the science–policy interface is rarely scrutinized and is frequently described as one of mutual misunderstanding or outright mistrust [41], in which researchers and policy-makers are incapable of successful communication.

We begin our paper with a brief exposition of the development and tenets of the open science movement. Then we briefly review the history of the science–policy interface to interrogate the potential for open science practices to enhance scientific policy advice. Based on a narrative literature review, we describe how the reviewed literature frames the science–policy relationship, how it suggests the uptake of scientific knowledge in the policy domain can be enhanced, and how the (potential) contribution of open science is assessed. Before concluding, we introduce three analytical points to conceptually inform the interpretation of our findings. This work is based upon and significantly extends a project deliverable prepared for the European Commission under Grant Agreement No. 824612 (ON-MERRIT—Observing and Negating Matthew Effects in Responsible Research and Innovation Transition) [73]. For the deliverable, a systematic literature review was performed in Scopus and Web of Science. Articles were selected based on abstract and keywords, and articles from the resulting sample were additionally hand-selected based on references to (aspects of) open science. Results were then summarized, validated and synthesized. The relevant literature was restricted to (1) all (peer-reviewed) materials pertaining to policy-making, and (2) all materials referring to (aspects of) open science within that corpus. For the analysis, a triaging strategy was used (similar to the approach described in Contandriopoulos et al. [16]) to keep the amount of text manageable. The obtained data are narrative (i.e. in the form of published research articles), so the primary analysis strategy was narrative as well. Combining a summative approach and an analytical approach, we produced a synthesis document of how the literature describes research uptake. We included a total of 115 articles in the review.

Conceptual background

A brief exposition of open science

The term “open science” encompasses a variety of meanings ranging from publicizing research outputs—open access in its various forms—to making all aspects of the research process accessible [26], including data (e.g. [30]), notebooks, analysis plans and code [46, 72], as well as research evaluation and peer review [75, 79]. Open science denotes a bundle of practices and associated ideas such as accessibility, reproducibility, (data) sharing and collaboration [84], but is often used interchangeably with open access and open data [2]. With respect to the science–policy interface, some ([82], see also the Budapest Open Access Initiative of 2002 [63]) have suggested that open access facilitates knowledge utilization in policy-making.

In light of the plurality of these approaches, open science is best described as an umbrella term for a programme of reforming science by reforming scholarly communication, with a clearly normative thrust (i.e. to make science better). Proponents have taken for granted the premise that academia is in need of top-down reform, a claim that has been couched in the superficially positive terminology of “openness”, without clearly stating in what way science was ever “closed”. In an attempt to cluster the existing landscape of open science interpretations, Fecher and Friesike described the various approaches in terms of five “schools”: “The infrastructure school (which is concerned with the technological architecture), the public school (which is concerned with the participation in knowledge creation), the measurement school (which is concerned with alternative impact measurement), the democratic school (which is concerned with access to knowledge) and the pragmatic school (which is concerned with collaborative research)” [25]. Following this framework, we argue that dominant interpretations of open science concern science-to-science interaction and collaboration, while only part of what sails under the banner of open science addresses science-to-public relationships and engagement. In essence, the “public school” asks how science can be open to collaboration by the public (for the most part understood as citizen science) and how science can benefit the public (Footnote 1). Open science advocates have suggested that the removal of access barriers will serve to boost public trust in science as well as evidence-based policy-making, with particular emphasis on enrolling the public [37]. Others [81] argue, in the same vein, that the trend towards increased openness holds the promise of more public engagement. Some have claimed that open science continues a long-standing agenda of fostering participatory research. Civil society actors, users, patients, nongovernmental organizations, industry and other societal stakeholders are not only said to benefit from open access to scientific outputs [82] but, crucially, are regarded as resourceful contributors to processes of knowledge production. Citizen science is therefore highly valued in open science as furthering public engagement throughout the research process, from data collection and analysis to the publication and evaluation of research findings [84], as well as facilitating dialogue between science and society [52].

In the next section, we prepare our main point—that both the open science literature and the literature on policy advice frequently fall short of delivering adequate empirical descriptions, in particular as regards the science–policy interface—by describing how theorizing about the science–policy interface developed in the second half of the twentieth century.

Three phases of the science–policy interface

The way in which science and research engage with government underwent a considerable evolution in the second half of the twentieth century [73]. As the science–policy interface evolved, so did conceptual models developed by social and political scientists [32]. Before the Second World War, governments were entrusted with defining the common good. Postwar Europe saw increasing numbers of scholars engaging with policy in an era of increased investment in science [80]. Maasen and Weingart [57] identify several historical developments conducive to the changing relationship between science and policy-making. For the United States, Weingart [85, 86] diagnosed increasing formalization of communication between science and policy that he traced to President Dwight Eisenhower’s 1957 appointment of a science and technology advisor and committee. This model still exists, at least in part [32], but has since been supplemented by more sophisticated models of science advice (e.g. Mode 2 Science, cf. [29]).

In a recent review, Sokolovska et al. [80] describe three broad historical phases that the science–policy interface underwent in the second half of the twentieth century, each with its associated policy model. The first phase (“linear models”) corresponds to a linear understanding of the science–policy interface, where advice is conceived as a one-way communication process [80] based on a dichotomy of facts (science) and values (policy). The linear model assumes that knowledge integration works rationally from problem identification to problem solution, and that appeals to scientific data and established facts will lead to better problem characterizations and, by extension, better policy. This linear approach to the science–policy interface has since come under criticism: it is now understood that the linear model does not adequately reflect the complexity of the science–policy interface [8] and the processes by which knowledge enters the policy sphere [47]. The second phase (“interactive models”) is characterized by the assumption that scientists and policy-makers collaborate in a nonhierarchical way in search of the best problem solutions. With respect to the science–policy interface, the role of communicating the results of evidence synthesis falls to the knowledge broker [32], as described in Roger Pielke’s book The Honest Broker [70]. The third and, to date, final phase (“participatory models”) revolves around the question of engaging society at large in the policy process. Participatory models regard the science–policy interface as a discursive process between researchers and policy-makers [80] and are similar to interactive models in this respect; however, they are characterized by intense reflection on “formats of societal engagement and the language of communication on the science–policy interface” [80], for example in terms of changing relationships between academia and the larger society in the triple-helix model [24], which similarly constitutes a shift away from linear models of the science–society relationship.

Note that the three models are ideal types; they are designed to help guide our theorizing. In that sense, they are not mutually exclusive—that is, they can (and frequently do) coexist. Each model corresponds to a class of strategies and techniques for managing the science–policy interface that researchers (can) adopt. Each phase thereby follows its own logic [80]. The family of linear models is of particular interest to our research aim. In the following sections, we analyse a body of empirical work that describes the science–policy interface as a gap. Based on an in-depth analysis of this literature, we argue that this diagnosis builds upon an ultimately inadequate linear understanding of the science–policy interface.

Methodology

Identification of relevant studies

We proceeded according to the following steps: identification of relevant studies, selection of eligible studies based on abstract and keywords, summary of the results, and narrative synthesis. The authors conducted abstract/title searches in electronic databases (Web of Science, Scopus, PubMed) in October 2019 and January 2020 using combinations of the following search terms (Table 1).

Table 1 Overview of search terms
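To illustrate how such term combinations can be assembled into database queries, the following minimal sketch builds Scopus-style Boolean search strings from two hypothetical term lists. The term lists and the field syntax are illustrative assumptions; they do not reproduce the exact strings from Table 1.

```python
from itertools import product

# Hypothetical term lists standing in for the columns of Table 1;
# the actual search terms used in the review are not reproduced here.
uptake_terms = ["research uptake", "evidence-based policy", "knowledge transfer"]
openness_terms = ["open science", "open access", "open data"]

def build_queries(group_a, group_b):
    """Combine every term from one group with every term from the other
    into simple Boolean query strings for abstract/title/keyword searches."""
    return [f'TITLE-ABS-KEY("{a}" AND "{b}")' for a, b in product(group_a, group_b)]

for query in build_queries(uptake_terms, openness_terms):
    print(query)
```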

Selection procedure

We conducted a literature search based on standardized keyword strings built from the search terms above, producing 491 studies (Footnote 2). The strategy involved first identifying relevant literature on research uptake and then excluding items that lacked references to open science principles and practices. After deduplication, retrieved articles were categorized thematically. In total, 73 articles were included in the analysis based on title and abstract. The following inclusion criteria were applied:

  • (National and international) studies attempting to understand or improve academic policy advice/research uptake

  • (National and international) studies attempting to understand or improve the policy process

  • Available in English

  • Full text could be obtained

  • Peer-reviewed (review article, commentary, editorial, conference paper, research article)—grey literature was excluded

  • Any kind of methodology (quantitative, qualitative, review) was eligible.

One important downside of this methodology is that it only identifies contributions that define research uptake as a problem, leaving out studies that employ a different problem definition. Effectively, this means that fields which do influence policy-making but where the mechanisms are taken for granted are possibly absent from this review.
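As a rough illustration of the deduplication and open science screening steps described above, the sketch below removes duplicate records retrieved from several databases and flags those whose title or abstract mentions (aspects of) open science. The file name, column names and keyword markers are illustrative assumptions rather than the actual screening instrument; the final inclusion decisions were in any case made by reading titles and abstracts against the criteria listed above.

```python
import csv

def deduplicate(records):
    """Drop duplicate records retrieved from multiple databases,
    keyed on DOI where available, otherwise on the lower-cased title."""
    seen, unique = set(), []
    for rec in records:
        key = (rec.get("doi") or rec.get("title", "")).strip().lower()
        if key and key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

def mentions_open_science(rec, markers=("open science", "open access", "open data")):
    """Flag records whose title or abstract refers to (aspects of) open science."""
    text = f"{rec.get('title', '')} {rec.get('abstract', '')}".lower()
    return any(marker in text for marker in markers)

# Assumed input: a CSV export of the retrieved records with
# 'doi', 'title' and 'abstract' columns (the file name is illustrative).
with open("retrieved_records.csv", newline="", encoding="utf-8") as fh:
    records = list(csv.DictReader(fh))

candidates = [r for r in deduplicate(records) if mentions_open_science(r)]
print(f"{len(candidates)} records flagged for title/abstract screening")
```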

Summarizing the results

Following the approach in Contandriopoulos et al. [16, p. 453], we combine a summative and an analytical approach, moving from article synopses to syntheses of how knowledge transfer is problematized with respect to open science practices. Whereas narrative analysis was used to focus on problem definitions and contextual factors, a summative approach was employed for describing commonalities and identifying causal mechanisms of knowledge transfer.

Literature review: how research does (not) inform policy

The role of evidence in policy

Research into the role of evidence in policy-making constitutes a field of considerable breadth. Interestingly, a large proportion of empirical work on the science–policy interface has paid little attention to the historical dimensions of the phenomenon sketched in the preceding section. Barriers to and facilitators of knowledge transfer are a well-recognized research topic [62, p. 735] with a set of established results. The bulk of the work on knowledge transfer pertains to barriers and facilitators with respect to evidence-based policy-making [51, 62, 64, 67]. Academics have strong motivations to give policy advice, in terms of both demonstrating “impact” to funders and making a difference to society [66]. Boswell (2008, 2009) identifies three functions of expert knowledge in policy-making (cited after Holm and Ploug [45, p. 15]): (1) as an instrument to achieve a (given) aim (instrumental function), (2) to confer epistemic authority and thereby legitimacy (legitimating function) and (3) to substantiate already formed policy preferences (symbolic function). To illustrate, Christensen [15, p. 293] points out that at least since the end of the Second World War, policy-makers have taken a particular interest in economics, which has come to bestow more legitimacy on policy advice than other forms of knowledge. This has been variously attributed to the (supposed) role of economics in ensuring prosperity and to economics bestowing an aura of rationality on decision-making (ibid.).

Even though there is a vast body of literature studying impact, the relative importance of different factors has not been established [40]. With few exceptions, the literature on the topic fails to clearly define or analyse the fundamental concepts (“research uptake”, “impact”, “policy advice”) and the problems that are at stake. Notable counterexamples include works by Weiss [87], Mitton et al. [62]—who build upon Weiss’s model—and Blewden et al. [4], whereas Cairney, along with various collaborators, has published extensively on the latter issue [8, 10, 66, 88]. The respective literature can be grouped as follows:

  1. Problem diagnosis: empirical evidence for the evidence–policy gap (mostly qualitative, i.e. interviews and surveys)

  2. Problem solving: recommendations for researchers on how to “bridge the evidence–policy gap”

  3. Critique: the diagnosis of an evidence–policy gap results from a normative problem definition and an analysis based on a rudimentary (common sense) understanding of policy processes.

Here, we focus on the first and second groups to distil the common denominator of the empirical work on why research uptake does (not) work. As Oliver and Cairney [66] point out, the advice offered to academics wishing to engage with policy-makers is frequently inconsistent. We would add the observation that the analyses of the science–policy interface are, if not inconsistent, then frequently uninformative. In fact, while there is now a large body of work documenting barriers to the use of scientific evidence in policy processes, taken at face value, most of the advice for overcoming barriers to uptake appears commonsensical and generic. Most of it uncritically assumes a gap between academia and policy, otherwise known as the evidence–policy gap [19], that “needs bridging” [71], often going so far as “using the exact phrasing” [66, p. 3] to suggest (rather than demonstrate) that the advice offered will help foster research uptake. This literature further assumes that policy is rarely based on data, and that greater use of evidence will produce better outcomes, an assumption that remains empirically untested [66]. At best, studies recommend process-related improvements (e.g. by reference to increasing transparency) [19]. Empirical work is frequently case-based, without contextualization of the multifaceted processes underlying policy development [64, 67, p. 4].

Key factors in research uptake: relationships, resources and research skills

Debates about the nature of the problems have spawned a varied literature engaging with policy advice from empirical and theoretical standpoints. In drafting this section, we predominantly relied on four narrative reviews of barriers to and facilitators of the use of evidence by policy-makers [10, 54, 64, 67], which we supplemented with empirical studies. Contacts and relationships (social capital) are reported throughout the literature as major facilitators of evidence use [64, 67, p. 7]. According to Oliver et al. [64, 67, p. 4], timing and opportunity are the most important factors, along with (dis)trust and mutual (dis)respect. Policy-makers seek information that is timely, relevant, credible and available [42, p. 7]. Organizational factors such as (lack of) access to scientific results, (lack of) material and personnel resources and managerial support, and inflexible and nontransparent policy processes are mentioned frequently (ibid., p. 4 f.). Quality, relevance and reliability of research as well as presentation formats act as facilitators (ibid., p. 6). However, accessible communication of research involves trade-offs: clear writing makes research more digestible, but at increased cost for researchers [60]. Respondents value researchers who exhibit competence (pragmatism/reputation), integrity (faithful representation of research), independence (more important to politicians), and benevolence/commitment [65, p. 122]. For research to be effective in policy-making, a fundamental requirement is effective communication (e.g. [17]), a responsibility ascribed to researchers (e.g. [41]). Lack of understanding/awareness of research on the part of policy-makers was reported as a barrier (ibid., p. 6), as were (lack of) personal experience, values and judgements. Respondents scarcely attribute lack of uptake to the policy process itself [10]. Indeed, the literature often bemoans a general lack of reflection on policy processes (ibid.). Research uptake is further enabled/hampered by organizational constraints, the influence of fads and trends on the policy process [87], corruption and ideology, as well as cultural beliefs [39]. The most frequent organizational barriers to research uptake were limited resources (financial or personnel), time constraints (to make decisions or participate in training), high staff turnover and institutional resistance towards change [20]. On the other hand, decision-makers’ willingness to create a culture of knowledge translation and to invest resources was mentioned as a facilitator.

In what follows, we identify key factors that the literature holds (not) to be conducive to research uptake: (i) quality of relationships and informants, (ii) resources and access to research, (iii) communication formats and policy-makers’ research skills, and (iv) the policy context and discrepancies in values and goals. As we will demonstrate, the value proposition of open science directly or indirectly relates to several of these factors, which suggests that the genericity of the analysis of barriers carries over to the proposition that open science will enhance research uptake.

(i) Relationship quality and quality of informants Relationship quality is a well-recognized research area. Collaboration between researchers and policy-makers, along with relationships and skills, are the most frequently reported facilitators of research uptake [64, 67]. (Long-term) collaboration starting in the early stages of knowledge production is favoured by researchers and policy-makers alike [14]. Mutual mistrust is a well-researched barrier [13, 23, 34, 35, 41]. Researchers are advised to build better communication channels and relationships with policy-makers [31, 34, 38, 42]. While policy-makers worry about bias in research, researchers qualify policy processes as biased [35]. Positivism is thereby an artefact of requests for unbiased truth. The strategies employed by researchers to influence policy are likewise value-laden and cannot be understood solely as evidence-based [9]. Because researchers and policy-makers belong to different communities [55], the role of the knowledge broker has gained importance [28] in facilitating knowledge transfer [32]. Sustained dialogue between researchers and policy-makers is essential for the development of researchers’ perspectives, in-depth knowledge of the policy process, and credibility [31]. This aspect of the problem mirrors the constraints posed by differences in timescales [4, 11, 13, 18, 22]. The prevalence of informal contacts entails that science–policy interactions lack transparency [44]. Policy-makers treat scientific input as an internal concern, with the effect that recommendations by committees remain invisible. Oliver et al. [64, 67] document an increasing amount of research stressing the serendipitous nature of the policy process which gives primacy to informal contacts. In these environments, formalized advice through contract research does not promote transparency, but shifting to research programmes has boosted transparency regarding beneficiary institutions, funding amounts, topics and publication of results [44].

Policy-makers’ advisors reside either inside or outside public bodies [15, p. 295]. The current knowledge transfer landscape includes a set of (more or less) formalized roles [63, p. 3]. Policy-makers trust government sources as well as advocacy, industry and lobby groups, and experts [17, p. 844]. Policy-makers trust their networks and personal contacts most for information [65]; academics are rarely represented in them [65, p. 122]. As their research awareness is low, policy-makers prefer opinion leaders as information sources. Few academics participate directly in the decision-making process (ibid.). Policy-makers prefer local experts, governmental agencies and websites to academic publications. Policy-makers predominantly seek (quantitative) data and statistics [17, p. 842], but also use other information which they consider relevant and timely [64, 67].

(ii) Organizational factors and access to academic resources Lack of resources is a frequent barrier to academic policy advice [10, p. 400]. Resources are invested to the extent knowledge exchange is deemed profitable [16, p. 462]. Researchers tend to expect knowledge transfer to produce immediate results [4]. However, the temporal structure of policy-making is ill-attuned to academic influence [48, p. 205], as timescales of policy-making are shorter than those of academia [42]. Time constraints keep policy-makers from directly engaging with research. Timely access to good-quality research is conducive to uptake; poor access and lack of timely research output are frequent barriers [64, 67], as is the short-term nature of research funding [23, p. 467]. Knowledge transfer is deeply embedded in organizational, institutional and policy contexts [16, p. 468] which influence how relationships between academia and government evolve [42, p. 7], but is not featured in tenure/promotion criteria [35]. Patterns of evidence use and management vary across domains and across organizational types [43].

Access to information is important in research uptake [64, 67]. Policy-makers need relevant research to make well-informed decisions [12]. Costs associated with access inhibit research uptake, and public servants use their university affiliations to circumvent this [63]. Research needs to be both accessible (to potential users) and acceptable (in terms of the evidence provided) [60, p. 303]. Accessibility enables timely use of evidence; acceptability can mean scientific acceptability (valid methods, unbiased results, modelling assumptions), institutional acceptability (evidence meets the institutional needs of the decision-maker) or ethical acceptability [60]. There are trade-offs between the accessibility and the acceptability of research findings, such that the use of a statistical apparatus might improve the acceptability of a certain evidence base, but only at the cost of its accessibility to nonexperts. External funding may similarly increase accessibility but harm scientific acceptability [60]. The propensity of organizations for research uptake depends on formal and informal structures for organizational learning [1]. Translation via up-to-date research syntheses that are easier to consume and less likely to be biased could help [38]. Systematic reviews are regarded as fundamental in transferring evidence from medical and health research to health policy-making [1, 58, 83], but even systematic reviews require translation [39], underscoring the importance of intermediaries [34]. Formal structures within research-performing institutions, along with mechanisms to make syntheses available, could facilitate research uptake [1, 34]. Given policy-makers’ preference for personal contacts, the availability and accessibility of scholarly publications is of secondary concern [65].

(iii) Communication formats and research skills Scholarly communication via peer-reviewed publications is ill-attuned to the needs of policy-makers who prefer personal contacts [41]. Potential experts are identified based on engagement with literature, through conferences, personal networks and reputation (e.g. past committee memberships), media presence, and sometimes through self-identification [17, 41]. Oral forms of communication are more commonly used than written material; the ability to communicate clearly and concisely is highly sought after. Policy-makers prefer personal contacts; formal procedures to identify experts are rare [65].

Policy-making involves such heterogeneous actors as politicians, public servants, administrators, lobbyists and interest groups [76]. Evidence helps decision-makers reduce uncertainty, but policy-makers rely on beliefs and emotions in choosing a problem interpretation [10]. Policy-makers’ abilities in finding and making sense of evidence facilitate research uptake [12, 64, 67]. Policy-makers struggle with knowledge management and have difficulties appraising research [18; 42, p. 7], in addition to lacking financial resources, knowledge, attitudes and skills [18]. Because uptake depends on data interpretation and analysis skills, mere access to data and other research outputs (systematic reviews, individual studies, grey literature) is not sufficient [53].

(iv) Policy context and discrepancies in norms and goals The policy context is fundamental for the use of evidence [4, 16, 31, 49]. Policy-making is an unpredictable, long-term, multilevel process involving networks of policy-makers, paradigms and norms amid a quick succession of priorities [10, p. 400; 11, p. 544]. The inclusion of academics and interest groups in the policy process is subject to cultural differences [44]. Research needs to be policy-relevant in the first place to be considered by policy-makers [74], but this is only a necessary (not a sufficient) condition. Policy-making and academia have different goals and success criteria [45, p. 8]. Policy is not driven by neutral scientific evidence. Policy-makers are motivated by factors other than research evidence [43, p. 474]. The policy process is inherently normative, involving interests and power relations and necessarily depending on policy-makers’ preferences, goals and values [48, p. 204; 43, p. 473]. These deliberative aspects are difficult to account for in problem-centred analyses of knowledge transfer [23, p. 467]. Evidence pertains to ends and means [43, p. 473], and needs to be embedded in action proposals [16, p. 459]. Researchers work with small, clearly defined problems, whereas policy-makers address problems holistically [48, p. 204]—discrepancies that make collaboration prone to conflict (e.g. [13]). Collaboration is not neutral; it works best when research goals match policy aims [6]. Discrepancies in norms and values influence how the potential for research uptake is perceived [59]; internal validity of information does not by itself influence the use of information [16, p. 457]. Research uptake therefore depends upon relevance to a given policy context [87], legitimacy (of knowledge producers) and accessibility [16, p. 460]. Even where the impact of scientific evidence on policy advice is evident (e.g. [15]), it is not clear whether changes in the culture of policy advice have an impact on policies. The same can be said for research more generally. In addition to having to answer questions of implementation, policy-makers need to worry about being re-elected and striking compromises between competing groups. All these factors limit the extent to which policies can be evidence-based.

Discussion

In an attempt to interpret the claims of the reviewed literature, we propose considering the following three analytical points. We start with Cairney and collaborators, who argue that the literature on the evidence–policy gap lacks the conceptual resources to describe how policy processes work. If Cairney et al.’s [10] analysis is correct, a deeper understanding of the way policy processes work is necessary to understand how policy advice benefits decision-making. However, instead of attempting to understand policy processes, “researchers have directed their attention at how to increase their own outputs, rather than on understanding the processes behind policy change” [67]. Consequently, the mechanisms by which scientific advice may benefit policy processes are rarely explicated. Even though policy models largely remain implicit in the reviewed literature, we argue that most studies rely on a linear model of the science–policy interface. The linear model frames scientists as producers of expert knowledge and locates the main challenge in finding effective forms of communication that deliver the required knowledge to policy-makers. This linear model corresponds to representative democracy [77], suggesting that policy-making takes place primarily in (national, regional, municipal) parliaments and effectively boils down to decisions made by the elected representatives in these assemblies.

In light of the contemporary literature, linear conceptions of the science–policy interface are rather outdated. Linear models, which dominated the postwar era, were superseded by interactive approaches [80]. Today, science–policy models seek to include a broad set of stakeholders and civil society actors in policy deliberation. This type of knowledge integration in the policy domain, however, draws on participatory democracy and consequently involves policy arenas beyond legislative assemblies [33]. This conceptual consideration of the literature on the science–policy interface is crucially important, as it points to the conspicuous absence of interactive and deliberative approaches to policy-making in the reviewed literature. The conceptual preference for the linear model not only singles out deficient communication as the main problem, but also entails preferred ways to overcome the shortcomings identified above.

Along the lines of policy advice for actors of representative democracy, Gollust et al. [35] point out that scientists and policy-makers live in different worlds and often do not understand or appreciate the specifics (needs, requirements, logics) of the other [13, 35, 41, 45] (Footnote 3). Most of the literature on the evidence–policy gap implicitly assumes a “two cultures model”, highlighting the

differences in academic and political “cultures”: language and jargon, longer scientific timescales, low incentives to engage, differing perceptions of scientific knowledge, and the relative need for scientists to challenge evidence (to ensure it is robust) but for policy makers to generate an image of policy certainty and reconcile evidence with well-established beliefs. There is also a perception that policy makers rely on personal experience, ad hoc links with experts, people they know and trust, and simple decision-making techniques and stories rather than the state-of-the-art in scientific research and sophisticated modeling systems (Lomas and Brown 2009, 906). [10, p. 400]

This model turns on an—ultimately untenable—separation of two aspects of policy advice: reducing uncertainty [9, 10] versus reducing ambiguity and increasing clarity [9]. Cairney et al. [10] further point out that a large proportion of the empirical work has focused on a third aspect that concerns improving the flow of information between decision-makers and researchers. The evidence–policy gap has thus been defined in terms of improving the evidence base, better communication of information, improved interaction with policy-makers, better timing, and the use of knowledge brokers [74].

Finally, we argue that the opacity of the policy-making process in the reviewed literature corresponds to the epistemological opacity of the scientific knowledge in question. By and large, the literature presents scientific knowledge as sacrosanct; scientific knowledge offered in the form of policy advice is taken for granted. Once published, the epistemic status (validity, credibility) of scientific knowledge is no longer questioned, and the knowledge can thus be delivered to policy-makers, notwithstanding the fact that knowledge users might require translation. A deeper analysis of the tenets of the literature on research uptake therefore shows that positivist conceptions of knowledge correspond to a specific view of policy-making as parliamentarian decision-making, which in turn corresponds to a linear model of the science–policy interface. Seen from this point of view, scientific knowledge is sought after because of its desired capacity to solve policy problems. Likewise, influential interpretations of open science as the provision of open access to scientific outputs are telling in that they omit interpretations that foreground the significance of knowledge creation (public participation and engagement).

It is once more interesting to note the absence of alternative approaches in the reviewed literature. For instance, the literature on the evidence–policy gap never speaks to constructivist epistemologies (knowledge as co-constructed by researchers, policy-makers and civil society actors). Equally, the epistemic role of users, patient groups, (health) professionals or communities of practice is rarely touched upon. Epistemologies corresponding to participatory models of policy deliberation understand scientific knowledge as co-constructed, context-dependent and situated. It follows that for knowledge to be policy-relevant and effectively implemented, it matters that it is created in a participatory form.

As we have attempted to show, the absence of contributions that explicitly draw on participatory policy models and related forms of knowledge production is indicative of the implicit assumptions in the reviewed literature on the evidence–policy gap.

Conclusion

Scientific policy advice is essential for policy-makers, both in terms of addressing immediate issues and in terms of long-term planning. Based on a literature review, we investigated the role of open science and its potential to enhance the uptake of scientific knowledge in the policy domain. Our main findings are as follows:

  1. Our analysis identified a promissory discourse about the potential of open science (e.g. [63, 82]). In hypothetical language, it is assumed that open science might, would or could increase the uptake of scientific knowledge in the policy domain; however, no evidence has yet been presented to substantiate such optimistic claims.

  2. We found that a surprisingly small body of literature on the uptake of scientific knowledge in the policy domain explicitly addresses the role of open science practices in this regard.

  3. By contrast, we found a fairly developed discourse in the medical literature that recognizes a low uptake of scientific knowledge in the policy domain and frames this problem as an “evidence–policy gap”. This literature concludes that:

     (i) Scientists and policy-makers live in different worlds and accordingly pursue incommensurable logics.

     (ii) The improvement of communication skills on the part of scientists would be effective in bridging the evidence–policy gap.

     (iii) The literature on the evidence–policy gap predominantly draws on the linear model of the science–policy interface, a conception that has since been criticized as rather outdated [80].

  4. Finally, we found a glaring absence of engagement with newer literature on the integration of scientific knowledge in the policy domain. By that we refer to conceptions of the science–policy interface that point to the importance of deliberative and participatory practices. Recognition of newer approaches such as responsible research and innovation was equally absent from the reviewed literature.

In light of these findings, we conclude that the potential for open science practices to bridge the observed evidence–policy gap has not yet been demonstrated. Accordingly, the claim that open science can boost the utilization of publicly available scientific results and data by policy-makers is empirically largely unsubstantiated. Critical observers have noted that accessibility of the scientific literature is neither necessary nor sufficient for research uptake [68, 78]. What matters instead is how a more open, participatory form of research affects scientific knowledge production in the first place. These rather sobering findings raise further questions with respect to modelling the science–policy interface. As we have argued, expectations to the effect that open science will increase research uptake are often linked to a linear understanding of knowledge transfer.

Against this backdrop, we point to the relevance of more recent literature on the science–policy interface. In particular, the consultation of a broader set of stakeholders has been suggested as a more inclusive and effective form of knowledge integration in the policy domain [80]. Future attempts to realize the potential of open science may therefore benefit from turning to deliberative forms of policy advice. In such a way, open science may have more to offer than what is now conceived almost exclusively as the provision of access to scientific outputs and more effective scholarly communication. This potential is even greater where participation is made possible from the outset of knowledge-making processes (upstream engagement). Indeed, participatory approaches and the contribution of more inclusive forms of knowledge-making have seen some success [27, 33]. However, these efforts were not framed in terms of open science, but rather as deliberative democracy [50], participatory technology assessment [5], public engagement [3], or responsible research and innovation [68]. A stronger mutual recognition of these as yet separate bodies of literature may help to advance the integration of scientific knowledge in the policy domain.

Availability of data and materials

This contribution is based upon a literature review. The materials used are therefore available subject to copyright agreements between the authors of these publications and their publishers.

Notes

  1. The term “democratic school” is ambiguous in this context, since Fecher and Friesike refer to access to scientific knowledge without having a specific user in mind, addressing instead a generic subject of knowledge [25, p. 25]. The knowledge provided is not appropriated by nonscientific audiences, and provision therefore boils down to science-to-science communication of research outputs.

  2. For a full list of search strings, databases searched and search results, see Reichmann et al. [73], p. 93 ff.

  3. Choi et al. [13] argue in a similar vein that researchers and policy-makers differ substantially in their goals, values, attitudes towards information, languages, perceptions of time, and career paths. If correct, this entails that researchers need to balance conflicting roles and identities when engaging with policy-makers in policy contexts. These roles have been described with reference to Mertonian disinterestedness [61] as an attitude associated with argumentative rigour and rationality, which bears the associated risk of policy irrelevance [40].

References

  1. Abekah-Nkrumah G, et al. A review of the process of knowledge transfer and use of evidence in reproductive and child health in Ghana. Health Res Policy Syst. 2018. https://doi.org/10.1186/s12961-018-0350-9.

  2. Albornoz D, et al. Framing power: tracing key discourses in open science policies. In: 22nd International Conference on Electronic Publishing—Connecting the Knowledge Commons: From Projects to Sustainable Infrastructure (ELPUB 2018), Toronto. 2018. https://doi.org/10.4000/proceedings.elpub.2018.23.

  3. Bekker M, et al. Linking research and policy in Dutch healthcare: infrastructure, innovations and impacts. Evid Policy. 2010;6(2):237–53. https://doi.org/10.1332/174426410X502464.

  4. Blewden M, Carroll P, Witten DK. The use of social science research to inform policy development: case studies from recent immigration policy. Kōtuitui N Z J Soc Sci Online. 2010;5(1):13–25. https://doi.org/10.1080/1175083X.2010.498087.

  5. Böschen S et al. Technikfolgenabschätzung: Handbuch für Wissenschaft und Praxis. 1. Auflage. Baden-Baden: Nomos (Nomos Handbuch). 2021. Available at: https://doi.org/10.5771/9783748901990 (Accessed: 19 April 2022).

  6. Boyko JA, Kothari A, Wathen CN. Moving knowledge about family violence into public health policy and practice: a mixed method study of a deliberative dialogue. Health Res Policy Syst. 2016;14:31.

  7. Brownson RC, Jones E. Bridging the gap: translating research into policy and practice. Prev Med. 2009;49(4):313–5. https://doi.org/10.1016/j.ypmed.2009.06.008.

  8. Cairney P. The politics of evidence-based policy making. Palgrave Macmillan; 2016.

  9. Cairney P, Oliver K. Evidence-based policymaking is not like evidence-based medicine, so how far should you go to bridge the divide between evidence and policy? Health Res Policy Syst. 2017;15(1):1–11. https://doi.org/10.1186/s12961-017-0192-x.

  10. Cairney P, Oliver K, Wellstead A. To bridge the divide between evidence and policy: reduce ambiguity as much as uncertainty. Public Adm Rev. 2016;76(3):399–402. https://doi.org/10.1111/puar.12555.

  11. Cairney P, Rummery K. Feminising politics to close the evidence-policy gap: the case of social policy in Scotland. Aust J Public Adm. 2018;77(4):542. https://doi.org/10.1111/1467-8500.12266.

  12. Cambon L, et al. Evaluation of a knowledge transfer scheme to improve policy making and practices in health promotion and disease prevention setting in French regions: a realist study protocol. Implement Sci. 2017;12:83. https://doi.org/10.1186/s13012-017-0612-x.

  13. Choi BCK, et al. Can scientists and policy makers work together? J Epidemiol Community Health. 2005;59(8):632. https://doi.org/10.1136/jech.2004.031765.

  14. Choi BCK, et al. Bridging the gap between science and policy: an international survey of scientists and policy makers in China and Canada. Implement Sci. 2016. https://doi.org/10.1186/s13012-016-0377-7.

  15. Christensen J. Economic knowledge and the scientization of policy advice. Policy Sci. 2018;51(3):291–311. https://doi.org/10.1007/s11077-018-9316-6.

  16. Contandriopoulos D, et al. Knowledge exchange processes in organizations and policy arenas: a narrative systematic review of the literature. Milbank Q. 2010;88(4):444–83. https://doi.org/10.1111/j.1468-0009.2010.00608.x.

  17. Dodson EA, Geary NA, Brownson RC. State legislators’ sources and use of information: bridging the gap between research and policy. Health Educ Res. 2015;30(6):840–8. https://doi.org/10.1093/her/cyv044.

  18. Ellen ME, et al. Determining research knowledge infrastructure for healthcare systems: a qualitative study. Implementation Sci. 2011. https://doi.org/10.1186/1748-5908-6-60.

  19. Ellen ME, Léon G, et al. Barriers, facilitators and views about next steps to implementing supports for evidence-informed decision-making in health systems: a qualitative study. Implementation Sci: IS. 2014;9:179. https://doi.org/10.1186/s13012-014-0179-8.

  20. Ellen ME, Lavis JN, et al. Health systems and policy research evidence in health policy making in Israel: what are researchers’ practices in transferring knowledge to policy makers? Health Res Policy Syst BioMed Central. 2014;12:67. https://doi.org/10.1186/1478-4505-12-67.

  21. Ellen ME, et al. ‘How is the use of research evidence in health policy perceived? A comparison between the reporting of researchers and policy-makers. Health Res Policy Syst. 2018. https://doi.org/10.1186/s12961-018-0345-6.

  22. Ellen ME, Lavis JN, Shemer J. Examining the use of health systems and policy research in the health policymaking process in Israel: views of researchers. Health Res Policy Syst. 2016. https://doi.org/10.1186/s12961-016-0139-7.

  23. Elliott H, Popay J. How are policy makers using evidence? Models of research utilisation and local NHS policy making. J Epidemiol Community Health. 2000;54(6):461. https://doi.org/10.1136/jech.54.6.461.

  24. Etzkowitz H, Leydesdorff L, et al. The second academic revolution: the role of the research university in economic development. In: Cozzens SE, et al., editors. The research system in transition. Dordrecht and Boston: Kluwer; 1990. p. 109–24.

  25. Fecher B, Friesike S. Open science: one term, five schools of thought. In: Bartling S, Friesike S, editors. Opening science. Cham: Springer International Publishing; 2014. p. 17–47. https://doi.org/10.1007/978-3-319-00026-8_2.

  26. Fell MJ. The economic impacts of open science: a rapid evidence assessment. Publications. 2019;7(3):46. https://doi.org/10.3390/publications7030046.

  27. Fischer F. Citizens, experts, and the environment: the politics of local knowledge. 4th ed. Durham: Duke University Press; 2005.

  28. Gagnon S, Mailhot C, Ziam S. The role and contribution of an intermediary organisation in the implementation of an interactive knowledge transfer model. Evid Policy. 2019;15(1):7–29. https://doi.org/10.1332/174426418X15166967955544.

  29. Gibbons M, et al. The new production of knowledge: the dynamics of science and research in contemporary societies. London: Sage Publications; 1994.

  30. Giffels J. Sharing data is a shared responsibility. Sci Eng Ethics. 2010;16(4):801–3. https://doi.org/10.1007/s11948-010-9230-6.

  31. Glied S, Wittenberg R, Israeli A. Research in government and academia: the case of health policy. Israel J Health Policy Res. 2018;7(1):1–8. https://doi.org/10.1186/s13584-018-0230-3.

  32. Gluckman PD, Bardsley A, Kaiser M. Brokerage at the science–policy interface: from conceptual framework to practical guidance. Humanit Soc Sci Commun. 2021;8(1):1–10. https://doi.org/10.1057/s41599-021-00756-3.

  33. Gohl C, et al. Eine gute beratene Demokratie ist eine gut beratene Demokratie. In: Dagger S, et al., editors. Politikberatung in Deutschland: Praxis und Perspektiven. 1st ed. Wiesbaden: Verlag für Sozialwissenschaften; 2004. p. 200–15.

  34. Gold M. Pathways to the use of health services research in policy. Health Serv Res. 2009;44(4):1111–36. https://doi.org/10.1111/j.1475-6773.2009.00958.x.

  35. Gollust SE, Seymour JW, Pany MJ, Goss A, et al. Mutual distrust: perspectives from researchers and policy makers on the research to policy gap in 2013 and recommendations for the future. Inquiry. 2017;54:1–11. https://doi.org/10.1177/0046958017705465.

  36. Graham ID, et al. Lost in knowledge translation: time for a map? J Contin Educ Heal Prof. 2006;26(1):13–24. https://doi.org/10.1002/chp.47.

  37. Grand A, et al. Mapping the hinterland: data issues in open science. Public Underst Sci. 2016;25(1):88–103. https://doi.org/10.1177/0963662514530374.

  38. Grimshaw JM, et al. Knowledge translation of research findings. Implementation Sci: IS. 2012;7:50. https://doi.org/10.1186/1748-5908-7-50.

  39. Haines A, Kuruvilla S, Borchert M. Bridging the implementation gap between knowledge and action for health. Bull World Health Org. 2004;82(10):724–31.

  40. Haynes AS, et al. From “our world” to the “real world”: exploring the views and behaviour of policy-influential Australian public health researchers. Soc Sci Med (1982). 2011;72(7):1047–55. https://doi.org/10.1016/j.socscimed.2011.02.004.

  41. Haynes AS, et al. Identifying trustworthy experts: how do policymakers find and assess public health researchers worth consulting or collaborating with? PLoS ONE. 2012;7(3): e32665. https://doi.org/10.1371/journal.pone.0032665.

  42. Head BW. Relationships between policy academics and public servants: learning at a distance? Aust J Public Adm. 2015;74(1):5–12. https://doi.org/10.1111/1467-8500.12133.

  43. Head BW. Toward more “Evidence-Informed” policy making? Public Adm Rev. 2016;76(3):472–84. https://doi.org/10.1111/puar.12475.

  44. Hermann AT, et al. Cultural imprints on scientific policy advice: climate science-policy interactions within Austrian neo-corporatism. Environ Policy Gov. 2015;25(5):343. https://doi.org/10.1002/eet.1674.

  45. Holm S, Ploug T. The use of empirical evidence in formulating reproductive policy advice and policy. Monash Bioeth Rev. 2015;33(1):7–17. https://doi.org/10.1007/s40592-015-0020-4.

  46. Ibanez L, Avila R, Aylward S. Open Source and Open Science: how it is changing the medical imaging community. In: 3rd IEEE International Symposium on Biomedical Imaging: Nano to Macro, 2006. p. 690–3. https://doi.org/10.1109/ISBI.2006.1625010.

  47. Jasanoff S. Designs on Nature. Science and Democracy in Europe and the United States. Princeton and Oxford: Princeton University Press; 2005.

  48. Kothari A, et al. Indicators at the interface: managing policymaker-researcher collaboration. Knowl Manag Res Pract. 2011;9(3):203–14. https://doi.org/10.1057/kmrp.2011.16.

  49. Krick E. Creating participatory expert bodies. How the targeted selection of policy advisers can bridge the epistemic-democratic divide. Eur Politics Soc. 2019;20(1):101–16. https://doi.org/10.1080/23745118.2018.1515865.

  50. Lafont C. Unverkürzte Demokratie. Frankfurt am Main: Suhrkamp; 2021. https://www.suhrkamp.de/buch/cristina-lafont-unverkuerzte-demokratie-t-9783518587645. Accessed 19 Apr 2022.

  51. Lavis JN, et al. How can research organizations more effectively transfer research knowledge to decision makers? Milbank Q. 2003;81(2):221–48. https://doi.org/10.1111/1468-0009.t01-1-00052.

  52. Leonelli S, Spichtinger D, Prainsack B. Sticks and carrots: encouraging open science at its source. Geo. 2015;2(1):12–6. https://doi.org/10.1002/geo2.2.

  53. Lillefjell M, Knudtsen MS. From knowledge to action in public health management: experiences from a Norwegian context. Scand J Public Health. 2013;41(8):771–7. https://doi.org/10.1177/1403494813496600.

  54. Liverani M, Hawkins B, Parkhurst JO. Political and institutional influences on the use of evidence in public health policy: a systematic review. PLoS ONE. 2013;8(10):e77404. https://doi.org/10.1371/journal.pone.0077404.

  55. Löblová O. Epistemic communities and experts in health policy-making. Eur J Public Health. 2018;28(suppl_3):7–10. https://doi.org/10.1093/eurpub/cky156.

  56. Lundin M, Öberg P. Expert knowledge use and deliberation in local policy making. Policy Sci. 2014;47(1):25–49. https://doi.org/10.1007/s11077-013-9182-1.

  57. Maasen S, Weingart P, editors. Democratization of expertise? Exploring novel forms of scientific advice in political decision-making. Springer Netherlands (Sociology of the Sciences Yearbook); 2005. https://doi.org/10.1007/1-4020-3754-6.

  58. Marquez C, et al. Enhancing the uptake of systematic reviews of effects: what is the best format for health care managers and policy-makers? A mixed-methods study. Implement Sci. 2018;13:84. https://doi.org/10.1186/s13012-018-0779-9.

  59. Martin G, Currie G, Lockett A. Prospects for knowledge exchange in health policy and management: institutional and epistemic boundaries. J Health Serv Res Policy. 2011;16(4):211–7. https://doi.org/10.1258/jhsrp.2011.010132.

  60. Merlo G, et al. Bridging the gap: exploring the barriers to using economic evidence in healthcare decision making and strategies for improving uptake. Appl Health Econ Health Policy. 2015;13(3):303–9. https://doi.org/10.1007/s40258-014-0132-7.

  61. Merton RK. The normative structure of science. In: The sociology of science. Chicago: The University of Chicago Press; 1973. p. 267–80.

  62. Mitton C, et al. Knowledge transfer and exchange: review and synthesis of the literature. Milbank Q. 2007;85(4):729–68. https://doi.org/10.1111/j.1468-0009.2007.00506.x.

  63. Olesk A, Kaal E, Toom K. The possibilities of Open Science for knowledge transfer in the science-policy interface. J Sci Commun. 2019. https://doi.org/10.22323/2.18030203.

  64. Oliver K, et al. A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Serv Res. 2014;14(2):1–12. https://doi.org/10.1186/1472-6963-14-2.

  65. Oliver K, et al. Identifying public health policymakers’ sources of information: comparing survey and network analyses. Eur J Public Health. 2017;27:118–23. https://doi.org/10.1093/eurpub/ckv083.

  66. Oliver K, Cairney P. The dos and don’ts of influencing policy: a systematic review of advice to academics. Palgrave Commun. 2019;5(1):1–11. https://doi.org/10.1057/s41599-019-0232-y.

  67. Oliver K, Lorenc T, Innvær S. New directions in evidence-based policy research: a critical analysis of the literature. Health Res Policy Syst. 2014;12(34):1–12. https://doi.org/10.1186/1478-4505-12-34.

  68. Owen R, Pansera M. Responsible innovation and responsible research and innovation. In: Handbook on science and public policy. Northampton: Edward Elgar Publishing; 2019. p. 26–48.

  69. Panisset U, et al. Implementation research evidence uptake and use for policy-making. Health Res Policy Syst. 2012;10:20. https://doi.org/10.1186/1478-4505-10-20.

  70. Pielke RA. The honest broker: making sense of science in policy and politics. Cambridge: Cambridge University Press; 2007.

  71. Rajić A, Young I, McEwen SA. Improving the utilization of research knowledge in agri-food public health: a mixed-method review of knowledge translation and transfer. Foodborne Pathog Dis. 2013;10(5):397–412. https://doi.org/10.1089/fpd.2012.1349.

  72. Ram K. Git can facilitate greater reproducibility and increased transparency in science. Source Code Biol Med. 2013;8(1):7. https://doi.org/10.1186/1751-0473-8-7.

  73. Reichmann S, Wieser B, Ross-Hellauer T. ON-MERRIT D5.1 Scoping report: open science outputs in policy-making and public participation. 2020. https://doi.org/10.5281/zenodo.3875055.

  74. Rose DC, et al. Improving the use of evidence in legislatures: the case of the UK Parliament. Evidence & Policy. 2019 [Preprint]. http://centaur.reading.ac.uk/88497/. Accessed 16 Apr 2020.

  75. Ross-Hellauer T. What is open peer review? A systematic review. F1000Research. 2017;6:588. https://doi.org/10.12688/f1000research.11369.2.

  76. Saretzki T. Evidence-based policy-making? The meaning of scientific knowledge in policy processes. Z Evid Fortbild Qual Gesundhwes. 2019;144–145:78–83. https://doi.org/10.1016/j.zefq.2019.05.008.

  77. Schmidt MG. Demokratietheorien: Eine Einführung. Wiesbaden: Springer Fachmedien Wiesbaden; 2019. https://doi.org/10.1007/978-3-658-25839-9.

  78. von Schomberg R. Why Responsible Innovation. In: The International Handbook on Responsible Innovation: A Global Resource. Cheltenham: Edward Elgar Publishing; 2019.

  79. Shanahan DR, Olsen BR. Opening peer-review: the democracy of science. J Negat Results Biomed. 2014;13(1):2. https://doi.org/10.1186/1477-5751-13-2.

  80. Sokolovska N, Fecher B, Wagner GG. Communication on the science-policy interface: an overview of conceptual models. Publications. 2019;7(4):64. https://doi.org/10.3390/publications7040064.

  81. Stilgoe J. Why Responsible Innovation? In: Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society. Chichester: Wiley; 2013. p. xi–xv.

  82. Tennant J, Jacques D, Collister L. The academic, economic and societal impacts of Open Access: an evidence-based review. F1000Research. 2016. https://doi.org/10.12688/f1000research.8460.1.

  83. Tugwell P, et al. Systematic reviews and knowledge translation. Bull World Health Organ. 2006;84(8):643–51. https://doi.org/10.2471/blt.05.026658.

  84. Vicente-Saez R, Martinez-Fuentes C. Open Science now: a systematic literature review for an integrated definition. J Bus Res. 2018;88:428–36. https://doi.org/10.1016/j.jbusres.2017.12.043.

  85. Weingart P. Verwissenschaftlichung der Gesellschaft—Politisierung der Wissenschaft. Z Soziol. 1983;12(3):225–41. https://doi.org/10.1515/zfsoz-1983-0303.

  86. Weingart P. Scientific expertise and political accountability: paradoxes of science in politics. Sci Public Policy. 1999;26(3):151–61. https://doi.org/10.3152/147154399781782437.

  87. Weiss CH. The many meanings of research utilization. Public Adm Rev. 1979;39(5):426–31. https://doi.org/10.2307/3109916.

  88. Wellstead A, Cairney P, Oliver K. Reducing ambiguity to close the science-policy gap. Policy Design Pract. 2018;1(2):115–25. https://doi.org/10.1080/25741292.2018.1458397.

Acknowledgements

We thank the entire ON-MERRIT project team for a fruitful collaboration, especially the project coordinator, Tony Ross-Hellauer, for the opportunity to contribute to this project and for his coordination and support. We further thank our deliverable reviewers and our project advisory board, with special thanks to those advisory board members who kindly provided feedback on early versions of the deliverable upon which this contribution is based.

Funding

This contribution was written as part of a project funded by the European Commission under Grant Agreement No. 824612 (ON-MERRIT—Observing and Negating Matthew Effects in Responsible Research and Innovation Transition).

Author information

Contributions

Conception (SR, BW), data collection (SR), data analysis (SR, BW), drafting of initial manuscript (SR), feedback on initial manuscript (BW), supervision (BW). All authors read and approved the final manuscript.

Corresponding author

Correspondence to Stefan Reichmann.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Reichmann, S., Wieser, B. Open science at the science–policy interface: bringing in the evidence?. Health Res Policy Sys 20, 70 (2022). https://doi.org/10.1186/s12961-022-00867-6

Keywords