Open Access

Public health: disconnections between policy, practice and research

  • Maria WJ Jansen1, 2,
  • Hans AM van Oers3,
  • Gerjo Kok4 and
  • Nanne K de Vries1, 2, 5
Health Research Policy and Systems 2010, 8:37

https://doi.org/10.1186/1478-4505-8-37

Received: 29 July 2010

Accepted: 31 December 2010

Published: 31 December 2010

Abstract

Background

Public health encompasses policy, practice and research, but it appears to be difficult to sufficiently connect academic research, practice and public health policy. Collaboration between policy, practice and research is imperative to obtaining more solid evidence in public health. However, the three domains do not easily work together because they emanate from three more or less independent 'niches'.

The work cycles of each niche comprise the same successive steps: problem recognition, approach formulation, implementation, and evaluation, but these steps are worked out differently. So far, research has focused on agenda-setting, which belongs to the first step, as expressed by Kingdon, and on the use of academic knowledge in policy makers' decision-making processes, which belongs to the fourth step, as elaborated by Weiss. However, there are more steps in the policy-making process where exchange is needed.

Method

A qualitative descriptive study was conducted by means of a literature search. We analyzed the four steps of the policy, practice and research work cycles. Next, we interpreted the main conflicting aspects in each step as disconnections.

Results

There are some conspicuous differences that strengthen the niche character of each domain and hamper integration and collaboration. Disconnections ranged from formulating priorities in problem statements to power roles, appraisal of evidence, work attitudes, work pace, transparency of goals, evaluation and continuation strategies and public accountability. Creating awareness of these disconnections may result in more compatibility between researchers, policy makers and practitioners.

Conclusion

We provide an analysis that public health services-related researchers, practitioners and policy makers can use to become aware of the risk of disconnections. A synthesis of the social, practical and scientific relevance of public health problems should be the starting point for a dialogue that seeks to establish a joint approach. To overcome the above-mentioned disconnections, face-to-face encounters consistently emerge as the most efficient way to transfer knowledge, achieve higher quality and acknowledge mutual dependence. We recommend practice- and policy-based research networks that establish strong links between researchers, policy makers and practitioners to improve public health.

Introduction

Public health is the process of mobilizing and engaging local, regional, national and international resources to assure the conditions in which people can be healthy [1]. Public health includes three major fields: (i) policy, as it is inherently a political enterprise that supplies services and allocates resources; (ii) practice, as policies need to be implemented to create social action and organize service delivery; and (iii) research, as interventions need to be developed and assessed for effectiveness and cost-benefit ratios. A broad range of disciplines are relevant to these three major fields and to public health as a whole. In fact, public health draws on biomedicine, epidemiology, biostatistics, genetics, nutrition, the behavioural sciences, health promotion, psychology, the social sciences (including social marketing), organizational development and public policy. These disciplines, each in their own way, have demonstrated that quality of life is a major topic in public health today. Ideally, policy, practice and research should be mutually dependent partners, uniting the different disciplines and combining academic and tacit knowledge to support public health. In reality, however, it appears to be difficult to sufficiently connect academic research, practice and public health policy. The three domains do not easily work together because they emanate from three more or less independent 'niches'. The term 'niche' is used here because policy, practice and research are characterized by specific ideologies as well as unique norms and values, internal orientations, communication and languages, internal codes of behavior, self-directed improvement processes, independence and a strong desire to protect themselves against the outside world [2-4]. Due to their niche character, the three domains do not easily converge, despite universal calls for collaboration [4-12].
Collaboration is thought to foster quality improvement of local and, ultimately, national public health policy in order to tackle complex public health problems. Quality improvement in the Dutch public health sector is urgently needed because, despite having boasted a very good population health status in the past, the Netherlands has seen a substantial decline in population health status relative to the rest of the European Union in recent years. The assumption that collaboration between practice, research and policy will result in more solid evidence and higher quality standards in public health is widely supported [13-22]. Unfortunately, evidence does not naturally find its way into policy and practice [23].

Within the development of evidence-based medicine, a tradition has been built of organizing practice-based research networks as a linkage between medical and public health practitioners and researchers [24-28]. So far, these practice-based research networks have mainly focused on the one-way transfer of evidence from research to clinical practice. Gradually, however, more attention is being paid to the development of mutual relations that enhance practice-based knowledge production [29]. Public health can learn from these experiences of integrating research and practice. Within the field of public health, these practice-based research networks should be extended to include the policy domain. Although the policy domain is highly influenced by political aspects [30, 31], policy-making should also be knowledge-based and result-oriented. This, however, poses several problems for the policy maker. The first problem is finding the evidence in the overwhelming volume of research literature. The second is the lack of monitoring and evaluation of public health policy using clear outcomes and performance indicators. The third is that the exchange process between policy makers, practitioners and researchers is often a one-way transfer. So far, the exchange has focused on agenda-setting or, as expressed by Kingdon [32], on how to create a window of opportunity. Next, it has focused on the use of academic knowledge in policy makers' decision-making processes, as elaborated by Weiss [33]. However, in addition to agenda-setting and decision-making, there are more steps in the policy-making process where exchange is needed. These steps can be characterized as a regulatory policy work cycle [34, 35]. The first step in policy is problem recognition, followed by an analysis of the problem and the formulation of an approach to solve it (step 2). Step 3 involves the initiation of implementation and, lastly, in step 4, the effects are interpreted and evaluated.
This stepwise procedure is based on the theoretical framework termed the 'stages heuristic' or textbook approach [36] (p 6-7). The work cycles of practice and research consist of the same successive steps [37, 38]. In reality, work cycles do not proceed rationally and linearly from step 1 to 4 [36]. Rather, they tend to be much more incremental [34, 39], or a combination of the two. Thus, work cycles proceed as a diffuse, open-ended, interactive and iterative process [36, 39, 40], which may make collaboration extremely complicated. Moreover, in order to include public health practice and research in the local or regional policy-making process, the benefits of such collaboration should fit closely with the goals and performance indicators of each domain.

In order to tackle these problems, reciprocity and interaction should be employed as the starting point for collaboration between legislative policy makers, practitioners and researchers [41-45]. Reciprocity and mutual engagement are lacking in most public health networks (if such a network, be it formal or informal, even exists). Although the stages heuristic framework has outlived its usefulness, it is employed here as a means to better understand and unravel the extremely complicated collaboration process and to uncover risks of disconnection - or, in niche terms, the different survival strategies in each step that keep the three niches separated. In an effort to promote improvements in the interaction between practitioners, policy makers and researchers, awareness of these disconnections is considered the first step towards mutual understanding and initiatives for interaction and dialogue.

Method

A literature search was conducted using (i) relevant textbooks; (ii) electronic bibliographic databases, namely PubMed, Medline, Cochrane and PsycINFO; and (iii) reference lists of articles published in relevant journals. English- and Dutch-language articles and books published between 1980 and 2006 were included in the study. The search for relevant textbooks was based on more general terms, such as the structure, process and outcomes of research, practice and policy separately. The following key words were used for the electronic databases: public health collaboration, public health cooperation, inter-organizational relations in public health, inter-organizational exchange in public health, public health network, public health coalition, public health decision-making, evidence-based public health and practice-based evidence or policy-based evidence. In total, 204 references were found, comprising textbooks or reports (n = 103) and articles (n = 101).

The work cycle model helped us to structure and interpret the literature. First, we explored the typical characteristics of the work processes in detail at each step. Generally used working methods were included in the work cycles. After this search was completed and the policy, practice and research work cycles were detailed for each step (Results, part 1), comparisons were made between the steps of the three domains (successively for steps 1 to 4) by describing the barriers to collaboration. The main conflicting aspects in each step were interpreted and confirmed in the literature in light of their impact on collaboration between policy, practice and research. This resulted in a descriptive overview of disconnections (Results, part 2).

Given that, in most countries, the implementation of public health policy is the responsibility of local authorities, the analysis presented here is from the perspective of local policy and local practice. Research can be conducted at any level; whether it takes place at the local, regional or national level is immaterial to this analysis.

Results

Part 1: Work cycles

The constructed work cycles show double-sided arrows which represent the diffusion and iteration of the steps while words in bold represent the typical characteristics of the respective work cycles.

The policy cycle

Policy is the process by which problems are conceptualized, solutions and alternatives are formulated, decisions are made, policy instruments are selected and programmes are delivered [34, 35]. Public policy responds to social problems in order to solve, reduce or prevent them. Public problems can be solved by designing community actions [46-49], organizational actions, formal rules, procedures and routines [50-52]. The policy work cycle takes up to four years (generally the time that passes between elections). It is mainly carried out by civil servants and public administrators and is decided upon by the municipal or city council. Civil servants are rewarded when they operate without failure, which may result in risk avoidance and routine behavior [30, 31, 35]. Public administrators and decision makers like aldermen and politicians who participate in the city council often make policy choices on the basis of how those decisions will impact their chances of being re-elected (i.e. popular and visible) [30, 31, 35, 53]. They want to survive in their niche.

Step 1: Recognition of a socially relevant problem

A problem is described as a discrepancy between negotiated and democratically defined basic social principles and the current situation [30, 32]. The perception of problems therefore depends on a comparison between current normative standards and the actual situation. Normative standards are influenced by the ruling political parties and by public common sense. Problem recognition is often interpreted as mainly a matter of strategic representation of the situation [30]. In the eyes of the public, this may be regarded as window dressing or following the hype of the day. Agenda-setting is a crucial aspect of step 1. How issues are placed on the policy agenda, or how they may be prevented from being placed there, is a complex process which is often highly unpredictable [32]. Important criteria for agenda-setting are whether the involvement of the government is legitimate, whether the policy instruments are accepted by the prevailing political ideology and the majority of the population, and whether political salience, public visibility and personal immediacy are positively valued [13, 54-56]. When a topic has been put on the policy agenda, the alderman can, for instance, appoint a sectoral or intersectoral committee for policy preparation, i.e. the general framework for elaborating on the topic. During the subsequent stage of policy formulation or implementation, a policy readjustment can be considered necessary because of, for instance, negative mass media coverage, unfeasibility or undesirable side effects [35, 53].
Figure 1

The regulatory policy cycle [42].

Step 2: Policy formulation and political decision

The formulation of policy starts with an analysis of the policy domains that have to be involved, i.e. the assignment analysis. One has to decide whether a problem belongs to, for example, the education, public housing or public health department, or to all policy departments. Departments have their own legally defined authority status and informal power. For instance, the department of city planning often has more power, both formally and informally, than the departments of social welfare and public health [30, 31].

The problem analysis includes many different perspectives, i.e. from the perspective of (population) health, economics, socio-economic differences, employment, social participation, spatial planning, etc., and the same holds for the proposed problem-solving strategies. Public administrators take the context of societal complexity as a starting point for policy formulation [57]. Goals and objectives are formulated in general terms. The policy instruments are then selected and may include information, community action, economic incentives (e.g. subsidies), legal directives and organizational arrangements [34, 35, 47, 52]. Then an implementation plan is developed which leads to negotiations with different local organizations in order to make agreements on their contribution. A specification of costs and benefits must be completed and budget availability must be explored before the definitive policy proposal can be submitted to the municipal council. The political decision-making process can be characterized as a process of bargaining, lobbying, negotiating and accommodating different interests [30, 31, 53, 58]. Final decisions are often the result of compromise.

Step 3: Policy implementation

The implementation process is often insufficiently monitored [42, 53]. In many cases, there is no clear 'road map' showing what, where, when, how and by whom activities are to be implemented. Many policies are described in terms of policy intentions instead of SMART-formulated policy goals (Specific, Measurable, Acceptable, Realistic and with Time specification), which makes monitoring rather difficult [30, 59-61]. Despite the fact that local government claims to be in charge of local public health, coordination and the role of process manager are often unsatisfactorily fulfilled, especially when a range of partners from different sectors is involved [62, 63]. Task allocation, responsibilities and competences are often left undefined [64, 65].

Step 4: Policy evaluation

Policy evaluation is often considered unimportant. At any rate, many policy programs are not evaluated at all [66, 67]. In some countries, the local government is obliged to audit its performance. Audits can be interpreted as an evaluation method: performance audits and financial audits seek to verify the degree to which policy conforms with pre-defined performance indicators or budgets, respectively. Even when there is a formal duty of public accountability for estimating the impact of public health policy budgets on the health of the public, as in the Netherlands [68], its fulfillment is often weak [61, 68-71]. As ambiguous goal formulation makes effect evaluations based on academic standards rather difficult, it may only be feasible to evaluate in terms of process criteria or intermediate goals [30, 61, 64, 72, 73]. Evaluation research determines desirable and undesirable effects and side effects and - importantly for public policy - legitimacy. These aspects play a rather important role in generating accountability for civil servants' political responsibilities [30, 31, 34, 59, 74]. Legitimacy can be misjudged during the formulation phase, can change over time or can be evaluated differently because of a change in the ruling political party. Without public acceptance of government interference and of the selected policy instruments, a policy is unlikely to be continued. Although many evaluation results contribute to the knowledge and expertise of the public administration, there are many cases in which evaluation results are deliberately not published or communicated because this might be too risky for the political elite [30, 31].

The practice cycle

Practice aims to serve the needs of others, either directly or indirectly. Practitioners in public health primarily want to solve problems immediately and meet the needs and demands of their clients with whom they have a direct, personal contact. Practice has a preference for a short cycle because practitioners feel a sense of urgency [75]. In this way they can survive within their own niche.

Step 1: Description of a practically relevant problem

A problem is defined as a discrepancy between the actual situation and the needs and demands perceived by individuals, groups, communities and local authorities. A problem is perceived when the current normative standards do not correspond with the actual situation as expressed by epidemiological findings, political priorities and public demands [55, 75, 76]. To recognize a problem, practitioners must perceive a difference between what currently exists in their consultation room or community and a more desirable state which they believe is attainable, modifiable and tractable [75]. Problems in practice are concrete and detailed, and practitioners focus on ways to act immediately, rather than on ways to reason, generalize or find the evidence as researchers do, or wait for legitimate policy instruments and finances as policy makers do.

In many countries, practitioners' routine work is based on the product agreements between local authorities and their Public Health Services. The management of Public Health Services can only invest in the development of new programs to solve identified problems when this is permitted by the budget and when capacity is sufficient. Otherwise, negotiations with local authorities have to be initiated to increase budgets.
Figure 2

The regulatory practice cycle [42].

Step 2: Practical program formulation

Practitioners almost always experience time constraints due to organizational and personal factors. The practice domain has a standing organizational duty to deliver products fixed in scope and frequency, as determined each year by the local government. Practitioners do not have time for exhaustive behavioral, environmental and educational analyses [76, 77] or for acquiring profound theoretical insights for the selection of practical strategies [75]. Because practitioners assign high intellectual status to scientific research, they do not easily contact researchers for support. Brainstorming about causes and solutions and adopting ready-to-use practical strategies from colleagues' previous experiences are fairly quick and easily accessible procedures [78], which are often applied without a systematic validation of the context [79].

Goals and objectives are often not specified in detail [75, 80], and the implementation design usually pays significant attention to organizational constraints and practical benefits. Practitioners' attitudes make them creative in terms of solving feasibility problems. Capacity, personnel and training needs are taken into account when calculating the total costs [80]. Management approval of the design is required. Local authorities have to make decisions when financial support is necessary or when the program poses future budgetary risks. This can result in management decisions being delayed for a couple of months, while practitioners tend to be in a hurry. Practitioners often insufficiently anticipate the policy developments required to integrate program activities into local policy [42].

Step 3: Practical implementation

If practitioners design a program in close cooperation with all colleagues who have to use it, implementation is usually not perceived as a problem [77]. Coordination agreements are made and the program can be delivered by local and/or national organizations. Incompletely worked-out designs are improved by trial and error, sometimes resulting in 'muddling through' [35].

Step 4: Practice evaluation

The final step of the cycle is frequently neglected in public health practice, as it requires both a theoretical and a practical attitude. Evaluation tends to be at the bottom of practitioners' lists of priorities and budget items [66]. At best, the evaluation consists of practitioners' judgments of the program delivery and an assessment of client satisfaction as part of routine quality improvement procedures. Practitioners' work is not paid on the basis of health outcomes at the individual or population level but rather on the basis of whether the product agreements are met. Public Health Services are obliged to produce an annual report on their performance and financial auditing [68, 81]. This planning and control cycle, however, functions as a productivity report rather than an evaluation of public health. Outcome data, documented in terms of life expectancy, prevalence of diseases or public health problems, are available from the regular monitoring services but are not linked to the auditing process. Consequently, the effect of a program on public health over time cannot be specified. Such an effect evaluation is also difficult in light of the general constraints and difficulties associated with measuring the effects of preventive public health services [82-84].

The research cycle

Scientific research is defined as the systematic, controlled, empirical and critical investigation of hypothetical propositions about presumed relations among natural phenomena. Scientific research aims to produce explanations and predictions - and in case of the applied sciences, also solutions - relating to people's problems, and to contribute facts and theories to the body of knowledge [85]. The scientific approach is the most systematized method of acquiring knowledge. This orderly pattern is called the empirical cycle [37, 86]. The empirical cycle refers to the process in which evidence rooted in objective reality (assuming that an objective reality exists independent of human discovery or observation) and gathered through the human senses is used as the basis for generating knowledge.

The research cycle takes about four years or more: many research projects, e.g. PhD dissertation projects, usually take four years, but eight to ten years may pass from the initial hypothesis or research question to publication and dissemination [29]. The way researchers work helps them to survive in their niche.

Step 1: Defining a scientifically relevant problem

In the research domain, a problem is described as a discrepancy between theory and reality, between different theories, between theory and practice, or between practice and desired practice [86, 87]. A problem is perceived as scientifically relevant when, by systematic empirical observation, information can be accumulated or theories can be formulated to extend the existing knowledge base. Step 1 is the generalization of non-systematic observations or perceived practical problems to a problem that is based on theory [59]. Scientifically relevant problems originate from passionate researchers who integrate observations into a more abstract, generally valid picture of reality through creativeness, imagination and induction.
Figure 3

The empirical research cycle [42].

Step 2: Formulation of research design and hypothesis

A hypothesis formulated via deductive reasoning is a tentative prediction or explanation of the relationship between two or more variables [37, 59, 87]. The hypothesis serves as a link between the theory and the real-life situation. Descriptive, exploratory and phenomenological studies may not require a hypothesis beforehand as their aim may be to develop hypotheses. The research question(s) and the research design are thoroughly elaborated, which is generally a time-consuming process. The population to be studied and the variables involved are delimited. Researchers tend to reduce complex problems to a range of more detailed problems that can be studied separately. Researchers need to be experts in developing the research design, sampling design, instruments to measure variables, timing and methods of analysis [37, 59, 85, 87].

In public health research, humans are often the source of information. Great care must be exercised to ensure that the rights of these participants remain protected. A medical ethics review committee must approve the study, and the procedure for obtaining informed consent needs to be addressed [88].

Step 3: Research implementation

In many studies, the empirical phase, i.e. the systematic data collection, is the most time-consuming part of the investigation. Researchers do not always sufficiently anticipate practical matters such as data collection or registration systems, the controlled application and fidelity of the intervention in practice [17, 20, 81, 86], logistic requirements, the identification of partners who need to be involved or committed, the involvement of qualified staff, guarantees of research design soundness during implementation, recruitment procedures and resource availability.

Step 4: Research evaluation

Although hypotheses can be accepted or rejected, it is inappropriate to speak of definite proof because this is incongruent with the limitations of the scientific approach. Scientists constantly seek objective, replicable evidence as the basis for understanding phenomena. The more frequently the same results are found, the greater the confidence in their validity [89].

Scientific researchers have a duty of public accountability [86] and should communicate their findings to an audience. Four types of audiences are distinguished: the scientific forum, the institutions funding the research project, the practice forum and the general public. Accountability to the scientific forum has a twofold function, i.e. to assess whether the results and the design can stand the test of scientific criticism and to contribute to the scientific body of knowledge. The productivity of a researcher is often assessed by the number of scientific publications in influential international journals with a high citation score [90]. Because of this, publications in professional journals specifically meant for the practice forum are less valued, thus impeding the dissemination of findings to practice. Scientists are expected to keep a certain distance from policy or practice, avoiding public controversy [91] and emphasizing their objectivity and neutrality [92].

Part 2: Disconnections

Table 1 summarizes the differences between the niches for each step in the work cycle (steps as labels and differences as sub-labels) that may result in disconnections. The niche is an ecological term: several species can populate the same, different or overlapping niches. For each step, we show why the human 'species' of policy makers, practitioners and researchers have populated different niches to maintain a stable and livable group. Next, we suggest the burden of tolerance needed to cohabit with other species in overlapping niches.
Table 1

Differences between the work cycles in the three niches.

STEP 1: Problem recognition

1. Relevance
  • Policy: socially relevant problem, i.e. solving social problems, influenced by political parties
  • Practice: practically relevant problem, i.e. corresponding to the public's or client's requests or needs due to problems that are modifiable and tractable
  • Research: scientifically relevant problem, i.e. explaining problems and adding to the body of knowledge based on existing theory

2. Policy agenda setting
  • Policy: much influence on agenda setting
  • Practice: limited influence on agenda setting, media pressure
  • Research: very limited influence on agenda setting

3. Status
  • Policy: bureaucratic status
  • Practice: social status
  • Research: high intellectual status

STEP 2: Formulation of policy, practice and research

4. Formal power in policy
  • Policy: much influence of small political group on policy formulation
  • Practice: sometimes indirect influence on policy formulation
  • Research: usually no influence on policy formulation

5. Goals
  • Policy: insufficient transparency of final goals
  • Practice: limited transparency of final goals, restricted to practice
  • Research: sufficient transparency of final goals, but restricted to research

6. Evidence
  • Policy: policy-based evidence: legitimacy, acceptability, visibility, immediacy, political salience
  • Practice: practice-based evidence: profitability, applicability, feasibility
  • Research: research-based evidence: rationality, empirical validity, logical precision

7. Legitimacy
  • Policy: preferred focus on environmental approach (social, physical, economic)
  • Practice: focus on individual behavioural approach
  • Research: insufficient focus on environmental approach

8. Value of theory and practice
  • Policy: theories are partly relevant; practical implementation is relevant
  • Practice: theories are irrelevant; practical implementation is relevant
  • Research: theories are relevant; practical implementation is often irrelevant

9. Work attitude
  • Policy: work attitude of administrative control and opportunism; some creativity involved
  • Practice: firm, action-directed work attitude; 'quick and dirty'; creativity involved
  • Research: cautious work attitude; detailed and time consuming; creativity involved

STEP 3: Implementation of policy, practice and research

10. Adjustments during pilot
  • Policy: interim policy adjustments during policy pilot, trial and error approach
  • Practice: interim practical adjustments during pilot, trial and error approach
  • Research: no interim adjustments, except for qualitative, responsive research

STEP 4: Policy and practice evaluation and research interpretation

11. Lifespan
  • Policy: unpredictable lifespan of work cycle, maximum four years
  • Practice: preferably short lifespan of work cycle
  • Research: predictable lifespan, depending on research design and public availability, 4 to 10 years

12. External vs. internal validity
  • Policy: need for external validity, but policy results often too tentative
  • Practice: need for external validity, but practical implementation and contextual factors often undefined
  • Research: focus on internal validity; insight into what is effective but not into how it can be made effective in a real-world setting

13. Public accountability
  • Policy: increasing public accountability, mainly financial, within own field
  • Practice: limited public accountability, mainly financial, within own field
  • Research: public accountability through publications in highly authoritative journals within own field

In step 1, policy makers (legislators) define a public health problem in terms of its relevance to their political ideology and public opinion [15, 30-32, 53, 93]; practitioners define the same problem in terms of its relevance to the perceived needs and demands of individuals, epidemiological findings, and product agreements [75, 80]; and researchers define it in terms of its relevance to theory, the existing body of knowledge, and the interests of the investigator [59, 81, 87]. The starting points thus differ, as social, practical and scientific relevance do not automatically overlap [4, 14, 15, 30, 32, 34, 92, 94, 95], but the species do not exploit each other and can search for a new equilibrium. Moreover, the decision to start the policy cycle is made by a small number of city councillors who together decide to put a subject on the policy agenda [32]. Practitioners and researchers have no formal authority in local policy agenda setting and cannot easily influence local policy, although they can use media attention to put a topic on the political agenda. Nonetheless, most of the time each of the fields of policy, practice and research sets its own agenda, thus leaving the gap between the fields as it is [42, 96].

Each field is valued differently by the others in terms of status. Policy makers, and even more so practitioners, assign high intellectual status to scientific research. They keep research at a distance and do not engage with it extensively because of its high intellectual demands [42]. Scientists and practitioners, on the other hand, perceive the policy-making process as highly bureaucratic, impenetrable and full of delays [30, 31, 53, 58]. Scientists generally perceive practice as socially relevant, but they are not always interested in the 'real world', thereby implicitly and unconsciously lowering the social status of practitioners [43, 92].

In step 2, practice and research have no formal decision-making power over policy formulation, and they have only limited influence on the political decision to accept or reject a policy proposal [32]. The final policy goals of these proposals are often expressed as policy intentions that are hard to measure [30, 31, 34, 66]. The same holds for practice goals. Research goals, on the other hand, are expressed in detail and in SMART terms [37, 87].

Evidence [97] has a different meaning in each cycle. The term 'evidence-based' is principally grounded in rationality, but other interpretations of the term evidence have developed [18]. Essentially, these reflect the viewpoints of the parties concerned, as can be seen in the terms 'practice-based evidence' and 'policy-based' or 'policy-informed evidence' [7, 81, 98-103]. The terms 'policy-based evidence' and 'practice-based evidence' add aspects that originate from their respective niche characteristics. Whereas rationality, empirical validity and logical precision are the decisive arguments for researchers, resulting in the concept of research-based evidence, legitimacy, public acceptability, political salience, public visibility and public immediacy are the added decisive arguments for policy makers to act or to refrain from action; these arguments shape the concept of policy-based evidence [96, 98, 99, 101, 104]. From the practitioners' perspective, meeting the needs of individuals and groups, as well as feasibility, profitability and applicability, are the added decisive arguments to act [75]. These are expressed in the concept of practice-based evidence.

Legitimacy is an important aspect of the process of policy formulation [31]. In recent decades, public health practice and research have tended to focus on individualized approaches to risk management [58, 105, 106]. However, policy actions that focus on behavioural lifestyle determinants are considered moralistic and may be politically controversial because they interfere in people's private lives [58, 107]. Environmental determinants, meanwhile, have so far been insufficiently investigated to allow the formulation of effective public health policy. If the legitimate role of policy is to be linked to research and practice, the environmental dimension of health should be defined more explicitly in research and practice [58, 108-110].

The role and value of theory and practice differ in each niche. Theory is the starting point or the final goal of research and is regarded as indispensable [59, 89]. Within policy making, the use of theory depends on the educational background and academic experience of civil servants [111]. Practitioners do not tend to use theories to explain how they expect their activities to work; in their view, theories consist of impractical, high-flown, unrealistic ideas, are abstract, and are used only when there are no facts [95]. Researchers, on the other hand, tend to regard practice-based knowledge as scientifically irrelevant.

Policy makers, practitioners and researchers have different work attitudes. Scientists are regarded as thinkers, practitioners as doers and policy makers as bureaucrats [30, 31, 34, 42]. These stereotypical images hamper collaboration because they can become ingrained prejudices. Although scientists often regard research findings as tentative, practitioners expect to receive clear guidance on how to act. A cautious scientific attitude may thus clash with a firm attitude towards action: practitioners may feel inhibited, while researchers must fight for the time-consuming accuracy they strive for. The administrative function of the authorities often results in a controlling, bureaucratic and opportunistic attitude, which may conflict with the creative thinking and actions of researchers and practitioners [41, 42]. Cohabiting with other species in overlapping niches requires acceptance of differences in power and working style, and training in the other languages in order to understand evidence, legitimacy and the dichotomy of theory and practice.

In step 3, the problem of interim adjustments may appear. Whereas adjustments during implementation are strongly discouraged in most research designs, except in qualitative and responsive research methods, they are acceptable in policy and practice. As policy- and practice-related knowledge advances during the implementation stage, it influences the formulation of policy and practice programmes, and readjustments are made. This repeated switching between steps 3 and 2 is called 'muddling through', which is not allowed within the field of research unless it concerns participatory action research. Once a research design has been selected and interventions have been defined, readjustments to the intervention are no longer allowed [37, 87]. This kind of control sometimes demands considerable sacrifice and inflexibility from the practice field, which may even be confronted with client dissatisfaction [107]. When researchers and practitioners have not sufficiently anticipated problems relating to practical data collection or registration systems, controlled application of and fidelity to the intervention in practice, logistic requirements, identification of the partners who need to be involved or committed, qualified staff, recruitment procedures and resource availability, the research may stall [17, 20, 112]. In niche terms, interim adjustments can be considered a predator that should be rendered harmless.

In step 4, the results have to be interpreted. Each cycle has its own dynamics and lifespan. Research and policy projects usually take four years, whereas practical programmes have a short lifespan. Unforeseen arguments within the political arena sometimes cause cycles to start between elections, and their duration can then hardly be predicted [32]. This combination of different paces and the desired interconnections between the cycles makes meshing them extremely complicated.

After a research project has ended, researchers no longer have a legitimate role in the translation of its results into policy or practice. Because research tends to reduce the complexity of real-life problems to detailed sub-questions that are studied separately, it is often difficult to offer an integrated, ready-to-use solution for practice or policy [113]. Researchers pay more attention to internal validity than to external validity, i.e. the generalizability of the results [104, 114]. Action on any substantial scale therefore often has to wait for further analyses that address the contextual determinants in order to corroborate the evidence in practice or policy.

Policy makers, practitioners and researchers each have a duty of public accountability, but they discharge it in different ways [68, 86]. Audit reports from practice and policy, as well as peer-reviewed scientific journals, are in theory accessible and thus readable by the general public, but access is hampered by a range of organisational barriers. Moreover, the content of peer-reviewed scientific articles is not readable for politicians, civil servants or public administrators because of its scientific jargon, and where it is readable for practitioners, they often lack the time. Conversely, memoranda from local government or the public health service make unattractive reading for researchers because of their length, lack of new knowledge and the ambiguous formulations needed to serve consensus and cooperation. In short, publications of all kinds are used almost exclusively within the fields that produce them. When the species connect their different timelines and assist each other in generalizations and in professional and scientific publications, they can live together in overlapping niches.

Discussion

This review of the three work cycles, and the description of current public health policy, practice and research, shows conspicuous disconnections that strengthen the niche character of each domain and hamper integration and collaboration. Improving collaboration between the public health niches and their work cycles requires, first and foremost, awareness of these differences. Mutual understanding may subsequently reinforce mutual respect and collaboration. As each work cycle starts with the recognition of a problem, the respective professionals need to achieve a synthesis with respect to the social, practical and scientific relevance of public health problems. Priorities regarding agenda setting, problem formulation, goal clarity, evidence use, legitimacy, theory use, attention to internal and external validity, lifespan, and the availability and readability of publications do not automatically overlap. Formal power and status also differ between the three niches. Kingdon and Weiss described barriers to exchange between policy, practice and research during agenda setting and decision making.

We add to that body of knowledge the barriers present in all steps of the exchange process. Given the thirteen disconnections, we contend that meshing the desired interconnections between the cycles is an extremely complicated endeavor [41, 42].

Conclusion

To overcome the disconnections mentioned above, face-to-face encounters consistently emerge as the most efficient way to transfer knowledge, achieve higher quality and acknowledge mutual dependence [94, 113, 115, 116]. Personal relations provide gateways to the knowledge available in other niches and may create affective ties that can subsequently reduce status differences. These can, in turn, stimulate receptivity and commitment to the other niches. Professionals are thus given access to the internal structures of other niches, their formal and informal networks, and their climate and culture, which can help them cross niche barriers and speed up intersectoral knowledge circulation. Public health policy, practice and research must work in concert in each step of the work cycle [92, 117, 118].

Furthermore, managers of practice institutions and public health professors should endeavor to become involved in the political elite as social entrepreneurs [32]; this may enable them to exert effective influence on agenda setting, policy formulation and political decision making. News media publications, rather than scientific publications in influential international journals, are needed to address the public in general and politicians in particular. Thinking in terms of a theory-practice continuum, or a synthesis of the two, will also promote knowledge about public health evidence. The challenge is to find performance indicators that yield mutual benefits, because collaboration does not start or continue automatically. Each niche has arguments with which to defend its actions, and collaboration should combine the best of each approach in an effort to achieve added value and quality improvement in public health.

Our findings suggest the need for novel structures that bridge policy, practice and research. In 2005, nine centres for collaboration between public health policy, practice and research were initiated in the Netherlands [119, 120]. These centres are called 'Academic Collaborative Centres for Public Health' (in Dutch, Academische Werkplaats). They create a single biotope in which the three niches, each with its own range of tolerance, can live together because no mutual exploitation mechanisms exist. Hopefully, such a biotope can teach us important lessons about this transformative process, which will in turn add to the knowledge we have so far.

Declarations

Authors’ Affiliations

(1)
Academic Collaborative Centre of Public Health Limburg
(2)
Caphri, School of Public Health and Primary Care, Maastricht University
(3)
Department of Public Health, National Institute of Public Health and the Environment
(4)
Faculty of Psychology, Department of Work and Social Psychology, Maastricht University
(5)
Faculty of Health, Medicine and Life Sciences, Department of Health Education and Promotion, Maastricht University

References

  1. Detels R, Breslow L: Current scope and concerns in public health. Oxford Textbook of public health. Edited by: Detels R, McEwen J, Beaglehole R, Tanaka H. 2004, Oxford: University Press, 3-20.Google Scholar
  2. Berridge V: Passive smoking and its pre-history in Britain: policy speaks to science?. Soc Sci Med. 1999, 49: 1183-1195. 10.1016/S0277-9536(99)00159-8.PubMedGoogle Scholar
  3. Beyer JM, Harrison TM: The Utilization Process: A Conceptual Framework and Synthesis of Empirical Findings. Administrative Science Quarterly. 1982, 27: 591-622. 10.2307/2392533.Google Scholar
  4. Bolton MJ, Stolcis BG: Ties that do not bind: musings on the specious relevance of academic research. Public Administration Review. 2003, 63: 626-630. 10.1111/1540-6210.00325.Google Scholar
  5. Byrne D: Enabling good health for all. A reflection process for a new EU health strategy. 2004, European CommissionGoogle Scholar
  6. Hasmiller S: Turning Point: The Robert Wood Johnson Foundation's effort to revitalize public health at the state level. J Public Health Management Practice. 2002, 8: 1-5.Google Scholar
  7. Lomas J, Culyer T, McCutcheon C, McAuley L, Law S: Conceptualizing and combining evidence for health system guidance. 2005, Ottawa: Canadian Health Service Research FoundationGoogle Scholar
  8. Nicola RM, Berkowitz B, Lafronza V: A turning point for public health. J Public Health Management Practice. 2002, 8: iv-vii.Google Scholar
  9. Rychetnik L, Wise M: Advocating evidence-based health promotion: reflections and a way forward. Health Promot Int. 2004, 19: 247-257. 10.1093/heapro/dah212.PubMedGoogle Scholar
  10. World Health Organization: Evidence policy for the WHO Regional Office for Europe. 2004, Copenhagen: WHOGoogle Scholar
  11. World Health Organization: Handbook for evidence-based working and case study writing. 2006, Copenhagen: WHO Programme on Evidence on Health Needs and InterventionsGoogle Scholar
  12. WHO: Resolutions and decisions. Ministerial summit on health research, WHA58.34. 2005, Geneva: WHOGoogle Scholar
  13. Davis P: Problems, politics and processes: public health sciences and policy in developed countries. Oxford Textbook of public health. Edited by: Detels R, McEwen J, Beaglehole R, Tanaka H. 2004, Oxford: University Press, 937-950.Google Scholar
  14. Frenk J: Balancing relevance and excellence: organizational responses to link research with decision making. Soc Sci Med. 1992, 35: 1397-1404. 10.1016/0277-9536(92)90043-P.PubMedGoogle Scholar
  15. Hoeijmakers M: Local health policy development processes. Health promotion and network perspectives on local health policy-making in the Netherlands. 2005, Maastricht UniversityGoogle Scholar
  16. Nutbeam D: Getting 'evidence' into public health policy, and 'policy' into public health research. Tijdschrift Gezondheidswetenschappen. 2003, 81: 155-158.Google Scholar
  17. Botvin GJ: Advancing prevention science and practice: challenges, critical issues, and future directions. Prevention Science. 2004, 5: 69-72. 10.1023/B:PREV.0000013984.83251.8b.PubMedGoogle Scholar
  18. Kohatsu ND, Robinson JG, Torner JC: Evidence-based public health: an evolving concept. Am J Prev Med. 2004, 27: 417-421.PubMedGoogle Scholar
  19. Dean K, Hunter D: New directions for health: towards a knowledge base for public health action. Soc Sci Med. 1996, 42: 745-750. 10.1016/0277-9536(95)00394-0.PubMedGoogle Scholar
  20. Dusenbury L, Hansen WB: Pursuing the course from research to practice. Prev Sci. 2004, 5: 55-59. 10.1023/B:PREV.0000013982.20860.19.PubMedGoogle Scholar
  21. Kimbrell JD, Witmer A, Flaherty P: The Louisiana Public Health Institute: a cross-sector approach for improving the public's health. J Public Health Manag Pract. 2002, 8: 68-74.PubMedGoogle Scholar
  22. Wellcome Trust: Public health sciences: challenges and opportunities. 2004, United Kingdom: Public Health Science Working GroupGoogle Scholar
  23. De Leeuww E, McNess A, Crips B, Stagnitti K: Theoretical reflections on the nexus between research, policy and practice. Critical Public Health. 2008, 18: 5-20. 10.1080/09581590801949924.Google Scholar
  24. Delaney B: Engaging practitioners in research; time to change the values of practice rather than the way research is carried out?. Fam Pract. 2007, 24: 207-208. 10.1093/fampra/cmm031.PubMedGoogle Scholar
  25. Rosser W: Bringing important research evidence into practice: Canadian developments. Fam Pract. 2008, 25 (Suppl 1): i38-43. 10.1093/fampra/cmn080.PubMedGoogle Scholar
  26. Green LW: The prevention research centers as models of practice-based evidence two decades on. Am J Prev Med. 2007, 33: S6-8. 10.1016/j.amepre.2007.03.012.PubMedGoogle Scholar
  27. North American Primary Care Research Group: What does it mean to build research capacity?. Fam Med. 2002, 34: 678-684.Google Scholar
  28. Department of Health: Best research for best health: A new national health research strategy. 2006, London: Department of HealthGoogle Scholar
  29. Green LW: Making research relevant: if it is an evidence-based practice, where's the practice-based evidence?. Fam Pract. 2008, 25 (Suppl 1): i20-24. 10.1093/fampra/cmn055.PubMedGoogle Scholar
  30. Stone D: Policy Paradox. The art of political decision making. 2002, New York: Norton & CompanyGoogle Scholar
  31. Walt G: Health Policy. An introduction to process and power. 2004, London: Zed BooksGoogle Scholar
  32. Kingdon JW: Agendas, alternatives and public policies. 2003, New York: Addison-Wesley Educational Publishers Inc.Google Scholar
  33. Weiss CH: The many meanings of research utilization. Public Administration Review. 1979, 39: 426-431. 10.2307/3109916.Google Scholar
  34. Althaus C, Bridgman P, Davis G: The Australian Policy Handbook. 2007, Sydney: Allen & Unwin, 4Google Scholar
  35. Hoogerwerf A: Beleid, processen en effecten. Overheidsbeleid Een inleiding in de beleidswetenschap. Edited by: Hoogerwerf A, Herweijer M. 1998, Alphen aan de Rijn: Samson, 17-36.Google Scholar
  36. Sabatier P, Ed: Theories of the policy process. 2007, Colorado: Westview Press, secondGoogle Scholar
  37. Bouter LM, Van Dongen MCJM, Zielhuis GA: Epidemiologisch onderzoek. Opzet en interpretatie. 2005, Houten: Bohn Stafleu van LoghumGoogle Scholar
  38. Buunk AP, Veen P: Sociale psychologie. Praktijkproblemen, van probleem naar oplossing. 1995, Houten/Diegem: Bohn Stafleu Van LoghumGoogle Scholar
  39. Lindblom CE, Woodhouse EJ: The policy-making process. 1993, New Jersey USA: Prentice-Hall Inc.Google Scholar
  40. Schön DA, Rein M: Frame reflection. Toward the resolution of intractable policy controversies. 1994, New York: Basic BooksGoogle Scholar
  41. Jansen MW, De Vries NK, Kok G, Van Oers HA: Collaboration between practice, policy and research in local public health in the Netherlands. Health Policy. 2008, 86: 295-307.PubMedGoogle Scholar
  42. Jansen M: Mind the gap: Collaboration between practice, policy and research in local public health. 2007, Maastricht University, Health PromotionGoogle Scholar
  43. Schur CL, Berk ML, Silver LE, Yegian JM, Michael JOGMJ: Connecting The Ivory Tower To Main Street: Setting Research Priorities For Real-World Impact. Health Aff (Millwood). 2009Google Scholar
  44. Campbell D, Redman S, Jorm L, Cooke M, Zwi A, Rychetnik L: Increasing the use of evidence in health policy: practice and views of policy makers and researchers. Australia and New Zealand Health Policy. 2009, 6: 10.1186/1743-8462-6-21.Google Scholar
  45. Davis P, Howden-Chapman P: Translating research findings into health policy. Soc Sci Med. 1996, 43: 865-872. 10.1016/0277-9536(96)00130-X.PubMedGoogle Scholar
  46. Blomgren Bingham L, Nabatchi T, O'Leary R: The New Governance: practices and processes for stakeholders and citizen participation in the work of government. Public Administration Review. 2005, 65: 547-558. 10.1111/j.1540-6210.2005.00482.x.Google Scholar
  47. Cooper TL: Civic engagement in the twenty-first century; toward a scholarly and pracical agenda. Public Administration Review. 2005, 65: 534-535. 10.1111/j.1540-6210.2005.00480.x.Google Scholar
  48. Portney K: Civic engagement and sustainable cities in the United States. Public Administration Review. 2005, 65: 579-591. 10.1111/j.1540-6210.2005.00485.x.Google Scholar
  49. Harting J, Van Assema P: Community-projecten in Nederland. De eeuwige belofte? [Community projects in The Netherlands. The eternal promise?]. 2007, Maastricht: Maastricht University, ZonMwGoogle Scholar
  50. Blum HL: Planning for health. Development and application of social change theory. 1974, New York: Human Sciences PressGoogle Scholar
  51. De Leeuw E: Health policy. An exploratory inquiry into the development of policy for the new public health in the Netherlands. 1989, Maastricht UniversityGoogle Scholar
  52. Holland WW: Overview of policy and strategies. Oxford Textbook of public health. Edited by: Detels R, McEwen J, Beaglehole R, Tanaka H. 2004, Oxford: Oxford University Press, 257-261.Google Scholar
  53. Hunter DJ: Public health policy. 2003, Oxford: Polity Press/Blackwell PublishingGoogle Scholar
  54. Rütten A, Von Lengerke T, Abel T, Kannas L, Lüschen G, Diaz R, Vinck J, Van der Zee J: Policy, competence and participation: empirical evidence for a multilevel health promotion model. Health Promotion International. 2000, 15: 35-47.Google Scholar
  55. Shiell A: Health outcomes are about choices and values: an economic perspective on the health outcomes movement. Health Policy. 1997, 39: 5-15. 10.1016/S0168-8510(96)00845-7.PubMedGoogle Scholar
  56. Signal L: The politics of health promotion: insights from political theory. Health Promotion Int. 1998, 13: 257-264. 10.1093/heapro/13.3.257.Google Scholar
  57. Plsek PE, Greenhalgh T: The challenge of complexity in health care. British Medical Journal. 2001, 323: 625-628. 10.1136/bmj.323.7313.625.PubMedPubMed CentralGoogle Scholar
  58. Petersen A, Lupton D: The new public health. Health and self in the age of risk. 1996, London: Sage PublicationsGoogle Scholar
  59. Buttolph Johnson J, Reynolds HT: Political Science Research Methods. 2005, Washington: CoPress, 5Google Scholar
  60. Korsten AFA: Resultaatgericht begroten. VBTB bij het Rijk een succes? [Result-directed budget estimation in the Netherlands successful?]. 2004Google Scholar
  61. Interdepartementaal Overlegorgaan Financieel Economische Zaken: Eindrapport VBTB evaluatie. Lessen uit de praktijk [Final report Evaluation of policy budgets and accountability. Lessons from practice]. 2004, Den HaagGoogle Scholar
  62. Ruland E, Van Raak A, Spreeuwenberg C, Van Ree J: Managing New Public Health: hoe zijn preventieve samenwerkingsverbanden te realiseren? Een agenda voor actie en onderzoek. Tijdschrift voor Gezondheidswetenschappen. 2003, 81: 52-55.Google Scholar
  63. Ruland EC: Bestuurlijke verankering van innovaties in de openbare gezondheidszorg; lessen uit de casus Hartslag limburg [The administrative embedment of innovations in public health care; lessons learned from the Hartslag Limburg case]. 2008, University Maastricht, Faculty of Health, Medicine and Life SciencesGoogle Scholar
  64. Milio N: Evaluation of health promotion policies: tracking a moving target. Evaluation in health promotion Principles and perspectives. Edited by: Rootman I, Goodstadt M, Hyndman B, McQueen DV, Potvin L, Springett J, Ziglio E. 2001, Copenhagen: WHO Regional Publications, European Series no. 92, 365-386.Google Scholar
  65. Evans D: 'Taking public health out of the ghetto': the policy and practice of multidisciplinary public health in the United Kingdom. Soc Sci Med. 2003, 57: 959-967. 10.1016/S0277-9536(02)00473-2.PubMedGoogle Scholar
  66. Inspectie voor de Gezondheidszorg (The Dutch Health Care Inspectorate): Openbare gezondheidszorg: hoe houden we het volk gezond?. 2005, Den Haag: Inspectie voor de GezondheidszorgGoogle Scholar
  67. Kornalijnslijper N, Schoenmakers C, Smeets K: Onderzoek gemeentelijke nota's gezondheidsbeleid. 2005, Den Haag: SGBO Onderzoeks- en Adviesbureau van de Vereniging van Nederlandse GemeentenGoogle Scholar
  68. Algemene Rekenkamer: Van Beleidsbegroting Tot Beleidsverantwoording (VBTB) en informatiesystemen [From policy budget to accountability and information systems]. 2001, Den Haag: Algemene RekenkamerGoogle Scholar
  69. Algemene Rekenkamer: Verslag 2003 Algemene Rekenkamer [Annual report 2003]. 2004, Den Haag: Algemene RekenkamerGoogle Scholar
  70. Tweede kamer der Staten Generaal: Preventieve gezondheidszorg [Preventive health care]. 2003, Den Haag: Algemene rekenkamerGoogle Scholar
  71. Tweede kamer der Staten Generaal: Handhaven en gedogen [To maintain and tolerate]. 2005, Den Haag: Algemene rekenkamerGoogle Scholar
  72. Milio N: Promotion health throug public policy. 1989, Ottawa: Canadian Public Health AssociationGoogle Scholar
  73. Rütten A: Evaluating health public policies in community and regional context. Evaluation in health promotion Principles and perspectives. Edited by: Rootman I, Goodstadt M, Hyndman B, McQueen DV, Potvin L, Springett J, Ziglio E. 2001, Copenhagen: WHO Regional Publications, European Series no. 92, 341-365.Google Scholar
  74. Holland WW: Public health - its critical requirements. Oxford Textbook of public health. Edited by: Detels R, McEwen J, Beaglehole R, Tanaka H. 2004, Oxford Oxford University Press, 1757-1764.Google Scholar
  75. Van Strien S: Praktijk als wetenschap, methodologie van het sociaal wetenschappelijk handelen. 1986, Assen/Maastricht: Van GorcumGoogle Scholar
  76. Green LW, Kreuter MW: Health promotion and planning: an educational and ecological approach. 1999, Mountain View CA: Mayfield Publishing Company, 3Google Scholar
  77. Bartholomew LK, Parcel GS, Kok G, Gottlieb NH, Eds: Planning health promotion programs. An Intervention Mapping Approach. 2006, San Francisco: Jossey-Bass, 2Google Scholar
  78. Gabbay J, le May A: Evidence based guidelines or collectively constructed "mindlines?" Ethnographic study of knowledge management in primary care. Bmj. 2004, 329: 1013-10.1136/bmj.329.7473.1013.PubMedPubMed CentralGoogle Scholar
  79. Cuijpers P, de Graaf I, Bohlmeijer E: Adapting and disseminating effective public health interventions in another country: towards a systematic approach. Eur J Public Health. 2005, 15: 166-169. 10.1093/eurpub/cki124.PubMedGoogle Scholar
  80. Saan H, De Haes W: Gezond effect bevorderen. Het organiseren van effectieve gezondheidsbevordering. 2005, Woerden: NIGZGoogle Scholar
  81. Learmonth AM: Utilizing research in practice and generating evidence from practice. Health Education Research. 2000, 15: 743-756. 10.1093/her/15.6.743.PubMedGoogle Scholar
  82. Lakhani A, Coles J, Eayres D, Spence C, Sanderson C: Creative use of existing clinical and health outcomes data to assess NHS performance in England: part 2--more challenging aspects of monitoring. Bmj. 2005, 330: 1486-1492. 10.1136/bmj.330.7506.1486.PubMedPubMed CentralGoogle Scholar
  83. Potvin L, Haddad S, Frohlich KL: Beyond process and outcome evaluation: a comprehensive approach for evaluating health promotion programmes. Evaluation in health promotion Principles and perspectives. Edited by: Rootman I, Goodstadt M, Hyndman B, McQueen DV, Potvin L, Springett J, Ziglio E. 2001, Copenhagen: WHO Regional Publications, European Series no. 92, 45-62.Google Scholar
  84. Rootman I, Goodstadt M, Potvin L, Springett J: A framework for health promotion evaluation. Evaluation in health promotion Principles and perspectives. Edited by: Rootman I, Goodstadt M, Hyndman B, McQueen DV, Potvin L, Springett J, Ziglio E. 2001, Copenhagen: WHO Regional Publications, European Series no. 92Google Scholar
  85. Swanborn PG: Methoden van sociaal-wetenschappelijk onderzoek [Methods of socialscientific research]. 1994, Meppel, Amsterdam: BoomGoogle Scholar
  86. Swanborn PG: Sociaal-wetenschappelijk onderzoek en de samenleving. Methoden van sociaal-wetenschappelijk onderzoek. Edited by: Swanborn PG. 1994, Meppel: Boom, 377-411.Google Scholar
  87. Polit DE, Beck CT: Nursing Research. Principles and methods. 2004, Philadelphia, Baltimore, New York, London: Lippincott Wlliams & Wilkins, 7Google Scholar
  88. About the CCMO. [http://www.ccmo-online.nl/main.asp?pid=1%26taal=1]
  89. Chalmers A: Wat heet wetenschap. 2003, Amsterdam: BoomGoogle Scholar
  90. Van Raan AFJ: Advanced bibliometric methods as quantitative core of peer review based evaluation and foresight exercises. Scientometrics. 1996, 36: 397-420. 10.1007/BF02129602.Google Scholar
  91. Rothman KJ, Poole C: Science and policy making. Am J Public Health. 1985, 75: 340-341. 10.2105/AJPH.75.4.340.PubMedPubMed CentralGoogle Scholar
  92. Brownson RC, Royer C, Ewing R, McBride TD: Researchers and policymakers: travelers in parallel universes. Am J Prev Med. 2006, 30: 164-172. 10.1016/j.amepre.2005.10.004.
  93. Hoeijmakers M, De Leeuw E, Kenis P, De Vries NK: Local health policy development processes in the Netherlands: an expanded toolbox for health promotion. Health Promot Int. 2007, 22: 112-121. 10.1093/heapro/dam009.
  94. Atwood K, Colditz GA, Kawachi I: From public health science to prevention policy: placing science in its social and political contexts. Am J Public Health. 1997, 87: 1603-1606. 10.2105/AJPH.87.10.1603.
  95. De Leeuw E, McNess A, Stagnitti K, Crisp B: Acting at the Nexus. Integration of research, policy and practice. 2007, Geelong, Australia
  96. Nutbeam D: Getting evidence into policy and practice to address health inequalities. Health Promot Int. 2004, 19: 137-140. 10.1093/heapro/dah201.
  97. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS: Evidence based medicine: what it is and what it isn't. BMJ. 1996, 312: 71-72.
  98. Canadian Health Service Research Foundation: Weighing up the evidence. Making evidence-informed guidance accurate, achievable, and acceptable. 2006, Ottawa: CHSRF
  99. Fafard P: Evidence and healthy public policy: Insights from health and political sciences. 2008, Québec: Canadian Policy Research Networks & National Collaborating Centre for Healthy Public Policy
  100. Lavis JN, Oxman AD, Moynihan R, Paulsen EJ: Evidence-informed health policy 1 - Synthesis of findings from a multi-method study of organizations that support the use of research evidence. Implement Sci. 2008, 3: 53-10.1186/1748-5908-3-53.
  101. Muir Gray JA: Evidence based policy making. BMJ. 2004, 329: 988-989. 10.1136/bmj.329.7473.988.
  102. Rimer BK, Glanz DK, Rasband G: Searching for evidence about health education and health behavior interventions. Health Educ Behav. 2001, 28: 231-248. 10.1177/109019810102800208.
  103. Green LW: Public health asks of systems science: to advance our evidence-based practice, can you help us get more practice-based evidence?. Am J Public Health. 2006, 96: 406-409. 10.2105/AJPH.2005.066035.
  104. Green LW, Glasgow RE: Evaluating the relevance, generalization, and applicability of research: issues in external validation and translation methodology. Eval Health Prof. 2006, 29: 126-153. 10.1177/0163278705284445.
  105. Raphael D, Bryant T: The state's role in promoting population health: public health concerns in Canada, USA, UK, and Sweden. Health Policy. 2006, 78: 39-55. 10.1016/j.healthpol.2005.09.002.
  106. Kickbusch I, McCann W, Sherbon T: Adelaide revisited: from healthy public policy to Health in All Policies. Health Promot Int. 2008, 23: 1-4. 10.1093/heapro/dan006.
  107. Horstman K, Houtepen R: Worstelen met gezond leven. Ethiek in de preventie van hart- en vaatziekten [Struggling with healthy living. Ethics in the prevention of cardiovascular disease]. 2005, Amsterdam: Het Spinhuis
  108. Commers MJ, Gottlieb N, Kok G: How to change environmental conditions for health. Health Promot Int. 2006, 22 (1): 80-87. 10.1093/heapro/dal038.
  109. Schmid TL, Pratt M, Howze E: Policy as intervention: environmental and policy approaches to the prevention of cardiovascular disease. Am J Public Health. 1995, 85: 1207-1211. 10.2105/AJPH.85.9.1207.
  110. Yancey AK, Lewis LB, Sloane DC, Guinyard JJ, Diamant AL, Nascimento LM, McCarthy WJ: Leading by example: a local health department-community collaboration to incorporate physical activity into organizational practice. J Public Health Manag Pract. 2004, 10: 116-123.
  111. Keijsers J, Paulussen T, Peters L, Fleuren M, Lammers F: Kennis beter benutten: informatiegedrag van nationale beleidsmakers [Making better use of knowledge: the information behaviour of national policy makers]. 2005, Woerden: NIGZ and TNO
  112. Lipsey MW: The challenges of interpreting research for use by practitioners: comments on the latest products from the Task Force on Community Preventive Services. Am J Prev Med. 2005, 28: 1-3. 10.1016/j.amepre.2004.09.026.
  113. Innvaer S, Vist G, Trommald M, Oxman A: Health policy-makers' perceptions of their use of evidence: a systematic review. J Health Serv Res Policy. 2002, 7: 239-244. 10.1258/135581902320432778.
  114. Steckler A, McLeroy K: The importance of external validity. Am J Public Health. 2008, 98: 9-10. 10.2105/AJPH.2007.126847.
  115. Lomas J: Using 'linkage and exchange' to move research into policy at a Canadian foundation. Health Aff (Millwood). 2000, 19: 236-240. 10.1377/hlthaff.19.3.236.
  116. Rogers EM: Diffusion of innovations. 2003, New York: Free Press
  117. Brownson RC, Fielding JE, Maylahn CM: Evidence-Based Public Health: A Fundamental Concept for Public Health Practice. Annu Rev Public Health. 2009, 30: 175-201. 10.1146/annurev.publhealth.031308.100134.
  118. Brownson RC, Simoes EJ: Measuring the impact of prevention research on public health practice. Am J Prev Med. 1999, 16: 72-79. 10.1016/S0749-3797(99)00014-8.
  119. Raad voor Gezondheidsonderzoek: Advies Werkplaatsfunctie buiten het academisch ziekenhuis [Advisory report: the research workplace function outside the academic hospital]. 2000, Den Haag: RGO Publicatie
  120. Raad voor Gezondheidsonderzoek: Advies kennisinfrastructuur public health: kennisverwerving en kennistoepassing [Advisory report: the public health knowledge infrastructure: knowledge acquisition and knowledge application]. 2003, Den Haag: RGO Publicatie

Copyright

© Jansen et al; licensee BioMed Central Ltd. 2010

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (<url>http://creativecommons.org/licenses/by/2.0</url>), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
