Priority setting to support a public health research agenda: a modified Delphi study with public health stakeholders in Germany

Abstract

Background

Research priority setting (RPS) studies are necessary to close the significant gap between the scientific evidence produced and the evidence stakeholders need. Their findings can make resource allocation in research more efficient. However, no general framework for conducting an RPS study among public health stakeholders exists. RPS studies in public health are rare and no such study has been previously conducted and published in Germany. Therefore, we aimed to investigate which research topics in public health are prioritised by relevant stakeholders in Germany.

Methods

Our RPS study consisted of a scoping stage and a Delphi stage each split into two rounds. Firstly, we invited members of the German Public Health Association to gather expert insights during two initial workshops. Next, we defined the relevant stakeholder groups and recruited respondents. Thereafter, we collected research topics and assessment criteria with the respondents in the first Delphi round and aggregated the responses through content analysis. Finally, we asked the respondents to rate the research topics with the assessment criteria in the second Delphi round.

Results

In total, 94 of the 140 invited public health organisations nominated 230 respondents for the Delphi study, of whom almost 90% participated in both Delphi rounds. We compiled a comprehensive list of 76 research topics that were rated and ranked by several assessment criteria. To ensure comparability, we split the research topics into two types: substantive research topics and methodological-theoretical research topics. For both types, the respective top five ranked research topics hardly differed between public health researchers and public health practitioners. However, clear differences exist between the stakeholder groups in the priority ranking of many research topics outside the top priorities.

Conclusions

This research demonstrates that it is possible, with limited resources, to prioritise research topics for public health at the national level involving a wide range of pertinent stakeholders. The results can be used by research funding institutions to initiate calls for research projects with an increased relevance for health and/or scientific progress.

Background

The COVID-19 pandemic has made the importance of high-quality evidence abundantly clear to policy-makers. Hence, the pressure on policy-makers to gather and assess all available evidence when making decisions is increasing [1]. However, the process of including scientific evidence in public health decision-making has so far not been fully systematic and is complicated by barriers such as specific contexts and traditions, political priorities, individual beliefs and preferences, social values, and available resources [2,3,4].

Research also shows a significant gap between the scientific evidence that is produced by researchers and the actual scientific evidence demanded by policy-makers and other stakeholders [5,6,7,8]. Identifying what scientific evidence different groups of stakeholders prioritise may help to bridge this gap [7,8,9]. A promising yet underused approach is to involve stakeholders systematically in a structured priority setting process to ensure that their needs are accommodated by the produced evidence [10,11,12,13,14].

Funders of public health research have to decide which research projects to support while facing competing demands and scarce resources [15,16,17,18]. However, decision-making about which research should be conducted first is often not evidence-based [19]. Without rigorous research priority setting (RPS), funders risk that research topics are chosen arbitrarily or determined based on subjective goals [20,21,22,23]. Hence, RPS should be conducted in a structured manner to allow for better-informed decisions regarding the direction of future research investments.

Research priority setting

Although several frameworks for priority setting in health research have been suggested [14, 16, 17, 24,25,26,27,28,29,30], no single established framework exists that fits all RPS purposes, not least because of the varying aims and target groups in RPS [25, 31, 32]. Nevertheless, several studies on health-related RPS agree that a best-practice RPS is a multi-stage process combining multiple methodological approaches [15, 29, 31].

Most notably, many studies have incorporated Delphi-like techniques in RPS [33]. The Delphi technique usually consists of two or more rounds in which a panel of experts can give their opinions on an issue. In subsequent Delphi rounds, they are given anonymous, controlled feedback on the results of the previous round and are encouraged to reflect on, reassess and revise their opinions and judgements if needed [34,35,36]. The Delphi technique is especially appropriate in health research when shared preferences from multiple stakeholders are sought and when the available evidence is incomplete [34, 35, 37, 38]. The technique rests on the assumption that a result derived from a larger panel of experts with various views will be better grounded than a conclusion reached by a single stakeholder [34]. Furthermore, the Delphi technique offers additional advantages, such as the straightforward incorporation of various metrics-based techniques and methods for consensus building [35, 38]. It also does not require face-to-face interaction, which reduces time and cost. More importantly, it avoids potential drawbacks of in-person approaches, in which social pressure and/or the dominance of individuals within groups can lead to inaccurate estimations [31, 39, 40].

Moreover, rating of research topics should preferably be done using more than a single assessment criterion to measure different dimensions of why specific topics are prioritised [23, 29, 31].

RPS studies focussing on the field of public health are rare: selected public health topics are occasionally considered in RPS studies that focus on health in general or on a (sub)field that overlaps with public health (e.g. health services [41], nursing [42], obesity [43] or mental health [44]). We identified three RPS studies [40, 45, 46] that focused specifically on the full range of public health topics, but only for the purpose of prioritising topics for systematic literature reviews for the Cochrane Collaboration.

In Germany, where public institutions spend more than EUR 3 billion per year on health research [47], no RPS study specifically encompassing all public health research topics has been published so far.

Objectives

Our main objective was to investigate which research topics in public health should be prioritised in Germany according to a wide range of stakeholders by conducting a structured priority setting study. We also wanted to identify potential similarities and differences between stakeholder groups.

Methods

Our RPS study (see Fig. 1) was conceptually split into a scoping stage and a Delphi stage and consisted of four rounds:

  • Scoping round I. Gathering expert insights during two initial workshops

  • Scoping round II. Establishing the framework for the study

  • Delphi round I. Collecting research topics and assessment criteria with stakeholders

  • Delphi round II. Rating the collected research topics with the collected assessment criteria

Fig. 1 The prioritisation process

In the scoping stage, we conducted two initial workshops, established an advisory board, and set the framework for the remainder of the study. For the Delphi stage, we administered two rounds of online questionnaires.

Scoping round I—initial workshops

We conducted our study in collaboration with the German Public Health Association (DGPH). Participants of two workshops, organised by the DGPH in 2015 and 2016, discussed and proposed potential research topics that should be covered in an RPS study. The invitations for the workshops were sent to all members of the DGPH. Additionally, individuals with clear public health expertise, mainly established researchers in Germany and representatives of German research funding agencies, were invited. Attendance at the workshops varied between 40 and 50 individuals.

The first workshop, entitled “Priority topics for public health research”, was a 5-h exploratory roundtable discussion in which members of the DGPH discussed which research topics in public health need the most attention. The workshop was publicly announced and open to all members of the DGPH. The participants could propose topics or broader research areas for different domains of public health and could critically reflect on the topics proposed by other participants. The exploratory results were recorded in the minutes and distributed to the members of the DGPH by email.

In order to frame the discussions at this stage regarding potential research topics, we defined domains, i.e., broader public health research areas, to group similar research topics together and ensure comparability. We also used the domains as guidance for the respondents in the first Delphi round, allowing them to consider the different broader public health research areas. The domains were:

  • Research on current/contemporary issues

  • Effectiveness research

  • Policy research

  • Implementation and/or participatory research

  • Theories and theoretical concepts

  • Methodological research

  • Research on indicators

The second workshop, lasting 2 h, took place during the annual conference of the German Society for Social Medicine and Prevention (DGSMP). In this plenary discussion, the results of the first workshop and the planning for our RPS study were presented and discussed with representatives of German research funding agencies.

Scoping round II—establishing framework for the study

The author team approached established researchers and practitioners in the field of public health in Germany to form an advisory board. The advisory board consisted of five researchers and practitioners in total (see also the acknowledgements below), with both subject matter and methodological expertise [25, 31, 48]. The advisory board members were not allowed to participate in the following Delphi stage.

The advisory board members reviewed the design and the proposed analyses of the study, particularly with regard to the scoping stage and the questionnaire design. The advisory board was especially helpful in judging whether a research topic could reasonably be considered within the realm of public health research. For this study, we defined public health research as encompassing population-level and health systems research, thereby excluding research topics that are predominantly clinical or biomedical.

Moreover, we identified the stakeholder groups that are active in German public health and are therefore relevant for inclusion in our RPS study [25, 44, 49]:

  • Public health research and education

  • Public health administration and policy-making

  • Non-governmental organisations (NGO) and representatives of the public

  • Representatives of health professionals and health care institutions

  • Self-governing associations of health providers and statutory health insurance

The stakeholder groups represent different professional fields whose members are either producers, facilitators, or consumers of public health research in Germany. We then identified specific organisations that fall within each of the stakeholder groups. We used this list of organisations for the recruitment of individual respondents for the Delphi stage (see Additional file 1 for a full list).

Delphi round I—collecting research topics and assessment criteria

In the first Delphi round, we distributed an online questionnaire to individual respondents. The questionnaire presented the initial list of research topics proposed during the workshops and the assessment criteria proposed during the scoping stage. From this list, the respondents could vote on which research topics should be included and which assessment criteria should be used for rating the research topics in the second Delphi round. Moreover, the respondents could propose further research topics and assessment criteria.

Thereafter, we coded and aggregated all proposed research topics using content analysis [50,51,52] to minimize redundancies and overlaps. The aggregated research topics should be interpreted as broader research themes that can be used for, e.g., setting the focus of future calls for proposals.

Delphi round II—in-depth rating of research topics with assessment criteria

In the second Delphi round, we presented the final list of research topics based on the first Delphi round. For each research topic, we also reported how many respondents had voted for it during the previous round [15, 29, 53]. Subsequently, the respondents were asked to rate each research topic against the assessment criteria on a 4-point Likert scale (with the additional answer option “I cannot assess this”). In order to limit the workload for the respondents, we presented each respondent with a random sample of only approximately 50% of the research topics.
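The randomised allocation of roughly half of the research topics to each respondent can be illustrated with the short sketch below. The topic labels and respondent IDs are hypothetical, and the study itself relied on the randomisation features of the survey software, so this is only an illustrative assumption of how such an allocation could be drawn:

```python
import random

def allocate_topics(topics, respondent_ids, share=0.5, seed=None):
    """Draw an individually randomised subset and order of topics per respondent."""
    rng = random.Random(seed)
    n_per_respondent = max(1, round(len(topics) * share))
    # random.sample returns the subset already in random order,
    # so subset selection and presentation order are randomised at once
    return {rid: rng.sample(topics, n_per_respondent) for rid in respondent_ids}

# Hypothetical use: 76 topic labels and 203 respondent IDs
topics = [f"Topic {i + 1}" for i in range(76)]
respondents = [f"R{i + 1}" for i in range(203)]
plan = allocate_topics(topics, respondents, seed=42)
print(len(plan["R1"]))  # -> 38 topics for the first respondent
```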

Recruitment

We based our strategy for recruiting respondents for the Delphi stage on a previous priority setting study in the field of health services research [41]: we invited public health relevant organisations, i.e. stakeholders, to nominate individuals from within their own organisation as respondents for the study. We asked the organisations to nominate individuals whom they believed to be most suitable for assessing and prioritising research topics in public health. We pre-defined this as somebody with multiple years of experience working in the field of public health who is either a researcher or a user of research.

In total, we invited 140 organisations to nominate up to three respondents each. Table 1 shows the number of organisations by stakeholder group (see Additional file 1 for a complete list of the invited organisations).

Table 1 Respondents during the Delphi stage by stakeholder group

Of the 140 invited organisations, 94 (67%) participated and nominated in total 230 individuals. The participation rate in the first Delphi round was 87% (201 respondents) and in the second Delphi round 88% (203 respondents).

Table 1 also shows the professional area to which the respondents assigned themselves: approx. 50% of the respondents work in research and/or higher education. The other respondents assigned themselves to the following stakeholder groups: administration and/or politics, representatives of the general public, self-governing associations of health providers and statutory health insurance, and healthcare professionals. The respondents had an average of 16 years of experience in the field of public health.

Results

We will report the results of each Delphi round in turn.

First Delphi round: collecting research topics and assessment criteria with stakeholders

The respondents voted for research topics and assessment criteria from the initial expert list and could also propose further research topics and assessment criteria for inclusion in the second Delphi round. In total, the respondents proposed 529 research topics and 50 assessment criteria which we aggregated through a content analysis into 76 sufficiently distinct research topics and 6 assessment criteria, respectively.

The content analysis of the assessment criteria revealed, however, two types of research topics that could not be assessed meaningfully with the same set of criteria: Substantive research topics and methodological-theoretical research topics. Substantive research topics focused on specific thematic contents, for example “Climate change and health” or “Health literacy”. Methodological-theoretical research topics focused on the application or development of specific research methods and paradigms as well as on the development of theories and concepts for public health research, for example “Participation in health research” or “Interdisciplinary research”. This split resulted in 46 substantive research topics and 30 methodological-theoretical research topics, each with three associated assessment criteria. The operational definitions of the 6 assessment criteria are shown in Box 1 (see Additional file 2a and b for the final list of research topics based on the content analysis).

Second Delphi round: rating of research topics

In the second Delphi round the respondents rated the research topics using the assessment criteria on a 4-point Likert scale, ranging from 1 “totally agree” to 4 “totally disagree”. Alternatively, the respondents could also select the option “I cannot assess this”.

Table 2 shows the average rating by assessment criterion. Methodological-theoretical research topics received on average less favourable (i.e. numerically higher) ratings than substantive research topics. Furthermore, for the two criteria that required broader insight into existing public health research (criterion S3 “Insufficient research” and criterion M3 “Potential for innovative insights”), the share of respondents who answered “I cannot assess this” was highest.

Table 2 Assessment results per criterion of both substantive and methodological-theoretical research topics

This table shows the results of the assessments of all the respondents per criterion. The respondents had to rate the research topics according to the assessment criteria on a 4-point Likert scale, ranging from 1 “totally agree” to 4 “totally disagree”. Alternatively, the respondents could also select the option “I cannot assess this”.

We asked each respondent to assess approximately 50% of all the research topics in order to reduce the workload. The selection and order of the research topics that each respondent had to rate was randomised. Therefore, the total number of respondents that assessed a certain topic differs slightly.
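A minimal sketch of how such per-criterion averages can be computed while excluding the “I cannot assess this” answers is shown below; the ratings are hypothetical and this is not the authors' analysis code:

```python
from statistics import mean

# Hypothetical ratings of one research topic on one criterion:
# 1 = "totally agree" ... 4 = "totally disagree", None = "I cannot assess this"
ratings = [1, 2, 2, None, 3, 1, None, 2]

assessable = [r for r in ratings if r is not None]
average_rating = mean(assessable)                        # average over assessable answers only
cannot_assess_share = ratings.count(None) / len(ratings)

print(f"mean rating: {average_rating:.2f}, "
      f"'I cannot assess this': {cannot_assess_share:.0%}")  # -> mean rating: 1.83, 25%
```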

Substantive research topics

As shown in Table 3, the average overall rating score for all substantive research topics was 1.96, ranging from 1.42 for the research topic “Interventions in everyday life” (ranked highest) to 2.48 for the research topic “Accidents, violence, self-harm” (ranked lowest).

Table 3 Results of the rating of substantive research topics

In addition to the rating score, we also provided the rank of each research topic. For some research topics, the ranking per criterion showed only slight differences, whereas for other research topics we found larger differences. For example, the research topic “Social inequality and injustice” was not ranked very high according to the assessment criterion “Insufficient research” (20th out of 46); however, according to the assessment criteria “Improving health” and “Health justice”, the same research topic was ranked very high (2nd and 1st, respectively).

Methodological-theoretical research topics

Table 4 shows the rating and ranking of methodological-theoretical research topics separately for the three assessment criteria (criteria M1 to M3). The average overall rating score for all methodological-theoretical research topics was 2.07, ranging from 1.64 for the research topic “Interdisciplinary research” (ranked highest) to 2.46 for the research topic “Mobility concepts” (ranked lowest). For all three criteria, the highest-ranked research topics were virtually identical. For other (not highly ranked) research topics we found larger ranking differences between the criteria. For example, the research topic “Comparative effectiveness research (CER)” was ranked 9th with the assessment criterion “Potential for innovative insights”. With the assessment criteria “Impact on public health research” and “Impact on public health practice”, the same research topic was ranked much lower (17th and 25th, respectively).

Table 4 Results of the rating of methodological-theoretical research topics

Comparison by stakeholder groups

As shown in Table 1, approximately 50 percent of the respondents belonged to the "Public health research and/or higher education" group. The other stakeholder groups, "Representatives of the general public", "Administration and/or politics", "Self-governing associations of health providers and statutory health insurance" and "Healthcare professionals" were grouped together and defined as "Public health practitioners". This allowed for a comparison between these two major stakeholder groups.

Additional file 3a and b show the ranking of substantive and methodological-theoretical research topics, respectively, by stakeholder group. For the majority of the substantive research topics, we did not see a major difference between the two stakeholder groups in the ranking. For nine research topics, however, there was a very large difference of ten or more positions in the ranking: The research topics "Digitalisation and health", "Nutrition and health (cultural, physiological, social)", and "Infectious diseases and vaccination" showed the largest differences (all ranked 14 places higher by public health practitioners as compared to public health researchers). Among the methodological-theoretical research topics (see Additional file 3b), we saw a very large difference (ten or more positions) in the ranking of seven methodological-theoretical research topics, with the research topics "Process evaluation", "Causal analyses / Experiments", and "Modelling studies / Decision analysis" showing the largest difference (up to 15 ranking positions difference).
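The rank comparison between the two stakeholder groups can be reproduced in a few lines; the sketch below uses hypothetical long-format ratings and made-up topic labels rather than the study data:

```python
import pandas as pd

# Hypothetical long-format data: one row per respondent x topic (rating 1-4, lower = higher priority)
df = pd.DataFrame({
    "topic":  ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "group":  ["researcher", "practitioner", "practitioner",
               "researcher", "researcher", "practitioner",
               "researcher", "practitioner", "researcher"],
    "rating": [1.5, 2.0, 1.7, 2.2, 2.1, 1.4, 1.8, 1.9, 2.0],
})

# Mean rating per topic within each group, rank within each group,
# then compare the ranking positions between the groups
means = df.groupby(["group", "topic"])["rating"].mean().unstack("group")
ranks = means.rank(method="min")                      # rank 1 = top priority
rank_difference = ranks["researcher"] - ranks["practitioner"]
print(rank_difference.sort_values())                  # positive: practitioners rank the topic higher
```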

As judging a difference in ranking as “large” or “very large” may seem somewhat arbitrary, we also calculated the interrater reliability by stakeholder group, i.e., public health researchers vs. public health practitioners. The interrater reliability can be quantified by calculating an agreement coefficient (AC). We calculated Gwet’s AC because it allows for multiple raters and multiple topics [54, 55]. The interpretation is straightforward: values close to 1 indicate almost total agreement among the respondents, whereas values close to 0 indicate that agreement is mostly due to chance. The analysis was conducted using Stata version 17 [56, 57].
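For illustration, a minimal Python sketch of the unweighted AC1 variant of Gwet's agreement coefficient is shown below; the study itself used the Stata implementation [56], which also supports weighted variants, and the data here are hypothetical:

```python
import numpy as np

def gwet_ac1(ratings, categories):
    """Unweighted Gwet's AC1 for multiple raters and items.

    ratings: 2D array (items x raters); np.nan marks a missing rating
             (e.g. an "I cannot assess this" answer).
    categories: possible rating values, e.g. [1, 2, 3, 4].
    """
    ratings = np.asarray(ratings, dtype=float)
    n_items = ratings.shape[0]
    q = len(categories)

    # r[i, k]: number of raters who put item i into category k
    r = np.array([[np.sum(ratings[i] == c) for c in categories]
                  for i in range(n_items)])
    raters_per_item = r.sum(axis=1)          # missing ratings are excluded

    # Observed agreement, over items rated by at least two raters
    usable = raters_per_item >= 2
    pa = np.mean((r[usable] * (r[usable] - 1)).sum(axis=1)
                 / (raters_per_item[usable] * (raters_per_item[usable] - 1)))

    # Chance agreement based on the average prevalence of each category
    pi_k = np.mean(r / raters_per_item[:, None], axis=0)
    pe = np.sum(pi_k * (1 - pi_k)) / (q - 1)

    return (pa - pe) / (1 - pe)

# Hypothetical example: 5 topics rated by 4 raters on the 4-point scale
example = [[1, 1, 2, 1],
           [2, 2, 2, 3],
           [1, 2, 1, np.nan],   # one "I cannot assess this" answer
           [3, 3, 4, 3],
           [2, 1, 2, 2]]
print(round(gwet_ac1(example, [1, 2, 3, 4]), 3))  # fair agreement for these made-up data
```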

In Table 5 we report the degree of agreement among respondents for the two major stakeholder groups (public health researchers vs. public health practitioners). For all three substantive assessment criteria, the agreement of raters within both stakeholder groups can be considered moderate (Gwet's AC between 0.41 and 0.60). The agreement was highest for criterion S1 (“Improving health”). For criterion S3 (“Insufficient research”), the agreement among public health researchers and public health practitioners was the lowest of the substantive assessment criteria, and the difference between the two stakeholder groups was no longer statistically significant.

Table 5 Agreement coefficient† by stakeholder groups (public health researchers vs. public health practitioners) for each criterion

For all three methodological-theoretical rating criteria, the agreement among public health researchers can be considered moderate (Gwet's AC between 0.41 and 0.60). Among public health practitioners, however, there was only fair agreement (Gwet's AC between 0.21 and 0.40). Moreover, the coefficients for the methodological-theoretical assessment criteria were slightly lower than for the substantive assessment criteria. For criterion M2 (“Impact on public health practice”), there was no statistically significant difference between the two stakeholder groups.

Discussion

Funding decisions in public health research are complex and multiple criteria play a role. Consequently, priority setting becomes a challenge that policy-makers cannot solve easily [23, 58]. Therefore, policy-makers need structured and transparent RPSs that take all relevant criteria into account [25, 29, 59].

We conducted an RPS study with the aim of investigating which research topics in public health should be prioritised in Germany according to different stakeholders. To the best of our knowledge, this is the first structured and transparent RPS study for public health research topics in Germany.

Improved transparency in RPS can strengthen the acceptability of the prioritised research topics, not least because research efforts and funding can be directed towards research that is relevant to all stakeholders [14]. Involving public health researchers and policy-makers simultaneously in RPS studies is not rare; however, Cartier and colleagues [60] demonstrated that other stakeholder groups, such as public and advocacy organisations, are not frequently included. By using a modified Delphi technique, we involved a wide range of stakeholders in order to investigate which research topics should be prioritised in public health in Germany.

We used a multi-stage approach to balance two conflicting aims: (i) eliciting proposals from a wide range of stakeholders while (ii) ensuring that the proposals remain within the realm of public health research. To support the latter, we defined domains, each standing for a broader public health research area, and created an initial list of relevant research topics by conducting expert workshops in cooperation with the DGPH.

In the first Delphi round, we collected additional proposals for research topics from the participating stakeholders. The findings of the first Delphi round clearly demonstrated the need to split the research topics into two groups (46 substantive vs. 30 methodological-theoretical research topics). We applied a different set of assessment criteria to these two types of research topics in the second Delphi round, as methodological-theoretical research topics cannot be assessed by the same criteria as substantive research topics.

The overall rating score for a particular research topic is the average of the rating scores for the three corresponding assessment criteria. No research topic received an overall score worse (i.e. higher) than 2.5 on the 4-point scale. That is, on average no research topic was considered unimportant.
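As a worked example with hypothetical criterion scores (not values from the study), the overall score is simply the unweighted mean of the three criterion means:

```python
# Hypothetical criterion means for one substantive research topic (criteria S1-S3)
criterion_scores = {"S1": 1.6, "S2": 1.8, "S3": 2.2}

overall_score = sum(criterion_scores.values()) / len(criterion_scores)
print(f"overall rating score: {overall_score:.2f}")  # -> 1.87, well within the "agree" half of the scale
```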

We found large differences in the rating and ranking of the research topics when differentiating the results along the three assessment criteria. These results corresponded to our expectation that the assessment of a particular research topic depends on the criterion applied (e.g. improving population health vs. insufficient research). This shows that the assessment criteria measure distinct dimensions of a research topic and can give an indication of why a particular research topic is prioritised high or low. Although many studies have highlighted the importance of selecting multiple assessment criteria that fit the specific context and that can sufficiently discriminate between the assessments of different research topics [25, 29, 31, 61], most RPS studies do not involve stakeholders in the selection of relevant assessment criteria [60]. The use of multiple assessment criteria also makes it easier for the respondents to rate the research topics, as it clarifies exactly which aspect of a research topic they are rating.

The descriptive comparison of the priority ranking of the research topics by stakeholder group showed that both public health practitioners and public health researchers ranked predominantly similar research topics as top priorities. However, it should also be noted that clear differences exist between the two stakeholder groups in the priority ranking of many research topics outside the top priorities. Moreover, the degree of agreement among the respondents on the importance of research topics differed by stakeholder group: public health practitioners showed a lower degree of agreement than public health researchers on all criteria. This overall lower degree of agreement among public health practitioners might result from the variation within this stakeholder group, as several more narrowly defined stakeholder groups were aggregated and labelled “public health practitioners”. Further research is needed to investigate how different and/or more narrowly defined stakeholder groups might produce differing results in an RPS. However, the group sizes of the more narrowly defined stakeholder groups in our RPS study were too small to allow a comparison between them.

Public health is a complex and multifaceted field whose research is mainly conducted in real-life settings, which makes it a challenge for researchers and other stakeholders in public health research to develop a common understanding of what research is most relevant or important [62]. The top priorities derived from our study (see Tables 3 and 4) also clearly highlight the importance of inter- or transdisciplinary research in complex, real-life settings.

Funders in Germany can use the results of this RPS study to discuss future calls in a transparent and structured manner. The overall results show which research topics are prioritised highest by the respective stakeholder groups, while the assessment criteria also help to explain why particular topics are rated lower or higher. A worthwhile next research step would be to investigate if and how funders and other policy-makers in Germany make use of the findings of this RPS study.

Strengths and limitations

This is, to our knowledge, the first RPS study for public health research in Germany. So far, no widely accepted standards exist on how to conduct such an RPS. Hence, our work was in many ways novel, but limitations have to be acknowledged.

We recruited respondents for the Delphi stage by identifying relevant organisations in the field of public health and asking them to nominate respondents. This approach may seem onerous, but it has been applied successfully before [41] and had several advantages: the workload and cost of recruiting were relatively low, and the approached organisations provided the contact details of the respondents. Furthermore, we had a very high participation rate (close to 90%) in both Delphi rounds compared with other RPS studies that did not use this specific recruitment approach for their Delphi stage [13, 63, 64]. Also, the approached organisations nominated respondents they considered to have sufficient expertise in public health to participate meaningfully in an RPS study.

The validity of the sample is limited by the specific public health organisations we identified in our search (see Additional file 1) and that nominated respondents. As organisations could only nominate up to three respondents, no single organisation could influence the rating unduly. We explicitly refrained from asking the organisations to form and present an official position; we merely asked their nominees to propose and prioritise research topics based on their expertise. In the absence of a readily available list of all relevant public health experts and/or stakeholders in Germany, the only alternative would have been an open call to participate in the Delphi stage, with all its disadvantages (e.g., unclear participation rates, unclear biases, and uncertain expertise of respondents).

Furthermore, we did not collect data on the context of the respondents within their organisations, such as position, decision-making power and organisational culture, although these factors could have an influence on the actual responses. However, we believe that this was necessary in order to ensure the anonymity of the respondents and to limit the time respondents had to spend on participating in our modified Delphi, thereby ensuring a sufficiently high participation rate. Future research is needed to investigate how such contextual factors influence RPS.

Our RPS approach incorporated a modified Delphi technique, i.e., the results of the rating in the second Delphi round were not discussed by the respondents in an additional follow-up round. We modified the Delphi process to reduce the workload for the respondents and to be able to include more stakeholders. Therefore, the results of our Delphi study should be interpreted as an indication of preferences rather than a consensus. On the other hand, this modified process increased the involvement of stakeholders who otherwise might be marginalised in the discussion of a full Delphi study, or who would not be invited at all as one of the top experts for a Delphi study [35, 65].

The open nature of the first Delphi round, i.e., the possibility to propose additional research topics, required us to conduct a content analysis to aggregate the large number of redundant or overlapping proposals (529 research topics) into a practicable number of sufficiently distinct research topics. However, an aggregation of data often means a loss of information. Furthermore, we are aware that the formulation of the research topics through the content analysis is based on partly subjective assessments by the author team; this is, however, inherent to content analysis, a well-established method for condensing vast amounts of qualitative data [51, 52, 66,67,68].

Despite wide agreement that distinct criteria should be applied to rate the different dimensions of a research topic, no standard set of criteria exists. Based on the input of the respondents, we were able to develop and apply six distinct assessment criteria. However, further research is needed to investigate whether additional or different assessment criteria are needed. Moreover, it is not clear whether generic assessment criteria exist that can be used in any RPS study or whether they must be specific to the research field, e.g., public health vs. health services research. Nevertheless, our study demonstrated that for public health research topics, different assessment criteria should be used depending on whether a research topic is substantive or methodological-theoretical.

As this is the first RPS study of its kind in Germany that considers all public health research domains, it is unclear how long the results of the study will remain sufficiently valid to inform policy-makers. This study was conducted before the outbreak of the COVID-19 pandemic in 2020, and an update may already yield different results. However, we do believe that many of the proposed research topics, such as “Health in all policies” or “Health literacy promotion”, will remain relevant for the foreseeable future. A replication of the study after a few years would help to distinguish which research topics are long-term priorities and which are rather short-term ones.

Funders and other research teams could easily use our study as a template for their own RPS studies in other fields. Although conducting a structured RPS study like ours is initially a time-intensive exercise, we believe that a follow-up study would not require much additional work.

Conclusions

We conducted a structured RPS study involving a wide range of stakeholders in the field of public health in Germany. Our study demonstrates how a multi-stage RPS study, using an anonymous and online modified Delphi technique in which stakeholders rate and rank a comprehensive list of research topics, can be implemented with limited resources. Being the first structured RPS study for public health that can be replicated easily, it may lay important groundwork for future RPS studies in public health and related fields of health research.

Availability of data and materials

Available upon request.

Abbreviations

RPS:

Research priority setting

DGPH:

German Public Health Association (Deutsche Gesellschaft für Public Health)

NGO:

Non-governmental organisation

References

  1. Armstrong R, Waters E, Dobbins M, Anderson L, Moore L, Petticrew M, et al. Knowledge translation strategies to improve the use of evidence in public health decision making in local government: intervention design and implementation plan. Implement Sci. 2013;8:121.

  2. ECDC. The use of evidence in decision-making during public health emergencies. Stockholm: ECDC. 2019.

  3. van de Goor I, Hämäläinen RM, Syed A, Juel Lau C, Sandu P, Spitters H, et al. Determinants of evidence use in public health policy making: Results from a study across six EU countries. Health Policy. 2017;121(3):273–81.

  4. Innvaer S, Vist G, Trommald M, Oxman A. Health policy-makers’ perceptions of their use of evidence: a systematic review. J Health Serv Res Policy. 2002;7(4):239–44.

  5. Ellen ME, Lavis JN, Sharon A, Shemer J. Health systems and policy research evidence in health policy making in Israel: what are researchers’ practices in transferring knowledge to policy makers? Health Res Policy Syst. 2014;12:67.

  6. Ginexi EM, Hilton TF. What’s next for translation research? Eval Health Prof. 2006;29(3):334–47.

  7. Haines A, Kuruvilla S, Borchert M. Bridging the implementation gap between knowledge and action for health. Bull World Health Organ. 2004;82(10):724–31; discussion 32.

  8. Lavis JN, Guindon GE, Cameron D, Boupha B, Dejman M, Osei EJ, et al. Bridging the gaps between research, policy and practice in low- and middle-income countries: a survey of researchers. CMAJ. 2010;182(9):E350–61.

  9. Lobb R, Colditz GA. Implementation science and its application to population health. Annu Rev Public Health. 2013;34:235–51.

  10. Bhaumik S, Rana S, Karimkhani C, Welch V, Armstrong R, Pottie K, et al. Ethics and equity in research priority-setting: stakeholder engagement and the needs of disadvantaged groups. Indian J Med Ethics. 2015;12(2):110–3.

  11. Kapiriri L, Tomlinson M, Chopra M, El Arifeen S, Black RE, Rudan I. Setting priorities in global child health research investments: addressing values of stakeholders. Croat Med J. 2007;48(5):618–27.

  12. Mitton C, Smith N, Peacock S, Evoy B, Abelson J. Public participation in health care priority setting: a scoping review. Health Policy. 2009;91(3):219–28.

  13. Nast I, Tal A, Schmid S, Schoeb V, Rau B, Barbero M, et al. Physiotherapy research priorities in Switzerland: views of the various stakeholders. Physiother Res Int. 2016;21(3):137-46.

  14. Tong A, Synnot A, Crowe S, Hill S, Matus A, Scholes-Robertson N, et al. Reporting guideline for priority setting of health research (REPRISE). BMC Med Res Methodol. 2019;19(1):243.

  15. Clavisi O, Bragge P, Tavender E, Turner T, Gruen RL. Effective stakeholder participation in setting research priorities using a Global Evidence Mapping approach. J Clin Epidemiol. 2013;66(5):496-502 e2.

  16. Sibbald S, Singer P, Upshur R, Martin D. Priority setting: what constitutes success? A conceptual framework for successful priority setting. BMC Health Serv Res. 2009;9(1):43.

  17. Lomas J, Fulop N, Gagnon D, Allen P. On being a good listener: setting priorities for applied health services research. Milbank Q. 2003;81(3):363–88.

  18. Smith N, Mitton C, Peacock S, Cornelissen E, MacLeod S. Identifying research priorities for health care priority setting: a collaborative effort between managers and researchers. BMC Health Serv Res. 2009;9:165.

  19. Ioannidis JPA. How to make more published research true. PLoS Med. 2014;11(10): e1001747.

  20. McGregor S, Henderson KJ, Kaldor JM. How are health research priorities set in low and middle income countries? A systematic review of published reports. PLoS ONE. 2014;9(9): e108787.

  21. Sabik LM, Lie RK. Priority setting in health care: lessons from the experiences of eight countries. Int J Equity Health. 2008;7:4.

  22. Saldanha IJ, Wilson LM, Bennett WL, Nicholson WK, Robinson KA. Development and pilot test of a process to identify research needs from a systematic review. J Clin Epidemiol. 2013;66(5):538–45.

  23. Baltussen R, Niessen L. Priority setting of health interventions: the need for multi-criteria decision analysis. Cost Effect Resour Allocat. 2006;4(1):14.

  24. James Lind Alliance. The James Lind Alliance Guidebook: Version 10. 2021. Available from: https://www.jla.nihr.ac.uk/jla-guidebook/downloads/JLA-Guidebook-Version-10-March-2021.pdf. Accessed 23 Aug 2023.

  25. Viergever RF, Olifson S, Ghaffar A, Terry R. A checklist for health research priority setting: nine common themes of good practice. Health Res Policy Syst. 2010;8(1):36.

  26. Nasser M, Welch V, Tugwell P, Ueffing E, Doyle J, Waters E. Ensuring relevance for cochrane reviews: evaluating processes and methods for prioritizing topics for cochrane reviews. J Clin Epidemiol. 2013;66(5):474–82.

  27. Dubois RW, Graff JS. Setting priorities for comparative effectiveness research: from assessing public health benefits to being open with the public. Health Aff. 2011;30(12):2235–42.

  28. Okello D, Chongtrakul P. A manual for research priority setting using the ENHR strategy. Geneva: Council on Health Research for Development (COHRED); 2000. p. 47.

  29. Rudan I, Gibson JL, Ameratunga S, El Arifeen S, Bhutta ZA, Black M, et al. Setting priorities in global child health research investments: guidelines for implementation of CHNRI method. Croat Med J. 2008;49(6):720–33.

  30. Ranson MK, Bennett SC. Priority setting and health policy and systems research. Health Res Policy Syst. 2009;7:27.

  31. Bryant J, Sanson-Fisher R, Walsh J, Stewart J. Health research priority setting in selected high income countries: a narrative review of methods used and recommendations for future practice. Cost Effect Resour Allocat. 2014;12(1):23.

  32. Oxman AD, Schunemann HJ, Fretheim A. Improving the use of research evidence in guideline development: 2. Priority setting. Health Res Policy Syst. 2006;4:14.

  33. Grill C. Involving stakeholders in research priority setting: a scoping review. Res Involv Engage. 2021;7(1):75.

  34. Niederberger M, Spranger J. Delphi technique in health sciences: a map. Front Public Health. 2020;8:457.

  35. Schneider P, Evaniew N, Rendon JS, McKay P, Randall RL, Turcotte R, et al. Moving forward through consensus: protocol for a modified Delphi approach to determine the top research priorities in the field of orthopaedic oncology. BMJ Open. 2016;6(5):e011780

  36. Barrett D, Heale R. What are Delphi studies? Evid Based Nurs. 2020;23(3):68–9.

  37. Merlin JS, Young SR, Azari S, Becker WC, Liebschutz JM, Pomeranz J, et al. Management of problematic behaviours among individuals on long-term opioid therapy: protocol for a Delphi study. BMJ Open. 2016;6(5): e011619.

  38. Ramirez AG, Chalela P, Gallion KJ, Green LW, Ottoson J. Salud America! Developing a National latino childhood obesity research agenda. Health Educ Behav. 2011;38(3):251–60.

  39. Dalkey NC. The Delphi method: an experimental study of group opinion. Santa Monica, CA: RAND Corporation; 1969.

  40. Hoekstra D, Mütsch M, Kien C, Gerhardus A, Lhachimi SK. Identifying and prioritising systematic review topics with public health stakeholders: a protocol for a modified Delphi study in Switzerland to inform future research agendas. BMJ Open. 2017;7(8): e015500.

  41. Schmitt J, Petzold T, Nellessen-Martens G, Pfaff H. Priorisierung und Konsentierung von Begutachtungs-, Förder- und Evaluationskriterien für Projekte aus dem Innovationsfonds: Eine multiperspektivische Delphi-Studie. Gesundheitswesen. 2015;77(08/09):570–9.

  42. Garcia AB, Cassiani SH, Reveiz L. A systematic review of nursing research priorities on health system and services in the Americas. Rev Panam Salud Publica. 2015;37(3):162–71.

  43. Iqbal H, McEachan RRC, West J, Haith-Cooper M. Research priority setting in obesity: a systematic review. J Public Health. 2021. https://doi.org/10.1007/s10389-021-01679-8

  44. Forsman AK, Wahlbeck K, Aaro LE, Alonso J, Barry MM, Brunn M, et al. Research priorities for public mental health in Europe: recommendations of the ROAMER project. Eur J Public Health. 2015;25(2):249–54.

  45. Doyle J, Waters E, Yach D, McQueen D, De Francisco A, Stewart T, et al. Global priority setting for cochrane systematic reviews of health promotion and public health research. J Epidemiol Community Health. 2005;59(3):193–7.

  46. Kingsland M, Barnes C, Doherty E, McCrabb S, Finch M, Cumpston M, et al. Identifying topics for future cochrane public health reviews. J Public Health (Oxford, England). 2021;44(4):e578–81.

  47. Rieks S, Gerhardus A. Health research funding in Germany. Bundesgesundheitsblatt Gesundheitsforschung Gesundheitsschutz. 2018;61(7):864–71.

  48. Barnieh L, Jun M, Laupacis A, Manns B, Hemmelgarn B. Determining research priorities through partnership with patients: an overview. Semin Dial. 2015;28(2):141–6.

  49. Reveiz L, Elias V, Terry RF, Alger J, Becerra-Posada F. Comparison of national health research priority-setting methods and characteristics in Latin America and the Caribbean, 2002–2012. Rev Panam Salud Publica. 2013;34(1):1–13.

  50. Corbin JM, Strauss AL. Basics of qualitative research: techniques and procedures for developing grounded theory. 4th ed. Los Angeles: Sage; 2015.

  51. Krippendorff K. Content analysis: an introduction to its methodology. Thousand Oaks: SAGE Publications; 2018.

  52. Hsieh H-F, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–88.

  53. Angood C, McGrath M, Mehta S, Mwangome M, Lung’aho M, Roberfroid D, et al. Research priorities to improve the management of acute malnutrition in infants aged less than six months (MAMI). PLoS Med. 2015;12(4): e1001812.

  54. Gwet KL. Handbook of inter-rater reliability, 4th ed. The definitive guide to measuring the extent of agreement among raters. Gaithersburg, Maryland: Advanced Analytics, LLC; 2014.

  55. Vanacore A, Pellegrino MS. Robustness of κ-type coefficients for clinical agreement. Stat Med. 2022;41(11):1986–2004.

  56. Klein D. Implementing a general framework for assessing interrater agreement in Stata. Stata J. 2018;18(4):871–901.

  57. StataCorp. Stata Statistical Software: Release 17 [program]. College Station, TX: StataCorp LLC; 2021.

  58. Tromp N, Baltussen R. Mapping of multiple criteria for priority setting of health interventions: an aid for decision makers. BMC Health Serv Res. 2012;12:454.

  59. Baltussen R, Niessen L. Priority setting of health interventions: the need for multi-criteria decision analysis. Cost Eff Resour Alloc. 2006;4:14.

  60. Cartier Y, Creatore MI, Hoffman SJ, Potvin L. Priority-setting in public health research funding organisations: an exploratory qualitative study among five high-profile funders. Health Res Policy Syst. 2018;16(1):53.

  61. Kaur G, Prinja S, Lakshmi PVM, Downey L, Sharma D, Teerawattananon Y. Criteria used for priority-setting for public health resource allocation in low- and middle-income countries: a systematic review. Int J Technol Assess Health Care. 2019;35(6):474–83.

  62. Gerhardus A, Becher H, Groenewegen P, Mansmann U, Meyer T, Pfaff H, et al. Applying for, reviewing and funding public health research in Germany and beyond. Health Res Policy Syst. 2016;14(1):43.

  63. Evans C, Rogers S, McGraw C, Battle G, Furniss L. Using consensus methods to establish multidisciplinary perspectives on research priorities for primary care. Primary Health Care Res Dev. 2004;5(01):52–9.

  64. Li T, Ervin AM, Scherer R, Jampel H, Dickersin K. Setting priorities for comparative effectiveness research: a case study using primary open-angle glaucoma. Ophthalmology. 2010;117(10):1937–45.

  65. Jones J, Hunter D. Consensus methods for medical and health services research. BMJ. 1995;311(7001):376–80.

  66. Vaismoradi M, Snelgrove S. Theme in qualitative content analysis and thematic analysis. Forum Qual Sozialforschung Forum Qual Soc Res. 2019;20(3):23.

  67. Graneheim UH, Lundman B. Qualitative content analysis in nursing research: concepts, procedures and measures to achieve trustworthiness. Nurse Educ Today. 2004;24(2):105–12.

  68. Erlingsson C, Brysiewicz P. A hands-on guide to doing content analysis. Afr J Emerg Med. 2017;7(3):93–9.

Acknowledgements

We would like to express our gratitude to the advisory board members for sharing their expertise with us during the course of this research: Thomas Altgeld (State Association for Health Lower Saxony), Prof. Dr. Marie-Luise Dierks (Hannover Medical School), Prof. Dr. Albrecht Jahn (Heidelberg University), Dr. Bärbel-Maria Kurth (Robert Koch Institute), Prof. Dr. Jochen Schmitt (German Network Health Service Research). Furthermore, we would like to thank Thomas Heise of the Research Group for Evidence-Based Public Health (Leibniz-Institute for Prevention Research and Epidemiology & Institute for Public Health and Nursing Research, University of Bremen, Germany) for his support in the questionnaire design. Last but not least, we are very grateful for the time and effort that all the participants invested in our study. Not only the overall response rate but also the number of research topics and assessment criteria proposed by the respondents during the first Delphi round exceeded our expectations.

Funding

Open Access funding enabled and organized by Projekt DEAL. This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

Author information

Authors and Affiliations

Authors

Contributions

AG and SKL conceived the idea for the study. All three authors conceptualised the study. DH collected the data. All three authors analysed the data. All three authors interpreted the results and contributed substantially to the presentation of the results. DH drafted the manuscript, which all authors have read, revised, and approved.

Corresponding author

Correspondence to Dyon Hoekstra.

Ethics declarations

Ethics approval and consent to participate

The ethical board of the University of Bremen (Germany) approved the study proposal. The data from the online questionnaire were automatically stored in a database on a local university server. The server is only accessible with a username and a password that are only known to the main investigators. Also, the online questionnaire software provider (Lime Survey) does not have access to the database. The username and password will not be shared with other parties. Participation in this study was voluntary. We provided detailed information during the recruitment phase with regards to the study in general and to the tasks of the respondents, so they could make an informed decision. Furthermore, we repeated this information at the beginning of each questionnaire and the respondents actively had to give their consent before starting the questionnaires. Responses will not be traced back to the respondent. No personally identifiable information was captured. We did not ask the respondents to indicate which organisations or institutes they represented. Therefore, we could also not report this information in this or any other publication.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

List of organisations that were invited to participate in the study, by stakeholder group. We identified specific organisations that fall within each of the included stakeholder groups. The stakeholder groups represent different professional fields whose members are either producers, facilitators, or consumers of public health research in Germany. We used this list of organisations for the recruitment of individual respondents for the Delphi stage.

Additional file 2.

Final list of research topics after content analysis, including how many respondents' suggestions were aggregated into each research topic. This file represents the results of the content analysis after the first Delphi round. In total, the respondents proposed 529 research topics and 50 assessment criteria in the first Delphi round, which we aggregated through a content analysis into 76 sufficiently distinct research topics and 6 assessment criteria, respectively. Additional files 2a and 2b show the final list of research topics based on the content analysis for the substantive and the methodological-theoretical research topics, respectively.

Additional file 3.

Comparison of the ranking of research topics between public health researchers and public health practitioners. Additional files 3a and 3b show the comparison of the ranking of substantive and methodological-theoretical research topics, respectively, by stakeholder group (public health researchers versus public health practitioners). A negative difference means the research topic is ranked higher by public health researchers; a positive difference means the research topic is ranked higher by public health practitioners.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Hoekstra, D., Gerhardus, A. & Lhachimi, S.K. Priority setting to support a public health research agenda: a modified Delphi study with public health stakeholders in Germany. Health Res Policy Sys 21, 86 (2023). https://doi.org/10.1186/s12961-023-01039-w
