
How are health research partnerships assessed? A systematic review of outcomes, impacts, terminology and the use of theories, models and frameworks



Accurate, consistent assessment of outcomes and impacts is challenging in the health research partnerships domain. Increased focus on tool quality, including conceptual, psychometric and pragmatic characteristics, could improve the quantification, measurement and reporting of partnership outcomes and impacts. This cascading review was undertaken as part of a coordinated, multicentre effort to identify, synthesize and assess a vast body of health research partnership literature.


To systematically assess the outcomes and impacts of health research partnerships, relevant terminology and the type/use of theories, models and frameworks (TMF) arising from studies using partnership assessment tools with known conceptual, psychometric and pragmatic characteristics.


Four electronic databases were searched (MEDLINE, Embase, CINAHL Plus and PsycINFO) from inception to 2 June 2021. We retained studies containing partnership evaluation tools with (1) conceptual foundations (reference to TMF), (2) empirical, quantitative psychometric evidence (evidence of validity and reliability, at minimum) and (3) one or more pragmatic characteristics. Outcomes, impacts, terminology, definitions and TMF type/use were abstracted verbatim from eligible studies using a hybrid (independent abstraction–validation) approach and synthesized using summary statistics (quantitative), inductive thematic analysis and deductive categories (qualitative). Methodological quality was assessed using the Quality Assessment Tool for Studies with Diverse Designs (QATSDD).


Application of the inclusion criteria yielded 37 eligible studies. Study quality scores were high (mean 80%, standard deviation 11%) but revealed needed improvements (i.e. methodological, reporting, user involvement in research design). Only 14 studies (38%) reported partnership outcomes (48) and impacts (55); most were positive effects (43, 90% and 47, 89%, respectively). Most outcomes were positive personal, functional, structural and contextual effects; most impacts were personal, functional and contextual in nature. Most terms described outcomes (39, 89%), and 30 of 44 outcomes/impacts terms were unique, but few were explicitly defined (9, 20%). Terms were complex and mixed on one or more dimensions (e.g. type, temporality, stage, perspective). Most studies made explicit use of study-related TMF (34, 92%). There were 138 unique TMF sources, and these informed tool construct type/choice and hypothesis testing in almost all cases (36, 97%).


This study synthesized partnership outcomes and impacts, deconstructed term complexities and evolved our understanding of TMF use in tool development, testing and refinement studies. Renewed attention to basic concepts is necessary to advance partnership measurement and research innovation in the field.

Systematic review protocol registration: PROSPERO CRD42021137932



Efforts to quantify partnership outcomes and impacts have increased rapidly since the early 1990s, propelled by demands to quantify tangible returns arising from the investment of public funds, in health and other research domains [1, 2]. However, accurate, consistent measurement of health research partnership outcomes and impacts remains a long-standing challenge [3,4,5]. In the health research partnerships domain, the systematic assessment of objective, quantifiable outcomes and impacts remains emergent [3, 6,7,8,9,10,11,12,13,14,15]. For our purposes, a health research partnership comprises a relationship between researchers and other partner(s) involved in the research process (e.g. decision- or policy-makers, health care administrators or leaders, community agencies, charities, networks, patients and/or industry partners, among others) [16, 17].

Despite early recognition of measurement complexities and key partnership needs, including enhanced workforce capacity, collaboration, social communication and knowledge exchange networks required to bring about successful partnership innovations [2], gaps in basic partnership concepts and partnership measurement persist. Among the challenges are commonly reported measurement and partnership domain-related concerns (e.g. small study sample sizes, evolving terminology and a lack of standardized term and concept definitions and their consistent application) [7, 9, 11, 13, 18]. In addition, there are numerous tool-specific challenges, including the high prevalence of single-use or bespoke tools, a lack of psychometric and pragmatic testing and evidence and a lack of tool standardization [3, 7, 19], that hinder measurement advancements.

There is a well-established link between the quality of available assessment tools, researchers’ ability to measure partnerships accurately and consistently and the overall advancement of scientific inquiry [3, 20, 21]. For at least the past two decades, partnership researchers have documented growing concerns about the presence, nature and qualities of available partnership assessment tools [3, 7, 8, 22,23,24]. Recent tool reviews in the partnership domain reveal gaps in the methodological strength, scientific rigor and pragmatic aspects of the available measurement tools [3, 8, 10, 11, 14, 15, 24, 25].

Among the most important challenges is the improvement of tool conceptual, psychometric and pragmatic characteristics to advance partnership outcomes and impacts assessment [19, 20, 26]. Conceptual foundations of assessment tools are important because they influence many elements of the research process (i.e. establishing research rationale, the structure of inquiry, research questions, guiding construct and item development, development and testing of hypotheses, identification and prioritization of key determinants, research and measurement structure and approach, and the interpretation and contextualization of findings) [27,28,29,30]. Theories, models and frameworks (TMF) also increase research efficiency by both guiding and producing evidenced generalizations that can help reduce study replication burden [31]. TMF also help researchers hypothesize and test proposed relationships between partnership constructs that cannot otherwise be directly assessed [32]. Unfortunately, while authors may cite or refer to TMF, theoretical concepts may not be appropriately or fully operationalized, or integrated across multiple research study phases [8, 33].

The lack of psychometric and pragmatic evidence for existing tools and the persistent absence of dedicated tool development, evaluation and improvement studies in the field are also well-documented challenges [14, 15]. There is a growing emphasis on and need for psychometrically and pragmatically robust tools [3, 8, 13, 20, 24]. However, even when tools are well conceptualized and psychometrically robust, their operationalization is not guaranteed, particularly if the tools are challenging to apply in practice [19]. Hence, studying and testing specific tool pragmatic features that facilitate or hinder assessment tool use is a key part of ensuring they get used in practice [24, 34]. Pragmatic characteristics are a more recent, but critical, addition to existing calls for more dedicated focus on traditional conceptual and psychometric characteristics of tool development, testing and improvement [19, 33, 34].

Our understanding of health research partnerships, their systematic measurement and development, and the capture of partnership outcomes and impacts is hindered by the lack of assessment tools possessing such characteristics [13,14,15, 26], and the overall lack of deliberate development, testing and ongoing improvement of existing tools [33,34,35,36]. Closing these gaps would help to facilitate tool use, advance the systematic measurement of research partnerships and drive improvements in research partnership science [8, 35]. The accuracy of research findings and partnership measurement of outcomes and impacts can be advanced when tool items, constructs and tools are systematically and iteratively improved [35].

This segment of the overall dissertation research [37] was conducted as part of the Integrated Knowledge Translation Network (IKTRN) based at the Centre for Practice-Changing Research in Ottawa, Canada, and supported by the Canadian Institutes of Health Research [38]. The IKTRN comprises researchers and research users from over 50 research and other organizations with a research agenda to ensure best practices and their routinized use produce “effective, efficient and appropriate healthcare” [38]. Mandated IKTRN aims include advancing knowledge about outcomes and impacts assessment and partnership science [39].

As part of a previous series of cascading syntheses (Fig. 1), we identified and assessed health partnership outcomes and impacts measurement tools across multiple partnership traditions, partner groups and contexts [14, 15].

Fig. 1 Schematic of cascading scoping and systematic reviews series

Based on the preceding findings, the focus of the current study was (1) to identify and assess the outcomes and impacts of health research partnerships arising from studies using tools with known conceptual, psychometric and pragmatic characteristics, and secondarily, to understand (2) what terms were used to describe and assess outcomes and impacts, (3) what definitions were used to describe outcomes and impacts terms, (4) what TMF were used in eligible studies and (5) how TMF were employed (Additional file 1: Table S1). This study is the third in a series of doctoral thesis studies contributing synthesis-level evidence on health research partnership assessment tools and cascading from Research Theme 2b [16, 37].


We used a four-part, consensus-built conceptual framework to describe the principles, strategies, outcomes and impacts of health research partnerships; the current research addresses two of these four described domains [16], specifically pertaining to tools. We assessed the outcomes, impacts and TMF use arising in studies of health research partnerships employing partnership outcomes and impacts assessment tools with known conceptual, psychometric and pragmatic characteristics. We provide a synopsis of the comprehensive review methods with key protocol deviations used to generate the data reported herein (see Additional file 1: Table S1). Several review standards guided our research [40,41,42] and reporting of results [43].

We included studies involving health research partnerships that (1) developed, used and/or assessed tools (or an element or property of a tool) to evaluate partnership outcomes or impacts [7, 44] as an aim of the study; (2) reported conceptual foundations (reference made to at least one TMF related to the health research partnership outcome or impact assessment tool, at minimum); (3) reported empirical, quantitative evidence of tool psychometrics (i.e. validity and reliability evidence, at a minimum); (4) reported one or more pragmatic characteristics [14, 15]; (5) were accessible and amenable to full text review; (6) reported primary research findings drawn from empirical evidence; and (7) reported relevant, abstractable data. We retained studies of any design type meeting these criteria.

We excluded studies that did not meet these criteria, could not be located or reviewed in full text, reported head-to-head comparisons without stratified findings, did not report primary or empirical findings, or lacked sufficient data for abstraction (Additional file 1: Table S1).

We abstracted key variables verbatim, as reported by authors, from all eligible studies using a hybrid approach (sequential, independent abstraction and validation). Abstracted variables included reported outcomes and impacts, terms and definitions, identified TMF and their use. We collated a citation bibliography of referenced TMF employed by eligible studies. The team assessed study methodological quality independently and in duplicate, using the 16-item Quality Assessment Tool for Studies with Diverse Designs (QATSDD) tool. The QATSDD was developed to assess the quality of health research studies with different designs [45].

We calculated summary statistics (mean, standard deviation, frequency and proportion) to synthesize quantitative study and tool characteristics using Microsoft Excel [46] and Stata v13.1 [47], including category frequencies for TMF use and numbers of terms and definitions. We tabulated study quality assessments (% quality score) for each study and reported an aggregated mean and standard deviation (SD) % QATSDD quality score [45]. We analysed qualitative data using an inductive approach and synthesized key terms, definitions and reported outcomes and impacts using thematic analysis [48] with NVivo v12.7 [49]. We modified pre-existing, deductive categories [30] to guide our capture of TMF use.


After de-duplicating 56 123 total records [50], we screened titles/abstracts and then 2784 full-text articles, with substantial agreement at each phase [level 1 title/abstract screening: 95.23% agreement, κ = 0.66 (95% confidence interval 0.64–0.67); level 2 full-text screening: 87.60% agreement, κ = 0.74 (95% confidence interval 0.72–0.76)] [51, 52], and identified 37 eligible studies (Fig. 2).
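The screening-agreement statistics reported above follow the standard Cohen's kappa formulation, which discounts observed agreement by the agreement expected under chance. A minimal sketch, using hypothetical include/exclude decisions rather than the review's actual screening data:

```python
# Cohen's kappa for two reviewers' categorical screening decisions.
# The decision lists below are hypothetical illustrations only.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """kappa = (p_observed - p_expected) / (1 - p_expected)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of records where both raters agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    # Chance agreement: sum over categories of p_a(category) * p_b(category)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

a = ["inc", "exc", "exc", "inc", "exc", "exc", "inc", "exc"]
b = ["inc", "exc", "inc", "inc", "exc", "exc", "exc", "exc"]
print(round(cohens_kappa(a, b), 2))  # → 0.47
```

Here 6 of 8 decisions agree (75%), but because both raters exclude most records, chance agreement is high and kappa is considerably lower than raw percent agreement, which is why the review reports both figures.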

Fig. 2 Outcomes and impacts systematic review—PRISMA citation flow diagram

Study characteristics

All eligible studies were published in English; four contained bilingual tools in French (1) or Spanish (3) [53,54,55,56]. Of the 40 global study sites represented, most were North American (33, 83%). Most studies were published after 2010 (24, 65%) and employed cross-sectional (22, 59%) or mixed-methods (12, 33%) study designs (Table 1).

Table 1 Study characteristics and use of theories, models and frameworks (n = 37)

Study quality assessment (QATSDD)

We applied 16 QATSDD criteria to all studies, yielding a mean quality score of 80.0% (SD 11%); scores ranged from 45.8% to 100.0% (Additional file 1: Table S2). Studies most frequently scored high on (1) fit between the research question and analytic methods (97%), (2) appropriate justification for the chosen analytic methods (95%), (3) explicit reference to a theoretical framework (97%), (4) tool validity/reliability (97% scoring 2 or 3) and (5) the presence of aims/objective statements in the body of the report (89%). The lowest-frequency scores were found for (1) fit between the research question and data collection method (43%), (2) evidence of sample size considerations linked to the analysis (46%), (3) the discussion of strengths and limitations in reports (49%) and (4) evidence of user involvement in design (51%).
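The QATSDD % quality scores above can be illustrated with a short sketch. Assuming each of the 16 criteria is rated on the tool's 0–3 scale (maximum raw score 48), the per-study percentage and the aggregated mean/SD follow directly; the item scores below are hypothetical, not drawn from the review:

```python
# Illustrative QATSDD % quality score: 16 items rated 0-3, so max raw = 48.
# Item ratings here are invented for demonstration.
from statistics import mean, stdev

def qatsdd_percent(item_scores, n_items=16, max_per_item=3):
    """Percent quality score = raw total / maximum possible * 100."""
    assert len(item_scores) == n_items
    assert all(0 <= s <= max_per_item for s in item_scores)
    return 100 * sum(item_scores) / (n_items * max_per_item)

study_scores = [
    qatsdd_percent([3] * 12 + [2] * 4),   # a stronger hypothetical study
    qatsdd_percent([2] * 10 + [1] * 6),   # a weaker hypothetical study
]
print([round(s, 1) for s in study_scores])            # → [91.7, 54.2]
print(round(mean(study_scores), 1))                   # aggregated mean %
print(round(stdev(study_scores), 1))                  # aggregated SD %
```

Reporting the SD on the same percentage scale as the mean (e.g. 80.0%, SD 11%) avoids the ambiguity of mixing proportion and percentage units.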

Reported health research partnership outcomes and impacts

The primary focus of included studies was the identification, refinement and testing of tool constructs; only 14 studies (38%) assessed the specific characteristics and outcomes/impacts arising from the health research partnership(s) studied therein. Overall, nine studies reported only health research partnership outcomes (24%), two reported only impacts (5%) and three reported both outcomes and impacts (8%). We identified 48 outcomes in 12 studies, comprising 19 individual- (40%), 27 partnership- (56%) and two organizational-level (4%) outcomes. In total, only five of the 48 identified outcomes (10%) were negative (Table 2).

Table 2 Synthesis of reported outcomes (n = 48 outcomes reported from 12 studies)

Synthesis of outcomes

By descending frequency, we identified three thematic levels of outcomes: partnership, individual and organizational. Positive partnership-level outcomes (27) were the most frequently reported outcomes and included personal (e.g. ownership, commitment, empowerment) as well as functional (e.g. synergy) and structural outcomes (e.g. process, structural improvements and autonomy of resource sharing/control and data monitoring/use and dissemination). We identified two partnership-level outcomes subthemes: leadership and implementation outcomes. The absence or lack of leadership characteristics, leadership style characteristics and leadership partnership management and engagement comprised all negative outcomes reported at the partnership level. Positive implementation outcomes included implementation effectiveness, facilitation and intervention effectiveness.

Individual-level outcomes (19) were diverse and included both positive self-improvements (e.g. gaining knowledge, skills, capacity; perceptions of empowerment, confidence, being valued, self-efficacy; personal goal achievement, health-enhancing behaviours) and positive contextual improvements (i.e. relationships with researchers, opportunities to participate, adequate research support, ability to contribute meaningfully). We identified a single subtheme (level of engagement) that captured high/deep engagement levels and positive engagement outcomes and contexts, and included a single negative outcome related to trust/competency.

Organizational-level outcomes (2) included improved organizational awareness of health status and capacity-building.

Synthesis of impacts

We found 55 health research partnership impacts reported in four studies, with a large proportion of impacts reported by a single study (40, 73%) [67] (Table 3). In descending order of frequency, the emergent impact themes comprised 28 individual-level (51%), 16 organizational-/community-level (29%) and five partnership-level (9%) impacts, and six negative impacts (11%) (Table 4).

Table 3 Synthesis of reported impacts (n = 55 impacts reported, 4 studies)
Table 4 Reported definitions for outcomes and impacts terms (n = 9)

Similarly to reported partnership outcomes, impacts at the individual level (28) included personal self-improvement impacts (e.g. capacity-building, perceptions of making a difference, enhanced status, feeling valued/personal fulfilment and goal achievement), functional impacts (e.g. access to information, service and resource use, role modelling, enhanced problem-solving and productivity) and contextual impacts (e.g. improved physical/social environment, increased participation opportunities). Specific youth-related impacts included a mix of personal, functional and contextual impacts (e.g. feeling useful, networking and employment opportunities) (Table 3). At the community/organizational level (16), the studied partnerships generated capacity, resource/financial, structural/process and collaborative networking/community connectivity impacts, as well as an array of other community/organizational impacts (e.g. enhanced collaborative power/reciprocity, policy changes, improvements to quality of life). Finally, partnership-level impacts (5) included positive perceptions of impact, health status impacts and implementation impacts (subtheme, including increased impact, intervention efficacy and improved implementation uptake). The negative impacts (6) lacked clear links to a specific reporting level, so were grouped separately. Negative impacts included mostly personal repercussions (i.e. negative emotions, conflicting roles, insufficient influence, lack of attribution, negative status by association).

Outcomes and impacts terms and definitions

In total, 44 terms were used to describe health research partnership outcomes and impacts, which were sorted into six themes and one subtheme (Additional file 1: Table S3). We observed frequent interchange of outcomes and impacts terms within and between studies.

Far more terms were used to describe outcomes (39, 89%) than impacts (5, 11%). The individual theme categories revealed several underlying terminology dimensions, including time- and stage-bound descriptors (27, 61%) and specific categories or types of outcomes/impacts (15, 34%); we also identified several neutral terms (8, 18%) (Additional file 1: Table S3). Of the 44 terms we identified, 30 were unique (68%), but very few were explicitly defined (9, 20%). When terms were defined, the nature and depth of the definitions posed challenges for different reasons [i.e. concept mixing (in one case, outcome was defined as “outcome measures by which to assess the impacts of research partnerships” [68]) or cursory detail (e.g. impact defined as “both product and process” [66])] (Table 4).

Use of TMF

In examining theoretical underpinnings at the study level, we found that most studies explicitly referenced one or more TMF (mean 5 TMF per study, SD 4) and used TMF to inform hypothesis test(s) (34, 92%) (Table 1). Only two studies (5%) used TMF on a conceptual level alone [i.e. reference made to TMF but lacking TMF-informed hypothesis test(s)]. Across the 37 studies, a total of 179 TMF were noted, 138 of which were unique (77%) (Additional file 1: Table S4). Fifteen TMF sources were referenced two or more times [e.g. Wallerstein et al. (6) [74], Lasker et al. (5) [75], Butterfoss et al. (2) [76], Hawkins et al. (2) [77]]. Explicit tool-related TMF use focused on the type/choice of tool constructs and tool-related hypothesis tests (36, 97%) (Table 1). A bibliography of the referenced study-level TMF is appended (Additional file 1: Table S4).


In this review, we systematically assessed the outcomes and impacts of health research partnerships, terminology and the type and use of TMF arising from studies using partnership assessment tools with known conceptual, psychometric and pragmatic characteristics (Additional file 1: Table S5).

Few studies reported on the actual outcomes and impacts of the health research partnerships studied therein. We found numerous outcomes and impacts terms; however, these were both poorly defined and conceptually mixed on one or more dimensions (e.g. temporality, research stage, type, perspective). Most studies used multiple TMF, many of these sources were unique, and the use of tool-related TMF was exclusively linked to the type/choice of tool constructs under investigation and hypothesis tests. We found the overall quality of included studies scored using the QATSDD tool was high; however, despite high scores, we identified several improvements to methodological and reporting elements. Of particular importance to this review and the partnership research domain in general was the lack of explicit reporting of user involvement in research design in half of included studies (Q15, Additional file 1: Table S2).

The findings of our review can be explained in several ways. First, the aims of included studies were focused mainly on tool and construct development, refinement and testing; most studies were not designed for the purpose of examining and reporting on partnership outcomes and impacts. This mismatch between the purpose of included studies and their anticipated products helps explain the low proportion of studies reporting outcomes and impacts observed in our study. Almost 75% of reported impacts were generated by a single dissertation [67], a finding that may reflect the required reporting brevity and scope of peer-reviewed works and/or the challenge of comprehensively reporting tool findings and partnership outcomes/impacts in a single report. An area for future inquiry is the feasibility of meaningfully combining tool-specific evaluative findings and partnership outcomes and impacts into a single, mixed report. The proportion of reported negative outcomes and impacts was low; however, their presence reinforces the importance of partnership assessment tools that solicit the full range of positive and negative effects.

Despite these shortcomings, we identified both partnered research outcomes and impacts in a small proportion of studies with several key takeaways: (1) outcomes were mainly reported at the partnership and individual levels; reported impacts were largely individual and organizational-/community-level effects; (2) the majority of outcomes comprised positive personal, functional, structural and contextual effects, and most reported impacts were of a personal, functional and contextual nature; (3) negative outcomes and impacts were rare, comprising a lack or absence of leadership-related characteristics and a lack of trust/fairness/competency affecting levels of engagement within partnerships. The reported negative impacts were almost exclusively comprised of personal repercussions.

Second, even in this well-defined literature sample, the systematic and consistent use of terms was lacking, as were term definitions (Additional file 1: Tables S3, S4). The use and interchanging of outcomes and impacts terms occurred variably within and across studies, and within term definitions themselves. While these findings are among the documented gaps in this field, another reason for them may be the complexity and nature of identified terms. We observed frequent conceptual mixing of terms on one or more dimensions (e.g. temporality, nature, perspective, philosophical disposition, target population) (Additional file 1: Table S3). Such complexity precludes straightforward standardization of both term meanings and their use. Deconstructing term dimensions is one possible way to explore term standardization and ultimately enhance the measurement and reporting of outcomes and impacts.

Third, we learned that while studies may be explicitly linked to TMF, TMF were not easily identifiable from study citations and/or manuscript texts alone and were frequently lacking detail. Underlying reasons for the lack of detail about TMF use and about where, when and how TMF were explicitly integrated and/or tested across multiple study phases could be due to the use of different research approaches for tool development, testing and refinement. It is established that tool development, testing and refinement must often occur across multiple studies and samples in a step-wise or segmented manner and over prolonged periods [35]. Thus, it may not be possible to fully understand and accurately characterize TMF use by examining a single tool development, testing or refinement study alone.

Our findings echo previously reported research in several ways. First, as in other partnership domain reviews, our findings confirmed a lack of detailed measurement, inconsistent categorization and reporting of outcomes and impacts, and the presence of term switching [12, 78]. We did not observe the researcher-reported outcomes and impacts identified in other previous works [12, 78, 79]; however, we note that those works also included non-health domains in their catchment. Most outcomes and impacts were thematically consistent with the “levels of reporting” published in other reviews (i.e. individual, partnership, organizational and community levels) [3, 12, 23, 78, 80, 81]. The exception was research process outcomes and impacts [23, 78, 79]; in our study, these findings were not grouped as a stand-alone category; rather, we kept them thematically located within the reporting structure underlying the abstracted data (i.e. individual, partnership and community/organizational levels).

Even though Vat and colleagues’ review findings were structured slightly differently (i.e. according to research decision points), the types of outcomes and impacts captured therein were closely aligned with our review, with few exceptions [12]. The positive personal, functional, structural and contextual outcomes and personal, functional and contextual impacts we identified (Tables 2, 3) were consistent with other studies [12, 17, 23, 78, 79, 81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102]. For example, positive outcomes common to previous research included feeling valued, ability to contribute, empowerment, partnership establishment, partnership synergy, research process facilitation, enhanced partner capacity, achievement of personal goals, level of engagement, enhanced uptake/use/dissemination of findings, enhanced health or community outcomes, and positive changes to partnership contexts.

We also identified positive impacts common to previous reports, including implementation uptake, increased service awareness/access/use, improved health outcomes, improved physical environment, increased trust, the inclusion of partner voices, valuing partners’ voices and contributions, positive cost–benefit ratio (benefits of partnership outweigh risks), improved partner capacity, improved (career) status, support, increased involvement opportunities, networking and high-quality relationship development, better community connections, youth impacts, peer network support, personal goal achievement and feelings of personal fulfilment, value and empowerment, shared power, positive changes in attitudes/prejudice and bias, improved research administration (including accountability and transparency), information and resource sharing, and sustainability.

Negative emotions (frustration) and time/resource diversions were commonly identified negative impacts, and trust was the only mixed effect (i.e. an outcome or an impact that is reported as both a positive and a negative effect) common to previously reported mixed-effects outcomes [93].

While the proportion of reported negative outcomes and impacts was low, our review revealed several unique negative outcomes and impacts (i.e. negative outcomes: level of engagement issues related to trust, fairness and competency; leadership issues related to leadership characteristics, style, engagement and partnership management; negative impacts: role conflicts between occupational and partnership work, negative status or profile by association, and lack of attribution). We did not observe the following categories of negative partnership outcomes and impacts previously reported by other reviews, including researcher-partner tensions [78, 79, 87, 95], tokenism [78, 79, 87, 88], biased data [78], representativeness [88] and study design issues [12, 79].

As previously reported, outcomes and impacts terminology, term definitions and their clear differentiation was also problematic in this review [3, 8, 78, 87, 103]. The combined difficulty of locating relevant studies using individual terms or combinations, paired with the lack of consensus and clarity around terms, has led to their grouping in recent reviews to facilitate review comprehensiveness [3, 8, 78, 87]. While this strategy is certainly pragmatic in terms of literature catchment, reporting outcomes and impacts in a combined group can further enmesh outcomes and impacts conceptualization [8, 78, 87, 103] rather than refine and improve our understanding of key terms and definitions. Unfortunately, the nature, use and explicit reporting of outcomes and impacts terms and definitions in the partnership literature precluded term-specific assessments. Deconstructing diverse outcomes and impacts terms as they arise in studies can help discern term complexity, conceptual overlaps and reveal other sources of terminology confusion. We used this approach to identify dimensions contributing to conceptual mixing of both outcomes and impacts terms (Table 3; Additional file 1). This approach may be helpful for researchers in their attempts to advance consensus, standardization and clarification of terms, term definitions and their use in future research. In one terminology-focused review, similar problems pertaining to research impact conceptualization, missing definitions, and bureaucratic and heterogeneous terms lacking conceptual clarity were identified [104]. However, the authors took a contrasting approach by categorizing term complexity by specific definition type (i.e. positive effects, interpretive, bibliometric and use-based term definitions) and by characterizing key, underlying constructs [104].

Lastly, the proportion of TMF use in eligible studies was much higher in our study, when compared with several other reviews [3, 8, 23, 24]. We also found a high degree of TMF referencing involving hypothesis testing; however, TMF use findings should be interpreted with caution because (1) the actual TMF employed were difficult to discern from citations or manuscript texts alone; (2) while we screened for duplicate TMF citations and noted the frequency of several high profile, verifiable TMF sources, we did not perform any secondary study auditing of referred TMF, in keeping with a pragmatic review approach; (3) the explicit citation of tool-related TMF was one of our study inclusion criteria, and therefore the sample of literature we reviewed already contained studies with embedded tool-related TMF, at minimum, which may explain the high proportion of studies with TMF use; and (4) the use of TMF in tool development, testing and refinement studies may comprise a multi-study, multistep approach [35], which could render assessments of TMF use in singular studies incomplete.

Strengths and limitations

To our knowledge, this is the first systematic review of outcomes and impacts arising from health research partnership assessment studies involving tools with known conceptual, psychometric and pragmatic characteristics. This synthesis revealed complexities in terminology, including unstandardized descriptors, inconsistent application and a lack of comprehensive term definitions. Study aims were largely focused on tool development, testing and refinement; consequently, most studies lacked abstractable evidence of reported health research partnership outcomes and impacts. The identified outcomes and impacts were generated by a small number of studies, and our findings must be considered within this context.

The findings highlight several strengths and weaknesses in our approach. One strength was our ability to identify studies containing health research partnership assessment tools with known conceptual, psychometric and pragmatic characteristics. Given longstanding and recurrent calls for more robust, quantitative tools that are conceptually, psychometrically and pragmatically sound, we believe this review contributes to the evolving literature and may offer researchers better access to studies using partnership assessment tools that meet these criteria.

Second, by confining our review to such studies, we have (1) refined our understanding of the existing gap in reported health research partnership outcomes and impacts and (2) drawn attention to and elaborated on multiple challenges associated with quantifying health research partnership outcomes and impacts. Future research should focus attention on foundational issues, including standardized, defined terms (and the separate reporting of any other key dimensions), clear reporting of where, how and why TMF are used (and the results of that application), as well as clear reporting of partnership outcomes and impacts.

Given our findings, it is still unclear whether the development, testing and refinement of partnership assessment tools could meaningfully evolve through reports of their application, or whether deliberate attention must be given to this activity as a separate, or separately reported, endeavour. Regardless, the search for new and creative approaches that balance the scientific and measurement goals of identifying, testing and refining tools and tool constructs with their individual, contextualized application and reporting is paramount. Both activities are essential to the evolution and detailed reporting of partnership outcomes and impacts, and to the broader study of research partnerships.

Our study was limited in several ways. Given previous reports, it was not surprising that our study was limited by the type, level of detail and quality of data available for abstraction. For example, we encountered a lack of in-depth, easily abstractable detail pertaining to TMF in manuscript text and linked citations, despite their high citation frequency. In this regard, the study was also limited in that we did not audit TMF citations provided by authors. Secondary analysis of these citations and their underlying TMF is an important area for our future research. Examining the specific TMF underlying both studies and health research partnership assessment tools may provide unique insights about how to efficiently evolve constructs and tools in the future.

Our single-study approach to assessing TMF use did not capture the potentially different uses of TMF associated with tool development, testing and refinement. TMF use and evolution frequently occur across multistep research studies and over extended periods [35]. Researchers must often search for and refine the most relevant constructs, associations between constructs and tools, and test concepts in different contexts in an iterative fashion. Furthermore, such testing is predicated on sample size, often precluding the simultaneous testing of constructs and associations, which may exacerbate the need for multistep, longer-term studies [35]. From a pragmatic standpoint, fully understanding TMF use at this depth would have required tracking down multiple, sequenced studies from inception to the present. As this activity was outside the feasible scope of our systematic review, we note it as a study limitation, but highlight its importance as an area for future research.

Finally, the low volume of, and inherent variation in, the definition and reporting of outcomes and impacts limited our ability to advance both terminology standardization and the categorization of outcomes and impacts in this review. Standardizing terminology, term definitions and use, and better defining the conceptual boundaries of outcomes and impacts, remain key targets for consensus-building activities and future study in this field.


In sum, several novel insights were generated by our examination of the outcomes, impacts, terms, definitions, and TMF type and use arising in studies employing assessment tools with known conceptual, psychometric and pragmatic qualities. Attention to foundational terms and definitions, and their consistent application, is required to continue advancing partnership measurement and research innovation in the health research partnerships domain.

Availability of data and materials

Study materials including the search strategy, abstraction tools and bibliographic tool index will be accessible through the Open Science Framework after the research and publication of findings is complete. Study data will be made available upon reasonable request to the first author, upon conclusion of the dissertation research and publication of findings.


  1. Hanney S, Packwood T, Buxton M. Evaluating the benefits from health research and development centres. Evaluation. 2000;6(2):137–60.


  2. Spaapen J, Dijstelbloem H, Wamelink F. Evaluating research in context: a method for comprehensive assessment. 2nd ed. Den Haag: COS; 2007.


  3. Luger TM, Hamilton AB, True G. Measuring community-engaged research contexts, processes and outcomes: a mapping review. Milbank Q. 2020;98(2):493–553.


  4. Arcury TA, Quandt SA, McCauley L. Farmworkers and pesticides: community based research. Environ Health Perspect. 2000;108(8):787–92.


  5. Kuruvilla S, Mays N, Walt G. Describing the impact of health services and policy research. J Health Serv Res Policy. 2007;12(Suppl 2):23–31.


  6. Marjanovic S, Hanney S, Wooding S. Chapter 1: a historical overview of research evaluation studies. In: A historical reflection on research evaluation studies, their recurrent themes and challenges (Project Retrosight). Santa Monica, CA: RAND Corporation; 2009. p. 1–55.


  7. Sandoval JA, Lucero J, Oetzel J, Avila M, Belone L, Mau M, Pearson C, Tafoya G, Duran B, Iglesias Rios L, Wallerstein N. Process and outcome constructs for evaluating community-based participatory research projects: a matrix of existing measures. Health Educ Res. 2012;27(4):680–90.


  8. MacGregor S. An overview of quantitative instruments and measures for impact in co-production. J Profess Capital Commun. 2020;6(2):163–83.


  9. Tigges BB, Miller D, Dudding KM, Balls-Berry JE, et al. Measuring quality and outcomes of research collaborations: an integrative review. J Clin Transl Sci. 2019;3:261–89.


  10. Brush BL, Mentz G, Jensen M, Jacobs B, Saylor KM, Rowe Z, Israel BA, Lachance L. Success in longstanding community based participatory research (CBPR) partnerships: a scoping literature review. Health Educ Behav. 2019;47(4):556–68.


  11. Bowen DJ, Hyams T, Goodman M, West KM, Harris-Wai J, Yu JH. Systematic review of quantitative measures of stakeholder engagement. Clin Transl Sci. 2017;10:314–36.


  12. Vat LE, Finlay T, Schuitmaker-Warnaar TJ, et al. Evaluating the ‘return on patient engagement initiatives’ in medicines research and development: a literature review. Health Expect. 2020;23:5–18.


  13. Ortiz K, Nash J, Shea L, Oetzel J, Garoutte J, Sanchez-Youngman S, Wallerstein N. Partnerships, processes and outcomes: a health equity-focused scoping meta-review of community-engaged scholarship. Annu Rev Public Health. 2020;41:177–99.


  14. Mrklas KJ, Boyd JM, Shergill S, Merali SM, Khan M, Moser C, Nowell L, Goertzen A, Swain L, Pfadenhauer LM, Sibley KM, Vis-Dunbar M, Hill MD, Raffin-Bouchal S, Tonelli M, Graham ID. A scoping review of the globally available tools for assessing health research partnership outcomes and impacts. Health Res Pol Syst. 2022;41:177.


  15. Mrklas KJ, Boyd JM, Shergill S, Merali SM, Khan M, Nowell L, Goertzen A, Pfadenhauer LM, Paul K, Sibley KM, Swain L, Vis-Dunbar M, Hill MD, Raffin-Bouchal S, Tonelli M, Graham ID. Tools for assessing health research partnership outcomes and impacts: a systematic review. Health Res Pol Syst. 2022.

  16. Hoekstra F, Mrklas KJ, Sibley K, Nguyen T, Vis-Dunbar M, Neilson CJ, Crockett LK, Gainsforth HL, Graham ID. A review protocol on research partnerships: a coordinated multicenter team approach. Syst Rev. 2018;7(217):1–14.


  17. Drahota A, Meza RD, Brikho B, Naaf M, Estabillo JA, Gomez ED, Vejnoska SF, Dufek S, Stahmer AC, Aarons GA. Community-academic partnerships: a systematic review of the state of the literature and recommendations for future research. Milbank Q. 2016;94(1):163–214.


  18. Sanders Thompson VL, Ackermann N, Bauer KL, Bowen DJ, Goodman MS. Strategies of community engagement in research: definitions and classifications. Soc Behav Med. 2021;11:441–51.


  19. Glasgow RE, Riley WT. Pragmatic measures: what they are and why we need them. Am J Prevent Med. 2013;45(2):237–43.


  20. Goodman MS, Sanders Thompson VL, Arroyo Johnson C, Gennarelli R, Drake BF, Bajwa P, Witherspoon M, Bowen D. Evaluating community engagement in research: quantitative measure development. J Commun Psychol. 2017;45(1):17–32.


  21. Goodman MS, Sanders Thompson VL. The science of stakeholder engagement in research: classification, implementation and evaluation. Transl Behav Med. 2017;7(3):486–91.

  22. Butterfoss FD, Goodman RM, Wandersman A. Community coalitions for prevention and health promotion: factors predicting satisfaction, participation and planning. Health Educ Q. 1996;23:65–79.


  23. Granner ML, Sharpe PA. Evaluating community coalition characteristics and functioning: a summary of measurement tools. Health Educ Res Theo Pract. 2004;19(5):514–32.


  24. Boivin A, L’Esperance A, Gauvin FP, Dumez V, Maccaulay AC, Lehoux P, Abelson J. Patient and public engagement in research and health system decision making: a systematic review of evaluation tools. Health Expect. 2018;21(6):1075–84.


  25. Hamzeh J, Pluye P, Bush PL, Ruchon C, Vedel I, Hudon C. Towards assessment for organizational participatory research health partnerships: a systematic mixed studies review with framework synthesis. Eval Program Plan. 2018;73:116–28.


  26. Goodman MS, Ackermann N, Bowen DJ, Thompson V. Content validation of a quantitative stakeholder engagement measure. J Commun Psychol. 2019;47:1937–51.


  27. Brinkerhoff JM. Assessing and improving partnership relationships and outcomes: a proposed framework. Eval Program Plann. 2002;25(3):215–31.


  28. Bartholomew LK, Mullen PD. Five roles for using theory and evidence in the design and testing of behaviour change interventions. J Public Health Dent. 2011;71:S20-33.


  29. Birken SA, Powell BJ, Shea CM, Haines ER, Kirk MA, Leeman J, Rohweder C, Damschroder L, Presseau J. Criteria for selecting implementation science theories and frameworks: results from an international survey. Implement Sci. 2017;12(1):124.


  30. Davies P, Walker AE, Grimshaw JG. A systematic review of the use of theory in the design of guideline dissemination and implementation strategies and interpretation of the results of rigorous evaluations. Implement Sci. 2010;5(1):14.


  31. Foy R, Ovretveit J, Shekelle PG, Pronovost PJ, Taylor SL, Dy S, Hempel S, McDonald KM, Rubenstein LV, Wachter RM. The role of theory in research to develop and evaluate the implementation of patient safety practices. BMJ Qual Saf. 2011;20:453–9.


  32. DeVellis RF. Scale development: theory and application. Los Angeles: Sage Publications; 2012.


  33. Lewis CC, Mettert KD, Stanick CF, Halko HM, Nolen EA, Powell BJ, Weiner BJ. The psychometric and pragmatic evidence rating scale (PAPERS) for measure development and evaluation. Implement Res Pract. 2021.

  34. Stanick CF, Halko HM, Nolen EA, Powell BJ, Dorsey CN, Mettert KD, Weiner BJ, Barwick M, Wolfenden L, Damschroder LJ, Lewis CC. Pragmatic measures for implementation research: development of the Psychometric and Pragmatic Evidence Rating Scale (PAPERS). Transl Behav Med. 2021;11(1):11–20.


  35. Boateng GO, Neilands TB, Frongillo EA, Melgar-Quinonez HR, Young SL. Best practices for developing and validating scales for health, social, and behavioural research. Front Public Health. 2018;6:149.


  36. Yun J, Ulrich DA. Estimating measurement validity: a tutorial. Adapt Phys Activ Q. 2002;19(1):32–47.


  37. Mrklas K. Towards the development of valid, reliable, and acceptable tools for assessing the outcomes and impacts of health research partnerships [dissertation]. Calgary, AB: Department of Community Health Sciences, Cumming School of Medicine, University of Calgary; 2022. p. 441.

  38. IKTRN (Integrated Knowledge Translation Research Network). IKTRN: About Us - Vision and Mission. 2022 [cited 2022 26 October]; Available from:

  39. Graham ID, Kothari A, McCutcheon C, The Integrated Knowledge Translation Research Network Project Leads. Moving knowledge into action for more effective practice, programmes and policy: protocol for a research programme on integrated knowledge translation. Implement Sci. 2018;13:22.


  40. Centre for Reviews and Dissemination (CRD), University of York. Systematic reviews: CRD's guidance for undertaking reviews in health care. York: CRD, University of York; 2009.

  41. Joanna Briggs Institute. The Joanna Briggs Institute reviewers' manual 2015. South Australia: Joanna Briggs Institute; 2015. p. 24.

  42. Higgins J, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA, editors. Cochrane handbook for systematic reviews of interventions, version 6.2. Cochrane; 2021.

  43. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71.


  44. Terwee CB, de Vet HCW, Prinsen CAC, Mokkink LB. Protocol for Systematic Reviews of Measurement Properties. 2011 [cited 2022 24 February]; Available from:

  45. Sirriyeh R, Lawton R, Gardner P, Armitage G. Reviewing studies with diverse designs: the development and evaluation of a new tool. J Eval Clin Pract. 2012;18:746–52.


  46. Microsoft Corporation. Microsoft Excel for Mac 2021 (version 21101001). Microsoft Corporation; 2021.

  47. StataCorp LP. Stata 13.1 Statistics/Data Analysis Special Edition. College Station, TX: StataCorp LP; 2013.

  48. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77–101.


  49. QSR International. NVivo 12 for Mac. New York: QSR International; 2019.

  50. Bramer WM, Giustini D, de Jonge GB, Holland L, Bekhuis T. De-duplication of database search results for systematic reviews in Endnote. J Med Lib Assoc (JMLA). 2016;104(3):240–3.


  51. Altman DG. Practical statistics for medical research: measuring agreement. UK: Chapman and Hall; 1991.


  52. McHugh ML. Interrater reliability: the kappa statistic. Biochem Med. 2012;22(3):276–82.


  53. Brown LD, Chilenski SM, Ramos R, Gallegos N, Feinberg ME. Community prevention coalition context and capacity assessment: comparing the United States and Mexico. Health Educ Behav. 2016;43(2):145–55.


  54. Duran B, Oetzel J, Magarati M, et al. Toward health equity: a national study of promising practices in community-based participatory research. Progr Commun Health Partnerships Res Educ Action. 2019;13(4):337–52.


  55. Dickson E, Magarati M, Boursaw B, Oetzel J, Devia C, Ortiz K, Wallerstein N. Characteristics and practices within research partnerships for health and social equity. Nurs Res. 2020;69(1):51–61.


  56. Loban E, Scott C, Lewis V, Haggerty J. Measuring partnership synergy and functioning: multi-stakeholder collaboration in primary health care. PLoS ONE. 2021;16: e0252299.


  57. Weiss ES, Miller-Anderson R, Lasker RD. Making the most of collaboration: Exploring the relationship between partnership synergy and partnership functioning. Health Educ Behav. 2002;29(6):683–98.

  58. Provan KG, Nakama L, Veazie MA, Teufel-Shone NI, Huddlesston C. Building community capacity around chronic disease services through a collaborative interorganizational network. Health Educ Behav. 2003;30:646–62.

  59. Israel BA, Checkoway B, Schulz A, Zimmerman M. Health education and community empowerment: Conceptualizing and measuring perceptions of individual, organizational, and community control. Health Educ Quart. 1994;21:149–70.

  60. Bullen P, Onyx J. Measuring social capital in five communities in NSW, A practitioner’s guide. 1998. URL: Accessed 11 Dec 2022.

  61. Mattessich PW, Murray-Close M, Monsey BR. The Wilder collaboration factors inventory: assessing your collaboration's strengths and weaknesses. St Paul, MN: Amherst H. Wilder Foundation; 2001.

  62. Cramm JM, Strating MM, Nieboer AP. Development and validation of a short version of the Partnership Synergy Assessment Tool (PSAT) among professionals in Dutch disease-management partnerships. BMC Res Notes. 2011;4:224.


  63. Slaghuis SS, Strating MM, Bal RA, Nieboer AP. A framework and a measurement instrument for sustainability of work practices in long-term care. BMC Health Serv Res. 2011;11:314.

  64. Cramm JM, Strating MM, Nieboer AP. The role of partnership functioning and synergy in achieving sustainability of innovative programmes in community care. Health Soc Care Commun. 2013;21(2):209–15.


  65. Morrow E, Ross F, Grocott P, Bennett J. A model and measure for quality service user involvement in health research. Int J Cons Stud. 2010;34(5):532–9.

  66. Jones J, Barry MM. Developing a scale to measure synergy in health promotion partnerships. Glob Health Promot. 2011;18(2):36–44.


  67. Orr Brawer CR. Replication of the value template process in a community coalition: implications for social capital and sustainability. Philadelphia: Temple University; 2008.


  68. King G, Servais M, Kertoy M, Specht J, Currie M, Rosenbaum P, Law M, Forchuk C, Chalmers H, Willoughby T. A measure of community members’ perceptions of the impacts of research partnerships in health and social services. Eval Program Plann. 2009;32:289–99.


  69. Perkins CM. Partnership functioning and sustainability in nursing academic practice partnerships: the mediating role of partnership synergy [dissertation]. Greeley, CO: School of Nursing, University of Northern Colorado; 2014. p. 132.

  70. Rodriguez Espinosa P, Sussman A, Pearson CR, Oetzel J, Wallerstein N. Personal outcomes in community-based participatory research partnerships: a cross-site mixed methods study. Am J Comm Psychol. 2020;66:439–49.


  71. Nargiso JE, Friend KB, Egan C, Florin P, Stevenson J, Amodei B, Barovier L. Coalitional capacities and environmental strategies to prevent underage drinking. Am J Commun Psychol. 2013;51(1–2):222–31.


  72. Oetzel JG, Villegas M, Zenone H, White Hat ER, Wallerstein N, Duran B. Enhancing stewardship of community-engaged research through governance. Am J Public Health. 2015;105:1161–7.


  73. Feinberg ME, Bontempo DE, Greenberg MT. Predictors and level of sustainability of community prevention coalitions. Am J Prev Med. 2008;34(6):495–501.


  74. Wallerstein N, Oetzel J, Duran B, Tafoya G, Belone L, Rae R. CBPR: what predicts outcomes? In: Minkler M, Wallerstein N, editors. Community-based participatory research for health: from process to outcomes. San Francisco: Jossey-Bass; 2008. p. 371–92.


  75. Lasker RD, Weiss ES, Miller R. Partnership synergy: a practical framework for studying and strengthening the collaborative advantage. Milbank Q. 2001;79(2):179–205.


  76. Butterfoss FD, Kegler MK. Toward a comprehensive understanding of community coalitions: moving from practice to theory. In: DiClemente RJ, Crosby RA, Kegler MC, editors. Emerging theories in health promotion practice and research: strategies for improving public health. San Francisco: Jossey-Bass; 2002. p. 157–93.


  77. Hawkins JD, Catalano RF, Arthur MW. Promoting science-based prevention in communities. Addict Behav. 2002;27(6):951–76.


  78. Hoekstra F, Mrklas KJ, Khan M, McKay RC, Vis-Dunbar M, Sibley K, Nguyen T, Graham ID, SCI Guiding Principles Consensus Panel, Gainforth HL. A review of reviews on principles, strategies, outcomes and impacts of research partnerships approaches: a first step in synthesising the research partnership literature. Health Res Pol Syst. 2020;18:51.


  79. Slattery P, Saeri AK, Bragge P. Research co-design in health: a rapid overview of reviews. Health Res Pol Syst. 2020;18:17.


  80. Rifkin SB. Examining the links between community participation and health outcomes: a review of the literature. Health Pol Plan. 2014;29:ii98–106.


  81. King G, Servais M, Forchuk C, Chalmers H, Currie M, Law M, Specht J, Rosenbaum P, Willoughby T, Kertoy M. Features and impacts of five multidisciplinary community-university research partnerships. Health Soc Care Commun. 2010;18(1):59–69.


  82. Boote J, Baird W, Sutton A. Public involvement in the systematic review process in health and social care: a narrative review of case examples. Health Policy. 2011;102(2–3):105–16.


  83. Gagnon MP, Desmartis M, Lepage-Savary D, Gagnon J, St-Pierre M, Rhainds M, Lemieux R, Gauvin FP, Pollender H, Legare F. Introducing patients’ and the public’s perspectives to health technology assessment: a systematic review of international experiences. Int J Technol Assess Health. 2011;27(1):31–42.


  84. Hanney S, Boaz A, Jones T, Soper B. Engagement in research: an innovative three-stage review of the benefits for healthcare performance. Health Serv Deliv Res. 2013;1(8):1–172.


  85. Concannon TW, Fuster M, Saunders T, et al. A systematic review of stakeholder engagement in comparative effectiveness and patient-centered outcomes research. J Gen Intern Med. 2014;29(12):1692–701.


  86. Tapp H, White L, Steuerwald M, Dulin M. Use of community-based participatory research in primary care to improve healthcare outcomes and disparities in care. J Comparative Effect Res. 2013;2(4):405–19.


  87. Brett J, Staniszewska S, Mockford C, Herron-Marx S, Hughes J, Tysall C, Suleman R. A systematic review of the impact of patient and public involvement on service users, researchers and communities. Patient. 2014;7(4):387–95.


  88. Cottrell E, Whitlock E, Kato E, Uhl S, Belinson S, Chang C, Guise JM, et al. Defining the benefits of stakeholder engagement in systematic reviews. Research white paper. Rockville, MD: Agency for Healthcare Research and Quality; 2014.

  89. Camden C, Shikako-Thomas K, Nguyen T, Graham E, Thomas A, Sprung J, Morris C, Russell DJ. Engaging stakeholders in rehabilitation research: a scoping review of strategies used in partnerships and evaluation of impacts. Disabil Rehabil. 2015;37(15):1390–400.


  90. George AS, Mehra V, Scott K, Sriram V. Community participation in health systems research: a systematic review assessing the state of research, the nature of interventions involved and the features of engagement with communities. PLoS ONE. 2015;10(10): e0141091.


  91. Jagosh J, Bush PL, Salsbert J, Macaulay A, Greenhalgh T, Wong G, Cargo M, Green L, Herbert C, Pluye P. A realist evaluation of community-based participatory research: partnership synergy, trust building and related ripple effects. BMC Public Health. 2015;15(725):1–11.


  92. Jones EL, Williams-Yesson BA, Hackett RC, Staniszewska SH, Evans D, Kamal Francis N. Quality of reporting on patient and public involvement within surgical research: a systematic review. Ann Surg. 2015;261:243–50.


  93. Gagliardi A, Berta W, Kothari A, Boyko J, Urquhart R. Integrated knowledge translation (iKT) in health care: a scoping review. Implement Sci. 2016;11(38):1–12.


  94. Baldwin JN, Napier S, Neville S, Wright St Clair VA. Impacts of older people’s patient and public involvement in health and social care research: a systematic review. Age Ageing. 2018;47(6):801–9.


  95. Cook N, Siddiqui N, Twiddy M, Kenyon R. Patient and public involvement in health research in low and middle-income countries: a systematic review. BMJ Open. 2019;9(5): e026514.


  96. Forsythe LP, Carman KL, Szydlowski V, Fayish L, Davidson L, Anyanwu CU. Patient engagement in research: early findings from the Patient-Centered Outcomes Research Institute. Health Aff. 2019;38(3):359–67.


  97. Arnstein L, Wadsworth AC, Yamamoto BA, Stephens R, Sehmi K, Jones R, Sargent A, Gegney T, Woolley KL. Patient involvement in preparing health research peer reviewed publications or results summaries: a systematic review and evidence-based recommendations. Res Involve Engag. 2020;6(34):1–14.


  98. Ludwig CL, Graham ID, Gifford W, Lavoie J, Stacey D. Partnering with frail or seriously ill patients in research: a systematic review. Res Involve Engag. 2020;6(52):1–22.


  99. van Schelven F, Boeije H, Marien V, Rademakers J. Patient and public involvement of young people with a chronic condition in projects in health and social care: a scoping review. Health Expect. 2020;23(4):789–801.


  100. Valdez ES, Skobic I, Valdez L, Garcia DO, Korchmaros J, Stevens S, Sabo S, Caravajal S. Youth participatory action research for youth substance use prevention: a systematic review. Subst Use Misuse. 2020;55(2):314–28.


  101. Daniels N, Gillen P, Casson K. Practitioner engagement by academic researchers: a scoping review of nursing, midwifery, and therapy professions literature. Res Theory Nurs Pract. 2020;34(2):85–128.


  102. Halvorsrud K, et al. Identifying evidence of effectiveness in the co-creation of research: a systematic review and meta-analysis of the international healthcare literature. J Public Health. 2020;43(1):197–208.


  103. Brett J, Staniszewska S, Mockford C, Herron-Marx S, Hughes J, Tysall C, Suleman R. Mapping the impact of patient and public involvement on health and social care research: a systematic review. Health Expect. 2012;17:637–50.


  104. Alla K, Hall WD, Whiteford HA, Head BW, Meurk CS. How do we define the policy impact of public health research? A systematic review. Health Res Policy Syst. 2017;15:84.




We would like to acknowledge Christie Hurrell (University of Calgary) for consultative advice on developmental search term clusters and Christine Neilson (CN), University of Manitoba for her PRESS assessments. Our thanks to Dr Aziz Shaheen (Cumming School of Medicine, University of Calgary) who generously provided summer studentships for trainees (Liam Swain (LS), Kevin Paul (KP) and Kate Aspinall). Our gratitude to Cheryl Moser (CM) for full-text screening contributions. We owe a debt of gratitude to our colleagues in the IKTRN and extend our thanks to the members of the Multicentre Collaborative Team.


Dr Ian Graham provided dissertation support through a Canadian Institutes for Health Research (CIHR) Foundation Scheme Grant (#143237) “Moving Knowledge Into Action for More Effective Practice, Programs and Policy: A Research Program Focusing on Integrated Knowledge Translation”. Dr Kathryn Sibley supported review research assistants through a CIHR Project Grant (#156372) “Advancing the Science of Integrated Knowledge Translation with Health Researchers and Knowledge Users: Understanding Current & Developing Recommendations for iKT Practice”. Dr Aziz Shaheen provided summer studentship support for University of Calgary Summer trainees (L Swain, K Paul, K Aspinall) through the Department of Gastroenterology, Cumming School of Medicine, University of Calgary. Funding agencies were not involved in the study design, collection, analysis or interpretation of the data, or the writing of the manuscript and its dissemination.

Author information

Authors and Affiliations



Conceptualization, study design [KJM with Doctoral Supervisory Committee: MDH, SRB, MT, IDG]. Formal analysis [KJM]. Funding acquisition [KJM, KMS, IDG]. Investigation [KJM, SM, MK, SS, JMB, LN, LMP, KP, AG, LS, KMS, MVD]. Methodology [KJM, KMS, MVD, MDH, SRB, MT, IDG]. Project administration [KJM, KMS, IDG]. Supervision [IDG, MDH, SRB, CT]. Validation [KJM, SM, MK, SS, KP]. Writing—original draft [KJM]. Writing—review, editing and approval of final manuscript [KJM, SM, MK, SS, JMB, LN, LMP, KP, AG, LS, KMS, MVD, MDH, SRB, CT, IDG]. [IDG] guarantor. All authors read and approved the final manuscript.

Authors’ information

KM is a Doctoral Candidate at the University of Calgary in the Department of Community Health Sciences—Health Services Research stream. She is employed by the Strategic Clinical Networks™ at Alberta Health Services as a Knowledge Translation Implementation Scientist. SM is a BSc Kinesiology major in the Faculty of Kinesiology at the University of Calgary. MK is a research coordinator in the Department of Community Health Sciences, University of Manitoba. SS is a BSc Health Sciences major (Biomedical Stream) at the University of Calgary. JMB is employed by the Knowledge Translation Program, St. Michael's Hospital, Unity Health Toronto, as a Research Manager. LN is an Assistant Professor at the University of Calgary in the Faculty of Nursing. She holds a Teaching and Learning Research Professorship and is a University of Calgary Teaching Scholar. LMP is a senior research fellow at the Pettenkofer School of Public Health, University of Munich (LMU), Germany. KP is a student funded by the University of Calgary Summer Studentships Program (2021). AG is a BSc Physiology major in the Faculty of Science at the University of Alberta. LS is an MSc student (Epidemiology Stream) in the Department of Community Health Sciences, Cumming School of Medicine, University of Calgary. KMS is an Associate Professor, Department of Community Health Sciences; Director—Knowledge Translation, Centre for Healthcare Innovation; University of Manitoba. MVD is the Data and Digital Scholarship Librarian at the University of British Columbia's Okanagan campus. MDH is the Medical Director for the Cardiovascular and Stroke Strategic Clinical Network™ at Alberta Health Services with a primary appointment as Professor in the Department of Clinical Neuroscience and Hotchkiss Brain Institute, Cumming School of Medicine, University of Calgary, and Foothills Medical Centre. SRB is Associate Professor, Faculty of Nursing, University of Calgary.
CT is the Associate Vice President, Research, University of Calgary. IDG is a Distinguished University Professor in the Schools of Epidemiology and Public Health and Nursing at the University of Ottawa and Senior Scientist at the Ottawa Hospital Research Institute.

Corresponding author

Correspondence to Kelly J. Mrklas.

Ethics declarations

Ethics approval and consent to participate

This study was reviewed and approved by the Conjoint Health Research Ethics Board (CHREB) at the University of Calgary (REB180174).

Consent for publication

Not applicable.

Competing interests

KJM, SM, MK, SS, JMB, LN, LMP, KP, AG, LS, KMS, MVD, SRB and CT have no competing interests to declare. MDH is Medical Director for the Cardiovascular and Stroke Strategic Clinical Network™ at Alberta Health Services. IDG is the Scientific Director for the Integrated Knowledge Translation Research Network (IKTRN).

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1

: Table S1 Synopsis of study methods. Table S2 Quality Assessment Tool for Studies with Diverse Designs (QATSDD) scores for included studies. Table S3 Synthesis of outcomes and impacts terms. Table S4 Bibliography of referenced study-level theories, models and frameworks used in eligible studies. Table S5 Bibliography of included studies. Table S6 PRISMA Systematic Review Checklist.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Mrklas, K.J., Merali, S., Khan, M. et al. How are health research partnerships assessed? A systematic review of outcomes, impacts, terminology and the use of theories, models and frameworks. Health Res Policy Sys 20, 133 (2022).
