Stakeholders’ experiences with the evidence aid website to support ‘real-time’ use of research evidence to inform decision-making in crisis zones: a user testing study

Abstract

Background

Humanitarian action in crisis zones is fraught with many challenges, including lack of timely and accessible research evidence to inform decision-making about humanitarian interventions. Evidence websites have the potential to address this challenge. Evidence Aid is the only evidence website designed for crisis zones that focuses on providing research evidence in the form of systematic reviews. The objective of this study is to explore stakeholders’ views of Evidence Aid, contributing further to our understanding of the use of research evidence in decision-making in crisis zones.

Methods

We designed a qualitative user-testing study to collect interview data from stakeholders about their impressions of Evidence Aid. Eligible stakeholders included those with and without previous experience of Evidence Aid. All participants were either currently working in a crisis zone or had worked in one within the last year. Participants were asked to perform the same user experience-related tasks and answer questions about this experience and their knowledge needs. Data were analysed using a deductive framework analysis approach drawing on Morville’s seven facets of the user experience: findability, usability, usefulness, desirability, accessibility, credibility and value.

Results

A total of 31 interviews were completed with senior decision-makers (n = 8), advisors (n = 7), field managers (n = 7), analysts/researchers (n = 5) and healthcare providers (n = 4). Participant self-reported knowledge needs varied depending on their role. Overall, participants did not identify any ‘major’ problems (highest order) and identified only two ‘big’ problems (second highest order) with using the Evidence Aid website, namely the lack of a search engine on the home page and that some full-text articles linked to/from the site require a payment. Participants identified seven specific suggestions about how to improve Evidence Aid, many of which can also be applied to other evidence websites.

Conclusions

Stakeholders in crisis zones found Evidence Aid to be useful, accessible and credible. However, they experienced some problems with the lack of a search engine on the home page and the requirement for payment for some full-text articles linked to/from the site.

Background

Humanitarian action in crisis zones is fraught with many challenges, not the least of which is having rapid access to research evidence that has the potential to inform decisions. Acting on available research evidence can help to improve the effectiveness and efficiency of humanitarian interventions [1]. Access to research evidence to support decision-making is even more imperative in crisis zones because the magnitude and speed of the disaster create a unique setting with known difficulties around accessing research evidence in a timely way (e.g. insufficient time, limited search skills, limited access to relevant evidence) [1,2,3,4,5,6,7,8,9]. Existing research has focused primarily on identifying the challenges decision-makers face in accessing evidence in crisis zones, highlighting the need for evidence websites to support evidence use in a timely way. However, because so little research has examined the experiences of stakeholders working in crisis zones with evidence websites, we currently do not know whether such strategies address this key challenge. This analysis will help address this critical gap in the literature, contributing to efforts to support the use of research evidence in decision-making.

This gap persists in the existing literature for five main reasons. First, while literature exists that examines evidence websites in other settings, these studies do not focus on evidence use in crisis zones [10,11,12]. Second, user-testing studies have tended to focus on facets of user experience without first investigating the information needs of users [12,13,14]. This means that they potentially missed gaining valuable insight into how evidence websites can best meet stakeholders’ knowledge needs. Third, some studies have contributed evidence about best practices in organizing content, but there are many other facets of user experience that remain unexplored [15]. Fourth, studies have not explored stakeholders’ views of and experiences with using a database to find evidence summaries on specific health policy and systems-relevant questions [11, 15]. Finally, there is a lack of third-party research about the effectiveness of evidence websites, with most existing research designed and conducted by groups associated with the website under study [11, 12, 15].

In light of the lack of third-party research in this area, this study presents a non-affiliated examination of the use of Evidence Aid by a diverse array of stakeholders working in crisis zones. Evidence Aid is the only evidence website designed for crisis zones that focuses on providing research evidence in the form of systematic reviews. Systematic reviews critically appraise and summarize all relevant individual studies, which reduces the time and search skills stakeholders need to access and appraise large bodies of research [16]. Evidence Aid has invested efforts to improve the site, but such efforts have not yet been formally evaluated.

Methods

Study aim

Employing a user-testing study design, our objective herein was to explore the information needs of stakeholders working in crisis zones and their views of and experiences with the Evidence Aid website. This paper also aims to put forward specific suggestions about how to improve evidence websites designed to support the use of research evidence in decision-making in crisis zones. Many of these suggestions can also be applied to other evidence websites that support the use of evidence in decision-making more broadly.

Study design

A user-testing study design was used to address our research objective. This type of design is widely employed in the field of product design and evaluation, and involves having users complete task-specific problems [17,18,19]. User testing involves inviting representative users of a product (in this case a website) to participate in individual semi-structured interviews in which they are asked about their experience as they interact with the website [20]. This study used qualitative methods (e.g. interview data, thematic analysis) to describe users’ knowledge needs, views and experiences with using Evidence Aid, and to gather specific suggestions about how their experiences could be improved. Qualitative methods can generate the information that developers need to make user-centred improvements to a resource. Our use of a concurrent think-aloud protocol allowed us to access user thoughts in the moment, lessening the likelihood that users would forget their insights or dismiss them as unimportant when asked to discuss their experience at a later date [21, 22].

We started our interviews with a set of preliminary general questions about the participant’s profession and knowledge needs, followed by task-specific questions during which participants were asked to think aloud about their experiences and views. Our lack of involvement with Evidence Aid makes us particularly well positioned to elicit frank feedback, and participants were informed of this independence at the outset of the interview.

Defining Evidence Aid

Evidence Aid (https://www.evidenceaid.org) is an English-language interface with some articles and user-friendly summaries available in Spanish and French. To be included in Evidence Aid, a systematic review must focus on the effectiveness of humanitarian action and include health-related outcomes. Evidence Aid provides an appraisal of each systematic review. Research evidence is available on Evidence Aid in three ways. First, a simple search bar located under a resources tab offers the option of selecting month (e.g. March 2019) and category (e.g. emergency type). Second, research evidence is organized into four main categories, namely health issues (e.g. burns, cardiovascular disease), emergency type (e.g. flood, epidemic), humanitarian cluster (e.g. camp coordination and camp management, emergency shelter) and person groups (e.g. adolescents, adults). Finally, Evidence Aid produces curated collections of evidence specific to crisis zones (e.g. acute malnutrition, prevention and treatment in emergencies and humanitarian crises).

Evidence Aid provides free access through their website to some of the full-text articles available on other websites that usually require a payment (e.g. the Cochrane Library collection for earthquakes). However, some of the full-text articles available through the site do require a payment to access the content, although this is arguably outside of Evidence Aid’s scope given the nearly limitless liability they would face if they offered free access to all articles.

Characteristics of participants

We purposively sampled two types of participants for the study: those who had used Evidence Aid before and those who had not. Purposeful sampling allowed us to gain valuable insights and an in-depth understanding of stakeholders’ views of and experiences with Evidence Aid to support evidence use in crisis zones [23]. All participants enrolled in the study were either currently working in a crisis zone or had worked in one within the last year. Participants were asked to self-identify the type of stakeholder they were based on their profession (e.g. senior decision-maker, advisor). We define stakeholder as “anyone that has an interest in, is likely to be affected by, or has the ability to influence” a decision ([24], p. 1939). All participants were asked the same general questions and user experience-related task questions. Those who had used Evidence Aid before were also asked how frequently they used the site, and this additional information was used to explore patterns in their views of and experiences with evidence websites in addressing their research evidence needs.

Participant recruitment and sample size

Decision-making processes are complex and require a network of stakeholders with different types of expertise. The types of stakeholders involved in decision-making processes include advisors, analysts and researchers providing formal support to senior decision-makers, field managers and healthcare providers [25]. A two-stage sampling approach was used to identify and recruit key stakeholders [26, 27]. The first stage included identifying stakeholders in the following five categories based on their anticipated roles in decision-making in crisis zones and, where appropriate, across the humanitarian aid, health system and health research system sectors: (1) senior decision-makers (e.g. presidents, directors); (2) field managers (e.g. field coordinators, heads of missions) directly involved in coordination and management of crisis zones; (3) healthcare providers (e.g. doctors, nurses) involved with either the development of medical guidelines in crisis zones or directly delivering medical care to people in crisis zones; (4) advisors directly involved in advising about policy development and implementation strategies; and (5) analysts and researchers directly involved in responding to research evidence requests from the previous four categories of participants. The second stage of recruitment used snowball sampling; research participants in the first stage were asked to identify any additional potential stakeholders.

To capture users who had used Evidence Aid, we sent a LinkedIn email invitation to a list of 789 members of a LinkedIn thematic working group named ‘Health Systems in Fragile and Conflict Affected States’. This thematic working group contained key actors in health who are working or have formerly worked in fragile and conflict-affected states and who were invited to participate in the Humanitarian Evidence Week initiative led by Evidence Aid on November 6–12, 2017. Participants who had not used Evidence Aid before were recruited in three ways. First, we included, in the same LinkedIn email invitation described above, a request to nominate colleagues who are in similar roles but who did not participate in Humanitarian Evidence Week and who had not used Evidence Aid. Second, we sent email invitations to those listed on a publicly available contact list for a quality improvement exercise conducted at Médecins Sans Frontières that focused on the organization’s approach to transferring research knowledge to policy and practice during the Syrian refugee crisis. Third, we sent email invitations to those identified through documentary and media analysis (using publicly available documents only).

We aimed to complete at least five user-test interviews for each participant category (i.e. senior decision-makers, field managers, healthcare providers, advisors, and analysts and researchers) and for both types of participants (i.e. those who had used Evidence Aid and those who had not), recognizing that this estimate was dependent on the availability of appropriate participants. We recruited 9 participants from our first stage of sampling, and 22 additional participants were identified through snowball sampling, for a total sample of 31 participants (Table 1). Previous user-testing studies have highlighted that 80% of known usability problems can be identified by five representative users, with diminishing returns after the fifth user [28].

Table 1 Profiles of respondents involved in the user-testing exercises

Data collection methods

Interviews were conducted via Skype by the first author (AFK), who acted as both the interviewer and note taker. The interviews lasted approximately 60 minutes and were audio-recorded after receiving permission from the participant. Audio recordings were transcribed verbatim and the written transcriptions were used for data analysis. Potentially identifying information (e.g. name) was removed at the time of transcription. We conducted the interviews in English, which is the language used in the Evidence Aid interface.

The user testing began with preliminary questions about the participant’s profession, the sources of research evidence they use and their knowledge of evidence websites, including Evidence Aid (see Additional file 1 for more details). We provided participants with a set of instructions, starting from an empty browser window. This was followed by a series of tasks for the participant to perform, some of which involved looking for specific content tailored to their field or professional interests. For example, a healthcare provider in a crisis zone might choose to find a specific review about antibiotic resistance among children in refugee camps. Other general tasks included finding help, finding the search engine within the Evidence Aid website and finding information about Evidence Aid. The concurrent think-aloud method was used throughout [19]. Additionally, participants were asked to state the major problems they faced, whether these were ‘big’ problems or frustrations while performing the task or minor issues, any positive feedback they would like to provide, and suggestions for improving their experience. We explained to participants that major problems are ones that have serious potential for causing erroneous usage of Evidence Aid and therefore prevent the intended tasks from being completed; big problems are ones where users face frustration and difficulty in completing tasks but are able to work around the problem; and minor issues are those that slow down or inconvenience users unnecessarily in completing tasks [29,30,31]. Finally, to assess their overall experience with Evidence Aid, we asked questions related to Morville’s seven facets of the user experience: findability, usability, usefulness, desirability, accessibility, credibility and value [32].

Data analysis

We used a deductive framework analysis approach with our collected data [33, 34]. Framework analysis is a qualitative method that can be applied to research that has specific questions, professional participants and a limited time frame [34]. This approach allowed us to describe and interpret what is happening in a particular setting (i.e. use of Evidence Aid) by asking our participants specific questions [33]. It involved a five-step process: familiarization (i.e. immersing ourselves in the collected data and making notes of key ideas and recurrent themes), identifying a thematic framework (i.e. recognizing emerging themes), indexing (i.e. using NVivo to identify sections of data that correspond to particular themes), charting (i.e. arranging identified sections of data into table exhibits), and mapping and interpretation (i.e. analysing key characteristics from the exhibits) [33].

Data were analysed by drawing on Morville’s seven facets of the user experience, as described above [32]. A detailed description of the seven facets is provided in Table 3. Morville’s framework, which he refers to as the ‘honeycomb’, was selected because it combines the main facets of usability, incorporates the emotional aspects of user experience, and is often used in other user-testing studies to explore user experience in an information design context [12, 13, 35].

Results

Participant profiles

A total of 31 interviews were completed (Table 1), with senior decision-makers (n = 8), advisors (n = 7), field managers (n = 7), analysts/researchers (n = 5) and healthcare providers (n = 4). Good balance was achieved across types of organizations (e.g. non-governmental organizations, international agencies, government agencies and academic institutions). A high proportion of interviewees had not used Evidence Aid before (n = 22); 65% of the participants were women (n = 20) and 35% were men (n = 11). Seventeen interviewees had never heard of Evidence Aid before our interview, while 14 participants had heard of Evidence Aid and used it occasionally.

Participant knowledge needs, types of information used to address knowledge needs and sources for obtaining information

Many of our participants highlighted the scarcity of available knowledge relevant to crisis zones, with one senior decision-maker stating:

There is never enough knowledge and evidence in fast evolving crisis, especially when we deal with emergencies and we never know what is going on and we are always desperate to get more information. The lack of ability to get … information during a fast-moving developing disaster situation is a massive challenge.

The distribution of participant knowledge needs, types of information used and sources for obtaining information varied depending on the type of stakeholder (Table 2). The knowledge needs most cited by each type of stakeholder were: policy development related to health-system strengthening and health-advocacy approaches by senior decision-makers; operational logistical management (e.g. setting up mobile health clinics in crisis zones) by field managers; clinical management of patients in a crisis zone by healthcare providers; and community-level programme development (e.g. how to support behaviour change in a community setting) and implementation strategies for any of the above four domains by advisors and senior decision-makers, respectively.

Table 2 Users’ knowledge needs, types of information used, and sources for obtaining informationa

As for the types of information used by our participants to address their knowledge needs, we focus our attention here on those that are within Evidence Aid’s scope — systematic reviews and meta-analyses were most cited by analysts and researchers, while intervention studies (e.g. clinical trials) were most cited by senior decision-makers, healthcare providers and advisors. Global guidelines (e.g. from WHO) were most cited by advisors. Finally, our participants obtained information from a wide variety of sources (e.g. evidence websites such as ReliefWeb and Health Systems Evidence, reports by UN agencies, correspondence with senior decision-makers, and social networking sites such as Facebook and Twitter).

User experiences

Overall, there were two notable differences in responses across our diverse types of stakeholders and between users and non-users of Evidence Aid. First, the analysts and researchers we interviewed were enthusiastic that Evidence Aid is attempting to bring research evidence closer to humanitarian aid workers, while some senior decision-makers were sceptical about using Evidence Aid as opposed to relying on information stemming from their ground operations to answer specific questions. Second, participants who had used Evidence Aid before were more familiar with the organization of tabs on the website, which allowed them to reach desired content faster than non-users. Finally, there were no notable differences in responses across gender.

Participants did not identify any ‘major’ problems (highest order) across the seven domains of the user experience (Table 3). However, participants identified two ‘big’ problems (second highest order) related to findability and accessibility. In terms of findability, participants frequently cited the lack of a search engine on the home page as a problem in locating desired articles. Turning to accessibility, participants expressed frustration that some of the full-text articles available through the site required a payment to access the content and that timely assessment data on current crises are missing; provision of access to pay-walled research and of timely assessment data is outside the scope of Evidence Aid’s services. We outline below, by domain, the most frequently cited minor issues, positive feedback and specific suggestions.

Table 3 Users’ experiences using Evidence Aida

Findability

Participants cited a minor issue of having difficulty locating the search bar. As for positive feedback, participants indicated that the four cluster areas (i.e. health issues, emergency type, humanitarian cluster and person groups) under the ‘Resources’ tab were helpful in locating desired information. In addition, participants appreciated that the ‘tags’ in the results page helped to further narrow down their search results. Participants suggested the addition of an advanced search filter for more targeted search results (e.g. date of last search, specific contexts and language preference).

Usability

Participants cited as a minor issue having to undertake multiple steps to perform basic tasks to arrive at results on first use. However, some participants did note that, once they had enough time on the site, they were able to perform basic tasks efficiently. A field manager commented:

I appreciate that there is a learning curve until one is familiar with the site and how to use it efficiently.

To improve the usability of the site, some participants suggested creating a clearer statement of the site’s purpose and the type of evidence it provides.

Usefulness

For minor issues, participants sometimes cited a lack of systematic reviews and guidelines related to their own particular areas of professional interest or fields of work. Participants provided positive feedback related to how useful the site is in providing an independent evidence website for curated evidence on crisis zones for decision-makers working in the field. As one senior decision-maker commented:

It is good for humanitarian workers to have all the articles on one site so they can go there and look for evidence-based approaches.

Most participants suggested that Evidence Aid should focus some of their efforts on turning the evidence available into explicit actionable points for decision-makers to use in crisis zones. A stakeholder highlighted this suggestion by stating:

Most people in the humanitarian sector do not understand abstracts and they almost alienate them. A better strategy is friendly-summary reviews that are shorter, to the point, with clear actionable points.

Desirability

Participants cited a minor issue of photos on Evidence Aid being ‘ordinary’ (i.e. academic looking) and repetitive. A healthcare provider explained Evidence Aid’s choice of pictures on the home page, stating:

Photos make it seem like a training workshop website with the pictures of classroom settings.

Photos displayed on Evidence Aid prompted many participants (including the above healthcare provider) to suggest that the developers behind the site should consider using compelling photos that are relevant to humanitarian contexts. Participants did appreciate the basic simple design of the site and the lack of numerous pop-up advertisements.

Accessibility

Participants cited concerns over whether documents can be read online or have to be downloaded first, the latter of which can be a problem in a low-bandwidth internet setting and would pose a significant limitation to those using the site from the frontlines of a crisis zone, with a healthcare provider stating:

Access to the internet in the field is a big barrier. It is a touch and go situation. – healthcare provider working in the field at a non-governmental organization (NGO)

Participants did appreciate that Evidence Aid is accessible to a broad spectrum of people working in the humanitarian sector who have access to the internet. A mobile-friendly app, which is not currently available, or the use of a responsive web design was suggested as a way to improve the overall user experience. Senior decision-makers highlighted the importance of having open-access resources and timely assessment data on current crises to inform decision-making, with one stakeholder stating:

There needs to be more open-access resources. Organizations need to share early on data from the field that would allow us to somehow get other actors to build the evidence to better inform our decisions.

A healthcare provider further emphasized the importance of open-access resources stating:

Open source access is still a big problem unless you have university library access.

Credibility

Participants cited as a minor issue not knowing clearly what inclusion criteria are used to select the best available evidence for the site. Participants emphasized that the direct and clear link to the Cochrane Library increased their level of trust in the evidence presented. For specific suggestions, participants wanted to see greater visibility given to major contributors and funders, with an advisor working at an NGO stating:

Highlight the main funders of the site on front page to make it more transparent with emphasis on the major contributors to Evidence Aid.

Value for the user

The lack of awareness among humanitarian aid workers about the existence of or value added by Evidence Aid was cited by participants as a minor issue. Several participants commented that they had heard of Evidence Aid but never used it because of a lack of awareness of its value. An advisor and a field manager highlighted this during their interviews, stating:

I heard of it but never used it. It has the potential of being super helpful. But not many people know about it now. – advisor working at an NGO

This prompted our participants to suggest that Evidence Aid should emphasize more clearly on their site why evidence matters in humanitarian action and to continue collaborating with other organizations to fill gaps with new systematic reviews.

Discussion

Our study suggests that there are no ‘major’ problems (highest order) and only two ‘big’ problems (second highest order) that stakeholders experience with using the Evidence Aid website, namely the lack of a search engine on the home page and the fact that some full-text articles linked to or from the site are not accessible without payment to the publisher. Our study participants identified positive feedback related to credibility (i.e. the direct and clear link to the Cochrane Library increasing their level of trust in the evidence presented) that raises an important point that warrants highlighting. We found that users were inclined to make judgements about the trustworthiness of the Cochrane Library as the publishing source rather than critically assessing individual pieces of evidence, a finding similar to those of other studies [12]. Additionally, participants identified a minor issue related to value (i.e. the lack of awareness among humanitarian aid workers about the existence of or value added by Evidence Aid) that provides a key insight into the challenges of supporting the use of research evidence to inform decision-making in crisis zones, as highlighted in other studies [36,37,38].

The seven specific suggestions made by our participants and illustrated in Table 3 present actionable ways to improve Evidence Aid, many of which can also be applied to other evidence websites designed to support the use of research evidence in decision-making; these include (1) create a home page-based search engine; (2) strive to ensure that basic tasks can be easily accomplished on first use; (3) ensure that the search results are presented in a user-friendly way (e.g. turn the evidence available into explicit actionable points), in a language that can be read (i.e. in common first languages), and without jargon; (4) keep the site design simple, with images that are appropriate to crisis zones and capture users’ attention; (5) accommodate diverse user contexts (e.g. inability to pay for articles) and physical functioning (e.g. colour blindness); (6) ensure accuracy of the information on the site (e.g. correct years of publication); and (7) increase the value of Evidence Aid for the user by achieving the second part of the stated mission (i.e. enabling the use of evidence), whereby Evidence Aid or another group can choose from a variety of additional ways to enable the use of research evidence (e.g. rapid reviews).

A common challenge that stakeholders face when trying to use research evidence to inform their decision-making relates to the lack of knowledge management skills and infrastructure [6, 39,40,41,42,43] needed to cope with, for example, the huge volume of research evidence currently produced and scattered across journals, books, reports and websites, many of which require a payment to access. Evidence use includes not only the determination of what evidence is needed to inform a decision, but also how to best support the use of that evidence to its full potential. Our study found that Evidence Aid is contributing to strengthening efforts to support evidence-informed decision-making in crisis zones.

Our findings suggest the following three contributions to understanding evidence use in a crisis zone. First, many of our participants emphasized the need for evidence to be turned into explicit actionable points (e.g. check-lists). However, we recognize that this task is better delegated to a person or group that creates connections between researchers and decision-makers (e.g. knowledge brokers). Second, our participants highlighted that evidence summaries must clearly indicate the basic findings from systematic reviews, including key messages that can be acted upon [10]. Third, our stakeholders raised the importance of having a well-organized website that consists of a wide variety of relevant information, allowing them easy and efficient access to the best available evidence in the limited time they have available to make, inform or advocate for a decision [11]. Clearly, stakeholders working in crisis zones have a diverse array of knowledge needs, and these findings reaffirm the importance of doing further scholarly work to better understand how to best support evidence use in crisis zones.

Findings in relation to other studies

Our finding that participants did not identify any major problems (and only two big problems) with using Evidence Aid aligns with previous studies identifying that users generally find there are many helpful attributes of evidence websites (e.g. multiple sources of information in one spot) [10, 15]. This study also aligns with other studies in putting forward specific suggestions to improve the use of evidence websites (e.g. functions in the users’ first language) [10]. Finally, this study complements the existing literature in being the first to specifically focus on an evidence website for crisis zones, elaborating on the information needs of stakeholders working in crisis zones and putting forward specific suggestions that address all facets of improving the user experience; additionally, the research team is independent of Evidence Aid [4, 5, 10,11,12, 15, 35, 38, 44,45,46,47].

Strengths and limitations

There are a number of strengths to this study. As far as we are aware, this is the first study to examine evidence website use in crisis zones and the first user-testing study to investigate the information needs of stakeholders working in crisis zones, which provides valuable insight into how best to meet their knowledge needs. Second, we interviewed a large and diverse range of people for a study of this type, with the number higher than that thought to reveal 80% of known usability problems [28]. The diversity in our study lies in the types of stakeholders included, their organizational affiliations, and whether they had used Evidence Aid or not (and hence a likely broad sampling of the challenges stakeholders would face in navigating research evidence for use in crisis zones). Some notable differences in responses emerged across these diverse types of stakeholders and between users and non-users of Evidence Aid. However, there were no notable differences in responses across gender or participants’ ability to verbally communicate their insights in English. Finally, this study presents a non-affiliated examination of Evidence Aid at a time when there is a lack of third-party research about the effectiveness of evidence websites, with most existing research designed and conducted by groups associated with the website under study.

One potential limitation of this study is that all our interviews, except one, were conducted with stakeholders who were not physically present in a crisis zone at the time of the interview. Increased time pressure in crisis zones may influence participants’ views and experiences in finding relevant research evidence for decision-making. To mitigate this limitation, we purposively sampled participants who were either currently working in a crisis zone or had worked in one within the last year, and we prompted them to consider real-life situations when responding.

Implications for practice

There are four main implications, the first of which is that the developers of Evidence Aid should continue their efforts to provide the best available evidence on the effectiveness of humanitarian action while taking into account the specific suggestions, summarized above, to improve the site. These specific suggestions can also be applied to other evidence websites designed to support the use of research evidence in decision-making. Second, the developers of the Evidence Aid site should consider whether they or another group are better positioned to fulfil the second part of their mission, namely ‘enabling the use of the best available research evidence’, by expanding their activities to include creating demand for research evidence, providing rapid reviews in response to decision-maker requests and institutionalizing the use of research evidence, among other options [48,49,50,51,52]. Third, senior decision-makers working in crisis zones should work with humanitarian aid workers to raise awareness of the existence of evidence websites, like Evidence Aid, and to build their capacity to find and use research evidence in decision-making. Finally, the users of Evidence Aid should continue to provide their feedback on how Evidence Aid and other evidence websites can best meet their knowledge needs.

Future research

The next step could be for researchers to explore stakeholders’ experiences with an updated version of Evidence Aid to ‘test’ (e.g. through randomized controlled trials) whether specific changes have improved the usability and use of the site. Additionally, researchers could evaluate future efforts by Evidence Aid or its partners to address the part of its mission focused on enabling the use of research evidence. Researchers could also explore other evidence websites (e.g. ReliefWeb, the Cochrane database), which were most cited by our participants as their main sources of information, to find ways to adapt these websites’ strengths to improve Evidence Aid. Finally, researchers working in other domains (i.e. outside humanitarian crises) could use our methodology (i.e. diversity in types of stakeholders and organizational affiliations) to explore stakeholders’ views of and experiences with other evidence websites designed to support evidence-informed decision-making.

Conclusion

Stakeholders in crisis zones found Evidence Aid to be useful, accessible and credible. However, they experienced some problems with the lack of a search engine on the home page and the fact that some full-text articles linked to or from the site require a payment. This is the first study to specifically focus on an evidence website for crisis zones, to elaborate on the information needs of stakeholders, and to put forward specific suggestions about how to improve evidence websites. By making evidence available, evidence websites provide one of the necessary inputs for evidence-informed decision-making processes. The absence of evidence websites creates a clear gap in supporting evidence-informed decision-making.

Availability of data and materials

All data generated or analysed during this study are included in this published article.

References

  1. Altay N, Labonte M. Challenges in humanitarian information management and exchange: evidence from Haiti. Disasters. 2014;38(s1):S50–72.

  2. Mellon D. Evaluating Evidence Aid as a complex, multicomponent knowledge translation intervention. J Evid Based Med. 2015;8(1):25–30.

  3. Allen C. A resource for those preparing for and responding to natural disasters, humanitarian crises, and major healthcare emergencies. J Evid Based Med. 2014;7(4):234–7.

  4. Kayabu B, Clarke M. The use of systematic reviews and other research evidence in disasters and related areas: preliminary report of a needs assessment survey. PLoS Curr. 2013;5. https://doi.org/10.1371/currents.dis.ed42382881b3bf79478ad503be4693ea.

  5. Clarke M. Evidence Aid – from the Asian tsunami to the Wenchuan earthquake. J Evid Based Med. 2008;1(1):9–11.

  6. Lavis J, Davies H, Oxman A, Denis J-L, Golden-Biddle K, Ferlie E. Towards systematic reviews that inform health care management and policy-making. J Health Serv Res Policy. 2005;10(1_suppl):35–48.

  7. Lavis JN, Oxman AD, Moynihan R, Paulsen EJ. Evidence-informed health policy 1 – synthesis of findings from a multi-method study of organizations that support the use of research evidence. Implement Sci. 2008;3:53.

  8. Shearer JC, Dion M, Lavis JN. Exchanging and using research evidence in health policy networks: a statistical network analysis. Implement Sci. 2014;9:126.

  9. Straus SE, Tetroe J, Graham I. Defining knowledge translation. Can Med Assoc J. 2009;181(3–4):165–8.

  10. Barbara AM, Dobbins M, Haynes RB, Iorio A, Lavis JN, Raina P, et al. McMaster Optimal Aging Portal: an evidence-based database for geriatrics-focused health professionals. BMC Res Notes. 2017;10:271.

  11. Lavis JN, Wilson MG, Moat KA, Hammill AC, Boyko JA, Grimshaw JM, et al. Developing and refining the methods for a ‘one-stop shop’ for research evidence about health systems. Health Res Policy Syst. 2015;13:10.

  12. Rosenbaum SE, Glenton C, Cracknell J. User experiences of evidence-based online resources for health professionals: user testing of The Cochrane Library. BMC Med Inf Decis Mak. 2008;8:34.

  13. Mijumbi-Deve R, Rosenbaum SE, Oxman AD, Lavis JN, Sewankambo NK. Policymaker experiences with rapid response briefs to address health-system and technology questions in Uganda. Health Res Policy Syst. 2017;15:37.

  14. Rosenbaum SE, Glenton C, Nylund HK, Oxman AD. User testing and stakeholder feedback contributed to the development of understandable and useful Summary of Findings tables for Cochrane reviews. J Clin Epidemiol. 2010;63(6):607–19.

  15. Mutatina B, Basaza R, Obuku E, Lavis JN, Sewankambo N. Identifying and characterising health policy and system-relevant documents in Uganda: a scoping review to develop a framework for the development of a one-stop shop. Health Res Policy Syst. 2017;15(1):7.

  16. Gopalakrishnan S, Ganeshkumar P. Systematic reviews and meta-analysis: understanding the best evidence in primary healthcare. J Family Med Prim Care. 2013;2(1):9.

  17. Nielsen J, Loranger H. Prioritizing web usability. London: Pearson Education; 2006.

  18. Kuniavsky M. Observing the User Experience: A Practitioner's Guide to User Research. Waltham: Elsevier; 2003.

  19. Van Den Haak M, De Jong M, Schellens PJ. Retrospective vs. concurrent think-aloud protocols: testing the usability of an online library catalogue. Behav Inform Technol. 2003;22(5):339–51.

  20. Krahmer E, Ummelen N. Thinking about thinking aloud: a comparison of two verbal protocols for usability testing. IEEE Trans Prof Commun. 2004;47(2):105–17.

  21. Newell A, Simon HA. Human Problem Solving. Englewood Cliffs: Prentice-Hall; 1972.

  22. Ericsson KA, Simon HA. Verbal reports as data. Psychol Rev. 1980;87(3):215.

  23. Patton MQ. Qualitative Research & Evaluation Methods: Integrating Theory and Practice. Thousand Oaks: Sage; 2015.

  24. Boyko JA, Lavis JN, Abelson J, Dobbins M, Carter N. Deliberative dialogues as a mechanism for knowledge translation and exchange in health systems decision-making. Soc Sci Med. 2012;75(11):1938–45.

  25. Hofmeyer A, Scott C, Lagendyk L. Researcher-decision-maker partnerships in health services research: practical challenges, guiding principles. BMC Health Serv Res. 2012;12:280.

  26. Gentles SJ, Charles C, Ploeg J, McKibbon K. Sampling in qualitative research: insights from an overview of the methods literature. Qual Rep. 2015;20(11):1772–89.

  27. Yin RK. Basic Types of Designs for Case Studies. Case Study Research: Design and Methods 5th ed. Thousand Oaks: Sage Publications; 2014.

  28. Nielsen J. Why you only need to test with 5 users. Alertbox. Useit.com. 2000. Accessed 4 Feb 2019.

  29. Tan W-s, Liu D, Bishu R. Web evaluation: heuristic evaluation vs. user testing. Int J Ind Ergon. 2009;39(4):621–7.

  30. Forsell C, Johansson J, editors. An Heuristic Set for Evaluation in Information Visualization. Proceedings of the International Conference on Advanced Visual Interfaces. New York: ACM; 2010.

  31. Stone D, Jarrett C, Woodroffe M, Minocha S. User Interface Design and Evaluation. San Francisco: Elsevier; 2005.

  32. Morville P. User Experience Design. Ann Arbor: Semantic Studios LLC; 2004. http://semanticstudios.com/publications/semantics/000029.php. Accessed 4 Feb 2019.

  33. Ritchie J, Spencer L. Qualitative data analysis for applied policy research. Qual Res Companion. 2002;573(2002):305–29.

  34. Srivastava A, Thomson SB. Framework Analysis: A Qualitative Methodology For Applied Policy Research. J Adm Govern. 2009;4:72.

  35. Gariba EB. User-testing of Health Systems Evidence and the EVIPNet Virtual Health Library among health system policymakers and stakeholders in Uganda and Zambia: a qualitative study. Hamilton: McMaster University; 2015.

  36. Ager A, Burnham G, Checchi F, Gayer M, Grais R, Henkens M, et al. Strengthening the evidence base for health programming in humanitarian crises. Science. 2014;345(6202):1290–2.

  37. Blanchet K, Ramesh A, Frison S, Warren E, Hossain M, Smith J, et al. Evidence on public health interventions in humanitarian crises. Health in humanitarian crises. Lancet. 2017;390:2287–96.

  38. Turner T, Green S, Harris C. Supporting evidence-based health care in crises: what information do humanitarian organizations need? Disaster Med Public Health Prep. 2011;5(1):69–72.

  39. Straus SE, Tetroe JM, Graham ID. Knowledge translation is the use of knowledge in health care decision making. J Clin Epidemiol. 2011;64(1):6–10.

  40. Lavis JN, Ross SE, Hurley JE. Examining the role of health services research in public policymaking. Milbank Q. 2002;80(1):125–54.

  41. Milner M, Estabrooks CA, Myrick F. Research utilization and clinical nurse educators: a systematic review. J Eval Clin Pract. 2006;12(6):639–55.

  42. Légaré F, Ratté S, Gravel K, Graham ID. Barriers and facilitators to implementing shared decision-making in clinical practice: update of a systematic review of health professionals’ perceptions. Patient Educ Couns. 2008;73(3):526–35.

  43. Grimshaw JM, Eccles MP, Walker AE, Thomas RE. Changing physicians' behavior: what works and thoughts on getting more things to work. J Contin Educ Heal Prof. 2002;22(4):237–43.

  44. Tharyan P, Clarke M, Green S. How the Cochrane collaboration is responding to the Asian tsunami. PLoS Med. 2005;2(6):e169.

  45. Wallace J, Nwosu B, Clarke M. Barriers to the uptake of evidence from systematic reviews and meta-analyses: a systematic review of decision makers’ perceptions. BMJ Open. 2012;2(5):e001220.

  46. Petticrew M, McCartney G. Using systematic reviews to separate scientific from policy debate relevant to climate change. Am J Prev Med. 2011;40(5):576–8.

  47. Kar-Purkayastha I, Clarke M, Murray V. Dealing with disaster databases–what can we learn from health and systematic reviews?: Application in practice. PLoS Curr. 2011;3:RRN1272.

  48. Lavis JN, Røttingen J-A, Bosch-Capblanch X, Atun R, El-Jardali F, Gilson L, et al. Guidance for evidence-informed policies about health systems: linking guidance development to policy development. PLoS Med. 2012;9(3):e1001186.

  49. Moat K, Lavis J. Supporting the use of Cochrane Reviews in health policy and management decision-making: Health Systems Evidence. Cochrane Database Syst Rev. 2011;8:ED000019.

  50. Moat K, Lavis J. Supporting the use of research evidence in the Americas through an online “one-stop shop”: the EVIPNet VHL. Cad Saude Publica. 2014;30(12):2697–701.

  51. Wilson M, Lavis J, Grimshaw J. Supporting the use of research evidence in the Canadian health sector. Healthcare Q. 2012;15:58–62.

  52. Lavis JN. How can we support the use of systematic reviews in policymaking? PLoS Med. 2009;6(11):e1000141.

  53. Stanford Persuasive Technology Lab. Stanford Guidelines for Web Credibility. Stanford University; 2004. http://credibility.stanford.edu/guidelines/index.html. Accessed 4 Feb 2019.

Acknowledgements

Not applicable.

Funding

No funding was received from Evidence Aid to conduct this study. AFK is supported by an Ontario Graduate Scholarship providing financial support to enable doctoral dissertation work.

Author information

Authors and Affiliations

Authors

Contributions

AFK conceived the study, collected the data and drafted the manuscript. All authors participated in the design of the study and the analysis of the data, and provided comments on drafts of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Ahmad Firas Khalid.

Ethics declarations

Ethics approval and consent to participate

Ethics approval was sought from McMaster University through the Hamilton Integrated Research Ethics Board (HiREB), Project #4830. We also sought consent for the process (including audio-recording) from all participants.

Consent for publication

Not applicable.

Competing interests

All authors declare that they have no competing interests. No pre-existing relationship exists between the authors of this study and the group behind Evidence Aid.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1. Interview guide used in the user test.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Khalid, A.F., Lavis, J.N., El-Jardali, F. et al. Stakeholders’ experiences with the evidence aid website to support ‘real-time’ use of research evidence to inform decision-making in crisis zones: a user testing study. Health Res Policy Sys 17, 106 (2019). https://doi.org/10.1186/s12961-019-0498-y
