Open access
State-of-the-Art Reviews
Aug 16, 2021

Environmental Health Science and Engineering for Policy, Programming, and Practice

Publication: Journal of Environmental Engineering
Volume 147, Issue 10

Abstract

At the interface of environmental health science and engineering with policy, programming, and practice, multiple actors and social processes support communication and decisions. Understanding this interface would help accelerate public health improvements and the effectiveness of interventions, especially in managing the contextual factors that lead to disparities in service provision, resource use, and health. Misconceptions about the uptake of evidence in decision-making often attribute fault solely to the overly technical nature of research findings or to the competing and frequently shifting priorities of policy makers. The most effective means to enhance the flow of evidence, however, is regular, structured, and inclusive two-way communication, where knowledge brokers (individuals or institutions) link scientists, engineers, practitioners, policy makers, and the public. Good practices for enhancing science application should be recognized and tailored for environmental health science and engineering. We recommend that (1) science application and interprofessional engagement be incorporated into higher education, (2) funding mechanisms ensure stakeholder engagement beyond the project cycle, (3) evidence synthesis be collectively supported, (4) common rigor and study quality measures be adopted for diverse study types, and (5) research into effective science brokering practices and outcomes be encouraged.

Introduction

Environmental health science and environmental engineering (EHSE) are applied sciences because they advance solutions to real-world problems. Surprisingly—for fields in which much attention goes toward validating interventions, assessing impact, and analyzing cost-effectiveness and cost-benefit ratios—less attention has been paid to the impacts of science on the policies, planning, and practices they ostensibly influence. Other fields have embraced boundary science and evidence-based decision-making (EBDM), for example, in the adjacent environmental science and medical literatures (Cash et al. 2003; Greenhalgh et al. 2014).
This insufficient attention has cultivated the concept of a science-policy gap or valley of death (Poch et al. 2017). Scientists are caricatured as unable to convey timely evidence to those needing information or to avoid caveats that strip it of actionable meaning; and parodies of decision makers suggest attention spans limited to two pages of banal generalization or one election cycle. Neither description is fair or comprehensive. These stereotypes obscure the efforts of individuals who are keen to contribute to beneficial outcomes and respond to societally critical challenges, such as climate change and the COVID-19 pandemic, but who mistrust and lack understanding of one another (Quevauviller 2010).
Sarewitz and Pielke (2007) discuss science (evidence supply), policy (evidence demand), and the neglected reconciliation process in the middle. Quevauviller describes several shortcomings at the interface: policies are not sufficiently based in evidence, policy processes fail to address problems highlighted by science, science fails to account for and respond to urgent policy needs, science is conducted in a way that does not produce policy-relevant results, and inadequate communication fails to bridge the worlds of science and policy. Many of these issues, also highlighted in "science meets policy" workshops and "bridging the gap" conferences in the European Union, stem from poor communication or miscommunication (Quevauviller 2010).
In this paper, we review and reflect on evidence and experience around science application as relevant to EHSE. Our objective is to identify means to enhance the benefits arising from the combined efforts of evidence generators and users. We initially describe relevant paradigms such as evidence-based, boundary science, engaged research, and diffusion theory and critique them. We then consider knowledge about good practices for the science application interface. Next, we explore characteristics of EHSE that challenge work at this interface, such as the need to make decisions with little evidence, complexity, and internationality. We conclude with five recommendations for enhancing the benefits of EHSE by improving science application.

Methods

We iterated between two evidence streams on EHSE: a review of published evidence and reflection on career experience in science application interactions. Because of our backgrounds, water supply and sanitation are disproportionately represented in examples. Our scope spans low-, middle- and high-income countries.
Because the pool of literature on this topic was slight, a scoping literature review sought resources from all academic fields to capture historical trends and shifting theory and provide practical interpretation. Literature was gathered from 2015 to 2020 with no restrictions on the date of publication. All types of resources were considered, including peer-reviewed journal articles, books, gray literature, briefs, and website content. Information was cataloged in an encyclopedic format for examination and synthesis.
The reflection on experience resembles analytic autoethnography (Anderson 2006) and participant observation (e.g., Stevens 2011), calling on examples of policy, programming, and practice where the decision-making institutions were governments, nongovernmental organizations (NGOs), and commercial companies. Like Cooper (2016), we did not systematically collect and record contemporaneous notes, interviews, or other data; the events are recalled from numerous, diverse interactions over many years. To counteract associated bias, we made efforts to verify and ground-truth descriptions with others.
Prior literature often refers to science-policy relationships, which we intentionally broadened to science-application. While policy can indicate commitment to a course of action by any group, it is often interpreted narrowly to suggest governments or large organizations. Here, we emphasize that science can also be applied to the planning of programs and interventions, and in development and adoption of good professional practices. Similarly, we adopt the term interface over boundary (as widely used in the literature) to emphasize connection rather than a presumed obstacle. Related terms, as cataloged by Setty et al. (2019), include translational research, applied science, and implementation science.

What Is Science Application?

Applied science uses scientific methods to pursue practical goals (Narayanamurti and Odumosu 2016), wherein a system of interrelated individuals works together on the same issue. Science application is challenging to define because it may be undocumented outside the minds and personal experiences of the scientists and decision makers involved. One definition of the science-policy interface cites the “social processes which encompass relations between scientists and other actors in the policy process, and which allow for exchanges, co-evolution, and joint construction of knowledge with the aim of enriching decision-making” (van den Hove 2007). This can be readily extended to other applications, for example, thought leadership, practice, program management, and funding allocation. It centers on people and communication, not scientific publications or policy documents.
Because of its practical purpose, applied science is necessarily embedded in political processes. While evidence might be considered a value-free influence on decision-making, the scientists and scientific processes behind it are typically value laden. Cleaving to the notion of scientific impartiality may obscure the role of implicit values such as democracy, justice, sustainability, religious belief, and equality (of access, distribution, or outcome). However, such themes are prominent in some professional forums (e.g., NAS 2020), especially with renewed attention to social equity and justice. Frameworks such as Sen’s capability framework (Sen 1985) and human rights are infrequently applied in EHSE. However, such perspectives are insightful when applied broadly [e.g., within the role of the United Nations (UN) special rapporteurs on the human rights to water and sanitation (Heller et al. 2020)] or in a focused manner (e.g., Barrington et al. 2017).
Most use of scientific research is conceptual (contributing to a holistic knowledge, awareness, or understanding), rather than explicit (leading directly to one or more changes in policy) (De Goede et al. 2012; Nutley et al. 2007). Documented research findings alone are usually insufficient to change practices, although they can raise awareness as a prelude to change. Depending on the field, only a small minority of decision makers actively seek out research (Nutley et al. 2007). Amjad et al. (2015), contrasting water utilities with governments, found distinct types of decision-making and different preferred sources of evidence. Bounded rationality suggests policy makers often gather limited information just before making time-bound decisions (Cairney 2016). Further, constructionist theories of learning suggest new knowledge will be shaped and filtered by the reader’s preexisting experiences and perspective (Nutley et al. 2007). Thus, decision-making is a somewhat artificial construct, combining the moment of commitment with all knowledge and action that may precede it (Langley et al. 1995).
Although instances of science application (e.g., times, places, actors involved) are rarely recorded, occasionally scientific evidence is documented and transcribed into decision-making models, frameworks, memos, meeting minutes, or policy itself (e.g., in justification statements, references, or technical appendices). For example, New Zealand’s bathing water regulations (New Zealand, n.d.) cite adaptation from the World Health Organization (WHO 1999) Annapolis Protocol, an innovative scientific consensus statement. Few, albeit valuable, academic explanations of science application experiences exist because the primary focus of literature outlets (and professional positions) is often science alone or policy alone. Rare efforts have taken deep dives into coalescing knowledge for sectors such as EHSE (e.g., Hering 2018).

Boundary Science Links Science to Policy and Programming

Actions at the interfaces among communities of experts and decision makers are sometimes referred to as boundary science, and good practices in boundary spanning are emerging (Bednarek et al. 2018). Because both scientists and policy makers have insufficient training, skills, and resources to work at this interface (Cash et al. 2003; Quevauviller 2010), knowledge brokers and public entrepreneurs have emerged. These formal or informal roles vary widely in scope and scale.
Knowledge brokers are people or groups familiar with more than one professional world (e.g., science, practice, policy) who find, assess, and interpret evidence; formulate recommendations; transfer information; facilitate interaction; and identify emerging research questions (Ward et al. 2009). These individuals or institutions are accountable to the operational norms of both scientists and decision makers (Cash et al. 2003). Some knowledge brokers have been practicing in US EHSE for decades, such as the National Academies of Sciences, Engineering, and Medicine (1863), Resources for the Future (1959), and the Southern California Coastal Water Research Project (SCCWRP) (1969).
An international example of knowledge brokers was explored by Brocklehurst (2013), who brought together senior advisers to finance ministers from six African countries in a Chatham House Rule focus group. The report describes ministers of finance keen to serve as custodians of the public purse and interested in evidence on value for money and objective comparison of policy options. Politicians and their advisers were exasperated by (rather than welcoming of) the lack of detail, specificity, and precision in flat briefings. Participants felt staff in sector ministries (e.g., water resources, rural development) failed to relate their concerns to the prevailing political context. Ministers were further frustrated by weak monitoring and evaluation of both service delivery and impact, and disliked ad hoc requests for last-minute briefings, rather than planned and timely dialogue.
The UN Special Rapporteur on the human rights to water and sanitation is an institutionalized, individual knowledge broker. Incumbent experts have produced many documents to bridge science and application among human rights and practice constituencies (Heller et al. 2020). These include careful reflection on contentious ideas (such as the right to free water, effects of megaprojects, affordability, and privatization of services) that facilitate the practical application of human rights principles. However, analysis of the results of country missions and associated reports concluded these communications alone were insufficient to trigger substantive (structural) change (Heller et al. 2020).
Some institutes and NGOs established to influence policy have declared missions as knowledge brokers. Three US-based examples are Water Advocates, WaSH Advocates, and Global Water 2020. Their shared characteristics include commitment to a finite time span and exit strategy (enabling them to focus on addressing issues rather than self-promotion); sufficient core funding (liberating them from competing demands for fundraising and the risk of pandering to donor interests); and engagement with diverse stakeholders across science application interfaces.
In contrast to knowledge brokers, public entrepreneurs are individuals or institutions (e.g., policy analysts, educators, authors, nonprofit or for-profit organizations, researchers, lobby groups, or think tanks) who generate, translate, and introduce new ideas into the public sector (Roberts and King 1991). These include professional associations, which often represent the interests of their members and develop and assert policy stances. For example, the International Water Association (IWA), responding to pending publication of a novel policy development [water safety plans (WSPs)] in the 2004 edition of the WHO Guidelines for Drinking Water Quality (WHO 2004), adopted an aligned policy position in its Bonn Charter for Safe Drinking Water (IWA 2004). Both organizations acted as public entrepreneurs in introducing a new idea. WHO managed the 10-year policy development initiative, involving more than 500 people (frequently knowledge brokers themselves) from approximately 100 countries, as well as other potential public entrepreneurs (e.g., World Plumbing Council, International Organization for Standardization). IWA, with membership classes for both individual professionals and water suppliers, translated it into a form that was more accessible and relevant to water supply organizations.
A further type of public entrepreneurship is publication of recommendations through articles or letters, typically by experts or authority figures. These appear in news periodicals and lay media formats (e.g., as an op-ed), as well as scientific journals. One example concerns whistleblowing by scientists and physicians about the highly publicized issue of lead in Flint, Michigan's drinking water (e.g., Hanna-Attisha 2019). A multiauthor academic example, Howard et al. (2020), explored COVID-19 and water, sanitation, and hygiene, suggesting policy adaptations and preparedness measures to respond to this and future pandemics.

Evidence-Based Movement Adds Structure at the Risk of Oversimplifying

Evidence-based policy, evidence-based practices, and EBDM (and their evidence-informed analogs) indicate policy and practice decisions informed by rigorous, empirical science. They benefited from the developing social sciences, such as public policy, economics, and sociology, in the 20th century (Amjad et al. 2015). In science, the evidence-based movement has roots in evidence-based medicine, with its emphasis on randomized controlled trials (RCTs) (a methodology aimed at controlling sources of confounding) in the 20th century, and growth of evidence-based ideals in the 1990s (CHSRF 2000). In policy, the movement influenced wider debate from the late 1990s, for instance, about how early childhood influences and interventions could alter later quality of life (Nutley et al. 2007). The evidence-based movement led to policy initiatives such as Sure Start in the UK in 1998, which provides government support for early education, childcare, health care, and families; and No Child Left Behind in the US in 2001, which applied standardized measurable goals to improve individual educational outcomes and provided for disadvantaged students. Mixed results from such attempts at policy construction demonstrate the potential harm of lip service to a largely unachievable reducibility of real-world issues to a singular evidence-based solution, rather than applying and updating policy strategies tailored to both evidence and context.
One outcome of the evidence-based movement was establishment of the Cochrane Collaboration in 1993, which sets guidelines for systematic review and synthesis of published RCTs. This global network of researchers, funders, practitioners, patients, and caregivers works to produce credible, accessible health information to support informed decision-making (Cochrane, n.d.). The Campbell Collaboration, formed in 1999, similarly synthesizes social and educational evidence to support improved decisions and programming (Campbell Collaboration, n.d.). These structured tools have been popularized for a specific set of problems (e.g., where high-quality research has been conducted in comparable settings but not synthesized) but are not applicable to every problem (Setty et al. 2019).
Organizations that adopted explicit missions around knowledge brokering include:
the Evidence Network, which coalesced in 1999 in Canada to develop media content on public policy topics and link journalists with nonpartisan, evidence-based information from experts;
the Research Unit for Research Utilisation, funded since 2001 by the UK’s Economic and Social Research Council;
the Coalition for Evidence-Based Policy, now part of the Laura and John Arnold Foundation, which was active in the US from 2001 to 2015; and
the Scholars Strategy Network, founded in 2011, which coordinates researchers to address public challenges and enhance evidence accessibility to professionals outside academia.
EBDM efforts seek to avoid policy or programming failures by combining professional expertise and the needs of clients with the best advice from high-quality research studies, rather than basing decisions on tradition, anecdote, assumption, or hypothetical reasoning (Greenhalgh et al. 2014). Logically, EBDM should minimize bias by applying germane scientific insights, making policies and their applications safer, more consistent, and more cost effective. Because evidence typically contributes to decision-making only partially and among other factors, some prefer the term evidence-informed (De Goede et al. 2012; Nutley et al. 2007).
Adopting EBDM aligns with the declared approaches of many EHSE decision makers globally, exemplified by many governments’ totemic assertions to follow the science in response to COVID-19. At the WHO, the role of evidence was brought to the fore around 1998, triggering progressive clarification of evidence requirements for developing public health guidance, supported by distinct organizational positions, divisions, and standards. Among international NGOs, World Vision has created an evidence and learning unit to strengthen monitoring and evaluation, research and learning, and knowledge management. Similarly, Plan International’s global strategy calls for high-quality, rigorous research and accountable, transparent, and evidence-based decision-making (Plan International, n.d.). In the US, the Foundations for Evidence-Based Policymaking Act was signed into law in 2019 to “improve the use of evidence and data to generate policies and inform programs in the federal government,” requiring all agencies “to develop evidence-based policy and evaluation plans” (USEPA, n.d.-a). These intentions respond to major shifts in public and political priorities (Vernon 2017).
While a stated commitment to EBDM is low in cost and effort and appeals to diverse stakeholders, its implementation may be complex and have unpredictable consequences.
Overall, understanding progress on and impacts of EBDM requires agreement on appropriate evaluation methods and terminology (Setty et al. 2019), and the process is as yet poorly characterized. Upon reflection, the effects of the evidence-based movement have been principally twofold: a critical reflection on quality of evidence and its synthesis in the science sphere; and a formalization of the already-implicit role of evidence in the decision-making sphere. In both cases, effects may arise more through reflection and peer pressure to adhere to modified professional norms than through formal processes and commitments.

Models for Evidence Uptake Demonstrate Plurality

With increased attention to EBDM, various models of the science application interface have emerged, often building on one another. Examples include the knowledge-driven model, the problem-solving model, the interactive model, the political/tactical model, and the enlightenment model (Young et al. 2002). In general, one-way, rational-linear models of research production and uptake have been criticized for overreliance on a positivist and objectivist epistemology, where science discovers a singular truth, which is used as the sole basis for decisions (Huberman 1994; Nutley et al. 2007). Post-positivist models instead incorporate more frequent, multidirectional, and complex interactions and feedback loops; recognize the importance of values alongside evidence; and highlight the necessity of an explicit paradigm to understand evidence. Postmodern frameworks acknowledge suspicion of reason and the existence of privileged and marginalized groups (Nutley et al. 2007).
Realism recognizes multiple interpretations of truth, seeking instead “what works, for whom, in what circumstances, and to what extent” (Groot et al. 2017). Although research institutions might be presumed to be the main purveyors of science, realist approaches recognize that all parties both produce and use evidence, and that direct interaction among actors is the basic vehicle for evidence cycling. Thus, Gupta (2014), for example, characterizes science application interactions as a ladder, based on their structure and continuity, with greater consensus and legitimacy toward the top of the ladder idealized as continuous participatory assessments, and interactions lacking centralized assessment or governance toward the bottom.
A constructive interpretation of these diverse models is that some are more useful in certain science application scenarios than others.
Examples reflecting linear models include the systems for deriving guideline values or standards for chemical contaminants of drinking water, wherein the procedures of WHO (2009) and USEPA (USEPA, n.d.-b) are codified to describe overall approach, grading of evidence, and technical derivation. Here, the benefit of agreeing to common rules exceeds the risk of lost control because stakeholders’ interests are advanced by incorporating evidence into regulation, deriving credibility from visible transparency and consistency, accelerating decision-making, and minimizing conflict by adopting explicit procedures.
Complex evidence uptake examples typically involve many stakeholders with potentially competing interests and values. Examples include the negotiation of international development policy, planning processes around major infrastructure (such as dams and transport), and intersectoral policies (such as decarbonizing energy production in response to climate change). The country missions carried out by UN special rapporteurs also reflect this model, involving diverse stakeholders with competing interests and imbalanced power. In the early years of the UN, development policy adoption appears to have been relatively straightforward, perhaps reflecting the few stakeholder groups (primarily national governments) and low risk associated with largely inconsequential development decade outcomes (Bartram et al. 2015), as well as the little evidence available to compete with other inputs to decision-making. Changes following adoption of the Millennium Declaration (2000) and the Millennium Development Goals (MDGs) (2001) reflected trends in many countries toward greater engagement in governance by civil society and private sector actors, more available evidence, and increasing impact of multilateral goals. Whereas the MDGs were developed by a small group of individuals loosely inspired by the Millennium Declaration in meetings at the UN headquarters, the UN organized a complex multistakeholder process to negotiate their successors—the Sustainable Development Goals (SDGs). A generalizable lesson from these UN examples concerns the value of deliberate, early efforts to identify and engage stakeholders.
Some examples that appear linear evolve to become complex. For example, in work with the international NGO World Vision, research into metal contamination of drinking water wells (Fisher et al. 2021) was initially conceived as a simple linear process (if problem detected, then evaluate and apply corrective and preventive measures). It evolved to become more complex and politically relevant because lead was widely detected. Other stakeholders became involved, including the governments of known-affected and potentially affected countries, other implementing agencies, and national and international regulators. Evidence was critical in the trigger and became the fulcrum for discussion, but the detail of the evidence was secondary to the negotiation of communications and roles. The evidence did not implicate any specific country or implementing organization, minimizing finger-pointing and fostering negotiation. In contrast, much commentary about the detection of arsenic in community wells in Bangladesh [arising from a well-intentioned national government well-drilling program supported by the United Nations Children's Fund (UNICEF)] was somewhat distracted by efforts to identify fault.
No single model is applicable to all cases; diverse models contribute to exploring and understanding different scenarios of science application, and all are simplifications of real-world processes. Linear models are readily applied in single, delegated authorities. Complex models reflect situations in which interactions among science, application, and politics are extensive and call for attention to diverging interests. These distinctions reflect in part the philosophical distinction of law (simplistically: rule following) from politics (simplistically: management of trade-offs and competing interests) (Gray 2009).

Diffusion of Innovation May Insufficiently Address Acute Needs

Diffusion of innovation theory [the science of how new ideas, technologies, and practices spread (Rogers 2003)] parallels science application. Its five process steps are knowledge, persuasion, decision, implementation, and confirmation (Nutley et al. 2007). Depending on when adopters of an innovation accomplish this process, they are categorized as innovators, early adopters, early majority, late majority, and laggards (Rogers 2003). Characteristics that influence uptake of an innovation include its relative advantage, compatibility with existing values and practices, simplicity and ease of use, trialability, and observable results (Robinson 2012). Borrowing from the Bass diffusion model, some adoption is initially driven by media reports and novel information access (among innovators), while most adopters (imitators) are driven by conversations with peers (Rogers 2003).
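The Bass dynamic described above can be sketched numerically. The following is a minimal illustration, not drawn from Rogers (2003); the parameter values for p (external/media influence on innovators) and q (peer imitation) are hypothetical, chosen only to show the characteristic S-shaped adoption curve.

```python
def bass_adoption(p=0.03, q=0.38, steps=30):
    """Simulate the cumulative adoption fraction F over discrete steps.

    Discrete form of dF/dt = (p + q*F) * (1 - F): the p term captures
    adoption driven by media and external information (innovators);
    the q*F term captures adoption driven by conversations with prior
    adopters (imitators). Parameter values are illustrative only.
    """
    F = 0.0
    trajectory = [F]
    for _ in range(steps):
        # Each step, the remaining non-adopters (1 - F) adopt at a rate
        # combining external influence (p) and peer influence (q * F).
        F = min(F + (p + q * F) * (1.0 - F), 1.0)
        trajectory.append(F)
    return trajectory

curve = bass_adoption()
```

Plotting `curve` against time yields the familiar S-curve: slow early growth among innovators, acceleration as imitation dominates, and saturation among late adopters and laggards.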
Modern implementation science distinguishes passive diffusion from active dissemination, which allocates specific effort and resources to encourage uptake of evidence-based practices (Setty et al. 2019).
The already-cited example of WHO and IWA's speedy dissemination of a policy initiative offers a prime EHSE example of active dissemination. WHO grounded the WSP innovation in familiar, established concepts [e.g., sanitary inspection, hazard analysis and critical control point (HACCP), failure mode analysis, continuous quality improvement]; involved many stakeholders in policy formulation; collaborated with innovators (e.g., Australia, Iceland) to document and build on their experiences; and fostered early adopters (e.g., United Kingdom, New Zealand). A decade after the innovation was cointroduced in 2004, "WSPs [were] being implemented to varying degrees in 93 countries representing every region of the world, with 30% of countries at an early adoption stage and others implementing on a national scale" (WHO 2017). Further, "46 countries report[ed] having policy or regulatory instruments in place that promote or require WSPs, and another 23 countries report[ed] that such instruments are under development." Factors associated with diffusion included political readiness and compatibility with cultural values.
Another example is household water treatment and safe storage (HWTS) (ensuring a household's ability to transport, handle, and treat its own drinking water). It has several beneficial characteristics that appeal to widespread values: it enables individual behaviors, can be targeted to populations in need, is rapidly deployable in emergencies, does not rely on expensive infrastructure, and is marketable. While HWTS aggregates several potentially competing technical approaches to household water purification (e.g., boiling, filtration, sunlight and ultraviolet irradiation, chlorine dosing), stakeholders cooperate through the International Network to promote household water treatment and safe storage, which serves as both a knowledge broker and public entrepreneur (WHO, n.d.). Early dissemination efforts emphasized potential for disease reduction; however, over time, evidence from longer-term and more rigorous studies questioned the benefits and sustainability of HWTS under normal circumstances (Hunter 2009). These examples show that implementation and dissemination situations are far from straightforward and benefit from active understanding of the complexity of evidence, contextual factors, actors, roles, and measures of success.

Engaged Science Remains Elusive and Unsupported

Several well-studied perspectives and approaches exist for academic engagement with communities (e.g., community-based participatory research) and peer groups (e.g., participatory action research) (Setty et al. 2019; Perkmann et al. 2021). This concept extends to research conducted by stakeholders outside of academia (e.g., continuous quality improvement in the private sector; governmental research institutes). In common, these perspectives and approaches emphasize the value of frequent, meaningful, sustained engagement between scientists and concerned stakeholders (through all stages of research) and the beneficial impact of co-construction on research relevance, conduct, and uptake (Cash et al. 2003; Gupta 2014; Setty et al. 2020). Engaged science incorporates the views of multiple parties in research agendas from the outset, preferably via joint, explicitly structured problem identification (Bryant et al. 2014; Viergever et al. 2010; Weichselgartner and Kasperson 2010). Some evidence suggests engagement benefits scientific productivity (Perkmann et al. 2021).
While engagement is intuitively attractive, it poses challenges in practice. Scientific neutrality may be perceived as incompatible with close stakeholder engagement, and alignment with one stakeholder or stakeholder group may alter credibility among others (Shields et al., forthcoming; Pielke 2007). Regrettably, research projects often take place in isolation from stakeholders, especially during phases such as proposal development and data analysis (Setty et al. 2020).
One example study of water, sanitation, and hygiene (WaSH) in selected Pacific Island countries was predicated on engaged research with communities seeking to improve their WaSH conditions (Barrington et al. 2016). Researchers made extensive and effective efforts to engage with communities identified by a local NGO from the earliest stages. Early interactions—potentially subject to courtesy bias—confirmed interest in WaSH improvements. As confidence grew, however, community members revealed additional and higher priorities elsewhere (e.g., building pathways, reducing unemployment, building a new church). While the research team endeavored to broker contact and encourage action by other agencies, ultimately the tension between a project focus preconceived in good faith and the revealed preferences of community stakeholders remained unresolved. This experience confirms the value of engagement in identifying valid priorities and suggests that true engagement fits poorly with short-term project-funding cycles.

What is Evidence?

At the heart of EBDM is the assumption that evidence is neutral and objective, exploiting the image of the objective scientist and appealing to those who wish to tie decision makers to the implications of known facts. These notions can fairly be questioned. Pielke (2007) suggests that applied scientists are rarely neutral on the issues they research. He proposes four stances that scientists adopt at science application interfaces: pure scientist, science arbiter, issue advocate, and honest broker of policy alternatives. The former two are disengaged from policy, the latter two engaged with it. Pielke does not rank or judge these, rather recommending honesty about one’s stance as a scientist. However, he argues that the stance of pure scientist is rare in practice and difficult to sustain, and that attempts to separate science and policy are naïve and unhelpful.
Evidence may be rejected by decision makers, especially if it challenges a preferred decision (Pielke 2007). Rejection can be through inaction despite evidence justifying intervention, or failure to cut popular but ineffective programming. One contemporary example is political failure to recognize the relationship between climate change and the frequency of wildfires, which reduce air quality and cause mortality (CDC 2020; BBC News 2020). Historically, recognition of the severe ecological effects of DDT (Carson 1962) brought widespread public support and led to strict controls, including designation as requiring prior informed consent (PIC) under the Rotterdam Convention and as a persistent organic pollutant (POP) under the Stockholm Convention. Notwithstanding DDT’s ecological impacts, it has only moderate direct adverse human health effects and substantial benefits in controlling insect vectors of disease. For a time, the conventions threatened to curtail the targeted use of a malaria preventive, especially affecting low-income countries with no realistic alternative and a high disease burden. The policy impasse met a successful, evidence-driven resolution: a DDT expert panel was established to assess the scientific, technical, environmental, and economic information for consideration by the Conference of the Parties. While the panel was ostensibly a science arbiter (providing an objective scientific response to a specified question), such roles are prone to issue advocacy (where the scientist aligns with, and seeks to advocate for, a decision outcome).
Some aspects of the contemporary scientific enterprise substantively mold the evidence available to decision makers. Scientific publishing entrenches bias because studies demonstrating significant effects are preferred to those with null findings. To help align intended study methods with reported outcomes in the field of medicine, the International Committee of Medical Journal Editors has, since 2005, required prospective trial registration as a condition of publication (ICMJE, n.d.). Its implementation remains imperfect—Loder et al. (2018) report “improperly registered trials are almost always published.” Methods registration is weakly developed in EHSE and could usefully be extended beyond trials involving human health outcomes. Increasing pressure from publishers, funders, nonprofits, peers, and others to engage in open science mechanisms may help develop and disseminate responses to these challenges (Setty et al. 2019).
Sponsorship of research by commercial entities was once widely criticized by the evidence-based movement, for example, for allocating excessive power to biased interests, such as drug and medical device industries who often control RCT specifications (Greenhalgh et al. 2014). However, conflicts of interest among other stakeholders received less attention, despite recognized problems, particularly for clinical trials funded by governments or foundations (Loder et al. 2018). EBDM recognizes and attempts to address such issues, in part by transparently documenting evidence quality and synthesis.

Evidence Hierarchies and Grading Approaches Continue to Evolve

Explicit grading criteria for assessing the quality of evidence from studies aid rules-based approaches to decision-making.
Study types have often been organized into a hierarchy, misleadingly represented by a pyramid. Such depictions undermine the value of complementary insights from different method types, and fail to convey the relationships and progressions among them as evidence accumulates (Setty et al. 2019). Recognizing that not all studies of the same nature should hold the same weight, the Grading of Recommendations Assessment, Development and Evaluation (GRADE) Working Group was formed in 2000 to address simplistic grading systems used in health care research. Its methods have since been adopted by more than 100 organizations globally, including WHO (Alonso-Coello et al. 2016) and the Cochrane Handbook for Systematic Reviews of Interventions (Higgins and Green 2011). This approach still reflects a dominant focus on study type. Beginning with four levels of evidence quality determined by study type, reviewers upgrade evidence (based on a large effect size, the influence of plausible confounding, and/or demonstration of a dose-response gradient) or downgrade evidence [responding to limitations in the design and implementation, ability to make direct inferences, unexplained heterogeneity or inconsistency in results, imprecision of results (e.g., wide confidence intervals), and the probability of publication bias] (Higgins and Green 2011). Another tool, adopted by the US National Toxicology Program for human and animal study quality evaluation (NTP 2015), probes more deeply into individual studies’ risk of bias (internal validity) based on 11 factors that might compromise the credibility of the link between exposure and outcome.
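The starting level and subsequent adjustments described above amount to simple bookkeeping over the listed factors; the judgment lies in deciding how many levels each factor is worth. A minimal sketch of that bookkeeping (the function name and numeric encoding are ours, not GRADE's, and real assessments weigh each factor qualitatively):

```python
LEVELS = ["very low", "low", "moderate", "high"]

def grade_certainty(study_type, downgrades=0, upgrades=0):
    """Schematic of GRADE bookkeeping: evidence starts at a level set by
    study type, then moves down for limitations (risk of bias, indirectness,
    inconsistency, imprecision, publication bias) and up for strengths
    (large effect, dose-response gradient, plausible confounding that would
    work against the observed effect)."""
    start = 3 if study_type == "randomized trial" else 1  # high vs. low
    level = max(0, min(3, start - downgrades + upgrades))
    return LEVELS[level]

# An observational study upgraded for a dose-response gradient:
grade_certainty("observational", upgrades=1)  # -> "moderate"
```

The clamping to the ends of the scale reflects that evidence cannot be rated above "high" or below "very low" no matter how many factors apply.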
Some international approaches to qualifying evidence for use in standard setting (analogous to GRADE) were formalized in the 2000s. For example, for chemicals in drinking water, WHO prefers human studies over animal studies (in principle, although high-quality human studies are few); and prefers peer-reviewed literature, including commercially sensitive literature after review by an international body such as the Joint Food and Agriculture Organization of the United Nations (FAO)/WHO Meeting on Pesticide Residues (WHO 2009). Uncertainty factors are applied where appropriate to account for inter- and intraspecies variation, study or database adequacy, and the nature and severity of health effects.
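The uncertainty-factor arithmetic behind a drinking-water guideline value can be sketched as follows. The numbers are purely illustrative, and the defaults (60 kg adult body weight, 2 L/day consumption, 20% of the tolerable intake allocated to drinking water) follow common WHO practice rather than any specific chemical assessment:

```python
def guideline_value(noael_mg_per_kg_day, uncertainty_factor,
                    body_weight_kg=60, allocation=0.2, consumption_l_day=2):
    """Illustrative WHO-style guideline value (mg/L) for a chemical in
    drinking water: a tolerable daily intake (TDI) is derived from a
    no-observed-adverse-effect level (NOAEL) divided by a composite
    uncertainty factor, then a fraction of the TDI is allocated to
    drinking water and scaled by body weight and daily consumption."""
    tdi = noael_mg_per_kg_day / uncertainty_factor  # mg/kg bw/day
    return tdi * body_weight_kg * allocation / consumption_l_day

# e.g., a NOAEL of 10 mg/kg/day from an animal study with a composite
# uncertainty factor of 1000 (10 interspecies x 10 intraspecies x 10
# for database limitations) yields roughly 0.06 mg/L:
gv = guideline_value(10, 1000)
```

The composite uncertainty factor is where the inter- and intraspecies variation and study adequacy concerns described above enter the calculation.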
Several constraints obstruct application of these types of study quality criteria to EHSE, and may impede use of the best-available evidence for decision-making. Even the basic assumption that RCTs are always superior to observational epidemiological studies for managing bias in environmental exposure studies is questioned (Steenland et al. 2020). Some evidence in EHSE falls outside the GRADE structure, and in some instances evidence from preferred study types is nonexistent and potentially ethically unattainable. For example, evidence for the derivation of standards for chemical contaminants of drinking water cannot be ethically attained through RCTs that expose human populations to known risk. Thus, natural experiments (unintentionally exposed human populations) and animal studies are relied on. Natural experiments are necessarily retrospective, so exposure is typically poorly characterized, identifying controls is challenging, and exposure may be through multiple routes, short term, or restricted to certain life stages (e.g., adults in the workplace). Laboratory animal experiments enable better exposure control but require extrapolation from nonhuman species.
Further, much important EHSE insight derives from qualitative and mixed methods research, which is often undervalued or explicitly excluded from evidence hierarchies. While the research method is commonly misconceived as the dominant element of rigor, different study types are appropriate for different research questions, and applied, qualitative, and translational research types can all be carried out rigorously (Setty et al. 2019). Cooper (2016) emphasizes the need for greater social science inputs, especially for more complex science applications, and quantitative and qualitative research perspectives are necessary and complementary in many EHSE domains. Research quality also depends on the actors’ breadth and depth of knowledge, the actors’ communication and coordination skills, and external (potentially uncontrollable) circumstances and events.
Using medical approaches to categorize EHSE evidence may discourage valuable scientific study designs, such as phenomenology and case studies, which elaborate on the meaning behind statistical relationships and identify counterexamples to generic knowledge, respectively. Post-positivism acknowledges that scientific methods can reduce, but not eliminate, the risk of bias. Uncritical application of evidence based on statistical significance or automated decision algorithms may miss nuance in real-world problems relevant to decision makers (Greenhalgh et al. 2014). Conversely, embracing mixed methods research and triangulating evidence types can illuminate the consequences of implicit method assumptions and increase robustness of findings.
The newer GRADE Evidence to Decision Framework (EtD) (Alonso-Coello et al. 2016), often carried out by an expert panel, systematically assesses evidence and the role it plays among other factors in decisions. Criteria, each backed by justification, account for factors external to the evidence, including priority of the problem, benefits and harms, certainty of the evidence, outcome importance, balance, resource use, equity, acceptability, and feasibility. The EtD’s limited applicability beyond clinical settings prompted a comparable exercise adapted to the diversity of evidence applicable to public health and EHSE problems: the WHO-INTEGRATE framework proposes a generic approach suitable for complex individual-, population-, and system-level health interventions (Stratil et al. 2020).
At the individual study level, the important factors in evidence quality have been standardized for only a minority of study types (Setty et al. 2020). Recommendations for method adherence and reporting rigor could be better developed (especially with audience needs in mind) and made accessible to both authors and reviewers involved in the publication process.

Evidence Synthesis Offers a Check on Research Directions

In some EHSE applications, evaluation of individual studies leads directly to decision-making. Continuing the example of norms for chemicals in drinking water, guideline development typically relies on determination of the most applicable (single) study (WHO 2009). However, decisions would preferably be supported by synthesized bodies of evidence. Congruency among multiple studies (where findings provide similar implications, or differences relate logically to contextual differences) is one of the nine principles laid out by Hill (1965) for determining whether environmental exposures cause human health effects. Synthesizing bodies of evidence and understanding their coherence (or otherwise) has increasing importance in the context of “fake news” and the long-established tendency of politicians to use outlier scientific voices to undermine scientific consensus (e.g., for anthropogenic climate change).
The systematic review and meta-analysis approaches adopted initially in medicine (Cochrane, n.d.) have also been embraced in EHSE. A systematic literature review uses explicit and comprehensive methods to formulate a question; identify, select, and critically appraise relevant research; and synthesize data (Khan et al. 2003). However, because of method diversity, meta-analysis is rarely feasible in EHSE.
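What a meta-analysis requires, and why method diversity blocks it, can be seen in the inverse-variance pooling at its core: every study must report the same effect measure on the same scale, with a standard error. A self-contained sketch using hypothetical effect estimates:

```python
import math

def fixed_effect_pool(effects, std_errs):
    """Fixed-effect inverse-variance pooling: each study's effect estimate
    is weighted by 1/SE^2, so more precise studies count for more. The
    prerequisite -- comparable effect measures across studies -- is exactly
    what heterogeneous EHSE methods rarely provide."""
    weights = [1 / se**2 for se in std_errs]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Three hypothetical log risk ratios with their standard errors:
est, se = fixed_effect_pool([-0.22, -0.35, -0.10], [0.10, 0.15, 0.08])
```

When studies measure different outcomes with different instruments in different units, these weights and sums have no common scale to operate on, and the synthesis must fall back on narrative or qualitative methods.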
Describing scientific uncertainty is similarly problematic in EHSE evidence synthesis, in part because it is often conflated with context dependency. The outcomes of many EHSE interventions depend on context. Water treatment effectiveness, for example, varies with water quality characteristics, temperature-dependent pathogen die-off, and the effect of preexisting health status on disease reduction. When studies from diverse contexts are aggregated, clear differences may be incorrectly labeled as uncertainty, casting doubt on the merit of the intervention.
Critiques of evidence synthesis further observe that the volume of evidence has become massive, even unmanageable. Young et al. (2002) state, “The rush of enthusiasm for evidence-based policy making overlooks the fact that a great deal of research has already been carried out on a wide range of social problems, providing policy makers with pointers that they rarely follow.” Potential implications are twofold. First, evidence alone is often insufficient to elicit decision-making, suggesting that most influence arises when evidence is available at an influential moment in time (Rose et al. 2020). Second, better use should be made of evidence, where available, rather than generating new evidence (Setty et al. 2019). Certainly, there are areas replete with duplicative studies (each adding little to the accumulated evidence base) due to conscious or unconscious repetition across geographies, research groups, literature outlets, languages, and disciplinary silos.
In contrast, sparse evidence exists in many critical areas. In some cases, only insufficient, poor-quality, imbalanced or potentially misleading evidence is available to inform decision-making. For example, recent reviews explored environmental health conditions during the emergency (Shackelford et al. 2020), transition (Cooper et al. 2021) and protracted (Behnke et al. 2020) phases of forced population displacement. Even though more than 70 million people are forcibly displaced at any given time, the reviews revealed a dearth of meaningful or high-quality evidence. Reviews of conditions in orphanages (Moffa et al. 2019a), prisons (Guo et al. 2019), and homeless shelters (Moffa et al. 2019b) pointed to similar conclusions.
Recognition (and, if possible, mapping) of evidence gaps is crucial to avoid placing priority on better-characterized issues through the streetlight effect (Setty et al. 2020; Whaley et al. 2020). For example, the Collaboration for Evidence-Based Healthcare and Public Health in Africa (CEBHA+) sets out a pragmatic approach to identify and fill evidence gaps that address policy and practice needs in resource-limited research settings (Rehfuess et al. 2016). Scoping reviews (or evidence maps) explore the breadth and depth of evidence where it is sparse (Pham et al. 2014; Whaley et al. 2020).

Evidence Should Be Tailored to Inform Decision-Making

Three primary evidence characteristics inform decision-making: credibility, relevance, and legitimacy (CRELE) (Cash et al. 2003). Continuing the example of WSPs, legitimacy was enhanced by hearing and accommodating stakeholder perspectives through convened consultations, international conferences, and deliberate engagement with knowledge brokers and public entrepreneurs. Credibility was enhanced by the extensive stakeholder inclusion process, while salience was enhanced by high-profile waterborne disease outbreaks such as those in Milwaukee, Wisconsin (1993) and Walkerton, Ontario, Canada (2000).
Although ideally evidence would have all three attributes, there may be trade-offs among them [e.g., time constraints may impact quality assurance measures (Sarkki et al. 2014)]. Sarkki et al. (2014) and others suggest tailoring evidence characteristics on a case-by-case basis to fit the needs at hand. Better defining user demand, understanding context, and identifying meaningful aspects of usefulness inform the process and goals of producing evidence (McNie 2007; van Kerkhoff and Pilbeam 2017; Setty et al. 2020; Whittington et al. 2020). Mitchell et al. (2006) emphasize the importance of regular communication and inclusive coproduction, discussed in the next section, in driving perceptions of these qualities.

Saliency Is Planned Timeliness

Saliency, or relevance, refers to timeliness (Cash et al. 2003; Sarkki et al. 2014). Because most applied research is reactive, evidence is likely to become available after decisions have been made. Achieving saliency requires a paradigm shift from rigid adherence to individual project funding cycles to broader program continuity and ongoing readiness.
One example of intentional research delivery timing comes from an initiative to deliver germane WaSH-related evidence at the time of SDG negotiation. Seven themes were laid out in an opinion article (Bartram 2008) and ultimately 18 related research papers were published in the lead-up to SDG adoption. At the time, each provided a substantive addition to a sparse evidence base during complex multistakeholder negotiations. It would be difficult to demonstrate these studies’ influence among other diverse efforts; however, all themes covered were reflected in the SDGs in some form. This experience illustrates that saliency does not demand scientific sophistication—several of the publications were scoping reviews or compilation and interpretation of existing publicly accessible data. Rather, matching the policy window observations of Rose et al. (2020), packaging the right information for the right audience at the right time tapped into previously inaccessible insights.
Further examples of planned salience come from deliberate attempts to garner foresight. Foresight is difficult and neither scientists nor decision makers are normally trained in it. However, some events may be individually unpredictable yet entirely foreseeable. Whether and when a waterborne disease outbreak will occur in a certain location is unpredictable; that such outbreaks will occur in general is foreseeable. What disease agent will cause the next global pandemic is unpredictable; that further global pandemics will occur and that the causes will be disproportionately zoonotic viruses is foreseeable. Perhaps the weak response to such general foresight arises in part because of its weak specific salience.
Recognition of salient issues, such as new policies yet to be implemented, can prompt reassessment of research needs and priorities. One example surveyed a network of senior and operational government, civil society, external support, private, and research and learning stakeholders participating in Sanitation and Water for All (SWA). The exercise sought to identify priority knowledge needs just after the SDGs were adopted, and to describe evidence-use challenges (Setty et al. 2020). Low confidence was observed concerning the target on managing untreated wastewater and fecal sludge, demonstrating an evidence demand. Respondents preferred multinational information sources and reported little direct demand for local university research. They also valued combined brief and lengthy information formats (e.g., summaries with attached technical explanation). Persons outside research and learning institutions more often perceived information sources as contradictory or unreliable, illustrating the need for a bridge between scientists and others.
Saliency benefits from timely identification of influencing opportunities. Furthermore, saliency may benefit from academic freedom, i.e., the ability to pursue exploratory research, regardless of specific topic funding. Such freedom may be eroded, for example, through highly constrained funding conditions. Interactions among researchers and funders or decision makers can help raise and gauge salient evidence needs in a way that balances applied needs with the basic scientific tenet of expanding knowledge. For example, the research development cycle at the Southern California Coastal Water Research Project (SCCWRP) is revisited quarterly by a 14-member board comprising regulatory and management agencies to ensure all parties’ interests are considered (SCCWRP 2020). While health sciences funding agencies, for example, increasingly support science application, a focus on communication at the end-of-grant stage may overlook engagement during critical initial stages (Smits and Denis 2014).

Legitimacy Reinforces Evidence Use

Legitimacy refers to adherence to the quality norms of the research community (Cash et al. 2003). Deficiencies in individual research outputs occur, for example, from inappropriate (even if popular) methods, ticking boxes on rigor checklists while losing sight of context and needed outcomes, or implementing methods poorly.
The HWTS example illustrates some issues concerning legitimacy. Most evidence purveyors acted as issue advocates and many were closely invested in a specific technical approach.
Journal publication is highly associated with legitimacy, and requirements for academic career progression set expectations for evidence supply, creating a high incentive to publish. Some poor-quality or unnecessarily repetitive research is therefore conducted and published, in part because voluntary peer review capacity (fundamental to the premise that peer review sorts for and ensures quality) is overwhelmed. The consequences can be substantive if research is improperly vetted, as with the Lancet’s retraction of an influential paper on chloroquine treatment for COVID-19 (Editors of the Lancet Group 2020) and consequently changed editorial policies. This example arose on an issue under public scrutiny (e.g., Davey 2020). In general, though, the scientific community is poor at collective self-regulation, in part because the economics of publication create no industry incentive to limit output and research supporters often expect publication as a tangible and countable deliverable.
As outlined previously, some study types have carried greater prestige and legitimacy. Often the “purer” study methods applied earlier in the science application pipeline persist in popularity even when other methods would provide more valuable information (Brown et al. 2017; Fig. 1 of Setty et al. 2019). In some instances, this is beneficial—for example, in continuing to explore the occurrence of a phenomenon under different conditions. In other instances, inappropriate study methods may be repeated despite known weaknesses and the opportunity cost. This is problematic if the number of papers creates the impression of a weight of evidence. Following popular research trends may also foster weak practices and poor self-reflection among researchers, and absorb financial, intellectual, and community resources in low-return efforts.
The disproportionate value placed on experimental research close to causal relationships and fundamental processes stands in bizarre contrast to the frequent demand for greater societal relevance. Even the highest-quality RCTs tell us little about how to implement the intervention. Generating evidence around close-to-application interventions tends to be viewed as a component of practice rather than a proper subject for research, despite its direct relevance to widespread costs and outcomes.

Credibility Relies on Transparency and Cooperation

Credibility relates to the quality of being trusted, convincing, or believable (Cash et al. 2003). In contrast to legitimacy, it primarily concerns the perspective of the decision maker; that is, why decision makers should have confidence in evidence to which they are asked to respond. Credibility may arise from biased or true perceptions around individual studies, bodies of evidence, researchers, research groups, institutions, publishers, or settings.
The media influence credibility; however, occasional overinterpretation, exaggeration of confidence in findings, and unreasonably extrapolated implications undermine credibility and leave some scientists wary of interactions (Nutley et al. 2007).
One effort to increase transparency, individual accountability, and implicitly credibility around applied WaSH research is the Nakuru Accord (Fig. 1). Some of the commitments it describes will prove challenging to career researchers. For example, Barrington et al. (2012) suggest “engineers are not neutral bystanders in these processes, though this is often how they see themselves. Rather, they are political actors whose decisions and actions can well determine the extent to which outcomes are likely to be socially and environmentally just.” Such pledges to openly learn from failures could aid deimplementation of unsuccessful initiatives and thereby accelerate progress in EHSE. Similarly, conflict of interest training and reporting, ethics standards and procedures, and open science initiatives are growing in prominence and gradually shifting professional norms.
Fig. 1. Nakuru Accord: Failing better in the WaSH sector. (Reprinted with permission from University of Leeds, n.d.)
Scientists tend to discuss uncertainty extensively (e.g., EFSA et al. 2019), in contrast to the confidence expected of decision makers. Van den Hove (2007) argues that transparency about limitations does not weaken science, but rather reinforces its quality. Communicating scientific knowledge should “systematically include reflective information on boundaries, uncertainties, indeterminacy, and ambiguity, as well as acknowledgment of ignorance and of the irreducible plurality of valid standpoints” (van den Hove 2007).
Much credibility arises from early and meaningful engagement. Efforts to further build credibility would benefit from researcher training in communication and knowledge brokering. This contrasts with widespread stereotypes, especially in EHSE, that laud the pure scientist and undervalue or criticize the stance of issue advocate (Pielke 2007).

Good Practices at the Science Application Interface

Three functions support effective science application communications: convening, translation, and mediation (Cash et al. 2003). We describe convening, rather than communication, to emphasize that all three functions contribute to communication. All are best understood within the context of the issue at hand. Stakeholder mapping exercises (e.g., on a power and interest grid) can clarify which actors influence a given topic, while description of context [geographical, epidemiological, sociocultural, socioeconomic, ethical, legal, political (Pfadenhauer et al. 2017)] can be elicited via stakeholder interviews. Van Kerkhoff and Pilbeam (2017), for example, recommend targeted inquiry into the sociopolitical context that constitutes “knowledge governance.”
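A power-and-interest grid is straightforward to operationalize. The sketch below uses hypothetical actors and 0-1 scores, with quadrant labels drawn from common stakeholder-analysis usage rather than from the cited sources:

```python
def quadrant(power, interest, threshold=0.5):
    """Classify a stakeholder on a power-and-interest grid (scores on a
    0-1 scale). High power and high interest calls for the closest
    engagement; low scores on both call only for monitoring."""
    if power >= threshold:
        return "manage closely" if interest >= threshold else "keep satisfied"
    return "keep informed" if interest >= threshold else "monitor"

# Hypothetical actors for a drinking-water intervention, scored as
# (power, interest) pairs from stakeholder interviews:
stakeholders = {
    "national regulator": (0.9, 0.8),
    "donor agency": (0.8, 0.3),
    "community group": (0.2, 0.9),
}
grid = {name: quadrant(p, i) for name, (p, i) in stakeholders.items()}
```

In practice the scores come from elicitation rather than measurement, so the grid is a conversation aid for the convening and mediation functions rather than a precise instrument.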
Early and sustained science engagement with actors (ideally before, during, after, and between project cycles) promotes these functions. Cash et al. (2003) suggest “active, iterative, and inclusive” modes of communication, while Gupta (2014) similarly recommends “formalized, centralized, continuously structured, participatory” evidence assessment. Ongoing communication between scientists and decision makers should rely on multidirectional information exchange rather than one-way reporting (Roux et al. 2006). Examples of actions that could reinforce good evidence-based practices are outlined in the Appendix. Recognizing and accepting differences in professional norms, limitations, and realities, as raised by Quevauviller (2010), is a practical step in overcoming their effects (Cairney 2016).

Convening Brings All Stakeholders to the Table

General goodwill and interpersonal connections between scientists and decision makers can be naturally strong (e.g., facilitated by neighboring offices and frequent interaction), nonexistent, or contentious.
Convening encourages direct interaction between researchers and practitioners, which has been shown to enhance research use even in an antagonistic environment (Nutley et al. 2007), especially given strong leadership and coherence, a supportive political and social context, and responsiveness to the needs of both parties. It promotes meaningful, shared decisions, which are at the root of the evidence-based philosophy (Greenhalgh et al. 2014) and form the foundations of engaged science.
Convening implies action by individuals and institutions, such as a knowledge broker, alliance, or partnership network, who have the credibility and means to bring together others who would not spontaneously or voluntarily meet otherwise. It typically seeks to proactively enhance mutual understanding and recognition of counterparts’ valid viewpoints to prevent more fractious interactions (Quevauviller 2010; Cairney 2016).
Some EHSE conferences (e.g., Stockholm International Water Institute’s Stockholm Water Week and the Water and Health Conference of the University of North Carolina at Chapel Hill) have evolved or were designed, respectively, to include platforms (e.g., side events or workshops) for convening by participants who might not otherwise be able to meet in the same time and place. Face-to-face interactions are the most effective at encouraging research use (Nutley et al. 2007). However, they are constrained by time and travel costs and other restrictions, such as those arising from the COVID-19 pandemic. These practical considerations exacerbate and perpetuate power imbalances (e.g., between the global north and global south).
Some UN entities (such as WHO and UNICEF) use their formal convening powers frequently. This includes both technical-normative convening of scientists (e.g., WHO initiative on emerging issues in water and infectious disease; WHO-UNICEF advisory group on SDG monitoring of water, sanitation, and hygiene) and political intergovernmental convening (e.g., the intergovernmental European Environment and Health conferences leading to the 1999 European Protocol on Water and Health, which brought together policy makers and scientists from multiple sectors and promotes evidence-based national target setting to meet the legal requirements of the protocol).
Much convening involves scientists in the stance Pielke (2007) characterizes as science arbiter (i.e., a responsive resource ready to answer factual questions conceived by the decision maker, but with no influence over those questions and no interaction with stakeholders). Less frequent but beneficial in some instances is Pielke’s honest broker of policy alternatives (increasing and characterizing the range of options but leaving the selection among them to the decision maker). Both roles are frequently played by groups, for example, in the form of expert panels or science advisory boards. Because convening typically involves bringing together dissimilar stakeholders, power balance is important (SPLASH, n.d.-a). In practice, the voice of presumed eventual beneficiary populations is often weak or absent.

Translation Brings All Stakeholders into the Discussion

Translation concerns rendering science into the vocabulary and modes of communication of other stakeholders, including recognition that some characteristics, findings, and implications differ in importance among groups. Cash et al. (2003) suggest “mutual understanding between experts and decision makers is often hindered by jargon, language, experiences, and presumptions about what constitutes persuasive argument.” Quevauviller (2010) noted differences between the norms of science and policy, although generalizations do not apply to every individual and norms of practice evolve. These differences include pertinent aims, time scales, people, communication, success measures, evidence sources, quality control mechanisms, information synthesis approaches, and views of interaction.
Translation is needed throughout the research process. The goal is not one-way transfer but mutual comprehension (Cash et al. 2003). Both active listening and information delivery are crucial—for example, scientists may craft a research presentation or report without first gathering information about the audience and what they would like to know, as described by Brocklehurst (2013). Translation is impossible without knowledge of the language of the intended stakeholder recipient, and no single translation is appropriate for all audiences. Structured standards for involvement of stakeholders throughout the research process can help ensure communication needs are discussed to steer effective translation (Weichselgartner and Kasperson 2010).
Translation is challenging in part because most individuals and organizations work within discipline or subject boundaries, communicating with others who share similar vocabulary and sometimes values. On their own, they are therefore ill-suited to the translation function, which is often associated with conveners and knowledge brokers. Translation is a major function for some knowledge brokers; for example, ASCE’s report card for America’s infrastructure adopts the familiar format of a school report card with assigned letter grades (ASCE, n.d.), while the Scholars Strategy Network requires members to develop op-eds in their area of expertise. In international development, foreign consultants often translate and adapt international policy and guidelines for national and local governments. These examples typify translation as recipient-focused dissemination through multiple channels and formats with commitment to stakeholder engagement.
Regrettably, much applied science communication begins with already-secured findings. One checklist to aid research translation suggests: adopting a strategic approach to dissemination and reviewing existing organizational practice; defining target users, working in collaboration with in-country partners; undertaking a user information needs analysis; ensuring a viable dissemination strategy through planned activities at all stages of the project and beyond; and planning to monitor and evaluate dissemination activities and outcomes (SPLASH, n.d.-b).
The perspective of a scientist preparing a paper or report differs substantially from that of those reading and considering applying the evidence. Common standards, criteria, checklists, or methods for reporting, such as the message box developed by COMPASS, could help to develop research communication skills (COMPASS 2020). Turnhout (2018) recommends repackaging knowledge using modern terms to fit categories considered politically salient. Information can usefully be packaged in multiple formats, such as policy briefs, visual presentations, and technical addenda, that directly reflect the requests of end users (Gagliardi et al. 2015; Setty et al. 2020).

Mediation Makes Difficult Consensus Possible

Mediation involves active rather than passive resolution of conflicts, stalemates, and other issues that impede information flow between experts and decision makers (Cash et al. 2003). Such conflicts arise from trade-offs among saliency, legitimacy, and credibility, and competing priorities beyond the nature of evidence available. Constituency silos may lead to misunderstandings and friction (Roux et al. 2006). The skills required for mediation (e.g., moderation of group forums and knowledge-building exercises, active listening, cognitive interviewing, summarization of peer review feedback, diplomacy) differ from those needed for scientific discovery, and may require targeted professional development.
Mediation is especially critical in values-dominated, politicized decision-making, such as decisions about large dam construction—a touchstone for the early environmental movement. In this case an attempt at evidence-based resolution—the World Commission on Dams—failed, likely owing to its low credibility and complicated recommendations, slowing improvement in the massive undersupply of water storage in low-income countries. In the negotiation of the Nukus Declaration on Sustainable Development Issues in the Aral Sea Basin, signed by the Central Asian heads of state in 1995, shuttle diplomacy involved individuals going between and reporting back to different stakeholder groups to achieve consensus. Such mediators translate feedback and interpret must-have versus nice-to-have requests to help reach consensus. In some instances, as here, mediation takes place rapidly and intensively (in this case, over a single night); in others, it occurs sporadically over an extended time.

Tailoring Guidance to Environmental Health Science and Engineering

Several characteristics of EHSE create specific challenges for science application: First, many EHSE decisions reflect Kant’s assertion, “it is often necessary to make a decision on the basis of knowledge sufficient for action but insufficient to satisfy the intellect” (Kant, n.d.). This is especially germane to EHSE, where weak and debated evidence may delay decision-making about societally critical issues of health, sustainability, welfare, and equality. Scientists are confronted with the conundrum of arguing for application of evidence they concurrently argue is insufficient, while decision makers need to justify action without a clear evidence base. Some policy tools that might assist, such as the precautionary and polluter pays principles, are inconsistently applied, in part based on variations in cultural values. An example comes from international development policy. For an indicator to be adopted under the 2001 MDGs, it was required that data be available from the back-dated baseline of 1990. Similarly, in SDG negotiations, targets had to be associated with indicators backed by substantive global data. These requirements favored conservatism with consequences for implementation policy and approaches. The MDG indicators were criticized accordingly (Bartram et al. 2014) and the SDGs may suffer similarly (Bartram et al. 2018).
Second, Rehfuess and Bartram (2014) conceived intervention effectiveness in EHSE as complex, influenced by five layers—direct (intrinsic) impact, user compliance, delivery, programming, and policy measures—and argued the “multi-component, multi-sectoral nature of most environmental health interventions results in a complex relationship.” They contrasted EHSE interventions with clinical interventions, which typically have few stakeholders and short, direct causal pathways. In EHSE, examining distal elements is as important as assessing direct impact (Rychetnik et al. 2004). Rehfuess and Bartram (2014) concluded, “An analysis limited to any single layer is likely to be misleading, as actual impact may be substantially greater or lesser. Furthermore, several phenomena of environmental health interventions, such as sustainability, are the result of interactions and feedback loops between people, intervention components and [context].” Arguments for the complexity of EHSE have also been made by Mara (2006) and by Gelting et al. (2019). The WHO-INTEGRATE framework would support EHSE guideline development from a complexity perspective, notably in relation to public health interventions, which are deeply value-laden (Stratil et al. 2020).
Third, of special interest to EHSE, the professional environmental arena is often conflated with environmentalism among the general population, and environmental science represents a relatively polarizing political topic compared to issues such as clinical health care. Scientific research has always met business and other interests in the political process, and it confronts “alternative facts” in modern post-truth society (Vernon 2017). Research evidence has weak influence and is less likely to be used when alternative ideology, interests, and information sources align (Nutley et al. 2007). While many industries have trade associations or unions with strong lobbying voices, the hesitancy of environmental scientists to organize or present unified statements undermines the role of applied science, especially where research funding is politically determined.
Finally, critical reflection on international perspectives has special relevance in EHSE because of foreign aid transfers, the frequency with which scientists and professionals from high-income countries study issues or implement projects abroad, reliance on international markets for evidence and services, and cross-cutting policy and governance (e.g., international conventions). Transferability of EHSE insights is often limited by prior conditions, such as endemic health status and prevailing disease transmission, the prior state of conditions and services, financing availability and restrictions, and governance accountability. A sparse evidence base particularly affects low- and middle-income countries that may lack long-term investment in (or external recognition of) higher education or research and development infrastructure. Communicating across time zones, languages, and diverse professional and cultural backgrounds heightens the challenges of science application. Further, skewed power relationships between globally northern and globally southern partners, as well as gender and race inequalities, are reflected in imbalanced authorship and representation in science and merit intentional balancing (SPLASH, n.d.-a; Kolsky, n.d.; Perkmann et al. 2021).
These EHSE-specific challenges highlight the value of system-level thinking (Galea et al. 2010; Hering 2018) and engagement, using both natural (hard) and social sciences (Cooper 2016). They reinforce the need for improved practices, including evidence mapping, synthesis and triangulation, exploratory research, cultural humility, engagement with other disciplines and sectors, tools and methods adapted to EHSE realities, and demand-elicitation exercises involving diverse stakeholders.
The EHSE professions could benefit from harvesting experience and insights from other domains, such as education, medicine, and social services, in defining a body of knowledge on effective research use and implementation practices (Setty et al. 2019). This might include adaptation of terminology, relevant actors, time scales, or criteria from existing science application tools, models, and approaches. Cross-pollinating curricula that are delivered in silos and offering continued education and training opportunities to a broad range of professional stakeholders could enhance knowledge and awareness of good science application practices (Setty et al. 2020).

Recommendations and Conclusions

Because EHSE is an applied field with substantive implications for human and planetary well-being, efficient science application would enhance the societal benefits that justify the research and implementation endeavors. Great scope exists to enhance the beneficial impact of EHSE, capitalize on the efforts of scientists and decision makers, and increase accountability. Promoting actionable change requires reflection on professional norms, with implications for both producers and consumers of evidence. We make five cross-cutting recommendations:
1.
Incorporate teaching about science application and effective interprofessional engagement into higher and professional education. In already-crowded curricula, arguing for a special place for soft skills is an uphill battle. However, applied science in general and EHSE in particular are purpose-bound to the science application interface. Continued professional development could maintain and enhance targeted knowledge brokering skills. Such education would benefit researchers and decision makers as well as the ultimate beneficiaries of EHSE.
2.
Reform science funding and reward mechanisms to ensure stakeholder engagement throughout and beyond the project cycle. Applied science funding—with short-term, time-bound, predefined scope proposals—often contrasts sharply with the ideal of ongoing, iterative engagement with stakeholders, especially in critical early project proposal stages. This impedes relevance and later uptake of findings. Responsibility is shared among funding entities (committed to time-bound completion rather than impact), university administrations (arguably underattentive to exploratory and applied research), and researchers (whose career progression norms facilitate neither sustained engagement before and after project timeframes nor engagement in knowledge application). While many funding agencies ostensibly or practically favor stakeholder engagement, and some support its extension (e.g., Bednarek et al. 2018), systematic engagement beyond project cycles and boundary specialists remains elusive (e.g., Smits and Denis 2014).
3.
Support increased coherence and synthesis in the body of research efforts and evidence. Professional associations and publishing houses could drive improved study quality and science synthesis by adopting explicit guidance and standards [e.g., Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and Consolidated Standards of Reporting Trials (CONSORT)] where applicable. Adhering to common terminology would aid these efforts (Whaley et al. 2020). While the medical approach to trial registration is not readily transferable to the breadth of study types and purposes in EHSE, the underlying notion of open science and matching a priori methods to research outcomes is highly relevant. Developing, adopting, and/or applying guidelines would require extensive cooperation among EHSE professionals and publishers, but could clarify applicability of systematic review and evidence grading norms to the specific challenges of EHSE. Other mechanisms include encouraging and committing to publish properly conducted studies that demonstrate null outcomes, welcoming (e.g., by identifying a pool of reviewers for) mixed and translational research methods, and expanding journal disciplinary boundaries.
4.
Commit to common measures of rigor and objective description of quality in research. Ethics demand responsible use of finite funding and scarce resources such as stakeholder time. Professional societies could play stronger roles in promoting rigorous research and in conveying the results and implications of that research (Scott et al. 2008), combining the functions of translation and convening. Because EHSE professionals are not unified by professional membership (e.g., having diverse educations, ranging from ecology to chemical engineering to anthropology to public health to law to economics), associations might extend engagement beyond their conventional professional boundaries and backgrounds. By actively engaging decision makers and providing information on pressing policy issues, groups such as ASCE could enhance their role as brokers of reliable, unbiased information. Developing a critical knowledge gap elicitation role could help reconcile the supply of and demand for evidence, and support improved decisions, about how science itself is organized (Sarewitz and Pielke 2007).
5.
Increase research into science application methods and outcomes to optimize approaches and maximize beneficial impact. The evidence base for using evidence in a structured way to achieve good policy and practice outcomes is sparse, although more is known about the process of research use (Nutley et al. 2007; Verboom et al. 2016). Further development will require scientists and policy makers to question established practices, “from modes of research training to lines of accountability, from funding practices to assumed professional omniscience, from the relative value of different publication modes to the inviolability of peer review” (Young et al. 2002). Specific research priorities include the conduct and efficacy of virtual convenings (of increasing relevance as study teams and stakeholders become more numerous and diverse globally, and as new professional norms become established during the COVID-19 pandemic) and clarifying the influential factors in effective knowledge brokering (Ward et al. 2009).

Appendix. Examples of Practical Actions to Bolster Science Application

Examples of known enabling factors for the use of science in public services (adapted from Nutley et al. 2007) are given in Table 1, with an illustration of how each could manifest in EHSE science application. These factors might have synergistic or antagonistic relationships, and it would likely be difficult to tease apart their individual contribution to evidence uptake in any given situation. Because perspectives differ widely, estimations of the impact of research on decision-making are nearly always subjective (Nutley et al. 2007). Further, norms of practice and enabling factors are likely to evolve over time.
Table 1. Examples of enabling factors in science application for environmental engineering and health researchers, practitioners, and policy makers
Enabler(a): Example pathway to promote science application

Nature of research
- High-quality, credible source: Researchers connected with statistical support specialists
- Clear and uncontested findings: Summary research briefs explain uncertainties and reasons for differences across studies
- Commissioned, or with high-level political support: Regulators play an active role in research solicitation and dissemination
- Aligned with local priorities, needs, and contexts: Stakeholders invited to review interim products throughout research process
- Timely and relevant to policy/practice requirements: Usefulness specifications negotiated at project outset and revisited when changes occur
- User-friendly (concise, jargon-free, visually appealing): Researchers connected with editing/communication support specialists

Personal characteristics of researchers and users
- Policy makers/practitioners with higher levels of research education or experience: Policy and practice entities actively recruit personnel with some scientific experience
- Skill at interpreting and appraising research: Continued on-the-job training deals with current issues in interpreting and evaluating information
- General goodwill toward research: Researchers and nonresearchers take time to understand each other’s cultures and ways of working
- Researchers with greater knowledge/skills in dissemination and research use activities: Research institutions actively recruit personnel with some policy or practice experience

Links between research and users
- Policy makers/practitioners have access to research: Periodic newsletters or reports summarize recent studies and how to access them
- Knowledge brokers (individuals or agencies) act as a bridge: Organizations identify their functional research arms or research champions and create formal engagement mechanisms
- Direct, face-to-face, two-way exchanges: Regular events bring together communities of researchers, end users, and knowledge brokers

Policy context for research use
- Alignment with current ideology and individual/agency interests: Messages are reframed for the intended audience, emphasizing common values
- Findings fit in existing ways of thinking/acting or other information in policy environment: Messages are reframed for the intended audience, emphasizing areas of concordance and constructive recommendations
- Political systems are open: Governments are held to transparency and accountability measures
- Researchers and policy makers come into contact: Scientific meeting organizers issue invitations to policy makers and vice versa
- Broad support for evidence use in organizational cultures: Publication of the evidence base for policy decisions is required

Practice context for research use
- Time to read research: Work schedules accommodate periodic retreats, brief sabbaticals, or conference attendance to encourage reflection and learning
- Autonomy to implement findings: Formal partnerships encourage individuals or organizations from different silos to work together, especially at a leadership level
- Financial, administrative, and personal support: Employees foster relationships with others from similar organizations who can offer peer support
- Cultural openness to research and its use: Employees are trained on how to access and use common evidence-based decision support tools

Research context for research use
- Incentives or rewards for engaging in dissemination and research use activities: Information on committees joined, meetings attended, and such is recorded and incorporated into salary, advancement, or bonus structure
- Value on user-friendly research outputs (rather than only academic journal publications): Contributions to gray literature are valued alongside peer-reviewed publications in employee evaluations
- Time and financial resources for research use activities: Travel funding and time for post-project follow-up activities are written into project proposals
- Attitude that dissemination is part of research role: Indicators for outreach added to performance evaluations, funding, and publication criteria

(a) Data adapted from Nutley et al. (2007).

Data Availability Statement

No data, models, or code were generated or used during the study.

Acknowledgments

We thank Greg Allgood, Urooj Amjad, Darcy Anderson, Dani Barrington, Joseph Bartram, Felix Dodds, David Douglas, Jacqueline MacDonald Gibson, Bruce Gordon, Pete Kolsky, Daniele Lantagne, Oliver Schmoll, Steve Weisberg, and Maged Younes for their thoughtful comments on drafts or partial drafts of this manuscript. The authors contributed equally to this study.

Disclaimer

The views expressed here do not necessarily reflect those of our present or former employers.

References

Alonso-Coello, P., et al. 2016. “GRADE evidence to decision (EtD) frameworks: A systematic and transparent approach to making well informed healthcare choices. 2: Clinical practice guidelines.” BMJ 353 (Jun): i2089. https://doi.org/10.1136/bmj.i2089.
Amjad, U., E. Ojomo, K. Downs, R. Cronk, and J. Bartram. 2015. “Rethinking sustainability, scaling up, and enabling environment: A framework for their implementation in drinking water supply.” Water 7 (4): 1497–1514. https://doi.org/10.3390/w7041497.
Anderson, L. 2006. “Analytic autoethnography.” J. Contemp. Ethnography 35 (4): 373–395. https://doi.org/10.1177/0891241605280449.
ASCE. n.d. “Report card history.” Accessed October 12, 2020. https://www.infrastructurereportcard.org/making-the-grade/report-card-history/.
Barrington, D. J., S. Dobbs, and D. I. Loden. 2012. “Social and environmental justice for communities of the Mekong River.” Int. J. Eng. Social Justice Peace 1 (1): 31–49. https://doi.org/10.24908/ijesjp.v1i1.3515.
Barrington, D. J., R. T. Souter, S. Sridharan, K. F. Shields, S. G. Saunders, and J. K. Bartram. 2017. “Sanitation marketing: A systematic review and theoretical critique using the capability approach.” Social Sci. Med. 194 (Dec): 128–134. https://doi.org/10.1016/j.socscimed.2017.10.021.
Barrington, D. J., S. Sridharan, S. G. Saunders, R. T. Souter, J. Bartram, K. F. Shields, S. Meo, A. Kearton, and R. K. Hughes. 2016. “Improving community health through marketing exchanges: Insights from a participatory action research study on water, sanitation, and hygiene in three Melanesian countries.” Social Sci. Med. 171 (Dec): 84–93. https://doi.org/10.1016/j.socscimed.2016.11.003.
Bartram, J. 2008. “Improving on haves and have-nots.” Nature 452 (7185): 283–284. https://doi.org/10.1038/452283a.
Bartram, J., C. Brocklehurst, D. Bradley, M. Muller, and B. Evans. 2018. “Policy review of the means of implementation targets and indicators for the sustainable development goal for water and sanitation.” npj Clean Water 1 (1): 1–5. https://doi.org/10.1038/s41545-018-0003-0.
Bartram, J., C. Brocklehurst, M. B. Fisher, R. Luyendijk, R. Hossain, T. Wardlaw, and B. Gordon. 2014. “Global monitoring of water supply and sanitation: History, methods and future challenges.” Int. J. Environ. Res. Public Health 11 (8): 8137–8165. https://doi.org/10.3390/ijerph110808137.
Bartram, J., G. Kayser, B. Gordon, and F. Dodds. 2015. “International policy.” Chap. 43 in Routledge handbook of water and health, edited by J. Bartram, 433–446. London: Routledge.
BBC News. 2020. “US West Coast fires: Is Trump right to blame forest management?” Accessed May 23, 2021. https://www.bbc.com/news/world-us-canada-46183690.
Bednarek, A. T., et al. 2018. “Boundary spanning at the science–policy interface: The practitioners’ perspectives.” Sustainability Sci. 13 (4): 1175–1183. https://doi.org/10.1007/s11625-018-0550-9.
Behnke, N., R. Cronk, B. Banner, B. Cooper, R. Tu, L. Heller, and J. Bartram. 2020. “Environmental health conditions in protracted displacement: A systematic scoping review.” Sci. Total Environ. 726 (Jul): 138234. https://doi.org/10.1016/j.scitotenv.2020.138234.
Brocklehurst, C. 2013. “Outcomes of a meeting of senior finance ministry officials to discuss decision-making for WaSH.” Accessed May 23, 2021. https://waterinstitute.unc.edu/files/2014/11/Finance-Ministry-Decision-Making-for-WaSH_Policy-Brief.pdf.
Brown, C. H., et al. 2017. “An overview of research and evaluation designs for dissemination and implementation.” Annu. Rev. Public Health 38 (Mar): 1–22. https://doi.org/10.1146/annurev-publhealth-031816-044215.
Bryant, J., R. Sanson-Fisher, J. Walsh, and J. Stewart. 2014. “Health research priority setting in selected high income countries: A narrative review of methods used and recommendations for future practice.” Cost Eff. Resour. Allocation 12 (1): 23. https://doi.org/10.1186/1478-7547-12-23.
Cairney, P. 2016. “The politics of evidence-based policymaking.” Accessed May 23, 2021. https://www.theguardian.com/science/political-science/2016/mar/10/the-politics-of-evidence-based-policymaking.
Campbell Collaboration. n.d. “Campbell collaboration.” Accessed May 23, 2021. https://campbellcollaboration.org/.
Carson, R. 1962. Silent spring. Boston: Houghton Mifflin.
Cash, D. W., W. C. Clark, F. Alcock, N. M. Dickson, N. Eckley, D. H. Guston, J. Jäger, and R. B. Mitchell. 2003. “Knowledge systems for sustainable development.” Proc. Natl. Acad. Sci. 100 (14): 8086–8091. https://doi.org/10.1073/pnas.1231332100.
CDC (Centers for Disease Control and Prevention). 2020. “Climate and health: Wildfires.” Accessed May 23, 2021. https://www.cdc.gov/climateandhealth/effects/wildfires.htm.
CHSRF (Canadian Health Services Research Foundation). 2000. Health services research and evidence-based decision making. Ottawa: CHSRF.
Cochrane. n.d. “About us.” Accessed May 23, 2021. http://www.cochrane.org/about-us.
COMPASS. 2020. “The message box.” Accessed May 23, 2021. https://www.compassscicomm.org/leadership-development/the-message-box/.
Cooper, A. C. G. 2016. “Exploring the scope of science advice: Social sciences in the UK government.” Palgrave Commun. 2 (1): 1–9. https://doi.org/10.1057/palcomms.2016.44.
Cooper, B., N. L. Behnke, R. Cronk, C. Anthonj, B. B. Shackelford, R. Tu, and J. Bartram. 2021. “Environmental health conditions in the transitional stage of forcible displacement: A systematic scoping review.” Sci. Total Environ. 762 (Mar): 143136. https://doi.org/10.1016/j.scitotenv.2020.143136.
Davey, M. 2020. “The Lancet changes editorial policy after hydroxychloroquine Covid study retraction.” Accessed October 15, 2020. https://www.theguardian.com/world/2020/sep/22/the-lancet-reforms-editorial-policy-after-hydroxychloroquine-covid-study-retraction?CMP=Share_AndroidApp_Other.
De Goede, J., M. van Bon-Martens, J. J. Mathijssen, K. Putters, and H. Van Oers. 2012. “Looking for interaction: Quantitative measurement of research utilization by Dutch local health officials.” Health Res. Policy Syst. 10 (1): 9. https://doi.org/10.1186/1478-4505-10-9.
Editors of the Lancet Group. 2020. “Learning from a retraction.” Lancet 396 (10257): P1056. https://doi.org/10.1016/S0140-6736(20)31958-9.
EFSA (European Food Safety Authority), et al. 2019. “Guidance on communication of uncertainty in scientific assessments.” EFSA J. 17 (1): e05520. https://doi.org/10.2903/j.efsa.2019.5520.
Fisher, M. B., A. Z. Guo, J. W. Tracy, S. K. Prasad, R. D. Cronk, E. G. Browning, K. R. Liang, E. R. Kelly, and J. K. Bartram. 2021. “Occurrence of lead and other toxic metals derived from drinking-water systems in three West African countries.” Environ. Health Perspect. 129 (4): 047012. https://doi.org/10.1289/EHP7804.
Gagliardi, A. R., C. Marshall, S. Huckson, R. James, and V. Moore. 2015. “Developing a checklist for guideline implementation planning: Review and synthesis of guideline development and implementation advice.” Implementation Sci. 10 (1): 1–9. https://doi.org/10.1186/s13012-015-0205-5.
Galea, S., M. Riddle, and G. A. Kaplan. 2010. “Causal thinking and complex system approaches in epidemiology.” Int. J. Epidemiol. 39 (1): 97–106. https://doi.org/10.1093/ije/dyp296.
Gelting, R. J., S. C. Chapra, P. E. Nevin, D. E. Harvey, and D. M. Gute. 2019. “‘Back to the future’: Time for a renaissance of public health engineering.” Int. J. Environ. Res. Public Health 16 (3): 387. https://doi.org/10.3390/ijerph16030387.
Gray, J. 2009. Gray’s anatomy: Selected writings. London: Allen Lane.
Greenhalgh, T., J. Howick, and N. Maskrey. 2014. “Evidence based medicine: A movement in crisis?” BMJ 348 (Jun): g3725. https://doi.org/10.1136/bmj.g3725.
Groot, G., T. Waldron, T. Carr, L. McMullen, L. A. Bandura, S. M. Neufeld, and V. Duncan. 2017. “Development of a program theory for shared decision-making: A realist review protocol.” Syst. Rev. 6 (1): 1–8. https://doi.org/10.1186/s13643-017-0508-5.
Guo, W., R. Cronk, E. Scherer, R. Oommen, J. Brogan, M. M. Sarr, and J. Bartram. 2019. “A systematic review of environmental health conditions in penal institutions.” Int. J. Hyg. Environ. Health 222 (5): 790–803. https://doi.org/10.1016/j.ijheh.2019.05.001.
Gupta, J. 2014. “Global scientific assessments and environmental resource governance: Towards a science-policy interface ladder.” In The role of ‘experts’ in international and European decision-making processes, edited by M. Ambrus, K. Arts, E. Hey, and H. Raulus, 148–170. Cambridge, UK: Cambridge University Press.
Hanna-Attisha, M. 2019. “I helped expose the lead crisis in Flint. Here’s what other cities should do.” Accessed May 23, 2021. https://www.nytimes.com/2019/08/27/opinion/lead-water-flint.html.
Heller, L., C. de Albuquerque, V. Roaf, and A. Jimenez. 2020. “Overview of twelve years of special rapporteurs on the human rights to water and sanitation: Looking forward to future challenges.” Water 12 (9): 2598. https://doi.org/10.3390/w12092598.
Hering, J. G. 2018. “Implementation science for the environment.” Environ. Sci. Technol. 52 (10): 5555–5560. https://doi.org/10.1021/acs.est.8b00874.
Higgins, J., and S. Green, eds. 2011. Cochrane handbook for systematic reviews of interventions (version 5). London: Cochrane Collaboration.
Hill, A. B. 1965. “The environment and disease: Association or causation?” Proc. R. Soc. Med. 58 (5): 295–300. https://doi.org/10.1177/003591576505800503.
Howard, G., et al. 2020. “Covid-19: Urgent actions, critical reflections and future relevance of ‘WaSH’: Lessons for the current and future pandemics.” J. Water Health 18 (5): 613–630. https://doi.org/10.2166/wh.2020.162.
Huberman, M. 1994. “Research utilization: The state of the art.” Knowl. Policy 7 (4): 13–33. https://doi.org/10.1007/BF02696290.
Hunter, P. 2009. “Household water treatment in developing countries: Comparing different intervention types using metaregression.” Environ. Sci. Technol. 43 (23): 8991–8997. https://doi.org/10.1021/es9028217.
ICMJE (International Committee of Medical Journal Editors). n.d. “Recommendations—Clinical trial registration.” Accessed August 28, 2020. http://www.icmje.org/recommendations/browse/publishing-and-editorial-issues/clinical-trial-registration.html.
IWA (International Water Association). 2004. “The Bonn charter for safe drinking water.” Accessed September 24, 2020. https://iwa-network.org/publications/the-bonn-charter-for-safe-drinking-water/.
Kant, I. n.d. “It is often necessary to make a decision on the basis of knowledge sufficient for action but insufficient to satisfy the intellect.” Accessed September 25, 2020. https://www.azquotes.com/quote/897494.
Khan, K. S., R. Kunz, J. Kleijnen, and G. Antes. 2003. “Five steps to conducting a systematic review.” J. R. Soc. Med. 96 (3): 118–121. https://doi.org/10.1177/014107680309600304.
Kolsky, P. n.d. “The process of applied research in the water and sanitation sector.” Accessed May 23, 2021. http://www.lboro.ac.uk/garnet/actiwp1.html.
Langley, A., H. Mintzberg, P. Pitcher, E. Posada, and J. Saint-Macary. 1995. “Opening up decision making: The view from the black stool.” Organ. Sci. 6: 260–279.
Loder, E., S. Loder, and S. Cook. 2018. “Characteristics and publication fate of unregistered and retrospectively registered clinical trials submitted to the BMJ over 4 years.” BMJ Open 8 (2): e020037. https://doi.org/10.1136/bmjopen-2017-020037.
Mara, D. D. 2006. “Modern engineering interventions to reduce the transmission of diseases caused by inadequate water supplies and sanitation in developing countries.” Build. Serv. Eng. Res. Technol. 27 (2): 75–83. https://doi.org/10.1191/0143624406bt148oa.
McNie, E. C. 2007. “Reconciling the supply of scientific information with user demands: An analysis of the problem and review of the literature.” Environ. Sci. Policy 10 (1): 17–38. https://doi.org/10.1016/j.envsci.2006.10.004.
Mitchell, R., W. Clark, and D. Cash. 2006. “Information and influence.” In Global environmental assessments: Information and influence, edited by R. Mitchell, W. Clark, D. Cash, and N. Dickson, 307–338. Cambridge, MA: MIT Press.
Moffa, M., R. Cronk, D. Fejfar, S. Dancausse, L. Padilla, and J. Bartram. 2019a. “A systematic scoping review of hygiene behaviors and environmental health conditions in institutional care settings for orphaned and abandoned children.” Sci. Total Environ. 658 (Mar): 1161–1174. https://doi.org/10.1016/j.scitotenv.2018.12.286.
Moffa, M., R. Cronk, L. Padilla, D. Fejfar, S. Dancausse, and J. Bartram. 2019b. “A systematic scoping review of environmental health conditions and hygiene behaviors in homeless shelters.” Int. J. Hyg. Environ. Health 222 (3): 335–346. https://doi.org/10.1016/j.ijheh.2018.12.004.
Narayanamurti, V., and T. Odumosu. 2016. Cycles of invention and discovery: Rethinking the endless frontier. Cambridge, MA: Harvard University Press.
NAS (National Academy of Sciences, Engineering, and Medicine). 2020. “Progress, challenges, and opportunities for sustainability science: A workshop.” Accessed December 16, 2020. https://www.nationalacademies.org/event/11-30-2020/progress-challenges-and-opportunities-for-sustainability-science-a-workshop.
New Zealand. n.d. “Microbiological water quality guidelines for marine and freshwater recreational areas.” Accessed September 28, 2020. https://www.mfe.govt.nz/fresh-water/tools-and-guidelines/microbiological-guidelines-recreational-water.
NTP (National Toxicology Program). 2015. “OHAT risk of bias rating tool for human and animal studies.” Accessed February 13, 2021. https://ntp.niehs.nih.gov/ntp/ohat/pubs/riskofbiastool_508.pdf.
Nutley, S. M., I. Walter, and H. T. O. Davies. 2007. Using evidence: How research can inform public services. Bristol, UK: Policy Press.
Perkmann, M., R. Salandra, V. Tartari, M. McKelvey, and A. Hughes. 2021. “Academic engagement: A review of the literature 2011–2019.” Res. Policy 50 (1): 104114. https://doi.org/10.1016/j.respol.2020.104114.
Pfadenhauer, L. M., et al. 2017. “Making sense of complexity in context and implementation: The context and implementation of complex interventions (CICI) framework.” Implementation Sci. 12 (1): 1–17. https://doi.org/10.1186/s13012-017-0552-5.
Pham, M. T., A. Rajic, J. D. Greig, J. M. Sargeant, A. Papadopoulos, and S. A. McEwen. 2014. “A scoping review of scoping reviews: Advancing the approach and enhancing the consistency.” Res. Synth. Methods 5 (4): 371–385. https://doi.org/10.1002/jrsm.1123.
Pielke, R. A., Jr. 2007. The honest broker: Making sense of science in policy and politics. Cambridge, UK: Cambridge University Press.
Plan International. n.d. “Plan policy: Research policy and standards.” Accessed May 23, 2021. https://plan-international.org/publications/research-policy-and-standards.
Poch, M., J. Comas, U. Cortés, M. Sànchez-Marrè, and I. Rodríguez-Roda. 2017. “Crossing the Death Valley to transfer environmental decision support systems to the water market.” Global Challenges 1 (3): 1700009. https://doi.org/10.1002/gch2.201700009.
Quevauviller, P., ed. 2010. Water system science and policy interfacing. Cambridge, UK: Royal Society of Chemistry Publishing.
Rehfuess, E., and J. Bartram. 2014. “Beyond direct impact: Evidence synthesis towards a better understanding of effectiveness of public health interventions.” Int. J. Hyg. Environ. Health 217 (2–3): 155–159. https://doi.org/10.1016/j.ijheh.2013.07.011.
Rehfuess, E. A., S. Durao, P. Kyamanywa, J. J. Meerpohl, T. Young, and A. Rohwer. 2016. “An approach for setting evidence-based and stakeholder-informed research priorities in low- and middle-income countries.” Bull. World Health Organ. 94 (4): 297–305. https://doi.org/10.2471/BLT.15.162966.
Roberts, N. C., and P. J. King. 1991. “Policy entrepreneurs: Their activity structure and function in the policy process.” J. Public Administration Res. Theory 1 (2): 147–175.
Robinson, L. 2012. Changeology. Cornwall, UK: TJ International.
Rogers, E. M. 2003. Diffusion of innovations. 5th ed. New York: Free Press.
Rose, D. C., N. Mukherjee, B. I. Simmons, E. R. Tew, R. J. Robertson, A. B. Vadrot, R. Doubleday, and W. J. Sutherland. 2020. “Policy windows for the environment: Tips for improving the uptake of scientific knowledge.” Environ. Sci. Policy 113 (Nov): 47–54. https://doi.org/10.1016/j.envsci.2017.07.013.
Roux, D. J., K. H. Rogers, H. C. Biggs, P. J. Ashton, and A. Sergeant. 2006. “Bridging the science-management divide: Moving from unidirectional knowledge transfer to knowledge interfacing and sharing.” Ecol. Soc. 11 (1): 4. https://doi.org/10.5751/ES-01643-110104.
Rychetnik, L., P. Hawe, E. Waters, A. Barratt, and M. Frommer. 2004. “A glossary for evidence based public health.” J. Epidemiol. Community Health 58: 538–545.
Sarewitz, D., and R. A. Pielke. 2007. “The neglected heart of science policy: Reconciling supply of and demand for science.” Environ. Sci. Policy 10 (1): 5–16. https://doi.org/10.1016/j.envsci.2006.10.001.
Sarkki, S., J. Niemelä, R. Tinch, S. van den Hove, A. Watt, and J. Young. 2014. “Balancing credibility, relevance and legitimacy: A critical assessment of trade-offs in science–policy interfaces.” Sci. Public Policy 41 (2): 194–206. https://doi.org/10.1093/scipol/sct046.
SCCWRP (Southern California Coastal Water Research Project). 2020. “Governance.” Accessed May 26, 2021. https://www.sccwrp.org/about/governance/.
Scott, J. M., J. L. Rachlow, and R. T. Lackey. 2008. “The science-policy interface: What is an appropriate role for professional societies?” BioScience 58 (9): 865–869. https://doi.org/10.1641/B580914.
Sen, A. 1985. Commodities and capabilities. Amsterdam, Netherlands: Elsevier.
Setty, K., R. Cronk, S. George, D. Anderson, G. O’Flaherty, and J. Bartram. 2019. “Adapting translational research for water, sanitation, and hygiene.” Int. J. Environ. Res. Public Health 16 (20): 4049. https://doi.org/10.3390/ijerph16204049.
Setty, K., J. Willetts, A. Jimenez, M. Leifels, and J. Bartram. 2020. “Global water, sanitation, and hygiene research priorities and learning challenges under sustainable development goal 6.” Dev. Policy Rev. 38 (1): 64–84. https://doi.org/10.1111/dpr.12475.
Shackelford, B. B., R. Cronk, N. Behnke, B. Cooper, R. Tu, M. D’Souza, J. Bartram, R. Schweitzer, and D. Jaff. 2020. “Environmental health in forced displacement: A systematic scoping review of the emergency phase.” Sci. Total Environ. 714 (3): 136553. https://doi.org/10.1016/j.scitotenv.2020.136553.
Shields, K. F., D. J. Barrington, S. Meo, S. Sridharan, S. G. Saunders, J. Bartram, and R. T. Souter. Forthcoming. The enabling environment is not a checklist: Building practical authority within a dynamic ecology of participation can foster water, sanitation and hygiene improvements. Boulder, CO: Univ. of Colorado Boulder.
Smits, P. A., and J. L. Denis. 2014. “How research funding agencies support science integration into policy and practice: An international overview.” Implementation Sci. 9 (1): 1–12. https://doi.org/10.1186/1748-5908-9-28.
SPLASH. n.d.-a. “Maximizing the benefits of water research to international development—What research programmers can do.” Accessed May 23, 2021. http://splash-era.net/downloads/SPLASH_Briefing_note_01.pdf.
SPLASH. n.d.-b. “Maximizing the benefits of water research to international development—What researchers can do.” Accessed May 23, 2021. http://splash-era.net/downloads/SPLASH_Briefing_note_02.pdf.
Steenland, K., M. K. Schubauer-Berigan, R. Vermeulen, R. M. Lunn, K. Straif, S. Zahm, P. Stewart, W. D. Arroyave, S. S. Mehta, and N. Pearce. 2020. “Risk of bias assessments and evidence syntheses for observational epidemiologic studies of environmental and occupational exposures: Strengths and limitations.” Environ. Health Perspect. 128 (9): 095002. https://doi.org/10.1289/EHP6980.
Stevens, A. 2011. “Telling policy stories: An ethnographic study of the use of evidence in policy-making in the UK.” J. Social Policy 40 (2): 237–255. https://doi.org/10.1017/S0047279410000723.
Stratil, J. M., D. Paudel, K. E. Setty, C. E. Menezes de Rezende, A. A. Monroe, J. Osuret, I. B. Scheel, M. Wildner, and E. A. Rehfuess. 2020. “Advancing the WHO-INTEGRATE framework as a tool for evidence-informed, deliberative decision-making processes: Exploring the views of developers and users of WHO guidelines.” Int. J. Health Policy Manage. https://doi.org/10.34172/ijhpm.2020.193.
Turnhout, E. 2018. “The politics of environmental knowledge.” Conserv. Soc. 16 (3): 363–371. https://doi.org/10.4103/cs.cs_17_35.
University of Leeds. n.d. “The Nakuru Accord.” Accessed October 14, 2020. https://wash.leeds.ac.uk/failing-better-in-the-wash-sector/.
USEPA. n.d.-a. “EPA’s role in open government.” Accessed May 23, 2021. https://www.epa.gov/data.
USEPA. n.d.-b. “How EPA regulates drinking water contaminants.” Accessed May 23, 2021. https://www.epa.gov/sdwa/how-epa-regulates-drinking-water-contaminants.
van den Hove, S. 2007. “A rationale for science-policy interfaces.” Futures 39 (7): 807–826. https://doi.org/10.1016/j.futures.2006.12.004.
van Kerkhoff, L., and V. Pilbeam. 2017. “Understanding socio-cultural dimensions of environmental decision-making: A knowledge governance approach.” Environ. Sci. Policy 73 (Jul): 29–37. https://doi.org/10.1016/j.envsci.2017.03.011.
Verboom, B., P. Montgomery, and S. Bennett. 2016. “What factors affect evidence-informed policymaking in public health? Protocol for a systematic review of qualitative evidence using thematic synthesis.” Syst. Rev. 5 (1): 61. https://doi.org/10.1186/s13643-016-0240-6.
Vernon, J. L. 2017. “Science in the post-truth era.” Am. Sci. 105 (1): 2. https://doi.org/10.1511/2017.124.2.
Viergever, R. F., S. Olifson, A. Ghaffar, and R. F. Terry. 2010. “A checklist for health research priority setting: Nine common themes of good practice.” Health Res. Policy Syst. 8 (1): 1–9. https://doi.org/10.1186/1478-4505-8-36.
Ward, V., A. House, and S. Hamer. 2009. “Knowledge brokering: The missing link in the evidence to action chain?” Evidence Policy 5 (3): 267–279. https://doi.org/10.1332/174426409X463811.
Weichselgartner, J., and R. Kasperson. 2010. “Barriers in the science-policy-practice interface: Toward a knowledge-action-system in global environmental change research.” Global Environ. Change 20 (2): 266–277. https://doi.org/10.1016/j.gloenvcha.2009.11.006.
Whaley, P., S. W. Edwards, A. Kraft, K. Nyhan, A. Shapiro, S. Watford, S. Wattam, T. Wolffe, and M. Angrish. 2020. “Knowledge organization systems for systematic chemical assessments.” Environ. Health Perspect. 128 (12): 125001. https://doi.org/10.1289/EHP6994.
Whittington, D., M. Radin, and M. Jeuland. 2020. “Evidence-based policy analysis? The strange case of the randomized controlled trials of community-led total sanitation.” Oxford Rev. Econ. Policy 36 (1): 191–221. https://doi.org/10.1093/oxrep/grz029.
WHO (World Health Organization). n.d. “International network to promote household water treatment and safe storage.” Accessed May 23, 2021. https://www.who.int/water_sanitation_health/water-quality/household/household-water-network/en/.
WHO (World Health Organization). 1999. Health based monitoring of recreational waters: The feasibility of a new approach (the ‘Annapolis Protocol’): Outcome of an expert consultation. WHO/SDE/WSH/99.1. Geneva: WHO.
WHO (World Health Organization). 2004. Guidelines for drinking-water quality. 3rd ed. Geneva: WHO.
WHO (World Health Organization). 2009. Guidelines for drinking water quality: Policies and procedures in the updating of the guidelines for drinking water quality. Geneva: WHO.
WHO (World Health Organization). 2017. Global status report on water safety plans: A review of proactive risk assessment and risk management practices to ensure the safety of drinking-water. Geneva: WHO.
Young, K., D. Ashby, A. Boaz, and L. Grayson. 2002. “Social science and the evidence-based policy movement.” Social Policy Soc. 1 (3): 215–224. https://doi.org/10.1017/S1474746402003068.

Information & Authors

Published In

Journal of Environmental Engineering, Volume 147, Issue 10, October 2021
History

Published online: Aug 16, 2021
Published in print: Oct 1, 2021
Discussion open until: Jan 16, 2022

Authors

Affiliations

Professor, School of Civil Engineering, Univ. of Leeds, Leeds LS2 9JT, UK; Professor Emeritus, Gillings School of Global Public Health, Univ. of North Carolina at Chapel Hill, Chapel Hill, NC 27599 (corresponding author). ORCID: https://orcid.org/0000-0002-6542-6315. Email: [email protected]; [email protected]
Lead Health Scientist, Health Science Portfolio, ICF, 2635 Meridian Pkwy Suite 200, Durham, NC 27713; formerly, Graduate Research Assistant, Dept. of Environmental Sciences and Engineering, Univ. of North Carolina, Chapel Hill, NC 27599. ORCID: https://orcid.org/0000-0002-5591-9693. Email: [email protected]; [email protected]


Cited by

  • A Case Study of Community-Engaged Design: Creating Parametric Insurance to Meet the Safety Needs of Fisherfolk in the Caribbean, Journal of Environmental Engineering, 10.1061/(ASCE)EE.1943-7870.0001971, 148, 3, (2022).
