Oct 30, 2023

The Warning Lexicon: A Multiphased Study to Identify, Design, and Develop Content for Warning Messages

Publication: Natural Hazards Review
Volume 25, Issue 1

Abstract

Emergency managers and alert and warning originators require tools to build effective warning messages. The purpose of this research is to develop one such tool—a Warning Lexicon—that can be used to construct consistent warning messages for Wireless Emergency Alerts. Specifically, the Warning Lexicon systematically establishes a common set of statements about hazard impacts and their associated recommended protective actions that can be used to quickly write effective warning message contents. Thus, the Warning Lexicon allows practitioners and risk communicators to (1) write effective warning messages at the time of the threat, (2) reduce message issuance delay, and (3) develop templated messages as part of their preparedness process. We built the Warning Lexicon through a theoretically informed, multiphased mixed methods process of content analysis and subject matter expert review to verify the accuracy of lexicon statements and validate the language used to instruct message receivers about protective actions. The resulting content incorporated into the Warning Lexicon includes 48 hazards, their associated impacts, and 112 protective action statements spanning atmospheric, technological, biological, and human-induced events. The result of this project is a comprehensive, theory-based, expert-informed, and practitioner-reviewed resource to support the composition of warning messages for imminent threat events.

Introduction

Effective hazard warnings are vital for reducing the loss of life as threats approach. The number of hazards and potential types of actions that can be recommended, coupled with the urgency of communication and a general lack of resources for warning message design (see, for example, Whitcomb 2018), make it difficult for some risk communicators to construct effective warning messages in a timely manner. For example, organizations have been faulted for failure to disseminate emergency information during power outages for winter storms (Agnew 2021) and wildfires (Whitcomb 2018; Woodworth 2020; McKinley 2022). In other cases, organizations have issued alerts with vague or inaccurate messaging that has led to confusion (Schneider 2022), fear (Nagourney et al. 2018), and anger (Tutten 2023) among recipients. Indeed, there is a significant gap for most alerting authorities in jurisdictions across the United States, whereby the focus has been on the technology for message distribution first (i.e., how to get the message to those at risk), followed by limited training and education to improve message design for imminent threat communication (see Bean 2019).
Specifically, when faced with a fast-moving, rapid onset event, public officials must write a warning message quickly while simultaneously managing uncertainties and competing demands for attention. Therefore, warning message templates (i.e., predesigned messages) that include appropriate and effective content for a range of hazards are needed. This need has been reinforced by emergency managers and risk communicators across the United States who have requested access to verified and vetted message contents to reduce message issuance delay and increase the likelihood that a message will be understood, believed, and personalized by message receivers (DHS S&T Directorate 2018). Indeed, this request has resulted in 22 technical assistance webinars delivered by the FEMA National Integration Center to more than 1,200 persons since 2020, of whom approximately 10% report having developed prescripted 90-character alert and warning messages prior to the webinar (personal communication, A. Savitt). Technical assistance webinars draw from the research record and provide guidance on how to construct effective messages. Participants are advised to make use of the webinar contents to develop their own messages, post-training. Their progress is hampered by a lack of access to vetted and verified contents that would enable them to build templated messages pre-event.
We address this problem by developing consistent warning message contents for 48 hazards in the form of a Warning Lexicon. Our goal is to both improve warning message design and reduce the time and effort required to construct such messages when there are threats to public safety. We draw from the empirical model of effective warning messages identified by Mileti and Sorensen (1990), hereafter referred to as the Warning Response Model (WRM; Kuligowski et al. 2023), to determine the necessary contents for a warning message and their style of presentation. Then, we use a multiphase qualitative research design to determine Warning Lexicon contents (Creswell and Creswell 2017). First, we collect historical warning messages from existing repositories and code each message to identify hazard impact statements and protective action recommendations. Second, we verify these statements through interviews with hazard subject matter experts. Third, we conduct focus groups and solicit written feedback from communication subject matter experts to validate the language used in each statement. From this approach, we present a Warning Lexicon consisting of hazard impact and protective action guidance statements for 48 unique hazards, including atmospheric, technological, biological, and human-induced threats.
We begin by discussing lexicons more broadly before examining the importance of a Warning Lexicon, specifically.

Literature Review

Lexicons

A lexicon is generally defined as a set of vocabulary related to a specific language, subject matter, and/or specific group of people (Merriam-Webster 2023). Lexicons are used to organize terms and concepts into a common language and provide definitions for words used across knowledge domains. A lexicon that is agreed upon by its users can help to standardize terminology and definitions and facilitate mutual understanding and communication among disparate groups of researchers, practitioners, policy makers, and the public (see, for example, Goldman 2011; McCrindle and Wolfinger 2011; Plotnick and Hiltz 2018).
Because the language used to describe technology and interventions can differ by domain and expertise, creating a lexicon enables a shared language that can, over time, become familiar to practitioners and the public alike. For example, Salafsky (2008) developed a standard lexicon related to biodiversity conservation that enables scientists and policy makers across multiple disciplines to communicate using shared definitions for their common vocabulary. Following this effort, the US Environmental Protection Agency (EPA) developed a standard lexicon for ecosystem services through a process of consensus building and iterative discourse among partners (Munns et al. 2015). The objective of the EPA lexicon is to facilitate mutual understanding of terminology used by multiple disciplines and foster transdisciplinary collaboration among scientists (Munns et al. 2015).
Within the space of disaster communication, lexicons have primarily been developed for the purpose of automatically detecting message content posted to social media during disasters (Plotnick and Hiltz 2018). Through a combination of machine learning and content analysis, researchers have identified vocabulary that is specific to crisis management (Temnikova et al. 2014), individual hazards (Olteanu et al. 2014), and disaster response (Hodas et al. 2015). By using a lexicon to quickly identify what members of the public post about online, public officials can direct resources to assist with rescue and recovery actions (see, for example, Buyuk 2023). This ability to detect public needs from large and expanding data sets in real time remains a priority for disaster responders (Linardos et al. 2022).
However, little attention has been devoted to building a lexicon that is useful for warning message creation, specifically. The ability to clearly communicate to the public about an imminent threat is critical for reducing the loss of life and injury. Public officials and responders depend on precision and clarity of warning language but lack access to a common vocabulary that can be used to consistently convey information about impending threats and the actions the public should take to reduce their impacts (see Hodas et al. 2015; Olteanu et al. 2014). For risk communicators, a warning lexicon is a valuable tool that can be used to write evidence-based messages with greater speed, confidence, and effectiveness.
Specifically, effective warning messages must include the hazard, its impacts, and protective actions. Thus, a warning lexicon must also include these types of information, resulting in a common set of actionable statements and impacts for a variety of hazards. We further discuss effective warning message design below.

Warning Response Model

Mileti and Sorensen (1990) document five key types of warning message content that motivate people to take timely and appropriate protective action in response to a warning message; this model is referred to as the WRM. These contents include the following:
a description of the threat/event (i.e., hazard) and its consequences (i.e., what is happening and how it will affect people);
protective action guidance (i.e., what to do);
the location and population at risk (i.e., where it is happening);
the time by which the public should begin taking the protective action, as well as the time by which taking the protective action should be completed; and
the name of the message sender or source (i.e., who is sending the message).
Decades of research investigating effective warnings have demonstrated the importance of including these contents to warn about imminent hazards, such as earthquake (Sutton et al. 2020), tsunami (Sutton et al. 2018), tornado (Sutton et al. 2021), dust storm, and snow squall (Fischer et al. 2023), especially for hazards that are unfamiliar to message receivers (Wood et al. 2018).
Mileti (2018) later stated that warning messages need to also include statements describing a hazard’s potential impacts and consequences to populations as well as how protective actions can reduce those consequences. Described by scholars as “impact-based warnings” (Potter et al. 2018), these messages are intended to increase one’s perceived severity of the hazard (Harrison et al. 2014).
The WRM also emphasized the importance of message style—or the way content is communicated. Warning messages should be clear, without using ambiguous or unfamiliar terms (Sharon and Baram-Tsabari 2014; Sivle and Aamodt 2019), as well as certain and complete. They should also be internally consistent, lacking conflicting information (Williams and Eosco 2021), and specific (Sutton et al. 2023). Messages that are clear can also result in increased knowledge and feelings of self-efficacy, and decrease protective action delay (Fischer et al. 2023; Frisby et al. 2014; Wood et al. 2018). Message style can also increase a sense of urgency by using an imperative voice and exclamation marks or by calling attention to content in ALL CAPS (Sutton and Fischer 2021).
Within the WRM research record, responses to messages have largely been measured as perceptual outcomes, which are focused on acquiring new information from the alert or warning message. WRM scholars have focused on perceptions of message understanding, message believing, and message personalization by measuring the extent to which a message receiver agrees or disagrees with statements in a message to which they are exposed (see, for example, Fischer et al. 2023). WRM scholars also measure the extent to which a message receiver believes that they will be able to make a quick, effective decision about protective action response based upon the message contents (see, for example, Wood et al. 2017). More recently, WRM scholars have inquired about perceptions of self- and response-efficacy that affect a message receiver’s perceived ability to do the recommended action, which can also affect their potential decision-making (see, for example, Sutton et al. 2021). Notably, the focus within the warning literature has been on measuring perceptions rather than behavioral intention or behavioral change.
There are key limitations to assessing behavioral intention in warning message design research. First, this type of research commonly uses hypothetical scenarios, which cannot reflect the environmental or social cues people also use to make protective action decisions (Lindell and Perry 2012) and therefore will not faithfully represent actual responses to a warning message (NRC 2013). Second, people tend to overestimate their intention to perform protective actions when actual consequences are not present (Kox and Thieken 2017). Third, behavioral intention cannot capture barriers that may exist during actual decision making. Overall, the context in which an individual actually receives an alert or warning message cannot be reliably replicated, making it difficult to clearly assess whether an individual message will result in a behavioral outcome (FEMA 2021a).
Importantly, the content and style in which a warning is delivered increase in importance when it is sent over short messaging channels. Because limited character counts restrict the message writer's ability to communicate details, capturing the key contents in a standardized structure becomes vital in time-limited contexts. Wireless Emergency Alerts (WEAs) represent a national solution for delivering imminent threat messages to all persons throughout the United States. We turn to a discussion of WEAs next.

Wireless Emergency Alerts

WEAs are the channel in the United States with the greatest potential to reach persons at risk of an imminent threat without requiring them to sign up, opt in, or turn on a device to receive a warning message. By using the Integrated Public Alert and Warning System (IPAWS), risk communicators at the local, state, and national levels can send geotargeted, text-like messages to WEA-enabled mobile devices (Bean 2019). WEAs can alert persons to imminent threats using text, audio signal, and vibration for hazards of all types, ranging from short fuse events such as earthquakes via the USGS ShakeAlert service (Minson et al. 2022) to longer fuse threats such as hurricanes via the National Weather Service (Gao and Wang 2021). The WEA system is also used to deliver AMBER, presidential, and public safety alert messages.
Initially, WEAs were limited to 90 characters of text (known as WEA 1.0). In 2019, the limit was expanded fourfold to 360 characters of text (known as WEA 2.0). Since 2019, all mobile devices built by the top five brands (Apple, Samsung, Motorola, Google, and LG) are WEA 2.0 enabled; these brands represent more than 95% of all mobile devices sold in the US (AT&T 2022; Statcounter GlobalStats 2023). WEA 2.0 enabled devices, and some 1.0 devices (Verizon, n.d.), also facilitate the inclusion of a URL in messaging, allowing risk communicators to add a link for more complete information (see Kuligowski et al. 2023).
The design, dissemination, and conditions leading to the delivery of WEA messages highlight the complexities of communicating warnings effectively. The character limitations pose a challenge for message writers to construct a terse message under heightened conditions of stress (Bean 2019). Researchers have also found that short messages, especially WEAs limited to 90 characters, tend to be incomplete and lack specificity; therefore, these types of messages are less actionable (Wood et al. 2018). Expanding WEA messages to 360 characters allows message writers to increase details about the hazard, impacts, and recommended guidance necessary to act. Risk communicators who have access to relevant content to design an effective warning message can reduce issuance delay and increase the likelihood of message receivers taking action.
The five message contents included in the WRM, presented in an order optimized for the message receiver (source, hazard, location, guidance, time; Bean et al. 2014), served as the initial foundation for WEA messages in the United States (Bean 2019). Indeed, this standardized structure was found to optimize message receiver outcomes (Bean et al. 2015), leading to greater understanding, believing, personalizing, and ability to decide in comparison with other message content orderings. As a result, this standardized message structure has been adopted and included in Federal trainings (FEMA National Integration Center 2021) and guidance documents (USACE 2019; CISA 2019; FEMA IPAWS 2023) for all warning communications issued via WEA and other short message channels. The structure should not differ due to the type of hazard, the location of the threat, or the population receiving the message. However, there is not yet a lexicon—or a consistent set of evidence-based warning message contents—that can be used to quickly communicate hazard impacts and their recommended actions.

Hazard Matrix

In 2012, researchers carried out initial efforts to define and limit a set of protective actions that could be reduced to short phrases suitable for 90-character WEA 1.0 messages (Mileti et al. 2012). This research led to the creation of a Hazard Versus Protective Actions Message Matrix (aka “The Hazard Matrix”) that organized a set of natural, technological, and human-induced hazards (n=59) and their associated protective actions (n=27) into a grid, whereby action statements were sequentially numbered representing the initial, secondary, and tertiary warning message content, plus an “all clear” (see Fig. 1).
Fig. 1. An adapted example of the Hazard versus Protective Actions Message Matrix.
The Hazard Matrix represented an innovative and important contribution at the time it was developed; however, it is limited in scope and only includes a terse set of actions, with no instruction or elaboration on each protective action recommendation. It provides only the name of each hazard and lacks information about the hazard itself as well as its potential impacts or consequences to populations. We use the Hazard Matrix as a starting point for the Warning Lexicon and build upon it by describing each hazard's impacts and broadening the protective action categories into complete and specific action statements. We describe the method used to develop this more comprehensive Warning Lexicon next.
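For readers who work with such matrices programmatically, the grid can be thought of as a mapping from each hazard to its sequentially numbered action categories. The following minimal sketch illustrates that structure only; the hazard names, action pairings, and helper function are illustrative assumptions, not the published matrix.

```python
# Illustrative sketch of a hazard-versus-protective-actions grid: each hazard maps to
# sequentially numbered action categories (1 = initial, 2 = secondary, 3 = tertiary),
# with "all clear" closing the sequence. The pairings below are hypothetical examples.
HAZARD_MATRIX = {
    "wildfire": {1: "evacuate", 2: "avoid", 3: "listen"},
    "earthquake": {1: "protect", 2: "avoid", 3: "listen"},
}

def action_for_message(hazard: str, message_number: int) -> str:
    """Return the action category recommended for the nth message about a hazard."""
    return HAZARD_MATRIX[hazard].get(message_number, "all clear")

print(action_for_message("wildfire", 1))  # evacuate
```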

Method

To develop the Warning Lexicon, we conducted a qualitative multiphase study (Creswell and Creswell 2017; see Fig. 2). In Phase 1, we collected, coded, and analyzed the content of previously disseminated hazard warning messages to identify a set of cross-hazard protective action recommendations and hazard impact statements using content and thematic analysis. In Phase 2, we conducted interviews with hazard subject matter experts who verified the accuracy of the data collected and analyzed in Phase 1. In Phase 3, we completed focus group interviews with communication experts who validated the language used in the Warning Lexicon. The research protocol (21×93) was approved by the University at Albany Institutional Review Board (10-22-2021).
Fig. 2. Multiphase methods used to create, verify, and validate contents in the Warning Lexicon.

Phase 1: Lexicon Content Development—Data Collection and Organization

In this first phase of research, we deconstructed English language warning message templates (n=470) from eight metropolitan areas located across the United States. These templates were used as the starting point for developing Lexicon contents. The hazards included in this template data set far exceeded the number and type of hazards contained in the original Hazard Matrix, allowing the research team to draw from a wide breadth of message examples for message deconstruction. Each message was deconstructed into content categories representing the hazard, hazard impact, and protective action statements that would later be verified, vetted, and potentially changed; thus, we were not concerned with any prior assessment of their effects on WRM outcomes or the location from which they originated. For these templates, we focused on the warning messages that had been designed for short messaging channels, including social media (140–280 characters in length), opt-in messaging systems and Short Message Service (SMS; 160 characters), and WEAs (90 or 360 characters).
Deconstructed messages were organized by hazard type using Excel spreadsheets to show variations in the language describing each hazard, hazard impact, and protective action statement (see example in Table 1). Messages were manually deconstructed by the third author in consultation with the first two authors of this manuscript. This step resulted in a comprehensive list of protective action statements (n=372) across 91 hazards. We discuss next how we selected the list of hazards included in the Warning Lexicon.
Table 1. Message deconstruction example for hazards related to fire/wildfire/fire weather
Message origin | Complete message | Source | Hazard | Impacts | Location | Guidance | Time
Portland, Oregon | Portland Fire reports large wildfire in NW Portland. Evacuate the area immediately. | Portland Fire | Large wildfire | N/A | NW Portland | Evacuate the area | Immediately
New York | Fire: [LOCATION/INTERSECTION] in [BOROUGH]. Expect smoke & traffic delays in area. People nearby avoid smoke, close windows. | N/A | Fire | Expect smoke & traffic delays | [LOCATION/INTERSECTION] in [BOROUGH] | People nearby avoid smoke, close windows. | N/A
Houston | The National Weather Service has issued a Fire Weather Warning, also known as a Red Flag Warning, for Houston on <_date_>, due to a forecast of strong winds combined with low humidity. This means that any sparked fires may intensify and spread more rapidly than usual. | The National Weather Service | Fire Weather Warning, also known as a Red Flag Warning | Due to a forecast of strong winds combined with low humidity. This means that any sparked fires may intensify and spread more rapidly than usual. | Houston | N/A | on <_date_>
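As an illustration of the deconstruction step, each spreadsheet row in Table 1 can be represented as a simple record whose fields mirror the table's columns. This is a hypothetical sketch of the data structure, not the authors' actual tooling (the study used Excel spreadsheets); the example values are drawn from the Portland row of Table 1.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DeconstructedMessage:
    """One Phase 1 deconstruction record; fields mirror the columns of Table 1."""
    origin: str
    complete_message: str
    source: Optional[str]   # None where the template provided no value (N/A)
    hazard: str
    impacts: Optional[str]
    location: Optional[str]
    guidance: Optional[str]
    time: Optional[str]

portland_row = DeconstructedMessage(
    origin="Portland, Oregon",
    complete_message=("Portland Fire reports large wildfire in NW Portland. "
                      "Evacuate the area immediately."),
    source="Portland Fire",
    hazard="Large wildfire",
    impacts=None,
    location="NW Portland",
    guidance="Evacuate the area",
    time="Immediately",
)
print(portland_row.hazard)  # Large wildfire
```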

Hazard List

The hazards included in the Warning Lexicon list conform primarily to the initial guidelines issued by FEMA on the appropriate use of WEA (McGregor et al. 2014). These guidelines include the following:
1. The event must be urgent, that is, classified as either immediate, requiring immediate responsive action, or expected, requiring responsive action within 1 h.
2. The event must be severe; that is, the severity of the event must be classified as either extreme or severe, posing an extraordinary or significant threat to life or property.
3. The event must be certain; that is, the certainty of the event must be classified as observed, determined to have occurred, or ongoing, or, likely, determined to have a probability of occurrence of 50% or greater.
Several years after these criteria were issued, FEMA amended the guidelines, acknowledging that some messages may also be sent for nonimmediate events that pose a degree of danger and demand knowledge or action among members of the public. Described as “public safety” messages, these include, for example, messages about 911 service outages, contaminated water, and infectious disease outbreaks, in addition to law enforcement-oriented messages directing the public to “be on the lookout” (FEMA IPAWS 2021).
Based on these guidelines, the research team eliminated from the data set any warning messages that had been designed for exercises, school closures, planned infrastructure work, and transit delay/disruption. Also excluded are messages designed to communicate about missing persons (e.g., Amber or Silver Alerts), and messages sent to communicate the event is “all clear,” as neither type of message requires a protective action response by the message receiver. Furthermore, due to the complexity of communicating a complete message, including the sequence in which messages must be sent, we do not include contents for nuclear attack [complete messages for this hazard are available from FEMA (2013)]. Finally, messages that contained only protective action statements absent any hazard name (e.g., shelter in place, evacuate, curfew) were incorporated into the Warning Lexicon as protective action statements that were relevant to specific hazards.
We then reviewed our list of identified hazards to determine areas of redundancy—or hazards that vary by name but align with similar impacts and protective actions (e.g., earthquake foreshock and earthquake aftershock; active shooter and armed assault; civil disturbance, violence, and protestors). Notably, the original Hazard Matrix included similar hazards but separated them into different events based upon the stage of the event (e.g., earthquake foreshock, earthquake post initiation early warning, and earthquake secondary hazards). Upon completion of this review of hazards, 48 event types remained (see Table 2).
Table 2. Hazards included in the Warning Lexicon
Hazard category | Hazards
Atmospheric | Blizzard, Dust storm, Extreme cold, Extreme heat, Flash flood, Fog, Hail, Heavy rain, Heavy snow, High wind, Hurricane/tropical storm/tropical cyclone, Ice, Severe thunderstorm, Snow squall, Storm surge, Tornado, Tsunami, Winter storm
Geophysical | Avalanche, Earthquake, Landslide, Mud/debris flow, Rock fall, Sinkhole, Volcano
Law enforcement | Active shooter, Bomb threat, Civil disturbance, Hostage taking
Public health | Air quality, Bio-hazard, Infectious disease/novel pandemic
Public safety | Blackout/brownout, Water service disruption, 911 telephone outage
Technological | Bridge collapse, Building collapse, Building fire, Chemical release, Dam/levee failure, Explosion, Hazardous materials release, Industrial plant fire, Radiological release/accident, Toxic fumes
Wildfire | Wildfire

Protective Action Statements

After eliminating certain types of events from the overall data set (discussed above), we focused on the protective actions recommended for the remaining hazards. The second author conducted thematic analysis on the remaining protective action statements (n=372) to identify commonalities among them and to reduce synonyms to a single set of consistent, plain language words and phrases (Braun and Clarke 2006).
After thematic analysis was complete, we then organized these protective action recommendations into broader categories of actions identified in the Hazard Matrix (i.e., shelter, evacuate, protect, refrain, contain, obtain, avoid, listen, rearrange), as well as one action not previously included (contact). This exercise allowed us to find additional commonalities among the protective action statements (see Table 3). Thematic analyses and data organizing occurred iteratively among the research team members until further refinement failed to produce meaningful changes to the list of protective actions. This process resulted in 112 protective actions. Of those, 68 of the protective actions cross multiple hazards and 44 are specific to individual hazards.
Table 3. Protective action categories, definitions, and example protective action statements
Action category | Definition | Example protective action statements
Shelter | Actions relating to staying in one’s current location | Stay in an interior room of a sturdy building
Evacuate | Actions relating to leaving one’s current location | Leave the area
Protect | Actions relating to protecting one’s body | Cover your face and head with your arms
Decontaminate | Actions relating to cleaning one’s body or objects | Clean and disinfect commonly used objects and surfaces, such as doorknobs, counters, and light switches
Refrain | Actions relating to not performing certain actions | Do not walk or drive through moving water
Avoid | Actions relating to actively avoiding certain situations | Avoid glass, windows, outside doors and walls
Obtain | Actions relating to assembling materials and/or supplies | Get your essential documents, including birth certificate, passports/drivers license, social security card
Listen | Actions relating to waiting for additional information or seeking additional information | Wait for further information; Check [source] for [updates/information] at [URL]
Rearrange | Actions relating to adjusting items in one’s immediate surroundings | Secure outdoor objects, such as outdoor furniture and trash cans
Contact | Actions relating to directly contacting a source | Call [or text] [phone number] if you or others are [injured/in danger/trapped/buried/have an emergency]
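One way to picture the organization that resulted from this step is to key each protective action statement by its action category and tag it with the hazards to which it applies; the cross-hazard versus hazard-specific split reported above then falls out of the tags. The sketch below is illustrative only: the category labels come from Table 3, but the statement-to-hazard assignments are hypothetical.

```python
# Illustrative organization of protective action statements: each statement is keyed
# by its Table 3 action category and tagged with the hazards it applies to.
# The hazard tags below are hypothetical, not Lexicon content.
ACTION_STATEMENTS = {
    ("Refrain", "Do not walk or drive through moving water"): {"flash flood", "heavy rain"},
    ("Shelter", "Stay in an interior room of a sturdy building"): {"tornado", "high wind"},
    ("Protect", "Cover your face and head with your arms"): {"earthquake"},
}

cross_hazard = [stmt for (_, stmt), hazards in ACTION_STATEMENTS.items() if len(hazards) > 1]
hazard_specific = [stmt for (_, stmt), hazards in ACTION_STATEMENTS.items() if len(hazards) == 1]
print(len(cross_hazard), len(hazard_specific))  # 2 1 in this toy example
```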

Hazard Impact Statements

Hazard impact statements describe the “effects, outcomes, or consequences of hazardous events” (Harrison et al. 2021, p. 2). Few of the historical warning messages that were initially coded contained impact statements for hazards. Therefore, to further develop content about each hazard and its impacts, we consulted the websites of the Federal Emergency Management Agency (FEMA, n.d.), National Oceanic and Atmospheric Administration, the Centers for Disease Control and Prevention, and the American Red Cross (see Appendix S2). Although there are many types of potential hazard impacts, we focused on content that described the primary and immediate consequences to one’s safety resulting from the initiation of a hazardous event. Statements about immediate hazard impacts were compiled into a Word document and organized by hazard.
Impact statements (n=93) were developed to cover all hazards in the Warning Lexicon. Impact statements fall into two categories: pre-event and during or post-event. For hazards that can be forecast in advance, pre-event hazard impact statements include some measure of uncertainty about what could occur and the potential consequences to the population and the place at risk. Impact statements designed to be sent during the initial phase of or immediately following an event’s initiation provide certainty about the status of the hazard’s occurrence followed by potential consequences to the population and the place at risk. All impact and action statements are available in the Supplemental Materials.
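A minimal sketch of how impact statements might be keyed by hazard and event phase is shown below; the phase distinction follows the description above, but the hazard, the statements, and the data structure are illustrative assumptions rather than Lexicon content.

```python
from enum import Enum

class Phase(Enum):
    PRE_EVENT = "pre-event"
    DURING_OR_POST_EVENT = "during or post-event"

# Illustrative structure only: impact statements keyed by (hazard, phase). Pre-event
# statements hedge what could occur; during/post-event statements state that the hazard
# is occurring, followed by its consequences. The statements here are hypothetical.
IMPACT_STATEMENTS = {
    ("flash flood", Phase.PRE_EVENT):
        "Flash flooding [may/will] occur in [location], making roads impassable.",
    ("flash flood", Phase.DURING_OR_POST_EVENT):
        "Flash flooding is occurring in [location]. Roads [are/may become] impassable.",
}

print(IMPACT_STATEMENTS[("flash flood", Phase.PRE_EVENT)])
```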

Phase 2: Lexicon Verification—Interviews with Hazard Subject Matter Experts

After collecting and developing the initial Warning Lexicon protective action and hazard impact statements, hazard subject matter expert interviews were conducted to verify their accuracy. Fifteen hazard subject matter experts (SMEs) were identified through professional contacts with researchers and practitioners in local, state, and federal agencies, as well as recommendations from other hazard experts. Hazard SME participants included both practitioners (n=10) and researchers (n=5) with 12 to 29 years of service in their professional roles. All hazard SMEs had degrees, ranging from a bachelor’s degree to doctorate, in their relevant areas of practice or research, including atmospheric science or meteorology (n=8), emergency management/public administration (n=1), geosciences (n=2), criminal justice (n=2), and engineering and applied mechanics (n=2). Hazard SMEs work professionally as program coordinators and trainers (n=9), researchers (n=7), practitioners (n=5), and lab directors (n=1). Hazard SMEs were invited via email to participate in a virtual interview for the purpose of verifying the accuracy of and providing changes to Warning Lexicon contents. Hazard SMEs had expertise in each of the 48 hazards contained in the Warning Lexicon; in many cases a single SME had expertise with more than one hazard type (e.g., severe storm, tornado, hail, and lightning; hurricane and storm surge; volcano and earthquake).
Hazard SME interviews were conducted via Zoom, using guided discussion methods with open-ended response (Gray et al. 2020). The facilitator began with a short presentation summarizing the objectives and goals for developing the Warning Lexicon. Following this introduction, participants were told that they would be reviewing hazard impact and protective action statements for the hazards in which they had subject matter expertise.
First, the facilitator made their computer screen visible to the SME and presented a list of protective action statements associated with the hazard, which had been compiled in Phase 1 of the project. They were then asked to consider all possible protective actions that might be recommended for a single hazard type, rather than focusing narrowly on specific instances or conditions in which a single incident might occur. SMEs were asked to think about (1) protective action recommendations that should be made when an emerging hazardous condition would likely result in an impact on life and property; and (2) recommendations that should be made when the hazardous event is occurring or has already occurred, resulting in impacts that were verifiable. Each protective action statement was reviewed and confirmed by the SME as an appropriate protective action for a particular hazard. In some cases, the SME made suggestions for changes to the language describing the recommendations, and in other cases additional actions were recommended. In a few cases, action statements that had been included were removed. The facilitator typed these changes directly into the review document that was shared on the computer conferencing screen.
After the protective action statements were reviewed for a single hazard, the SME assessed the associated hazard impact statement developed in Phase 1 of the research. The SME was invited to review each statement, comment upon its accuracy, and, in cases where a statement was perceived to be incomplete or inaccurate, to offer alternative language. Changes were made directly on the shared review document to ensure that language was accurately captured.
At the conclusion of the hazard impact statement review, the SME cross validated the linkages between the impact and the protective action statements. In some cases, additional protective actions were warranted due to changes in the hazard impact statement; those new contents were added. In cases where the protective action list included actions that were considered vague, imprecise, or unwarranted relative to the hazard impact statement, they were eliminated.
Some hazards and protective actions in the Warning Lexicon were not confirmed directly with the interviewed SMEs because of their concerns about how the information might be used by emergency management alerting authorities in the future. For example, a defined set of WEAs is issued for hazards over which the National Weather Service has jurisdiction, including dust storm, wind, flash flood, hurricane/typhoon, severe thunderstorm, snow squall, storm surge, tornado, and tsunami (NWS, n.d.). Some SMEs expressed concerns about the potential for overalerting, should an alerting authority (AA) choose to replicate an alert previously issued. In other cases, SMEs felt a particular hazard was not appropriate to communicate via WEA. However, there is no consistent national policy dictating “appropriate” WEA use, nor has there been a definitive, large-scale study to determine the extent to which alerting via WEA is viewed as problematic by the public. The hazards for which concern was expressed include blizzard, ice storm, heavy snow, extreme cold, winter storm, snow squall, air quality, and infectious disease. In these cases, the protective action and impact statements were validated against pre-existing resources (see Appendix S2).
After the protective actions were verified by SMEs for each hazard, the final list of protective actions was cross-checked against the materials located on the FEMA (2021b) Protective Action Research website. This site presents research-based protective action recommendations for 15 hazards and served as an additional confirmation that the hazards, impact statements, and recommended actions aligned with Federal emergency management guidance documents.
At the conclusion of Phase 2, message contents had been verified by 15 hazard subject matter experts for 48 hazards resulting in 93 hazard impact statements, reflecting event timing, and 112 protective action statements. (See Supplemental Materials, S1, for the complete list.)

Phase 3: Lexicon Message Style Validation with Communication Experts

In Phase 3, we consulted communication and warning message design experts who provided feedback on the language used in each statement via focus group and written communication. Phase 3 experts included a medical anthropologist with expertise in infectious disease and public health, and technological and radiological hazards; a public health expert with expertise on public health, geological, and atmospheric hazards; and three social scientists with expertise in wildfire and a range of hydrological, geological, and atmospheric hazards. SMEs were asked to review hazard impact and protective action statements with a focus on message simplicity, eliminating the use of “jargon” (i.e., words and phrases used to convey technical information among scientists, which are not well understood by the public; Shulman and Bullock 2020), and identifying unintended consequences from specific message phrases (e.g., message content that may stigmatize certain populations).
An initial group meeting was held via online teleconference (i.e., Zoom). First, the goals and objectives of the Warning Lexicon project were shared by the facilitator. This introduction was followed by instructions on how the group would contribute to the review process, beginning with a review of protective action statements that applied to most of the hazards. The facilitator shared their screen and manually entered recommended changes into each statement. A key example from this review concerns a protective action recommendation associated with infectious disease. The original action statement included, “stay away from sick people.” During expert review, one participant noted that the language used in this statement could be stigmatizing for some populations, leading to unintended consequences that would later require additional communication interventions. They also noted the difficulty associated with discerning whether someone is indeed sick when out in public. Thus, this protective action statement was then eliminated.
Group concurrence was obtained for every protective action statement included in the Warning Lexicon. A portion of the hazard impact statements was also jointly reviewed. SMEs independently inspected and edited any impact statements that were not discussed in the group call. Revised hazard impact statements were shared among the SMEs before the research team incorporated them into the final Warning Lexicon.

Results

In summary, we completed three phases of research, beginning with a database of templated messages that were deconstructed, then verified with hazard SMEs, cross-checked with existing documentation, and validated with risk communication experts, resulting in a data set of message contents that has been triangulated and cross-checked. The Warning Lexicon includes 48 hazard types, 93 impact statements, and 112 protective actions in English. Notably, hazards and contents in the Warning Lexicon are likely to continue to expand due to emergent hazards and public safety events, as discussed below.
To illustrate Warning Lexicon contents for one hazard, we provide an example of the impact and protective action contents for blizzard (see Table 4). Here we present an impact statement that is suitable for both phases of a blizzard (pre-event and during event). We also provide the protective action statements that have been vetted for both time phases [pre-event (n=4); during or post-event (n=5)]. Specific words are presented in ALL CAPS to help the reader quickly identify the key message contents (i.e., PREPARE, AVOID, CHECK). For some hazards, included in the Supplemental Materials, the use of ALL CAPS also helps to convey a sense of urgency (i.e., LEAVE NOW). The Warning Lexicon presents some content in [brackets], allowing the message writer to select between contents (e.g., [do/do not]). Contents presented in italics signal to the message writer where to edit the location, time, and additional information about the organization (i.e., source such as a URL or phone number).
Table 4. Warning Lexicon content example
Warning Lexicon content areas, with pre-event and during or post-event content:
HAZARD (specific name of the hazard that is a threat to the population): BLIZZARD
HAZARD IMPACTS (consequences of the hazard on populations and infrastructure; applies pre-event and during or post-event): BLIZZARD conditions in [location] [may/will] bring blowing snow and high winds, resulting in extremely limited visibility and unsafe road conditions for travelers and drivers. Potential for power outages to occur.
PROTECTIVE ACTION GUIDANCE (recommended actions for message receivers):
Pre-event content:
PREPARE by having nonperishable food and water on hand by [time]
PREPARE for power outages by fully charging your cell phone and backup batteries/external power packs
PREPARE for power outages by testing batteries in radios and flashlights by [time]
CHECK [source] for [updates/information] at [info]
During or post-event content:
AVOID travel [until further notice/until [time]]
IF DRIVING, and you become stranded, hang a distress flag from the antenna or window and stay in your vehicle. Use seat covers and floor mats for insulation or huddle with other passengers
IF DRIVING, leave extra space between cars
IF DRIVING, make sure headlights are on. [Do/do not] use high beams
IF DRIVING, reduce speed
CHECK [source] for [updates/information] at [info]
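To illustrate how a message writer might resolve the bracketed placeholders at issuance time, the sketch below substitutes writer-supplied values into one of the CHECK statements from Table 4. The helper function and the example values (agency name and URL) are hypothetical; the Lexicon itself specifies only the bracketed placeholders.

```python
# Minimal sketch of filling bracketed Lexicon placeholders at issuance time.
TEMPLATE = "CHECK [source] for [updates/information] at [info]"

def fill(template: str, values: dict) -> str:
    """Replace each [placeholder] with the writer-supplied value."""
    for placeholder, value in values.items():
        template = template.replace(f"[{placeholder}]", value)
    return template

# Hypothetical example values; real values would come from the issuing agency.
message = fill(TEMPLATE, {
    "source": "Albany County Emergency Management",
    "updates/information": "updates",
    "info": "example.gov/alerts",
})
print(message)  # CHECK Albany County Emergency Management for updates at example.gov/alerts
```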
The complete results from this research are integrated into a single document intended to support risk communicators in writing imminent threat warning messages. Results can be found in the Supplemental Materials S1, which provides the complete list of hazards, their associated impact statements, and protective action statements.
We now turn to a discussion centered on strategies for practitioners and other risk communicators to implement the Warning Lexicon. We start by describing best practices for warning design and then offer a series of prompts that can be utilized by message writers to develop templates as part of their preparedness efforts or at the time of an imminent threat event.

Discussion

Applying the Warning Lexicon

To utilize the Warning Lexicon, risk communicators are encouraged to draw from research guidance on designing an effective warning message (see Doermann et al. 2021; Sutton and Kuligowski 2019). As described previously, messages that include (1) the hazard and its impacts, (2) the location of the hazard, (3) protective action guidance, (4) the message source, and (5) the time to act or time the message expires have been found to increase the likelihood that individuals will take protective action (Mileti and Sorensen 1990; Sutton and Kuligowski 2019; Wood 2017). Placing an official, recognizable, and familiar source at the beginning of the message increases message receivers’ understanding and belief (Bean et al. 2015). Using common words that are unambiguous and require no interpretation to instruct message receivers on the actions they should take will increase understanding and perceptions of self-efficacy (Sutton et al. 2021; Wood et al. 2015). Using language that names the potential impacts or consequences of the hazard and accurately depicts the severity and certainty of the event increases perceptions of risk and personalization (Potter et al. 2018). Identifying the hazard location by towns, cities, counties, major landmarks, and roads that are known to most individuals will increase threat understanding and personalization (Cao et al. 2016). And clearly identifying the timing of the hazardous event, such as when it starts and stops, by when actions should be taken, or when updates will be provided, can reduce delayed action (Bean et al. 2014).
Message style conventions (e.g., the use of ALL CAPS and imperative sentences) can increase attention and perceptions of urgency (Sutton and Kuligowski 2019). Urgency can be further communicated with the inclusion of timing information, such as the words NOW or IMMEDIATELY. However, the use of acronyms, abbreviations, and technical jargon can decrease understanding and delay action (Bean et al. 2016). Furthermore, researchers have found that the order of contents affects message understanding, belief, and decision-making (Wood et al. 2015). For messages between 90 and 280 characters long, contents should be presented as follows: source, hazard, location, time, guidance. For messages reaching 360 characters, the following order is conventional: source, hazard, location, guidance, time (Bean et al. 2015). Frequently, messages will have some integration of content across categories. For example, location may be integrated into the description of the hazard and guidance information. Additionally, time is frequently integrated into guidance statements as part of the instruction, such as by the use of the words “now,” “immediately,” or “until further notice” (see example messages in Sutton and Kuligowski 2019).
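The ordering rules described above can be expressed as a small assembly routine that selects the recommended content order for a given character budget and rejects drafts that exceed it. This is a hedged sketch: the two orderings come from the text, but the function and the example contents are illustrative assumptions rather than an official tool.

```python
# Sketch of assembling message contents in the orders recommended in the text:
# source, hazard, location, time, guidance for 90-280 character messages;
# source, hazard, location, guidance, time for 360-character messages.
ORDER_SHORT = ["source", "hazard", "location", "time", "guidance"]
ORDER_360 = ["source", "hazard", "location", "guidance", "time"]

def assemble(contents: dict, limit: int) -> str:
    """Join available contents in the recommended order and enforce the character limit."""
    order = ORDER_360 if limit >= 360 else ORDER_SHORT
    message = " ".join(contents[key] for key in order if contents.get(key))
    if len(message) > limit:
        raise ValueError(f"Message is {len(message)} characters; limit is {limit}.")
    return message

# Hypothetical example contents; real statements would come from the Warning Lexicon.
draft = assemble(
    {
        "source": "NWS:",
        "hazard": "Flash Flood Warning.",
        "location": "In Albany County.",
        "guidance": "Move to higher ground now.",
        "time": "Until 5 PM.",
    },
    limit=360,
)
print(len(draft), draft)
```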
Preparing warning message contents prior to an imminent threat event will reduce the time and effort required to write a complete message under conditions of heightened stress and uncertainty. As part of preparedness efforts, risk communicators can (1) identify a list of threats and hazards that are likely to occur within their community; (2) develop a list of trusted and familiar organizations or agencies to be included as the message source; (3) prepare a list of well-known landmarks, town/city/county names, and major roads or intersections to describe the location of a threat; and (4) establish a webpage or social media URL on which updates can be posted (see FEMA IPAWS 2022a). Message writers can also identify, download, and use existing software, such as hemingwayapp.com or Grammarly.com, to practice writing character-constrained warnings, or build their own character counters using Excel or Google Sheets.
Additionally, the Warning Lexicon can be used to develop an imminent threat message at the time of a hazardous event. However, message templates for known hazards written prior to an event will require minimal editing to include the specific impacts, location of the hazard, and time to act or to receive an update (see Supplemental Materials, S1, for example message templates for each hazard). Because WEA messages are delivered to all WEA-enabled mobile devices located within a geographically defined area, the language included in the Warning Lexicon is intended to be applicable to the broader, English-speaking public. Unlike targeted messages that can be delivered to individuals who opt in to specific messaging channels or groups, WEA messages are not tailored to the characteristics of individual populations. Therefore, the Warning Lexicon is useful for plain language messaging, using the empirically supported standardized format, to reach all populations at risk.
The following prompts adapted from Doermann et al. (2021) should inform the design of any warning message, regardless of the time at which it is written (pre, during, or post-initiation), and regardless of the short-messaging channel for which it is written (WEA, SMS, social media, etc.). Content from the Warning Lexicon can be used to address prompts about the hazard, hazard impacts, and protective actions; risk communicators should be prepared to populate contents about the source, location, time, and additional information.
1. SOURCE: Which agency or organization should be listed as the source of this message?
2. HAZARD: What type of hazardous event is happening or about to happen?
3. HAZARD IMPACTS: What are the consequences of this threat that the public should be aware of or how will it affect them personally?
4. HAZARD LOCATION: Where is the hazardous event occurring? If the hazard is moving, in what direction is it moving or likely to move?
5. RESPONSE LOCATION: Is there a specific location that should be evacuated or avoided by people? If so, where should they go?
6. PROTECTIVE ACTION GUIDANCE: What are the specific actions people receiving the message should take?
7. TIME: When should people take protective action? When will the message expire? When should people expect to receive an update?
8. ADDITIONAL GUIDANCE: Where should people go for updates?
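As a worked illustration, these prompts can be treated as a pre-send checklist whose unanswered items flag what the writer still needs before drafting the message. The field names below mirror the prompts; the example answers are hypothetical.

```python
# Sketch of treating the eight prompts as a checklist before drafting a message.
PROMPTS = [
    "source", "hazard", "hazard_impacts", "hazard_location",
    "response_location", "protective_action_guidance", "time", "additional_guidance",
]

# Hypothetical partial answers supplied by a message writer.
answers = {
    "source": "Albany County Emergency Management",
    "hazard": "Flash flood",
    "protective_action_guidance": "Move to higher ground now",
}

missing = [prompt for prompt in PROMPTS if not answers.get(prompt)]
print("Still needed before drafting:", ", ".join(missing))
```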

Limitations

The Warning Lexicon currently includes 48 hazards, which vary by frequency of occurrence. Hazards that are seasonal, such as tornado, hurricane, or wildfire, are routinely communicated to populations at risk, suggesting greater content familiarity among AAs. Hazards that occur less frequently, such as earthquake or tsunami, may result in greater reliance upon the Warning Lexicon by AAs. In both cases, AAs will benefit from reviewing the Warning Lexicon during the preparedness and planning phases of the hazard lifecycle and from developing messaging templates early. There is also a high likelihood that new or emerging threats have not been captured and will require risk communicators to adapt these contents to additional contexts, including complex events such as tornado and flash flood hazards that co-occur. The use of WEA for messaging is also likely to continue to adapt to new uses, as seen in 2019 when FEMA expanded its recommendations to include public safety messages, an expansion unrelated to, but coinciding with, the start of the COVID-19 pandemic. Notably, WEA was used to update people on vaccine clinics (NBC Los Angeles 2021), curfews (Margolis 2020), and the need for medical personnel (Quad Cities 2021), to name only a few novel uses. Researchers have not yet conducted empirical studies to support message design recommendations for these nonimminent threats sent via WEA.
Furthermore, the technological affordances of WEA are also likely to change in the future (FEMA IPAWS 2022b). Specifically, WEA 1.0 supported 90-character messages; WEA 2.0 supported 360 characters, Spanish language messages, and URLs; and WEA 3.0, representing more than 60% of active smartphones, supports narrow geotargeting through device-based geofencing (Bender 2022). Next, WEA 4.0 is likely to feature new capabilities such as those proposed by the Federal Communications Commission (FCC) to “identify the software or functional requirements necessary to allow WEA software to pull capabilities from other mobile device applications and firmware to enhance the presentation of WEA alert messages” (FCC 2022). These capabilities could also affect the ability to increase the number of languages in which WEAs are delivered. However, the design of messages in languages beyond English will remain constrained due to human resources; most emergency management agencies do not have in-house translators. At this time, the Warning Lexicon is limited to text only, in English, and does not include evidence-based guidance on icons, maps, or other graphics.
Similarly, because the Warning Lexicon is focused primarily on hazards requiring a protective action response, it does not include event types such as missing persons notifications (e.g., AMBER Alerts). Although notifications for these types of events may help to raise public awareness and enlist the public's help (US Department of Justice 2019), they do not require message recipients to act due to an immediate safety threat (Bean and Hasinoff 2022). Furthermore, some hazards, such as water contamination and nuclear attack, have messaging whose language is prescribed by legal statutes and cannot be adapted. They are, therefore, not included in the Warning Lexicon.
Additionally, the Warning Lexicon does not include any policy recommendations about when a warning should be issued or under what conditions. Notably, there is no Federal government policy dictating the appropriate use of the WEA system by authorized Alerting Originators. The lack of consistent federal policy is a limitation that applies not only to research, but to practice as well. We also acknowledge that the Warning Lexicon is focused only on English language contents, suggesting significant gaps in communicating effectively to a sizeable portion of the US population (Trujillo-Falcón et al. 2021).
Finally, although the Warning Lexicon was designed to include plain language that is accessible to message receivers, impact and protective action statements were not rated for their readability or reading ease. We did, however, explore strategies to assess the reading level of each statement. One readability assessment found that more than half of Americans between the ages of 16 and 74 read below the equivalent of a sixth-grade level (Schmidt 2022). Because of this, some agencies within the US Government, such as the Department of Defense, Internal Revenue Service, and Social Security Administration, make use of the Flesch-Kincaid Reading Ease test, with a goal of meeting a sixth to eighth grade reading level (US Social Security Administration 2015; Wylie Communications, n.d.). Readability formulas are designed to estimate the difficulty of quantifiable textual features, such as the frequency of familiar vocabulary and the syntax of the text, including the number of words and their syllables, the number of sentences, and even the number of characters. However, readability formulas were designed to examine extended text of at least 200–600 words in paragraph form, not brief statements such as those that are included in warning messages (Readability Formulas, n.d.). Scholars who have applied readability formulas to questions used on tests and exams, which are frequently similar in length to short messages used for warnings, argue that “given the brevity and density of information conveyed in [test items], readability formula data on them are likely to be unreliable” (Oakland and Lane 2004, p. 245). Although they are easy to access and use and demonstrate compliance with government directives such as the Plain Writing Act of 2010 (Office of the Federal Register, National Archives and Records Administration 2010), readability calculators may not provide valid assurance about ease of reading for short messages. Thus, assessing the readability of the Warning Lexicon cannot be completed using existing formulas and calculators, but will instead rely on future testing with populations that are likely to encounter content presented as short messages.
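To make this point concrete, the following sketch applies the grade-level variant of the Flesch-Kincaid formulas to two warning-length statements with hand-counted syllables. The formula constants are the standard published ones; the statements are illustrative, not Lexicon content, and the wildly different scores illustrate why such formulas are unreliable for very short texts.

```python
def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level computed from hand-supplied counts."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Hand-counted, hypothetical warning-length statements:
# "Move to higher ground now."  -> 5 words, 1 sentence, 6 syllables
# "Evacuate immediately."       -> 2 words, 1 sentence, 9 syllables
print(round(flesch_kincaid_grade(5, 1, 6), 1))  # about 0.5, i.e., below first grade
print(round(flesch_kincaid_grade(2, 1, 9), 1))  # about 38.3, an implausible grade level
```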

Future Research

The contents in the Warning Lexicon were verified and validated using a multiphase strategy, drawing from state-of-the-art knowledge on warning messages. At the time of this writing, the research team has conducted one public message testing experiment to determine the effectiveness of the Warning Lexicon contents for five different hazards on the WRM outcomes described above. Results demonstrated that the inclusion of contents provided in the Warning Lexicon increased message understanding, belief, and self-efficacy in comparison with similar, complete messages. However, additional message testing and research needs remain. First, contents drawn from the Warning Lexicon can be used in additional message-testing experiments for emerging hazards to determine their effectiveness, in comparison with those using alternative contents. Second, the Warning Lexicon, as a content database, can be tested with message writers to determine how it can be incorporated into their own preparedness and response practices as well as how it affects their ability to write effective messages. Third, the contents of the Warning Lexicon can be expanded to include new or emergent threats, to account for complex hazards, or to add public safety contents such as alerts for missing persons. Fourth, the Warning Lexicon can be translated into additional languages and tested with speakers of those languages. And fifth, the Warning Lexicon can be adapted to include other lexical features, such as icons to depict hazards and protective actions and maps to depict hazard location.
In the future, the Warning Lexicon will serve as the data backbone for software that will enable alert providers to systematically construct warning messages using drop-down menus and other features. As suggested by Doermann et al. (2021) for wildfire messaging, the Warning Lexicon will provide the foundation for an all-hazards message design dashboard.

Conclusions

Emergency managers and other risk communicators are frequently tasked with writing effective alert and warning messages. Many do so with little training and a lack of knowledge about the messaging practices that can increase the likelihood that message receivers will act upon warning information. The Warning Lexicon is a tool that provides alert and warning originators the contents needed to quickly build effective warning messages. The Warning Lexicon provides impact and protective action statements that have been verified and validated for 48 hazards and can be expanded to include new content specific to hazards and threats that emerge in the future. The application of the Warning Lexicon, used in conjunction with empirically supported alert and warning practices, can lead to standards for messaging via WEAs and other short message alerting systems.

Supplemental Materials

File (supplemental_materials_nhrefo.nheng-1900_sutton.pdf)

Data Availability Statement

Anonymized data, models, or code that support the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

This material is based upon work supported by the Federal Emergency Management Agency Integrated Public Alert and Warning System (FEMA IPAWS) (70FA5021C00000016) awarded to Jeannette Sutton. We extend thanks to the anonymous subject matter experts who shared their time and knowledge with us, and we acknowledge the unmatched contributions made to this field by Dr. Dennis Mileti and Dr. John Sorensen, upon whose shoulders this work stands. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of FEMA.

References

Agnew, D. 2021. “As Texans endured days in the dark, the state failed to deliver vital emergency information.” Accessed September 30, 2023. https://www.texastribune.org/2021/02/19/texas-emergency-communication-power-outages/.
AT&T. 2022. “WEA capable phones.” Accessed March 15, 2023. https://www.att.com/idpassets/images/support/pdf/WEA-capablePhones.pdf.
Bean, H. 2019. Mobile technology and the transformation of public alert and warning. Santa Barbara, CA: ABC-CLIO.
Bean, H., and A. A. Hasinoff. 2022. “The social functions of idle alerts.” In Social media and crisis communication, edited by Y. Jin and L. Austin, 360–370. New York: Routledge.
Bean, H., B. F. Liu, S. Madden, D. Mileti, J. Sutton, and M. Wood. 2014. Comprehensive testing of imminent threat public messages for mobile devices. College Park, MD: National Consortium for the Study of Terrorism and Responses to Terrorism.
Bean, H., B. F. Liu, S. Madden, J. Sutton, M. M. Wood, and D. S. Mileti. 2016. “Disaster warnings in your pocket: How audiences interpret mobile alerts for an unfamiliar hazard.” J. Contingencies Crisis Manage. 24 (3): 136–147. https://doi.org/10.1111/1468-5973.12108.
Bean, H., J. Sutton, B. F. Liu, S. Madden, M. M. Wood, and D. S. Mileti. 2015. “The study of mobile public warning messages: A research review and agenda.” Rev. Commun. 15 (1): 60–80. https://doi.org/10.1080/15358593.2015.1014402.
Bender, A. 2022. “Wireless emergency alerts; amendment of Part 11 of the commission’s rules regarding the emergency alert system, PS Docket nos. 15-91, 15-94.” Accessed January 23, 2023. https://www.fcc.gov/ecfs/document/10728245959959/1.
Braun, V., and V. Clarke. 2006. “Using thematic analysis in psychology.” Qual. Res. Psychol. 3 (2): 77–101. https://doi.org/10.1191/1478088706qp063oa.
Buyuk, H. F. 2023. “Turkey blocks Twitter after public criticism of quake response.” BalkanInsight. Accessed March 15, 2023. https://balkaninsight.com/2023/02/08/turkey-blocks-twitter-after-public-criticism-of-quake-response/.
Cao, Y., B. J. Boruff, and I. M. McNeill. 2016. “Is a picture worth a thousand words? Evaluating the effectiveness of maps for delivering wildfire warning information.” Int. J. Disaster Risk Reduct. 19 (Oct): 179–196. https://doi.org/10.1016/j.ijdrr.2016.08.012.
CISA (Cybersecurity and Infrastructure Security Agency). 2019. “National emergency communications plan.” Accessed January 23, 2023. https://www.cisa.gov/sites/default/files/publications/19_0924_CISA_ECD-NECP-2019_1_0.pdf.
Creswell, J. W., and J. D. Creswell. 2017. Research design: Qualitative, quantitative, and mixed methods approaches. Thousand Oaks, CA: SAGE.
DHS S&T Directorate (Department of Homeland Security Science and Technology Directorate). 2018. “Report on alerting tactics.” Accessed March 15, 2023. https://www.dhs.gov/sites/default/files/publications/1051_IAS_Report-on-Alerting-Tactics_180807-508.pdf.
Doermann, J. L., E. D. Kuligowski, and J. Milke. 2021. “From social science research to engineering practice: Development of a short message creation tool for wildfire emergencies.” Fire Technol. 57 (Mar): 815–837. https://doi.org/10.1007/s10694-020-01008-7.
FCC (Federal Communications Commission). 2022. “Communications security, reliability, and interoperability council VIII working group tasks.” Accessed March 15, 2023. https://www.fcc.gov/file/23814/download.
FEMA. n.d. “Ready.” Accessed September 29, 2023. https://www.ready.gov.
FEMA. 2013. “Improvised nuclear device response and recovery: Communicating in the immediate aftermath.” Accessed March 15, 2023. https://www.fema.gov/sites/default/files/documents/fema_improvised-nuclear-device_communicating-aftermath_june-2013.pdf.
FEMA. 2021a. “Improving public messaging for evacuation and shelter in place. Findings and recommendations for emergency managers from peer-reviewed research.” Accessed July 5, 2023. https://www.fema.gov/sites/default/files/documents/fema_improving-public-messaging-for-evacuation-and-shelter-in-place_literature-review-report.pdf.
FEMA. 2021b. “Protective actions research.” Accessed March 15, 2023. https://community.fema.gov/ProtectiveActions/s/.
FEMA IPAWS (Integrated Public Alert and Warning System). 2021. “Tip 38: Imminent threat vs. public safety.” Accessed March 15, 2023. https://www.fema.gov/sites/default/files/documents/fema_ipaws-tip-38-it-vs-ps.pdf.
FEMA IPAWS (Integrated Public Alert and Warning System). 2022a. “IPAWS program planning toolkit.” Accessed March 15, 2023. https://www.fema.gov/emergency-managers/practitioners/integrated-public-alert-warning-system/public-safety-officials/toolkit.
FEMA IPAWS (Integrated Public Alert and Warning System). 2022b. “Wireless emergency alert capabilities by cellular handset and wireless provider.” Accessed March 15, 2023. https://www.fema.gov/sites/default/files/documents/fema_ipaws-guidance-wea-versions-provider-links.pdf.
FEMA IPAWS (Integrated Public Alert and Warning System). 2023. “IPAWS best practices.” Accessed July 5, 2023. https://www.fema.gov/sites/default/files/documents/fema_ipaws-best-practices-guide.pdf.
Fischer, L., D. Huntsman, G. Orton, and J. Sutton. 2023. “You have to send the right message: Examining the influence of protective action guidance on message perception outcomes across prior hazard warning experience to three hazards.” Weather Clim. Soc. 15 (2): 307–326. https://doi.org/10.1175/WCAS-D-22-0092.1.
Frisby, B. N., S. R. Veil, and T. L. Sellnow. 2014. “Instructional messages during health-related crises: Essential content for self-protection.” Health Commun. 29 (4): 347–354. https://doi.org/10.1080/10410236.2012.755604.
Gao, S., and Y. Wang. 2021. “Assessing the impact of geo-targeted warning messages on residents’ evacuation decisions before a hurricane using agent-based modeling.” Nat. Hazards 107 (1): 123–146. https://doi.org/10.1007/s11069-021-04576-1.
Goldman, J. 2011. Words of intelligence: An intelligence professional’s lexicon for domestic and foreign threats. Lanham, MD: Scarecrow Press.
Gray, L., G. Wong-Wylie, G. Rempel, and K. Cook. 2020. “Expanding qualitative research interviewing strategies: Zoom video communications.” Qual. Rep. 25 (5): 1292–1301. https://doi.org/10.46743/2160-3715/2020.4212.
Harrison, J., C. McCoy, K. Bunting-Howarth, H. Sorensen, K. Williams, and C. Ellis. 2014. Evaluation of the national weather service impact-based warning tool. Washington, DC: National Oceanic and Atmospheric Administration.
Harrison, S. E., S. H. Potter, R. Prasanna, E. E. H. Doyle, and D. Johnston. 2021. “‘Where oh where is the data?’: Identifying data sources for hydrometeorological impact forecasts and warnings in Aotearoa New Zealand.” Int. J. Disaster Risk Reduct. 66 (Dec): 102619. https://doi.org/10.1016/j.ijdrr.2021.102619.
Hodas, N. O., G. Ver Steeg, J. Harrison, S. Chikkagoudar, E. Bell, and C. D. Corley. 2015. “Disentangling the lexicons of disaster response in Twitter.” In Proc., 24th Int. Conf. on World Wide Web, 1201–1204. New York: Association for Computing Machinery.
Kox, T., and A. H. Thieken. 2017. “To act or not to act? Factors influencing the general public’s decision about whether to take protective action against severe weather.” Weather Clim. Soc. 9 (2): 299–315. https://doi.org/10.1175/WCAS-D-15-0078.1.
Kuligowski, E. D., N. A. Waugh, J. Sutton, and T. J. Cova. 2023. “Ember alerts: Assessing wireless emergency alert messages in wildfires using the warning response model.” Nat. Hazards Rev. 24 (2): 04023009. https://doi.org/10.1061/NHREFO.NHENG-1724.
Linardos, V., M. Drakaki, P. Tzionas, and Y. Karnavas. 2022. “Machine learning in disaster management: Recent developments in methods and applications.” Mach. Learn. Knowl. Extr. 4 (2): 446–473. https://doi.org/10.3390/make4020020.
Lindell, M. K., and R. W. Perry. 2012. “The protective action decision model: Theoretical modifications and additional evidence.” Risk Anal. 32 (4): 616–632. https://doi.org/10.1111/j.1539-6924.2011.01647.x.
Margolis, J. 2020. “Confused about curfew? One of those messages was meant for Glendale.” LAist. Accessed October 25, 2023. https://laist.com/news/curfew-confusion-los-angeles-glendale-mistake-emergency-wireless-alerts.
McCrindle, M., and E. Wolfinger. 2011. Word up: A lexicon and guide to communication in the 21st century. Braddon, ACT, Australia: Halsted Press.
McGregor, J., J. P. Elm, E. T. Stark, J. Lavan, R. Creel, C. Alberts, C. Woody, R. J. Ellison, and T. Marshall-Keim. 2014. Best practices in wireless emergency alerts. Pittsburgh: Software Engineering Institute.
McKinley, C. 2022. “Marshall fire exposes holes in emergency notification systems.” The Gazette. Accessed July 5, 2023. https://gazette.com/premium/marshall-fire-exposes-holes-in-emergency-notification-systems/article_20114fdc-7594-11ec-ab04-3b0890a75587.html.
Merriam-Webster. 2023. “Lexicon.” Accessed March 15, 2023. https://www.merriam-webster.com/dictionary/lexicon.
Mileti, D. 2018. PrepTalks: Modernizing public warning messaging. Washington, DC: FEMA.
Mileti, D., M. Wood, H. Bean, B. F. Liu, J. Sutton, and S. Madden. 2012. Workshop briefing white paper: Comprehensive testing of imminent threat public messages for mobile devices. College Park, MD: National Consortium for the Study of Terrorism and Responses to Terrorism.
Mileti, D. S., and J. H. Sorensen. 1990. Communication of emergency public warnings: A social science perspective and state-of-the-art assessment. Oak Ridge, TN: Oak Ridge National Laboratory.
Minson, S. E., E. S. Cochran, J. K. Saunders, S. K. McBride, S. Wu, A. S. Baltay, and K. R. Milner. 2022. “What to expect when you are expecting earthquake early warning.” Geophys. J. Int. 231 (2): 1386–1403. https://doi.org/10.1093/gji/ggac246.
Munns, W. R., A. W. Rea, M. J. Mazzotta, L. A. Wainger, and K. Saterson. 2015. “Toward a standard lexicon for ecosystem services.” Integr. Environ. Assess. Manage. 11 (4): 666–673. https://doi.org/10.1002/ieam.1631.
Nagourney, A., D. E. Sanger, and J. Barr. 2018. “Hawaii panics after alert about incoming missile is sent in error.” NY Times. Accessed September 30, 2023. https://www.nytimes.com/2018/01/13/us/hawaii-missile.html.
NBC Los Angeles. 2021. “LA citywide alert issued to remind people to get the COVID-19 vaccine.” Accessed March 15, 2023. https://www.nbclosangeles.com/news/local/la-to-send-wireless-emergency-alert-to-cell-phones-to-urge-vaccinations/2587155/.
NRC (National Research Council). 2013. Public response to alerts and warnings using social media: Report of a workshop on current knowledge and research gaps. Washington, DC: National Academies Press.
NWS (National Weather Service). n.d. “Wireless emergency alerts (360 characters).” Accessed July 5, 2023. https://www.weather.gov/wrn/wea360.
Oakland, T., and H. B. Lane. 2004. “Language, reading, and readability formulas: Implications for developing and adapting tests.” Int. J. Test. 4 (3): 239–252. https://doi.org/10.1207/s15327574ijt0403_3.
Office of the Federal Register, National Archives and Records Administration. 2010. “Public law 111–274—Plain writing act of 2010.” US Government Printing Office. Accessed October 13, 2010. https://www.govinfo.gov/content/pkg/PLAW-111publ274/pdf/PLAW-111publ274.pdf.
Olteanu, A., C. Castillo, F. Diaz, and S. Vieweg. 2014. “CrisisLex: A lexicon for collecting and filtering microblogged communications in crises.” In Proc., Int. AAAI Conf. on Web and Social Media. Cambridge, MA: Association for the Advancement of Artificial Intelligence Press.
Plotnick, L., and S. R. Hiltz. 2018. “Software innovations to support the use of social media by emergency managers.” Int. J. Hum.-Comput. Interact. 34 (4): 367–381. https://doi.org/10.1080/10447318.2018.1427825.
Potter, S. H., P. V. Kreft, P. Milojev, C. Noble, B. Montz, A. Dhellemmes, R. J. Woods, and S. Gauden-Ing. 2018. “The influence of impact-based severe weather warnings on risk perceptions and intended protective actions.” Int. J. Disaster Risk Reduct. 30 (Sep): 34–43. https://doi.org/10.1016/j.ijdrr.2018.03.031.
Quad Cities. 2021. “Illinois sends emergency alert asking for healthcare workers to join COVID-19 fight.” Accessed March 15, 2023. https://www.ourquadcities.com/news/local-news/illinois-sends-emergency-alert-asking-for-healthcare-workers-to-join-covid-19-fight/.
Readability Formulas. n.d. “What are readability formulas and how do they help me?” Accessed March 15, 2023. https://www.readabilityformulas.com/.
Salafsky, N., et al. 2008. “A standard lexicon for biodiversity conservation: Unified classifications of threats and actions.” Conserv. Biol. 22 (4): 897–911. https://doi.org/10.1111/j.1523-1739.2008.00937.x.
Schmidt, E. 2022. “Reading the numbers: 130 million American adults have low literacy skills, but funding differs drastically by state.” Accessed March 15, 2023. https://www.apmresearchlab.org/10x-adult-literacy.
Schneider, D. 2022. “Green Bay residents encounter confusion after ‘emergency alert’ was sent out cellphones across city.” Green Bay Press Gazette. Accessed July 5, 2023. https://www.greenbaypressgazette.com/story/news/local/2022/12/01/green-bay-residents-encounter-confusion-after-alert-was-sent/69690451007/.
Sharon, A. J., and A. Baram-Tsabari. 2014. “Measuring mumbo jumbo: A preliminary quantification of the use of jargon in science communication.” Public Understanding Sci. 23 (5): 528–546. https://doi.org/10.1177/0963662512469916.
Shulman, H. C., and O. M. Bullock. 2020. “Don’t dumb it down: The effects of jargon in COVID-19 crisis communication.” PLoS One 15 (10): e0239524. https://doi.org/10.1371/journal.pone.0239524.
Sivle, A. D., and T. Aamodt. 2019. “A dialogue-based weather forecast: Adapting language to end-users to improve communication.” Weather 74 (12): 436–441. https://doi.org/10.1002/wea.3439.
Statcounter GlobalStats. 2023. “Mobile vendor market share in United States of America.” Accessed March 15, 2023. https://gs.statcounter.com/vendor-market-share/mobile/united-states-of-america.
Sutton, J., L. Fischer, L. E. James, and S. E. Sheff. 2020. “Earthquake early warning message testing: Visual attention, behavioral responses, and message perceptions.” Int. J. Disaster Risk Reduct. 49 (Oct): 101664. https://doi.org/10.1016/j.ijdrr.2020.101664.
Sutton, J., L. Fischer, and M. M. Wood. 2021. “Tornado warning guidance and graphics: Implications of the inclusion of protective action information on perceptions and efficacy.” Weather Clim. Soc. 13 (4): 1003–1014. https://doi.org/10.1175/WCAS-D-21-0097.1.
Sutton, J., and L. M. Fischer. 2021. “Understanding visual risk communication messages: An analysis of visual attention allocation and think-aloud responses to tornado graphics.” Weather Clim. Soc. 13 (1): 173–188. https://doi.org/10.1175/WCAS-D-20-0042.1.
Sutton, J., and E. D. Kuligowski. 2019. “Alerts and warnings on short messaging channels: Guidance from an expert panel process.” Nat. Hazards Rev. 20 (2): 04019002. https://doi.org/10.1061/(ASCE)NH.1527-6996.0000324.
Sutton, J., S. C. Vos, M. M. Wood, and M. Turner. 2018. “Designing effective tsunami messages: Examining the role of short messages and fear in warning response.” Weather Clim. Soc. 10 (1): 75–87. https://doi.org/10.1175/WCAS-D-17-0032.1.
Sutton, J., M. Wood, D. Huntsman, N. Waugh, and S. Crouch. 2023. “Communicating hazard location in earthquake early warnings.” Nat. Hazards Rev. 24 (4). https://doi.org/10.1061/NHREFO.NHENG-1723.
Temnikova, I., A. Varga, and D. Biyikli. 2014. “Building a crisis management term resource for social media: The case of floods and protests.” In Proc., 9th Int. Conf. on Language Resources and Evaluation (LREC’14), 740–747. Reykjavik, Iceland: European Language Resources Association (ELRA).
Trujillo-Falcón, J. E., O. Bermúdez, K. Negrón-Hernández, J. Lipski, E. Leitman, and K. Berry. 2021. “Hazardous weather communication en Español: Challenges, current resources, and future practices.” Bull. Am. Meteorol. Soc. 102 (4): E765–E773. https://doi.org/10.1175/BAMS-D-20-0249.1.
Tutten, J. 2023. “Officials apologize after ‘emergency alert’ test sent in ‘error’ to cell phones at 4:45 a.m.” WFTV9. Accessed July 5, 2023. https://www.wftv.com/news/local/floridians-outraged-after-alarm-during-emergency-alert-test-sent-445-am/65R2O6EDT5HTRCDST3NDRYHE4Q/.
USACE. 2019. “Public Warning for dam and levee failure.” Accessed July 5, 2023. https://www.publications.usace.army.mil/Portals/76/Users/182/86/2486/EP%201110-2-17.pdf?ver=2019-06-20-152050-550.
US Social Security Administration. 2019. “What is the Flesch-Kincaid readability test.” Accessed March 15, 2023. https://secure.ssa.gov/poms.nsf/lnx/0910605105.
Verizon. n.d. “Wireless emergency alert compatible devices.” Accessed March 15, 2023. https://www.verizon.com/support/wireless-emergency-alerts-compatible-devices/.
Whitcomb, D. 2018. “After disaster alert failures, US moves toward national system.” Reuters. Accessed July 5, 2023. https://www.reuters.com/article/us-usa-disaster-alerts/after-disaster-alert-failures-u-s-moves-toward-national-system-idUSKBN1KU1C1.
Williams, C. A., and G. M. Eosco. 2021. “Is a consistent message achievable? Defining ‘message consistency’ for weather enterprise researchers and practitioners.” Bull. Am. Meteorol. Soc. 102 (2): E279–E295. https://doi.org/10.1175/BAMS-D-18-0250.1.
Wood, M., H. Bean, B. Liu, and M. Boyd. 2015. Comprehensive testing of imminent threat public messages for mobile devices: Updated findings. College Park, MD: National Consortium for the Study of Terrorism and Responses to Terrorism.
Wood, M., D. S. Mileti, H. Bean, B. F. Liu, J. Sutton, and S. Madden. 2018. “Milling and public warnings.” Environ. Behav. 50 (5): 535–566. https://doi.org/10.1177/0013916517709561.
Wood, M. M. 2017. “Geotargeted alerts and warnings.” In International encyclopedia of geography: People, the earth, environment and technology, 1–11. Oxford, UK: Wiley.
Woodworth, W. 2020. “‘Completely botched’: Failed emergency alerts raise questions for future disasters.” Accessed September 30, 2023. https://www.statesmanjournal.com/story/news/2020/09/28/failed-oregon-wildfire-emergency-evacuation-alerts-questions-future-disasters-earthquake/5792075002/.
Wylie Communications. n.d. “Measure reading levels with readability indexes: How to calculate readability with the Flesch Reading Ease, Flesch-Kincaid and Gunning Fog tests.” Accessed March 15, 2023. https://www.wyliecomm.com/2021/11/measure-reading-levels-with-readability-indexes/.

History

Received: Mar 23, 2023
Accepted: Aug 22, 2023
Published online: Oct 30, 2023
Published in print: Feb 1, 2024
Discussion open until: Mar 30, 2024

Authors

Affiliations

Jeannette Sutton [email protected]
Associate Professor, College of Emergency Preparedness, Homeland Security, and Cybersecurity, Univ. at Albany, State Univ. of New York, 1220 Washington Ave., Albany, NY 12226 (corresponding author). ORCID: https://orcid.org/0000-0002-4345-9108. Email: [email protected]
Michele K. Olson [email protected]
Senior Research Associate, College of Emergency Preparedness, Homeland Security, and Cybersecurity, Univ. at Albany, State Univ. of New York, 1220 Washington Ave., Albany, NY 12226. Email: [email protected]
Nicholas A. Waugh [email protected]
Research Associate, College of Emergency Preparedness, Homeland Security, and Cybersecurity, Univ. at Albany, State Univ. of New York, 1220 Washington Ave., Albany, NY 12226. Email: [email protected]
