Introduction
The consequences of extreme events on transportation networks largely depend on (1) the level of damage to the assets after the event, and (2) the program developed to restore the demanded level of service. An optimal restoration program helps infrastructure managers assign the priority and the level of service recovery to damaged assets in the network such that the network functionality is restored as fast as possible and with minimal consequences, which are usually monetized (Cavdaroglu et al. 2011).
Researchers have mostly focused on two approaches to developing postdisaster restoration programs. Some have used simple policies and guidelines based on engineering or economic criteria to determine the order of restoring the damaged assets, for example, prioritizing the recovery of assets with higher damage levels (Buckle et al. 2006) or higher average daily traffic volumes (Miller 2014), or prioritizing the recovery of the more important or critical assets in the network (Liu et al. 2020; Scott et al. 2006). These approaches are favored in real-world practice because they are relatively fast; however, they often do not result in optimal solutions.
Others have formulated restoration as an optimization problem. In this approach, an algorithm is used to search among the feasible solutions to the problem and find the optimal one. These algorithms are divided into exact and approximate algorithms. Exact algorithms are guaranteed to find the globally optimal solution, whereas approximate algorithms (heuristics and metaheuristics) involve some randomness; although they will find a good solution, it may not be the best one (the global optimum). However, approximate algorithms are characterized by their shorter computation time and are thus more efficient for problems with a large solution space, including selecting the optimal way to restore the demanded service in large transportation networks after extreme events.
In the present literature, various algorithms are used to minimize the consequences of extreme events on transportation infrastructure and develop restoration programs, including stochastic mixed-integer programs (Chen and Miller-Hooks 2012; Zhang et al. 2021), genetic algorithms (GAs) (Chen and Tzeng 1999; Mao et al. 2021; Moghtadernejad et al. 2020), ant colony system (ACS) (Yan and Shih 2012), and simulated annealing (SA) (Hackl et al. 2018; Vugrin et al. 2014). According to these studies, the algorithms have been successful in finding good solutions for transportation network recovery. However, except for a few of these studies (Hackl et al. 2018; Orabi et al. 2009), the proposed algorithms were tested for their efficiency on small networks with fewer than a dozen edges; hence, more investigation is needed to assess the performance of the algorithms in real-world problems. Moreover, it is difficult to determine how well-suited each algorithm is for a specific problem, because it is not possible to compare the performance of various algorithms based on the results of different studies that examine different networks and objective functions.
Since the late 1990s, computational swarm intelligence has received increasing attention due to improvements in swarm intelligence theory and has been used by researchers to solve real-world optimization problems. Particle swarm optimization (PSO) is a stochastic algorithm that is normally used in continuous search spaces but can also be implemented in combinatorial optimization problems with a discrete search space. In infrastructure management, PSO has previously been applied to electrical energy distribution systems by Lambert-Torres et al. (2008) to maximize the supplied load and minimize the energy loss. Ahmed et al. (2019) implemented a discrete particle swarm optimization (DPSO) algorithm for pavement maintenance activities, and Hu et al. (2012) used PSO for an optimal allocation of earthquake emergency shelters. Edrisi and Askari (2020) used a model for critical infrastructure that integrated predisaster mitigation and preparedness actions with postdisaster response operations during earthquakes and found the optimal budget allocation using PSO.
In a study by Chang (2018), GA and PSO were used to optimize network recovery scheduling, and the suitability of the two algorithms was compared on 12 network instances generated using a random geometric graph algorithm. However, despite the current applications of PSO in the infrastructure sector, its application to the postdisaster restoration of real-world transportation networks has not yet been investigated. These restoration problems differ in nature from the aforementioned studies because they must account for traffic interruptions, which strongly influence the complexity and the overall computation time.
Consequently, this paper investigates the application of a DPSO algorithm in developing suitable restoration programs with regard to the overall direct and indirect costs for real-world transportation networks. The suitability of the algorithm is examined on a road network in Switzerland for multiple flood events with different damage levels, and the results are compared with a benchmark model and the SA algorithm previously studied on the same network by Hackl et al. (2018).
The structure of the paper is as follows. First, the objective function of the model for developing the restoration programs is reviewed. Then, the PSO algorithm is adjusted to reach a good solution for the objective function. The proposed restoration model is tested on a case study in Chur, Switzerland, and the introduced DPSO algorithm is used to minimize the overall costs after three flood events. The results are then compared with a benchmark model and the SA algorithm, and the advantages and limitations of the DPSO are discussed along with some suggestions for future work.
Restoration Model
Optimal restoration programs propose the sequence and level of interventions that will result in restoring the functionality of the network with minimal consequences (in this paper, functionality is defined in terms of flow capacity). The objective function of the restoration model in this study was previously introduced by Hackl et al. (2018). This function minimizes the sum of the direct and indirect costs [Eq. (1)], i.e., the cost of the restoration work [Eq. (2)] and the costs related to the loss/reduction of service [Eq. (3)] from the time the hazard happens until the restoration work is finished

$$\min\; C = C^{dir} + C^{ind} \quad (1)$$

$$C^{dir} = \sum_{t \in T} \sum_{i \in I} \sum_{s \in S} \sum_{m \in M} c^{int}_{i,s,m}\, x_{i,s,m,t} \quad (2)$$

$$C^{ind} = \sum_{t \in T} \left( C^{tt}_{t} + C^{conn}_{t} \right) \quad (3)$$

where $c^{int}_{i,s,m}$ = overall cost of intervention $m$ on object $i$ with a damage state $s$ and comprises the overall fixed, variable, and resource-related costs (Hackl et al. 2018). In each time period $t$, only one intervention $m$ can be executed for the damaged object $i$. If in the time period $t$ the intervention $m$ is executed on the object $i$ with a damage state $s$, then the variable $x_{i,s,m,t} = 1$; otherwise, $x_{i,s,m,t} = 0$.
The indirect costs are caused by the additional travel time for all paths $od \in P_t$, or the loss of connectivity for all paths $od \in D_t$, where $P_t$ is the set of origin–destination (od) paths with some traffic flow that do not comprise any objects with no functionality, and $D_t$ is the set of all disconnected od-paths.
The indirect costs are calculated from the time the hazard starts until the time all the damaged objects in the network are restored. This value is equal to the difference between the indirect costs in period $t$ and in period $t = 0$, when the network was fully functional (Hackl et al. 2018). Here, $q_{e,t}$ represents the link flow on edge $e$ in period $t$ (hence, $q_{e,0}$ refers to the link flow at $t = 0$, where the hazard has not happened yet) and is estimated by solving a user equilibrium assignment as indicated in Eq. (4)

$$\min_{q}\; \sum_{e \in E} \int_{0}^{q_{e,t}} c_{e,t}(x)\, dx \quad (4)$$
where $c_{e,t}$ = travel cost function for edge $e$ and can be determined from Eq. (5) (BPR 1964)

$$c_{e,t}(q_{e,t}) = t^{0}_{e} \left[ 1 + \alpha \left( \frac{q_{e,t}}{\gamma_{e,t}} \right)^{\beta} \right] \quad (5)$$

where $t^{0}_{e}$ = free-flow travel duration; $c_{e,t}$ = travel duration on edge $e$ in each period $t$; $\gamma_{e,t}$ = capacity in period $t$; and parameters $\alpha$ and $\beta$ are for calibration.
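As an illustration, the cost–flow relationship in Eq. (5) can be sketched as a small function (the function name is an assumption, and the default calibration values shown are only the commonly cited BPR ones, not necessarily those of this study):

```python
def bpr_travel_time(t0, flow, capacity, alpha=0.15, beta=4.0):
    """BPR cost-flow relationship [Eq. (5)]: travel time on an edge grows
    with the volume-to-capacity ratio; alpha and beta are the calibration
    parameters (defaults here are the widely used BPR values)."""
    return t0 * (1.0 + alpha * (flow / capacity) ** beta)
```

At zero flow the function returns the free-flow travel duration; as flow approaches the (possibly reduced) capacity, travel time increases sharply, which is what drives the travel prolongation costs after a capacity-reducing event.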
The costs related to the loss in connectivity $C^{conn}_{t}$ [Eq. (3)] are estimated by multiplying the estimated labor productivity value $\pi$ and the number of lost demands $\mu_{od,t}$ due to disconnected paths $od \in D_t$ in period $t$ (Freeman 2008), as indicated in Eq. (6)

$$C^{conn}_{t} = \pi \sum_{od \in D_t} \mu_{od,t} \quad (6)$$
The costs due to the additional travel time $C^{tt}_{t}$ [Eq. (3)] are the summation of the costs due to travel prolongation $C^{tp}_{t}$ [Eq. (7)] and the related increase in vehicle operating costs $C^{voc}_{t}$ [Eq. (8)]

$$C^{tp}_{t} = \sum_{e \in E} \sum_{v \in V} p_{v,e}\, w_{v,e} \left( c_{e,t}\, q_{e,t} - c_{e,0}\, q_{e,0} \right) \quad (7)$$

$$C^{voc}_{t} = \sum_{e \in E} \sum_{v \in V} p_{v,e}\, l_{e} \left( \rho_{v}\, \phi + \eta_{v} \right) \left( q_{e,t} - q_{e,0} \right) \quad (8)$$

where $p_{v,e}$ = percentage of each vehicle type $v$ on edge $e$; $w_{v,e}$ = travel costs per time unit for each vehicle type $v$ on edge $e$; $l_{e}$ = length of each edge; and $\rho_{v}$, $\phi$, and $\eta_{v}$ = average fuel consumption, average fuel price, and the operating and maintenance costs for each vehicle type without considering fuel consumption, respectively.
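A minimal sketch of how the two kinds of indirect-cost terms above could be evaluated for one period (all function and variable names are illustrative assumptions, not the study's implementation):

```python
def connectivity_loss_cost(productivity, lost_demands):
    """Eq. (6)-style term: the labor productivity value times the total
    lost demand on disconnected od-paths in one period."""
    return productivity * sum(lost_demands)

def travel_prolongation_cost(edge_records):
    """Eq. (7)-style term: extra vehicle-hours relative to the undamaged
    network, valued at a travel cost per time unit and weighted by the
    share of each vehicle type.
    Each record: (share, cost_per_hour, extra_vehicle_hours)."""
    return sum(share * cost * extra for share, cost, extra in edge_records)
```

Summing such per-period terms over the whole restoration horizon yields the indirect-cost part of the objective, which is what makes each candidate restoration sequence expensive to evaluate.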
The objective function above constitutes a bilevel optimization problem, where the minimization of the indirect costs in Eq. (1) is constrained by the optimization of the traffic flow assignment in Eq. (4). To solve this problem, an optimization algorithm should be implemented to reach a good solution for the objective function in Eq. (1), and a proper traffic model should be used to solve the link-flow assignment in Eq. (4).
The following section will provide a brief overview of the classification of optimization methods and a detailed description of how the continuous PSO algorithm is adjusted to be used in discrete combinatorial problems.
Optimization Algorithms for the Restoration Model
According to the restoration model introduced earlier, the development of a suitable restoration program requires solving a combinatorial optimization problem, where the priority and level of restoring the damaged objects are chosen from a finite set of different intervention sequences in time. As mentioned previously, optimization algorithms are divided into exact and approximate algorithms (Fig. 1). Exact algorithms are guaranteed to find the global optimum (e.g., branch and bound, which is an enumerative method), whereas approximate algorithms attain a good solution, although it may not be the global optimum. The application of exact algorithms in nonconvex problems [convex problems are those where a line segment between any two points of the search space graph does not lie outside the graph between the two points (Boyd et al. 2004)] with a discrete search space, such as finding the optimal restoration programs, is complex and time-consuming. However, computation time is an important indicator of the efficiency of optimization algorithms for the proposed restoration model; the longer it takes to develop the recovery plan after an extreme event, the more the indirect costs due to travel prolongation and missed trips will increase. Therefore, approximate methods are more suitable for this task due to their shorter computation time.
Approximate algorithms are divided into heuristic and metaheuristic algorithms. Heuristic algorithms are problem-dependent and find a good solution by trial and error, such as the nearest neighbor algorithm. Metaheuristic algorithms advance on heuristics by combining information from the local search with randomization and other constructive procedures (Halim and Ismail 2019). These methods are normally inspired by natural processes in the environment (swarm-based algorithms), biology (evolutionary algorithms), and physics (e.g., harmony search and simulated annealing).
Swarm intelligence is based on modeling the intelligent behavior of a population of interacting yet autonomous agents, or swarms, that can self-organize (Lenau and Hesselberg 2013). These models have been widely employed in work on artificial intelligence and were initially inspired by the observation of nature, in particular biological systems. In these systems, in the absence of a centralized control structure to dictate the individual behavior of the agents, the local and to some degree random interactions among the agents lead to an intelligent global behavior that is unknown to the individual agents. Examples of optimization algorithms using swarm intelligence include PSO (Clerc 2004; Gao et al. 2012; Wang et al. 2003), ACS (Dorigo and Gambardella 1997; Nilsson 2003; Yan and Shih 2012), and artificial bee colony (ABC) (Pandey and Kumar 2013; Wang et al. 2019). According to the literature, PSO is more widely used in continuous optimization problems; ACS is used to deal with discrete optimization problems; and ABC is mostly implemented in optimizing multivariable objective functions with few parameters (Gao et al. 2012). These algorithms have been previously used in combinatorial optimization problems, especially the traveling salesman problem, and have reached suitable solutions (Chaudhari and Thakkar 2019).
Identifying the optimal order in which to restore transportation networks after extreme events is also a combinatorial optimization problem; however, algorithms that work well in conventional combinatorial problems are not necessarily as efficient in developing optimal restoration programs for transportation networks. For example, when using swarm intelligence to find the optimal restoration programs, the ACS and ABC algorithms must calculate the cost difference between interventions in a sequence. This step corresponds to finding the distance between cities in the traveling salesman problem. Although this is a very straightforward step for problems such as the traveling salesman, it is not for the existing restoration model [Eq. (1)], because the traffic flow assignment must be optimized in each step to calculate the related indirect costs. The PSO algorithm, on the other hand, does not include this step; although it has mostly been used in continuous problems, its tailored applications in discrete optimization problems have been successful (Ahmed et al. 2019; Gao et al. 2012; Hu et al. 2012; Edrisi and Askari 2020). Consequently, in the following section, the conventional PSO is altered to identify good solutions to the restoration model presented earlier.
Discrete Algorithm for Particle Swarm Optimization
The PSO algorithm was first introduced by Eberhart and Kennedy (1995) and further developed by Eberhart and Shi (2011). PSO is one of the oldest and most commonly used swarm intelligence algorithms and was initially inspired by a flying swarm of birds searching for food. This method searches for the optimal solution through agents, known as particles, whose trajectories are adjusted by both stochastic and deterministic components. Although the particles tend to move randomly, their movement direction is influenced by their individual best-known solution and by the best-known solution found by all particles in the search space, both of which are updated as better solutions are detected. This process steers the swarm toward the best solutions. The pseudocode for PSO, which is suitable for continuous optimization problems, is illustrated in Fig. 2.
In this algorithm, $\omega$ is the inertia coefficient that determines the influence of the previous velocity on the particles' movement, and $c_1$ and $c_2$ are the coefficients that determine the influence of the best solution found by the particle itself ($p_{best}$) and the best solution found by the swarm, or the global best ($g_{best}$). In general, as the individual coefficient $c_1$ approaches zero, the particles converge to the local/global optimum faster. On the other hand, as the social coefficient $c_2$ approaches zero, the particles will only search their own vicinity. Here, $r_1$ and $r_2$ are random numbers at iteration $k$.
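The continuous update rule described above can be sketched in a few lines for a one-dimensional problem (a minimal illustration; the parameter values are typical textbook choices, not the tuned values of this study):

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One continuous-PSO update: inertia term plus random attraction
    toward each particle's personal best and the swarm's global best."""
    for i, (x, v) in enumerate(zip(positions, velocities)):
        r1, r2 = random.random(), random.random()
        v_new = w * v + c1 * r1 * (pbest[i] - x) + c2 * r2 * (gbest - x)
        velocities[i] = v_new
        positions[i] = x + v_new
    return positions, velocities
```

Repeating this step while updating `pbest` and `gbest` whenever a better position is found drives the swarm toward a minimum, e.g., of $f(x) = x^2$.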
This algorithm is widely popular and effective in continuous optimization problems; however, for combinatorial optimization problems, the presented PSO algorithm must be altered to accommodate a discrete search space. This means that the particles should search among various combinations of interventions in time rather than moving through a continuous search space. Consequently, if we consider a feasible solution $A = (a_1, a_2, \ldots, a_n)$ as a sequence of restoration interventions, then to shift to a new feasible solution $B$, we need a swap operator $SO(i,j)$ that exchanges the positions of two restoration tasks. For example, in a restoration program with four restoration interventions, the swap operator $SO(1,2)$ would change the solution $A = (a_1, a_2, a_3, a_4)$ to $(a_2, a_1, a_3, a_4)$.
A combination of one or more swap operators constructs a swap sequence, where the order of the swap operators matters. Equivalent swap sequences are those that act on a solution and produce the same new solution. In an equivalent set, the sequence with the fewest operators is the basic swap sequence. The basic sequence that acts on solution $B$ to obtain solution $A$ is represented as $SS = A - B$ (Wang et al. 2003). Hence, $A = B + SS$. The operators $+$ and $\oplus$ are used to represent the integration of two swap sequences into a new swap sequence. Accordingly, the pseudocode for PSO is updated as follows to be suitable for discrete problems (Fig. 3).
In this algorithm, first, a swarm with $n$ particles is generated. Then an initial position (solution, or restoration sequence) is created for each particle, and the overall costs related to that position are calculated using the objective function in Eq. (1). At each iteration $k$, the basic swap sequence that acts on each particle position (the current solution) to obtain the historically best position of that particle, i.e., $p_{best}$, is found according to the aforementioned method. In parallel, the basic swap sequence that acts on each particle position to obtain the historically best position in the swarm, i.e., $g_{best}$, is found. The resulting merged swap sequence then acts on each particle position to obtain a new position (solution), and the new solution is evaluated using the objective function. This process continues until a satisfactory position (solution) is reached.
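The swap-sequence mechanics described above can be sketched as follows (a simplified illustration: the random thinning of the two swap sequences stands in for the velocity coefficients, and all names are assumptions rather than the study's implementation):

```python
import random

def basic_swap_sequence(current, target):
    """Build a basic swap sequence transforming `current` into `target`:
    for each slot, swap in the element `target` holds there."""
    work = list(current)
    swaps = []
    for i, wanted in enumerate(target):
        if work[i] != wanted:
            j = work.index(wanted)
            work[i], work[j] = work[j], work[i]
            swaps.append((i, j))
    return swaps

def apply_swaps(solution, swaps):
    """Act on a solution with a swap sequence (order matters)."""
    sol = list(solution)
    for i, j in swaps:
        sol[i], sol[j] = sol[j], sol[i]
    return sol

def dpso_step(position, pbest, gbest, keep=0.5):
    """One discrete-PSO move: merge randomly thinned swap sequences
    toward pbest and gbest, then apply them to the current position."""
    seq = [s for s in basic_swap_sequence(position, pbest) if random.random() < keep]
    seq += [s for s in basic_swap_sequence(position, gbest) if random.random() < keep]
    return apply_swaps(position, seq)
```

Each position here is a permutation of restoration interventions, so every move stays inside the discrete search space of intervention sequences.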
The discrete PSO requires an appropriate choice of the model parameters, including the number of particles, the number of iterations, and the individual ($c_1$), social ($c_2$), and inertia ($\omega$) coefficients.
Choosing the Parameters of DPSO for Determining Optimal Postdisaster Restoration Programs
The suitability of approximate algorithms for identifying optimal restoration programs depends on the computation time (i.e., efficiency) and the closeness of the solution to the global optimum. Both are problem-specific and depend on the objective function, the size of the network under study, the number of damaged assets and intervention options, and the choice of parameters for the optimization algorithm.
Choosing the right parameters for metaheuristic algorithms is a challenging task and requires considerable time and effort. Eiben et al. (1999) categorized the principal ways of setting parameter values into parameter tuning and parameter control. In parameter tuning, the parameter values are found and set before running the algorithm, whereas in parameter control, the parameter values change while the algorithm is being executed. For example, in PSO, the inertia coefficient ($\omega$) is normally controlled, and it is widely accepted in the literature to gradually decrease it from 0.9 to 0.4 (Rezaee-Jordehi and Jasni 2013). The individual ($c_1$) and social ($c_2$) coefficients are, however, problem-dependent and are normally tuned before executing the algorithm.
In this section, a road network located in Chur, Switzerland, was selected as the area of study to test the suitability of the proposed DPSO algorithm in developing restoration programs. This network is near the Rhine River, and historical records indicate that the zone is susceptible to annual floods and landslides (Hackl et al. 2018).

In total, the network contains approximately 610 km of roads (motorway, main, and minor roads), comprising 2,011 road sections and 116 bridges (Fig. 4). The road network information was taken from the VECTOR25 data set, provided by swisstopo (2019).
In this paper, parameter tuning was conducted for three flood scenarios with different levels of damage to the network. In reality, however, the situation is more complex, and a more rigorous effort must be made to consider various hazard scenarios beforehand in order to tune the parameters for postdisaster decision making. The following scenarios were considered only as illustrative examples of the effect of the number of damaged assets on the parameters of PSO:
•
Scenario 1, with 10 damaged objects, where one bridge and two road sections lost full functioning capacity (major damage) and one bridge and six road sections suffered 50% and 70% capacity reductions (minor damage).
•
Scenario 2, with 30 damaged objects, where two bridges and five road sections lost full functioning capacity and three bridges and 20 road sections suffered 50% and 70% capacity reductions.
•
Scenario 3, with 50 damaged objects, where four bridges and 11 road sections lost full functioning capacity and five bridges and 30 road sections suffered 50% and 70% capacity reductions.
The suitability of the parameters was tested in terms of the overall direct and indirect costs and the average computation time. The direct costs of restoration work were estimated using Eq. (2) and the data provided in Table 1. The total indirect costs were calculated using Eq. (3) and Table 2. It was assumed that six work crews were available, each working 8 h per day. Three intervention options were considered to restore the damaged objects, namely high-priority, normal-priority, and low-priority interventions. The number of available work crews and the types of interventions were selected solely as an illustrative example in this study and can always be changed. The availability of more work crews will result in a shorter restoration period (because the damaged objects can be restored faster) and hence lower imposed indirect costs, without affecting the computation time. On the other hand, adding multiple intervention types will enlarge the search space and hence add to the computation time.
A conjugate direction Frank–Wolfe (CFW) method (Mitradjieva and Lindberg 2013), suitable for convex optimization problems such as traffic assignments (Gawron 1998), was employed for the traffic flow modeling. This user equilibrium traffic assignment model was selected because it is relatively fast and widely used in practice. The link flow $q_{e,t}$ was estimated using the cost–flow relationship presented in Eqs. (4) and (5), i.e., the lower-level optimization. The calibration parameters $\alpha$ and $\beta$ in Eq. (5) were set as suggested by the Highway Capacity Manual (Horowitz 1991; Hackl et al. 2018).
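For intuition about the lower-level assignment in Eq. (4), a user equilibrium can be sketched with the method of successive averages on a toy network of two parallel links carrying one od demand (the study itself used the conjugate Frank–Wolfe method on the full Chur network; everything below, including names and parameter values, is an illustrative assumption):

```python
def equilibrium_two_links(demand, t0, cap, alpha=0.15, beta=4.0, iters=200):
    """Method-of-successive-averages sketch of a user equilibrium:
    at each iteration all demand is assigned to the currently cheaper
    link (all-or-nothing), and flows are averaged with step size 1/k."""
    def bpr(e, q):  # Eq. (5)-style cost on link e at flow q
        return t0[e] * (1.0 + alpha * (q / cap[e]) ** beta)
    flow = [demand, 0.0]
    for k in range(1, iters + 1):
        cheaper = 0 if bpr(0, flow[0]) <= bpr(1, flow[1]) else 1
        target = [0.0, 0.0]
        target[cheaper] = demand          # all-or-nothing assignment
        flow = [f + (target[e] - f) / k for e, f in enumerate(flow)]
    return flow
```

For two identical links, the flows converge toward an even split, at which point neither route is cheaper, i.e., the user-equilibrium condition.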
As suggested by Rezaee-Jordehi and Jasni (2013), the inertia coefficient ($\omega$) was set to gradually decrease from 0.9 to 0.4. The choice of the individual ($c_1$) and social ($c_2$) coefficients and the population size is affected by the size of the search space, which depends mainly on the number of damaged objects that need to be restored (regardless of the hazard type) and the intervention options. In this study, population sizes of 5, 10, and 15 particles were examined to provide a balance between accuracy and computation speed. The $c_1$ and $c_2$ coefficients were tested over a bounded range with 0.2 increments, as suggested by Babazadeh et al. (2011), to converge to a good solution within 100 iterations. To allow for better comparability of the results, the same seed for the random generator of the initial state was used for all tests in each scenario. Fig. 5 shows the movement of the particles in the search space at each iteration for the worst-case scenario (largest search space and smallest population size), i.e., 50 damaged objects and a swarm with five particles. The figure illustrates that the swarm reaches a (good) solution within 100 iterations.
Because it was not possible to find the global optimum for this problem, the quality of the solutions was compared with a benchmark model that represents the standard approach in practice (Table 3). The solutions are expressed in millions of monetary units. The adopted benchmark model uses a prioritization rule: it first sorts the damaged objects based on their average traffic flow, and then, among objects with the same traffic flow, it prioritizes the restoration of objects with major damage. The best solutions reached for each combination of parameters for the three scenarios are summarized in Table 3 and illustrated in Fig. 6.
Comparing the solutions from DPSO with the benchmark model confirmed that, with a good choice of parameters, the algorithm can reach a good solution (better than the existing benchmark model) within 100 iterations. The results suggest that in this network, when smaller population sizes are selected, larger values of the $c_1$ and $c_2$ coefficients are needed to reach a good solution. However, when the number of particles is increased, relatively smaller $c_1$ and $c_2$ values reach better solutions. Moreover, as the number of particles increases, the solutions improve (except for Scenario 1 with 15 particles, where the best solution of 2.268 million monetary units was identical to the best solution reached with 10 particles in this scenario). This makes sense because as the number of particles increases, more agents search for a good solution, so the algorithm can conduct a wider search and will probably find better solutions. However, the computation time of the algorithm increases significantly with the number of particles. Consequently, a compromise must be made between cost minimization and the time to reach a good solution, because in reality, delays in developing a restoration program add to the inflicted indirect costs. One possible way to achieve good solutions with fewer particles is to introduce a good initial state, such as the solutions developed by simple prioritization rules (the benchmark model).
All tests were conducted on a shared server running a 64-bit Linux operating system with an Intel Xeon E5-2690 v2 3.0 GHz processor and 384 GB DDR2 memory. Therefore, in addition to the number of damaged objects and particles, the computation time was also influenced by the number of users at the time the tests were running. However, the average computation time (Table 3) provides a fair estimate of the time required to develop a suitable restoration program for each case.
By comparing the best-achieved solution and the average computation time with respect to the population size for each scenario, the following population sizes were chosen for the DPSO algorithm in this case study, each together with the best-performing $c_1$ and $c_2$ coefficients from the tuning tests:
•
Scenario 1: 10 particles;
•
Scenario 2: 10 particles; and
•
Scenario 3: five particles.
Comparison of DPSO and SA
Because metaheuristic algorithms do not guarantee finding the globally optimal solution, it is almost impossible to know how close the attained solutions are to the global optimum. Consequently, the stand-alone implementation of an algorithm will not demonstrate how well-suited it is for a specific problem. So far, many researchers have studied the application of a single metaheuristic algorithm for developing restoration programs for their networks; moreover, many of these studies have been conducted on illustrative examples of small networks with few edges rather than on real-world networks, which makes the tailored algorithms unsuitable for comparisons in real-world case studies.
To test the suitability of the proposed DPSO algorithm, the method was compared with the SA algorithm proposed by Hackl et al. (2018), which was tested on the same network as this study (Chur, Switzerland), to determine which one is more suitable for the postdisaster restoration of road networks. Moreover, for both algorithms, the initial state was selected using the benchmark model to show how much improvement can be attained over current practice if heuristic algorithms are used instead of relying only on the prioritization rules. For both algorithms, computation times were considered in calculating the imposed indirect costs to allow for a fair demonstration of the potential added benefit that can be attained if optimization approaches are used on top of the existing benchmark models. This biased initiation also saves computation time and ensures that the search outcome will not be worse than the solution given by the conventional prioritization rules used by decision makers in real life. This is especially significant in situations where, due to high computation time, the number of particles in DPSO and the iterations per step in SA are decreased.
The implementation of the SA algorithm in the Chur network was discussed in detail by Hackl et al. (2018) and is illustrated in the flowchart in Fig. 7. In this study, the same cooling schedule was selected for the SA algorithm as used by Hackl et al. (2018), where at every $k$th iteration out of $K$ steps, the initial temperature $T_0$ is multiplied by $\delta^{k}$, where $\delta$ is the decay rate and is calculated as

$$\delta = \left( \frac{T_K}{T_0} \right)^{1/K}$$

The temperature parameters for the cooling schedule, i.e., $T_0$ and $T_K$, were adopted as proposed by Ben-Ameur (2004) and Hackl et al. (2018). The number of steps (for the original temperature $T_0$ to reach $T_K$) and the iterations per step for the three case studies were selected as follows:
•
Scenario 1: 100 steps with 20 iterations per step,
•
Scenario 2: 100 steps with 100 iterations per step, and
•
Scenario 3: 100 steps with 50 iterations per step.
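The exponential cooling schedule described above can be sketched as follows (a minimal illustration; the actual temperature parameters were adopted from Ben-Ameur (2004) and Hackl et al. (2018), and the values used here are placeholders):

```python
def cooling_schedule(t_start, t_end, n_steps):
    """Exponential SA cooling: the temperature at step k is
    t_start * delta**k, with the decay rate delta chosen so that
    the schedule reaches t_end after n_steps steps."""
    delta = (t_end / t_start) ** (1.0 / n_steps)
    return [t_start * delta ** k for k in range(n_steps + 1)]
```

With 100 steps, as in the scenarios above, the temperature decays smoothly from the initial to the final value, while the iterations per step control how thoroughly each temperature level is explored.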
These parameters (steps and iterations per step) were adjusted so that the maximum computation time per scenario would be 15 h, to allow for better comparability with DPSO. The results and the improvements of the algorithms over the benchmark model are summarized in Table 4.
The results suggest that when a good initial state (the benchmark model) is introduced, DPSO reaches significantly better solutions than when a random initial state is used (Table 3). The best solutions attained by DPSO with a random initial state for the three scenarios were 2.268, 7.095, and 18.225, respectively, without considering the additional costs imposed by the delays (Table 3). The corresponding values for DPSO with biased initiation were 2.280, 6.792, and 17.621, including the indirect costs imposed by the delays related to the computation time (Table 4). Even with random initiation, DPSO reached better solutions than the benchmark model, i.e., 2.375, 7.484, and 19.745 for the three scenarios, respectively.
DPSO is a slower algorithm than SA, but because it benefits from swarm intelligence, it needs fewer iterations to reach a good solution.
In Scenario 1, for the network with few damaged objects to restore, both algorithms reached the same solution, but SA took 42% less time to reach it; consequently, SA seems to be the better choice. In Scenario 2, SA reached a slightly better result, but it took 40% more time to reach this solution. Considering that, at a comparable computation time, the solution found by SA was 7.405, the DPSO is perhaps more suitable, because the cost difference is not significant enough to compensate for the longer computation time. In the last scenario, the DPSO reached a better solution (by around 2%), but the computation time was approximately 14% longer. In this scenario, although DPSO seems more suitable, the choice of algorithm may depend on decision makers' preferences based on time or budget constraints.