Open access
Technical Papers
Jul 5, 2021

Performance-Based Building System Manufacturer Selection Decision Framework for Integration into Total Cost of Ownership Evaluations

Publication: Journal of Performance of Constructed Facilities
Volume 35, Issue 5

Abstract

Facility managers are often faced with building system procurement or replacement decisions, requiring them to select a system from among competitive manufacturers. Total cost of ownership (TCO) criteria, informed by built assets in operation in the manager's portfolio, provide some information for selecting the right asset manufacturer. However, managers must also consider technical performance to complete a more comprehensive analysis. Performance can be calculated using asset parameters such as condition, age, and variation in condition to aid in TCO assessments. Leveraging past research and approaches, performance is calculated using an additive model that scales each parameter with a standard normalization technique and employs weighting factors to account for decision-maker input. Data from 20 Air Force installations across the US and two asset types are analyzed, showing the utility of a performance metric. This analysis shows that as manufacturer diversity in a portfolio decreases, performance increases for most of the asset types modeled. This paper presents a new performance metric that can be used as an additional criterion in TCO models to build a more robust decision framework.

Introduction

Decision-makers have traditionally faced budgetary and workforce constraints that make it challenging to effectively maintain and repair buildings and infrastructure assets to ensure adequate performance. Yet, data-driven approaches have the potential to improve many facets of the facility management process, including procurement decisions for building-installed equipment and components. For component-level units (e.g., chillers, air handlers, boilers, and electrical transformers), subsequently referred to as assets, investment decisions occur across the asset life cycle, at points of initial procurement, repair, and maintenance, and at disposal. However, initial procurement decisions may have long-term effects and can set the course of future maintenance and repair frequency and cost. As such, emphasis must be given to asset manufacturer selection, defined as choosing one asset manufacturer over another based on a set of selection criteria, e.g., total cost of ownership (TCO).
TCO is a method to evaluate all costs over an asset’s life cycle (Roda and Garetti 2014), including initial procurement, regular operating costs, spare part costs, and corrective maintenance costs. TCO provides a strategy for decision-makers to evaluate their assets (Kappner et al. 2019). Infrastructure system life cycle costs have been estimated using TCO frameworks for facilities (Grussing 2014), roofing systems (Coffelt and Hendrickson 2010), stormwater systems (Forasté et al. 2015), and pavements (Rehan et al. 2018). These analyses provide an overview of the current body of knowledge regarding the use of life cycle cost evaluations for infrastructure systems and provide an excellent starting point to detail the costs associated with purchasing and operating infrastructure. However, there is a lack of consideration regarding the performance of assets in the TCO framework (Roda and Garetti 2014; Xu et al. 2012). Roda and Garetti (2014) aimed to fill this gap through the creation of a performance-driven TCO model for assets in the manufacturing industry (Roda et al. 2020). However, this same methodology has not been applied to building systems or built infrastructure.
This gap in research provides motivation to evaluate building system performance statistics and propose a metric that represents asset performance in competitive markets. Ultimately, a performance-based metric could be a component of the TCO framework that enables the selection of manufacturers that produce the highest performing asset for use in their facilities and not simply those that have the lowest initial cost.
Performance-based manufacturer selection can provide many benefits and efficiencies to facility managers, including the creation of a streamlined and repeatable procurement process, the simplification of maintenance through asset standardization, and a reduction of the number and diversity of spare parts required to perform preventative and corrective maintenance. Initial procurement decisions can be simplified by directing facility managers to source assets that are required in many facilities, e.g., chillers and air handlers, from a single manufacturer. Procuring assets from a single manufacturer makes the ordering process repeatable, which promotes efficiency and leads to lower initial costs (Lee and Drake 2010). As technicians learn the specifications of one asset manufacturer, they leverage knowledge gained through repetition to reduce time spent on preventative and corrective maintenance activities. Standardizing assets has the advantage of decreasing time and money spent on maintenance (Tavakoli et al. 1989). Spare part management can be simplified through the reduction of the quantity and costs associated with the number of part types stored (McGean 2001; Neelamkavil 2011). In total, manufacturer selection enables facility managers to lower costs and reduce complexity within their asset portfolio.
The concept of performance-based manufacturer selection can apply to many organizational levels and infrastructure portfolio sizes, including academic, medical, government, and large business organizations. Regardless of organizational or portfolio complexity, each facility manager faces the same challenge: to make or recommend decisions that efficiently manage assets. Facility managers in all industry tiers can leverage the benefits of performance-based manufacturer selection to build an inventory of assets that provide the greatest return on investment considering both performance and total life cycle costs. As outlined previously, the efficiencies of a performance-based manufacturer selection approach affect various aspects of the asset life cycle, but they have not been well-described in the literature for built infrastructure portfolios. This research expands on the current body of knowledge to consider a performance-based metric to support manufacturer selection decisions and supplement traditional TCO evaluations to increase robustness. In addition, this research develops a novel framework, enabling data-driven manufacturer selection that makes use of actual observed performance data associated with component conditions. To develop the data-driven methodology, data were gathered for United States Air Force (USAF) building component assets across 20 separate geographic installations. To demonstrate and validate the approach, this research focused on two types of building components, chillers and air handlers, chosen because they are routinely found in facilities and because there are several major manufacturers. Using observed conditions, the remaining service life, and a location-specific condition variance, a performance metric is developed that provides facility managers a measure of asset performance for use in the manufacturer selection process.

Data and Case Study

Manufacturer selection requires that sufficient and appropriate data be collected and available to make thoughtful and accurate performance-based decisions. One critical source of data comes from periodic condition assessments of the asset throughout its life cycle. This condition data must be collected in a standardized and repeatable way and with some regularity, e.g., during scheduled preventative maintenance, to ensure all condition information is on the same scale and comparable. Other specific asset information must be available, including the installation date, inspection dates, and manufacturer name. Combining this basic attribute data with the observed condition data enables the creation of the performance-based metric to make manufacturer selection decisions. This requires a database or management program to track this asset information. An enterprise asset management (EAM) system provides a repository for this information. The BUILDER Sustainment Management System (SMS 2012) is the asset management system used for condition assessment and facility management within the Department of Defense (DoD), and it offers the necessary tracking and management features to make manufacturer selection decisions. Additional information on the organization and features of BUILDER has been reviewed in the literature (Bartels et al. 2020; Grussing et al. 2016).
The BUILDER SMS is a DoD-developed facility life cycle management program used by the entire DoD and other federal, state, local, and private organizations to track and manage infrastructure assets (BUILDER Sustainment Management System 2012). BUILDER's purpose is to support facility management decisions about when, where, and how to sustain built infrastructure assets in order to make better investment decisions (SMS 2020). The SMS program provides a large database of historic asset condition information that can be used to create, validate, and test an asset performance metric. It spans a diverse variety of building systems and components and stores related information, including asset installation year, manufacturer, and observed condition state. The asset installation year records when the asset was installed in the facility and first put into operational status. Manufacturer information lists the company that built the asset. The asset condition is measured on a 100-point index scale, which represents the health of an asset. A condition of 100 is considered as-new and free from any defects, distresses, or signs of deterioration, while 0 is complete failure. Condition data are derived from visual inspections performed by trained assessors and entered into BUILDER either by the assessor or by data-entry specialists. Per USAF business rules, all assets must be assessed no less than once every five years, but assessments may occur more frequently if completed during recurring preventative maintenance activities.
Data from the USAF and its infrastructure assets are used to build, validate, and verify the use of a performance-based manufacturer selection metric. The data show a variety of manufacturers across USAF buildings for a given asset type. This enables the USAF enterprise to compare the performance of building component assets by manufacturer across all of its operating locations. The performance metric framework presented in this study is designed to be simple and flexible such that any organization that tracks performance and manufacturer data can reproduce this analysis or include additional decision criteria valued by the organization, such as uptime and service call frequency.

Data Filtering

In line with BUILDER's goals, the data is used to supply information to the performance metric, which may ultimately be used as a component of a TCO manufacturer selection decision model. Before proceeding, the SMS data requires initial filtering. While BUILDER provides a wealth of data that can be used to construct a performance-based metric, the raw data requires preprocessing to align it with the objectives of the metric. The performance metric, described in the following "Methods" section, is an age-based index that captures asset performance as it degrades between assessments. Because installation and assessment dates vary across assets, the raw data in BUILDER must be modified to transform the temporal basis from an absolute date to a relative asset age. All dates are anchored by the asset's installation date, a value stored in BUILDER. For example, an asset installed on January 1, 2000, and first inspected on January 1, 2005, is 5 years old at the time of inspection. This transformation of the temporal scale ensures that assets are compared against assets of similar ages rather than by installation date.
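As a concrete illustration, the following minimal sketch performs this temporal transformation in Python with pandas. The column names (install_date, inspection_date, condition) are illustrative assumptions, not actual BUILDER field names.

```python
import pandas as pd

# Hypothetical BUILDER export with illustrative column names.
records = pd.DataFrame({
    "asset_id": [101, 101],
    "install_date": pd.to_datetime(["2000-01-01", "2000-01-01"]),
    "inspection_date": pd.to_datetime(["2005-01-01", "2010-06-30"]),
    "condition": [92, 78],
})

# Re-anchor each observation from an absolute calendar date to a relative
# asset age in years, so assets installed in different years are compared
# at like ages rather than by installation date.
delta = records["inspection_date"] - records["install_date"]
records["age_years"] = delta.dt.days / 365.25

print(records[["asset_id", "age_years", "condition"]])
```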
Next, assets were removed from the analysis if the observed condition increased between subsequent inspections. Typically, an increase in condition indicates that a repair or overhaul was performed between inspections, and these situations can affect the resulting life cycle analysis. These assets were removed in order to retain only those unlikely to have had a repair or overhaul completed between inspections; retaining assets with a positive change in condition between inspections would confound the calculation of the performance metric.
Finally, any assets with an observed condition of less than 100 at installation (age equal to zero) were removed. Logically, assets should be brand new and in perfect condition at the time of installation, so this exclusion criterion is meant to filter out assets that may have been erroneously entered into the database or assets with initial defects (either due to manufacturing or installation) that would typically be covered during the warranty period. The Air Force does not purchase reconditioned or used assets. The starting data population was 8,579 data points, representing all the unique chiller and air handler assets at the selected installations. Initial filtering criteria resulted in reducing the overall data population by 18% (1,582 assets were removed). Once initial data filtering is performed, the condition as a function of age can be observed, although it does not provide a complete picture of asset performance (Fig. 1). This figure illustrates the point that viewing this data as age versus condition, as it is collected in BUILDER, does not aid facility managers in the identification of trends or selection of strategic life cycle decisions for their infrastructure assets. The goal of the approach presented in this study is to transform this data into a decision support tool for facility managers.
Fig. 1. Example of asset age versus condition.
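Both exclusion rules described above can be expressed compactly. The sketch below is illustrative only; it assumes one inspection per row and the same hypothetical column names as the earlier age-transformation sketch.

```python
import pandas as pd

def apply_exclusion_rules(df: pd.DataFrame) -> pd.DataFrame:
    """Sketch of the two data filters, assuming illustrative columns:
    asset_id, age_years, condition."""
    df = df.sort_values(["asset_id", "age_years"])

    # Rule 1: remove assets whose condition increased between subsequent
    # inspections, since a likely repair or overhaul confounds the metric.
    increased = df.groupby("asset_id")["condition"].diff() > 0
    repaired = df.loc[increased, "asset_id"].unique()
    df = df[~df["asset_id"].isin(repaired)]

    # Rule 2: remove assets observed below 100 at installation (age zero),
    # which indicates data-entry errors or initial defects.
    defective = df.loc[(df["age_years"] == 0) & (df["condition"] < 100),
                       "asset_id"].unique()
    return df[~df["asset_id"].isin(defective)]
```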

Asset and Location Selection

The United States Air Force has a long history of recording and tracking its asset condition data in BUILDER (11 years of condition assessment data). Data from 20 Air Force installations were included in this analysis, drawn from across the contiguous United States (Fig. 2). These installations are spatially dispersed within the US to provide a representative sample of the approximately 60 active-duty Air Force installations. They were selected to represent various mission sets within the Air Force, so as not to bias the analysis toward one particular function, e.g., mobility versus fighter aircraft missions. The 20 installations, which comprise 1,631 facilities, hold 35% of the chillers and 33% of the air handlers in the Air Force inventory within the contiguous US.
Fig. 2. Location of United States Air Force installations included in the case study analysis.
For this research, air handler and chiller asset types were chosen because they represent assets that require a large initial investment, have several major competitive manufacturers, have moderate service lives [about 20 years for chillers and 24 years for air handlers (ASHRAE 2021)], and are complex enough to have multiple parts that combine to generate condition changes over their lifespan. For chillers, two manufacturers are compared, which include a total of 991 units across the 20 Air Force installations considered in this work. While there are many major manufacturers of chiller units, the two most common across the 20 installations were chosen, representing 46% of all chillers at these installations. There is large diversity in chillers, both in terms of location (indoor and outdoor) and size (20–1,500 t). For air handlers, three manufacturers are included in the analysis, comprising 2,763 individual units. These three manufacturers represent 43% of the total air handlers at the selected installations. Again, both indoor and outdoor air handlers are included, with units ranging from 2,000 cubic feet per minute (CFM) to 75,000 CFM. The goal of this study is to show the use of a performance-based metric to make manufacturer selection decisions. As such, manufacturer names are unimportant and are removed from the presentation of the results to avoid any endorsement of one manufacturer over another. However, each manufacturer is prevalent in the United States and the international market. The chosen manufacturers for this analysis represent roughly half of all assets at each installation, but the distributions do not equal 100% of units within each installation's portfolio. Some manufacturers represent only a small percentage of assets at an installation, and others are only regionally available, which would not be suitable for manufacturer selection decisions at a national level for an organization like the USAF. Despite the distributions not including all assets at an installation, this selection of manufacturers captures the dominant manufacturers at each location.
Sufficient data must be available for an organization's assets to reproduce the performance metric as laid out in this paper. As such, asset owners should ensure that manufacturer data and asset condition data are properly tracked and maintained. For this study, assets with incomplete or obviously erroneous manufacturer or inspection data were excluded from the analysis. The exclusion of assets without manufacturer information reduced the available data points by a further 38% (3,243 assets were removed), producing the final count of 3,754 assets for analysis. If organizations take care to record all metadata for their built infrastructure assets, they will increase the pool of data available for an analysis such as this.

Methods

This study produces a metric that measures the technical performance of an asset, which can be used to guide manufacturer selection decisions. The performance metric developed in this work is based on the asset condition at the time of inspection, the remaining service life, and the total variation in the asset condition compared to similar assets. It provides decision-makers with a basis of comparison for determining which assets are performing better than others. As previously stated, this performance metric quantifies the technical performance of an asset, which can be used as a criterion in a TCO evaluation (Fig. 3). The performance metric, if valued meaningfully alongside the cost of ownership criteria, could be used to strengthen or change manufacturer selection decisions.
Fig. 3. Framework diagram for manufacturer selection.
To create the performance metric, an equation following a weighted sum model approach is adopted. The weighted sum model enables decision-makers to indicate which equation parameters are most important by varying the corresponding weights. Eq. (1) shows the performance metric equation used in this analysis:
\[
\text{Performance} = \left(w_i \times \text{Condition}_{\text{scaled}}\right) + \left(w_j \times \text{RSL}_{\text{scaled}}\right) + \left[w_k \times \left(1 - \text{RMSE}_{\text{scaled}}\right)\right]
\tag{1}
\]
This equation uses three data parameters to describe the technical performance of an asset at a point in time. The Condition_scaled parameter is the observed condition of the asset, converted from the raw condition assessment (0–100) obtained directly from the BUILDER database. The Condition_scaled parameter is converted using a minimum-maximum normalization technique. This same normalization technique is used for each of the three equation parameters and is explained in more detail in the following paragraphs.
The RSL_scaled parameter is a measure of the remaining service life (RSL), the number of years remaining until the component is expected to fail and need replacement. RSL is a value taken directly from BUILDER, based on the BUILDER degradation curve and adjusted for past inspection observations. This means that RSL is dynamic, adjusting to how the asset is actually performing; if the asset is degrading faster than originally expected, the RSL value will decrease. It is also scaled to a number between zero and one. This parameter provides a measure of asset age that rewards assets with more years of useful service life remaining over those expected to fail sooner.
The RMSE_scaled parameter provides for the consideration of condition variance, representing the total variation in the condition of assets at one location compared to the average condition of all assets within the analysis. The root-mean-square error (RMSE) is a measure of variability and is utilized for this analysis (Functional Networks 2005; Neill and Hashemi 2018). A higher RMSE value indicates large variation, and a smaller value indicates lower variation. Computing software is used to calculate the RMSE of each manufacturer's assets at a location compared to the entirety of that manufacturer's assets in the analysis. Thus, there is one RMSE value for each manufacturer at each location, calculated by comparing assets of similar ages. RMSE is also scaled to a value between zero and one. The scaled RMSE value is subtracted from one so that assets with conditions closer to the mean have a greater influence than assets that differ strongly from the mean condition. Ultimately, facility managers should value assets that perform consistently, as this enables more reliable forecasting of maintenance, repair, and replacement. Because condition variation is measured for a location's assets against all assets in the inventory, a manufacturer's variation parameter will be the same for all assets at one location.
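The following sketch shows one plausible way to compute this location-level RMSE. The grouping column names are assumptions, and binning "similar ages" to the nearest whole year is a simplification the paper does not specify.

```python
import pandas as pd

def manufacturer_location_rmse(df: pd.DataFrame) -> pd.DataFrame:
    """One RMSE per (manufacturer, location) pair: the deviation of a
    location's asset conditions from the analysis-wide mean condition of
    the same manufacturer's assets at similar ages. Column names
    (manufacturer, location, age_years, condition) are illustrative."""
    df = df.copy()
    # Bucket "similar ages" to the nearest whole year of service.
    df["age_bin"] = df["age_years"].round()

    # Analysis-wide mean condition for each manufacturer and age bucket.
    fleet_mean = df.groupby(["manufacturer", "age_bin"])["condition"].transform("mean")
    df["sq_err"] = (df["condition"] - fleet_mean) ** 2

    # Root of the mean squared error within each manufacturer-location group.
    rmse = (df.groupby(["manufacturer", "location"])["sq_err"]
              .mean().pow(0.5).rename("rmse"))
    return rmse.reset_index()
```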
A performance metric value of 1 indicates the highest performance compared to like assets, and 0 is the lowest performance when compared to like assets. As previously mentioned, each parameter of the performance metric is scaled using a minimum-maximum normalization technique. For each case, a parameter value of 1 indicates the highest value within the dataset, and a 0 indicates the lowest value within that dataset for that parameter. A zero value does not indicate absolute zero but simply represents the minimum value within that dataset. The minimum-maximum normalization is a standard method of scaling in which each value is transformed to a number relative to its distance from the minimum and maximum values within the dataset (Jain et al. 2005, 2018). All values for each parameter are scaled prior to the final calculation of the performance metric. The implementation of the weighting factors attached to each parameter in the equation results in the summative performance metric being a value between 0 and 1.
As previously stated, the calculation of the performance metric also includes weighting factors that allow decision-maker input to indicate which of the three parameters is most important for decision-making: condition, age, or variability. Each parameter has a unique weighting factor; all weighting factors must be greater than or equal to 0 (w_i ≥ 0, w_j ≥ 0, w_k ≥ 0) and must sum to 1 (w_i + w_j + w_k = 1). For example, a decision-maker may decide that condition and variability in the condition are paramount and give them weighting factors of 0.4 and 0.4, respectively, which leaves age with a weighting factor of 0.2. For the analyses in this study, all weighting factors were set to equal weights (w_i = w_j = w_k = 0.333).
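Putting the pieces together, Eq. (1) with minimum-maximum scaling and decision-maker weights can be sketched as follows. The column names and the normalization scope (the DataFrame passed in) are assumptions for illustration.

```python
import pandas as pd

def min_max(series: pd.Series) -> pd.Series:
    """Minimum-maximum normalization: scale each value to [0, 1] by its
    distance between the dataset minimum and maximum."""
    return (series - series.min()) / (series.max() - series.min())

def performance_metric(df: pd.DataFrame,
                       wi: float = 1/3, wj: float = 1/3, wk: float = 1/3) -> pd.Series:
    """Eq. (1). Assumes columns condition, rsl, and rmse (the location-level
    RMSE joined back onto each asset row); names are illustrative."""
    # Weights must be non-negative and sum to 1.
    assert min(wi, wj, wk) >= 0 and abs(wi + wj + wk - 1.0) < 1e-9
    return (wi * min_max(df["condition"])
            + wj * min_max(df["rsl"])
            + wk * (1.0 - min_max(df["rmse"])))
```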
Table 1 shows an example calculation of the performance metric for three air handler assets from different manufacturers at one location. These three assets are example units; the scaling operation applied to each parameter is computed over a larger subset of data (all assets of the same manufacturer brand at the location), and the results of those operations are shown. Each parameter of the performance metric calculation is shown in a column with the actual value, and the scaled value after the minimum-maximum normalization operation is shown in parentheses. The final column presents the calculated performance metric for the asset. This example shows that Asset 2 has the highest performance metric because it is in the best condition relative to the other two assets and has the most anticipated years of service life remaining. Asset 1 has the best value for variability, meaning its condition is most similar to the average condition for similar assets; however, a lower condition and RSL result in a lower performance metric for Asset 1. Finally, Asset 3 is in very poor condition, has a high variability value, and has only one year of anticipated service life remaining; therefore, Asset 3 has the lowest performance metric value of the three assets.
Table 1. Example calculation of performance metric
Asset | Asset condition (Condition_scaled) | Remaining service life, RSL (RSL_scaled) | Variability in asset condition, RMSE (1 - RMSE_scaled) | Performance metric
1 | 80 (0.4872) | 4 (0.1600) | 8.8357 (0.8662) | 0.5045
2 | 88 (0.7941) | 7 (0.7500) | 10.9868 (0.7853) | 0.7765
3 | 61 (0.2444) | 1 (0.0769) | 10.2158 (0.6940) | 0.3384
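As a check, substituting Asset 1's scaled values into Eq. (1) with equal weights reproduces the tabulated metric:

\[
\text{Performance}_1 = \tfrac{1}{3}(0.4872) + \tfrac{1}{3}(0.1600) + \tfrac{1}{3}(0.8662) = \tfrac{1.5134}{3} \approx 0.5045
\]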

Results

Boxplots illustrate the performance of assets for each Air Force installation between the 25th and 75th percentiles (Fig. 4). The markers represent median asset performance. The figure columns represent the two asset types, chillers (left) and air handlers (right). The plots in each column represent the manufacturers (two for chillers and three for air handlers). Each plot includes the percent distribution on the horizontal axis, which is the percentage of a particular manufacturer each installation has in its inventory. The performance metric is shown on the vertical axis. The average performance within each manufacturer category is shown by the horizontal dotted line on each plot; this value represents the average performance of a manufacturer across the selected installations. Using the average value of the performance metric at each installation, a line of best fit, represented by green and red dashed lines, illustrates whether there is a relationship between performance and manufacturer consistency across installations. For most assets and manufacturers presented in this study, there is a positive relationship (green dashed line). For example, Manufacturer A for air handlers shows a positive trend in the performance metric, increasing from 0.43 to 0.65 as the percent distribution increases. This positive trend suggests that increasing the share of one manufacturer in an installation's portfolio has a positive effect on the performance of those assets. Overall, these best-fit lines have low correlation values, so they do not provide statistically significant results, but they do illustrate a general trend in the relationship between percent distribution and asset performance. This increase in performance may be due to the efficiency gained by technicians maintaining a less diverse pool of assets (Tavakoli et al. 1989). The trendlines point to some of the benefits of manufacturer selection addressed in the "Introduction."
Fig. 4. Location-specific performance metric plots.
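For readers wishing to reproduce one panel of Fig. 4, the following matplotlib sketch is one plausible construction; it assumes a per-asset DataFrame with illustrative columns pct_distribution and performance, and the paper's exact plotting choices are not specified.

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

def plot_location_boxes(df: pd.DataFrame) -> None:
    """Sketch of one Fig. 4 panel for a single manufacturer/asset type.

    Assumes one row per asset with columns pct_distribution (that
    manufacturer's share of the installation inventory, %) and
    performance (Eq. (1) output). Column names are illustrative."""
    fig, ax = plt.subplots()
    grouped = df.groupby("pct_distribution")["performance"]
    positions = sorted(grouped.groups)
    ax.boxplot([grouped.get_group(p).values for p in positions],
               positions=positions, widths=1.0, showfliers=False)

    # Manufacturer-wide average performance (horizontal dotted line).
    ax.axhline(df["performance"].mean(), linestyle=":", color="gray")

    # Line of best fit through installation means vs. percent distribution.
    means = grouped.mean()
    slope, intercept = np.polyfit(means.index, means.values, deg=1)
    xs = np.array([min(positions), max(positions)])
    ax.plot(xs, slope * xs + intercept, linestyle="--")

    ax.set_xlabel("Percent distribution (%)")
    ax.set_ylabel("Performance metric")
    plt.show()
```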
For chiller units, the boxplots show that Manufacturer A has an average performance metric of 0.61, and Manufacturer B has an average performance metric of 0.59. Chiller units have percent distribution values between 2.08% and 63.53%. For air handlers, boxplots show that Manufacturer A has an average performance metric of 0.56, Manufacturer B has an average performance metric of 0.59, and Manufacturer C has an average performance metric of 0.52. Air handler units have a percent distribution value range between 0.76% and 33.74%. Boxplots provide a summary figure for facility managers to make manufacturer selection decisions at the installation level. Installations can compute the performance metric for all their assets of varying manufacturers to identify whether there is a manufacturer that provides higher performance at their installation over another. This information can aid in manufacturer selection decisions.
Bidirectional boxplots, which are similar to rangefinder plots, combine the installation-level performance and percent distribution metrics to more concisely display manufacturer performance (Fig. 5). These boxplots provide an overview of how a manufacturer performs across all Air Force installations compared to another and could provide a useful framework for an organization with multiple operating locations to make enterprise-level decisions. These boxplots show that for chiller units, the performance metric of Manufacturer A ranges between 0.00 and 1.00, but 50% of the assets have performances that fall between 0.48 and 0.69. Manufacturer B has a slightly smaller range of performance (0.00–0.94), although its interquartile range is the same as Manufacturer A, falling between 0.48 and 0.69. For air handlers, Manufacturer A has a range of performance metrics between 0.01 and 1.00, with the majority of the assets falling between 0.47 and 0.65. Manufacturer B of air handlers has a performance between 0.01 and 0.96, with 50% of the assets ranging between 0.40 and 0.68. Manufacturer C of air handlers has a range of performance metrics between 0.00 and 0.97, but the majority of the data falls between the values of 0.42 and 0.65. The air handler analysis shows that the average performance metric of Manufacturer A is very similar to Manufacturer B and Manufacturer C. Given the relative performance similarity between all manufacturers, the Air Force enterprise may allow facility managers to select whichever manufacturer provides the highest performance at their individual locations. These results and conclusions are specific for this particular dataset of Air Force infrastructure assets. Other organizations may follow this methodology to produce their own calculations and make their own conclusions regarding manufacturer performance, which may show a significant difference in asset performance between manufacturers. The bidirectional boxplots provide a useful validation tool for enterprise-wide decisions that may not be as apparent on the location-specific boxplots (Fig. 4). However, the bidirectional plots do lose the visualization of any trends in the data for performance metric and percent distribution across installations. An enterprise-level facility manager could employ this tool to validate manufacturer selection decisions made by spatially distributed operating locations.
Fig. 5. Bidirectional manufacturer plots for performance metrics of (a) chillers; and (b) air handlers.

Discussion

The results presented in the “Results” section illustrate the utility of a metric targeted to evaluate the technical performance of assets in order to augment, make, and validate manufacturer selection decisions. Facility managers at individual locations can use the results of the location-specific performance metric analysis (Fig. 4) to compare which manufacturer provides the best technical performance at their location and make procurement decisions accordingly. If the analysis shows that the manufacturer they are most heavily invested in provides the best performance, then they can continue to invest in that manufacturer brand. If the location-specific boxplots show that better performance is achieved by a manufacturer that they currently are not heavily invested in, then the facility managers can use that analysis as a rationale to switch manufacturers when procuring new assets.
The bidirectional boxplots provide oversight for enterprise-level facility managers and show the overall manufacturer portfolio across all operating locations. This tool allows them to validate manufacturer selection decisions made at lower tiers of their organization. It shows which manufacturer brands their locations are most heavily invested in and whether those are good investment choices based on the performance metric. In the case of the 20 Air Force installations analyzed in this study, the bidirectional boxplots show that, overall, enterprise-level decisions should not be made to choose only one manufacturer. For this case, the average performance metrics of all manufacturers are similar enough that directing installations to choose one manufacturer over another does not make sense. Enterprise-level facility managers could instead use the location-specific performance metric analysis to help bases validate their choice of manufacturers. For example, if a location chooses to proceed with the procurement of a particular asset manufacturer, but the data show that the manufacturer does not provide higher performance than another, an enterprise-wide facility manager could redirect the location to a manufacturer that does provide higher performance. An example of this redirection arises when comparing the performance of air handlers at two Air Force installations [Fig. 6(a)]. This plot shows that at each installation, there is a manufacturer that provides higher performance than the others. For Barksdale Air Force Base (AFB), LA, Manufacturer A provides higher performance than Manufacturers B and C. For Seymour Johnson AFB, NC, Manufacturer B provides higher performance than Manufacturers A and C. Even though Manufacturers A and B have the same investment rate in terms of percent distribution, Manufacturer B provides greater performance, making the selection clear at the installation level. However, if Barksdale AFB were to choose Manufacturer B for future procurements, an enterprise-wide facility manager could use this tool to show the installation data demonstrating that its Manufacturer B assets do not perform as well as those from Manufacturer A, and therefore, it should continue to invest in Manufacturer A. At Seymour Johnson, the installation can use the data to see that, while Manufacturer B provides higher performance, its portfolio is split between Manufacturers A and B. Decision-makers could use the data to decide that, as their Manufacturer A assets reach the end of their service life, they should be replaced with Manufacturer B units.
Fig. 6. Boxplots of two installation analyses for performance metrics for (a) equal weighting factors; (b) condition-weighted model; and (c) variability-weighted model.
This analysis provides tools to aid decision-making at both the local-installation level and the enterprise-wide level. Additionally, the previous example points out that these two installations reach separate conclusions as to which air handler brand to select, either Manufacturer A or Manufacturer B. The bidirectional boxplot for air handlers [Fig. 5(b)] shows relatively equal performance from each of these manufacturers, providing further evidence that, for the Air Force, there should not be an enterprise-wide decision for manufacturer selection of air handlers.
This analysis provides a measure of technical performance not previously described in the literature for building systems and built infrastructure that can be included in TCO calculations. While this novel approach solves a problem for facility managers, technical performance alone is not a sufficient basis of comparison when selecting an asset. The technical performance analysis allows facility managers to see which manufacturer brand provides the highest performance for their assets, but it does not speak to the costs of purchasing and maintaining an asset. Technical performance needs to be factored into a TCO assessment so that total costs, as well as technical performance, can be considered when facility managers make initial procurement decisions.
As previously stated, the weighting factors included in the performance metric calculation provide the opportunity for decision-makers to decide which of the three parameters are most important when calculating the technical performance of assets. These weighting factors can be varied to provide a customized formula for the performance metric that is tailored to the preferences of the decision-maker. Varying the weighting factors to create a condition-weighted model [Fig. 6(b)] or a variability-weighted model [Fig. 6(c)] shows the effect each parameter has on the performance metric. These two examples display the same data as the equal-weighting model [Fig. 6(a)], with the only change being the weights used in the performance metric equation. The condition-weighted model weights the condition of the asset more heavily (w_i = 0.50), with the age and variability parameters equally weighted (w_j = w_k = 0.25). In the variability-weighted model, the variation in condition is weighted more heavily (w_k = 0.50), with the condition and age parameters equally weighted (w_i = w_j = 0.25). These shifts in parameter weighting show the effect that decision-maker preference may have on the outcome of a manufacturer selection decision. Overall, the decision as to which manufacturer to select does not change, but the variability-weighted model produces tighter spreads of performance metrics, which may make the best manufacturer more evident. Additional parameters could also be added to this performance metric calculation, such as the frequency of required preventative maintenance, which would capture how often technicians must attend to the asset to keep it in good working condition. The likelihood of corrective maintenance could also be factored into the calculation to describe the probability of an asset needing a major repair to remain in good condition. To qualify whether an asset is over- or underperforming relative to expectation, a ratio of the RSL to the remaining years based on the original design life could be added as an additional parameter. Because RSL changes depending on the condition of the asset at each assessment, if the assessment shows the asset is degrading more quickly than expected, the RSL will shorten to a value smaller than the years remaining based on the original design life. This ratio would indicate whether the asset is doing better or worse than expected relative to its age. These additional parameters would also include weighting factors to give the decision-maker influence over the fraction of the performance metric attributed to each parameter. The equation developed in this study can be used to make and validate manufacturer selection decisions, and the performance metric framework is flexible and customizable, allowing additional parameters to be added and weighting factors to be tuned to the decision-maker's preference.
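A weight sensitivity comparison such as Fig. 6 can be scripted directly on top of the performance_metric() sketch from the "Methods" section. The weight vectors below mirror the condition- and variability-weighted models; the assets DataFrame is a placeholder for the filtered inspection data.

```python
# Recompute the metric under different decision-maker preferences, using
# the performance_metric() sketch defined earlier.
weight_models = {
    "equal weights":        dict(wi=1/3, wj=1/3, wk=1/3),
    "condition-weighted":   dict(wi=0.50, wj=0.25, wk=0.25),
    "variability-weighted": dict(wi=0.25, wj=0.25, wk=0.50),
}
for name, weights in weight_models.items():
    scores = performance_metric(assets, **weights)  # assets: filtered DataFrame
    print(f"{name}: median performance = {scores.median():.3f}")
```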
While the framework is flexible so that facility managers and decision-makers can tailor it to meet their needs, the parameterization presented in this study provides value in its current form. The inclusion of asset condition, remaining service life, and variability in asset condition captures major considerations for facility managers. The asset condition depicts the current operating efficiency (or lack thereof) of an asset, the remaining service life quantifies how far into an asset's useful service life it currently is, and the variability in asset condition provides a measure of consistency so facility managers can understand whether an asset will perform similarly to other assets. These parameters capture what are likely the most important data points for decision-makers to consider when making initial procurement decisions. So, while the framework itself is flexible, this analysis has identified useful parameters that give facility managers a way to make manufacturer selection decisions based on asset condition, informed by service life and variance.
In addition to the utility of developing a technical performance metric, this analysis also underscores the general benefits of collecting and maintaining built infrastructure data. Collected data enable facility managers to make sound decisions that benefit their organizations. Whether facility managers need to decide which maintenance regimen to implement for their asset portfolios or which asset to select when building their portfolios, data provide a tool to enable sound decision-making. This study focuses on Air Force data and highlights how, with just 11 years of condition assessment data, a performance metric can be calculated, enabling a determination of the technical performance of assets at multiple levels in order to make manufacturer selection decisions.
One potential obstacle to using this technical performance metric to make and validate manufacturer selection decisions is the lack of data for new technologies and new manufacturers that become available in the future. This methodology is predicated on using historical data from manufacturers to quantify the performance of their assets, and without such data, facility managers are unable to calculate the performance of newer manufacturers in which they have not invested. Additionally, this proposed implementation of performance-based manufacturer selection does not safeguard against one manufacturer gaining a monopoly over all assets in a portfolio and subsequently reducing its effort to deliver high-performing assets. If a location invests solely in one manufacturer after completing this analysis, it has no point of comparison for those assets against any other manufacturer, nor does that manufacturer have an incentive to sustain the performance of those assets. Both of these points are legitimate concerns for facility managers, but the authors believe the benefits of performance-based manufacturer selection outweigh those of the no-selection alternative. Additionally, calculating an asset performance metric prior to making sourcing decisions can establish a benchmark that can be tracked over time and compared against the metric after switching manufacturers.
This analysis includes asset data from 20 Air Force installations that were spatially distributed across the contiguous United States. These installations are located in different climate zones that are subjected to varying amounts and extremes of climate variables like temperature, rainfall, humidity, and solar irradiance. Also, indoor and outdoor chillers and air handlers were included, which have different levels of climatic exposure depending on their location. Climate variables may play a role in asset condition and ultimately affect asset performance, which could be investigated by analyzing any trends in performance across climate zones. Further research could be carried out to analyze any possible influences of climatic conditions on asset performance. This level of analysis may lead to conclusions on manufacturer performance in a particular climate zone.

Conclusion

Analyzing the technical performance of assets offers a data-driven solution that provides facility managers at all levels of infrastructure management the analytical tools to make and validate performance-based manufacturer selection decisions. As research points out, technical performance has not been widely included in TCO evaluations, which is a limiting factor of TCO models. Without any consideration of the technical performance of assets, TCO evaluations miss the opportunity to capture the operational capabilities of assets. Ultimately, a performance-based metric should be incorporated as a criterion in total cost of ownership assessments (Fig. 3).
Using asset condition, a measure of remaining asset service life, and variation in asset condition, a performance metric is created to assess the technical performance of an asset. Using condition data collected on thousands of Air Force building assets, performance metrics are calculated for each asset type, and bidirectional boxplots are used to visualize the results. The performance metric of chillers and air handlers at 20 different Air Force installations shows how different levels of asset management can utilize this analysis to make manufacturer selection decisions. The multilevel analysis demonstrates utility for different levels of an organization: local installations managing assets day-to-day and enterprise-level management overseeing the operation of several locations. The location-specific boxplots, as well as the bidirectional boxplots, highlight the benefit of the performance metric calculation for making manufacturer selection decisions at many tiers of an organization. The equation for the performance metric is customizable through weighting factors that allow decision-makers to express which asset parameter should carry the most weight in technical performance. Additional parameters can also be included in the calculation to account for other asset-related variables, depending on the goals of decision-makers.
Future research should incorporate the technical performance of assets into a total cost of ownership model to capture all asset costs over the life cycle. Combined with initial procurement, maintenance, and repair costs, performance can aid decision-makers in choosing the right asset for their inventory. Performance is one aspect of the total cost of ownership evaluation; combining it with other cost criteria provides a holistic assessment of all costs incurred over the lifespan of an asset.
A technical performance analysis provides a starting point for manufacturer selection decisions and enables facility managers to choose the manufacturer brand that offers the highest technical performance. This methodology can empower facility managers in all industries and at all echelons of built asset management to choose the right manufacturer for their portfolio.

Data Availability Statement

Some or all data, models, or code generated or used during the study are proprietary or confidential in nature and may only be provided with restrictions (e.g., anonymized data). The raw dataset consists of BUILDER Sustainment Management System data, which belongs to the US Air Force. Anonymized summary data could be provided.

References

ASHRAE (The American Society of Heating, Refrigerating and Air-Conditioning Engineers). 2021. “ASHRAE owning and operating cost database.” Accessed April 8, 2021. http://weblegacy.ashrae.org/publicdatabase/system_service_life.asp?selected_system_type=1.
Bartels, L. B., L. Y. Liu, K. El-Rayes, N. El-Gohary, M. Golparvar, and M. N. Grussing. 2020. “Work optimization with association rule mining of accelerated deterioration in building components.” J. Perform. Constr. Facil. 34 (3): 04020033. https://doi.org/10.1061/(ASCE)CF.1943-5509.0001441.
BUILDER Sustainment Management System. 2012. “BUILDER™ sustainment management system.” Accessed September 21, 2020. https://www.erdc.usace.army.mil/Media/Fact-Sheets/Fact-Sheet-Article-View/Article/476728/builder-sustainment-management-system/.
Coffelt, D. P., and C. T. Hendrickson. 2010. “Life-cycle costs of commercial roof systems.” J. Archit. Eng. 16 (1): 29–36. https://doi.org/10.1061/(ASCE)1076-0431(2010)16:1(29).
Forasté, J. A., R. Goo, J. Thrash, and L. Hair. 2015. “Measuring the cost-effectiveness of LID and conventional stormwater management plans using life cycle costs and performance metrics.” In Low impact development technology: Implementation and economics, 54–73. Reston, VA: ASCE.
Functional Networks. 2005. Mathematics in science and engineering. New York: Elsevier.
Grussing, M. N. 2014. “Life cycle asset management methodologies for buildings.” J. Infrastruct. Syst. 20 (1): 04013007. https://doi.org/10.1061/(ASCE)IS.1943-555X.0000157.
Grussing, M. N., L. Y. Liu, D. R. Uzarski, K. El-Rayes, and N. El-Gohary. 2016. “Discrete Markov approach for building component condition, reliability, and service-life prediction modeling.” J. Perform. Constr. Facil. 30 (5): 04016015. https://doi.org/10.1061/(ASCE)CF.1943-5509.0000865.
Jain, A., K. Nandakumar, and A. Ross. 2005. “Score normalization in multimodal biometric systems.” Pattern Recognit. 38 (12): 2270–2285. https://doi.org/10.1016/j.patcog.2005.01.012.
Jain, S., S. Shukla, and R. Wadhvani. 2018. “Dynamic selection of normalization techniques using data complexity measures.” Expert Syst. Appl. 106 (Sep): 252–262. https://doi.org/10.1016/j.eswa.2018.04.008.
Kappner, K., P. Letmathe, and P. Weidinger. 2019. “Optimisation of photovoltaic and battery systems from the prosumer-oriented total cost of ownership perspective.” Energy Sustainability Soc. 9 (1): 1–24. https://doi.org/10.1186/s13705-019-0231-2.
Lee, D. M., and P. R. Drake. 2010. “A portfolio model for component purchasing strategy and the case study of two South Korean elevator manufacturers.” Int. J. Prod. Res. 48 (22): 6651–6682. https://doi.org/10.1080/00207540902897780.
McGean, T. J. 2001. “Development of standards for automated people movers.” In Automated people movers, 1–8. Reston, VA: ASCE.
Neelamkavil, J. 2011. “Condition-based maintenance in facilities management.” In Computing in civil engineering (2011), 33–40. Reston, VA: ASCE.
Neill, S. P., and M. R. Hashemi. 2018. “Ocean modelling for resource characterization.” In Chap. 8 in Fundamentals of ocean renewable energy, edited by S. P. Neill, and M. R. Hashemi, 193–235. Cambridge, MA: Academic Press. https://doi.org/10.1016/B978-0-12-810448-4.00008-2.
Rehan, T., Y. Qi, and A. Werner. 2018. “Life-cycle cost analysis for traditional and permeable pavements.” In Proc., Construction Research Congress 2018, 422–431. Reston, VA: ASCE.
Roda, I., and M. Garetti. 2014. “The link between costs and performances for total cost of ownership evaluation of physical asset: State of the art review.” In Proc., 2014 Int. Conf. on Engineering, Technology and Innovation (ICE), 1–8. New York: IEEE. https://doi.org/10.1109/ICE.2014.6871544.
Roda, I., M. Macchi, and S. Albanese. 2020. “Building a total cost of ownership model to support manufacturing asset lifecycle management.” Prod. Plann. Control 31 (1): 19–37. https://doi.org/10.1080/09537287.2019.1625079.
SMS (Sustainment Management System). 2020. “Sustainment management system.” Accessed September 22, 2020. https://www.sms.erdc.dren.mil/#portfolioModal7.
Tavakoli, A., E. D. Taye, and M. Erktin. 1989. “Equipment policy of top 400 contractors: A survey.” J. Constr. Eng. Manage. 115 (2): 317–329. https://doi.org/10.1061/(ASCE)0733-9364(1989)115:2(317).
Xu, Y., et al. 2012. “Cost engineering for manufacturing: Current and future research.” Int. J. Computer Integr. Manuf. 25 (4–5): 300–314. https://doi.org/10.1080/0951192X.2010.542183.


History

Received: Feb 1, 2021
Accepted: Apr 16, 2021
Published online: Jul 5, 2021
Published in print: Oct 1, 2021
Discussion open until: Dec 5, 2021

Authors

Affiliations

Sarah L. Brown
Student, Dept. of Systems Engineering and Management, Air Force Institute of Technology, 2950 Hobson Way, Wright-Patterson Air Force Base, OH 45433 (corresponding author). Email: [email protected]
Steven J. Schuldt, Ph.D., P.E.
Assistant Professor of Engineering Management, Dept. of Systems Engineering and Management, Air Force Institute of Technology, 2950 Hobson Way, Wright-Patterson Air Force Base, OH 45433. Email: [email protected]
Michael N. Grussing, Ph.D., P.E., M.ASCE
Researcher, Engineer Research and Development Center, US Army Corps of Engineers, 2902 Newmark Dr., Champaign, IL 61822. Email: [email protected]
P.E.
Researcher, Engineer Research and Development Center, US Army Corps of Engineers, 2902 Newmark Dr., Champaign, IL 61822. ORCID: https://orcid.org/0000-0002-4029-3747. Email: [email protected]
Michael A. Johnson, P.E.
Researcher, Engineer Research and Development Center, US Army Corps of Engineers, 2902 Newmark Dr., Champaign, IL 61822. Email: [email protected]
Justin D. Delorit, Ph.D., P.E., M.ASCE
Assistant Professor of Engineering Management, Dept. of Systems Engineering and Management, Air Force Institute of Technology, 2950 Hobson Way, Wright-Patterson Air Force Base, OH 45433. Email: [email protected]
