Open access
Technical Papers
Sep 14, 2021

Installation Quality Framework: Investment Return Approach for Energy Savings on Building Product Installation

Publication: Journal of Construction Engineering and Management
Volume 147, Issue 11

Abstract

A case study was conducted on 44 residential homes using either traditional house wrap or the ZIP System to measure overall airtightness and compare estimated energy usage. The labor, material, and overhead and profit (O&P) costs were analyzed and used to determine the optimal choice for long-term benefits in terms of cost and performance. The quality of insulation installation is considered a key factor in any strategy for reducing energy consumption. Improved installation practices can improve the airtightness of common wall assemblies, reducing building energy performance gaps and providing insight on how to allocate resources better. A framework was developed to analyze operational costs and building energy performance to address how installation quality factors into the return on investment in building construction for heating and cooling systems within the thermal envelope. With this methodology, aggressive energy performance goals can be met while balancing the tradeoff between installation techniques and building system efficiency based on the introduced probabilistic investment return (PIR) metric.

Introduction

The construction and manufacturing industries have long evaluated the interconnectedness of time, quality, and cost. The emphasis in project planning and scheduling has been on managing the relationship between time and cost, with an implicit assumption of a fixed level of quality that is seldom explicitly stated (Liberatore and Pollack-Johnson 2013). Cost is typically considered the most important factor in choosing a contractor, but a reduced cost often comes with the risk of contractors using inferior materials or unskilled labor (Hu and He 2014). In addition, failures or shortcomings in performance have been linked to the materials and services related to building construction (Love 2002). Through continual evaluation, the trade-off among time, quality, and cost has provided insight on the optimum choice of a contractor for the job (Forcada et al. 2017); their skills ultimately ensure standard performance on the job with pristine quality and at the desired cost, while achieving maximum productivity. The effectiveness of the chosen contractor is largely dependent on their claimed guarantee of meeting job requirements and producing quality work. The risk in the selection of a contractor has been related to varying issues and has the potential to negatively affect the outcome of the overall project (Cheung et al. 2006; Chen et al. 2014). Because the concept of quality is subjective in nature, the construction industry has struggled to define and quantify quality and value (Sullivan 2011).
In recent decades, there has been an effort to reduce the energy demands of residential and commercial buildings (Clean Energy Solutions Center 2015). To address this challenge, the US DOE Building America program established a research agenda targeting market-relevant strategies to achieve 40% reductions in existing home energy use by 2030. Deep energy retrofits are part of the strategy to meet and exceed this goal (Less and Walker 2014; Brennan and Iain 2015). Another method for reducing energy consumption has focused on closing the performance gap. The term performance gap denotes deviations between a building's planned and actual performances (Frei et al. 2017). This gap corresponds to an increase in energy use and, therefore, a decrease in energy savings. The discrepancy between actual and theoretical savings is caused (among other factors) by the energy performance gap, that is, the discrepancy between the actual and calculated energy consumption of a building. One study illustrates that it is not possible to explain residential energy consumption by relying solely on building simulation models (van den Brom et al. 2019). Closing this gap and moving toward buildings with predictable energy demands will aid in achieving the widespread goal of decreased energy consumption. To move toward this goal, building performance is broken down in terms of the performance of the individual products that make up the building and its envelope. Throughout this research, a framework is provided for analyzing the impact of product performance on building performance through an installation-quality perspective.

Literature Review

A performance approach is concerned with matching the description of what a product is required to achieve in terms of functional, technical, and sometimes economic requirements (Almeida et al. 2010). According to Product Reliability: Specification and Performance (Murthy et al. 2008), manufacturers design a product with certain performance requirements specified during the product development process. Before production begins, prototypes are built with the goal of meeting this desired performance, set forth by the manufacturer, and are modified until the desired performance is reached (Murthy et al. 2008). In terms of whole-building performance, the International Energy Agency states in Annex 55 that achieving the expected performance requires that the factors that the design concept is based upon fulfill certain standards and are within expected ranges. These factors could include artisanship, interior and exterior climate, and maintenance and/or material properties (Hagentoft 2017). Although it is recognized that many factors can affect performance, the present research focuses heavily on the impact of the craft during installation and not on the quality of the products themselves. Improper installation can lead to poor performance and energy inefficiency, and this discrepancy will be used to develop a return on investment (ROI) that accounts for upfront costs.
In a field study, the airtightness of 12 buildings with identical designs was measured, and the results revealed that the worst performing building was more than twice as leaky as the most airtight building (Pallin et al. 2017). In another study of indoor air quality and ventilation systems, it was noted that complex ventilation systems were more prone to installation and performance issues (Less et al. 2015). The product performance gap is considered to be the difference between a product's expected performance and its actual performance; if the manufacturer's claimed performance is not met once the product is in use, this constitutes a performance gap on the product level. Within the scope of a building, different occurrences arise from product performance gaps, and the present research focused on products that were specifically susceptible to poor performance due to errors that occurred during installation as a result of the craft. Pallin et al. (2017) noted that a traditional blower door test is typically used to measure the ACH50 (air changes per hour at 50 Pa) of buildings. Such a blower door test represents the building's overall performance and, consequently, not the airtightness of different building elements, such as roofs, walls, or foundations. Without such distinctions, it is difficult to identify whether air leakage exists and at which locations, connections, and/or details it occurs. Using a nontraditional methodology, Pallin et al. (2017) were able to measure the airtightness of building elements by simultaneously measuring the air leakage through roofs, walls, foundations, and their connections individually. This segmental approach is similar to what is proposed in this paper; more specifically, product installations are examined to capture the individual faults that make up overall building inefficiency.
Certain building products rely more on proper craft performed by skilled laborers than others. Heating, ventilation, and air-conditioning (HVAC) systems, heat pumps, and insulation systems are products that are notably susceptible to poor performance due to installation errors. The lack of broader educational content and deficiencies in engineering knowledge have profound negative impacts on both the performance and market acceptance of heat pumps (Gleeson 2016). A sensitivity analysis of heat pump installation faults performed by Domanski et al. (2014) examined the impact of installation on performance in five different climatic zones and concluded that installation faults could be responsible for a 30% increase in energy usage. This finding was further substantiated by the American Council for Energy-Efficient Economy (2019), which claimed that energy savings from high-efficiency air conditioners and heat pumps could be negated by a 20%–40% loss in energy efficiency due to poor installation (“2019 Efficiency Programs: Promoting High Efficiency Residential Air Conditioners and Heat Pumps,” 2019). To raise awareness and promote the prevention of these defects, the Air Conditioning Contractors of America have set forth the HVAC quality installation specification, which states that there is a need to establish a performance bar to improve the core competencies of contractors and ensure that quality installations occur (ACCA 2015). In addition, the performance of other building products and systems is also affected by craft. In research by Langmans et al. (2017), the thermal performance of insulated cavity walls, especially those with rigid insulation boards, was highly dependent on installation quality. Sulakatkoa et al. (2017) reported on the performance of the External Thermal Insulation Composite System with respect to installation and concluded that onsite shortcomings during the construction process led to a loss of technical performance of the product and reduced thermal efficiency.
In a broader sense, the knowledge of contractors and their crews on building retrofit projects affects the quality of the retrofit. In Deep Residential Retrofits in East Tennessee, 10 homeowners were guided through the energy retrofit process. For one particular retrofit experience in this project, the authors stated that for homeowners who are not actively involved in their project or who have little or no knowledge of how to do energy retrofit work correctly, the quality of the work would vary significantly and depend on the knowledge, training, and pride of work of the contractors and construction crews (Boudreaux et al. 2012). This observation has an impact on the success of the project, as it was also asserted that the quality of the retrofit work would correlate with energy savings (Boudreaux et al. 2012). It is therefore necessary to ensure high-quality construction to reach the expected energy savings proposed in deep energy retrofit projects.
Zero Carbon Hub (2014) identified poor installation of insulation as a contributor to the building performance gap in Closing the Gap Between Design and As-Built Performance. In their findings, some of the test houses were constructed carefully in a manner that was described as good quality, while others were constructed in such a way as to mimic poor quality. Features associated with poor quality could, in some cases, cause the U-value to rise by as much as 310% (Zero Carbon Hub 2014). Other investigations in which a thermographic test and visual inspection were performed found that the quality of installation of insulation can be very important and that areas where insulation is poorly fitted can incur high levels of heat loss (Doran and Carr 2008). From these examples, it is evident that poor quality during installation can be a contributor to poor building performance. The study by Zero Carbon Hub (2014) also stated that a total of 15 issues had been found to be both supported by strong evidence from multiple sources and likely to have a significant impact on the performance gap, and it was determined that the majority of these issues result from a lack of knowledge and skills. Similarly, it was noted that the feasibility of installation for a product is related to an installer's ability and training; their training is composed of the knowledge and skills acquired and their use of standard industry practices. In 2009, the DOE announced that more than $453 million in Recovery Act funding would be allocated to weatherization programs in 15 states. Of these funds, 20% were planned to be spent on hiring and training workers, which would have a positive long-term impact on the economy. In addition, opportunities would be presented to many inexperienced green-job workers to serve and do important work for communities in a market where experience is key (DOE 2009; Soratana and Marriott 2010). The parties involved in the installation process contribute their time and knowledge, which can vary significantly given their time constraints and educational backgrounds; therefore, building performance becomes sensitive to the product installation process. This framework can be universally applied across multiple products to understand the implications of installation.
This study focuses on the relationship between a person's skill level and work knowledge and the role they play in installation quality and building energy performance. There are multiple factors that influence job productivity, which in turn affects product installation. In chapter four of Project Management for Construction, labor, material, and equipment are discussed as factors that affect jobsite productivity (Hendrickson and Au 1989; Smith 2016). Regardless of whether installers formally gained knowledge in an area or gained it through experience, their overall understanding of information in related areas influences their performance in that field.
In the manufacturing industry, product development knowledge is defined by production efficiency. Development knowledge and other intangible properties have become the most important and valuable assets for product development (Wu et al. 2014). The same study revealed that for most manufacturing companies in China, the shortage of knowledgeable workers or experts is one of the biggest gaps standing in the path of the companies’ growth. There is a gap in acquiring knowledge and achieving optimized productivity within the manufacturing industry due to the lack of inclusion of installation perspectives throughout the development of products. In today’s knowledge economy, the primary factor in determining economic development is the worker’s own knowledge and ability (Quintino et al. 2012).
Despite the obvious room for potential savings, market-based solutions have seen only limited implementation within the residential energy efficiency market. This issue was further investigated in Increasing Innovation in Home Energy Efficiency: Monte Carlo Simulation of Potential Improvements, in which the importance of the auditor's experience was included during the home energy audit process (Soratana and Marriott 2010). In their study, an auditor is defined as an installer, and savings were estimated by deducting the costs of improvements identified during interviews and inspections. These improvements were vital to the proposed energy consumption reduction goals but, more importantly, were strongly based on the auditor's experience, from the initial home visits to the decision-making process used to determine the feasibility of the improvements. The auditor's experience factor was named the energy savings threshold and used as one of the variables in the Monte Carlo simulation model. The auditor only installed improvements if the energy savings exceeded the energy savings threshold. Soratana and Marriott (2010) found from their residential energy services company (RESCO) model results that the energy savings threshold percentage is a critical factor and depends substantially on an auditor's experience. Their RESCO-based study concluded that the energy improvement market should rely on technology as well as the qualification of the worker. As evidenced in their results, an experienced auditor can choose improvements better than a tool and aid RESCOs in allocating their improvements effectively. From the understanding that individual products contribute to the overall performance of a building and that this performance is affected by the quality of the installation, a framework for analyzing the impact of installation was developed. The proposed framework emphasizes the effects of installation and its importance for ensuring performance and reducing energy consumption. Inevitably, an optimized performance would result in a decrease in operational cost during the lifetime of the product. The overall goal of this paper is to introduce the probabilistic investment return (PIR) metric, which accounts for variations in cost and energy performance for any given product or system. From this metric, consumers can gain an understanding of how the installation quality of a product and its associated energy savings determine the expected payback and its variance. This research explores building system performance variance and highlights optimized installation as a key factor in closing energy performance gaps.

Framework

PIR is based on the probabilistic variation of installation and performance aspects of building products, materials, and systems. PIR is found from the inputs displayed in the concept map, which reveals the breakdown of the influencing variables contributing to the definition of PIR (Fig. 1). The methods provided in this framework consider the difficulty of installation, variance in product performance, energy consumption as a result of performance, and a resulting probabilistic investment return. The value of performance can be used toward a probabilistic assessment to estimate the returns on product investment. Performance depends on the trade-off between installers’ knowledge and time spent on installation. The conceptual map of Fig. 1 reflects the process of how each part within the developmental process of the PIR value was considered in the overall framework.
Fig. 1. Flow chart of paper outline. The proposed performance metric is defined based on the total cost of a product installation and the associated energy savings.

Installation Cost Factors

The installation costs are impacted by the qualifications of the foremen on site. The impacts on installation cost cannot be evaluated in separate categories without overlap between qualification and experience. This is related to the installation quality and cost variation, which would later affect how efficiently a building system may conserve energy. Arguably, a more experienced and qualified worker will require higher pay rates, which impact the installation costs. The time component of this framework was determined to be inherently variable due to the nature of the installation process. There is not a finite maximum on the time scale to measure the installation time. Installers with different knowledge capacities utilize varying time allotments to install their products. However, because this framework applies to a range of different building products, the time was defined in a relative way that could be applied to all products. Time in this scenario is a dependent variable of reaching an output of desired product performance. The time required to install a product is correlated to an installer’s knowledge; knowledge is then regarded as an independent variable for this case. Classifying knowledge is quite complex, as measuring knowledge requires a scale to measure that ability. The proposed evaluation framework focuses on the comprehensive ability of installers based on their knowledge and experience.
In the UK, the national vocational qualifications (NVQs) constitute a system for understanding vocational skill sets in terms of a ranking system. Different qualification levels can be met through performance-based assessments of vocational skills within specialized fields (Gann and Senker 1998). The lack of synergy between training experience and formal qualification adds to the uncertainty and unpreparedness associated with the installation process. Receiving a qualification or certification might indicate some level of skillfulness, but it fails to account for the fact that skill can also be attained through experience. Gleeson (2016) recognizes that among residential heat pump installations, there is still a wide range of efficiencies, even when the work is performed by registered installers. The terms certified and registered should therefore be used with caution, recognizing that these qualifications do not guarantee maximum skills and knowledge. A more accurate depiction of knowledge must be formed by combining training and educational perspectives rather than evaluating them independently. A human capital scale for knowledge workers already exists, which includes the dimensions of education, work experience, learning ability, and training (Guo et al. 2012).

Knowledge Score Criteria

Assessment for NVQs is based on statements of competence: it is the process of collecting evidence and making judgments on whether the performance criteria for each statement of competence have been met (Callender 1992). Throughout that evaluation, NVQ candidates provided their evidence of competence from various sources, such as college, training centers, or their workplace. The lack of accountability for an installer's knowledge was considered throughout this framework and is addressed in the separate categories presented subsequently, in which a ranking system is suggested for an installer based on a number of units of competence. Each unit represents a discrete area of competence and can be further divided into subcategories, and within each category, the set of performance criteria defines the standard required. The scale is partly a rubric to differentiate between areas of understanding and artistry, which better categorizes where an individual may score regarding their level of competency. Application variability and range of competency are defined by the knowledge score criteria (KSC) level statements. Fundamentally, the scale is based on industry needs to better reflect the reality of working life. Lead industry bodies (LIBs) were established to develop the standards of occupational competence, and these were devised through a functional analysis of work roles with particular attention being paid to the purpose and outcome (Callender 1992). The categories and scoring values of the ranking system presented in Fig. 2 are arbitrarily chosen; the user may choose to change, add, or remove categories as needed, and the scoring values (here, 1) can also be adjusted.
Fig. 2. Experience-based installation impact table.
Overall, the scoring system allows installers and their ability to install a product/system to be evaluated by adding up the scores received from Fig. 2. In this example, each criterion is considered equal in weight and can be attributed a value of 1 if fulfilled. The total knowledge score, K, is meant as an indicator of overall installer knowledge and experience with the product/system. For example, a score of 0 would indicate minimum knowledge, and a score of 9 would indicate a maximum; K is the representation of an individual's knowledge or human capital within a product installation context. This scoring system would be utilized in the questionnaire that would be administered to manufacturers and laborers in future studies.
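To make the tallying explicit, a minimal sketch of how the binary criteria of Fig. 2 could be summed into a knowledge score K is given below. The criterion names are hypothetical placeholders, since the actual categories are user-defined.

```python
# Minimal sketch of the knowledge score K (Fig. 2).
# Criterion names are hypothetical; categories and weights are user-defined.

CRITERIA = [
    "formal_training",            # e.g., completed a vocational qualification
    "product_certification",      # e.g., manufacturer-certified installer
    "years_of_experience",        # e.g., exceeds a chosen experience threshold
    "prior_installs_of_product",
    "knowledge_of_standards",
    "tool_proficiency",
    "reads_manufacturer_specs",
    "quality_track_record",
    "related_trade_experience",
]

def knowledge_score(fulfilled: set) -> int:
    """Each fulfilled criterion contributes 1 point, giving K in the range 0-9."""
    return sum(1 for c in CRITERIA if c in fulfilled)

# Example: an installer meeting four of the nine criteria scores K = 4.
print(knowledge_score({"formal_training", "years_of_experience",
                       "prior_installs_of_product", "tool_proficiency"}))
```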
This study focused on the means by which expected performance might be reached, specifically through targeting installation. As previously discussed, the two factors that make an installation process feasible are time and knowledge capability. To distinguish what relationship exists between time, knowledge, and performance, information must first be gathered pertaining to these factors on a product installation basis. The proposed methods include the following:
a questionnaire given to product manufacturers, and
random sampling of unskilled/skilled laborers.
From either of these two methods, the goal is to provide a graphical representation of the correlation between time and knowledge. These methodologies were solely developed to determine the possible values of the random installation variable, and the random sampling of both unskilled and skilled laborers installing products onsite would provide the main information pertaining to possible gaps in performance. It is expected that this probabilistic method would provide a distribution curve for future use.

Time Analysis—Questionnaire

The first method includes a questionnaire for the product manufacturer and installer. This method involves a series of questions that target the aspects of the installation concerned with the relationship between knowledge and required time for installation/assembly, an example of which is given by Fig. 3. Questioning the manufacturer will determine the most significant aspects of product installation and its relationship to knowledge. This retrieval of information renders quick results because the manufacturer is likely very knowledgeable about the product and its capabilities. Note that data acquired from manufacturer knowledge could be subject to an idealized view of the product capabilities. This perspective is rooted in the fact that the products are typically tested in controlled environments under perfect installation conditions and by an installer with a high knowledge level.
Fig. 3. Manufacturer-installer questionnaire.
If the questionnaire is completed by an installer, said installer should have spent a considerable amount of time learning about the product to the extent that they might be consistently recommended by a manufacturer for their installation services (manufacturer installer network). Overall, gaining information requested by the questionnaire provides a holistic view of the installation process from a wide range of experienced persons (i.e., experts). Including multiple expert installers in the questionnaire is beneficial so that conclusions are not based on the observations of one individual. Preferably, the questionnaire is completed by both the manufacturer and the installer. In cases in which the deviation between answers is large, more experts may be required to participate in reducing uncertainties. Alternatively, monitoring the site conditions of an onsite installation can better capture product performance after it has been handled. This could lead to a reduction in the performance gap of the building construction products.

Time Analysis—Field Study

The second method consists of collecting information on product installation time by a range of installers. Unlike the questionnaire, this method provides a data set based on real, onsite installations. It is recommended that installation times should be collected from a wide range of installers with a variety of knowledge levels. Both methods (questionnaire and field study) serve one purpose: to gain an understanding of the relationship between installation time and knowledge level of the installer. Although the two methods vary in approach, they are designed to gather data that are representative of installation occurrences among the full range of knowledge levels for a specific product.

Methodology

For most products, materials, and systems, assuming a unique installation time for each knowledge level is unrealistic. Instead, a variation in the installation time is expected within each knowledge level. Fig. 4 reveals how the installation cost factors can take shape under a probabilistic viewpoint. In this display, the expected installation time is defined by a normal distribution, and the area under the curve represents a combination of time and knowledge, which together are a representation of the effort required to reach the expected performance (Hughes et al. 2020). As knowledge levels vary, so does the probabilistic range of installation time. This display (Fig. 4) represents the perfect scenario in which a total understanding exists of the installation time required for a product; however, such a complete analysis is often difficult to conduct. In cases in which time data can be collected with little to no understanding of the installers' knowledge levels, such data points can still be useful, assuming the installers' knowledge in the collected data is representative of average levels. Fundamentally, a probability distribution is a function that represents the likelihood of obtaining the possible values that a random variable can assume. This statistical tool is useful in the investigation of likely outcomes, potential values, and varied results. An example of such a graphical representation of installation times was presented for the installation of offshore wind turbines and their duration distribution (Lacal-Arántegui et al. 2018). If a standard random variable is used to characterize the distribution of activity durations, then only a few parameters are required to calculate the probability of any particular duration (Hendrickson and Au 1989). Fig. 5 illustrates how data sets are expected to look based on previous examples in the literature (Hendrickson and Au 1989; Lacal-Arántegui et al. 2018).
Fig. 4. Probabilistic representation of installation factors.
Fig. 5. Random collection of times for a given product or system. The discrete data set can be represented with a probabilistic distribution of expected installation times.
Because there is a correlation between installation time, t, and knowledge level, time can be described as
t = f(K)
(1)
The discrete distribution of installation times in Fig. 5 can be expressed as a continuous probability distribution, in this case a normal distribution. Hence, the installation time, t, is defined under the following condition
t \sim N(\mu_t, \sigma_t)
(2)
in which
\sigma_t = \sqrt{\frac{\sum_{n=0}^{9} \left( f(K_n) - \mu \right)^2}{n - 1}}
(3)
and the expected mean value of
\mu_t = E\left[ f(K) \right]
(4)
where t = installation time (h/unit); K = knowledge level (0–9); σt = standard deviation of time (h/unit); μt = average installation time (h/unit); and n = sample size.
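As a numerical sketch in the spirit of Eqs. (2)–(4), the snippet below estimates the mean and standard deviation of installation time from field-collected times grouped by knowledge level. The sample times and knowledge levels are invented for illustration only.

```python
import numpy as np

# Hypothetical field-study data: installation times (h/unit) grouped by
# knowledge level K (0-9). Values are invented for illustration only.
times_by_K = {
    2: [6.1, 5.8, 6.6, 7.0],
    5: [4.2, 4.6, 3.9, 4.4, 4.1],
    8: [3.0, 2.8, 3.3, 3.1],
}

# Pool all observations to characterize the overall time distribution of
# Eq. (2): t ~ N(mu_t, sigma_t).
all_times = np.concatenate([np.asarray(v) for v in times_by_K.values()])
mu_t = all_times.mean()            # sample estimate of Eq. (4)
sigma_t = all_times.std(ddof=1)    # sample estimate of Eq. (3), n - 1 denominator

print(f"t ~ N({mu_t:.2f}, {sigma_t:.2f}) h/unit")

# Per-knowledge-level means sketch the time-knowledge correlation of Fig. 6.
tkc = {K: float(np.mean(v)) for K, v in times_by_K.items()}
print("mean installation time by knowledge level:", tkc)
```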
This work also seeks to understand the costs associated with installation, that is, the labor cost. To determine the most cost-efficient knowledge level to install a product, the cost range for different knowledge levels must be accounted for, as seen in Fig. 6. Because the knowledge and skills of a laborer are often reflected in the hourly wage, the knowledge level can be assumed to represent the hourly wage of the installer.
Fig. 6. Graphical presentation of a time-knowledge correlation (TCK) curve of an arbitrary product. The predicted installation time will vary depending on a knowledge score (0–9).

Energy Savings

Depending on the expected performance level of a product, the associated energy savings can be found by a comparison with existing conditions. For the improved performance to which a system, product, or material will contribute, the estimated savings, ξs, can be found as
\xi_s = \xi_{pre} - \xi_{post}
(5)
where ξpre (joule or watt) is the energy performance before the installation; and ξpost (joule or watt) is the performance afterward. The difference between the energy performance values represents the energy savings. Eq. (5) is valid for both deterministic values and probabilistic variations of ξpre and ξpost. Fig. 7 illustrates a probabilistic approach and how the expected energy savings can be found from two arbitrary normal distributions—one representing preinstallation and one representing postinstallation.
Fig. 7. Difference in performance before (pre) and after (post) a product has been installed. The difference in performance is used to estimate the associated energy savings.
For probabilistic distributions, as seen in Fig. 7, performance savings can be expressed using Eq. (6)
\left[ \xi_{post} \sim N(\mu_{post}, \sigma_{post}) \right] - \left[ \xi_{pre} \sim N(\mu_{pre}, \sigma_{pre}) \right] = \left[ \xi_s \sim N\!\left( \mu_{post} - \mu_{pre},\ \sqrt{\sigma_{post}^2 + \sigma_{pre}^2} \right) \right]
(6)
In cases in which the existing performance is known, meaning ξpre has an exact value, while the expected performance after the installation is assumed to follow a probabilistic distribution, a combination of Eqs. (5) and (6) applies
\xi_{pre} - \left[ \xi_{post} \sim N(\mu_{post}, \sigma_{post}) \right] = \left[ \xi_s \sim N\!\left( \mu_{pre} - \mu_{post},\ \sigma_{post} \right) \right]
(7)
Once the improved performance has been found (i.e., the energy savings), the associated cost savings is given
S = \frac{\xi_s \cdot E}{\Delta t}
(8)
where S = time-dependent cost savings ($/year); E = cost of energy ($/J); and Δt = time period considered.
Using Eqs. (6) and (8), the probabilistic energy savings becomes a variable under the normal distribution
S \sim N\!\left( \frac{\mu_{post} - \mu_{pre}}{E^{-1} \cdot \Delta t},\ \frac{\sqrt{\sigma_{post}^2 + \sigma_{pre}^2}}{E^{-1} \cdot \Delta t} \right)
(9)
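A short numerical sketch of Eqs. (5)–(9) is given below. It assumes both the pre- and postinstallation performances follow normal distributions, takes the savings as the preinstallation energy use minus the postinstallation energy use, consistent with Eq. (5) and the case-study calculation later in Eq. (15), and uses placeholder values for the means, standard deviations, energy price, and evaluation period.

```python
from scipy.stats import norm

# Placeholder values (illustrative only): annual site energy use, assumed
# normally distributed before and after the product is installed.
mu_pre, sigma_pre = 21_000.0, 1_800.0    # kWh/yr before installation
mu_post, sigma_post = 18_500.0, 1_500.0  # kWh/yr after installation

E = 0.11   # assumed cost of energy, $/kWh
dt = 1.0   # assumed evaluation period, years

# Difference of two independent normals: mean = mu_pre - mu_post,
# variance = sum of variances; converted to an annual cost saving.
mu_s = (mu_pre - mu_post) * E / dt                        # $/yr
sigma_s = (sigma_pre**2 + sigma_post**2) ** 0.5 * E / dt  # $/yr
S = norm(loc=mu_s, scale=sigma_s)

print(f"S ~ N({mu_s:.0f}, {sigma_s:.0f}) $/yr")
# Probability that the installation actually yields a positive saving:
print(f"P(S > 0) = {1 - S.cdf(0.0):.2f}")
```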

Deterministic and Probabilistic Investment Return

The costs of installation, product, and associated energy savings provide a general ROI or payback. The number of years required to recoup the investment of installing a product is defined by the ratio of the installation and material cost, c_{tot,i}, to the cost savings, s. The energy savings, as previously noted, depend on the performance level reached by the product
ROI = \frac{c_{tot,i}}{s}
(10)
where c_{tot,i} = exact value of installation and material cost ($); and s = exact value of cost savings ($/year).
However, the focus of the present study is on understanding and appreciating the importance and impact of probabilistic variations. Therefore, a probabilistic payback metric, known as the probabilistic investment return (PIR), is proposed. The PIR represents the expected variation of a product/system/material payback time, utilizing Eqs. (8)–(10), which combine to form
PIR = \frac{\left[ C_{tot,i} \sim N(\mu_i, \sigma_i) \right]}{\left[ S \sim N(\mu_s, \sigma_s) \right]}
(11)
where the variables are from the following relationships
\mu_i = C_{mat} + \mu_t \cdot W_K, \quad \sigma_i = \sigma_t \cdot W_K, \quad \mu_s = \frac{\mu_{post} - \mu_{pre}}{E^{-1} \cdot \Delta t}, \quad \text{and} \quad \sigma_s = \frac{\sqrt{\sigma_{post}^2 + \sigma_{pre}^2}}{E^{-1} \cdot \Delta t}
in which C_mat is the material cost ($) and W_K is the hourly labor rate ($/h) corresponding to knowledge level K.
Because both the numerator and denominator of the PIR include a probabilistically varying variable, the solution will hold a probabilistic variation as well. The solution for Eq. (11) can be found by using mathematical tools (Palisade 2019) or by using a ratio distribution approximation (Hinkley 1969) for uncorrelated noncentral normal distributions
PIR \sim f(t)
(12)
where
f(t) = \frac{b(t) \cdot d(t)}{a^3(t) \cdot \sqrt{2\pi} \cdot \sigma_i \cdot \sigma_s} \left[ \Phi\!\left( \frac{b(t)}{a(t)} \right) - \Phi\!\left( -\frac{b(t)}{a(t)} \right) \right] + \frac{e^{-c(t)/2}}{a^2(t) \cdot \pi \cdot \sigma_i \cdot \sigma_s}
a(t) = \sqrt{\frac{t^2}{\sigma_i^2} + \frac{1}{\sigma_s^2}}, \quad b(t) = \frac{\mu_i}{\sigma_i^2} t + \frac{\mu_s}{\sigma_s^2}, \quad c(t) = \frac{\mu_i^2}{\sigma_i^2} + \frac{\mu_s^2}{\sigma_s^2}
d(t) = \exp\!\left( \frac{b^2(t) - c(t) \cdot a^2(t)}{2 a^2(t)} \right), \quad \text{and} \quad \Phi\!\left( \frac{b(t)}{a(t)} \right) = \frac{1}{2} \left[ 1 + \mathrm{erf}\!\left( \frac{b(t)}{a(t)\sqrt{2}} \right) \right]
In Eq. (12), PIR is a variable under the probability density function f(t), a probability density as a function of time, t.
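For reference, the ratio-distribution approximation of Eq. (12) can be evaluated directly. The sketch below implements the Hinkley (1969) density for uncorrelated normal installation-cost and savings distributions; the means and standard deviations used in the example are placeholders.

```python
import math

def phi(u: float) -> float:
    """Standard normal CDF expressed through the error function."""
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

def pir_pdf(t, mu_i, sigma_i, mu_s, sigma_s):
    """Hinkley (1969) density of PIR = C_tot,i / S for uncorrelated normals,
    evaluated at payback time t (years), following Eq. (12)."""
    a = math.sqrt(t**2 / sigma_i**2 + 1.0 / sigma_s**2)
    b = mu_i * t / sigma_i**2 + mu_s / sigma_s**2
    c = mu_i**2 / sigma_i**2 + mu_s**2 / sigma_s**2
    d = math.exp((b**2 - c * a**2) / (2.0 * a**2))
    term1 = (b * d / a**3) / (math.sqrt(2.0 * math.pi) * sigma_i * sigma_s) \
        * (phi(b / a) - phi(-b / a))
    term2 = math.exp(-c / 2.0) / (a**2 * math.pi * sigma_i * sigma_s)
    return term1 + term2

# Placeholder inputs: installation and material cost C_tot,i ~ N(mu_i, sigma_i) in $,
# and annual savings S ~ N(mu_s, sigma_s) in $/yr.
mu_i, sigma_i = 4000.0, 800.0
mu_s, sigma_s = 330.0, 120.0

for t in (5.0, 10.0, 15.0, 20.0):
    print(f"f({t:>4.1f} yr) = {pir_pdf(t, mu_i, sigma_i, mu_s, sigma_s):.4f}")
```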
Naturally, neither the probabilistic variation of installation and material cost nor the associated savings always take the form of a normal distribution. In such cases, Eq. (11) can be written to be valid for any arbitrary probabilistic distribution
PIR = \frac{C_{tot,i}}{S}
(13)
where
C_{tot,i} \sim f(C_{tot,i}), \quad S \sim f(S), \quad \text{and} \quad PIR \sim f(t)
An arbitrary distribution of PIR is depicted in Fig. 8. In this figure, PIR is a variable under the probabilistic distribution f(t), in which t is the time in years. Notably, Fig. 8 allows the reader to appreciate how much the ROI can vary for a product and why it is preferable to treat such an assessment in a probabilistic manner. In addition, the PIR of different products, systems, and materials can be compared to evaluate the most cost-efficient alternative.
Fig. 8. PIR of an arbitrary product, material, or system. The shape and values of the distribution depend on the variation of the installation cost and energy savings.
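When the installation cost and savings do not follow normal distributions, Eq. (13) can be evaluated numerically by sampling. A minimal Monte Carlo sketch with arbitrary placeholder distributions is shown below; any sampling model for cost and savings could be substituted.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 10_000  # number of Monte Carlo realizations

# Arbitrary placeholder distributions for Eq. (13).
C_tot = rng.lognormal(mean=np.log(4000.0), sigma=0.25, size=n)     # $
S = rng.triangular(left=-200.0, mode=350.0, right=800.0, size=n)   # $/yr

PIR = C_tot / S  # years; negative values mean the upgrade never pays back

favorable = PIR > 0
print(f"P(upgrade pays back) = {favorable.mean():.2f}")
print(f"median payback when favorable = {np.median(PIR[favorable]):.1f} yr")
```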

Case Study

Oak Ridge National Laboratory has studied the performance of installed air barriers in 44 homes built between 2010 and 2016 by measuring the airtightness of each building. Typically, the airtightness of a building is measured by conducting a blower door test. For residential buildings, the airtightness is often measured as ACH50, which stands for air changes per hour at a 50 Pa air pressure gradient. Even though buildings do not typically experience such a high air pressure gradient, this measure allows the evaluation of air barrier performance, code compliance, and comparisons with other buildings. The results of the measurements are presented in Fig. 9. Twenty-two of the homes were installed with a traditional house wrap as the air barrier, and 18 homes were constructed with the ZIP System. The two materials and systems are displayed in Fig. 10. The house wrap is typically mounted and fastened on the outside of the sheathing by staples. The ZIP System acts both as the sheathing and the weather barrier, and the seams between the boards are sealed using a ZIP System approved tape.
Fig. 9. Variation of airtightness of homes installed with either house wrap (red) or the ZIP System (blue). Probability density functions are also given based on the measured data. The airtightness is measured as ACH50 and represents the number of times per hour the air inside the building is exchanged if a 50 Pa pressure difference exists between the inside and outside of the building.
Fig. 10. (a) House wrap installed as a weather and air barrier; and (b) house constructed with the ZIP System. (Images by Simon Pallin.)
The performance of the air barrier is highly relevant to the overall energy performance of a building. The financial benefit of installing the ZIP System as an air barrier instead of a traditional house wrap is evaluated using the proposed PIR metric. This evaluation allows the expected installation cost, Ctot, and the expected savings, S, associated with using the ZIP System over house wrap to be ascertained.
The costs associated with installing a house wrap and a ZIP System are given in Table 1. For house wrap, the cost is estimated using RSMeans data (Gordian 2021) and includes material, labor, and overhead. For the ZIP System, the material cost is approximately twice that of house wrap (JLS Construction 2011; Luig 2020). The labor cost between the two materials and systems is almost identical (Dupont 2021). For the estimated variations in installation cost, the RSMeans City Cost Index was utilized (Gordian 2019). Using the data in Table 1, the probabilistic variation of installation cost between house wrap and the ZIP System can be defined. Using Eq. (6), the difference in cost between the two materials and systems is found
\left[ C_{ZIP} \sim N(6.86, 1.26) \right] - \left[ C_{HW} \sim N(3.79, 0.70) \right] = \left[ C_{tot} \sim N(5.45, 1.12) \right]
(14)
where CZIP ($/ft²) = probabilistic variation in cost for the ZIP System; and CHW ($/ft²) = the equivalent for house wrap. With the cost difference between the two systems, Ctot ($/ft²), now defined, the time needed for the more expensive ZIP System to become financially beneficial in terms of payback can be determined.
Table 1. Cost of installation for house wrap and the ZIP System

|  | Material ($/ft²) | Material ($/m²) | Labor ($/ft²) | Labor ($/m²) | Total O&P ($/ft²) | Total O&P ($/m²) | Standard deviation, σ ($/ft²) | Standard deviation, σ ($/m²) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| House wrap |  |  |  |  |  |  |  |  |
| Sheathing | 2.06 | 22.17 | 0.76 | 8.18 | 3.50 | 37.67 | 0.64 | 6.93 |
| Fabric | 0.15 | 1.61 | 0.08 | 0.86 | 0.29 | 3.12 | 0.06 | 0.59 |
| Total |  |  |  |  | **3.79** | **40.80** | **0.70** | **7.52** |
| ZIP System | 4.42 | 47.58 | 0.84 | 9.02 | **6.86** | **73.82** | **1.26** | **8.12** |

Note: Bold values indicate the differences in the two systems, house wrap and ZIP System.

The performance difference between house wrap and the ZIP System was then estimated based on the blower door tests conducted in the 44 homes. The measured data given in Fig. 9 were employed as input data for whole-building energy performance simulations. One of the prototype buildings provided by the Department of Energy (EnergyCodes 2020) was used for the analysis: a small office building that is very similar in construction to a residential building. The house was simulated in Climate Zone 4A (Baltimore), with thermal characteristics meeting code (ANSI/ASHRAE/IESNA 2016). The coefficients of performance of the heating and cooling systems were set to 0.9 and 3.5, respectively. Further, the cost of electricity and gas was assumed to be $0.11 and $0.05 per kW·h, respectively.
The variation in energy usage and cost of energy for the simulated buildings is presented in Fig. 11. According to the two probabilistic distributions, the average cost of energy for the ZIP System is somewhat lower than that for house wrap. The difference in cost between the two systems defines the savings
S = \left[ N(3170, 458) \right] - \left[ N(2840, 429) \right]
(15)
Fig. 11. Annual variation in the cost of energy for the simulated house with house wrap (blue) and the ZIP System (red).
It was determined that $3,170 and $458 are the average annual cost of energy and the standard deviation for house wrap, while $2,840 and $429 are the average annual cost and standard deviation for the ZIP System.
Finally, the PIR can be calculated, using Eq. (13)
PIR = \frac{C_{tot}}{S}
(16)
Utilizing Eq. (12), the following was found
PIR \sim IG(12.9, 15.0)
(17)
where IG = inverse Gaussian distribution, which is also depicted in Fig. 12. According to the PIR value from Eq. (17), it takes on average 12.9 years for the ZIP System to outperform house wrap in terms of return on investment, but with a significant spread in variance. Fig. 12 only presents the probabilistic distribution for which installing the ZIP System over a house wrap is favorable. Because the two distribution curves in Fig. 11 overlap, the value of PIR is not always positive, which signifies that there will be cases for which the house wrap outperforms the ZIP System. By utilizing a risk assessment simulation tool (Palisade 2019), the PIR values were estimated in both the positive and negative range. The simulation result is presented in Fig. 13.
Fig. 12. Probabilistic investment return for using the ZIP System over a traditional house wrap as an air barrier.
Fig. 13. Discrete distribution of PIR from 10,000 simulations of estimated performance and costs of installation.
By comparing Fig. 12 with Fig. 13, it can be stated that the distribution of PIR in Fig. 12 is also represented in Fig. 13. However, a significant part of the expected variation of PIR in the negative range can also be seen, which represents the portion for which house wrap outperforms the ZIP System.
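A simplified Monte Carlo sketch of the case-study calculation, Eqs. (14)–(16), is given below. It draws the per-square-foot cost difference from Eq. (14), scales it by an assumed envelope area (the 800 ft² figure is a placeholder, since the area used in the actual simulation is not reported here), and divides by sampled annual savings, in the spirit of the @Risk simulation behind Fig. 13.

```python
import numpy as np

rng = np.random.default_rng(seed=2)
n = 10_000

# Installed-cost premium of the ZIP System over house wrap, per Eq. (14):
# C_tot ~ N(5.45, 1.12) $/ft2. The envelope area below is a placeholder
# assumption only.
area_ft2 = 800.0
C_tot = rng.normal(5.45, 1.12, size=n) * area_ft2                      # $

# Annual energy-cost difference, Eq. (15): house wrap minus ZIP System.
S = rng.normal(3170.0, 458.0, size=n) - rng.normal(2840.0, 429.0, size=n)  # $/yr

PIR = C_tot / S  # Eq. (16), years; negative values: ZIP System never pays back

favorable = PIR > 0
print(f"share of realizations where the ZIP System pays back: {favorable.mean():.2f}")
print(f"mean payback when favorable: {PIR[favorable].mean():.1f} yr")
```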

Conclusion

The present research provided a methodology to quantify the impact of installation quality on overall energy performance and cost. To accomplish this, the PIR metric was introduced, which accounts for the probabilistic nature of installation time, knowledge, performance, cost, and energy savings. The research also presented an approach and framework to visually describe the correlation between installation time and installer knowledge level.
PIR represents the relationship between the required installation time of a product, system, or material and an installer’s level of expertise in relation to the expected variation of performance. This metric allows the user to appreciate how much the installation time, and thus cost, can vary depending on the skill set of the installer and how the difficulty of installation can be compared relative to other products. PIR is basically an approach to evaluate the ROI in a probabilistic manner. Such assessment can then be considered a framework for a holistic understanding of the product installation process and its far-reaching effects on building performance.
A case study compared the estimated energy performance for two air barrier systems: a traditional house wrap and the ZIP System. Both air barrier systems had similar labor costs, but choosing the option with higher material and overhead costs led to an investment that, on average, eventually pays for itself. This may be intuitive, but it is often not pursued when a cheaper option is available, even if another system could be more energy efficient. This example showcased the obvious choice but also introduced how a probabilistic relationship could sway what is actually utilized during construction. Most notably, the negative PIR values represented the cases in which the lower-cost option remained the cost-effective and energy-efficient choice upfront, which is especially relevant when comparing two systems or materials that have similar labor costs but a total overhead and profit (O&P) that nearly doubles in comparison.
Considering the PIR metric during the planning stages of building construction would be a major step toward addressing the challenges of building energy inefficiencies. Designers would have a better understanding of how counteracting compromised airtightness practices with increased budgets on installation quality could result in significant energy savings. The PIR metric could be used by designers to carry out a cost-benefit analysis and help them understand how installation quality impacts energy performance prior to construction so that more informed decisions can be made regarding products and contractors/installers.

Data Availability Statement

All data, models, and code generated or used during the study appear in the published article.

Acknowledgments

The authors would like to thank Oak Ridge National Laboratory for the hosting and implementation of this project work. We would also like to express gratitude to the Program of Excellence in STEM (PE-STEM) at Florida A&M University supported by the US Department of Education's MSEIP Program, Grant No. P120A160115, and the GAANN Program in Civil Engineering at Florida A&M University (Grant No. P200A180074) for sponsorship of the lead author.

References

ACCA (Air Conditioning Contractors of America). 2015. “HVAC quality installation specification.” In ACAA standard 5: Air conditioning contractors of America. Alexandria, VA: ACCA.
Almeida, N., V. Sousa, L. Dias, and F. Branco. 2010. “A framework for combining risk-management and performance-based building approaches.” Build. Res. Inf. 38 (2): 157–174. https://doi.org/10.1080/09613210903516719.
American Council for Energy-Efficient Economy. 2019. 2019 efficiency programs: Promoting high efficiency residential air conditioners and heat pumps, 1–5. Washington, DC: American Council for Energy-Efficient Economy.
ANSI/ASHRAE/IESNA (American Society of Heating, Refrigerating and Air-Conditioning Engineers/Illuminating Engineering Society of North America). 2016. Energy standard for buildings except low-rise residential buildings. Standard 90.1-2016. Atlanta: ASHRAE.
Boudreaux, P. R., T. P. Hendrick, J. E. Christian, and R. K. Jackson. 2012. “Deep residential retrofits in East Tennessee.” Accessed December 1, 2020. https://www.osti.gov/servlets/purl/1039244.
Brennan, L., and S. W. Iain. 2015. “Deep energy retrofit guidance for the building America solutions center. Accessed February 5, 2021. https://eta.lbl.gov/publications/deep-energy-retrofit-guidance.
Callender, C. 1992. Will national vocational qualifications work? Evidence from the construction industry. Brighton, UK: Sussex Univ.
Chen, Y., Y. Zhang, and S. Zhang. 2014. “Impacts of different types of owner-contractor conflict on cost performance in construction projects.” J. Constr. Eng. Manage. 140 (6): 04014017. https://doi.org/10.1061/(ASCE)CO.1943-7862.0000852.
Cheung, S., K. T. W. Yiu, and S. Yeung. 2006. “A study of styles and outcomes in construction dispute negotiation.” J. Constr. Eng. Manage. 132 (8): 805. https://doi.org/10.1061/(ASCE)0733-9364(2006)132:8(805).
Clean Energy Solutions Center. 2015. “Quadrennial technology review, an assessment of energy technologies and research opportunities.” Accessed February 5, 2021. https://cleanenergysolutions.org/resources/quadrennial-technology-review-assessment-energy-technologies-research-opportunities.
DOE. 2009. “Obama administration delivers more than $453 million for weatherization programs in 15 states.” Accessed February 5, 2021. https://www.energy.gov/articles/obama-administration-delivers-more-453-million-weatherization-programs-15-states.
Domanski, P., H. Henderson, and W. V. Payne. 2014. Sensitivity analysis of installation faults on heat pump performance: NIST technical note 1848. Washington, DC: US Department of Commerce. https://doi.org/10.6028/NIST.TN.1848.
Doran, S., and B. Carr. 2008. “Thermal transmittance of walls of dwellings before and after application of cavity wall insulation.” Accessed February 5, 2021. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/48187/3146-thermal-transmittance.pdf.
Dupont. 2021. “Installation speed—Not so fast.” Accessed February 5, 2021. https://www.dupont.com/tyvek-weatherization/zip-sheathing-system.html?src=gg-kg_tyvek-X-us_zip-system.
EnergyCodes. 2020. “Commercial prototype building models.” Accessed on January 30, 2021. https://www.energycodes.gov/development/commercial/prototype_models.
Forcada, N., M. Gangolells, M. Casals, and M. Macarulla. 2017. “Factors affecting rework costs in construction.” J. Constr. Eng. Manage. 143 (8): 04017032. https://doi.org/10.1061/(ASCE)CO.1943-7862.0001324.
Frei, B., C. Sagerschnig, and D. Gyalistras. 2017. “Performance gaps in Swiss buildings: An analysis of conflicting objectives and mitigation strategies.” Energy Procedia 122 (Sep): 421–426. https://doi.org/10.1016/j.egypro.2017.07.425.
Gann, D., and P. Senker. 1998. “Construction skills training for the next millennium.” Constr. Manage. Econ. 16 (5): 569–580. https://doi.org/10.1080/014461998372105.
Gleeson, C. P. 2016. “Residential heat pump installations: The role of vocational education and training.” Build. Res. Inf. 44 (4): 394–406. https://doi.org/10.1080/09613218.2015.1082701.
Gordian. 2019. “RSMeans city cost index—2019.” Accessed February 1, 2021. https://www.rsmeans.com/rsmeans-city-cost-index.
Gordian. 2021. 2021 building construction costs with RSMeans data. 79th ed. Rockland, MA: Gordian Construction Publishers & Consultants.
Guo, W., H. Xiao, and X. Yang. 2012. “An empirical research on the correlation between human capital and career success of knowledge workers in enterprise.” Phys. Procedia 25 (Jan): 715–725. https://doi.org/10.1016/j.phpro.2012.03.148.
Hagentoft, C.-E. 2017. “Reliability of energy efficient building retrofitting—Probability assessment of performance and cost (Annex 55, RAP-RETRO).” Energy Build. 155 (Nov): 166–171. https://doi.org/10.1016/j.enbuild.2017.09.007.
Hendrickson, C., and T. Au. 1989. Project management for construction: Fundamental concepts for owners, engineers, architects, and builders. Upper Saddle River, NJ: Prentice-Hall.
Hinkley, D. V. 1969. “On the ratio of two correlated normal random variables.” Biometrika 56 (3): 635–639. https://doi.org/10.1093/biomet/56.3.635.
Hu, W. F., and X. H. He. 2014. “An innovative time-cost-quality tradeoff modeling of building construction project based on resource allocation.” Sci. World J. 2014: 1–10. https://doi.org/10.1155/2014/673248.
Hughes, J., S. Pallin, and C. Clark II. 2020. “A framework for installation impact analysis on building performance.” In Proc., Durability of Building Materials and Components, edited by C. Serrat, J. R. Casas, and V. Gibert. Berlin: Springer. https://www.scipedia.com/public/Hughes_et_al_2020a.
JLS Construction. 2011. OSB with Tyvek or ZIP system sheathing. Chennai, India: JLS Construction.
Lacal-Arántegui, R., J. M. Yusta, and J. A. Domínguez-Navarro. 2018. “Offshore wind installation: Analysing the evidence behind improvements in installation time.” Renewable Sustainable Energy Rev. 92 (2018): 133–145. https://doi.org/10.1016/j.rser.2018.04.044.
Langmans, J., M. Indekeu, and S. Roels. 2017. “The impact of workmanship on the thermal performance of cavity walls with rigid insulation boards: Where are we today?” Energy Procedia 132 (Oct): 255–260. https://doi.org/10.1016/j.egypro.2017.09.711.
Less, B., N. Mullen, B. Singer, and I. Walker. 2015. “Indoor air quality in 24 California residences designed as high-performance homes.” Sci. Technol. Built Environ. 21 (1): 14–24. https://doi.org/10.1080/10789669.2014.961850.
Less, B., and I. Walker. 2014. “A meta-analysis of single-family deep energy retrofit performance in the US.” Accessed February 5, 2021. https://eta.lbl.gov/publications/meta-analysis-single-family-deep.
Liberatore, M. J., and B. Pollack-Johnson. 2013. “Improving project management decision making by modeling quality, time, and cost continuously.” IEEE Trans. Eng. Manage. 60 (3): 518–528. https://doi.org/10.1109/TEM.2012.2219586.
Love, P. E. 2002. “Influence of project type and procurement method on rework costs in building construction projects.” J. Constr. Eng. Manage. 128 (1): 18–29. https://doi.org/10.1061/(ASCE)0733-9364(2002)128:1(18).
Luig, A. 2020. “How much does the Zip system cost?” Accessed February 5, 2021. https://askinglot.com/how-much-does-the-zip-system-cost.
Murthy, D. N. P., T. Osteras, and M. Rausand. 2008. Product reliability, specification and performance. London: Springer.
Palisade, C. 2019. “@Risk—Risk and uncertainty analyses add-in tool for Microsoft excel.” Accessed January 18, 2021. http://www.palisade.com/risk/.
Pallin, S., P. Boudreaux, and A. Gehl. 2017. “Airtightness of common wall assemblies and its effect on R-value.” In Advances in hygrothermal performance of building envelopes: Materials, systems and simulations, edited by P. Mukhopadhyaya and D. Fisler, 83–94. West Conshohocken, PA: ASTM International. https://doi.org/10.1520/STP159920160099.
Quintino, L., I. Fernandes, and R. M. Miranda. 2012. “Impact of the qualification of personnel in the manufacturing industry.” Weld. World 56 (7–8): 130–137. https://doi.org/10.1007/BF03321373.
Smith, J. 2016. “Toward error management in construction: Moving beyond a ‘zero vision’.” J. Constr. Eng. Manage. 142 (11): 04016058. https://doi.org/10.1061/(ASCE)CO.1943-7862.0001170.
Soratana, K., and J. Marriott. 2010. “Increasing innovation in home energy efficiency: Monte Carlo simulation of potential improvements.” Energy Build. 42 (6): 828–833. https://doi.org/10.1016/j.enbuild.2009.12.003.
Sulakatkoa, V., E. Liismaa, and E. Soekov. 2017. “Increasing construction quality of external thermal insulation composite system (ETICS) by revealing on-site degradation factors.” Procedia Environ. Sci. 38 (Jan): 765–772.
Sullivan, K. 2011. “Quality management programs in the construction industry: Best value compared with other methodologies.” J. Manage. Eng. 27 (4): 210–219. https://doi.org/10.1061/(ASCE)ME.1943-5479.0000054.
van den Brom, P., A. R. Hansen, K. Gram-Hanssen, A. Meijer, and H. Visscher. 2019. “Variances in residential heating consumption—Importance of building characteristics and occupants analysed by movers and stayers.” Appl. Energy 250 (Sep): 713–728. https://doi.org/10.1016/j.apenergy.2019.05.078.
Wu, Z., X. Ming, L. He, M. Li, and X. Li. 2014. “Knowledge integration and sharing for complex product development.” Int. J. Prod. Res. 52 (21): 6296–6313. https://doi.org/10.1080/00207543.2014.923121.
Zero Carbon Hub. 2014. Closing the gap between design & as-built performance evidence. London: Zero Carbon Hub.

Information & Authors

Information

Published In

Journal of Construction Engineering and Management
Volume 147, Issue 11, November 2021

History

Received: Mar 11, 2021
Accepted: Jun 23, 2021
Published online: Sep 14, 2021
Published in print: Nov 1, 2021
Discussion open until: Feb 14, 2022

Authors

Affiliations

Dept. of Civil and Environmental Engineering, Florida Agricultural and Mechanical Univ.-Florida State Univ. College of Engineering, Florida A&M Univ., Tallahassee, FL 32310. ORCID: https://orcid.org/0000-0002-3278-7026. Email: [email protected]
Simon Pallin [email protected]
Energy and Transportation Science Division, Building Envelope and Urban Systems Research, Oak Ridge National Laboratory, Oak Ridge, TN 37830. Email: [email protected]
Antonio J. Aldykiewicz Jr. [email protected]
Energy and Transportation Science Division, Building Envelope and Urban Systems Research, Oak Ridge National Laboratory, Oak Ridge, TN 37830. Email: [email protected]
Professor, Dept. of Civil and Environmental Engineering, Florida Agricultural and Mechanical Univ.-Florida State Univ. College of Engineering, Florida A&M Univ., Tallahassee, FL 32310 (corresponding author). ORCID: https://orcid.org/0000-0003-4211-5810. Email: [email protected]
