Technical Papers
Aug 26, 2021

CLOI: An Automated Benchmark Framework for Generating Geometric Digital Twins of Industrial Facilities

Publication: Journal of Construction Engineering and Management
Volume 147, Issue 11

Abstract

This paper devised, implemented, and benchmarked a novel framework, named CLOI, that can generate accurate, individually labelled point clusters of the most important shapes of existing industrial facilities with minimal manual effort, in a generic point-level format. CLOI employs a combination of deep learning and geometric methods to segment the points into classes and individual instances. By contrast, geometric digital twin generation from point cloud data in current commercial software is a tedious, manual process. Experiments with the CLOI framework revealed that the method can reliably segment complex and incomplete point clouds of industrial facilities, yielding 82% class segmentation accuracy. Compared with the current state of practice, the proposed framework can realize estimated time savings of 30% on average. CLOI is the first framework of its kind to achieve geometric digital twinning of the most important objects of industrial factories. It provides the foundation for further research on the generation of semantically enriched digital twins of the built environment.
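The 82% figure above refers to per-point class labelling. As a rough illustration only (this is not code from the paper, and the labels and class names below are hypothetical), per-point class segmentation accuracy is typically computed as the fraction of points whose predicted class matches the ground truth:

```python
def class_segmentation_accuracy(gt, pred):
    """Fraction of points whose predicted class label matches the ground truth."""
    assert len(gt) == len(pred), "one label per point in each list"
    matches = sum(g == p for g, p in zip(gt, pred))
    return matches / len(gt)

# Hypothetical labels for an 8-point cloud (0 = pipe, 1 = flange, 2 = valve)
ground_truth = [0, 0, 1, 1, 2, 2, 0, 1]
predicted    = [0, 0, 1, 2, 2, 2, 0, 0]
print(class_segmentation_accuracy(ground_truth, predicted))  # 0.75
```

Benchmarks of this kind usually report the metric averaged over classes or scans; the sketch above shows only the simplest per-point variant.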

Data Availability Statement

Some or all data, models, or code used during the study were provided by a third party. Direct requests for these materials may be made to the provider as indicated in the Acknowledgements.

Acknowledgments

We thank our colleague Graham Miatt, who provided insight, expertise, and data that greatly assisted this research. We also express our gratitude to Bob Flint from BP International Centre for Business and Technology (ICBT), who provided data for evaluation. The research leading to these results received funding from the Engineering and Physical Sciences Research Council (EPSRC) and the US National Academy of Engineering (NAE). AVEVA Group and BP International Centre for Business and Technology (ICBT) partially sponsored this research under Grant agreements RG83104 and RG90532, respectively. We gratefully acknowledge the collaboration of all academic and industrial project partners. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the aforementioned institutes.

References

Agapaki, E. 2021. “CLOI point cloud platform demo.” Accessed May 18, 2021. https://youtu.be/K3rnBctMYAU.
Agapaki, E., and I. Brilakis. 2020a. “CLOI-NET: Class segmentation of industrial facilities’ point cloud datasets.” Adv. Eng. Inf. 45 (Aug): 101121. https://doi.org/10.1016/j.aei.2020.101121.
Agapaki, E., and I. Brilakis. 2020b. “Instance segmentation of industrial point cloud data.” Preprint, submitted December 24, 2020. http://arxiv.org/abs/2012.14253.
Agapaki, E., A. Glyn-Davies, S. Mandoki, and I. Brilakis. 2019. “CLOI: A shape classification benchmark dataset for industrial facilities.” In Proc., 2019 ASCE Int. Conf. on Computing in Civil Engineering. Reston, VA: ASCE.
Agapaki, E., G. Miatt, and I. Brilakis. 2018. “Prioritizing object types for modelling existing industrial facilities.” Autom. Constr. 96 (Dec): 211–223. https://doi.org/10.1016/j.autcon.2018.09.011.
Agapaki, E., and M. Nahangi. 2020. “Scene understanding and model generation.” Chap. 3 in Infrastructure computer vision. 1st ed., edited by I. Brilakis and C. Haas. Amsterdam, Netherlands: Elsevier.
Agarwal, R., S. Chandrasekaran, and M. Sridhar. 2016. “The digital future of construction.” Accessed May 23, 2021. https://www.globalinfrastructureinitiative.com/sites/default/files/pdf/The-digital-future-of-construction-Oct-2016.pdf.
Armeni, I., O. Sener, H. Jiang, M. Fischer, and S. Savarese. 2016. “3D semantic parsing of large-scale indoor spaces.” In Proc., IEEE Conf. on Computer Vision and Pattern Recognition, 1534–1543. New York: IEEE.
Barbosa, F., J. Woetzel, J. Mischke, M. J. Ribeirinho, M. Sridhar, M. Parsons, and S. Brown. 2017. “Reinventing construction: A route to higher productivity.” Accessed July 23, 2021. https://www.mckinsey.com/~/media/McKinsey/Business%20Functions/Operations/Our%20Insights/Reinventing%20construction%20through%20a%20productivity%20revolution/MGI-Reinventing-Construction-Executive-summary.pdf.
Bassier, M., B. Van Genechten, and M. Vergauwen. 2019. “Classification of sensor independent point cloud data of building objects using random forests.” J. Build. Eng. 21 (Jan): 468–477. https://doi.org/10.1016/j.jobe.2018.04.027.
Bauer, F. L., and H. Wössner. 1972. “The ‘Plankalkül’ of Konrad Zuse: A forerunner of today’s programming languages.” Commun. ACM 15 (7): 678–685.
Borenstein, E., and S. Ullman. 2008. “Combined top-down/bottom-up segmentation.” IEEE Trans. Pattern Anal. Mach. Intell. 30 (12): 2109–2125. https://doi.org/10.1109/TPAMI.2007.70840.
Borrmann, A., and V. Berkhahn. 2018. “Principles of geometric modeling.” In Building information modeling, 27–41. Cham, Switzerland: Springer.
Chen, J., Z. Kira, and Y. K. Cho. 2019. “Deep learning approach to point cloud scene understanding for automated scan to 3D reconstruction.” J. Comput. Civ. Eng. 33 (4): 04019027. https://doi.org/10.1061/(ASCE)CP.1943-5487.0000842.
Chen, J., Z. Kira, and Y. K. Cho. 2021. “LRGNet: Learnable region growing for class-agnostic point cloud segmentation.” IEEE Rob. Autom. Lett. 6 (2): 2799–2806. https://doi.org/10.1109/LRA.2021.3062607.
ClearEdge. 2019. “Plant modeling capabilities.” Accessed July 23, 2021. https://new.clearedge3d.com/edgewise/plant-modeling/.
Coleman, C., M. Chandramouli, S. Damodaran, and E. Deuel. 2017. “Making maintenance smarter.” Accessed May 23, 2021. https://www2.deloitte.com/us/en/insights/focus/industry-4-0/using-predictive-technologies-for-asset-maintenance.html.
Devaux, A., M. Brédif, and N. Paparoditis. 2012. “A web-based 3D mapping application using WebGL allowing interaction with images, point clouds and models.” In Proc., ACM Int. Symp. on Advances in Geographic Information Systems. New York: Association for Computing Machinery.
Dilda, V., L. Mori, O. Noterdaeme, and J. Van Niel. 2018. “Using advanced analytics to boost productivity and profitability in chemical manufacturing.” Accessed May 23, 2021. https://www.mckinsey.com/industries/chemicals/our-insights/using-advanced-analytics-to-boost-productivity-and-profitability-in-chemical-manufacturing.
Dimitrov, A., and M. Golparvar-Fard. 2015. “Segmentation of building point cloud models including detailed architectural/structural features and MEP systems.” Autom. Constr. 51 (Mar): 32–45. https://doi.org/10.1016/j.autcon.2014.12.015.
Fumarola, M., and R. Poelman. 2011. “Generating virtual environments of real world facilities: Discussing four different approaches.” Autom. Constr. 20 (3): 263–269. https://doi.org/10.1016/j.autcon.2010.08.004.
Gerbert, P., S. Castagnino, C. Rothballer, A. Renz, and R. Filitz. 2016. “Digital in engineering and construction.” Accessed May 23, 2021. http://futureofconstruction.org/content/uploads/2016/09/BCG-Digital-in-Engineering-and-Construction-Mar-2016.pdf.
Glaessgen, E. H., and D. S. Stargel. 2012. “The digital twin paradigm for future NASA and U.S. Air Force vehicles.” In Proc., AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conf.: Collection of Technical Papers. Reston, VA: American Institute of Aeronautics and Astronautics.
Grieves, M. 2014. Digital twin: Manufacturing excellence through virtual factory replication. Vélizy-Villacoublay, France: Dassault Systèmes.
Huang, J., and S. You. 2013. “Detecting objects in scene point cloud: A combinational approach.” In Proc., 2013 Int. Conf. on 3D Vision, 3DV 2013, 175–182. New York: IEEE.
Hullo, J.-F., G. Thibault, C. Boucheny, F. Dory, and A. Mas. 2015. “Multi-sensor as-built models of complex industrial architectures.” Remote Sens. 7 (12): 16339–16362. https://doi.org/10.3390/rs71215827.
Jain, S. D., and K. Grauman. 2016. “Active image segmentation propagation.” In Proc., IEEE Computer Society Conf. on Computer Vision and Pattern Recognition. New York: IEEE.
Jin, Y.-H., and W.-H. Lee. 2019. “Fast cylinder shape matching using random sample consensus in large scale point cloud.” Appl. Sci. 9 (5): 974. https://doi.org/10.3390/app9050974.
Kalogerakis, E., M. Averkiou, S. Maji, and S. Chaudhuri. 2017. “3D shape segmentation with projective convolutional networks.” In Proc., 30th IEEE Conf. on Computer Vision and Pattern Recognition, CVPR 2017. New York: IEEE.
Kawashima, K., S. Kanai, and H. Date. 2014. “As-built modeling of piping system from terrestrial laser-scanned point clouds using normal-based region growing.” J. Comput. Des. Eng. 1 (1): 13–26. https://doi.org/10.7315/JCDE.2014.002.
Klokov, R., and V. Lempitsky. 2017. “Escape from cells: Deep Kd-networks for the recognition of 3D point cloud models.” In Proc., IEEE Int. Conf. on Computer Vision. New York: IEEE.
Komori, J., and K. Hotta. 2019. “AB-PointNet for 3D point cloud recognition.” In Proc., 2019 Digital Image Computing: Techniques and Applications (DICTA), 1–6. New York: IEEE.
Krizhevsky, A., I. Sutskever, and G. E. Hinton. 2017. “ImageNet classification with deep convolutional neural networks.” Commun. ACM 60 (6): 84–90. https://doi.org/10.1145/3065386.
Landrieu, L., and M. Simonovsky. 2018. “Large-scale point cloud semantic segmentation with superpoint graphs.” In Proc., IEEE Conf. on Computer Vision and Pattern Recognition, 4558–4567. New York: IEEE.
LeCun, Y., B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. 1989. “Backpropagation applied to handwritten zip code recognition.” Neural Comput. 1 (4): 541–551. https://doi.org/10.1162/neco.1989.1.4.541.
Li, B., Y. Shi, Z. Qi, and Z. Chen. 2019a. “A survey on semantic segmentation.” In Proc., IEEE Int. Conf. on Data Mining Workshops, ICDMW. New York: IEEE.
Li, Y., L. Ma, Z. Zhong, D. Cao, and J. Li. 2019b. “TGNet: Geometric graph CNN on 3-D point cloud segmentation.” IEEE Trans. Geosci. Remote Sens. 58 (5): 3588–3600. https://doi.org/10.1109/TGRS.2019.2958517.
Li, Z., et al. 2016. “A three-step approach for TLS point cloud classification.” IEEE Trans. Geosci. Remote Sens. 54 (9): 5412–5424. https://doi.org/10.1109/TGRS.2016.2564501.
Liang, M., B. Yang, S. Wang, and R. Urtasun. 2018. “Deep continuous fusion for multi-sensor 3D object detection.” In Proc., European Conf. on Computer Vision (ECCV). Cham, Switzerland: Springer.
Liang, Z., M. Yang, L. Deng, C. Wang, and B. Wang. 2019. “Hierarchical depthwise graph convolutional neural network for 3D semantic segmentation of point clouds.” In Proc., 2019 Int. Conf. on Robotics and Automation (ICRA), 8152–8158. New York: IEEE.
Liu, T., Y. Cai, J. Zheng, and N. M. Thalmann. 2021. “BEACon: A boundary embedded attentional convolution network for point cloud instance segmentation.” Visual Comput. 1–11. https://doi.org/10.1007/s00371-021-02112-7.
Liu, Y.-J., J.-B. Zhang, J.-C. Hou, J.-C. Ren, and W.-Q. Tang. 2013. “Cylinder detection in large-scale point cloud of pipeline plant.” IEEE Trans. Visual Comput. Graphics 19 (10): 1700–1707. https://doi.org/10.1109/TVCG.2013.74.
Lu, Q., C. Chen, W. Xie, and Y. Luo. 2020. “PointNGCNN: Deep convolutional networks on 3D point clouds with neighborhood graph filters.” Comput. Graphics 86 (Feb): 42–51. https://doi.org/10.1016/j.cag.2019.11.005.
Ma, J. W., T. Czerniawski, and F. Leite. 2020. “Semantic segmentation of point clouds of building interiors with deep learning: Augmenting training datasets with synthetic BIM-based point clouds.” Autom. Constr. 113 (May): 103144. https://doi.org/10.1016/j.autcon.2020.103144.
Marshall, G. F. 2016. Handbook of optical and laser scanning. Boca Raton, FL: CRC Press.
Marton, Z. C., R. B. Rusu, and M. Beetz. 2009. “On fast surface reconstruction methods for large and noisy point clouds.” In Proc., IEEE Int. Conf. on Robotics and Automation, 2009: ICRA ’09, 3218–3223. New York: IEEE.
Maturana, D., and S. Scherer. 2015. “VoxNet: A 3D convolutional neural network for real-time object recognition.” In Proc., 2015 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 922–928. New York: IEEE.
McKinsey Global Institute. 2015. “Digital America: A tale of the haves and have-mores.” Accessed May 23, 2021. https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/digital-america-a-tale-of-the-haves-and-have-mores#.
Meagher, D. 1980. Octree encoding: A new technique for the representation, manipulation and display of arbitrary 3-D objects by computer. Troy, NY: Rensselaer Polytechnic Institute.
Pang, Y., L. Li, W. Hu, Y. Peng, L. Liu, and Y. Shao. 2012. “Computerized segmentation and characterization of breast lesions in dynamic contrast-enhanced MR images using fuzzy c-means clustering and snake algorithm.” Comput. Math. Methods Med. 2012: 634907. https://doi.org/10.1155/2012/634907.
Patil, A. K., P. Holi, S. K. Lee, and Y. H. Chai. 2017. “An adaptive approach for the reconstruction and modeling of as-built 3D pipelines from point clouds.” Autom. Constr. 75 (Mar): 65–78. https://doi.org/10.1016/j.autcon.2016.12.002.
PECI (Portland Energy Conservation, Incorporated). 1999. “Portable data loggers diagnostic tools for energy-efficient building operations.” Accessed May 23, 2021. https://www.av8rdas.com/uploads/1/0/3/2/103277290/dataloggers.pdf.
Perez-Perez, Y., M. Golparvar-Fard, and K. El-Rayes. 2016. “Semantic and geometric labeling for enhanced 3D point cloud segmentation.” In Proc., Construction Research Congress 2016, 2542–2552. Reston, VA: ASCE.
Perez-Perez, Y., M. Golparvar-Fard, and K. El-Rayes. 2021. “Segmentation of point clouds via joint semantic and geometric features for 3D modeling of the built environment.” Autom. Constr. 125 (May): 103584. https://doi.org/10.1016/j.autcon.2021.103584.
Peyghambarzadeh, S. M. M., F. Azizmalayeri, H. Khotanlou, and A. Salarpour. 2020. “Point-PlaneNet: Plane kernel based convolutional neural network for point clouds analysis.” Digital Signal Process. 98 (Mar): 102633. https://doi.org/10.1016/j.dsp.2019.102633.
Pham, Q., D. T. Nguyen, B. Hua, G. Roig, and S. Yeung. 2019. “JSIS3D: Joint semantic-instance segmentation of 3D point clouds with multi-task pointwise networks and multi-value conditional random fields.” In Proc., IEEE/CVF Conf. on Computer Vision and Pattern Recognition, 8827–8836. New York: IEEE.
Qi, C. R., L. Yi, H. Su, and L. J. Guibas. 2017a. “PointNet++: Deep hierarchical feature learning on point sets in a metric space.” In Proc., 31st Conf. on Neural Information Processing Systems (NIPS 2017). Red Hook, NY: Curran Associates.
Qi, C. R., H. Su, K. Mo, and L. J. Guibas. 2017b. “PointNet: Deep learning on point sets for 3D classification and segmentation.” In Proc., IEEE Conf. on Computer Vision and Pattern Recognition, 652–660. New York: IEEE.
Rabbani, T. 2006. “Automatic reconstruction of industrial installations using point clouds and images.” Ph.D. thesis, Dept. of Civil Engineering and Geosciences, Delft Univ. of Technology.
Rusu, R. B., N. Blodow, Z. C. Marton, and M. Beetz. 2009. “Close-range scene segmentation and reconstruction of 3D point cloud maps for mobile manipulation in domestic environments.” In Proc., 2009 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, IROS 2009, 1–6. New York: IEEE.
Sampath, A., and J. Shan. 2010. “Segmentation and reconstruction of polyhedral building roofs from aerial lidar point clouds.” IEEE Trans. Geosci. Remote Sens. 48 (3): 1554–1567. https://doi.org/10.1109/TGRS.2009.2030180.
Schnabel, R., R. Wahl, and R. Klein. 2007. “Efficient RANSAC for point-cloud shape detection.” Comput. Graphics Forum 26 (2): 214–226. https://doi.org/10.1111/j.1467-8659.2007.01016.x.
Schuetz, M. 2016. “Potree: Rendering large point clouds in web browsers.” Thesis, Institute of Visual Computing and Human-Centered Technology, TU Wien.
Schwartz, G., and K. Nishino. 2019. “Recognizing material properties from images.” IEEE Trans. Pattern Anal. Mach. Intell. 42 (8): 1981–1995. https://doi.org/10.1109/TPAMI.2019.2907850.
Shao, T., Y. Yang, Y. Weng, Q. Hou, and K. Zhou. 2018. “H-CNN: Spatial hashing based CNN for 3D shape analysis.” IEEE Trans. Visual Comput. Graphics 26 (7): 2403–2416. https://doi.org/10.1109/TVCG.2018.2887262.
Son, H., and C. Kim. 2016. “Automatic segmentation and 3D modeling of pipelines into constituent parts from laser-scan data of the built environment.” Autom. Constr. 68 (Aug): 203–211. https://doi.org/10.1016/j.autcon.2016.05.010.
Su, H., S. Maji, E. Kalogerakis, and E. Learned-Miller. 2015. “Multi-view convolutional neural networks for 3D shape recognition.” In Proc., IEEE Int. Conf. on Computer Vision. New York: IEEE.
Taha, A. A., and A. Hanbury. 2015. “Metrics for evaluating 3D medical image segmentation: Analysis, selection, and tool.” BMC Med. Imaging 15 (1): 1–28. https://doi.org/10.1186/s12880-015-0068-x.
Tatarchenko, M., A. Dosovitskiy, and T. Brox. 2017. “Octree generating networks: Efficient convolutional architectures for high-resolution 3D outputs.” In Proc., IEEE Int. Conf. on Computer Vision. New York: IEEE.
Teichmann, M., M. Weber, M. Zöllner, R. Cipolla, and R. Urtasun. 2018. “MultiNet: Real-time joint semantic reasoning for autonomous driving.” In Proc., IEEE Intelligent Vehicles Symp. New York: IEEE.
Thomas, D. S. 2018. “The costs and benefits of advanced maintenance in manufacturing.” Accessed May 23, 2021. https://nvlpubs.nist.gov/nistpubs/ams/NIST.AMS.100-18.pdf.
Thomas, H., C. R. Qi, J.-E. Deschaud, B. Marcotegui, F. Goulette, and L. J. Guibas. 2019. “KPConv: Flexible and deformable convolution for point clouds.” In Proc., IEEE/CVF Int. Conf. on Computer Vision, 6411–6420. New York: IEEE.
Vosselman, G. 2009. “Advanced point cloud processing.” In Proc., Photogrammetric Week ’09, 137–146. Heidelberg, Germany: Wichmann.
Wang, L., Y. Huang, Y. Hou, S. Zhang, and J. Shan. 2019a. “Graph attention convolution for point cloud semantic segmentation.” In Proc., IEEE/CVF Conf. on Computer Vision and Pattern Recognition, 10296–10305. New York: IEEE.
Wang, S., S. Suo, W. C. Ma, A. Pokrovsky, and R. Urtasun. 2018a. “Deep parametric continuous convolutional neural networks.” In Proc., IEEE Computer Society Conf. on Computer Vision and Pattern Recognition. New York: IEEE.
Wang, W., R. Yu, Q. Huang, and U. Neumann. 2018b. “SGPN: Similarity group proposal network for 3D point cloud instance segmentation.” In Proc., IEEE Conf. on Computer Vision and Pattern Recognition, 2569–2578. New York: IEEE.
Wang, X., X. Shen, C. Shen, and J. Jia. 2019b. “Associatively segmenting instances and semantics in point clouds.” In Proc., IEEE/CVF Conf. on Computer Vision and Pattern Recognition. New York: IEEE.
Wei, L., Q. Huang, D. Ceylan, E. Vouga, and H. Li. 2016. “Dense human body correspondences using convolutional networks.” In Proc., IEEE Conf. on Computer Vision and Pattern Recognition. New York: IEEE.
West, T., and M. Blackburn. 2017. “Is digital thread/digital twin affordable? A systemic assessment of the cost of DoD’s latest Manhattan Project.” Procedia Comput. Sci. 114 (Oct): 47–56. https://doi.org/10.1016/j.procs.2017.09.003.
Wu, Z., S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao. 2015. “3D ShapeNets: A deep representation for volumetric shapes.” In Proc., IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, 1912–1920. New York: IEEE.
Xie, Y., J. Tian, and X. X. Zhu. 2019. “Linking points with labels in 3D: A review of point cloud semantic segmentation.” Preprint, submitted August 23, 2019. http://arxiv.org/abs/1908.08854.
Xiong, X., A. Adan, B. Akinci, and D. Huber. 2013. “Automatic creation of semantically rich 3D building models from laser scanner data.” Autom. Constr. 31 (May): 325–337. https://doi.org/10.1016/j.autcon.2012.10.006.
Zhang, J., Q. Huang, and X. Peng. 2015. “3D reconstruction of indoor environment using the Kinect sensor.” In Proc., 2015 5th Int. Conf. on Instrumentation and Measurement, Computer, Communication and Control (IMCCC), 538–541. New York: IEEE.
Zhang, J., X. Lin, and X. Ning. 2013. “SVM-based classification of segmented airborne LIDAR point clouds in urban areas.” Remote Sens. 5 (8): 3749–3775. https://doi.org/10.3390/rs5083749.
Zhou, Y., and O. Tuzel. 2017. “VoxelNet: End-to-end learning for point cloud based 3D object detection.” In Proc., IEEE Conf. on Computer Vision and Pattern Recognition, 4490–4499. New York: IEEE.

Information & Authors

Information

Published In

Journal of Construction Engineering and Management
Volume 147, Issue 11, November 2021

History

Received: Jan 12, 2021
Accepted: Jun 24, 2021
Published online: Aug 26, 2021
Published in print: Nov 1, 2021
Discussion open until: Jan 26, 2022

Authors

Affiliations

Eva Agapaki, Ph.D.
Senior Software Developer, Innovation Lead, PTC Inc., 121 Seaport Blvd., Boston, MA 02210 (corresponding author). ORCID: https://orcid.org/0000-0002-2962-9203. Email: [email protected]
Ioannis Brilakis, Ph.D., M.ASCE
Laing O’Rourke Reader, Dept. of Engineering, Univ. of Cambridge, Cambridge CB2 1PZ, UK.

Cited by

  • Digital twinning of civil infrastructures: Current state of model architectures, interoperability solutions, and future prospects, Automation in Construction, 10.1016/j.autcon.2023.104785, 149, (104785), (2023).
  • A Systematic Review of Artificial Intelligence Applied to Facility Management in the Building Information Modeling Context and Future Research Directions, Buildings, 10.3390/buildings12111939, 12, 11, (1939), (2022).
  • CAM-K: a novel framework for automated estimating pixel area using K-Means algorithm integrated with deep learning based-CAM visualization techniques, Neural Computing and Applications, 10.1007/s00521-022-07428-6, 34, 20, (17741-17759), (2022).