Technical Papers
Dec 30, 2022

Cost-Efficient Image Semantic Segmentation for Indoor Scene Understanding Using Weakly Supervised Learning and BIM

Publication: Journal of Computing in Civil Engineering
Volume 37, Issue 2

Abstract

Image segmentation is an essential step in vision sensing and image processing. It enables an understanding of object classes, spatial locations, and extents in a scene, which can support a wide range of construction applications such as progress monitoring, safety management, and productivity analysis. The recent ground-breaking achievements of deep learning-based approaches to semantic segmentation come at the cost of large-scale training datasets that are expensive to annotate at the pixel level. Although building information modeling (BIM) has been leveraged to alleviate labeling costs by using automatically generated, color-coded images as semantic labels, the differences between BIM models and real-world scenes make it difficult to apply networks trained on BIM-generated labels to real images, and reducing those differences takes nontrivial effort. To address these problems, this paper proposes a weakly supervised segmentation approach that uses inexpensive image-level labels. The boundary information missing from image-level labels is compensated for by BIM-extracted object information. The proposed method consists of three modules: (1) detect initial object locations from image-level labels; (2) extract object information from BIM as prior knowledge; and (3) incorporate the prior knowledge into the network to enhance the detected object locations. Three extensive experiments were designed to evaluate the effectiveness of the proposed method. Results show that the proposed method substantially improves the detected object areas by using prior knowledge of the target objects from BIM and outperforms state-of-the-art weakly supervised methods.
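
For context on module (1), the sketch below shows how class activation mapping (CAM), a standard starting point for weakly supervised segmentation, derives coarse object-location seeds from a classifier trained with image-level labels only. This is a minimal illustration under stated assumptions, not the paper's implementation: the backbone (torchvision's resnet18), the target class index, and the 0.3 seed threshold are placeholders.

import torch
import torch.nn.functional as F
from torchvision.models import resnet18

# Classifier assumed to be fine-tuned with image-level labels only.
model = resnet18(weights=None)
model.eval()

feature_maps = {}

def cache_features(_module, _inputs, output):
    # Keep the last convolutional feature map, shape (N, C, h, w).
    feature_maps["feats"] = output

model.layer4.register_forward_hook(cache_features)

def class_activation_map(image: torch.Tensor, class_idx: int) -> torch.Tensor:
    # Weight the final feature map by the classifier weights of the
    # target class, then upsample to the input resolution.
    with torch.no_grad():
        model(image)                      # populates feature_maps["feats"]
    feats = feature_maps["feats"]         # (1, 512, h, w) for resnet18
    weights = model.fc.weight[class_idx]  # (512,) weights of one class
    cam = torch.einsum("c,nchw->nhw", weights, feats)
    cam = F.relu(cam)                     # keep positive evidence only
    cam = cam / (cam.max() + 1e-8)        # normalize to [0, 1]
    return F.interpolate(cam.unsqueeze(1), size=image.shape[-2:],
                         mode="bilinear", align_corners=False).squeeze(1)

# Thresholding the normalized CAM yields the coarse, typically incomplete
# seeds that modules (2) and (3) would enhance with BIM-derived priors.
seeds = class_activation_map(torch.randn(1, 3, 224, 224), class_idx=0) > 0.3

Because CAMs highlight only the most discriminative object parts, seeds like these underestimate object extents; that gap is precisely what the BIM-extracted prior knowledge in modules (2) and (3) is meant to close.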

Data Availability Statement

The annotations for the image datasets and the proposed models are available from the corresponding author upon reasonable request.

Information & Authors

Published In

Journal of Computing in Civil Engineering
Volume 37, Issue 2, March 2023

History

Received: Jun 26, 2022
Accepted: Nov 14, 2022
Published online: Dec 30, 2022
Published in print: Mar 1, 2023
Discussion open until: May 30, 2023

Authors

Affiliations

Ph.D. Student, Division of Construction Engineering and Management, Lyles School of Civil Engineering, Purdue Univ., 550 Stadium Mall Dr., West Lafayette, IN 47907. Email: [email protected]
Professor, Division of Construction Engineering and Management, Lyles School of Civil Engineering, Purdue Univ., 550 Stadium Mall Dr., West Lafayette, IN 47907 (corresponding author). ORCID: https://orcid.org/0000-0003-4527-1974. Email: [email protected]
