Technical Papers
May 22, 2019

Novel System for Rapid Investigation and Damage Detection in Cultural Heritage Conservation Based on Deep Learning

Publication: Journal of Infrastructure Systems
Volume 25, Issue 3

Abstract

Rapid investigation and damage assessment are crucial for cultural heritage conservation. Mobile crowd sensing (MCS) techniques are currently very effective for cultural heritage investigation and data collection; unfortunately, the data collected with MCS techniques cannot be fully utilized and analyzed. To overcome this limitation, this study combines MCS techniques with a state-of-the-art deep learning algorithm to realize rapid investigation and damage detection of the Great Wall in China. The GreatWatcher system, based on MCS techniques and a deep learning algorithm, was developed in this study, focusing on big data collection and damage detection for the Great Wall. The system highlights the significance and emerging potential of combining MCS techniques with deep learning methods in the cultural heritage field. The system comprises a mobile client (data collection), a web platform (data storage database), and a computing terminal (data analysis and automatic damage detection). Two field investigations with data collection along the Great Wall were performed to verify the feasibility and effectiveness of the system. Based on the collected data, a deep learning method was used at the computing terminal to automatically detect damage to the Great Wall. Moreover, validation experiments under various conditions were performed to verify the good performance of the deep learning method.
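The three-component data flow described in the abstract (mobile client for collection, web platform for storage, computing terminal for analysis) can be sketched conceptually as follows. This is a minimal illustration, not the authors' implementation: every class and function name here is hypothetical, and the damage "detector" is a stub standing in for the paper's trained deep learning model.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Observation:
    """A single crowd-sensed record uploaded from the mobile client."""
    photo_id: str
    gps: tuple          # (latitude, longitude) of the photographed wall section
    image: bytes        # raw image payload

@dataclass
class WebPlatform:
    """Central store: receives uploads and hands records to the computing terminal."""
    records: list = field(default_factory=list)

    def upload(self, obs: Observation) -> None:
        self.records.append(obs)

def analyze(platform: WebPlatform, detector: Callable[[bytes], str]) -> dict:
    """Computing terminal: run the damage detector over every stored image."""
    return {obs.photo_id: detector(obs.image) for obs in platform.records}

# Stub standing in for the trained deep learning model.
def dummy_detector(image: bytes) -> str:
    return "crack" if b"crack" in image else "intact"

platform = WebPlatform()
platform.upload(Observation("w001", (40.43, 116.57), b"...crack..."))
platform.upload(Observation("w002", (40.44, 116.58), b"..."))
results = analyze(platform, dummy_detector)
print(results)  # -> {'w001': 'crack', 'w002': 'intact'}
```

The point of the sketch is the separation of concerns: collection, storage, and analysis are decoupled, so the detector can be swapped (e.g., for a convolutional network) without touching the collection or storage layers.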


Acknowledgments

This research was supported by the National Natural Science Foundation of China (Grant No. 51479031) and the Key Training Program of Dalian University of Technology (Grant No. DUT16ZD219).


Information & Authors

Information

Published In

Journal of Infrastructure Systems
Volume 25, Issue 3, September 2019

History

Received: Jun 26, 2018
Accepted: Jan 17, 2019
Published online: May 22, 2019
Published in print: Sep 1, 2019
Discussion open until: Oct 22, 2019


Authors

Affiliations

Niannian Wang [email protected]
Doctoral Candidate, School of Civil Engineering, Dalian Univ. of Technology, No. 2 Linggong Rd., Ganjingzi District, Dalian 116024, China. Email: [email protected]
Xuefeng Zhao, Ph.D., A.M.ASCE [email protected]
Professor, School of Civil Engineering, Dalian Univ. of Technology, No. 2 Linggong Rd., Ganjingzi District, Dalian 116024, China (corresponding author). Email: [email protected]
Linan Wang, Ph.D. [email protected]
Engineer, Chinese Academy of Cultural Heritage, Institute of Architecture Conservation, No. 2 Gaoyuan St., Chaoyang District, Beijing 100029, China. Email: [email protected]
