Abstract

Precise identification of the functional features of urban spatial units is a precondition for urban planning and policy-making. However, inferring unknown attributes of urban spatial units by mining spatial-interaction data remains a challenge in geographic information science. Although neural-network approaches have been widely applied in this field, urban dynamics, spatial semantics, and their relationship with urban functional features have not been examined in depth. To this end, we propose semantic-enhanced graph convolutional neural networks (GCNNs) for the multi-scale embedding of urban spatial units, on which urban land-use identification is performed by leveraging characteristics of human mobility extracted from the largest mobile phone datasets to date. Given the heterogeneity of multi-modal spatial data, we introduce the combination of a systematic data-alignment method and a generative feature-fusion method for the robust construction of heterogeneous graphs, providing an adaptive solution that improves GCNN performance on node-classification tasks. For the first time, our work explicitly examines the scale effect on GCNN backbones. The results show that large-scale tasks are more sensitive to the directionality of spatial interaction, whereas small-scale tasks are more sensitive to its adjacency. Quantitative experiments conducted in Shenzhen demonstrate the superior performance of the proposed framework over state-of-the-art methods: the best accuracy is achieved by the inductive GraphSAGE model at the 250 m scale, exceeding the baseline by 25.4%. Furthermore, we explain the role of spatial-interaction factors in urban land-use identification through the deep-learning method.
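
To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of inductive GraphSAGE node classification of the kind described in the abstract, assuming PyTorch Geometric. Spatial units are graph nodes, directed edges stand in for origin-destination mobility flows, node features stand in for fused mobility/semantic descriptors, and labels are land-use classes; all sizes and data below are synthetic placeholders rather than the paper's dataset.

# Minimal illustrative sketch: GraphSAGE node classification of spatial units.
# Assumes PyTorch Geometric; all graph data below are synthetic placeholders.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import SAGEConv

num_units, num_feats, num_classes = 1000, 32, 6          # hypothetical sizes

# Synthetic graph: directed edges stand in for OD flows between spatial units.
edge_index = torch.randint(0, num_units, (2, 8000))      # [2, num_edges]
x = torch.randn(num_units, num_feats)                     # fused unit features
y = torch.randint(0, num_classes, (num_units,))           # land-use labels
train_mask = torch.rand(num_units) < 0.7                  # labelled subset
data = Data(x=x, edge_index=edge_index, y=y)

class SAGE(torch.nn.Module):
    """Two-layer GraphSAGE encoder followed by a land-use classifier."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hid_dim)
        self.conv2 = SAGEConv(hid_dim, out_dim)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)

model = SAGE(num_feats, 64, num_classes)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)

for epoch in range(100):
    model.train()
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    loss = F.cross_entropy(out[train_mask], data.y[train_mask])
    loss.backward()
    optimizer.step()

# Inference: predicted land-use class for every spatial unit.
pred = model(data.x, data.edge_index).argmax(dim=1)

Because GraphSAGE learns an aggregation function rather than node-specific embeddings, the trained model can be applied inductively to spatial units (or grid scales) unseen during training, which is consistent with the inductive setting reported in the abstract.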

Details

Title
Semantic-Enhanced Graph Convolutional Neural Networks for Multi-Scale Urban Functional-Feature Identification Based on Human Mobility
Author
Chen, Yuting 1; Zhao, Pengjun 2; Lin, Yi 3; Sun, Yushi 3; Chen, Rui 4; Yu, Ling 4; Liu, Yu 5

1 Department of Urban Planning and Design, Shenzhen Graduate School, Peking University, Shenzhen 518055, China; [email protected] (Y.C.); Key Laboratory of Earth Surface System and Human-Earth Relations of Ministry of Natural Resources of China, Shenzhen 518055, China
2 Department of Urban Planning and Design, Shenzhen Graduate School, Peking University, Shenzhen 518055, China; Key Laboratory of Earth Surface System and Human-Earth Relations of Ministry of Natural Resources of China, Shenzhen 518055, China; School of Urban and Environmental Sciences, Peking University, Beijing 100091, China
3 Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
4 Department of Urban Planning and Design, Shenzhen Graduate School, Peking University, Shenzhen 518055, China; School of Urban and Environmental Sciences, Peking University, Beijing 100091, China
5 Institute of Remote Sensing and Geographical Information System, School of Earth and Space Sciences, Peking University, Beijing 100091, China
First page
27
Publication year
2024
Publication date
2024
Publisher
MDPI AG
e-ISSN
2220-9964
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2918768210
Copyright
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.