Abstract
Trajectory representation learning transforms raw trajectory data (sequences of spatiotemporal points) into low-dimensional representation vectors to improve downstream tasks such as trajectory similarity computation, prediction, and classification. Existing models primarily adopt self-supervised learning frameworks, often employing encoders such as Recurrent Neural Networks (RNNs) to capture local dependencies in trajectory sequences. However, individual mobility within urban areas exhibits regular and periodic patterns, suggesting the need for a more comprehensive representation from both local and global perspectives. To address this, we propose TrajRL-TFF, a trajectory representation learning method based on time-domain and frequency-domain feature fusion. First, given the heterogeneous spatial distribution of trajectory data, a quadtree is employed for spatial partitioning and coding. Then, each trajectory is converted into a quadtree-code-based time series (i.e., a time-domain signal), and its corresponding frequency-domain signal is derived via the Discrete Fourier Transform (DFT). Finally, a trajectory encoder combining an RNN-based time-domain encoder and a Transformer-based frequency-domain encoder is constructed to capture the trajectory's local and global features, respectively, and is trained within a self-supervised sequence encoding-decoding framework using a trajectory perturbation-reconstruction task. Experiments demonstrate that TrajRL-TFF outperforms baselines in downstream tasks including trajectory querying and prediction, confirming that integrating time- and frequency-domain signals enables a more comprehensive representation of human mobility regularities and patterns. These findings provide valuable guidance for trajectory representation learning and trajectory modeling in future studies.
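The pipeline described above (quadtree spatial coding, a DFT of the resulting code series, and fusion of an RNN time-domain encoder with a Transformer frequency-domain encoder) can be illustrated with a minimal sketch. This is not the authors' implementation: the tree depth, bounding box, model sizes, concatenation-based fusion, and the names quadtree_code and DualDomainEncoder are all assumptions made for illustration.

import torch
import torch.nn as nn

def quadtree_code(lon, lat, depth=6, bounds=(-180.0, 180.0, -90.0, 90.0)):
    """Map a point to an integer quadtree cell code by recursively
    halving the bounding box; depth and bounds are illustrative choices."""
    min_lon, max_lon, min_lat, max_lat = bounds
    code = 0
    for _ in range(depth):
        mid_lon = (min_lon + max_lon) / 2.0
        mid_lat = (min_lat + max_lat) / 2.0
        qx, qy = int(lon >= mid_lon), int(lat >= mid_lat)
        code = (code << 2) | (qy << 1) | qx  # 2 bits per subdivision level
        min_lon, max_lon = (mid_lon, max_lon) if qx else (min_lon, mid_lon)
        min_lat, max_lat = (mid_lat, max_lat) if qy else (min_lat, mid_lat)
    return code

class DualDomainEncoder(nn.Module):
    """Hypothetical dual-view encoder: a GRU reads the quadtree-code
    time series (local features), a Transformer reads its DFT spectrum
    (global features), and the two views are fused by concatenation."""
    def __init__(self, vocab_size, d_model=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.freq_encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.freq_proj = nn.Linear(2, d_model)  # (real, imag) -> d_model
        self.fuse = nn.Linear(2 * d_model, d_model)

    def forward(self, codes):  # codes: (B, T) int64 quadtree-code series
        x = self.embed(codes)                      # time-domain signal
        _, h = self.rnn(x)
        h_time = h[-1]                             # (B, d_model), local view
        spec = torch.fft.fft(codes.float(), dim=1)           # DFT of series
        spec = torch.stack([spec.real, spec.imag], dim=-1)   # (B, T, 2)
        z = self.freq_encoder(self.freq_proj(spec))
        h_freq = z.mean(dim=1)                     # (B, d_model), global view
        return self.fuse(torch.cat([h_time, h_freq], dim=-1))

# Usage: encode one short trajectory into a 128-d representation vector.
traj = [(114.06, 22.54), (114.07, 22.55), (114.09, 22.56)]
codes = torch.tensor([[quadtree_code(lon, lat) for lon, lat in traj]])
rep = DualDomainEncoder(vocab_size=4 ** 6)(codes)  # shape (1, 128)

In the actual method this representation would be learned self-supervised, by perturbing the input code sequence and training a decoder to reconstruct the original; the sketch shows only the encoding path.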
Author affiliations
1 Chinese Academy of Sciences, Shenzhen Institutes of Advanced Technology, Shenzhen, China (GRID:grid.9227.e) (ISNI:0000000119573309)
2 Southern University of Science and Technology, Shenzhen, China (GRID:grid.263817.9) (ISNI:0000 0004 1773 1790); Chinese Academy of Sciences, Shenzhen Institutes of Advanced Technology, Shenzhen, China (GRID:grid.9227.e) (ISNI:0000000119573309)
3 Sun Yat-Sen University, Shenzhen, China (GRID:grid.12981.33) (ISNI:0000 0001 2360 039X)
4 Peking University, College of Urban and Environmental Sciences, Beijing, China (GRID:grid.11135.37) (ISNI:0000 0001 2256 9319)
5 Aerospace Information Research Institute, Chinese Academy of Sciences, State Key Laboratory of Remote Sensing Science, Beijing, China (GRID:grid.507725.2); University of Chinese Academy of Sciences, Beijing, China (GRID:grid.410726.6) (ISNI:0000 0004 1797 8419)
6 SmartSteps, Beijing, China (GRID:grid.410726.6)