Forecasting the future condition of a traffic network requires modeling the spatial and temporal correlations of traffic series. Most current research focuses on designing complex graph neural networks that capture common patterns using predefined graphs. In this paper, we argue that predefined graphs can be avoided: adaptive graphs can capture the spatial correlations between traffic series and improve the performance of graph neural networks. To capture the temporal relationships of the sequences, we also incorporate gated recurrent neural networks. We then encode the relative time position of the sequence to fully extract the characteristics of the traffic series. Finally, we add the values from the previous day and from the same day of the previous week as references in the final prediction to improve its accuracy. Experimental results on two real-world traffic datasets (PeMSD4 and PeMSD8) demonstrate that our method outperforms existing methods.
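The adaptive-graph idea described above can be sketched as follows. This is a minimal illustration of one common way to learn an adjacency matrix from node embeddings rather than using a predefined graph (in the style of adaptive-adjacency modules from the spatio-temporal GNN literature); the class name, embedding dimension, and softmax normalization are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveAdjacency(nn.Module):
    """Learn a graph adjacency matrix from trainable node embeddings.

    Illustrative sketch: all names and dimensions are hypothetical.
    """

    def __init__(self, num_nodes: int, embed_dim: int):
        super().__init__()
        # One learnable embedding vector per traffic sensor (graph node).
        self.node_embeddings = nn.Parameter(torch.randn(num_nodes, embed_dim))

    def forward(self) -> torch.Tensor:
        # Pairwise similarity between node embeddings; ReLU keeps only
        # positive affinities, softmax normalizes each row into edge weights.
        logits = F.relu(self.node_embeddings @ self.node_embeddings.T)
        return F.softmax(logits, dim=1)

# Small demo: a 16-node graph with 4-dimensional embeddings.
adj = AdaptiveAdjacency(num_nodes=16, embed_dim=4)()
```

Because the embeddings are ordinary parameters, the graph structure is trained end-to-end with the rest of the network instead of being fixed in advance.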
We propose a two-stage label noise learning framework for deep-learning classification on datasets with noisy labels. A common approach filters noisy samples based on the loss observed during a warm-up stage, but the optimal warm-up length cannot be determined, so the method suffers from the memorization effect. Inspired by this, our first stage replaces warm-up with contrastive self-supervised pre-training and uses pseudo-labels for the subsequent loss calculations, which avoids the influence of noisy labels on pre-training. The second stage is a semi-supervised algorithm that applies data augmentation: weak augmentation is used for loss estimation and pseudo-labeling, while strong augmentation is used for gradient descent, so a more accurate representation is learned for each image. Through these two stages, the performance of the network improves. Experiments show that our framework is more robust on classification problems with noisy labels and also works well on datasets with high noise rates. Moreover, because it is a two-stage framework, its stages are easy to split off and combine with other noisy-label learning algorithms.
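The weak/strong augmentation strategy in the second stage can be sketched as a FixMatch-style consistency step: pseudo-labels come from the weakly augmented view (no gradient), and only confident predictions supervise the strongly augmented view. This is a hypothetical illustration of the general technique; the function name, confidence threshold, and loss form are assumptions, not the paper's exact algorithm.

```python
import torch
import torch.nn.functional as F

def semi_supervised_step(model, x_weak, x_strong, threshold=0.95):
    """One consistency-training step (illustrative sketch).

    x_weak / x_strong: weakly and strongly augmented views of the same batch.
    """
    with torch.no_grad():
        # Pseudo-labels from the weak view; gradients are not tracked here.
        probs = F.softmax(model(x_weak), dim=1)
        confidence, pseudo_labels = probs.max(dim=1)
        # Keep only samples whose pseudo-label is confident enough.
        mask = (confidence >= threshold).float()
    # The strong view is pushed toward the confident pseudo-labels.
    logits_strong = model(x_strong)
    per_sample = F.cross_entropy(logits_strong, pseudo_labels, reduction="none")
    return (per_sample * mask).mean()

# Small demo with a toy linear classifier (4 features, 3 classes).
model = torch.nn.Linear(4, 3)
x = torch.randn(8, 4)
loss = semi_supervised_step(model, x, x, threshold=0.0)
```

Separating pseudo-label generation (weak view) from the gradient path (strong view) is what lets the noisy-label filtering and the representation learning be decoupled, which is also why the two stages of the framework can be recombined with other algorithms.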