Multi‑camera trajectory matching based on hierarchical clustering and constraints
Rapid advances in deep learning have led to breakthroughs in image classification, object detection, and object tracking. Autonomous driving and traffic monitoring systems, especially fixed-position, on-premise multi-camera installations, benefit greatly from these advances. In this paper, we propose a Multi-Camera Multi-Target (MCMT) vehicle tracking system based on constrained hierarchical clustering, which improves trajectory matching and thus tracks objects more robustly as they transition between cameras. YOLOv5, ByteTrack, and a ResNet50-IBN ReID network are used for vehicle detection and tracking. Static attributes such as vehicle type and vehicle color are determined from the ReID features with an SVM; this ReID-feature-based attribute classification outperforms its pure CNN counterpart. Single-camera trajectories (SCTs) are combined into multi-camera trajectories (MCTs) using hierarchical agglomerative clustering (HAC) with time and space constraints (our proposed algorithm is denoted MCT#MAC). Similarity between SCTs is measured by comparing the mean ReID features accumulated along each trajectory. The system was evaluated on multiple datasets, and our experiments demonstrate that constraining HAC by manipulating the proximity matrix greatly improves the multi-camera IDF1 score.
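The following is a minimal sketch, not the authors' implementation, of how constrained HAC over SCTs could look: each trajectory is summarized by the mean of its ReID features, pairwise cosine distances form the proximity matrix, and pairs that violate illustrative time/space constraints are set to a very large distance before clustering. The `Trajectory` class, `is_feasible` rule, transition-time limit, linkage method, and distance threshold are all assumptions for illustration.

```python
# Sketch of constrained hierarchical agglomerative clustering of
# single-camera trajectories (SCTs) into multi-camera trajectories (MCTs).
from dataclasses import dataclass
from itertools import combinations

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform


@dataclass
class Trajectory:               # assumed per-SCT summary
    camera_id: int
    start_time: float           # timestamps in seconds (assumption)
    end_time: float
    mean_reid: np.ndarray       # mean of per-frame ReID feature vectors


LARGE_DISTANCE = 1e6            # effectively forbids merging constrained pairs


def is_feasible(a: Trajectory, b: Trajectory) -> bool:
    """Illustrative time/space constraint: reject same-camera pairs and
    pairs whose transition gap is negative or implausibly long."""
    if a.camera_id == b.camera_id:
        return False
    gap = max(a.start_time, b.start_time) - min(a.end_time, b.end_time)
    return 0.0 <= gap <= 120.0  # assumed maximum camera-to-camera travel time


def constrained_hac(trajectories, distance_threshold=0.35):
    """Cluster SCTs with HAC on a proximity matrix edited by the constraints."""
    n = len(trajectories)
    dist = np.zeros((n, n))
    for i, j in combinations(range(n), 2):
        a, b = trajectories[i], trajectories[j]
        if is_feasible(a, b):
            # cosine distance between mean ReID features
            d = 1.0 - float(np.dot(a.mean_reid, b.mean_reid)) / (
                np.linalg.norm(a.mean_reid) * np.linalg.norm(b.mean_reid)
            )
        else:
            d = LARGE_DISTANCE  # constraint applied directly in the proximity matrix
        dist[i, j] = dist[j, i] = d
    # average linkage on the condensed, constrained proximity matrix
    Z = linkage(squareform(dist, checks=False), method="average")
    # cluster labels; each label corresponds to one multi-camera trajectory
    return fcluster(Z, t=distance_threshold, criterion="distance")
```

In this sketch, infeasible pairs are never merged below the cut threshold because their inflated distances dominate the linkage; the paper's actual constraint handling and similarity measure may differ in detail.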