Deep learning of appearance affinity for multi-object tracking and re-identification: a comparative view

publication date

  • October 2020

start page

  • 1

end page

  • 28

issue

  • 11 (article no. 1757)

volume

  • 9

International Standard Serial Number (ISSN)

  • 2079-9292

abstract

  • Recognizing the identity of a query individual in a surveillance sequence is the core of Multi-Object Tracking (MOT) and Re-Identification (Re-Id) algorithms. Both tasks can be addressed by measuring the appearance affinity between person observations with a deep neural model. Nevertheless, the differences in their specifications, and consequently in the characteristics and constraints of the training data available for each task, give rise to the necessity of employing different learning approaches for each of them. This article offers a comparative view of the Double-Margin-Contrastive and the Triplet loss functions, and analyzes the benefits and drawbacks of applying each of them to learn an Appearance Affinity model for Tracking and Re-Identification. A batch of experiments has been conducted, and their results support the hypothesis drawn from the presented study: the Triplet loss function is more effective than the Contrastive one when a Re-Id model is learned, and, conversely, in the MOT domain, the Contrastive loss better discriminates whether a pair of images depicts the same person or not.
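  The two loss functions compared in the abstract can be sketched as follows. This is a minimal illustrative NumPy version, not the paper's exact formulation: the margin values (`m_pos`, `m_neg`, `margin`) are hypothetical defaults, and the inputs are assumed to be precomputed embedding distances.

```python
import numpy as np

def double_margin_contrastive_loss(d, y, m_pos=0.5, m_neg=1.5):
    """Double-margin contrastive loss on a pairwise distance d.

    y = 1 for a positive pair (same identity), y = 0 for a negative pair.
    Positive pairs are penalized only when their distance exceeds m_pos;
    negative pairs only when their distance falls below m_neg.
    Margin values here are illustrative, not taken from the article.
    """
    pos = y * np.maximum(d - m_pos, 0.0) ** 2
    neg = (1 - y) * np.maximum(m_neg - d, 0.0) ** 2
    return pos + neg

def triplet_loss(d_ap, d_an, margin=0.3):
    """Triplet loss: push the anchor-positive distance d_ap to be
    smaller than the anchor-negative distance d_an by at least margin."""
    return np.maximum(d_ap - d_an + margin, 0.0)
```

The contrastive variant supervises each pair independently, which suits MOT-style same/different decisions, while the triplet variant only constrains the relative ordering of distances, which matches the ranking nature of Re-Id.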

keywords

  • appearance affinity; triplet model; contrastive loss function; deep convolutional neural network; re-identification; multi-object tracking