Person reidentification using deep foreground appearance modeling
Gregory Watson, Abhir Bhalerao
Abstract
Person reidentification is the task of matching images of the same individual taken at different times, often by different cameras. To perform matching, most methods extract features from the entire image; however, this ignores the spatial context of the information in the image. We propose a convolutional neural network, based on ResNet-50, that predicts the foreground of an image: the regions containing the head, torso, and limbs of a person. Using this prediction, we apply the LOMO and salient color name feature descriptors so that features are extracted primarily from the foreground areas. In addition, we use a distance metric learning technique, cross-view quadratic discriminant analysis (XQDA), to compute optimally weighted distances between the relevant features. We evaluate our approach on the VIPeR, QMUL GRID, and CUHK03 data sets, compare it against a linear foreground estimation method, and show competitive or better overall matching performance.
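The idea of extracting features "primarily from the foreground" can be sketched as a weighting scheme: pixels predicted as foreground contribute fully to a descriptor, while background pixels are down-weighted. The sketch below is illustrative only, not the paper's LOMO or salient-color-name implementation; the function name, the simple joint RGB histogram, and the `bg_weight` parameter are assumptions made for the example.

```python
def foreground_weighted_histogram(image, mask, bins=8, bg_weight=0.2):
    """Illustrative foreground-weighted color descriptor (not the paper's LOMO).

    image: H x W grid (list of rows) of (r, g, b) tuples with values in 0..255.
    mask:  H x W grid of 0/1 foreground predictions (e.g., from a CNN).
    Returns an L1-normalized joint RGB histogram of length bins**3 in which
    background pixels count with weight bg_weight instead of 1.0.
    """
    step = 256 // bins                      # width of each color bin
    hist = [0.0] * (bins ** 3)
    for pixel_row, mask_row in zip(image, mask):
        for (r, g, b), fg in zip(pixel_row, mask_row):
            weight = 1.0 if fg else bg_weight
            idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
            hist[idx] += weight
    total = sum(hist) or 1.0                # avoid division by zero on empty input
    return [h / total for h in hist]


# Tiny usage example: a 2x2 image, left column predicted as foreground.
image = [[(0, 0, 0), (255, 255, 255)],
         [(0, 0, 0), (255, 255, 255)]]
mask = [[1, 0],
        [1, 0]]
descriptor = foreground_weighted_histogram(image, mask)
```

In this example the two black foreground pixels dominate the descriptor, while the white background pixels contribute only a fifth of their nominal mass, which is the effect the foreground model is meant to achieve before metric learning (e.g., XQDA) compares descriptors across camera views.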
© 2018 SPIE and IS&T 1017-9909/2018/$25.00
Gregory Watson and Abhir Bhalerao "Person reidentification using deep foreground appearance modeling," Journal of Electronic Imaging 27(5), 051215 (2 April 2018). https://doi.org/10.1117/1.JEI.27.5.051215
Received: 10 January 2018; Accepted: 15 March 2018; Published: 2 April 2018
CITATIONS
Cited by 6 scholarly publications.
KEYWORDS
Data modeling, Feature extraction, RGB color model, Cameras, Head, Image resolution, Laser sintering
