Facial re-identification using deep learning on combined real-virtual environments
Face recognition, identification and verification are common problems in the field of biometrics. Many robust and accurate approaches have been developed over the years, especially since the arrival of deep neural networks. However, despite this ongoing progress, algorithms still struggle to scale to hundreds or thousands of users and to cope with the multiple sources of variability introduced by the environment and the user context, such as head orientation, facial accessories, different backgrounds and the use of different recording devices.
The aim of this work is to address these limitations and to develop a novel unified framework for feature extraction and metric learning using deep learning architectures. By framing face recognition as a re-identification problem, we aim to extend our methodology to vast datasets such as current social networks or national police databases, where in most cases only a few images per subject are available (for example, a profile picture or police mugshots). The algorithm should be able to use these few photos to identify subjects in low-resolution, low-quality CCTV footage across different viewpoints and poses. Testing such a virtually enhanced face re-identification paradigm on real-world surveillance cameras will be the underlying objective of this project. The project will aim to develop an approach that can use one or two police mugshot images to identify individuals in a CCTV stream.
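As a rough illustration of this few-shot paradigm, the matching step can be thought of as nearest-neighbour search in an embedding space: each enrolled subject contributes one or two reference embeddings (e.g. from a mugshot), and a query embedding from a CCTV crop is assigned to the closest subject if it is close enough. The sketch below is a minimal, hypothetical version of that idea; the function name `reidentify`, the toy embeddings and the distance threshold are all illustrative assumptions, standing in for the output of a trained deep network.

```python
import numpy as np

# Hypothetical gallery: one embedding per enrolled subject
# (in practice, produced by a deep network from a mugshot image).
gallery = {
    "subject_A": np.array([0.9, 0.1, 0.0]),
    "subject_B": np.array([0.0, 1.0, 0.2]),
}

def reidentify(query_emb, gallery, threshold=0.6):
    """Rank enrolled subjects by embedding distance to the query.

    Returns the best-matching subject, or None if no subject is close
    enough (the open-set case: the person may not be enrolled at all).
    """
    best_id, best_d = None, np.inf
    for subject, emb in gallery.items():
        d = np.linalg.norm(query_emb - emb)  # Euclidean distance
        if d < best_d:
            best_id, best_d = subject, d
    return best_id if best_d < threshold else None

# Toy query embedding standing in for a low-quality CCTV crop.
query = np.array([0.85, 0.15, 0.05])
print(reidentify(query, gallery))  # close to subject_A's reference
```

The open-set threshold is what distinguishes re-identification from closed-set classification: an unenrolled face should produce no match rather than the nearest wrong identity.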
- To implement deep convolutional neural networks to tackle automatic feature extraction in facial imagery
- To combine automatic feature extraction with metric learning in Siamese network architectures to specifically address the problems of verification and re-identification
- To investigate new configurations for face recognition in zero-shot and one-shot scenarios
- To extend the re-identification framework to image-to-video recognition able to tackle poor-quality CCTV footage
- To explore robust strategies such as data augmentation and dropout to address facial occlusions and other sources of variation that may affect identification, such as different poses, clothing, cosmetics, glasses, scarves, hats, etc.
- To develop new deep learning architectures that enhance visual facial re-identification using semantic information from the context or the subject profile
- To evaluate the performance and limitations of verification versus re-identification in linking real-life snapshots with social network profiles and/or CCTV captures
How to Apply
Applicants should apply electronically through the Queen’s online application portal at: https://dap.qub.ac.uk/portal/
Dr. Jesus Martinez del Rincon
Email: firstname.lastname@example.org
Telephone: +44 (0)28 9097 1779