
The IJCB 2017 Face Recognition Challenge


Overview of Competition

Face recognition remains one of the most significant challenges within the field of biometrics. In spite of advances in machine learning, algorithms that can generalize to new settings and tolerate the myriad configurations that the human face can take when acquired by a sensor are elusive. The IJCB 2017 Face Recognition Challenge is designed to evaluate state-of-the-art face recognition systems with respect to cross-dataset generalization, open-set face detection, and open-set face recognition – all of which remain unsolved problems.

The competition consists of three distinct challenges. Participants are invited to compete on one or more of these.

Challenge 1: Since 1993 there has been a continuing sequence of challenges and evaluations in face recognition. At any given time, the face recognition community concentrates on one or two challenges, and success is measured by performance on the most popular evaluation at the time. Performance on past challenges is not considered or discussed, as a recent paper by one of our organizers points out. There are two innovations in this first challenge compared to previous face recognition competitions. First, the challenge measures algorithm performance over multiple challenge problems. Second, one can determine whether recent progress in face recognition has come at the cost of performance on “solved” problems. By including qualitatively different datasets (FRGC, GBU, PaSC), the competition will measure the ability of algorithms to generalize across datasets. Further, the challenge will compare human and algorithm performance across all datasets. Participants will be expected to submit similarity matrices for all datasets in this challenge.
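For concreteness, the sketch below shows one plausible way to produce a similarity matrix from precomputed face embeddings. The embedding source, the cosine-similarity scoring, and the output file name are illustrative assumptions, not the official Challenge 1 submission format, which is specified in the instructions linked below.

    # Minimal sketch: build a query-by-target similarity matrix from face
    # embeddings. The placeholder embeddings and output file name are
    # assumptions for illustration only.
    import numpy as np

    def cosine_similarity_matrix(query_embeddings, target_embeddings):
        """Return an (n_query x n_target) matrix of cosine similarities."""
        q = query_embeddings / np.linalg.norm(query_embeddings, axis=1, keepdims=True)
        t = target_embeddings / np.linalg.norm(target_embeddings, axis=1, keepdims=True)
        return q @ t.T

    # Random vectors stand in for the output of a face recognition model.
    rng = np.random.default_rng(0)
    queries = rng.normal(size=(5, 128))
    targets = rng.normal(size=(7, 128))
    np.savetxt("frgc_similarity.txt", cosine_similarity_matrix(queries, targets))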

Challenges 2 and 3 address unconstrained face detection and open-set face recognition. In existing face detection/recognition datasets, the majority of images are “posed,” i.e., the subjects know they are being photographed, and/or the images are selected for publication in public media. Hence, blurry, occluded, and badly illuminated images are uncommon in these sets. Also, most of these challenges are closed-set, in that the list of subjects in the gallery is the same as the one used for testing. With parts 2 and 3 of this challenge, we hope to foster face detection and recognition research towards surveillance applications. Parts 2 and 3 of the challenge will use an extended version of the UnConstrained College Students (UCCS) dataset, which contains 18-megapixel high-resolution images taken at the University of Colorado Colorado Springs with a camera at a range of 100-150 meters capturing people walking on a sidewalk. To remove the potential bias of using automated face detection (which selects only easy faces), more than 70,000 face regions were hand-identified and cropped. From these, we have labeled a total of 1,732 identities. Each labeled sequence contains around ten images. For approximately 20% of the identities, we have sequences from two or more days.

Challenge 2 is unconstrained face detection, wherein participants are to detect all faces (independent of identity) in the images of the UCCS dataset and provide a bounding box for each detected face. They are allowed to use the training set to train their algorithm, and may also use external training data.
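As a rough illustration of the expected output, the sketch below runs a stand-in detector (OpenCV’s Haar cascade, which a participant would replace with their own algorithm) and writes one bounding box per row to a CSV file. The column layout, file names, and constant placeholder score are assumptions, not the official submission format.

    # Sketch of writing face detections to a CSV file, one box per row.
    # The detector, file names, and column layout are illustrative only.
    import csv
    import cv2

    def detect_faces(image_path):
        """Return (x, y, width, height, score) tuples for detected faces."""
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        # Haar cascades provide no confidence; use a constant placeholder score.
        return [(int(x), int(y), int(w), int(h), 1.0) for (x, y, w, h) in boxes]

    with open("detections.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["image", "x", "y", "width", "height", "score"])
        for image_path in ["example_0001.jpg"]:  # hypothetical image list
            for x, y, w, h, score in detect_faces(image_path):
                writer.writerow([image_path, x, y, w, h, score])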

Challenge 3 addresses open-set detection and recognition, wherein participants are to detect faces (as in Challenge 2), label each detection as known or unknown, and provide an identity label for each detected bounding box whose face is known. All known identities will be given in the training set, and participants are supposed to build gallery models for each identity using the training set. For each bounding box, a list of several candidate identities together with their recognition scores will be accepted, though the number of possible identity labels per bounding box will be limited. As some bounding boxes contain faces of unknown identities, both the “unknown” label and empty label lists will be accepted; assigning an identity label to an unknown identity will count as an error. The evaluation will use the Detection and Identification Rate (DIR) curve.
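The sketch below shows one plausible way to compute points on a rank-1 DIR curve, assuming that for each probe face we have its true identity (None for unknown subjects), the top-scoring gallery identity, and that identity’s score. The exact evaluation protocol used by the organizers may differ in detail.

    # Minimal sketch of a rank-1 Detection and Identification Rate (DIR)
    # curve. Input conventions are assumptions for illustration.
    import numpy as np

    def dir_curve(true_ids, predicted_ids, scores):
        """Return (false_alarm_rate, detection_identification_rate) arrays,
        one point per score threshold."""
        true_ids = np.asarray(true_ids, dtype=object)
        predicted_ids = np.asarray(predicted_ids, dtype=object)
        scores = np.asarray(scores, dtype=float)
        known = np.array([t is not None for t in true_ids])
        far, dir_rate = [], []
        for tau in np.sort(np.unique(scores))[::-1]:
            accepted = scores >= tau
            # False alarms: unknown probes whose top score exceeds the threshold.
            far.append(np.mean(accepted[~known]))
            # DIR: known probes accepted *and* correctly identified at rank 1.
            correct = accepted & known & (predicted_ids == true_ids)
            dir_rate.append(correct.sum() / known.sum())
        return np.array(far), np.array(dir_rate)

    # Toy example: three known probes (one misidentified) and one unknown.
    far, dir_rate = dir_curve(
        true_ids=["alice", "bob", None, "alice"],
        predicted_ids=["alice", "carol", "bob", "alice"],
        scores=[0.92, 0.85, 0.80, 0.40])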

Participate in the Challenges

Instructions for Challenge 1
Instructions for Challenges 2 and 3

Important Dates

Evaluation announcement: January 15th, 2017
First round similarity matrices and summary of approach given to Notre Dame (for Challenge 1, send via email to: walter.scheirer@nd.edu; submission instructions for Challenges 2 and 3 can be found here): April 15, 2017
Final similarity matrices delivered to Notre Dame and option to supply modified approach description: May 5, 2017
Updated report delivered to IJCB 2017: May 15, 2017
Final notification of IJCB 2017 decision on report: June 30, 2017
IJCB 2017 Meeting Dates: Oct. 2nd - 4th, 2017

Organizers

Terrance Boult, University of Colorado Colorado Springs
Patrick J. Flynn, University of Notre Dame
Manuel Günther, University of Colorado Colorado Springs
P. Jonathon Phillips, National Institute of Standards and Technology
Walter J. Scheirer, University of Notre Dame

Related Prior Challenges

BTAS 2016 Video Person Recognition Evaluation
FG 2015 Video Person Recognition Evaluation
IJCB 2014 Handheld Video Face and Person Recognition Competition