Hand Biometrics

I. Introduction

Personal authentication is becoming a crucial issue in our highly interconnected information society. Biometrics identifies an individual based on physiological characteristics (e.g., fingerprint, iris, face, hand, voice) or behavioral characteristics (e.g., signature, gait). Biometric identification provides more security and convenience than traditional authentication methods, which rely on what you know (such as a password) or what you have (such as an ID card). Fingerprint and iris recognition are the most popular biometrics because of their unique identification capabilities, but in many access control applications where user acceptability is a significant factor, these traits may not be well accepted. In such situations, hand identification systems, characterized by their non-intrusive data collection, play an important role.

Traditional hand recognition systems can be split into three modalities: geometry, texture and hybrid. As a starting point, we focus on hand geometry because it is the simplest to implement.

II. Hand-geometry verification system

The architecture of our automatic hand verification system is depicted below:

The first step is an image preprocessing module in which the input image is binarized and the hand silhouette is extracted. The radial distance from a fixed reference point (the wrist) to every point on the silhouette is then computed to find the fingertips and finger valleys. Next, a set of distance-based measures defined on these reference points is computed to form the feature vector representation of the hand. Given hand images of a test subject and of the enrolled subjects, matching is based on a distance measure between their feature vectors.
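As an illustration of this step, the sketch below computes the radial distance profile of a binarized hand and locates fingertip and valley candidates as the maxima and minima of that profile. It is only a minimal example, not the implementation used in the paper; it assumes OpenCV 4 and SciPy, and the function names and the minimum peak separation are placeholders of mine.

    # Illustrative sketch of the preprocessing step described above (not the
    # exact implementation from the paper). Assumes a grayscale scan where the
    # hand is brighter than the background and the wrist reference point is known.
    import cv2
    import numpy as np
    from scipy.signal import find_peaks

    def radial_profile(gray_image, wrist_point):
        """Binarize the image, extract the hand contour and return the radial
        distance from the wrist reference point to every contour point."""
        _, binary = cv2.threshold(gray_image, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
        contour = max(contours, key=cv2.contourArea).reshape(-1, 2)  # hand silhouette
        distances = np.linalg.norm(contour - np.asarray(wrist_point), axis=1)
        return contour, distances

    def tips_and_valleys(distances, min_separation=50):
        """Fingertips appear as local maxima of the radial profile and finger
        valleys as local minima (peaks of the negated profile)."""
        tips, _ = find_peaks(distances, distance=min_separation)
        valleys, _ = find_peaks(-distances, distance=min_separation)
        return tips, valleys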

In this paper (download it from here) we describe the image processing pipeline in detail and study which geometric features lead to better discriminability between users. The set of studied features includes 17 measures from different zones of the hand: 5 finger lengths, 9 finger widths and 3 palm widths (see image on the right).
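As a rough illustration of how such distance-based measures can be assembled and compared, consider the sketch below. The exact measurement definitions are given in the paper; the helper functions, the midpoint-based length definition and the mean-distance matching rule are illustrative assumptions of mine.

    # Illustrative sketch of the feature vector and matching stage.
    import numpy as np

    def finger_length(tip, valley_left, valley_right):
        """One possible length measure: distance from the fingertip to the
        midpoint of its two adjacent valley points."""
        base = (np.asarray(valley_left) + np.asarray(valley_right)) / 2.0
        return float(np.linalg.norm(np.asarray(tip) - base))

    def build_feature_vector(finger_lengths, finger_widths, palm_widths):
        """Concatenate the 5 finger lengths, 9 finger widths and 3 palm widths
        into a single 17-dimensional feature vector."""
        return np.concatenate([finger_lengths, finger_widths, palm_widths])

    def verify(probe_vector, enrolled_vectors, threshold):
        """Accept the claimed identity if the mean distance between the probe
        and the user's enrolled feature vectors falls below a threshold."""
        distances = [np.linalg.norm(probe_vector - v) for v in enrolled_vectors]
        return float(np.mean(distances)) < threshold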

III. Experiments and Results

The first experiments were carried out on a database containing 500 images of the right hand of 50 subjects (10 samples per subject). The images were acquired with a desktop scanner under controlled conditions: the position of the hand on the scanner surface was relatively invariant, the scanner surface was cleaned between consecutive samples, illumination was uniform, etc. As a result, high-quality images were obtained (see below):

The results (shown in the table below) revealed that the features based on the thumb (L5) and the palm widths (P1 to P3) are the least discriminative. Excluding the length of the thumb (L5) from the feature vector reduced the equal error rate (EER) from 9.6% to 1.7% (feature subset #1 versus #2). This may be due to the greater freedom of movement of this finger, which hinders an accurate estimation of its valley points. For the four remaining fingers, we concluded that their lengths (L1 to L4) are the most discriminative features: removing any of these lengths degrades the performance by at least a factor of 2 (feature subset #2 versus #3-5). The finger widths (W1 to W4) are not particularly discriminative, as including them in the feature vector improves the EER only slightly, from 1.68% to 1.24% (feature subset #2 versus #7). The palm features (P1 to P3) degrade the predictive power by approximately a factor of 3 (feature subset #2 versus #6; and subset #7 versus #8), perhaps because they depend on the thumb valley points. Finally, the best feature combination (feature subset #7) improves the performance of a reference system by a factor of 2.4 (1.24% vs. 2.97% EER).
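For reference, the EER is the operating point at which the false acceptance rate equals the false rejection rate. A minimal sketch of how it can be estimated from genuine and impostor matching distances (not the exact evaluation code used in our experiments) is:

    # Minimal sketch of estimating the equal error rate (EER) from genuine
    # (same-user) and impostor (different-user) matching distances.
    import numpy as np

    def equal_error_rate(genuine_scores, impostor_scores):
        """Sweep a decision threshold over all observed distances and return the
        point where false rejection and false acceptance rates are closest."""
        genuine = np.asarray(genuine_scores)
        impostor = np.asarray(impostor_scores)
        thresholds = np.sort(np.concatenate([genuine, impostor]))
        best = None
        for t in thresholds:
            frr = np.mean(genuine > t)    # genuine pairs rejected (distance too large)
            far = np.mean(impostor <= t)  # impostor pairs accepted
            if best is None or abs(far - frr) < best[0]:
                best = (abs(far - frr), (far + frr) / 2.0)
        return best[1]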

IV. Image quality detection

In a follow-up work (download it from here) we tested the system on a larger database containing 12,800 hand images from 400 subjects (32 samples per subject). In this case, we found more variability in image quality than in the previous dataset. Specifically, there were some low-quality images associated with a few users that degraded the system performance (see the three images on the bottom row):

Our hypothesis is that if these low-quality images could be automatically detected and excluded from the analysis, the performance of the system should increase notably. To test this hypothesis, we developed a “validity detection” module that checks whether a given feature vector satisfies a set of anatomically valid geometric proportions:

The geometric constraints that we defined are three ratios between pairs of finger lengths:

  • r1=L3/L4

  • r2=L2/L3

  • r3=L2/L1

Using a large population of high-quality samples, we modelled each ratio with a Gaussian distribution of mean μ and standard deviation σ. A new (unseen) sample is considered valid if each of the three ratios falls inside the range [μ−kσ, μ+kσ], where k is a tunable parameter that controls the width of the acceptance range.
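A minimal sketch of this validity check, assuming the three ratios are modelled as independent Gaussians and using illustrative function names, could look as follows:

    # Sketch of the validity-detection rule described above (variable names are
    # illustrative). Each ratio is fitted on high-quality training samples.
    import numpy as np

    def length_ratios(L1, L2, L3, L4):
        """The three anatomical ratios used as geometric constraints."""
        return np.array([L3 / L4, L2 / L3, L2 / L1])  # r1, r2, r3

    def fit_ratio_model(training_ratios):
        """Mean and standard deviation of each ratio over high-quality samples.
        training_ratios: array of shape (num_samples, 3)."""
        ratios = np.asarray(training_ratios)
        return ratios.mean(axis=0), ratios.std(axis=0)

    def is_valid(sample_ratios, mu, sigma, k=3.0):
        """A sample is valid if every ratio lies inside [mu - k*sigma, mu + k*sigma]."""
        deviation = np.abs(np.asarray(sample_ratios) - mu)
        return bool(np.all(deviation <= k * sigma))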

The results, shown below in the form of a detection error tradeoff (DET) graph, demonstrate that the performance of the system improves as more invalid samples are discarded (smaller values of k). On this database, the EER improved from 3% to 0.15% for k=3, with approximately 5% of the samples considered invalid. In other words, by rejecting 5% of the samples the error is reduced by a factor of 20.

All the details of this project are provided in my BSc Thesis (in Spanish, download it from here).