Hand biometrics

Geometric features extracted from a hand image.

Personal authentication in our highly interconnected information society is becoming a crucial issue. Biometrics involves identifying an individual based on physiological (e.g., fingerprint, iris, face, hand, voice) or behavioral (e.g., signature, gait) characteristics. Biometric identification provides more security and convenience than traditional authentication methods, which rely on what you know (such as a password) or what you have (such as an ID card). While fingerprint and iris recognition are the most popular biometrics due to their unique identification capabilities, these traits may not be acceptable in many access control applications where user acceptability is a significant factor. In such situations, hand identification systems, characterized by their non-intrusive data collection, play an important role. Traditional hand recognition systems can be split into three modalities: geometry, texture and hybrid. In the present work, we focus on hand geometry biometrics.

The architecture of our automatic hand verification system is depicted below:

Block diagram of a hand authentication system

The first step is an image preprocessing module in which the input image is binarized and the hand silhouette is extracted. The radial distance from a fixed reference point (the wrist) to every point on the silhouette is computed to locate the finger tips and valleys. Then, distance-based measures defined over these reference points are calculated to form the feature vector representation of the hand. Given several hand images of test and enrolled subjects, matching is based on a distance measure between their feature vectors.
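The tip/valley detection step can be sketched as follows. This is a minimal illustration, not the paper's implementation: finger tips appear as local maxima of the radial profile and valleys as local minima, so a simple sign-change test on the first difference suffices on a clean contour (the toy profile below is invented for illustration):

```python
import numpy as np

def radial_profile(silhouette, reference):
    """Distance from a fixed reference point (e.g. the wrist)
    to each point of the hand contour, in contour order."""
    return np.linalg.norm(silhouette - reference, axis=1)

def tips_and_valleys(profile):
    """Local maxima of the radial profile are candidate finger tips;
    local minima between them are candidate finger valleys."""
    d = np.diff(np.sign(np.diff(profile)))
    tips = np.where(d < 0)[0] + 1      # peaks
    valleys = np.where(d > 0)[0] + 1   # troughs
    return tips, valleys

# toy, noiseless radial profile standing in for a real contour
profile = np.array([1.0, 2.0, 3.0, 2.0, 1.0, 2.0, 4.0, 2.0, 1.0])
tips, valleys = tips_and_valleys(profile)
print(tips)     # local maxima at indices 2 and 6
print(valleys)  # local minimum at index 4
```

On real silhouettes the profile is noisy, so a peak detector with minimum-prominence and minimum-distance constraints (e.g. `scipy.signal.find_peaks`) would be a more robust choice than this bare sign-change test.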

In this paper (download it from here) we describe the image processing pipeline in detail and study which geometric features lead to better discriminability between users. The set of studied features includes 17 measures from different zones of the hand: 5 finger lengths, 9 finger widths and 3 palm widths (see featured image on top of this page).

Experiments were carried out using a database containing 500 images of the right hand of 50 subjects (10 samples per subject). The images were acquired with a desktop scanner under controlled conditions: the position of the hand on the scanner surface was relatively invariant, the scanner surface was cleaned between consecutive samples, illumination was uniform, etc. Hence, high-quality images were obtained (see featured image on top of this page).

Features studied

The results (shown in the Table above) revealed that the features based on the thumb ($L5$) and the palm widths ($P1$ to $P3$) are the least discriminative. Excluding the length of the thumb ($L5$) from the feature vector reduced the error from 9.6% to 1.7% EER (subset 1 vs. 2). This may be due to the higher freedom of movement of this finger, which hinders an accurate estimation of its valley points. For the four remaining fingers, we concluded that their lengths ($L1$ - $L4$) are the most discriminative features: removing any of these lengths deteriorates the performance by at least a factor of 2 (subset 2 vs. 3-5). The finger widths ($W1$ - $W4$) are not particularly discriminative, as including them in the feature vector only improves the error from 1.68% to 1.24% EER (subset 2 vs. 7). The palm features ($P1$ to $P3$) degrade the predictive power by approximately a factor of 3 (subset 2 vs. 6; subset 7 vs. 8), perhaps due to their relation with the thumb valley points. Finally, the best feature combination (subset 7) improves the performance of a reference system by more than a factor of 2 (1.24% vs. 2.97% EER).

In a follow-up work (download it from here) we tested the system on a larger database containing 12,800 hand images from 400 subjects (32 samples per subject). In this case, the quality of the images was more variable than in the previous dataset. Specifically, some low-quality images associated with a few users degraded the system performance:

Features studied

Our hypothesis is that if these low-quality images could be automatically detected and excluded from the analysis, the performance of the system should increase notably. To test this hypothesis, we developed a “validity detection” module that checks whether a given feature vector satisfies anatomically valid geometric proportions:

Block diagram of a hand authentication system

The geometric constraints that we define are three ratios between pairs of finger lengths:

  • $r_1 = L3/L4$
  • $r_2 = L2/L3$
  • $r_3 = L2/L1$
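The three ratios are straightforward to compute from the finger lengths. The numeric values below are hypothetical lengths in arbitrary units, chosen only to illustrate the notation; the papers do not state which of $L1$ - $L4$ corresponds to which finger:

```python
# Hypothetical finger lengths (arbitrary units); the labels L1..L4
# follow the papers' notation, the values are invented for this example.
L1, L2, L3, L4 = 7.2, 8.1, 7.6, 5.9

r1 = L3 / L4
r2 = L2 / L3
r3 = L2 / L1
print(round(r1, 3), round(r2, 3), round(r3, 3))
```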

Using a large population of high-quality samples, we modelled each ratio with a Gaussian distribution of mean $\mu$ and standard deviation $\sigma$.

Characterization of geometric constraints

A new (unseen) sample is considered valid if each of the three ratios falls inside the range $[\mu − k\sigma, \mu + k\sigma]$, where $k$ is a tunable parameter that controls the width of the acceptance band.
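The validity rule can be sketched as below. The population of ratios is synthetic (drawn from Gaussians with invented means and spreads), and the function names are ours, not the papers':

```python
import numpy as np

def fit_ratio_model(ratios):
    """Fit a Gaussian (mean, std) per ratio over a population of
    high-quality samples; `ratios` has shape (n_samples, 3)."""
    ratios = np.asarray(ratios, dtype=float)
    return ratios.mean(axis=0), ratios.std(axis=0)

def is_valid(sample, mu, sigma, k=3.0):
    """Valid iff every ratio lies in [mu - k*sigma, mu + k*sigma]."""
    sample = np.asarray(sample, dtype=float)
    return bool(np.all(np.abs(sample - mu) <= k * sigma))

# synthetic population of (r1, r2, r3) with invented means and spread
rng = np.random.default_rng(0)
population = rng.normal([1.29, 1.07, 1.13], 0.02, size=(500, 3))
mu, sigma = fit_ratio_model(population)

print(is_valid([1.29, 1.07, 1.13], mu, sigma, k=3))  # near the means -> True
print(is_valid([1.80, 1.07, 1.13], mu, sigma, k=3))  # r1 far outside -> False
```

Lowering `k` narrows the acceptance band, rejecting more samples as invalid, which is the trade-off explored in the DET analysis below.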

The results, shown below in the form of a detection error tradeoff (DET) graph, demonstrate that the performance of the system improves as more invalid samples are discarded (smaller values of $k$):

DET graph for three values of k

Using a value of $k = 3$, approximately 5% of the samples are considered invalid and the error of the system is reduced from 3% to 0.15% EER. Therefore, we conclude that by rejecting just 5% of the samples the error can be reduced by a factor of 20.

Javier Burgués
Technical Lead

Developing the new generation of chemical sensors for automotive, industrial and home automation applications.