EXTENDED FEATURE SET AND TOUCHLESS IMAGING FOR FINGERPRINT MATCHING

By

Yi Chen

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Computer Science

2009

ABSTRACT

EXTENDED FEATURE SET AND TOUCHLESS IMAGING FOR FINGERPRINT MATCHING

By Yi Chen

In recent decades, fingerprint matching has undergone a tremendous transition from a tedious manual procedure for criminal investigation to the most widely deployed biometric technology for government and civilian applications. In this thesis, we address two critical issues related to this transition: i) automatic systems have not fully utilized the knowledge gained by forensic experts in manual fingerprint matching (e.g., extended features, matching with distortion); ii) interoperability between advanced sensing technology and legacy fingerprint databases has not been fully achieved in automatic systems.

To address the first issue, we investigate the use of extended features, often utilized in manual fingerprint matching, in automatic systems. These extended features include ridge skeletons, pores, dots and incipients. We propose methods to automatically extract and compare these extended features in a hierarchical fashion. Our experiments show performance improvement from each of the proposed extended features in live-scan matching on the MSU database (full vs. full and partial vs. full). We also show that ridge skeletons are more effective than pores, dots and incipients in improving latent matching on the NIST-27 database (latent vs. roll). In addition, we conduct statistical analysis on the individuality of fingerprints using extended features, demonstrating their discriminative nature both theoretically and empirically.

To address the second issue, we investigate the interoperability between a new fingerprint sensing technology based on touchless imaging and the legacy rolled fingerprint data. We propose a non-parametric virtual rolling method to unwrap the 3D touchless fingerprints into 2D rolled-equivalent fingerprints. We also develop a quality measure and an enhancement algorithm for the unwrapped touchless fingerprints. Our experiments on the TBS database demonstrate the effectiveness of the proposed methods in achieving compatibility in matching touchless fingerprints with legacy rolled fingerprints.
To my family

ACKNOWLEDGMENTS

I owe deep gratitude to those who have made everything possible for me to complete this thesis.

First and foremost, I would like to thank my advisor Dr. Anil K. Jain for his continuous encouragement and support, especially through all the obstacles and setbacks. His dedication, profound insights and attention to details have been true inspirations to my research and my personal life. I appreciate everything he has done that has made me a better person and has guided me to a better future.

Along with Dr. Jain, Dr. George Stockman, Dr. Arun Ross, Dr. Sarat Dass, Dr. Bill Punch and Dr. Richard Enbody have provided me tremendous help and encouragement as my committee members. Their scholarship and personality have both won my deep respect. Special thanks also go to all the faculty and staff members in the Department of Computer Science and Engineering for their support and assistance.

I would like to thank all the institutions and companies that provided funding for this research, including the National Institute of Justice (2007-RG-CX-K183, 2007-DN-BX-0005), Army Research Office (W911NF-06-1-0418), Center for Identification Technology Research at West Virginia University, TBS and Sagem Morpho. I would also like to thank Jean-Christophe Fondeur, Vincent Bouatou, Dr. Christophe Champod, Alexandre Anthonioz, Ed German, Austin Hicklen, John Woodward, and Vladimir Dvornychenko for their help and support to my research on extended feature set; Geppy Parziale and Dr. Bill Long for their support of my research on touchless imaging; Dr. Brian Martin, Jianguo Wang, Eva Diaz Santana and the Baker family for their help during my internships. I also owe thanks to my coworkers, in particular, Dr. Salil Prabhakar and Alexandar Ivanisov at DigitalPersona Inc. for their understanding and encouragement.

I am very lucky to have worked with my labmates at both EI and PRIP labs. In particular, I would like to thank Xiao Huang and David Cherba for their encouragement and friendship through my hardest times; Hong Chen, Xiaoguang Lu, Yongfang Zhu, Martin Law, Unsang Park, Umut Uludag, Pavan Kumar Mallapragada, Meltem Demirkus, Abhishek Nagar, Leonardo Max Batista Claudino, Jung-Eun Lee, Miguel Figueroa-Villanueva, Yilu Zhang, Shuqing Zeng, and Nan Zhang for sharing research ideas and their company on countless nights in the lab; Karthik Nandakumar for his help and encouragement at each and every step of my research; Dirk Colbry for his enthusiasm and support of my research; Jianjiang Feng for providing valuable suggestions and comments on my thesis. I also enjoyed meeting and working with researchers from around the world, including Julian Fierrez-Aguilar, Norman Poh, Li Liu, Yunhong Wang, Hugo Gamboa, Marilena Nardelli and Miguel Gallego-Ruiz.

Finally, I am grateful for all the friends who have brought me warmth and sunshine in the cold Michigan winters. Special thanks to the Lansing Chinese Christian Ministry and my friendship family, Deborah and Peter, with whom I have found my second home. Last but not the least, a very big thank you to my family for their endless love and support; to my wonderful husband Steve for making this journey so special and memorable.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
1 Introduction
  1.1 Fingerprint Formation
  1.2 Fingerprint Acquisition
  1.3 Sensor Interoperability
  1.4 Fingerprint Representation
  1.5 Fingerprint Matching
    1.5.1 Manual Fingerprint Matching
    1.5.2 Automatic Fingerprint Matching
    1.5.3 Semi-automatic Fingerprint Matching
    1.5.4 State-of-the-Art Performance
  1.6 Thesis Contributions
2 Fingerprint Features
  2.1 Introduction
  2.2 Feature Classification
    2.2.1 Level 1
    2.2.2 Level 2
    2.2.3 Level 3
  2.3 Feature Correlation
  2.4 Feature Reproducibility
  2.5 Standards
    2.5.1 Minutiae standards
    2.5.2 Standards for Extended Features
  2.6 Summary
3 Extended Feature Extraction
  3.1 Introduction
  3.2 Previous Work
  3.3 Proposed Approach
    3.3.1 Segmentation
    3.3.2 Quality Assessment
    3.3.3 Ridge Extraction
    3.3.4 Pore Extraction
    3.3.5 Dot and Incipient Extraction
  3.4 Experiments
  3.5 Summary
4 Extended Feature Matching
  4.1 Introduction
  4.2 Previous Work
  4.3 Proposed Approach
    4.3.1 Ridge Matching
    4.3.2 Pore Matching
    4.3.3 Dot and Incipient Matching
    4.3.4 Fusion
    4.3.5 Score-Level Fusion
    4.3.6 Rank-Level Fusion
  4.4 Experiments
  4.5 Summary
5 Individuality of Fingerprints
  5.1 Introduction
  5.2 Previous Work
    5.2.1 Individuality based on Formation Models
    5.2.2 Individuality based on Feature Models
  5.3 Proposed Model
    5.3.1 Modeling Minutiae
    5.3.2 Modeling Ridges
    5.3.3 Modeling Pores
  5.4 Model Evaluation and Validation
  5.5 Summary
6 Touchless Sensing and Sensor Interoperability
  6.1 Introduction
  6.2 Touchless Sensing
    6.2.1 3D Touchless Sensing
  6.3 Interoperability
    6.3.1 Virtual Rolling
    6.3.2 Image Enhancement
    6.3.3 Image Quality
  6.4 Interoperability Experiments
  6.5 Summary
7 Summary and Future Work
APPENDICES
A Numerical Implementation of Thin-Plate Spline with Tension
B Estimating Mixture Density using EM algorithm
  A Incorporating Minutiae
  B Incorporating Ridge Period
  C Incorporating Ridge Curvature
BIBLIOGRAPHY

LIST OF TABLES

1.1 Comparison of various fingerprint sensing technologies.
1.2 Effects of image size in matching right and left index fingers from the NIST Special Database 29 as a function of cropped image size [127].
2.1 Fingerprint features and required image resolution.
2.2 Extended features proposed in the "Data Format for the Interchange of Extended Fingerprint and Palmprint Features" [3].
3.1 Representation of the proposed extended features.
3.2 Description of databases used in experiments.
3.3 Average error rates of the proposed pore and dot/incipient extraction algorithm at 500 ppi, compared with those of the state-of-the-art minutiae extraction algorithm [12] on 258 rolled prints in the NIST-27 database.
3.4 Average error rates of the proposed pore and dot/incipient extraction algorithm at 1000 ppi, compared with those of the state-of-the-art minutiae extraction algorithm [12] on 100 flats in the MSU database.
5.1 Theoretical probabilities of matching a minutiae pair between impostor fingerprints belonging to class A=arch, TA=tented arch, L=left loop, R=right loop and W=whorl based on the proposed model.
5.2 Empirical probabilities of matching a minutiae pair between impostor fingerprints belonging to class A=arch, TA=tented arch, L=left loop, R=right loop and W=whorl based on NIST-4.
5.3 Theoretical probabilities of matching a minutiae pair and their local ridge features between impostor fingerprints belonging to class A=arch, TA=tented arch, L=left loop, R=right loop and W=whorl based on the proposed model.
5.4 Empirical probabilities of matching a minutiae pair and their local ridge features between impostor fingerprints belonging to class A=arch, TA=tented arch, L=left loop, R=right loop and W=whorl based on NIST-4.
5.5 Theoretical probabilities of matching a minutiae pair, their local ridge and pore features between impostor fingerprints belonging to class A=arch, TA=tented arch, L=left loop, R=right loop and W=whorl based on the proposed model.
5.6 An example of comparing "12-point rule" PRCs from Pankanti et al.'s uniform model, Zhu et al.'s mixture model and the proposed model based on minutiae with the empirical estimates using NIST-4. Empirical values are calculated using Equation 5.5 by substituting λ with the mean number of matches (weighted by the class distribution) found empirically.
6.1 Comparisons between touchless and touch-based sensing technologies.
LIST OF FIGURES

1.1 Ridge units, the building blocks of friction ridges: (a) a sketch of ridge units (reproduced from [36]); (b) an area of a fingerprint showing friction ridges fused by ridge units, each containing a pore.
1.2 The most common ridge characteristics.
1.3 A finger seen during the maceration process shows (A) the friction ridges of a finger; (B) rows of vessels underneath the friction ridges, revealing perfect correspondence with the fingerprint (reproduced from [116]).
1.4 Example of (a) rolled, (b) plain, and (c) latent fingerprint.
1.5 An example of the latent lifting process: (a) dust with powder; (b) photograph with a ruler; (c) apply transparent tape and then lift (reproduced from [28]).
1.6 A tenprint card containing three types of fingerprints, namely (1-10) rolls of each of the ten fingers, (11,14) slaps of the left and right hand (index, middle, ring and little fingers) and (12,13) plains of left and right thumbs (reproduced from [16]).
1.7 Acquiring a fingerprint using (a) the traditional ink-on-paper method [6], and (b) a live-scan device that captures fingerprints electronically [26].
1.8 Fingerprint image resolution. Three impressions of the same fingerprint are captured at (a) 380 ppi (Identix 200DFR), (b) 500 ppi (CrossMatch ID1000) and (c) 1000 ppi (CrossMatch ID1000). High image resolution increases the visibility and extractability of fingerprint details.
1.9 Two touchless fingerprint images from a video sequence captured using a super resolution sensor (≈ 7000 ppi). Perspiration activities (opening and closing) of pores are visible and can be used for spoof detection (courtesy: TBS Inc. and Geppy Parziale (geppy.parziale@invasivecode.com)).
1.10 Fingerprint images of the same finger acquired using different commercial sensors, including optical sensors (a) Biometrika FX2000; (b) Digital Persona UareU2002; (c) Identix DFR200; (d) Ethentica TactilSense T-FPM, and solid-state sensors (e) ST-Microelectronics TouchChip; (f) Veridicom FPS110; (g) Atmel FingerChip; (h) Authentec AES4000 (reproduced from [95]).
1.11 Fingerprint features used in manual fingerprint matching, categorized as Level 1 (upper row), Level 2 (middle row) and Level 3 (lower row) features [18, 100].
1.12 Minutiae features currently defined in the ANSI-NIST-CSL 1-2000 standard [33]: (a) ridge bifurcation; (b) ridge ending. They are a subset of the Level 2 features used in manual fingerprint matching (see Figure 1.11).
1.13 Manual fingerprint matching using three levels of features, including Level 2 minutiae (indicated by numbers) and Level 3 ridge shapes (illustrated in the magnified portions) (reproduced from [2]).
1.14 Automatic fingerprint matching using minutiae points: (a) minutiae set in a query; (b) minutiae set in a template; (c) alignment based on the pivot minutiae marked with green circles in (a) and (b); (d) the matching results where corresponding minutiae are connected by green lines (adopted from [71]).
1.15 Semi-automatic procedure for latent search.
1.16 Matching error rates. The curves show False Accept Rate (FAR) and False Reject Rate (FRR) for a given threshold t over the genuine and impostor score distributions.
FAR is the percentage of non-match (impostor) pairs whose matching scores are greater than or equal to t, and FRR is the percentage of match (genuine) pairs whose matching scores are less than t [108].
2.1 Henry's fingerprint pattern classes. Core and delta points are denoted by red and blue dots, respectively.
2.2 Percentage of occurrences of major fingerprint classes in 222 million rolled fingerprints [129].
2.3 An example of Level 1 features, including the pattern class, delta and core locations and orientation field overlaid on a fingerprint image.
2.4 Galton's ridge characteristics (minutiae): (a) ridge ending; (b) ridge bifurcation (adopted from [18]).
2.5 Composite ridge characteristics: (a) fork; (b) spur; (c) crossover; (d) bridge; (e) trifurcation (adopted from [18]).
2.6 Percentage of occurrences of various minutiae types in 977 rolled fingerprints [46].
2.7 An example of Level 2 features, including ridge endings (red), ridge bifurcations (blue) and dots (green).
2.8 Variations of ridge width and shape, demonstrated in detail at ridge endings (red), bifurcations (green) and dots (blue).
2.9 Examples of Level 3 features, including (a) pores (red), incipients (green) and scars (yellow) and (b) creases (blue).
2.10 Ridge path deviations do not form in isolation. The location and certainty of a ridge characteristic can be found by examining the surrounding ridge paths even if the feature may taper due to lack of pressure (reproduced from [36]).
2.11 Effects of image quality (e.g., distortion, non-ideal skin conditions) on the presence of minutiae captured in two impressions of the same fingerprint. Minutiae are manually marked (red) by forensic experts.
2.12 Effects of increasing pressure in a sequence of impressions on the appearances of incipients. Light pressure makes it difficult to distinguish incipients and dots (adopted from [100]).
2.13 Effects of varying amount of pressure (50g, 100g, 200g, 300g, and 400g) on the number of manually detected pores (adopted from [104]).
3.1 Illustration of the proposed extended features.
3.2 A high-level algorithm of the segmentation procedure.
3.3 Segmentation of a fingerprint image using block-wise average gradient magnitude: (a) original image; (b) block-wise average gradient magnitude; (c) foreground detection after thresholding and post-processing; the red contour is the convex hull extracted based on the foreground blocks; (d) segmented image.
3.4 A high-level algorithm of the quality estimation procedure.
3.5 Quality estimation of a fingerprint image using the proposed coherence measure: (a) segmented image; (b) block-wise local quality; (c) cascade local quality; (d) smoothed local quality using circular average filtering. White pixels mean good quality and black pixels mean poor quality.
3.6 Gabor filters: (a) 3D graphical representation; (b) a bank of filters with eight different orientations.
3.7 Peak-detection for ridge extraction. In (a) pixel intensities on the segment centered at (x3, y3) are projected onto the normal of the local ridge orientation to obtain the gray-level profile in (b).
3.8 Ridge enhancement and extraction: (a) a cropped fingerprint image; (b) block-wise local orientation; (c) enhanced ridges using Gabor filters; (d) extracted ridge skeletons.
3.9 Extraction of ridge skeletons and neighboring ridge information based on the inter-ridge distances.
3.10 A high-level algorithm of the ridge skeleton extraction procedure.
3.11 Appearance of pores. A pore can be open or closed due to its perspiration activity.
3.12 Extraction of pores: (a) enhanced image using the sigmoid function; (b) band-pass filtering using the Mexican-hat wavelet transform; (c) masking out filtering responses in the valleys using enhanced ridges (see Figure 3.8 (c)); (d) thresholding and blob detection; (e) extracting ridge ownership (overlayed with ridge skeleton) and removing spurious pores (green blobs); (f) extracted pores (red circles).
3.13 A high-level algorithm of the pore extraction procedure.
3.14 Extraction of dots and incipients: (a) a cropped fingerprint image; (b) enhanced image using the sigmoid function; (c) masking out ridges using the enhanced ridges; (d) estimated phase symmetry of features in the valleys; (e) extracting connected components with responses surpassing the threshold Td; (f) extracted dots/incipients.
3.15 A high-level algorithm of the dot and incipient extraction procedure.
3.16 Sample images from (a) NIST-27, (b) MSU-full and (c) MSU-partial databases.
3.17 Examples of manually marked minutiae (blue), pores (red) and dots/incipients (green) in (a) a latent and (b) the corresponding rolled print from NIST-27.
3.18 Histograms of the number of (a) pores and (b) dots/incipients automatically extracted in MSU-full, MSU-partial and from rolls in NIST-27 as well as those manually extracted from latents in NIST-27.
3.19 Evaluating (a) pore and (b) dot/incipient extraction accuracy by comparing automatically extracted features (red) with manually extracted features (blue). The missing (spurious) error rate of pore and dot/incipient extraction in the fingerprint area shown here is 0.16 (0.34) and 0 (0.17), respectively.
3.20 Automatically extracted feature representation on a roll of NIST-27: (a) original rolled image; (b) minutiae ([12]); (c) extended features (including ridge skeletons (blue), pores (red) and dots/incipients (green)).
3.21 Automatically extracted feature representation on a partial print of MSU-partial: (a) original fingerprint image; (b) minutiae ([12]); (c) extended features (including ridge skeletons (blue), pores (red) and dots/incipients (green)).
4.1 Difficulty in aligning ridge skeletons in two fingerprints of the same finger due to non-linear distortion.
4.2 Illustration of the proposed ridge matching. The procedure includes initial alignment based on pivot minutiae, refined alignment after ridge propagation and final match score calculation. Notice how ridge propagation enables the correction of distortion at the finger tip between the two fingerprints.
4.3 Ridge propagation: (a) a partial fingerprint and its template are pre-aligned using pivot minutiae (arrows); (b) matching the neighboring ridges associated with the pivot minutiae; (c)-(f) iterative ridge propagation. Note how distortion is corrected after each propagation step.
4.4 Ridge matching based on (a) Feng's method [58] and (b) the proposed method. Feng's method only compares the length of the corresponding segments between two ridge skeletons, while the proposed method averages the distances between all corresponding ridge point pairs.
4.5 The overshooting problem of the TPS can be reduced by introducing the tension effects. The tension parameter in each plot is (a) τ = 0.1, (b) τ = 50 and (c) τ = 1000, respectively (see Equation 4.9).
4.6 The behavior of the interpolated surface of displacement between two fingerprints using (a) a TPS model and (b) a TPS with tension model. The tension effect increases the elasticity of the TPS model. The points marked with red crosses are control points (points with known correspondences) and the blue bar at each control point represents the amount of displacement between the template and the query. The higher the bar, the larger the displacement.
4.7 Aligning a partial fingerprint to its template based on (a) minutiae only; (b) minutiae and ridges without distortion correction; (c) minutiae and ridges with distortion correction using TPS; and (d) minutiae and ridges with distortion correction using TPS with tension.
4.8 The proposed integration framework. Minutiae and extended feature (ridge, pore and dot/incipient) matchings are integrated in a hierarchical fashion.
4.9 Decision-tree fusion using a two-level tree structure. M is the number of matchers.
4.10 Verification ROCs on the MSU-full database (flat to flat matching at 1000 ppi) using (a) the sum-rule fusion; (b) three fusion rules on four different scores: Sm (minutiae matching), Sr (ridge matching), Sp (pore matching) and Sd (dot/incipient matching). Results are combined using ten-fold cross validation.
4.11 Verification ROCs on the MSU-partial database (partial to flat matching at 1000 ppi) using (a) the sum-rule fusion; (b) three fusion rules on four different scores: Sm (minutiae matching), Sr (ridge matching), Sp (pore matching) and Sd (dot/incipient matching). Results are combined using ten-fold cross validation.
4.12 An example of matching a partial print (blue) to a flat (black) based on (a) minutiae; (b) ridges; (c) pores. The normalized match scores based on each of these features are Sm = 0.015, Sr = 0.79, Sp = 0.56, respectively, and the final fused score is 0.334.
4.13 CMC curve for identification using highest rank fusion on minutiae and extended features in NIST-27 (latent to rolled prints matching at 500 ppi).
4.14 An example of improving latent matching by using extended features: (a) a latent image and (b) the corresponding rolled print with extracted features (minutiae (red), ridges (blue), dots (green), pores (magenta)); (c) minutiae matching (Sm = 0.76, Rm = 4); (d) ridge matching (Sr = 0.79, Rr = 1) and dot/incipient matching (Sd = 0.67, Rd = 2). Using extended features helps retrieve the correct candidate of the latent at rank 1. Features in latents were manually extracted by a forensic expert.
4.15 CMC curves for identification using minutiae and extended features across different image quality by dividing latents in NIST-27 into three quality bins: (a) good quality, (b) bad quality, and (c) ugly quality, manually assigned by forensic experts.
4.16 CMC curves for identification using minutiae and extended features across different sizes of fingerprint area by dividing latents in NIST-27 into three bins: (a) large size, (b) medium size, and (c) small size, based on automatically segmented foreground area.
5.1 Fingerprint features used in our individuality model: (a) two different pattern types; (b) minutiae, ridges and pores.
5.2 Local ridge structures associated with each minutiae can be converted to a canonical form by centering at the minutiae and rotating by the minutiae orientation. As a result, matching two minutiae and their associated ridge structures is equivalent to matching the position and orientation of minutiae and the period and curvature of the local ridge structure.
5.3 Sixteen ridge shapes used in Fang's individuality model: (a) ridge models; (b) corresponding ridge shapes. These ridge shapes are defined based on the relative positions between the minutiae (circles) and two ridge anchor points (squares) [56].
5.4 Ridge features: (a) ridge period (average of inter-ridge distances r) and (b) ridge curvature (average of inverse radius c) in the neighborhood (30 x 30 pixels) of a minutiae.
5.5 (a) Illustration of ridge curvature estimation. (b) Estimated ridge curvatures (blue curves with estimated value) at each minutiae (red arrows) in a fingerprint image. Higher curvature values represent larger changes in ridge flow orientations.
5.6 Aligning rolls of each class in NIST-4 at red circles with orientation marked by a red line based on the locations of core (blue circles) and delta (blue triangles) points.
5.7 Empirical distribution of minutiae positions (empirically extracted [12]) in each of the five fingerprint classes: (a) arch, (b) tented arch, (c) left loop, (d) right loop, (e) whorl, obtained from 2,000 pairs of rolls in NIST-4. The darker the area, the higher the density of minutiae in that region. Note the images are manually classified into 5 classes by forensic experts and aligned by the author.
5.8 Theoretical distribution of minutiae positions (based on the mixture model) in each of the five fingerprint classes: (a) arch; (b) tented arch; (c) left loop; (d) right loop; (e) whorl.
5.9 Empirical and theoretical distributions of (a) ridge curvature and (b) ridge period associated with minutiae in the arch fingerprint class. The empirical distribution strictly follows the theoretical one in both cases.
5.10 Empirical distribution of ridge curvature associated with minutiae in each of the five fingerprint classes: (a) arch, (b) tented arch, (c) left loop, (d) right loop, (e) whorl, obtained from 2,000 pairs of rolls in NIST-4. The darker the area, the higher the ridge curvature in that region.
5.11 Empirical distribution of ridge period in each of the five fingerprint classes: (a) arch, (b) tented arch, (c) left loop, (d) right loop, (e) whorl, obtained from 2,000 pairs of rolls in NIST-4. The darker the area, the higher the ridge period in that region.
5.12 Theoretical distribution of ridge curvature associated with minutiae (based on the mixture model) in each of the five fingerprint classes: (a) arch; (b) tented arch; (c) left loop; (d) right loop; (e) whorl.
5.13 Theoretical distribution of ridge period associated with minutiae (theoretically generated using the mixture model) in each of the five fingerprint classes: (a) arch; (b) tented arch; (c) left loop; (d) right loop; (e) whorl.
5.14 Distribution of intra-ridge pore spacings obtained from 720 fingerprints in NIST-30 (1000 ppi) using the proposed pore extraction algorithm (see Section 3.3.4).
6.1 The same fingerprint acquired using (a) a touch-based optical device (Crossmatch) and (b) a RTS device (TBS) (reproduced from [105]). Here the finger skin is very dry, so the touch-based sensing results in poor image representation.
6.2 The Surround Imager, a multi-camera system for 3D touchless fingerprint acquisition [105].
6.3 3D touchless fingerprint reconstruction: (a) five stereoscopic views of the same finger captured by the Surround Imager; (b) the reconstructed 3D touchless fingerprint (courtesy: TBS Inc. [17]).
6.4 A detailed view of the ridge-valley structure of a 3D touchless fingerprint.
6.5 Globe unwrapping using (a) a cylindrical model and (b) a conic model (reproduced from [10]).
6.6 Parametric unwrapping using a cylindrical model (top-down view). Point (x, y, z) on the 3D finger is projected to (θ, z) on the 2D plane.
6.7 Fingerprint unwrapping using the cylindrical model. Relative distances between points on the finger surface are not preserved after the unwrapping procedure.
6.8 3D representation of a finger (viewed from the bottom to the tip of a finger). Vertices of the triangular mesh are represented in slices.
6.9 Slice interpolation. We interpolate between given slices with a step-size h to obtain denser representation in the vertical direction (y-axis) for unwrapping.
6.10 Equidistant sampling. We sample points on each slice with equal distance h to obtain finer representation in the horizontal direction for unwrapping. The baseline defines the central column of the 2D image that the fingerprint will be mapped to.
6.11 Unwrapping a 3D fingerprint (ridge enhanced) using (a) the cylindrical-based parametric method and (b) the proposed non-parametric method. Note the stretching effects at the fingertip in (a) are more severe than those in (b).
6.12 3D finger "virtual rolling" from (a) left to right, and (b) right to left.
6.13 Directional "virtual rolling" by setting the baseline at (a) the left nail side, and (b) the right nail side. Each simulates the skin distortion during rolling from left to right, and from right to left, respectively. Two sets of minutiae points and their relative distances are marked to show the effects of distortion caused by directional rolling.
6.14 Directional distortion in legacy rolled fingerprints: (a) rolling from left to right; (b) rolling from right to left. Similar skin distortion patterns are illustrated with two sets of minutiae points.
6.15 Homomorphic filtering procedure.
6.16 Touchless fingerprint enhancement: (a) an unwrapped touchless fingerprint image; (b) homomorphic enhancement of (a); (c) Gabor filtering of (b).
6.17 Cascade quality assessment in local regions (80 x 80 pixels) with (a) excellent quality, (b) good quality, (c) medium quality, and (d) poor quality. The first and second rows show the fingerprint regions and their corresponding quality estimates, respectively. The darker a region of an image in the second row, the lower the image quality in that region.
6.18 Comparing image quality: (a) an unwrapped touchless fingerprint image; (b) local quality map of (a); (c) enhanced image of (a); (d) local quality map of (c); (e) corresponding rolled fingerprint; (f) local quality map of (e). White pixels denote good quality and black pixels mean poor quality. The global quality of (a), (c) and (e) is 0.27, 0.68, and 0.51, respectively.
6.19 Sample images from the second database: (a) an unwrapped touchless fingerprint and (b) the corresponding rolled fingerprint.
6.20 Box plots of estimated global quality indices on touchless fingerprints (with and without enhancement) and their corresponding rolled fingerprints.
6.21 Verification ROCs on matching touchless and mated rolled fingerprints on the TBS database.
6.22 Matching an enhanced touchless fingerprint and its corresponding rolled fingerprint shown in Figure 6.19. The two fingerprints are aligned with the matched minutiae marked with red circles.

Images in this dissertation are presented in color.

Chapter 1

Introduction

Personal characteristics exist in much more minute particulars. Perhaps the most beautiful and characteristic of all superficial marks are the small furrows with the intervening ridges and their pores that are disposed in a singularly complex yet even order on the under surfaces of the hands and the feet.
— Francis Galton, 1888, Nature [61]

Human fingerprints have been discovered on a large number of archaeological artifacts and historical items, including clay tablets for business transactions in ancient Babylon and clay seals in ancient China [95]. However, it was not until 1880 that Dr. Henry Faulds proposed to use printer ink to obtain fingerprints for the purpose of personal identification [57]. In 1899, Sir Edward Henry established the well-known "Henry system" of fingerprint classification [63], which started what is considered "the modern era of fingerprint identification" [36]. In 1924, the United States Congress required the collection of fingerprints for criminal records [63]. Faced with continually growing databases and recognizing the potential for latent print searches, law enforcement agencies began to automate the fingerprint identification procedure by introducing the Automated Fingerprint Identification Systems (AFIS) in 1977 [88]. By 1999, the FBI implemented the Integrated Automated Fingerprint Identification System (IAFIS), allowing submission of electronic records and search capabilities from local AFIS (states and counties) directly to the national database [88]. In addition to law enforcement applications, IAFIS also serves in large-scale government applications, such as civilian background checks. By 2007, IAFIS had more than 74 million computerized fingerprint records [63] and conducted ~100,000 searches per day [50].
Automatic fingerprint matching has been shown to be a reliable, rapid, cost-effective solution to large-scale verification and identification needs. However, it has not completely replaced the need for manual examination by experienced forensic experts. For example, although 67.6% of criminal identification cases submitted to IAFIS were automatically or semi-automatically identified [27], it is only the identifications verified by forensic experts that are acceptable in courts of law. This is because manual fingerprint matching is considered to be more accurate, especially in the case of latent prints, as forensic experts rely on more detailed fingerprint features [34] compared to automatic systems. As a result, "the level of sophistication of automatic systems in matching fingerprints today cannot rival that of a dedicated, well-trained fingerprint expert" [95]. In addition, complete sensor interoperability has not been achieved in automatic systems, which needs immediate attention due to the fast advances in fingerprint sensing technologies.

In this thesis, we study the integration of human expertise and technology in order to achieve better accuracy and interoperability of automatic systems. More specifically, we introduce i) the extended set of fingerprint features (e.g., pores and dots) into automatic matching systems and develop algorithms to extract and match them to improve the matching accuracy and ii) techniques to achieve interoperability between touchless and legacy rolled fingerprints. We believe both these issues will play a major role in advancing the next generation automatic fingerprint matching technology.

1.1 Fingerprint Formation

A fingerprint is the pattern of friction ridges on a human finger, which provides increased friction for gripping. Many scientists believe that the friction ridges are constructed from small "ridge units" (see Figure 1.1 (a)) whose size, shape, density and alignment are "remarkably unique" to individuals [36]. During friction ridge formation, "ridge units" are fused together under random forces into various ridge characteristics, the most representative of which are ridge bifurcations and endings (see Figure 1.2). No two persons, not even twins, have been found to have fingerprints that share exactly the same location, shape and inter-relationship of these ridge characteristics [36]. Nevertheless, friction ridge formation is genetically controlled, as "statistically significant familial correlation values and high heritability estimates (more than 60%)" have been observed for some of the ridge characteristics such as forks, endings and the total number of ridge characteristics [48]. Dankmeijer et al. have demonstrated a higher correlation of the number of ridge characteristics on each finger for monozygotic twins as opposed to dizygotic twins [52]. Gungadin [67] has also shown that the density of friction ridges is significantly greater in females than males.

It is now known that the friction ridge formation process starts deep in the skin, where skin cells are generated and migrate upward to the epidermal surface [46]. A recent study on microcirculation of a human finger even revealed that the regular disposition of capillaries beneath the dermis "sharply followed the fingerprint pattern, reproducing an identical vascular fingerprint with the same individual architecture" (see Figure 1.3) [116].
Such scientific evidence explains why fingerprints are generally considered "permanent" and only a very deep cut would result in changes in a fingerprint.

Figure 1.1: Ridge units, the building blocks of friction ridges: (a) a sketch of ridge units (reproduced from [36]); (b) an area of a fingerprint showing friction ridges fused by ridge units, each containing a pore.

Figure 1.2: The most common ridge characteristics.

Figure 1.3: A finger seen during the maceration process shows (A) the friction ridges of a finger; (B) rows of vessels underneath the friction ridges, revealing perfect correspondence with the fingerprint (reproduced from [116]).

1.2 Fingerprint Acquisition

In order to acquire a fingerprint, various acquisition (sensing) technologies have been developed. In general, based on the acquisition process, a fingerprint can be acquired as either a latent or a tenprint (roll, slap or plain), as shown in Figure 1.4.

Figure 1.5: An example of the latent lifting process: (a) dust with powder; (b) photograph with a ruler; (c) apply transparent tape and then lift (reproduced from [28]).

Figure 1.6: A tenprint card containing three types of fingerprints, namely (1-10) rolls of each of the ten fingers, (11,14) slaps of the left and right hand (index, middle, ring and little fingers) and (12,13) plains of left and right thumbs (reproduced from [16]).

Figure 1.7: Acquiring a fingerprint using (a) the traditional ink-on-paper method [6], and (b) a live-scan device that captures fingerprints electronically [26].

Latent acquisition: A latent, or a mark, is a chance or accidental impression left behind on a surface by an unknown individual. To acquire latents, forensic experts need to carefully lift up or photograph the residue using special treatments and imaging, including optical (e.g., UV imaging), physical (e.g., powdering), or chemical (e.g., ninhydrin) [46] (see Figure 1.5). The choice of treatment is dependent on the surface where the latents are found. Because latents often exhibit a small portion of the finger surface with a small number of features (e.g., 17 minutiae on average [75]), and are of poor quality due to smudginess, distortion, and poor deposition [21], they generally contain "less clarity, less content, and less undistorted information than a fingerprint taken under controlled conditions, and much less detail compared to the actual patterns of ridges and grooves of a finger" [53].

Tenprint acquisition: A tenprint is a set of fingerprint impressions collected and registered with the consent of a known individual. Tenprints are often stored in databases and used as templates in matching with other tenprints or latents during identification. A complete tenprint record contains a set of 14 fingerprint images collected as rolls, plains and slaps from all the ten fingers of a person (see Figure 1.6).

- Roll, or a rolled fingerprint, is captured by rolling a finger (tip to the first joint) from "nail-to-nail" on the sensing surface. It provides the largest fingerprint area and contains, on average, about 80 minutiae, allowing for extremely accurate classification and matching.
However, the rolling procedure is tedious and can result in poor quality images due to improper pressure, distortion, slippage, and smearing.

- Plain (Flat) fingerprint is captured by pressing a finger against the sensing surface. This process is faster compared to rolling, but the acquired fingerprint area is smaller than that captured by rolls, containing ~40 minutiae, on average.

- Slap, or a four-finger simultaneous impression, is captured by pressing four (index, middle, ring, and little) fingers of a hand simultaneously against the sensing surface. A slap image can be segmented into individual plain impressions and can be used for sequence check¹ by matching with corresponding rolls.

A tenprint can be captured off-line using a tenprint card (ink-on-paper) or on-line using an electronic device (live-scan). In the off-line method, the subject's finger is coated with black ink and pressed or rolled against a paper card (see Figure 1.7 (a)). In the on-line method, fingerprints are electronically scanned using optical scanners (see Figure 1.7 (b)). Other live-scan technologies based on solid-state or ultrasound have also been developed [95]; however, these devices are rarely used in law enforcement applications as they are too small to capture slap or rolled fingerprints.

New live-scan technologies based on multispectral imaging (MSI) and touchless imaging have been introduced [9, 17]. Unlike conventional optical live-scan devices, MSI devices scan both the surface and sub-surface of a fingerprint using different wavelengths of light (e.g., 470 nm (blue), 574 nm (green), and 636 nm (red)) and provide a composite fingerprint image [114]. This technology is effective in improving image quality of dry, wet, or worn out fingerprints [9]. However, MSI is not suited for law enforcement applications because the algorithm that fuses multiple spectrum images tends to blur or even remove fingerprint details during the fusion. Touchless imaging, on the other hand, involves imaging the fingerprint surface without a direct contact between the skin and the sensor surface. This technology results in fingerprint acquisition free of smearing, slippage, and skin distortion.

¹ Sequence check is a process that compares rolled prints and corresponding plain impressions to determine if the sequence of fingers is correct during acquisition.

Figure 1.8: Fingerprint image resolution. Three impressions of the same fingerprint are captured at (a) 380 ppi (Identix 200DFR), (b) 500 ppi (CrossMatch ID1000) and (c) 1000 ppi (CrossMatch ID1000). High image resolution increases the visibility and extractability of fingerprint details.

Figure 1.9: Two touchless fingerprint images from a video sequence captured using a super resolution sensor (≈ 7000 ppi). Perspiration activities (opening and closing) of pores are visible and can be used for spoof detection (courtesy: TBS Inc. and Geppy Parziale (geppy.parziale@invasivecode.com)).

Table 1.1: Comparison of various fingerprint sensing technologies.

Technology         Image type           Resolution¹   Law enforcement²
optical            rolls/plains/slaps   500-1000      yes
solid-state        plains               500           no
ultrasound [20]    plains               500           no
multispectral [9]  plains               500           no
3D touchless [17]  rolls³               500-1000      possible

¹ in pixels per inch (ppi)
² applicability in law enforcement
³ rolled-equivalent
As a result, in principle, it effectively preserves the "ground truth" of a fingerprint while producing a more complete representation (from "nail to nail" and "tip to bottom") compared to the conventional live-scan devices (see Chapter 6).

Different sensing technologies have their advantages and disadvantages in terms of cost, time-efficiency, user experience, etc. The most widely used criterion to evaluate different fingerprint sensing technologies is the quality (clarity) of the captured images, often measured by the image resolution, or pixels per inch (ppi). Higher image resolution results in better image quality, with finer details being more visible (see Figure 1.8). Other criteria include whether the technology is capable of capturing all types of tenprints (e.g., rolls, plains, and slaps), is applicable in law enforcement and has a high security level (e.g., robustness to spoof attacks) (see Figure 1.9). Table 1.1 shows a comparison of different fingerprint sensing technologies.

1.3 Sensor Interoperability

Figure 1.10: Fingerprint images of the same finger acquired using different commercial sensors, including optical sensors (a) Biometrika FX2000; (b) Digital Persona UareU2002; (c) Identix DFR200; (d) Ethentica TactilSense T-FPM, and solid-state sensors (e) ST-Microelectronics TouchChip; (f) Veridicom FPS110; (g) Atmel FingerChip; (h) Authentec AES4000 (reproduced from [95]).

In manual fingerprint matching, forensic experts are capable of comparing fingerprint images acquired under different environments (e.g., latents, tenprints) and using various acquisition methods (e.g., optical, solid-state, ink-on-paper). In automatic fingerprint matching, however, AFIS systems are often designed and optimized for fingerprint images originating from the same type of sensors [112]. When fingerprint images acquired using different sensing technologies are compared, automatic systems often have trouble discerning the appropriate fingerprint information, due to different image characteristics caused by sensor noise, light sources, pressure sensitivity, etc., a problem also known as sensor interoperability (see Figure 1.10) [112]. Note that the differences in fingerprint scales captured by different sensors are usually not a problem, as they can be addressed by re-scaling the images based on the image resolution, measured by pixels per inch (ppi) or dots per inch (dpi). Specifically, if the fingerprint is captured by printing, the scale of the fingerprint is determined by the scanning resolution; if the fingerprint is captured by imaging (e.g., touchless fingerprints, latents), the corresponding resolution is determined during the acquisition process.
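To illustrate this rescaling step, the short sketch below resamples an image from its capture resolution to a common working resolution. It is a minimal example written for this chapter rather than code from any of the systems discussed; the helper name, the 500 ppi default and the choice of bicubic interpolation are all assumptions.

```python
# A minimal sketch (not any system's actual code) of resolution
# normalization: images from different sensors are resampled to a common
# ppi so that one pixel covers the same physical skin area everywhere.
from PIL import Image

def normalize_ppi(img: Image.Image, source_ppi: float,
                  target_ppi: float = 500.0) -> Image.Image:
    """Resample a fingerprint image from source_ppi to target_ppi."""
    scale = target_ppi / source_ppi  # e.g., 1000 ppi -> 500 ppi gives 0.5
    new_size = (max(1, round(img.width * scale)),
                max(1, round(img.height * scale)))
    # Bicubic resampling; the interpolation choice is an assumption here.
    return img.resize(new_size, Image.BICUBIC)

# Example: bring a 1000 ppi live-scan image down to the 500 ppi used by
# most legacy databases before feature extraction.
# flat_500 = normalize_ppi(Image.open("flat_1000ppi.png"), 1000.0)
```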
Note that sensor interOperabil- ity cannot be fully solved by adopting a common biometric data exchange format [31, 39]. Such a format will only assist in the exchange of fingerprint templates be- tween different systems or vendors [107], but does not ensure compatibility between fingerprint images captured using different sensing technologies [112]. 13 1.4 Fingerprint Representation Fingerprint images are not directly compared in fingerprint matching. Instead, a set of salient and discriminatory features that represent the underlying characteristics of fingerprints are extracted from the images before matching. The reason for feature extraction can be related to dimensionality reduction [54], where a raw image is considered of high dimensionality, containing redundant information. Features are also typically more robust to noise and distortion than the input image pixels. As a result, the accuracy and efficiency of fingerprint matching greatly depends on the selection and extraction of fingerprint features. LEVEL 1 FEATURES \ \ . ' I ‘ I 'l' . \ ' ' ' r',-‘. ‘ I ' ‘ ‘ ' ’ ‘ I," f "I r"_‘-\ K a \k‘ ', , f: - _ _.\\ / {f/%// r'\\\ . -' '7 1 " . .‘ /~ 31"”, ' r :_ \ \ .‘\ f . PM. . . ‘I {r//// ‘ ’ ,. .. ~ ._ R‘h ' ‘I’y/ \ \ .. ~ /’ . _. , . . an”? r/ \ I j ,l -\ . \ I ’ ,‘w . ‘lvtvir‘ ‘ ‘ ““er , f I r‘l I. ' : k V. I ‘r .' { ,vi»1..-i.‘ . ,1“, H. l 1'; i l l ‘ I 4' . i ' 1 “il l k ' I‘ ‘ i 1 l ‘ A a r l ‘s ] I ‘o . . ‘ u; . ‘ ‘ \ _ \ \._.\\\ . , \ - l \l ‘\\x ‘0»; V. {.9 ‘ first “. 1‘. I.“ / \\ . (, , ’5 < _/ {/- ENDING BIFURCATION FORK SPUR LEVEL 3 FEATURES PORES RIDGE SHAPE INCIPIENTS CREASES WARTS SCARS Figure 1.11: Fingerprint features used in manual fingerprint matching, categorized as Level 1 (upper row), Level 2 (middle row) and Level 3 (lower row) features [18, 100]. In manual feature extraction, forensic experts first analyze the clarity of the print to determine the “extractability” of features. Next, a large variety of features, that 14 have been established based on their evidential value observed during decades of forensic practice, are encoded and recorded. These features are typically categorized into three levels, namely, Level 1, Level 2 and Level 3 features (see Figure 1.11). The higher the feature level, the finer the feature detail. In automatic feature extraction, pre-compiled algorithms are used to determine the strength of ridge-valley signals in order to determine the area in which to extract fingerprint features. Then, a set of features pre-defined by the matching algorithm are extracted and encoded for matching. Unlike the three levels of features defined in manual matching, the features extracted in automatic matching do not regularly have to have a particular physical counterpart (e.g., local orientation map or filter responses) as long as they lead to high matching accuracy. Figure 1.12: Minutiae features currently defined in the ANSI-NIST—CSL 1-2000 stan— dard [33]: (a) ridge bifurcation; (b) ridge ending. They are a subset of the Level 2 features used in manual fingerprint matching (see Figure 1.11). Regardless of the extraction process, either manual or automatic, fingerprint fea- tures are often required to be represented in a standard feature format. Standardiza- tion of fingerprint representation ensures the compatibility of templates and allows effective exchange and communication among different automatic systems or between automatic systems and forensic experts. 
Currently, fingerprint feature standards (e.g., the ANSI-NIST-CSL 1-2000 standard [33]) are mostly based on minutiae, which are highly discriminative features that can be reliably extracted by both forensic experts and automatic systems (see Figure 1.12). In practice, however, minutiae may not always be sufficient for establishing a correct match when the fingerprint size is small or the image quality is poor. In such cases, additional features other than minutiae, also known as extended features, are needed. More details of fingerprint features and feature standards are presented in Chapter 2.

1.5 Fingerprint Matching

Fingerprint matching is the process that compares features extracted from two different fingerprint images and determines if they are from the same finger. The image, or the features derived from it at enrollment time, is stored in the database and called a template. The image, or the features derived from it at authentication time, is called a query. Fingerprint matching serves two objectives in personal authentication: verification and identification. In verification, a query with a claimed identity is submitted and is only compared with the template of the same claimed identity. In identification, no identity information is provided with the query, which is then compared with all the templates in the database. The query is identified only if a template is found to be closely matched with the query.

In practice, fingerprint matching is conducted in three different modes, namely, manual, semi-automatic and automatic. These modes present different trade-offs between efficiency (time and cost) and accuracy. For example, manual matching requires a significant amount of effort on the part of highly trained forensic experts to provide accurate identification results. Automatic matching, on the other hand, is more efficient but may not be as accurate, especially for small and noisy prints (e.g., latents). Semi-automatic matching attempts to take advantage of both by using automatic matching to filter and reduce the number of templates needed for manual matching.

1.5.1 Manual Fingerprint Matching

In manual fingerprint matching, fingerprint features are extracted and compared by forensic experts. Prior to 1973, the forensics community relied on a "point rule" [63] to perform the task. That is, two fingerprints were declared to match as long as a certain minimum number (12) of minutiae points were found in correspondence. This rule was later abandoned, as manual fingerprint matching is a "qualitative and quantitative analysis" [122, 123] that often benefits from utilizing both minutiae and extended features.

Currently, forensic experts follow an ACE-V protocol [122] to perform manual fingerprint matching. There are four consecutive components in the ACE-V protocol: Analysis, Comparison, Evaluation, and Verification [73]. In Analysis, experts categorize and encode all three levels of features that are visible in the print. Comparison is an iterative procedure between the unknown print (query) and a known print (template), focusing successively on Level 1, Level 2, and Level 3 features (when identified in both) and taking into account the tolerances dictated by the quality of the print (see Figure 1.13). Evaluation takes place at the same time as the comparison and addresses two fundamental questions [36]: (i) Is there an agreement between the query and a template fingerprint? (ii) Is there sufficient uniqueness to individualize the identity of the query?
Consequently, the evaluation results can be expressed in one of three terms: elimination, individualization, or a declaration of insufficient uniqueness to individualize or eliminate. Finally, Verification is carried out by peer review to prevent false identification. Note that compared to the "point rule", ACE-V is a more comprehensive procedure for manual fingerprint matching. However, recent court rulings have raised questions on whether the ACE-V methodology has a reliable factual foundation for latent print identification [30]. Court arguments suggested that "ACE-V is a rather vague and general outline, compared to point counting, and there are no objective or universal standards that govern the application of the ACE-V to establish its reliability" [30].

Figure 1.13: Manual fingerprint matching using three levels of features, including Level 2 minutiae (indicated by numbers) and Level 3 ridge shapes (illustrated in the magnified portions) (reproduced from [2]).

1.5.2 Automatic Fingerprint Matching

Although manual fingerprint matching has started shifting away from the "point rule" to the ACE-V protocol, automatic fingerprint matching still relies mainly on minutiae points. Essentially, automatic fingerprint matching consists of three components: i) extraction of minutiae sets from the template and the query; ii) establishing the alignment between the two minutiae sets; iii) match score calculation based on the number of correspondences between the two minutiae sets and other properties (see Figure 1.14).

Figure 1.14: Automatic fingerprint matching using minutiae points: (a) minutiae set in a query; (b) minutiae set in a template; (c) alignment based on the pivot minutiae marked with green circles in (a) and (b); (d) the matching results where corresponding minutiae are connected by green lines (adopted from [71]).

One of the biggest challenges in automatic fingerprint matching is to establish the correct alignment between the template and the query, which subsequently permits a meaningful calculation of the match score. Unlike forensic experts, who utilize ridges to visually determine the alignment of two fingerprints, minutiae-based automatic systems often rely on maximizing the number of minutiae correspondences to establish an alignment. As a result, automatic fingerprint matching has limited performance in partial fingerprint (e.g., latent) matching, where the number of minutiae is often small and the image quality is poor.
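The following sketch illustrates, in greatly simplified form, components ii) and iii) above: a rigid alignment about a pivot minutia followed by a greedy count of one-to-one correspondences. It is a toy illustration under assumed tolerances, not any vendor's algorithm; production matchers typically try many candidate pivot pairs and keep the alignment that maximizes the number of correspondences.

```python
import numpy as np

def align(minutiae, pivot, theta):
    """Rigidly transform an (N, 3) array of (x, y, direction-in-radians)
    minutiae so that `pivot` moves to the origin and the set rotates by
    `theta` (a hypothetical helper, not a vendor routine)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    xy = (minutiae[:, :2] - pivot) @ R.T
    d = (minutiae[:, 2] + theta) % (2 * np.pi)
    return np.column_stack([xy, d])

def match_score(query, template, dist_tol=15.0, ang_tol=np.pi / 12):
    """Greedily count one-to-one minutiae correspondences between two
    aligned sets and normalize by the smaller set size."""
    used, matches = set(), 0
    for q in query:
        for j, t in enumerate(template):
            if j in used:
                continue
            close = np.hypot(*(q[:2] - t[:2])) <= dist_tol
            similar = abs((q[2] - t[2] + np.pi) % (2 * np.pi) - np.pi) <= ang_tol
            if close and similar:
                used.add(j)
                matches += 1
                break
    return matches / min(len(query), len(template))
```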
Despite these limitations, automatic fingerprint matching has been shown to be an accurate, efficient and cost-effective solution for human identification. As an example, the commercial AFIS by Cogent can perform up to 500,000 matches per second with very high accuracy [1, 130].

1.5.3 Semi-automatic Fingerprint Matching

Considering the high accuracy of manual fingerprint matching, especially for latents, and the high throughput of automatic fingerprint matching, semi-automatic fingerprint matching employs both strategies to improve the overall performance. Semi-automatic fingerprint matching is mainly employed in two different identification tasks, namely, tenprint searches and latent searches. All law enforcement AFIS systems are required to have the capability of conducting both types of search.

Tenprint searches are driven by the need for fast and accurate identification on a large-scale database with high-quality enrolled images or templates. Speed is a critical factor because in criminal processing, a fast turnaround is required to ensure rapid retrieval of information, and in civil applications, a search also needs to be concluded in a reasonable time. With improved feature extraction and matching speed and accuracy, AFIS systems have played a major role in tenprint searches. Although human verification is needed in certain applications to compare the candidates produced by AFIS or to validate the hit/non-hit decision, some agencies are pursuing "lights-out" tenprint search, where responses from AFIS are directly sent to the inquiring agency without any human intervention [88].

Figure 1.15: Semi-automatic procedure for latent search. A latent is encoded manually and a tenprint automatically; the automatic matcher (AFIS) produces candidates that can be verified either manually or automatically before a decision is made.

Latent searches require interaction between AFIS systems and human experts. The process is also known as "semi-lights-out" as human intervention can occur at either or both of the feature extraction (frontend processing) and matching verification (backend processing) stages, as shown in Figure 1.15. In latent searches, AFIS systems only provide information, not a determination [88]. That is, the system usually returns the top N (e.g., N = 50) matches, each of which is then verified by a human expert.

1.5.4 State-of-the-Art Performance

Regardless of the matching mode, the matching performance can be evaluated by two types of error rates, namely, False Accept Rate (FAR) and False Reject Rate (FRR). FAR refers to the error of falsely accepting an impostor match, while FRR refers to the error of falsely rejecting a genuine match (see Figure 1.16). A Receiver Operating Characteristic (ROC) curve is often used to plot the FAR and FRR at different score thresholds. Note that FAR is the result of small variability between impressions of different fingers, referred to as inter-class variability, and FRR is caused by large variability in different impressions of the same finger (e.g., poor quality, skin distortion), referred to as intra-class variability. For a given system, reducing one error rate will cause the other to rise. Fingerprint systems deployed in high security applications often operate at a low value of FAR.

When a system operates in the identification mode, the False Positive Identification Rate (FPIR) and False Negative Identification Rate (FNIR) are the counterparts of FAR and FRR at a given rank, i.e., the number of retrieved candidates. Increasing the rank value would improve the chance of correct retrievals, but would also increase the burden of verification afterwards, especially if the verification is done manually. The relationship between rank and the identification rate can also be reported using a Cumulative Match Characteristic (CMC) curve, where the percentage of retrievals of the correct candidate is plotted against the rank number.

Figure 1.16: Matching error rates. The curves show the genuine and impostor score distributions and a decision threshold t. FAR is the percentage of non-match (impostor) pairs whose matching scores are greater than or equal to t, and FRR is the percentage of match (genuine) pairs whose matching scores are less than t [108].
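As a small illustration of the definitions in Figure 1.16, the sketch below computes FAR and FRR at a threshold t from genuine and impostor score samples; sweeping t over the score range traces out the ROC curve. The score distributions used here are synthetic.

```python
import numpy as np

def far_frr(genuine_scores, impostor_scores, t):
    """FAR: fraction of impostor pairs with score >= t (falsely accepted).
    FRR: fraction of genuine pairs with score < t (falsely rejected)."""
    far = np.mean(np.asarray(impostor_scores) >= t)
    frr = np.mean(np.asarray(genuine_scores) < t)
    return far, frr

# Toy score distributions; sweeping t traces the ROC curve.
rng = np.random.default_rng(0)
genuine = rng.normal(0.7, 0.1, 1000)    # higher scores for matching pairs
impostor = rng.normal(0.3, 0.1, 1000)   # lower scores for non-matching pairs
for t in (0.4, 0.5, 0.6):
    far, frr = far_frr(genuine, impostor, t)
    print(f"t={t}: FAR={far:.4f}, FRR={frr:.4f}")
```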
A number of large-scale tests have been conducted by the National Institute of Standards and Technology (NIST) to evaluate state-of-the-art fingerprint matching performance [130, 25, 29, 55]. These tests involve AFIS systems from major vendors in the industry that were evaluated on large-scale fingerprint databases. Results from these tests are summarized as follows.

In tenprint matching, recent evaluations (e.g., the NIST Fingerprint Vendor Technology Evaluation (FpVTE) [130] and the NIST Proprietary Fingerprint Template (PFT) Test [25]) have consistently shown highly accurate performance by top AFIS vendors (i.e., FRR of 0.005 at FAR of 0.0001). Large sets of operational data were used in these tests to ensure statistical significance of the resulting performance. These tests also revealed that tenprint matching accuracy can vary significantly based on the number of fingers available, image quality, image size and image type. For example, accuracy on larger images was significantly higher than that on smaller images (see Table 1.2), and accuracy using multiple fingers was significantly higher than that using a single finger.

Table 1.2: Effects of image size in matching right and left index fingers from the NIST Special Database 29 as a function of cropped image size [127]. Image size is in pixels (width x height); matching accuracy is reported as FRR at FAR of 0.001.

  Image size    Right finger    Left finger
  368 x 368     0.041           0.037
  320 x 320     0.063           0.032
  280 x 280     0.060           0.053
  200 x 200     0.304           0.292
  180 x 180     0.350           0.356

In latent matching, recent evaluation tests (e.g., Phase I and Phase II of the NIST Evaluation of Latent Fingerprint Technologies (ELFT) [55, 75]) have shown a wide range of performance by top AFIS vendors on different databases. For example, in Phase I of the ELFT test, the latent identification rate of the current IAFIS ranges from 54% (NIST-27 database) to 94% (DCJS database) and 97% (Secret Service database). This test confirmed the tremendous importance of latent quality. In Phase II of the ELFT test, a latent database with an overall quality "higher than typical operational latent case work" was used to evaluate the "semi-lights-out" performance of AFIS vendors. In this test, the best AFIS vendor achieved a rank-1 identification rate of 97.2% at 1000 ppi, or FNIR of 0.149 at FPIR of 0.01. Note that compared to tenprint matching, latent matching remains a big challenge with much larger room for performance improvement. The challenges in latent matching include: i) automatic segmentation of the latent foreground from the noisy background often caused by dirt, surface texture, etc.; ii) improving the image quality of latents, especially increasing the ridge-valley contrast in noisy areas; iii) automatic extraction of fingerprint features (both minutiae and extended features) in latents with or without human assistance; iv) utilizing extended features in latent matching, especially when the number of minutiae is small; v) accounting for the distortions present in latents and correcting them during matching.

Forensic experts have suggested that potential improvement for latent matching could come from utilizing extended features in AFIS systems [36].
The main reasons are: i) extended features have shown evidential value in manual latent matching; ii) some extended features (e.g., ridges) can be used to account for distortion when the number of minutiae is too small; iii) using extended features in AFIS systems would increase the interchangeability between manually and automatically encoded features; iv) higher resolution (1000 ppi) tenprints and latents are available in practice. Consequently, ANSI/NIST established a committee (CDEFFS) [45] of forensic experts and AFIS vendors to initiate systematic studies on the potential use of extended features in the Next Generation Identification (NGI) technologies [32]. One of the main efforts of CDEFFS is to look beyond minutiae points and formally define a set of extended features, including pores, ridges, dots and incipients, that can be utilized interchangeably by both automatic systems and forensic experts.

1.6 Thesis Contributions

There are some fundamental differences in how forensic experts and automatic systems conduct fingerprint matching, especially latent matching. Identifying these differences and integrating the benefits of both approaches are essential. First and foremost, there is a strong need for automatic systems to integrate human expertise in extended fingerprint feature extraction and matching. For example, automatic systems mainly focus on quantitative measures of minutiae (ridge ending and bifurcation points), while forensic experts perform qualitative friction ridge analysis, including examination of extended features. Secondly, most automatic systems are developed and operated under the assumption that the fingerprint data (or images) to be compared are obtained using the same type of sensing technology, and, hence, are restricted in their ability to match or compare fingerprints originating from different sensors. As a result, achieving sensor interoperability by obtaining compatibility between new and legacy fingerprint data is essential, especially in law enforcement applications. In this thesis, we have addressed the above challenges and made the following contributions:

- Developed automatic algorithms to extract a set of extended features, including ridge skeletons, pores, dots and incipients. In addition, interrelationships among features (e.g., ridge ownership of pores, neighboring ridge information) are extracted and utilized. The resulting feature representation is consistent with that of manually encoded features. We show that for 1000 ppi live-scan images, the errors of pore and dot/incipient extraction are about twice those of a state-of-the-art minutiae extraction algorithm. However, the large quantity of pores may possess sufficient discriminative power in matching and potentially offset the extraction errors.

- Developed a two-stage alignment algorithm that utilizes both minutiae and
extended features (ridge skeletons). Unlike other ridge-based methods, this alignment algorithm propagates through all the ridge skeletons while accounting for ridge distortion incrementally, based on the ridge correspondences. We show that the alignment process automatically corrects distortion for all the extracted features, which is key to the matching of densely distributed extended features.

- Developed matching algorithms for each of the proposed extended features, including ridge skeletons, pores, dots and incipients. The matching is conducted in a hierarchical fashion to achieve both higher accuracy and efficiency. Matching scores are combined using various score-level and rank-level fusion techniques. Three databases are used to evaluate the matching performance and we show that the proposed matching algorithm improves the accuracy on all three databases. We also demonstrate that low quality or small area fingerprints benefit the most from extended feature matching. Further, ridge skeleton matching is more effective than matching based on pores, dots and incipients.

- Extended an existing fingerprint individuality model to incorporate both minutiae and extended features. The model evaluates the statistical distributions of minutiae position and direction, ridge pattern type, ridge period and curvature, and pore spacing, assuming certain dependencies. We use the model to evaluate the probability of random correspondence (PRC), and demonstrate that the use of extended features can theoretically increase the individualization power of a fingerprint. We also demonstrate that our theoretical evaluation consistently follows the empirical values estimated using a public fingerprint database.

- Developed algorithms to achieve interoperability between 3D touchless and 2D legacy rolled fingerprints. An algorithm based on non-parametric equidistant unwrapping is developed to simulate a virtual rolling process that unfolds 3D touchless fingerprints into 2D rolled-equivalent images. Methods to evaluate and enhance the image quality of touchless fingerprints are also proposed. The resulting touchless fingerprints are demonstrated to be "rolled-equivalent" and quite compatible with legacy rolled fingerprints based on a small database.

Chapter 2

Fingerprint Features

2.1 Introduction

In fingerprint matching, images are not directly compared. Instead, discriminatory features are extracted from fingerprint images and compared in an effective and efficient way. In general, any distinctive and permanent fingerprint ridge formation is a potential fingerprint feature. The most commonly used fingerprint features include the ridge flow patterns (whorl, left loop, right loop, arch, etc.) and minutiae, where ridges branch, start, stop, turn or twist. In addition, numerous sweat pores (one on each ridge unit) and even distal creases and scars are considered salient fingerprint features as they are distinctive and "generally do not change" [36]. In forensics, fingerprint features are typically classified into three levels based on their clarity and usability. Understanding the characteristics of these features is essential for developing effective and efficient algorithms for extended feature matching in automatic systems.

2.2 Feature Classification

2.2.1 Level 1

Level 1 features refer to the general ridge flow and pattern configuration of a fingerprint [36].

Figure 2.1: Henry's fingerprint pattern classes (including, among others, central pocket and accidental). Core and delta points are denoted by red and blue dots, respectively.

Level 1 features describe the ridge flow pattern of a fingerprint. According to the commonly used "Henry classification system," there are eight major pattern classes: whorl, left loop, right loop, twin loop, arch, tented arch, central pocket and accidental (see Figure 2.1) [69, 132]. These pattern classes can be determined by the position(s) of the "core" (the northernmost point of the innermost ridge line) and the "delta" (the triradial point with three ridges radiating from it) in a fingerprint [69]. For example, a fingerprint belongs to the whorl pattern class if it contains two or more delta points, or to a loop pattern class if it contains one delta and one core (see Figure 2.1); the plain arch class has neither delta nor core points. A simplified sketch of this counting rule follows.
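The sketch below illustrates the singular-point counting rule just described. It is deliberately simplified: real classification also uses the spatial arrangement of cores and deltas (e.g., core-delta geometry separates left from right loops and the whorl-type subclasses), which this toy function omits.

```python
def henry_class(num_cores: int, num_deltas: int) -> str:
    """Assign a coarse Henry pattern class from singular-point counts.
    A simplification for illustration; spatial arrangement is ignored."""
    if num_deltas >= 2:
        return "whorl"        # whorl-type patterns show two or more deltas
    if num_deltas == 1 and num_cores == 1:
        return "loop"         # left or right depends on the geometry
    if num_deltas == 0 and num_cores == 0:
        return "plain arch"   # no singular points at all
    return "undetermined"     # e.g., a partial print missing a singularity

print(henry_class(num_cores=1, num_deltas=1))   # -> "loop"
print(henry_class(num_cores=2, num_deltas=2))   # -> "whorl"
```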
Figure 2.2: Percentage of occurrences of the major fingerprint classes in 222 million rolled fingerprints [129].

Figure 2.3: An example of Level 1 features, including the pattern class (left loop), delta and core locations, and the orientation field overlaid on a fingerprint image.

Because Level 1 features characterize a fingerprint class rather than fingerprint individuality, they are only used to narrow the search via classification and exclusion. Fingerprint pattern classes are not uniformly distributed among the population. For example, central pockets, twin loops, and accidentals are very rare and are often ignored for classification purposes. The percentages of occurrence of the other five major classes are approximately 3.7%, 33.8%, 31.7%, 2.9% and 27.9% for the arch, left loop, right loop, tented arch and whorl, respectively [129] (see Figure 2.2). Note that left loops, right loops, and whorls are the most common patterns, making up 93.4% of all fingerprints.

When fingerprints are partial (e.g., latents), the complete ridge flow pattern, or pattern class, is often not available. As a result, Level 1 features also include any information that helps determine the pattern of a fingerprint, such as the orientation field and core and delta points (see Figure 2.3).

2.2.2 Level 2

Level 2 features refer to the features that occur on individual ridge paths, including the turns that each ridge takes and the places where ridges terminate or split [36].

Level 2 features describe various ridge path deviations where single or multiple ridges form abrupt stops, splits, spurs, enclosures, etc. These features, known as the Galton points or minutiae, have two basic forms: ridge ending and ridge bifurcation (see Figure 2.4). Composite minutiae (i.e., forks, spurs, bridges, crossovers and trifurcations) can all be considered combinations of these basic forms (see Figure 2.5). For example, a fork (enclosure) contains two ridge bifurcations facing each other, and a spur consists of a ridge bifurcation and a ridge ending. Sometimes, a dot formed by a single ridge unit is also considered a minutia, though it is often classified as Level 3. The percentages of occurrence of each minutiae type (including composite minutiae) are shown in Figure 2.6 [46]. Ridge endings, ridge bifurcations and dots are the most common minutiae types in fingerprints.

Figure 2.4: Galton's ridge characteristics (minutiae): (a) ridge ending; (b) ridge bifurcation (adopted from [18]).

Figure 2.5: Composite ridge characteristics: (a) fork; (b) spur; (c) crossover; (d) bridge; (e) trifurcation (adopted from [18]).

Figure 2.6: Percentage of occurrences of various minutiae types (ridge ending, ridge bifurcation, dot, fork, spur, crossover, bridge, trifurcation) in 977 rolled fingerprints [46].

Figure 2.7: An example of Level 2 features, including ridge endings (red, 62 in this image), ridge bifurcations (blue, 26) and dots (green).

Level 2 features, unlike Level 1 features, have individualization power and contribute the most in fingerprint matching.
On average, a fingerprint contains 75-175 minutiae (see Figure 2.7). At times, however, only a small number of minutiae are available in the captured fingerprint image and the extraction of additional Level 3 features may be necessary.

2.2.3 Level 3

Level 3 features refer to all dimensional attributes of a ridge, such as ridge path deviation, width, shape, pores, edge contour, incipient ridges, breaks, creases, scars and other permanent details [36].

Level 3 features describe micro features of the friction ridges. For example, variations of ridge width and shape (see Figure 2.8), pores, incipients, creases and scars are all Level 3 features (see Figure 2.9). Generally, Level 3 features are the result of differential growth or random damage (such as from scarring) at the ridge unit level [36]. In particular, the alignment or misalignment of ridge units, ridge unit shape and width, immature incipient ridges, and relative pore location are caused by differential growth of the friction skin. Creases and scars, on the other hand, are caused by physical or genetic damage to the friction skin.

Figure 2.8: Variations of ridge width and shape, demonstrated in detail at ridge endings (red), bifurcations (green) and dots (blue).

Level 3 features are believed to be unique and to have individualization power [36, 123]. Wentworth and Wilder suggested that 20 to 40 pores should be sufficient to determine the identity of a person [128]. In practice, however, Level 3 features may not be reproducible due to variations in skin condition (dryness), skin distortion and the pressure applied while forming the impression. As a result, Level 3 features are always used in concert with Level 2 features and have "unconsciously become part of the comparison at Level 2" [36]. In fact, 52% of forensic experts claim to always utilize Level 3 features, when present distinctively, to weigh their match decisions [35].

Figure 2.9: Examples of Level 3 features, including (a) pores (red), incipients (green) and scars (yellow) and (b) creases (blue).

Table 2.1: Fingerprint features and required image resolution.

  Resolution    Level 1 features    Level 2 features    Level 3 features
  250 ppi       yes                 no (1)              no
  500 ppi       yes                 yes                 no (2)
  1000 ppi      yes                 yes                 yes

  (1) cannot capture dots or discern the type of Level 2 features (e.g., ridge ending vs. ridge bifurcation)
  (2) some Level 3 features may be visible (e.g., pores, scars and creases)

Note that the feature level may not be the most appropriate way to classify a fingerprint feature. For example, an incipient ridge can be "Level 2" if clarity permits, while a dot can be "Level 3" if barely visible. In fact, as image resolution increases, many Level 3 features become so stable that they possess as much discriminative power as Level 2 features. As a result, the level of features available in a fingerprint is relative to the image resolution. For example, 250 ppi is the minimum resolution to capture Level 2 features, while the FBI standard is 500 ppi. It is generally accepted in the forensic community that 1000 ppi is the minimum resolution to capture Level 3 features (see Table 2.1) [34].

2.3 Feature Correlation

Fingerprint features at Level 1, Level 2 and Level 3 do not exist independently. Rather, they are mutually correlated and the existence of one feature usually depends on the presence of other features.
Galton showed that there is a strong correlation between the friction ridge pattern at Level 1 and the occurrence of minutiae at Level 2 [62]. This is because friction ridges often contain more minutiae around core and delta points, whose relative positions and numbers also indicate the fingerprint pattern (see Figure 5.7). In fact, the average minutiae density in the core and delta regions is 0.49, compared to 0.18 outside these focal points [46]. Further, ridge paths at Level 2 determine where pores at Level 3 may occur, as friction ridges are composed of ridge units, each of which contains a pore.

Figure 2.10: Ridge path deviations do not form in isolation. The location and certainty of a ridge characteristic can be found by examining the surrounding ridge paths even if the feature may taper due to lack of pressure (reproduced from [36]).

Features at the same level are also highly correlated. For example, at Level 1, the ridge orientation indicates the positions of delta and core points; at Level 2, neighboring ridges determine the location and certainty of a ridge path deviation as they converge to fill the vacant space after a ridge ends or two ridges merge (see Figure 2.10); at Level 3, incipients often occur in groups and contain no pores. Understanding the correlation among various features is the key for forensic experts to develop sophisticated identification skills. This is also the case for automatic systems, where effective and efficient feature extraction and matching algorithms result from in-depth knowledge of fingerprint features and their properties.

2.4 Feature Reproducibility

When a fingerprint is acquired, not all of its features reproduce from one impression to another. For example, minutiae present in one impression can be displaced or even missing in another impression of the same finger due to sensor noise, skin distortion or non-ideal skin conditions, etc. (see Figure 2.11). In addition, the pressure applied during image acquisition also affects feature reproducibility. For example, the appearance of incipients can be affected by pressure, as shown in Figure 2.12. Light finger pressure reduces the visibility of pores, as demonstrated on five inked impressions (1000 ppi) by Parsons et al. [104] (see Figure 2.13).

Figure 2.11: Effects of image quality (e.g., distortion, non-ideal skin conditions) on the presence of minutiae captured in two impressions of the same fingerprint. Minutiae are manually marked (red) by forensic experts.

Figure 2.12: Effects of increasing pressure in a sequence of impressions on the appearance of incipients. Light pressure makes it difficult to distinguish incipients and dots (adopted from [100]).

Figure 2.13: Effects of varying amounts of pressure (50g, 100g, 200g, 300g, and 400g) on the number of manually detected pores (adopted from [104]).

Feature reproducibility refers to the consistency of a feature being present in a fingerprint image regardless of the capturing mechanism. There is a general belief that the higher the feature level, the less reproducible the features are due to their finer details. However, this has not been quantitatively established and there is no general consensus among forensic experts regarding feature reproducibility, especially at Level 3 [35]. The main obstacle is that feature reproducibility is a relative notion that depends on many factors, such as image quality, skin condition, pressure and feature representation. For example, Joliat et al. [84] suggested that Level 3 features such as pores and ridge edge shapes are reproducible in very high quality prints, and only in terms of their presence, not size or shape. Another difficulty in evaluating feature reproducibility is that obtaining ground-truth information of fingerprint features is very time-consuming.
As a result, a more efficient way to evaluate feature reproducibility is to utilize the features in automatic fingerprint matching and evaluate the matching performance.

2.5 Standards

In manual fingerprint matching, forensic experts often conduct a case-by-case analysis to determine which features to use in a comparison. In automatic fingerprint matching, however, a standardized feature format is required to ensure interoperability among widely distributed networks of various fingerprint systems (i.e., AFIS). Because minutiae are the most commonly used fingerprint features, both ANSI/NIST and the FBI have established minutiae standards for AFIS systems. Realizing the importance of non-minutiae features, especially those at Level 3, the National Institute of Standards and Technology (NIST) augmented these standards to include a large set of extended features [3]. This is a significant step towards utilizing extended features in the next generation of AFIS systems.

2.5.1 Minutiae Standards

There are two major minutiae standards currently used in the AFIS community: the ANSI/NIST-CSL 1-2000 standard [33] and the FBI's Electronic Fingerprint Transmission Specification (EFTS) standard [5]. Both standards provide effective and efficient means of exchanging fingerprint features among the various commercial systems used in large-scale applications. The ANSI/NIST-CSL 1-2000 standard defines four types of minutiae (i.e., ending, bifurcation, compound or undetermined). Note that this standard does not include dots as minutiae due to their small size and lower reproducibility. In the FBI's EFTS standard, each minutia is represented by its type, location, direction, quality and ridge counts (the number of ridges between the minutia and each of its neighboring minutiae).

The Minutiae Interoperability Exchange Test (MINEX) [29] and the Proprietary Fingerprint Template (PFT) Test [25] conducted by NIST have revealed that standard minutiae templates result in lower matching accuracy compared to proprietary templates. For example, using the proprietary templates of one of the leading AFIS vendors can significantly reduce the error rate compared to using standard minutiae templates (FRR reduces from 0.013 to 0.005 at a FAR of 0.01) [66]. This strongly suggests that current minutiae standards should be extended to include additional features that can potentially be used to improve automatic matching performance as well as to provide better system interoperability.

2.5.2 Standards for Extended Features

In 2005, NIST formed the Committee to Define an Extended Fingerprint Feature Set (CDEFFS) [45] to expand the current fingerprint feature standards by identifying,
defining, and providing guidance on extended features. Extended features, as the name suggests, refer to fingerprint features at all three levels that have not been defined in the current standards, but have been generally accepted and used in manual fingerprint matching. Note that unlike some fingerprint features (e.g., filter responses [82]) proposed for automatic matching, the extended features referred to here have direct physical counterparts (e.g., ridges, pores, dots, etc.), allowing visualization by forensic experts. The purpose of defining extended features is two-fold: i) to standardize the additional features utilized by forensic experts that can potentially be used to improve the performance of AFIS, and ii) to facilitate communication and information exchange regarding these features among forensic experts and between forensic experts and designers of AFIS [45]. A working draft of the "Data Format for the Interchange of Extended Fingerprint and Palmprint Features" has been published by the CDEFFS committee [3] and has been proposed for inclusion as an addendum to the ANSI/NIST-ITL 1-2007 standard [99]. More discussions and revisions are underway to finalize this standard.

Table 2.2 provides a list of extended features (mostly at Level 3) proposed by CDEFFS [3]. Definitions and representations of the proposed features reflect a quantifiable, reproducible, and clear characterization of the information content in a fingerprint. Note that the representation also needs to be compact, as encoding these features should not significantly increase the burden on forensic experts, although the features may be used in both manual and automatic systems. Currently, this standard is still a working draft.

Table 2.2: Extended features proposed in the "Data Format for the Interchange of Extended Fingerprint and Palmprint Features" [3].

  Features                          Code    Representation
  pores                             POR     center point
  dots                              DOT     center point, length
  incipients                        INR     starting and ending points
  creases, linear discontinuities   CLD     starting and ending points, crease type
  ridge edge features               REF     center point, type
  ridge skeletons                   SIM     a skeletonized image of thinned ridges
  ridge path segments               RPS     a ridge skeleton that connects two minutiae

- Pores: openings of subcutaneous sweat glands on friction ridges. The definition and extraction of pores are practical for high resolution images (e.g., ~1000 ppi). Pores can be quantitatively represented by their centroid positions. When appearing in large quantity, pores can be very distinctive and reliable for identification, especially in fragmentary latent examination.

- Dots: short ridges and short enclosures. Dots can be quantitatively represented by their centroid position and length.

- Incipients: friction ridges that are not fully developed, which often appear shorter and thinner, or more intermittent, than fully developed friction ridges. Incipients can be quantitatively represented by the positions of their starting and ending points.

- Creases and linear discontinuities: ridge discontinuities caused by creases, cracks, cuts, and thin or non-permanent scars. They can be represented by their starting and ending points. Permanent flexion creases are also noted by their type (i.e., proximal interphalangeal crease, distal interphalangeal crease).

- Ridge edge features: morphological features (width, major deviation, etc.) defining the contour or shape of the ridge edge. These features can be quantitatively represented by their center locations and types (e.g., protrusion: widening of a ridge edge; indentation: thinning of a ridge edge; or discontinuity: broken ridge edge).

- Ridge skeletons: tracing of thinned ridges for the entire region of interest.
They are stored as a binary skeletonized image with a white background and a black, single-pixel-wide thinned representation of each ridge.

- Ridge path segments: the portion of a ridge skeleton that connects two minutiae. Each ridge path segment is an ordered array of vertices. In the infrequent case in which a ridge segment forms a complete loop back on itself without intersecting another ridge segment (such as near the core of some plain whorls or central pocket loops), the ridge path starts and stops at a single arbitrary point on the ridge. Incipient ridges, dots, ridge discontinuities and protrusions are not included in the ridge path representation.

In order to utilize the above extended features, especially those at Level 3, ANSI/NIST has also proposed to increase the standard image resolution for latent, tenprint, and palm print images from 500 ppi to 1000 ppi. Consequently, the updated "ANSI/NIST-ITL Standards for Fingerprint and Other Biometric Information" [99] requires 1000 ppi resolution for latent scanning and recommends that all live-scan devices be equipped with 1000 ppi scanning capability.

2.6 Summary

Fingerprint features are generally described by forensic experts at three levels of detail. However, not all fingerprint features that are utilized in manual fingerprint matching are employed in automatic matching systems. In fact, only a small subset of Level 2 features, namely minutiae, has been standardized for automatic fingerprint matching. This greatly limits the performance of automatic systems, especially for poor quality tenprint and latent matching. The availability of higher resolution (1000 ppi) fingerprint sensing has made automatic extraction and matching of higher level extended features feasible. As a result, the forensic community, together with AFIS vendors, has become very active in standardizing a large set of extended features and quantifying their relevance and reliability [45, 3].

There has been no general consensus on the reproducibility or evidential value of extended features. A systematic study to determine how much performance gain one can achieve by introducing extended features into AFIS can provide an answer. Such a study would also shed light on the Next Generation Identification (NGI) needs, as the FBI is actively searching for means to improve AFIS performance, especially for latent matching.

Chapter 3

Extended Feature Extraction

3.1 Introduction

Compared to minutiae extraction, extended feature extraction is a more challenging problem due to the finer detail of the features. For example, pores are often too small (60 microns in diameter) for reliable extraction, especially when the fingerprint images are captured at low resolution (<= 500 ppi), with excessive or very low finger pressure, or under poor skin condition. Dot and incipient extraction can also be greatly affected by the amount of pressure, sensor noise and distortion. In addition, the representation of extended features and their comprehensive interrelationship information play major roles in the success of utilizing extended features in fingerprint matching. These challenges, however, should not discount the benefits of utilizing extended features, as the large quantity of these features can offset the feature extraction errors.

3.2 Previous Work

Automatic extraction of extended features has not been extensively studied. Existing literature has proposed several methods to extract a few specific non-minutiae features.
However, these features were mainly introduced to assist in minutiae extraction and have not been utilized as extended features in matching. For example, various ridge extraction algorithms have been proposed to facilitate the extraction or alignment of minutiae [92, 79], but the extracted ridges were often discarded after minutiae extraction. Crease detection algorithms were also introduced for the purpose of removing spurious minutiae [131, 135] or evaluating local fingerprint quality [74].

To extract and utilize extended features for matching purposes, Feng et al. [58] proposed to extract and trace the ridge skeletons associated with each minutia. When the number of minutiae is small, however, the number of extracted ridge features is very limited. Jain et al. [78] extended Feng's work by extracting the full ridge skeleton map for matching. Roddy and Stosz [120] introduced a pore extraction algorithm based on skeletonization. Kryszczuk [90] used a combination of binarization and skeletonization to extract pores. Both methods were demonstrated to be effective for good quality, high resolution fingerprint images (~2000 ppi) [120]; however, when image quality degrades or resolution decreases, these methods become more sensitive to noise and low image contrast. Recently, Parsons et al. [104] proposed a pore extraction method for 1000 ppi fingerprint images using difference of Gaussian (DoG) filtering, which approximates the Mexican-hat wavelet we proposed in [76] for pore extraction.

3.3 Proposed Approach

We propose to systematically extract a set of extended features, including ridge skeletons, pores, dots and incipients, at both 500 ppi and 1000 ppi resolutions (see Figure 3.1). We follow the representation of these features proposed in the ANSI/NIST extended feature standard [3] (see Table 2.2). For example, ridge skeletons are represented as single-pixel-wide thinned ridges; dots and incipients are represented by their skeletons (including center, starting and ending points) with orientation; and pores are represented by their central positions, as illustrated in Table 3.1. In addition, we extract feature interrelationship information such as neighboring ridge information and ridge ownership of pores. This contextual information has proven to be useful in manual fingerprint matching, as the perception and comparison of one set of features are often influenced by the presence or absence of other features [43]. Note that the interrelationship information can be derived from features extracted either automatically or manually as long as they are compatible.

Figure 3.1: Illustration of the proposed extended features (pores, dots, incipients and ridge skeletons).

Table 3.1: Representation of the proposed extended features.

  Features           Proposed Representation
  pores              center point, ridge ownership
  dots               point arrays of the skeleton, orientation
  incipients         point arrays of the skeleton, orientation
  ridge skeletons    a skeletonized image with neighboring ridge information

There are three main steps in the proposed feature extraction algorithm:

- Segmentation: This module separates the fingerprint area (foreground) from the background.

- Quality Assessment: This module estimates the quality of a fingerprint based on the clarity of ridge-valley patterns in local regions. Poor quality (unusable) regions are severely corrupted by distortion and noise, making the extracted features in these regions unreliable.

- Feature Extraction: This module extracts the features and their interrelationship information.
Image enhancement is often employed to increase feature clarity before the features are extracted.

3.3.1 Segmentation

A fingerprint image often contains a noisy background caused by ink stains, fingerprint residues, moisture, etc., introduced during the collection process. As a result, it is necessary to separate the fingerprint foreground from the background. The discriminating characteristic between the foreground and background is the presence of an oriented ridge pattern. We apply a gradient-based segmentation method [92] to segment the fingerprint foreground (see Figure 3.2). Before computing gradients, we normalize the given fingerprint image $I$ using the min-max rule such that the intensity values in the resulting image $I_n$ have a common range of $[0, 1]$:

$$I_n = \frac{I - \min(I)}{\max(I) - \min(I)}. \qquad (3.1)$$

Figure 3.2: A high-level algorithm of the segmentation procedure: image normalization, gradient estimation, block-wise average gradient magnitude, thresholding, and convex-hull segmentation.

To calculate the gradients, we apply 1D convolution kernels at each pixel in $I_n$:

$$G_x = [-1 \;\; 0 \;\; 1] \otimes I_n, \qquad (3.2)$$
$$G_y = [-1 \;\; 0 \;\; 1]^T \otimes I_n, \qquad (3.3)$$

where $G_x$ and $G_y$ are the gradients in the horizontal and vertical directions, respectively. Next, we partition the image $I_n$ into non-overlapping blocks of size $b \times b$ ($b = 16$). For each block $B$, we calculate the average magnitude of the gradients as

$$\bar{G}_B = \frac{1}{b^2} \sum_{s \in B} \sqrt{G_{xs}^2 + G_{ys}^2}, \qquad (3.4)$$

where $(G_{xs}, G_{ys})$ denote the gradients at site $s = (x_s, y_s) \in B$. Finally, $\bar{G}_B$ is compared to a threshold $T_g$ ($= 0.03$) to determine if block $B$ is part of the foreground ($F_B = 1$) or background ($F_B = 0$), that is,

$$F_B = \begin{cases} 1, & \text{if } \bar{G}_B > T_g; \\ 0, & \text{otherwise}. \end{cases} \qquad (3.5)$$

Figure 3.3: Segmentation of a fingerprint image using block-wise average gradient magnitude: (a) original image; (b) block-wise average gradient magnitude; (c) foreground detection after thresholding and post-processing, where the red contour is the convex hull extracted based on the foreground blocks; (d) segmented image.

After thresholding, the foreground is not necessarily fully connected and may contain holes. Post-processing methods, including connected component extraction, a flood-fill operation, and median filtering, are employed automatically to ensure a clean, fully connected foreground. The convex hull that contains all the detected foreground blocks is saved as the final segmented fingerprint foreground. Figure 3.3 demonstrates the major steps of the proposed segmentation process.

3.3.2 Quality Assessment

Once the fingerprint foreground is identified, we evaluate the quality of the fingerprint by determining the strength of the friction ridge pattern in each foreground block $B$ [49] (see Figure 3.4). Let $J_B$ be the covariance matrix of the gradient vectors for block $B$,

$$J_B = \sum_{s \in B} \begin{bmatrix} G_{xs}^2 & G_{xs} G_{ys} \\ G_{ys} G_{xs} & G_{ys}^2 \end{bmatrix}. \qquad (3.6)$$

This $2 \times 2$ symmetric matrix is positive semidefinite with eigenvalues

$$\lambda_{B,1} = \frac{1}{2}\left(\mathrm{trace}(J_B) + \sqrt{\mathrm{trace}^2(J_B) - 4\det(J_B)}\right), \qquad (3.7)$$
$$\lambda_{B,2} = \frac{1}{2}\left(\mathrm{trace}(J_B) - \sqrt{\mathrm{trace}^2(J_B) - 4\det(J_B)}\right),$$

where $\mathrm{trace}(J_B) = \sum_{s \in B} (G_{xs}^2 + G_{ys}^2)$, $\det(J_B) = \left(\sum_{s \in B} G_{xs}^2\right)\left(\sum_{s \in B} G_{ys}^2\right) - \left(\sum_{s \in B} G_{xs} G_{ys}\right)^2$ and $\lambda_{B,1} \geq \lambda_{B,2}$. The block-wise quality is then measured by the normalized coherence, defined as

$$q_B = \log_{10} \lambda_B \times \frac{(\lambda_{B,1} - \lambda_{B,2})^2}{(\lambda_{B,1} + \lambda_{B,2})^2}. \qquad (3.8)$$

Note that the first term in Equation 3.8 reflects the contrast of the block, while the second term measures the strength of the local orientation. When the contrast is very low, i.e., $\log_{10} \lambda_B$ is negative, $q_B$ is set to zero. Figure 3.5 (b) shows the resulting block-wise local quality measure.
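The sketch below illustrates Equations 3.1-3.8 on a toy image: min-max normalization, block-wise average gradient magnitude for foreground detection, and the coherence-based quality measure. It assumes, for illustration, that $\lambda_B$ in Equation 3.8 denotes the dominant eigenvalue $\lambda_{B,1}$; the cascading, smoothing and post-processing steps are omitted.

```python
import numpy as np

def normalize(img):
    """Min-max normalization to [0, 1] (Equation 3.1)."""
    img = img.astype(float)
    return (img - img.min()) / (img.max() - img.min())

def segment_and_grade(img, b=16, Tg=0.03):
    """Per-block foreground flags (Equations 3.4-3.5) and coherence-based
    quality values (Equations 3.6-3.8) for a normalized image."""
    gy, gx = np.gradient(img)                  # simple gradient estimates
    H, W = img.shape[0] // b, img.shape[1] // b
    fg = np.zeros((H, W), dtype=bool)
    q = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            Gx = gx[i*b:(i+1)*b, j*b:(j+1)*b].ravel()
            Gy = gy[i*b:(i+1)*b, j*b:(j+1)*b].ravel()
            fg[i, j] = np.mean(np.hypot(Gx, Gy)) > Tg     # Equation 3.5
            J = np.array([[Gx @ Gx, Gx @ Gy],             # Equation 3.6
                          [Gy @ Gx, Gy @ Gy]])
            lam1, lam2 = sorted(np.linalg.eigvalsh(J), reverse=True)
            contrast = np.log10(lam1) if lam1 > 0 else -1.0
            if contrast <= 0 or lam1 + lam2 == 0:
                q[i, j] = 0.0                  # very low contrast block
            else:                              # Equation 3.8
                q[i, j] = contrast * ((lam1 - lam2) / (lam1 + lam2)) ** 2
    return fg, q

fg, q = segment_and_grade(normalize(np.random.rand(128, 128)))
```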
Figure 3.4: A high-level algorithm of the quality estimation procedure: block-wise covariance matrix of gradients, calculation of eigenvalues, normalized coherence measure (applied in a cascading fashion), smoothing using circular average filtering, and global quality measure.

The size of a block plays a very important role in local quality estimation. When the block size is too small, the local quality estimate can be very noisy; when the block size is too big, the estimate loses its capability of capturing local information. As a result, we extend the above quality estimation procedure into a cascade framework by progressively investigating quality. That is, instead of estimating quality block by block, we first divide the fingerprint foreground into relatively large regions (each of size $4b \times 4b$), and estimate the quality using the coherence-based measure in Equation 3.8. If the quality value of a region is sufficiently high ($> T_q = 0.5$), the quality estimation process for this region is terminated, meaning that the region is of good quality. Otherwise, the region is divided into 4 quadrants (each of size $2b \times 2b$) for more detailed quality evaluation. This process continues until individual blocks (each of size $b \times b$, $b = 16$) are reached. Figure 3.5 (c) shows the cascade local quality of a fingerprint image. The cascade implementation is highly efficient since detailed quality analysis is only conducted in poor quality regions. Finally, the local quality is smoothed using a circular averaging filter with a radius of 20 pixels (see Figure 3.5 (d)) and the regions that have quality values larger than $T_q$ are identified; Level 3 features (e.g., pores, dots and incipients) will be extracted only in these regions.

Figure 3.5: Quality estimation of a fingerprint image using the proposed coherence measure: (a) segmented image; (b) block-wise local quality; (c) cascade local quality; (d) smoothed local quality using circular average filtering. White pixels mean good quality and black pixels mean poor quality.

The overall quality of the fingerprint image $I_n$ is also obtained by taking the average of the local qualities over all blocks:

$$Q = \frac{1}{N_B} \sum_{B} q_B, \qquad (3.9)$$

where $N_B$ is the total number of foreground blocks in the fingerprint image.

3.3.3 Ridge Extraction

To extract ridge skeletons, we first enhance the ridge-valley patterns by spatially applying Gabor filters of size $b \times b$ ($b = 11$) in each block $B$. Gabor filters are selected because they have frequency-selective and orientation-selective properties that allow the filters to give maximal response to ridges at a specific orientation and frequency while reducing noise. The Gabor filter is defined as a cosine wave modulated by a Gaussian [77]:

$$H_B = \exp\left\{-\frac{1}{2}\left(\frac{x_{s,\theta_B}^2}{\delta_x^2} + \frac{y_{s,\theta_B}^2}{\delta_y^2}\right)\right\} \cos(2\pi f x_{s,\theta_B}), \qquad (3.10)$$

where

$$\begin{bmatrix} x_{s,\theta_B} \\ y_{s,\theta_B} \end{bmatrix} = \begin{bmatrix} \cos(\theta_B) & \sin(\theta_B) \\ -\sin(\theta_B) & \cos(\theta_B) \end{bmatrix} \begin{bmatrix} x_s \\ y_s \end{bmatrix}. \qquad (3.11)$$

Parameters $f$ ($= 0.125$) and $\delta_x$ ($= 4$), $\delta_y$ ($= 4$) are the ridge frequency and the standard deviations of the Gaussian envelope applied along the x- and y-axes, respectively (see Figure 3.6) [72]. The filter orientation $\theta_B$ is determined by the local ridge orientation, which is estimated based on the gradients at each site $s = (x_s, y_s) \in B$ [71]:

$$\theta_B = \frac{1}{2}\,\mathrm{atan}\left(\frac{\sum_{s \in B} 2 G_{xs} G_{ys}}{\sum_{s \in B} (G_{xs}^2 - G_{ys}^2)}\right), \qquad (3.12)$$

where $\sum_{s \in B} 2 G_{xs} G_{ys}$ and $\sum_{s \in B} (G_{xs}^2 - G_{ys}^2)$ approximate the sine and cosine components of the doubled local orientation ($2\theta$), respectively. The atan function should preserve the quadrant information of the angle.
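A sketch of the two building blocks just defined, the Gabor kernel of Equations 3.10-3.11 and the gradient-based orientation estimate of Equation 3.12, is given below; parameter defaults follow the text, and arctan2 is used to preserve quadrant information as required.

```python
import numpy as np

def gabor_kernel(theta, f=0.125, dx=4.0, dy=4.0, b=11):
    """Gabor filter of Equations 3.10-3.11, tuned to ridge orientation
    `theta` (radians) and ridge frequency `f`; `b` is the kernel size."""
    half = b // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)   # (Equation 3.11)
    envelope = np.exp(-0.5 * (xr**2 / dx**2 + yr**2 / dy**2))
    return envelope * np.cos(2 * np.pi * f * xr)  # Equation 3.10

def block_orientation(gx, gy):
    """Local ridge orientation of one block (Equation 3.12). The doubled
    angle avoids the 0/180-degree wrap-around; arctan2 keeps the quadrant."""
    sin2 = np.sum(2.0 * gx * gy)
    cos2 = np.sum(gx**2 - gy**2)
    return 0.5 * np.arctan2(sin2, cos2)
```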
Once the ridges are enhanced, we employ the peak-detection algorithm proposed in [109] to extract them. That is, for each block centered at pixel $(x_s, y_s)$, the gray-level profile is obtained by projecting the pixel intensities onto the central section and smoothing through local averaging (see Figure 3.7). The peaks and their two neighboring pixels on each side constitute the ridges in the resulting binary image, which is then skeletonized to extract single-pixel-wide ridge skeletons. An example of ridge enhancement and extraction is shown in Figure 3.8.

Figure 3.6: Gabor filters: (a) 3D graphical representation; (b) a bank of filters with eight different orientations.

Figure 3.7: Peak-detection for ridge extraction. In (a), pixel intensities on the segment centered at $(x_s, y_s)$ are projected onto the normal of the local ridge orientation to obtain the gray-level profile in (b).

Figure 3.8: Ridge enhancement and extraction: (a) a cropped fingerprint image; (b) block-wise local orientation; (c) enhanced ridges using Gabor filters; (d) extracted ridge skeletons.

Because skeletonization preserves ridge continuity, the extracted ridge skeletons are saved as ridge points traced in order from one end to the other. The set of ridge skeletons $U$ is represented as $U = \{u(i)_a\}$, $i = 1, 2, \ldots, n_a$, $a = 1, 2, \ldots, N_U$, where $u(i)_a = (x(i)_a, y(i)_a)$ denotes the location of the $i$th ridge point on ridge $a$, $n_a$ is the total number of ridge points on ridge $a$ and $N_U$ is the total number of individual ridge skeletons in the fingerprint image. Note that for a ridge bifurcation, we generate two separate ridge skeletons by connecting the main ridge with each of the two branches.

Figure 3.9: Extraction of ridge skeletons and neighboring ridge information based on the inter-ridge distances between the ridge currently being traced and its neighboring ridges.

As the topology of ridges follows a certain ordering, it is important to capture the interrelationship between ridges during their extraction. Given two ridges, say $a$ and $b$, in a fingerprint image, we perform the following operations to determine their distance and possible neighborhood relation:

- Find the closest point pair $(n, m)$ between $u_a$ and $u_b$, where $n$ and $m$ denote the indices of the points on ridges $a$ and $b$, respectively.

- Establish correspondences $\{u(i)_a, u(j)_b\}$ between the two ridges based on the pair $(n, m)$, starting at locations $i = n - \min(n-1, m-1)$ and $j = m - \min(n-1, m-1)$, and ending at locations $i = n + \min(n_a - n, n_b - m)$ and $j = m + \min(n_a - n, n_b - m)$.

- Calculate the Euclidean distance $d(i,j)$ between each corresponding point pair $\{u(i)_a, u(j)_b\}$:

$$d(i,j) = \|u(i)_a - u(j)_b\|. \qquad (3.13)$$

- Calculate the inter-ridge distance between $a$ and $b$ as

$$l(a,b) = \frac{1}{k} \sum_{(i,j)} d(i,j), \qquad (3.14)$$

where $k = \min(n_a - n, n_b - m) + \min(n - 1, m - 1)$ is the length of the corresponding segment between the two ridges. The inter-ridge distance measure can also be used to calculate the ridge period (see Section 5.3.2).

- Reverse the point sequence in one of the ridges, say $u_a$, and repeat the above process. Assign the minimum of the two inter-ridge distances to $l(a,b)$.

- Determine ridges $a$ and $b$ to be neighbors if

$$l(a,b) < T_r, \qquad (3.15)$$

where $T_r = 13$ (26) pixels at 500 (1000) ppi is empirically determined by the maximum of the minimum inter-ridge distances.

Figure 3.10: A high-level algorithm of the ridge skeleton extraction procedure: block-wise ridge orientation, Gabor filtering, ridge skeletonization (peak-detection), ridge tracing, calculation of inter-ridge distances, and neighboring ridge information extraction.
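The neighbor test of Equations 3.13-3.15 can be sketched as follows for two traced ridges stored as ordered point arrays; indices are zero-based here, and the brute-force closest-pair search is for clarity rather than efficiency.

```python
import numpy as np

def inter_ridge_distance(ua, ub):
    """Average distance over the corresponding segment of two traced
    ridges (Equations 3.13-3.14); ua, ub are (n, 2) arrays of ridge
    points ordered from one end to the other (zero-based indices)."""
    d = np.linalg.norm(ua[:, None, :] - ub[None, :, :], axis=2)
    n, m = np.unravel_index(np.argmin(d), d.shape)   # closest point pair
    back = min(n, m)                                 # walk outward from
    fwd = min(len(ua) - 1 - n, len(ub) - 1 - m)      # (n, m) symmetrically
    i = np.arange(n - back, n + fwd + 1)
    j = np.arange(m - back, m + fwd + 1)
    return np.mean(np.linalg.norm(ua[i] - ub[j], axis=1))

def are_neighbors(ua, ub, Tr=13.0):
    """Neighbor test of Equation 3.15; the reversed tracing direction is
    also tried and the smaller distance kept, as described in the text."""
    l_ab = min(inter_ridge_distance(ua, ub),
               inter_ridge_distance(ua[::-1], ub))
    return l_ab < Tr
```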
Note that a ridge can have more than two neighboring ridges due to the presence of minutiae (see Figure 3.9). Finally, the extracted ridge skeletons along with their interrelationship information can be represented as $U = \{u(i)_a\} = \{\{(x(i)_a, y(i)_a)\}_{i=1}^{n_a}, h_a\}_{a=1}^{N_U}$, where $h_a$ denotes the indices of the neighboring ridges of ridge $a$. A flowchart of the ridge extraction process is shown in Figure 3.10.

3.3.4 Pore Extraction

The appearance of a pore in a fingerprint image can differ in both shape and size due to its perspiration activity, as the pore may be open in one image and closed in another [36] (see Figure 3.11). On the other hand, pores appear only on the ridges and can be associated with the ridges already extracted. We propose to extract pores with regard to their central positions as well as their ridge ownership information.

Figure 3.11: Appearance of pores. A pore can be open or closed due to its perspiration activity.

For ridge extraction, an enhancement algorithm based on Gabor filtering was proposed. This algorithm successfully enhances the ridge-valley pattern, but also removes fine details such as pores. As a result, a different enhancement, based on the sigmoid function, is proposed for pore extraction. Given a normalized fingerprint image $I_n$, the enhanced image $I_e$ is obtained as

$$I_e = \frac{1}{1 + \exp\{g \times (I_n - m)\}}, \qquad (3.16)$$

where $g$ ($= 10$) controls the percentage of contrast increase compared to $I_n$, and $m$ ($= 0.5$) is the normalized gray value around which contrast is increased. Figure 3.12 (a) shows the enhanced version of the image in Figure 3.8 (a). Note that the Level 3 details are clearer in Figure 3.12 (a) than in the ridge enhancement shown in Figure 3.8 (c).

Figure 3.12: Extraction of pores: (a) enhanced image using the sigmoid function; (b) band-pass filtering using the Mexican-hat wavelet transform; (c) masking out filter responses in the valleys using the enhanced ridges (see Figure 3.8 (c)); (d) thresholding and blob detection; (e) extracting ridge ownership (overlaid with the ridge skeleton) and removing spurious pores (green blobs); (f) extracted pores (red circles).

Figure 3.13: A high-level algorithm of the pore extraction procedure: sigmoid transformation, wavelet transform, masking out valleys, connected component detection, and ridge ownership extraction with spurious pore removal.

After enhancement, the Mexican-hat wavelet [42] is used as a band-pass filter to capture the abrupt change of intensity values at pores (see Figure 3.12 (b)). The wavelet transform is applied in the frequency domain as

$$W(\omega_1, \omega_2) = \sqrt{s}\, F(\omega_1, \omega_2)\, \psi(\omega_1, \omega_2), \qquad (3.17)$$

where $F(\omega_1, \omega_2)$ is the Fourier transform of the enhanced image $I_e$ and $\psi(\omega_1, \omega_2)$ is the frequency response of the Mexican-hat wavelet at scale $s$; the filtered image is obtained by the inverse Fourier transform, with forward and inverse kernels $\exp\{-j2\pi(\omega_1 x + \omega_2 y)\}$ and $\exp\{j2\pi(\omega_1 x + \omega_2 y)\}$, respectively (Equations 3.18-3.20).

After filtering, pores, which have high negative frequency responses, appear as small blobs with low intensities. Because pores only appear on ridges, we also mask out the filter responses in the valleys based on the enhanced ridge-valley patterns (see Figure 3.12 (c)). Connected components whose negative filter responses are below an empirically determined threshold $T_p$ ($= -2$) are identified as pores (see Figure 3.12 (d)). Let $P = \{p_i\} = \{(x_i, y_i)\}_{i=1}^{N_p}$ denote the extracted pores, where $(x_i, y_i)$ denotes the central location of pore $p_i$ and $N_p$ is the total number of pores in the image.
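A sketch of the enhancement and blob-detection steps is given below. Following Parsons et al. [104], a difference of Gaussians stands in for the Mexican-hat wavelet (the two are close approximations); the sigma values and the relative threshold are illustrative choices rather than the thesis parameters, which operate on the wavelet responses directly.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label, center_of_mass

def sigmoid_enhance(In, g=10.0, m=0.5):
    """Contrast enhancement of Equation 3.16 on a normalized image."""
    return 1.0 / (1.0 + np.exp(g * (In - m)))

def detect_pores(In, ridge_mask, s1=1.0, s2=2.0):
    """Return centroids of strong negative band-pass responses on the
    ridges. A difference of Gaussians (sigmas s1 < s2) approximates the
    Mexican-hat band-pass filter; the relative threshold below is an
    illustrative choice, not the fixed Tp = -2 of the text."""
    Ie = sigmoid_enhance(In)
    response = gaussian_filter(Ie, s1) - gaussian_filter(Ie, s2)
    strong = (response < -2.0 * response.std()) & ridge_mask
    blobs, n = label(strong)                     # connected components
    return [center_of_mass(strong, blobs, k) for k in range(1, n + 1)]
```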
Next, for each extracted pore $p_i$, we establish its ridge ownership by calculating the distances between $p_i$ and the extracted ridges $U = \{u_a\}$, $a = 1, 2, \ldots, N_U$. The pore $p_i$ is assigned to its closest ridge $c_i$, given as

$$c_i = \arg\min_a \left( \min_j \|p_i - u(j)_a\| \right), \quad j = 1, 2, \ldots, n_a, \; a = 1, 2, \ldots, N_U, \qquad (3.21)$$

where $\|p_i - u(j)_a\|$ denotes the Euclidean distance between pore $p_i$ and the $j$th point on ridge $a$. Note that if a pore lies more than 3 (6) pixels away from its closest ridge skeleton at 500 ppi (1000 ppi), it is discarded as it does not belong to any ridge (see Figure 3.12 (e)). By assigning ridge ownership, pores are grouped based on their associated ridges. The final extracted pores are shown in Figure 3.12 (f) and are represented as $P = \{(x_i, y_i, c_i)\}_{i=1}^{N_p}$. A flowchart of the pore extraction process is shown in Figure 3.13.

3.3.5 Dot and Incipient Extraction

Dots are short ridge segments, and incipients are immature ridges that are significantly thinner than normal ridges. An incipient can be broken into a sequence of dots if sufficient finger pressure was not applied during image acquisition (see Figure 2.12). Because of this, and the fact that both dots and incipients occur in between the ridges, it is difficult to design algorithms that can discriminate between dots and incipients. As a result, we do not distinguish these two feature types in our extraction algorithm, and both dots and incipients are represented as oriented skeletons.

Figure 3.14: Extraction of dots and incipients: (a) a cropped fingerprint image; (b) enhanced image using the sigmoid function; (c) masking out ridges using the enhanced ridges; (d) estimated phase symmetry of features in the valleys; (e) extracting connected components with responses surpassing the threshold $T_d$; (f) extracted dots/incipients.

To extract dots and incipients, we first use the same enhancement algorithm used for pore extraction (see Figure 3.14 (b)). Because dots and incipients only occur in the valleys, we also mask out the ridges using the Gabor-enhanced ridge image (see Figure 3.14 (c)). After removing the ridges, dots and incipients, as well as edge features of ridges wider than the inverse of the constant frequency used in the Gabor filtering, remain in the valleys (see Figure 3.14 (c)). To separate dots and incipients from ridge edge features, we use log-Gabor filters [89] to estimate the phase symmetry, since dots and incipients present higher symmetry in the valleys than other features. Let $I_e$ be the enhanced fingerprint image, and let $M_s^e$ and $M_s^o$ denote the even-symmetric (cosine) and odd-symmetric (sine) components of the log-Gabor filters at scale $s$. We convolve $I_e$ with both filters at each pixel $(x, y)$ to produce the filter responses, given as

$$[e_s(x, y), o_s(x, y)] = [I_e(x, y) \otimes M_s^e, \; I_e(x, y) \otimes M_s^o]. \qquad (3.22)$$

The amplitude of the responses at scale $s$ is defined as

$$A_s(x, y) = \sqrt{e_s(x, y)^2 + o_s(x, y)^2}. \qquad (3.23)$$

The phase symmetry computed from these responses is thresholded ($T_d$) to detect dot/incipient candidates in the valleys, whose skeletons and orientations are then extracted (see Figure 3.14 (d) and (e)). If the orientation along a candidate varies significantly (> 10 degrees), both orientations will be recorded. The final representation of the extracted dots and incipients is $D = \{d_a\} = \{\{(x(i)_a, y(i)_a)\}_{i=1}^{n_a}, o_a\}_{a=1}^{N_D}$, where $(x(i)_a, y(i)_a)$ denotes the location of the $i$th pixel on the skeleton of dot/incipient $a$, $o_a$ and $n_a$ are the orientation and size (number of pixels) of dot/incipient $a$, and $N_D$ is the total number of dots/incipients extracted. A flowchart of the dot and incipient extraction process is shown in Figure 3.15.
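The sketch below illustrates a symmetry measure built from the even and odd responses of Equations 3.22-3.23. For brevity it uses a Gabor quadrature pair rather than log-Gabor filters, and a common phase-symmetry formulation (even responses dominating odd ones, normalized by the amplitude); both substitutions are assumptions for illustration, not the thesis implementation.

```python
import numpy as np
from scipy.ndimage import convolve

def quadrature_pair(theta, f=0.125, sigma=4.0, size=15):
    """Even (cosine) and odd (sine) oriented filters; a Gabor quadrature
    pair is used here in place of the log-Gabor filters of the text."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * f * xr), env * np.sin(2 * np.pi * f * xr)

def phase_symmetry(img, orientations=8):
    """Pointwise symmetry: even responses dominate odd responses at
    symmetric structures such as dots and incipients (e_s, o_s and A_s
    are defined by Equations 3.22-3.23)."""
    img = img.astype(float)
    num = np.zeros_like(img)
    den = np.zeros_like(img)
    for k in range(orientations):
        even_k, odd_k = quadrature_pair(np.pi * k / orientations)
        e = convolve(img, even_k)                # e_s(x, y)
        o = convolve(img, odd_k)                 # o_s(x, y)
        num += np.maximum(np.abs(e) - np.abs(o), 0.0)
        den += np.hypot(e, o)                    # amplitude A_s(x, y)
    return num / (den + 1e-9)                    # higher values = more dot-like
```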
3.4 Experiments

Performance of extended feature extraction is evaluated on two databases, namely NIST Special Database 27 (NIST-27) [13] and the MSU database (comprising two parts: MSU-full and MSU-partial), as shown in Table 3.2. The MSU database was collected by the Pattern Recognition and Image Processing (PRIP) laboratory at Michigan State University as part of its "High Resolution Fingerprint Matching" project. Sample images from each database can be seen in Figure 3.16.

Table 3.2: Description of databases used in experiments.

  Database name           MSU-full    MSU-partial           NIST-27
  Fingerprint type        flats       partials and flats    latents and rolled prints
  No. of fingers          410         300                   258
  Impressions per finger  4           6*                    2+
  Total no. of images     1,640       1,800                 516
  Resolution              1000 ppi    1000 ppi              500 ppi

  * 5 partial impressions and 1 flat impression
  + 1 latent and 1 rolled print

NIST-27 is a public database that contains 258 latent and rolled pairs at 500 ppi, for a total of 516 images. In this database, minutiae, dots/incipients and pores have been manually extracted for both latents and rolled prints by forensic experts (see Figure 3.17). The MSU database was collected with an interval of three days, using a commercial optical sensor (CrossMatch ID1000) at both 500 ppi and 1000 ppi. This database has two parts: the first part (MSU-full) contains 4 flat fingerprints of 41 users, 10 fingers per user, resulting in a total of 1,640 images; the second part (MSU-partial) contains 1 flat and 5 partial fingerprints of 30 users, 10 fingers per user, resulting in a total of 1,800 images. Users were asked to press the left, right, top, bottom and center of their finger on the sensor to create partial prints. The MSU-partial database was collected specifically to study the effects of the small image size and distortion often seen in latents, as no 1000 ppi latent database was available at the time of this study.

Figure 3.16: Sample images from (a) NIST-27, (b) MSU-full and (c) MSU-partial databases.

Figure 3.17: Examples of manually marked minutiae (blue), pores (red) and dots/incipients (green) in (a) a latent and (b) the corresponding rolled print from NIST-27.

We apply the proposed automatic extended feature extraction algorithms to both the NIST-27 and MSU databases. Note that features in the latents of NIST-27 were only manually extracted. This is because the database contains very challenging latents of average casework quality, whose noisy backgrounds are too difficult for our extraction algorithms to handle (see Figure 3.17). Figure 3.18 shows the histograms of the number of dots and pores automatically extracted from images of MSU-full, MSU-partial and the rolls of NIST-27, as well as of those manually extracted from the latents of NIST-27. As we can see, the higher image resolution of the MSU-full and MSU-partial databases (1000 ppi) results in a larger number of Level 3 features compared to the NIST-27 database (500 ppi). On average, the numbers of pores (dots/incipients) automatically extracted from images in MSU-full, MSU-partial and rolls in NIST-27, and manually extracted from latents in NIST-27, are 469 (13), 429 (7), 134 (13) and 5 (1), respectively. The number of Level 3 features manually extracted from latents in NIST-27 is very low due to the poor image quality and small fingerprint area.
Figure 3.18: Histograms of the number of (a) pores and (b) dots/incipients automatically extracted in MSU-full, MSU-partial and from rolls in NIST-27, as well as those manually extracted from latents in NIST-27.

Figure 3.19: Evaluating (a) pore and (b) dot/incipient extraction accuracy by comparing automatically extracted features (red) with manually extracted features (blue). The missing (spurious) error rate of pore and dot/incipient extraction in the fingerprint area shown here is 0.16 (0.34) and 0 (0.17), respectively.

Table 3.3: Average error rates of the proposed pore and dot/incipient extraction algorithm at 500 ppi, compared with those of a state-of-the-art minutiae extraction algorithm [12], on 258 rolled prints in the NIST-27 database.

                 Manual count   Automatic count   Missing rate   Spurious rate
  Pores          134.5          173.3             0.29           0.32
  Dots           13.1           10.6              0.40           0.29
  Minutiae [12]  106            104               0.21           0.17

Table 3.4: Average error rates of the proposed pore and dot/incipient extraction algorithm at 1000 ppi, compared with those of a state-of-the-art minutiae extraction algorithm [12], on 100 flats in the MSU database.

                 Manual count   Automatic count   Missing rate   Spurious rate
  Pores          497.2          531.4             0.09           0.14
  Dots           18.8           18.7              0.15           0.17
  Minutiae [12]  56.6           55.4              0.07           0.08

In order to estimate the accuracy of our Level 3 feature extraction algorithms, we evaluate the consistency of positions ($x$ and $y$ coordinates) between automatically and manually extracted pores and dots/incipients on 258 rolls in the NIST-27 database and 100 flats in the MSU database. The result is also compared with the accuracy of automatic minutiae extraction using a state-of-the-art algorithm [12]. In our evaluation, a feature is considered correctly extracted if it is found in both the manual and the automatic extraction within a $t \times t$ tolerance window (see Figure 3.19). In general, two types of errors can occur: i) missing errors, i.e., failures to detect a manually extracted feature in the automatic extraction, and ii) spurious errors, i.e., failures to remove a falsely detected feature in the automatic extraction that was not manually extracted. In our experiments, the value of $t$ is set to 8 (16) pixels for minutiae and dots/incipients, and 5 (10) pixels for pores, at 500 ppi (1000 ppi); these are the same values we use when matching these features between two fingerprints. Note that when the tolerance box is too small (large), fewer (more) automatically extracted features are likely to be matched with manually extracted features, resulting in high (low) missing and spurious errors. That is, enlarging (reducing) the tolerance boxes decreases (increases) both error rates simultaneously.

Let $N_g$, $N_e$, $N_m$ and $N_s$ be the number of manually extracted, automatically extracted, missing and spurious features, respectively. The missing error rate is calculated as the ratio of missing features to the total number of manually extracted features, $N_m / N_g$. The spurious error rate is calculated as the ratio of spurious features to the total number of automatically extracted features, $N_s / N_e$. Note that these error rates are only relative to the manual extraction performance, which is not perfect and has its own errors compared to the ground truth. Tables 3.3 and 3.4 show the average missing and spurious error rates on the 258 rolls of the NIST-27 database and the 100 flats in the MSU database.
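The evaluation just described is straightforward to express in code. The sketch below (Python with NumPy; an illustrative stand-in for the actual evaluation script) greedily pairs automatic and manual feature positions inside the $t \times t$ tolerance window, interpreting $t$ as the side length of a window centered on the manual feature, and returns the missing rate $N_m / N_g$ and the spurious rate $N_s / N_e$.

    import numpy as np

    def extraction_error_rates(manual_xy, auto_xy, t=5.0):
        # manual_xy, auto_xy: arrays of (x, y) feature positions.
        manual = np.asarray(manual_xy, dtype=float).reshape(-1, 2)
        auto = np.asarray(auto_xy, dtype=float).reshape(-1, 2)
        taken = np.zeros(len(auto), dtype=bool)
        hits = 0
        for mx, my in manual:
            # Automatic features inside the t x t window around (mx, my).
            inside = np.flatnonzero((np.abs(auto[:, 0] - mx) <= t / 2) &
                                    (np.abs(auto[:, 1] - my) <= t / 2) &
                                    ~taken)
            if inside.size:
                d = np.hypot(auto[inside, 0] - mx, auto[inside, 1] - my)
                taken[inside[np.argmin(d)]] = True   # one-to-one pairing
                hits += 1
        n_g, n_e = len(manual), len(auto)
        missing_rate = (n_g - hits) / n_g if n_g else 0.0
        spurious_rate = (n_e - hits) / n_e if n_e else 0.0
        return missing_rate, spurious_rate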
Note that the flats in the MSU database have a smaller fingerprint area than the rolls in NIST-27, so fewer minutiae are extracted from the MSU prints. On the other hand, Level 3 feature extraction benefits from the high image resolution and good image quality of the 1000 ppi live-scan images: more pores and dots (incipients) are extracted from the MSU database than from the rolls in the NIST-27 database. In general, the error rates of automatic pore and dot/incipient extraction are higher than those of minutiae extraction. However, the large number of pores, especially at 1000 ppi, may possess sufficient discriminative power in matching to offset the extraction errors. Finally, Figures 3.20 and 3.21 demonstrate how the extracted extended features can enrich the feature representation of a fingerprint, especially one with a small area.

Figure 3.20: Automatically extracted feature representation on a roll of NIST-27: (a) original rolled image; (b) minutiae ([12]); (c) extended features (including ridge skeletons (blue), pores (red) and dots/incipients (green)).

Figure 3.21: Automatically extracted feature representation on a partial print of MSU-partial: (a) original fingerprint image; (b) minutiae ([12]); (c) extended features (including ridge skeletons (blue), pores (red) and dots/incipients (green)).

The proposed extraction algorithms are currently implemented in MATLAB on a 3 GHz Intel Core Duo machine with 3 GB of RAM. At 500 ppi, the average times to extract ridge skeletons, pores and dots/incipients in a full print are 1.8, 1.2 and 5.9 seconds, respectively. When pores and dots/incipients are extracted at 1000 ppi, the average times are 3.3 and 12.8 seconds, respectively. Note that ridge skeleton extraction at 1000 ppi is redundant and unnecessary, since ridges are already well separated at 500 ppi.

Note that our extended feature extraction algorithms are currently not applicable to latents. This is mainly because latents are lifted from various surfaces (e.g., paper, wood, metal) and are often of poor quality with noisy backgrounds. In many cases, the background of a latent can include handwriting, surface texture patterns, dirt, residue of other fingerprints, etc. As a result, a reliable segmentation algorithm is needed to separate the latent from the background. In good quality latents with a clear background, however, automatic extended feature extraction is possible and may greatly reduce the workload of forensic experts, even if manual supervision is still required.

3.5 Summary

Extended feature extraction is an essential step towards utilizing these features in automatic fingerprint matching. To our knowledge, little attention has been paid to developing algorithms for the automatic extraction of extended features, especially Level 3 features. We have developed algorithms to automatically extract ridge skeletons, pores, dots and incipients. These algorithms also perform fingerprint foreground segmentation, quality assessment and feature enhancement. In addition to individual features, we also extract relational information among features, such as ridge neighborhood information and the ridge ownership of pores. In Chapter 4, this information will be used as constraints in matching ridges and pores.
To evaluate the performance of the proposed extended feature extraction algorithms, we compared automatically and manually extracted features at both Level 2 and Level 3 on the NIST-27 (rolls only) and MSU databases. We did not evaluate the performance on latents because the image quality of the latents in NIST-27 was too poor for our algorithms to detect Level 3 features. Instead, we collected the MSU-partial database to simulate high quality latents with small fingerprint size and uncontrolled distortion. The evaluation results show that the proposed pore and dot/incipient extraction algorithms are less accurate than state-of-the-art minutiae extraction. For example, for 1000 ppi live-scan images, the error rates of pore and dot/incipient extraction are roughly twice those of the state-of-the-art minutiae extraction algorithm. However, the large number of pores may possess sufficient discriminative power in matching to offset the extraction errors.

Chapter 4

Extended Feature Matching

4.1 Introduction

In Chapter 3, we developed algorithms to extract extended fingerprint features, including ridge skeletons, pores, dots and incipients. In this chapter, matching algorithms are developed to compare these extended features. A fusion framework to systematically integrate the matching components of each feature type is also proposed. This framework is designed in a hierarchical fashion to achieve both accuracy and efficiency. Our experiments demonstrate that the proposed extended feature matching can be successfully integrated into automatic systems, and show performance improvements from extended features in both full (MSU-full) and partial (MSU-partial) live-scan matching as well as latent (NIST-27) matching.

4.2 Previous Work

Automatic extended feature matching has previously been studied for ridges and pores. In ridge matching, Feng et al. [58] proposed an algorithm to match ridges in local neighborhoods of minutiae. Ross et al. [111] proposed to construct an average distortion model based on ridges associated with minutiae, which can be applied to templates during matching. Both of these methods, however, still rely on minutiae for establishing an alignment. As a result, the benefits of ridge matching are not fully exploited when minutiae-based alignment fails. In pore matching, Stosz et al. [120] proposed to match pores in local segments of pre-aligned fingerprints. Kryszczuk et al. [90] conducted a similar study to demonstrate the benefits of pore matching on partial fingerprints. Vatsa et al. proposed a fusion method based on the Dezert-Smarandache theory to combine scores generated using Jain's minutiae matching algorithm [79] and Kryszczuk's pore matching algorithm [126].

4.3 Proposed Approach

We propose extended feature matching algorithms for ridge skeletons, pores and dots/incipients. Compared to previous work, our approach differs in the following ways: i) we utilize ridges together with minutiae for establishing alignment; ii) we estimate distortion based on ridges and apply distortion correction to all the extended features (e.g., ridges, pores, incipients) during matching; iii) we employ state-of-the-art minutiae matching and integrate it with the proposed extended feature matching using various fusion techniques; iv) our experiments are performed on both live-scan and latent databases. The effects of image size and image quality on the matching performance are also studied.
We believe that incorporating extended features in alignment and distortion correction is consistent with the manual matching procedure employed by forensic experts and will ultimately improve the accuracy and efficiency of current minutiae-based matching systems.

4.3.1 Ridge Matching

One of the major challenges in ridge matching is to align ridges in the presence of distortion. As shown in Figure 4.1, ridges can be distorted due to the elasticity of the skin and the pressure applied during acquisition. To address this problem, we propose to align ridges in an incremental fashion, during which distortion is estimated and corrected, a procedure we call "ridge propagation." The inspiration for this ridge matching procedure comes from the training exercises of manual fingerprint matching, where experts iterate through each focal point (e.g., a minutia) in the query, find its correspondence in the template by comparing local ridge information, and propagate to neighboring ridges and minutiae until a decision can be made [36].

Figure 4.1: Difficulty in aligning ridge skeletons in two fingerprints of the same finger due to non-linear distortion.

Note that the proposed ridge matching requires an initial alignment to start with. For computational efficiency, we obtain such an alignment based on minutiae [79]. That is, only one initial alignment is used if more than 12 minutiae correspondences can be obtained; otherwise, up to 5 different alignments that maximize the number of matching minutiae are generated. Note that each initial alignment is represented by a pair of pivot minutiae from the template and the query, respectively. Figure 4.2 illustrates the general procedure of the proposed ridge matching.

Figure 4.2: Illustration of the proposed ridge matching. The procedure includes initial alignment based on pivot minutiae, refined alignment after ridge propagation, and final match score calculation. Notice how ridge propagation enables the correction of distortion at the fingertip between the two fingerprints.

Ridge Propagation

Given an initial alignment, ridge propagation starts by comparing the ridge pair, and their neighbors, associated with the pivot minutiae pair. Once these ridges are matched, the distortion is estimated based on the corresponding ridge points and is applied to all the ridges in the query. This process continues to grow by matching neighbors of the matched ridges until all the ridges in the overlap region have been examined. An example of ridge propagation in matching a partial fingerprint to its mated template is shown in Figure 4.3.

Let $Q = \{q\}$ and $T = \{t\}$ be the minutiae sets and $U = \{u\}$ and $V = \{v\}$ be the indices of ridges in the template and query, respectively. To implement the ridge propagation algorithm, we utilize two FIFO (First In, First Out) queues: $P$, to record the indices of matched ridge pairs, and $L$, to keep track of the indices of all the to-be-matched ridges that are neighbors of those in $P$. When two ridges are matched, point correspondences between the two ridges are established and saved in $C$, which is then used to estimate the distortion. The ridge propagation procedure can be summarized in the following steps (a sketch of the queue bookkeeping follows the steps):
1. For each initial alignment, represented by a pivot minutiae pair $(q_i, t_j)$, where $q_i$ is the $i$th minutia from the query and $t_j$ is the $j$th minutia from the template:

   (a) Initialize the ridge candidate list $L$, the matched ridge pair list $P$, and the ridge point correspondence list $C$.

   (b) Add the ridges $(u, v)$ associated with the pivot minutiae, and their neighboring ridges, to $L$.

   (c) While $L \cup P \subset U \cup V$:

       i. Retrieve all ridges from $L$, calculate the distances between ridges from the two prints, and establish correspondences (details are presented later). Add matched ridge points to $C$, move matched ridge pairs from $L$ to $P$, and add their neighbors that are not in $P$ to $L$.

       ii. Estimate the non-linear distortion based on the corresponding ridge points in $C$ (details are presented later). If the energy of the distortion is smaller than a threshold $T_d$, apply it to all the features (ridge skeletons, dots, pores) in the query fingerprint; otherwise, go back to step 1.

   (d) Calculate the match distance based on the matched ridges in $P$ and the non-matched ridges in $L$ for the given initial alignment.

2. Select the alignment (with distortion corrected) that gives the smallest ridge match distance.

Figure 4.3: Ridge propagation: (a) a partial fingerprint and its template are pre-aligned using pivot minutiae (arrows); (b) matching the neighboring ridges associated with the pivot minutiae; (c)-(f) iterative ridge propagation. Note how distortion is corrected after each propagation step.
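The queue bookkeeping in the above procedure can be sketched as follows (Python; the helper callables match_ridges, estimate_distortion, apply_distortion and neighbors are hypothetical stand-ins for the components detailed in the next three subsections, so this shows only the control flow for a single initial alignment, not the full matcher).

    from collections import deque

    def propagate(pivot_pair, neighbors, match_ridges,
                  estimate_distortion, apply_distortion, t_d):
        L = deque([pivot_pair])      # FIFO of to-be-matched ridge pairs
        P, C = [], []                # matched pairs / point correspondences
        seen = {pivot_pair}
        while L:
            u, v = L.popleft()
            points = match_ridges(u, v)       # None if the pair disagrees
            if points is None:
                continue
            C.extend(points)
            P.append((u, v))
            warp, energy = estimate_distortion(C)
            if energy > t_d:                  # implausibly large distortion:
                return None                   # reject this initial alignment
            apply_distortion(warp)            # warp all query features
            for pair in neighbors(u, v):      # propagate to the neighbors
                if pair not in seen:
                    seen.add(pair)
                    L.append(pair)
        return P, C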
Ridge Correspondences

During ridge propagation, when two ridges $(u, v)$ are matched, all their neighbors are retrieved for comparison, resulting in an $a \times b$ distance matrix, where $a$ and $b$ are the numbers of neighbors of $u$ and $v$, respectively. To establish correspondences among these ridge neighbors, we estimate their matching distances and model the distance matrix as a cost matrix, in which we want to obtain a one-to-one assignment while minimizing the total cost (sum of ridge distances) $C$. This optimization problem can be solved using the Hungarian algorithm [103]. That is, given $a$ workers and $b$ tasks, let $L$ be the cost matrix

$$L = \begin{bmatrix} l_{11} & l_{12} & \cdots & l_{1b} \\ l_{21} & l_{22} & \cdots & l_{2b} \\ \vdots & \vdots & \ddots & \vdots \\ l_{a1} & l_{a2} & \cdots & l_{ab} \end{bmatrix}, \quad (4.1)$$

where $l_{ij}$ is the cost of the $i$th worker performing the $j$th task; in our case, $l_{ij} = d(i, j)$ is the distance between ridge $i$ and ridge $j$. The Hungarian algorithm then finds the optimal one-to-one assignment that minimizes the total cost $C$ in polynomial time. The steps of the algorithm are as follows:

1. Disable assignments that should be forbidden. For example, if the distance between two ridges is too large ($> 2T_r$, where $T_r$ is the average ridge period), then this distance is more likely caused by a non-match than by distortion. We set such distances to $\infty$.

2. Remove any column or row of $L$ whose distances are all $\infty$.

3. For each row, subtract its smallest element from all its elements. Then, for each column, subtract its smallest element from all its elements. The resulting matrix is called the reduced cost matrix.

4. Select a zero element from each row of the reduced cost matrix, if any, and cross out all the elements in its corresponding row and column.

5. If every row is assigned a zero element, the assignment is complete; otherwise, for each row that has not been assigned, mark all columns that have zeros in that row and all rows that have been assigned in those columns. Repeat this until a closed loop is obtained.

6. Find the smallest element in the unmarked area and subtract it from all unmarked elements. Mark the column and the row of this element. Repeat this until all assignments are obtained.

Note that this algorithm generates at most $\min(a, b)$ one-to-one correspondences. Finally, ridge pairs found to match by the Hungarian algorithm are moved from $L$ to $P$, and their neighbors that are not in $P$ are added to $L$.

Ridge Distance Calculation

To calculate the distance between two ridges, we use an algorithm similar to the one proposed for finding ridge neighbors in Section 3.3.3. Let $u = \{u(i)\}, i = 1, 2, \dots, n_u$ be a ridge of length $n_u$ from the query, and $v = \{v(j)\}, j = 1, 2, \dots, n_v$ be a ridge of length $n_v$ from the template. Let the closest points between the two ridges be $u(n)$ and $v(m)$. The distance between $u$ and $v$ is then defined as

$$d(u, v) = \frac{1}{k_1 + k_2} \sum \|u(i) - v(j)\|, \quad i = n - k_1, \dots, n + k_2,\ j = m - k_1, \dots, m + k_2, \quad (4.2)$$

where

$$k_1 = \min(n - 1,\ m - 1), \quad (4.3)$$

$$k_2 = \min(n_u - n,\ n_v - m). \quad (4.4)$$

If $u$ and $v$ are matched during ridge propagation, their corresponding ridge points, denoted $\{u(i), i = n - k_1, \dots, n + k_2\}$ and $\{v(j), j = m - k_1, \dots, m + k_2\}$, respectively, are added to $C$. Figure 4.4 illustrates the ridge distance calculated using Feng's method [58] and the proposed method. As we can see, the proposed method is more accurate, as it averages the distances over all corresponding ridge point pairs.

Figure 4.4: Ridge matching based on (a) Feng's method [58] and (b) the proposed method. Feng's method only compares the lengths of the corresponding segments between two ridge skeletons, while the proposed method averages the distances between all corresponding ridge point pairs.

Distortion Estimation

During ridge propagation, distortion is estimated whenever the point correspondence list $C$ is updated, namely, when new ridge correspondences are established. By continuously measuring distortion and applying distortion correction to the template, ridge matching becomes more robust to distortion and propagates more effectively.

Let $\mathbf{U} = (u_1, u_2, \dots, u_l)^T$ and $\mathbf{V} = (v_1, v_2, \dots, v_l)^T$ be all the corresponding ridge points stored in the list $C$ from the template and query, respectively, where $(u_i, v_i) \in \mathbb{R}^2$ denote the spatial coordinates of the $i$th corresponding pair and $l$ is the total number of ridge point correspondences. The distortion function $F$ between $\mathbf{U}$ and $\mathbf{V}$ can be modeled using Thin-Plate Splines (TPS) [40], defined as

$$F(U_i) = c + A U_i + W^T S(U_i) = V_i, \quad (4.5)$$

where the parameters $c$ and $A$ represent an affine transform and $W$ defines the non-linear distortion. The distance measure $S(U)$, defined as a vector $(\sigma(U - u_1), \sigma(U - u_2), \dots, \sigma(U - u_l))^T$, where

$$\sigma(U) = \|U\|^2 \log \|U\|, \quad (4.6)$$

is also called the radial basis function. To simulate the physical properties of thin plates, the objective of TPS is to minimize the following function:

$$\sum_{i=1}^{l} (V_i - F(U_i))^T (V_i - F(U_i)) + \lambda J(F), \quad (4.7)$$

where

$$J(F) = \iint_{\mathbb{R}^2} \left\{ \left( \frac{\partial^2 F}{\partial x^2} \right)^2 + 2 \left( \frac{\partial^2 F}{\partial x \partial y} \right)^2 + \left( \frac{\partial^2 F}{\partial y^2} \right)^2 \right\} dx\, dy$$

represents the bending energy of the spline and $\lambda$ is the smoothing parameter that controls the smoothness of the spline. As $\lambda$ increases, the resulting spline becomes more robust to correspondence errors.
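For reference, fitting the smoothed TPS of Equations (4.5)-(4.7) reduces to a single linear solve. The sketch below (Python with NumPy; an illustrative re-derivation, not the thesis code, which also implements the tension variant introduced next) solves for the affine part $(c, A)$ and the non-linear weights $W$ from the control points, and returns the fitted mapping along with the bending energy.

    import numpy as np

    def tps_sigma(r):
        # Radial basis sigma(r) = r^2 log r, with sigma(0) = 0 (Eq. 4.6).
        out = np.zeros_like(r)
        nz = r > 0
        out[nz] = r[nz] ** 2 * np.log(r[nz])
        return out

    def fit_tps(U, V, lam=0.1):
        # U, V: (l, 2) arrays of corresponding points; lam is the
        # smoothing parameter lambda of Eq. 4.7.
        U, V = np.asarray(U, float), np.asarray(V, float)
        l = len(U)
        K = tps_sigma(np.linalg.norm(U[:, None] - U[None, :], axis=2))
        Pmat = np.hstack([np.ones((l, 1)), U])      # affine terms [1, x, y]
        M = np.zeros((l + 3, l + 3))
        M[:l, :l] = K + lam * np.eye(l)             # smoothing on the diagonal
        M[:l, l:] = Pmat
        M[l:, :l] = Pmat.T
        b = np.vstack([V, np.zeros((3, 2))])
        sol = np.linalg.solve(M, b)
        W, affine = sol[:l], sol[l:]                # W and (c, A) of Eq. 4.5
        bending = np.trace(W.T @ K @ W)             # energy, cf. Eq. 4.11
        def transform(X):
            X = np.asarray(X, float)
            S = tps_sigma(np.linalg.norm(X[:, None] - U[None, :], axis=2))
            return S @ W + np.hstack([np.ones((len(X), 1)), X]) @ affine
        return transform, bending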
Unfortunately, the bending energy of TPS contains only second-order derivatives, which control the rigidity of the spline (or the stiffness of the surface). This often results in overshooting, which appears in regions with rapid changes of gradient as a consequence of simulating the physical properties of a thin plate. In order to reduce the overshooting problem, we introduce a tension model [60] by adding first-order derivatives to the objective function, simulating the elasticity of the skin. This is given as

$$\sum_{i=1}^{l} (V_i - F(U_i))^T (V_i - F(U_i)) + \lambda J(F)^*, \quad (4.8)$$

where

$$J(F)^* = \iint_{\mathbb{R}^2} \left\{ \left( \frac{\partial^2 F}{\partial x^2} \right)^2 + 2 \left( \frac{\partial^2 F}{\partial x \partial y} \right)^2 + \left( \frac{\partial^2 F}{\partial y^2} \right)^2 \right\} dx\, dy + \tau^2 \iint_{\mathbb{R}^2} \left\{ \left( \frac{\partial F}{\partial x} \right)^2 + \left( \frac{\partial F}{\partial y} \right)^2 \right\} dx\, dy. \quad (4.9)$$

The parameter $\tau$ is the tension parameter that controls the degree of the tension (elasticity) effect. If $\tau$ is set to 0, the objective function is the same as that of traditional TPS. As the value of $\tau$ increases, the stiffness of the plate is reduced, so that the interpolated surface resembles the shape of a membrane passing through the control points (see Figure 4.5 and Figure 4.6). Accordingly, the radial basis function of TPS with the tension model is defined as [41]:

$$\sigma^*(u) = \frac{1}{2\tau^3} \left( e^{-\tau \|u\|} + \tau \|u\| \right). \quad (4.10)$$

Compared to TPS, the TPS with tension model is more robust to correspondence errors, resulting in fewer overshoots and more realistic skin distortion estimation and ridge alignment, as shown in Figure 4.7. The implementation of TPS with tension is described in Appendix A.

Figure 4.5: The overshooting problem of the TPS can be reduced by introducing tension effects. The tension parameter in each plot is (a) τ = 0.1, (b) τ = 50 and (c) τ = 1000, respectively (see Equation 4.9).

Figure 4.6: The behavior of the interpolated displacement surface between two fingerprints using (a) a TPS model and (b) a TPS with tension model. The tension effect increases the elasticity of the TPS model. The points marked with red crosses are control points (points with known correspondences), and the blue bar at each control point represents the amount of displacement between the template and the query. The higher the bar, the larger the displacement.

Finally, we can evaluate the amount of distortion based on the bending energy of the TPS with tension model, expressed as

$$E = \mathrm{trace}(W^T S W), \quad (4.11)$$

where $W$ and $S$ are the non-linear distortion parameters and the distance matrix (see Equation 4.5), respectively. If the bending energy $E$ is larger than a threshold $T_e$, the ridge alignment process is terminated for the given initial alignment.

Ridge Match Score

When ridge propagation is completed, the matched ridge list $P$ contains all the matched ridge pairs, and the ridge candidate list $L$ contains all the ridges that are left unmatched. Let $\{u'\}$ and $\{v'\}$ be the sets of unmatched ridges in $L$ that belong to the template and query, respectively. Distances among these ridges are also computed using the Hungarian algorithm, except that assignments with large cost (distance) are no longer forbidden, since the objective now is to calculate the total cost (distance) rather than to establish ridge correspondences.
Overall, the ridge match score for a given initial alignment is calculated as

$$S = -\,\frac{\sum_{(u,v) \in P} d(u, v) + \sum_{(u',v') \in L} d(u', v')}{\min(N_U, N_V)}, \quad (4.12)$$

where $N_U$ and $N_V$ are the total numbers of ridges in the overlap region of the template and query, respectively. For matched ridge pairs $(u, v)$ in $P$, the ridge distances are guaranteed to be less than $2T_r$, while ridge pairs $(u', v')$ not in $P$ have distances larger than $2T_r$. This means that the more ridge correspondences found in $P$, the higher the match score. The negative sign in Equation 4.12 simply maps the dissimilarity measure (ridge distance) to a similarity measure (match score). Given $k$ ($k \le 5$) initial alignments, the final ridge match score is defined as the maximum among all $k$ scores:

$$S_r = \max_{i = 1, \dots, k} S^{(i)}. \quad (4.13)$$

The alignment and distortion correction associated with the maximum ridge match score are applied to the pores and dots/incipients before they are matched.

Figure 4.7: Aligning a partial fingerprint to its template based on (a) minutiae only; (b) minutiae and ridges without distortion correction; (c) minutiae and ridges with distortion correction using TPS; and (d) minutiae and ridges with distortion correction using TPS with tension.

4.3.2 Pore Matching

According to [104], forensic experts tend to declare a match between two pores from two aligned fingerprints if they are less than about 10 (5) pixels apart at 1000 (500) ppi. We use this criterion to count the number of matched pores $N_a$ on the corresponding ridges established during ridge matching. Recall that the ridge ownership of each pore was extracted in Section 3.3.4. For non-corresponding ridges, we also try to match the pores belonging to these ridges based on the same 10 (5) pixel criterion, and denote the number of matches $N_b$. Note that pores matched between corresponding ridges are more reliable and suffer less distortion than those matched between non-corresponding ridges. Therefore, the pore match score is defined as a weighted sum given by

$$S_p = \frac{\alpha \times N_a + (1 - \alpha) \times N_b}{N_p + N_q}, \quad (4.14)$$

where $N_p$ and $N_q$ are the numbers of pores extracted in the overlap region of the template and query, respectively, and $\alpha$ (= 0.65) is a parameter used to give more confidence to pores matched between corresponding ridges. The range of $S_p$ is [0, 1].

4.3.3 Dot and Incipient Matching

Let $N_g$ and $N_h$ be the numbers of dots/incipients extracted from the template and query, respectively. Two dots/incipients $d_a = \{d_{ai}, \theta_a\}, i = 1, 2, \dots, n_a$, and $d_b = \{d_{bj}, \theta_b\}, j = 1, 2, \dots, n_b$, are matched based on the following criterion:

$$M(a, b) = \begin{cases} 1, & \text{if } |\theta_a - \theta_b| < \frac{\pi}{4} \text{ and } \min_{i,j} \|d_{ai}, d_{bj}\| < T_h, \\ 0, & \text{otherwise}, \end{cases}$$

where $n_a$ and $n_b$ are the sizes (numbers of pixels) of dots/incipients $a$ and $b$, respectively, and $T_h$ is 16 (8) pixels at 1000 (500) ppi. $M(a, b)$ is an indicator variable denoting a match (= 1) or a non-match (= 0) between dots/incipients $a$ and $b$. Note that this criterion is still applicable even if only the centroids of the dots and incipients are extracted. Finally, the match score of dots/incipients is defined as

$$S_d = \frac{\sum_{a=1,\dots,N_g}\ \sum_{b=1,\dots,N_h} M(a, b)\,(n_a + n_b)}{\sum_{a=1}^{N_g} n_a + \sum_{b=1}^{N_h} n_b}.$$

Generally, the range of $S_d$ is within [0, 1]. However, $S_d$ may be larger than one if an incipient is matched to multiple dots.
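A minimal sketch of the two scores just defined is given below (Python with NumPy; the feature containers and the angular tolerance are illustrative assumptions). Pores are assumed to be pre-aligned and already counted into $N_a$ and $N_b$; dots/incipients are given as (skeleton, orientation) pairs.

    import numpy as np

    def pore_score(n_a, n_b, n_p, n_q, alpha=0.65):
        # Weighted pore score (Eq. 4.14): n_a matches on corresponding
        # ridges, n_b on non-corresponding ridges.
        return (alpha * n_a + (1.0 - alpha) * n_b) / (n_p + n_q)

    def dots_match(skel_a, theta_a, skel_b, theta_b, t_h=8.0,
                   angle_tol=np.pi / 4):
        # Indicator M(a, b); skeletons are (n, 2) arrays of pixel positions.
        d_theta = abs((theta_a - theta_b + np.pi / 2) % np.pi - np.pi / 2)
        if d_theta >= angle_tol:
            return 0
        d = np.linalg.norm(skel_a[:, None] - skel_b[None, :], axis=2)
        return int(d.min() < t_h)

    def dot_score(dots_t, dots_q):
        # dots_t, dots_q: lists of (skeleton, theta) for template and query.
        total = (sum(len(s) for s, _ in dots_t) +
                 sum(len(s) for s, _ in dots_q))
        matched = 0
        for sa, ta in dots_t:
            for sb, tb in dots_q:
                if dots_match(sa, ta, sb, tb):
                    matched += len(sa) + len(sb)
        return matched / total if total else 0.0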
4.3.4 Fusion

We have discussed how to obtain individual match scores for the various extended features (ridge skeletons, pores and dots/incipients). In order to systematically integrate these results, we propose a hierarchical framework that fuses the matching results at different levels, as illustrated in Figure 4.8. At Level 2, both minutiae and ridges are utilized to establish alignment and to compute the match scores $S_m$ and $S_r$. If both scores are sufficiently high ($S_m > T_m$ and $S_r > T_r$), a match decision is made without Level 3 matching; otherwise, matching proceeds to the next level, where the match scores of pores ($S_p$) and dots/incipients ($S_d$) are computed. Finally, all match scores are integrated using one of the following fusion techniques. Note that this integration framework is highly efficient, as it avoids matching Level 3 features when the Level 2 features are in agreement.

Figure 4.8: The proposed integration framework. Minutiae and extended feature (ridge, pore and dot/incipient) matching are integrated in a hierarchical fashion.

4.3.5 Score-Level Fusion

The general methodology of score-level fusion can be divided into three categories [113]: i) transformation-based fusion, where the match scores are first normalized (transformed) to a common domain and then appropriately weighted and combined; ii) classifier-based fusion, where the scores from multiple matchers are treated as features and a classifier is used to determine whether a given set of match scores belongs to a genuine user or an impostor; iii) density-based fusion, where the densities of the match scores are estimated and fusion is conducted based on the likelihood ratio of the score distributions. In our matching experiments, one method representing each of the three methodologies is evaluated and compared, namely sum-rule fusion, decision-tree fusion and likelihood-ratio fusion. Because these fusion methods require training data for parameter estimation or classification, we perform ten-fold cross validation in all our tests by dividing the match scores into ten bins, using nine bins for training and one for testing, and repeating this process ten times before combining the matching results.

Sum-rule Fusion

Sum-rule fusion is a transformation-based fusion method. As one of the most effective and efficient fusion methods, sum-rule fusion combines the match scores provided by different matchers using a weighted sum. Because the combination is meaningful only when the scores of the different matchers are compatible [113], score normalization is often needed to first transform the match scores obtained from the different matchers into a common domain. Min-max and z-score normalization are the most popular score normalization methods. We use min-max normalization in our experiments because the impostor match scores often deviate from the Gaussian distribution assumption required by z-score normalization [81]. Let $S_j^i$ denote the $i$th ($i = 1, \dots, N$) match score output by the $j$th ($j = 1, \dots, M$) matcher, and let $\{S_j^k, k \in TS\}$ denote the set of match scores available in the training set $TS$. The min-max normalized score $NS_j^i$ for the test score $S_j^i$ is defined as

$$NS_j^i = \frac{S_j^i - \min_{k \in TS} S_j^k}{\max_{k \in TS} S_j^k - \min_{k \in TS} S_j^k}. \quad (4.15)$$

This normalization retains the score distribution up to a scaling factor that transforms the scores into the range [0, 1]. Finally, the normalized scores from the matchers are combined using a weighted sum, given as

$$FS^i = \sum_{j=1}^{M} w_j \times NS_j^i, \quad (4.16)$$

where $w_j$ ($\sum_{j=1}^{M} w_j = 1$) denotes the weight of the $j$th matcher. The weights are obtained by minimizing the weighted error rate $\sum_{j=1}^{M} w_j E_j$ on the training data $TS$, where $E_j$ is measured as the sum of the false reject rates (FRR) of the $j$th matcher at fixed false accept rates (e.g., FAR = $10^{-4}$, $5 \times 10^{-4}$, $10^{-3}$, $5 \times 10^{-3}$).
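The following sketch (Python with NumPy; illustrative, with the weights assumed to be already chosen by the training procedure described above) implements the min-max normalization of Equation (4.15) followed by the weighted sum of Equation (4.16).

    import numpy as np

    def min_max_normalize(test_scores, train_scores):
        # Normalize one matcher's scores using the training range (Eq. 4.15).
        lo, hi = np.min(train_scores), np.max(train_scores)
        return (np.asarray(test_scores, float) - lo) / (hi - lo)

    def sum_rule_fuse(test_matrix, train_matrix, weights):
        # test_matrix, train_matrix: (n_samples, M) score matrices;
        # weights: length-M vector summing to one (Eq. 4.16).
        cols = [min_max_normalize(test_matrix[:, j], train_matrix[:, j])
                for j in range(test_matrix.shape[1])]
        return np.stack(cols, axis=1) @ np.asarray(weights, float)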
Note that the weights i=1 M are obtained by minimizing the weighted error rate 2 ijj on the training data i=1 TS, where Ej is measured as the sum of false reject rates (FRR) for the jth matcher 97 at fixed false accept rates (e.g., FAR =10—4, 5 x 10‘4, 10‘3, 5 x 10-3). Decision-tree Fusion Decision-tree fusion is a classifier-based fusion method. Unlike the sum—rule fusion, this method is nonlinear as it divides the scores into a nested set of partitions and fits linear models in each partition to fuse the scores. In our experiment, we incorporate a log-linear probability model [85] in our decision tree. That is, there are two decision levels in the tree, each level is responsible of making a classification decision, and the decision at the second level is conditioned on the decision at the first level (see Figure 4.9). The terminal nodes are different linear classifiers while the nonterminal nodes combine the individual classification outputs using a weighted sum where weights are derived from the performance of each classifier. 6 a a @ Z11 Z12 21M 221 222 22M 2M1 ZM2 ZMM Figure 4.9: Decision-tree fusion using a two-level tree structure. M is the number of matchers. Let S,- = {S3-,i 6 TS, j = 1, ..., M} be the match score vectors from M different matchers with the true class label y, (match or nonmatch). During training, the score vectors are fed to the decision tree and the Expectation Maximization (EM) [54] algorithm is employed to iteratively estimate the parameters Omn of each classifier 98 by maximizing the likelihood function 16(0, 5,) = 2: 1n 2 P(zm|S,-) Z P(zmn[S,-, zm)P(y,-, 3,, zm, zmn), (4.17) i m n where 2m (m = 1, 2) are the indicator variables for the nonterminal nodes while zmn (m,n = 1,2) represent terminal nodes (see Figure 4.9). During testing, the probability of classifying a score vector St = {St-,t = 1, ..., N,j = 1, ..., M} to class yt is a mixture of the probabilities of generating class label yt from each of the classifiers, given by Pause) = Z P(Zm]3t) [Pamela Zm)P(3/tl3ti zm, ems), (4.18) m n where T egmnSt P(ytlSt,Zm, Zmn» 1' ———T__' (4.19) Zn eomnSt Likelihood-ratio Fusion Likelihood-ratio fusion is a density-based fusion method. According to the Neyman- Pearson theorem, at a given false accept rate (FAR), the optimal test for accept or reject a match 3 is the likelihood-ratio test, given by f(slgen[ 2(2): 1’ Whenf(slimp)2" f slgen) O, whenm < 77 where f (slgen) and f (slimp) are the density functions for genuine and impostor (4.20) scores, respectively, and 17 is the decision threshold. Let fj(s|gen) and fj(s|imp) be the density of the genuine and impostor scores of the jth (j = 1, ..., M) matcher. Assuming all M matchers are independent, the joint density of the M match scores is the product of the M marginal densities. As a result, the combined likelihood ratio 99 (or fused scores) can be written as fj_(__ ()5 lgen) st— _ H 1f3'( (~73 _—|—jimp)' (4.21) Because genuine and impostor densities are often unknown, we need to estimate them from the training data. Let K be the number of training data, and f, (s) be the density function at score 3 of the jth matcher. The empirical distribution function Fj(s) can be computed based on the training data [98], given as: K 1 = R 19:1 [(3]? g s}, (4.22) where I {556 S s} = 1 if 85-" S s, and = 0, otherwise. For values of s not contained in the training data, F, (s) is obtained by linear interpolation. 
4.3.6 Rank-Level Fusion

In an identification scenario, the output of a matcher is the rank of the template rather than a match score. As a result, a rank-level fusion method is needed to fuse the identification results. We implement the highest rank fusion method [70], which is known to work well because it utilizes the strength of each matcher effectively, even if only one matcher assigns a high rank to the correct user. This method assumes that the number of users is large compared to the number of matchers, which is the case in our experiments.

Let $R_j^{i,k}$ be the rank of a match between users $i$ and $k$ using matcher $j$ ($j = 1, \dots, M$). The fused rank is defined as

$$R^{i,k} = \min_{j = 1, \dots, M} R_j^{i,k}. \quad (4.23)$$

Notice that this may result in up to $M$ ties, due to the consolidation of the decisions output by the $M$ matchers; these ties need to be broken randomly to reach a strict ranking order. As a result, the accuracy from rank 1 to $M - 1$ of the highest rank method is usually lower than that of the best individual matcher [97].
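Highest-rank fusion itself is essentially a per-user minimum plus tie-breaking; a sketch (Python with NumPy; names are illustrative) follows.

    import numpy as np

    def highest_rank_fusion(ranks, seed=0):
        # ranks: (n_users, M) matrix; ranks[i, j] is the rank matcher j
        # assigns to user i. Returns a strict fused ranking (1 = best).
        rng = np.random.default_rng(seed)
        best = ranks.min(axis=1).astype(float)           # Eq. 4.23
        best += rng.uniform(0.0, 1e-6, size=best.shape)  # random tie-breaking
        order = np.argsort(best)
        fused = np.empty(len(best), dtype=int)
        fused[order] = np.arange(1, len(best) + 1)
        return fused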
4.4 Experiments

Experiments to evaluate the performance gain obtained by utilizing extended features are carried out on the three databases described in Section 3.4, namely MSU-full, MSU-partial and NIST Special Database 27 (NIST-27) [13] (see Figure 3.16 and Table 3.2). The extended features in these databases are either automatically extracted (MSU-full, MSU-partial and the rolled prints in NIST-27) using the algorithms proposed in Chapter 3, or manually extracted (the latents in NIST-27) by forensic experts. Matching of extended features is conducted automatically using the algorithms proposed in this chapter. To provide a valid comparison with state-of-the-art minutiae matching performance, we also evaluate the minutiae-based Neurotechnology matcher (Verifinger 4.2 [12]), which ranked 1st on "average zero FMR" in FVC2006 [7]. Note, however, that this matcher performs much better on its proprietary minutiae-based templates than on standard templates (where only minutiae positions and directions are available). As a result, we use our in-house minutiae-based matcher on the NIST-27 database (where only minutiae positions and directions are manually extracted) to achieve the optimal minutiae-based matching accuracy [80].

In the first experiment, we evaluate the verification performance of integrating extended features with minutiae on the MSU-full database (410 fingers, 4 impressions per finger, 1000 ppi). Each impression is compared to all other impressions of the same finger, resulting in 2,460 genuine comparisons; the first impression of each finger is compared to the first impression of all other fingers, resulting in 83,845 impostor comparisons. As shown in Figure 4.10 (a), the fusion of each extended feature with minutiae using sum-rule fusion improves the matching performance. Although the minutiae matching performance is already high, fusion with extended features still improves the GAR from 97% to 98.5% at FAR = 0.01%. This is equivalent to a 50% reduction in FRR (from 3% to 1.5%) at FAR = 0.01%. Figure 4.10 (b) shows the fusion of all the proposed extended features with minutiae using the three different fusion schemes, namely sum-rule fusion, likelihood-ratio (LR) fusion and decision-tree fusion.

Figure 4.10: Verification ROCs on the MSU-full database (flat-to-flat matching at 1000 ppi) using (a) sum-rule fusion; (b) three fusion rules on four different scores: $S_m$ (minutiae matching), $S_r$ (ridge matching), $S_p$ (pore matching) and $S_d$ (dot/incipient matching). Results are combined using ten-fold cross validation.

Figure 4.11: Verification ROCs on the MSU-partial database (partial-to-flat matching at 1000 ppi) using (a) sum-rule fusion; (b) three fusion rules on four different scores: $S_m$ (minutiae matching), $S_r$ (ridge matching), $S_p$ (pore matching) and $S_d$ (dot/incipient matching). Results are combined using ten-fold cross validation.

Figure 4.12: An example of matching a partial print (blue) to a flat (black) based on (a) minutiae; (b) ridges; (c) pores. The normalized match scores based on each of these features are $S_m = 0.015$, $S_r = 0.79$ and $S_p = 0.56$, respectively, and the final fused score is 0.334.

In the second experiment, we evaluate the verification performance of integrating extended features with minutiae on the MSU-partial database (300 fingers, 1 full and 5 partial impressions per finger, 1000 ppi). Each partial print is compared to the flat print of the same finger, resulting in 1,500 genuine comparisons; the first partial print of each finger is compared to the flat prints of all other fingers, resulting in 88,350 impostor comparisons. As seen in Figure 4.11 (a), because the query fingerprints are partial, the minutiae matching performance drops significantly compared to the MSU-full database (the GAR decreases from 97% to 80% at FAR = 0.01%). When extended features are utilized, however, the GAR improves from 80% to 85% at the operating point FAR = 0.01% (see Figure 4.11 (a)). This is equivalent to a 25% reduction in FRR (from 20% to 15%) at FAR = 0.01%. Figure 4.11 (b) shows the fusion of all the proposed extended features with minutiae using the three different fusion schemes. Figure 4.12 shows how the genuine match between a partial and a flat print can be improved by integrating minutiae with ridges and pores.

Note that among the three score-level fusion methods, sum-rule fusion slightly outperforms the others on both the MSU-full and MSU-partial databases. This is not surprising, as sum-rule fusion is easier to optimize than training-based classifiers such as decision-tree fusion and likelihood-ratio fusion. Nevertheless, performance improvement was observed for all three fusion methods, suggesting the effectiveness of extended features in improving fingerprint verification performance.

Figure 4.13: CMC curve for identification using highest rank fusion on minutiae and extended features in NIST-27 (latent-to-rolled-print matching at 500 ppi).

Figure 4.14: An example of improving latent matching by using extended features: (a) a latent image and (b) the corresponding rolled print with extracted features (minutiae (red), ridges (blue), dots (green), pores (magenta)); (c) minutiae matching ($S_m = 0.76$, $R_m = 4$); (d) ridge matching ($S_r = 0.79$, $R_r = 1$) and dot/incipient matching ($S_d = 0.67$, $R_d = 2$). Using extended features helps retrieve the correct candidate for the latent at rank 1. Features in the latents were manually extracted by a forensic expert.

In the third experiment, we evaluate the identification performance of integrating extended features with minutiae on the NIST-27 database (258 fingers, 1 latent and 1 roll per finger, 500 ppi). In this database, each latent is compared to all rolled prints, and extended features are automatically extracted only in the rolled prints. Identification is performed by ranking the match scores of each feature and fusing the ranks using the highest-rank method. Note that this identification is based on a very small background database compared to those used in recent papers by Jain [78] and NIST [75]. Figure 4.13 shows the identification performance on the NIST-27 database. Note that at ranks 1 to 3, the fused identification rates are not very high due to the random tie-breaking in the highest-rank fusion [97]. At rank 4 (10), the rank-level fusion improves the identification rate from 88% (91%) to 90% (93.5%), respectively. This is equivalent to a 16.7% (27.8%) reduction in identification error rate at rank 4 (10). We also notice that ridge matching provides the largest performance improvement, while pore and dot/incipient matching fail to improve the performance. This is potentially because our pore and dot/incipient extraction algorithms do not perform well at 500 ppi; for example, only 67 (131) out of 258 latents were found to contain dots/incipients (pores), and the average number of dots/incipients (pores) manually extracted per latent is too small to be distinctive (only 0.6 (5.2)). Also note that when the minutiae or ridge match scores are sufficiently high, no pore or dot/incipient matching is necessary, as described in Section 4.3.4.
Figure 4.14 demonstrates an example of improving latent identification using ridges and dots/incipients.

In the fourth experiment, we study the performance gain of using extended features in an identification scenario across different image qualities. Forensic experts have manually classified the latents in NIST-27 into three quality bins (i.e., good, bad and ugly). We obtain the identification results by matching the latent prints in each quality bin against all rolled prints. As shown in Figure 4.15, the performance gain of integrating extended features with minutiae increases from 1.1% to 2.4% to 3.5% at rank 4 as the image quality degrades from good to bad to ugly. This result suggests that high quality fingerprint images typically contain a sufficient number of minutiae for accurate matching; it is the fingerprints of low quality that gain the most from extended feature matching.

Finally, we evaluate the performance gain of using extended features in an identification scenario across different sizes of latent fingerprints. Based on the size of the latent area, estimated using the segmentation algorithm proposed in Section 3.3.1 and manually verified, we divide the latents in NIST-27 into three fingerprint size bins (i.e., small, medium and large). Identification results obtained by matching the latents in each bin against all rolled prints are shown in Figure 4.16. As we can see, the performance gain of integrating extended features with minutiae increases from 1.1% to 1.2% to 4.7% at rank 4 as the size of the fingerprint area decreases from large to medium to small. This result suggests that fingerprints with a small area benefit more from extended feature matching than large fingerprints, which typically contain a sufficient number of minutiae for accurate matching.

Currently, all the proposed extended feature matching algorithms are implemented in MATLAB. The average times for ridge, pore and dot/incipient matching between a partial and a flat print are 12, 0.02 and 0.0001 seconds, respectively, on a 3 GHz Intel Core Duo machine with 3 GB of RAM.

4.5 Summary

Automatic fingerprint matching using extended features is an important but challenging problem. This is because, unlike minutiae, the matching of Level 3 features (e.g., pores and dots) can be greatly affected by errors in feature extraction, incorrect alignment and distortion. To address this, we propose a hierarchical matching system that utilizes Level 2 features, such as minutiae and ridges, to obtain alignment with distortion correction before Level 3 features (pores and dots/incipients) are matched. The individual feature matching results are integrated using score-level or rank-level fusion. Our experimental results demonstrate that extended features possess some discriminative power and, when combined with minutiae, can significantly improve the performance in both automatic and semi-automatic matching scenarios. More specifically, all the proposed extended features (ridge skeletons, pores and dots/incipients) are effective in improving fingerprint matching (partial, full) at 1000 ppi, while ridges show the most improvement in latent identification at 500 ppi. In addition, the performance gains are especially noticeable for low quality and small area live-scans and latents, suggesting that extended features provide complementary discriminative information to minutiae. These results strongly suggest that utilizing extended features for fingerprint matching is both practical and beneficial.
Figure 4.15: CMC curves for identification using minutiae and extended features across different image qualities, obtained by dividing the latents in NIST-27 into three quality bins, (a) good quality, (b) bad quality and (c) ugly quality, manually assigned by forensic experts.

Figure 4.16: CMC curves for identification using minutiae and extended features across different sizes of fingerprint area, obtained by dividing the latents in NIST-27 into three bins, (a) large size, (b) medium size and (c) small size, based on the automatically segmented foreground area.

Chapter 5

Individuality of Fingerprints

5.1 Introduction

For over one hundred years in the history of fingerprint recognition, fingerprints have generally been believed to be unique to each individual. As a result, the individuality of fingerprints was rarely, if ever, questioned. Recent court challenges, however, have brought into question the validity and reliability of fingerprint individuality, the fundamental issue of fingerprint recognition as a science [23, 22, 24, 30]. These challenges are based on, among other factors, the lack of (i) conclusive evidence to support the claim of fingerprint uniqueness, and (ii) scientific evaluation of the criteria used to determine a match between two fingerprints.

Fingerprint individuality can be formulated as the probability that any two fingerprints from different fingers will be "sufficiently" similar. Because similarity between fingerprints is often defined quantitatively based on the similarity of fingerprint features, this is equivalent to finding the probability of random correspondence (PRC), that is, the probability of matching $k$ features given that the two different fingerprints contain $m$ and $n$ features, respectively. This probability can be evaluated either empirically or theoretically. In the empirical case, representative sample fingerprints are collected or synthetically generated, and feature-based matching is conducted using a matcher (automatic or human).
5.2 Previous Work The fingerprint individuality problem can be approached from two directions. One is to model the fingerprint formation or synthetic generation process and create a large enough database for empirical analysis. The other is to model the distribution of fin- gerprint features and theoretically calculate the probability of their correspondences. 5.2.1 Individuality based on Formation Models Fingerprint individuality is directly related to the fingerprint formation process, as fingerprints do not generally change once formed, except damage caused by scarring, disease, etc. If the formation process can be mathematically simulated, it can be used to analyze fingerprint individuality. Kucken et a1. [91] proposed a mathematical model 113 for fingerprint formation. In their model, the epidermis and dermis were simulated by two layers of nonlinear springs, and ridge formations were observed as a result of compressive stress applied on the two layers. This model has been used to successfully generate synthetic fingerprints with different class patterns, including whorls, loops and arches [91]. Cappelli et al. proposed another fingerprint generation method, called SFINGE (Synthetic Fingerprint Generation) [44]. This method was based on a much simpler formation model, which produced ridge patterns by modeling the orientation field of a fingerprint. Minutiae were naturally formed due to ridge line disparity produced by local convergence / divergence of the orientation. It was demonstrated that “the parameters of the SFINGE can be tuned to properly emulate the variations in real fingerprints” [95]. Experiments also showed that synthetic fingerprints exhibit very similar performance compared to real fingerprints [94, 93, 7]. However, a more in-depth investigation of the fingerprint formation process and synthetic fingerprint generation is needed before the proposed models can be used for reliable analysis of fingerprint individuality. 5.2.2 Individuality based on Feature Models Fingerprint individuality can also be related to the occurrences of fingerprint fea- tures. If the distribution of fingerprint features can be modeled, it can be used to estimate fingerprint individuality. Currently, two main feature models have been pro- posed, namely, configuration—based models and probability of random correspondence (PRC)-based models. In configuration-based models, fingerprint individuality is estimated as the proba- bility of observing a particular configuration of features in a target p0pulation. Based on the assumptions of feature dependence, the configuration-based models can be fur- ther classified as feature-independent models, cell-independent models and feature- dependent models. 114 e Feature-independent models use a fixed probability p for each feature occurrence and assume independence among features. The individuality of a fingerprint with N features is then estimated as pN [69, 37, 128, 51, 68]. o Cell-independent models divide fingerprints into fix-sized cells and assume de- pendence of fingerprint features within cells and independence among cells. As a result, the individuality of a fingerprint is estimated as the product of the probability of observing the particular feature occurrences in each of the cells [115, 62, 101]. Note that in cell-independent models, the determination of cell size is essential. When cell size is small, features fall separately in different cells, and cell independence becomes equivalent to feature independence. 
When the cell size is large, however, the discriminative power of the model is weakened, as the spatial relationships among features are lost.

- Feature-dependent models assume dependency among fingerprint features by measuring the probability of observing a feature with respect to the other features in a fingerprint [124, 86, 119].

A table of probability estimates from the above configuration-based models can be found in [102]. Note that these models tend to assign higher individuality to fingerprints that possess a larger number of features. In reality, however, the individuality of a fingerprint also depends on the locations of the features in the image.

In PRC-based models, fingerprint individuality is measured by the probability of random correspondence (PRC), that is, the probability of matching k features in their spatial locations and other attributes, given the distributions of the m and n features in the template and query, respectively. To estimate the PRC, two types of generative models describing the distributions of fingerprint features have been proposed:

- Feature-independent models, which model fingerprint features as independent variables. For example, in Pankanti et al.'s model, minutiae are modeled as uniformly and independently distributed [102].

- Cluster-independent models, which take into consideration the clustering tendency of fingerprint features [86]. For example, in Zhu et al.'s model, minutiae in each fingerprint are clustered and each cluster is independently modeled, while minutiae within a cluster are assumed to be dependent [136, 137].

5.3 Proposed Model

We propose to estimate fingerprint individuality using cluster-independent models on both minutiae and extended features, including ridge (period and curvature) and pore (spacing) features (see Figure 5.1). These extended features are selected because they are sufficiently discriminative and invariant to distortion.

In general, we make the following assumptions in the proposed individuality models:

- Each fingerprint is represented by the following features: minutiae (position and direction) and the associated ridge period, ridge curvature, and pore spacing. Minutiae can be either endings or bifurcations; the two types are not distinguished from each other.

- Fingerprint features (minutiae and ridge features) tend to cluster instead of being uniformly distributed. Features are dependent within a cluster, but independent between different clusters. The clustering tendencies (non-uniformity) of fingerprint features reflect the class patterns of fingerprints; they have been observed in practice [86, 47] and statistically demonstrated in [136, 137].

- All fingerprint features are correctly extracted. This assumption is needed to ensure the correctness of the individuality model. In practice, however, this is not true, especially for extended feature extraction. Hence, it is important to keep in mind that extraction errors would result in unreliable individuality models and unreliable estimates of the PRC.

- All fingerprints are pre-aligned, and there is one and only one alignment between a pair of fingerprints. This assumption is needed to register fingerprints in a common coordinate system.

Note that our proposed model is an extension of Zhu et al.'s original minutiae-based cluster-independent model [136, 137].
Besides the incorporation of additional extended features, there are two major differences between our approach and Zhu's:

- Our model is fitted to the five major fingerprint classes, not to individual fingerprints. Researchers have shown dependence between the distributions of fingerprint features (e.g., minutiae, ridge flows) and finger pattern classes [86, 47]. Although Zhu also proposed to apply clustering to the finger-specific models to obtain an average model for each cluster (which showed strong correlations with fingerprint classes), her model is prone to overfitting, especially when the number of features in a fingerprint is small.

- Finger-specific models and their clustering are difficult to generalize, as they need to be regenerated whenever a new database or a newly observed match is considered. In contrast, our model can be applied without retraining. This is because our model is estimated for the five major classes of fingerprints; given the class information of a target population or an observed match, the PRC can be directly calculated.

5.3.1 Modeling Minutiae

Figure 5.1: Fingerprint features used in our individuality model: (a) two different pattern types; (b) minutiae, ridges and pores.

Given a large database of fingerprints, we classify each print into one of the five major classes or pattern types defined in the Henry classification system [69]: whorl, left loop, right loop, arch and tented arch. For each fingerprint class, the minutiae from all fingerprints belonging to that class are consolidated and their spatial distribution is estimated. Following the work of Zhu et al. [136], these minutiae are then clustered based on the distribution of their positions (X) and orientations (O) using the Expectation-Maximization (EM) algorithm [59]. In each cluster, the position X is modeled by a bivariate Gaussian distribution and the orientation O by a Von-Mises distribution.

Combining the distributions of minutiae positions and orientations, it follows that each minutia m(X, O) in a fingerprint of class G has the following mixture density:

    f(x, o | Θ_G) = Σ_{g=1}^{N_G} τ_g · f_X(x | μ_g, Σ_g) · f_O(o | ν_g, κ_g),   (5.1)

where N_G is the number of clusters in the mixture for class G, τ_g is the weight of the gth cluster, Θ_G is the set of parameters describing the distribution of each cluster, f_X(x | μ_g, Σ_g) is the p.d.f. of the bivariate Gaussian distribution over minutiae positions in the gth cluster, and f_O(o | ν_g, κ_g) is the p.d.f. over minutiae orientations in the gth cluster. Minutiae whose positions correspond to the gth cluster have orientations corresponding to the same cluster, establishing a dependence between minutiae position and orientation [136].

Let T be a fingerprint belonging to class G with minutiae density f(x, o | Θ_G). Similarly, let Q be a fingerprint belonging to class H with minutiae density f(x, o | Θ_H). Let m(X_T, O_T) and m(X_Q, O_Q) be two minutiae from T and Q, respectively. The probability that these two minutiae match, i.e., agree in position and orientation within tolerances r_0 and θ_0, is obtained by summing over all pairs of clusters:

    P(‖X_T − X_Q‖ ≤ r_0, |O_T − O_Q| ≤ θ_0) = Σ_{g=1}^{N_G} Σ_{h=1}^{N_H} τ_g τ_h P(‖X_T − X_Q‖ ≤ r_0, |O_T − O_Q| ≤ θ_0 | g, h).   (5.2)

Next, we calculate the theoretical probability of matching a minutiae pair between impostor fingerprints of each class using Equation 5.2. As shown in Table 5.1, impostor fingerprints from the same class have a higher matching probability (highlighted in gray) than those from different classes.
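To make the structure of Equations 5.1 and 5.2 concrete, the following sketch evaluates the mixture density and estimates the pairwise matching probability by Monte Carlo sampling. It is an illustration only: the cluster parameters, the tolerance values, and all helper names are hypothetical placeholders, not the fitted values used in our experiments.

    # Illustrative sketch of Equations 5.1-5.2; all parameter values below are
    # hypothetical placeholders, not fitted model parameters.
    import numpy as np
    from scipy.stats import multivariate_normal, vonmises

    class Cluster:
        def __init__(self, tau, mu, cov, nu, kappa):
            self.tau, self.mu, self.cov = tau, mu, cov   # weight, position mean/cov
            self.nu, self.kappa = nu, kappa              # orientation mean/concentration

    def mixture_density(x, o, clusters):
        """Equation 5.1: sum over clusters of tau_g * f_X(x) * f_O(o)."""
        return sum(c.tau
                   * multivariate_normal.pdf(x, c.mu, c.cov)
                   * vonmises.pdf(o, c.kappa, loc=c.nu)
                   for c in clusters)

    def sample(clusters, n, rng):
        """Draw n minutiae (position, orientation) from a class mixture."""
        taus = np.array([c.tau for c in clusters])
        idx = rng.choice(len(clusters), size=n, p=taus)
        pts = np.array([rng.multivariate_normal(clusters[i].mu, clusters[i].cov)
                        for i in idx])
        oris = np.array([vonmises.rvs(clusters[i].kappa, loc=clusters[i].nu,
                                      random_state=rng) for i in idx])
        return pts, oris

    def match_probability(cls_G, cls_H, r0=15.0, theta0=np.pi / 8,
                          n=20000, rng=None):
        """Monte Carlo estimate of Equation 5.2."""
        rng = rng or np.random.default_rng(0)
        xT, oT = sample(cls_G, n, rng)
        xQ, oQ = sample(cls_H, n, rng)
        d_pos = np.linalg.norm(xT - xQ, axis=1)
        d_ori = np.abs(np.angle(np.exp(1j * (oT - oQ))))  # wrapped angular difference
        return np.mean((d_pos <= r0) & (d_ori <= theta0))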
This is consistent with the results of Jain et al.'s study [83], which revealed that fingerprints from the same class are more likely to be matched (i.e., have a higher False Accept Rate) than fingerprints from different classes.

For comparison, we also calculate the empirical probabilities of matching a minutiae pair between impostor fingerprints based on NIST-4. To be compatible with our model, the matcher is restricted to register fingerprints within a 50 × 50 neighborhood of the manually aligned origin. An in-house fingerprint matcher [79] was used to automatically establish minutiae correspondences between the impostor pairs in NIST-4. Note that, due to the alignment restriction, the difference in performance between our in-house matcher and any commercial minutiae-based matcher should be minimal. A total of 1,999,000 impostor matches were conducted, and the empirical probability of matching a minutiae pair between two impostor prints is computed as the average of k/(m × n), where k is the number of matched minutiae and m and n are the numbers of minutiae in the two fingerprints. This probability is tabulated by fingerprint class, yielding all possible intra-class and inter-class probability values (see Table 5.2).

To incorporate ridge features into our model, we first show the empirical distributions of ridge curvature (Figure 5.10) and ridge period (Figure 5.11) associated with minutiae in each of the five classes. These plots are generated by averaging the ridge curvature and ridge period at all minutiae positions and then smoothing the space using a disk filter with a radius of 25 pixels; the center of each plot corresponds to the alignment origin. As we can see, the distributions of ridge curvature and ridge period are correlated with the fingerprint class; e.g., ridge curvature is especially high near core and delta points, while ridge period is low above the core points and high near the distal creases. Note that the range of ridge curvature values is much larger than that of ridge period, resulting in more prominent differences in its distribution among the fingerprint classes.

Next, we retrain the clustering algorithm and fit the mixture model to the distribution of ridge features associated with minutiae. That is, for each class, 1,000,000 minutiae points are sampled from each modeled cluster, and their ridge curvature and ridge period are also sampled based on their estimated distributions. The resulting theoretical distributions are shown in Figure 5.12 and Figure 5.13.

Figure 5.9: Empirical and theoretical distributions of (a) ridge curvature and (b) ridge period associated with minutiae in the arch fingerprint class. The empirical distribution closely follows the theoretical one in both cases.

To measure the goodness of fit of the theoretical distributions to the empirical distributions of ridge curvature and ridge period in each fingerprint class, we perform Pearson's Chi-square test on the following null and alternative hypotheses:

    H_0: the observed ridge curvature (period) values follow the Poisson (Gaussian) mixture model, vs. H_1: not H_0.   (5.15)

The goodness-of-fit test is carried out for each class by dividing the range of ridge curvature (period) values into B equal-sized bins and computing the observed number (O_i) and the expected number (E_i) of ridge curvature (period) values that fall in the i-th bin. In our test, we divide the ridge curvature (period) values into 20 bins covering all values observed in the data (see Figure 5.9). The Chi-square test on ridge curvature (period) gives p-values of 0.12, 0.13, 0.08, 0.09, 0.17 (0.33, 0.17, 0.35, 0.20, 0.23) for the five fingerprint classes (arch, tented arch, left loop, right loop, and whorl). Because these p-values are larger than the significance level of 0.05, the null hypothesis H_0 is accepted for both the ridge curvature and the ridge period distributions in all classes.
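As an illustration of this test, the sketch below bins the observed values, computes the expected counts from a fitted model, and applies Pearson's Chi-square statistic. The 20-bin choice follows the text; the model CDF passed in is a placeholder for the fitted Poisson or Gaussian mixture.

    # Minimal sketch of the Pearson Chi-square goodness-of-fit test used above.
    # `model_cdf` is a placeholder for the fitted mixture's CDF.
    import numpy as np
    from scipy.stats import chi2

    def chi_square_gof(samples, model_cdf, n_bins=20, n_params=0):
        edges = np.linspace(samples.min(), samples.max(), n_bins + 1)
        observed, _ = np.histogram(samples, bins=edges)
        # Expected counts from the model probability mass in each bin.
        expected = len(samples) * np.diff(model_cdf(edges))
        expected = np.clip(expected, 1e-9, None)  # guard against empty bins
        stat = np.sum((observed - expected) ** 2 / expected)
        dof = n_bins - 1 - n_params
        p_value = chi2.sf(stat, dof)
        return stat, p_value  # accept H_0 at level 0.05 when p_value > 0.05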
Figure 5.10: Empirical distribution of ridge curvature associated with minutiae in each of the five fingerprint classes: (a) arch, (b) tented arch, (c) left loop, (d) right loop, (e) whorl, obtained from 2,000 pairs of rolls in NIST-4. The darker the area, the higher the ridge curvature in that region.

Figure 5.11: Empirical distribution of ridge period in each of the five fingerprint classes: (a) arch, (b) tented arch, (c) left loop, (d) right loop, (e) whorl, obtained from 2,000 pairs of rolls in NIST-4. The darker the area, the higher the ridge period in that region.

Figure 5.12: Theoretical distribution of ridge curvature associated with minutiae (generated using the mixture model) in each of the five fingerprint classes: (a) arch; (b) tented arch; (c) left loop; (d) right loop; (e) whorl.

Figure 5.13: Theoretical distribution of ridge period associated with minutiae (generated using the mixture model) in each of the five fingerprint classes: (a) arch; (b) tented arch; (c) left loop; (d) right loop; (e) whorl.

To reevaluate the empirical probability of matching a minutiae pair after ridge features are incorporated, we use the ridge features to prune the minutiae correspondences: correspondences that disagree in local ridge period or curvature (i.e., whose differences exceed r_0 and c_0, respectively) are rejected. The resulting number of matched minutiae pairs k′ is then used to recalculate the empirical probability as the average of k′/(m × n). The theoretical probability is also reevaluated, using Equation 5.10. The theoretical and empirical matching probability matrices tabulated by fingerprint class after incorporating the ridge features are shown in Tables 5.3 and 5.4, respectively. As we can see, higher probabilities are still observed between impostor fingerprints from the same class than between those from different classes. The use of ridge features reduces the probability of random minutiae correspondence both empirically and theoretically.
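The pruning step described above amounts to a simple filter over candidate correspondences. The sketch below illustrates it; the tolerance values and the structure of a correspondence record are hypothetical.

    # Sketch of rejecting minutiae correspondences whose local ridge features
    # disagree (the r_0 / c_0 test above). Tolerance values are placeholders.
    from dataclasses import dataclass

    @dataclass
    class Correspondence:
        period_t: float     # local ridge period at the template minutia
        period_q: float     # local ridge period at the query minutia
        curvature_t: float  # local ridge curvature at the template minutia
        curvature_q: float  # local ridge curvature at the query minutia

    def prune_by_ridge_features(matches, r0=2.0, c0=10.0):
        """Keep only correspondences agreeing in ridge period and curvature."""
        return [m for m in matches
                if abs(m.period_t - m.period_q) <= r0
                and abs(m.curvature_t - m.curvature_q) <= c0]

    # The empirical probability is then recomputed as the mean of k'/(m*n),
    # with k' = len(prune_by_ridge_features(matches)).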
Table 5.3: Theoretical probabilities of matching a minutiae pair and their local ridge features between impostor fingerprints belonging to class A=arch, TA=tented arch, L=left loop, R=right loop and W=whorl, based on the proposed model (all entries × 10^-4; same-class entries lie on the diagonal).

    Type   A      TA     L      R      W
    A      5.74   1.82   1.68   1.85   2.16
    TA     1.82   …      2.55   2.39   2.20
    L      1.68   2.55   …      2.43   2.61
    R      1.85   2.39   2.43   …      2.69
    W      2.16   2.20   2.61   2.69   …

Table 5.4: Empirical probabilities of matching a minutiae pair and their local ridge features between impostor fingerprints belonging to class A=arch, TA=tented arch, L=left loop, R=right loop and W=whorl, based on NIST-4 (all entries × 10^-4).

    Type   A      TA     L      R      W
    A      6.80   5.72   3.55   3.76   1.08
    TA     5.72   6.17   3.99   4.33   1.21
    L      3.55   3.99   5.51   2.75   2.10
    R      3.76   4.33   2.75   5.40   2.22
    W      1.08   1.21   2.10   2.22   3.28

Figure 5.14: Distribution of intra-ridge pore spacings obtained from 720 fingerprints in NIST-30 (1000 ppi) using the proposed pore extraction algorithm (see Section 3.3.4), together with a Gaussian fit (μ = 12.67, σ = 0.82).

When pores are further incorporated into the model, we reevaluate the theoretical probability of matching a minutiae pair using Equation 5.12. Because pores are more reliably captured at 1000 ppi than at 500 ppi, we use the NIST-30 database [14], which contains 360 pairs (720 images) of rolled fingerprints at 1000 ppi, to extract (using the algorithm proposed in Section 3.3.4) and model the distribution of pore spacings. The empirical distribution and a Gaussian fit with mean 12.67 and standard deviation 0.82 are shown in Figure 5.14. This is consistent with the estimates of Roddy and Stosz [110], who suggested that the most frequently observed pore spacing is 13 pixels at 1100 ppi. To evaluate the goodness of fit, we performed a Pearson's Chi-square test, which gives a p-value of 0.85 at the significance level of 0.05, resulting in the acceptance of the null hypothesis that the observed pore spacing values come from a Gaussian distribution. The theoretical probability matrix of matching a minutiae pair and their local ridge and pore features, tabulated by fingerprint class, is shown in Table 5.5.

Table 5.5: Theoretical probabilities of matching a minutiae pair, their local ridge and pore features, between impostor fingerprints belonging to class A=arch, TA=tented arch, L=left loop, R=right loop and W=whorl, based on the proposed model (all entries × 10^-4).

    Type   A      TA     L      R      W
    A      5.25   1.67   1.54   1.70   1.98
    TA     1.67   …      2.34   2.19   2.01
    L      1.54   2.34   5.33   2.22   2.38
    R      1.70   2.19   2.22   5.65   2.46
    W      1.98   2.01   2.38   2.46   …

Having obtained the probability of matching one minutiae pair, we can calculate the PRC based on the "12-point rule", i.e., the probability of matching at least k = 12 minutiae pairs, given m and n minutiae in the two fingerprints, using Equation 5.5. For example, when only minutiae position and orientation are incorporated, the theoretical probability of having at least 12 minutiae matches given m = 52 and n = 52 is 2.5 × 10^-5, which is more consistent with the empirical probability we obtain (2.6 × 10^-5) than the previously published models (see Table 5.6).
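Equation 5.5 is not reproduced on this page, so the sketch below assumes the Poisson form commonly used in PRC models, in which the expected number of random matches is λ = m · n · p, with p the per-pair matching probability (e.g., an entry of Table 5.3); the tail probability of at least k matches then follows directly. It is an illustration under that assumption, and the example value of p is hypothetical.

    # Hedged sketch of a "12-point rule" PRC computation, assuming the Poisson
    # approximation often used in PRC models: lambda = m * n * p.
    import math

    def prc_at_least_k(m, n, p, k=12):
        """P(at least k random minutiae correspondences) under a Poisson model."""
        lam = m * n * p
        return 1.0 - sum(math.exp(-lam) * lam ** j / math.factorial(j)
                         for j in range(k))

    # Example usage with an illustrative per-pair probability:
    # prc_at_least_k(52, 52, 1.0e-3)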
Table 5.6: An example comparing "12-point rule" PRCs from Pankanti et al.'s uniform model, Zhu et al.'s mixture model and the proposed minutiae-based model with the empirical estimates using NIST-4. Empirical values are calculated using Equation 5.5 by substituting λ with the mean number of matches (weighted by the class distribution) found empirically.

    (m, n, k)      Uniform [102]   Mixture [136]   Proposed     Empirical
    (52, 52, 12)   4.3 × 10^-8     4.4 × 10^-3     2.5 × 10^-5   2.6 × 10^-5
    (62, 62, 12)   2.9 × 10^-7     6.9 × 10^-2     5.9 × 10^-4   1.0 × 10^-3

When ridge period and curvature are incorporated, the theoretical and empirical probabilities (for k = 12, m = 52, n = 52) are reduced to 2.5 × 10^-7 and 5.7 × 10^-7, respectively. When pore spacing is incorporated, the theoretical probability drops further, to 9.6 × 10^-8.

It is important to realize that the gain from extended features in our theoretical estimation of fingerprint individuality is achieved by assuming that these features are correctly and fully extracted, i.e., that there is no error in feature extraction. This is difficult to achieve in reality, because no feature extraction, whether manual or automatic, is perfect. In fact, the more features are used in modeling fingerprint individuality, the more extraction errors accumulate. We therefore conclude that: i) in theory, extended features, when correctly and fully extracted, contribute significantly to fingerprint individuality; ii) in practice, extraction errors reduce the individualization power of extended features; this reduction, however, does not offset all of the gain (the PRC always decreases) as long as the alignment is fixed and extended features are used only to confirm a corresponding minutiae pair.

Finally, we discuss the possibility of extending fingerprint individuality models to latents. Currently, all proposed fingerprint individuality models evaluate the uniqueness of a finger under the assumption that each fingerprint is a true and complete representation of that finger. In practice, however, a fingerprint impression, especially a latent, is only a partial, distorted and often noisy representation of a finger. As a result, in order to evaluate the confidence associated with an observed match, we need to incorporate the effects of image size and quality into the individuality model. For example, the probability of matching two features from different fingerprints should be conditioned on the quality of the features: the poorer the feature quality, the more likely it is a spurious feature extracted due to noise, and the less it should contribute to fingerprint individuality. In addition, the probability of random correspondence should be conditioned on the size of each fingerprint: if at least one of the two fingerprints is small, the alignment between them is likely to have been established by chance, and the confidence of the match should therefore be low.

5.5 Summary

In this chapter, we have proposed a mixture model to evaluate fingerprint individuality based on five major fingerprint classes using minutiae and extended features. Experimental results show that the estimated matching probabilities between impostor fingerprints are highly correlated with the fingerprint class information. These probabilities are reduced when extended features are incorporated into the model. Our theoretical estimates of PRCs are consistent with those based on empirical matching. In practice, however, both theoretical and empirical estimates can be affected by factors such as fingerprint sensing, image quality and resolution, as well as feature extraction and matching accuracy.
Finally, while our individuality model only estimates the inter-class variation of fingerprints, the intra-class variation of fingerprints is equally important and should be considered in future models.

Chapter 6

Touchless Sensing and Sensor Interoperability

6.1 Introduction

In the previous chapters, methods to incorporate extended features into automatic systems were presented. In this chapter, we address the interoperability issue in automatic systems, where fingerprints captured using different sensing technologies are compared. Specifically, we consider a new sensing technology based on 3D touchless imaging. In order to achieve interoperability between touchless and legacy rolled fingerprints, it is necessary to obtain: i) compatible image and feature representations, ii) equivalent image quality, and iii) comparable matching performance in automatic systems.

6.2 Touchless Sensing

Touchless sensing can be viewed as a "remote sensing" technique that captures the image of an object's surface without requiring contact between the object and a sensing surface. Compared to conventional touch-based fingerprint sensing, touchless fingerprint sensing has the potential to provide greater acquisition speed and user convenience. It can also address the following intrinsic problems of touch-based sensing [105] (see Table 6.1): (i) slippage and smearing due to moist fingers; (ii) improper contact due to finger dryness; (iii) dirt and fingerprint residue accumulating on the imaging surface; (iv) degraded image quality resulting from wear and tear on surface coatings; (v) the halo effect generated by the temperature difference between the finger and the platen. Most importantly, touchless fingerprints are potentially free of skin distortion, resulting in more consistent fingerprint representations.

Table 6.1: Comparison between touchless and touch-based sensing technologies.

                                 Touchless    Touch-based
    skin distortion              no           yes
    slippage and smearing        no           yes
    fingerprint residue          no           yes
    tolerance of skin condition  large        small
    image contrast               low          high
    user's comfort level         high         low

There are generally two types of touchless sensing, namely Reflection-based Touchless Sensing (RTS) and Transmission-based Touchless Sensing (TTS) [11, 105]. The principal idea of RTS is imaging based on the reflection of light sources from the object surface, while TTS is based on the optical transmittance of skin tissue. Figure 6.1 shows an example of a fingerprint acquired from a dry finger using a touch-based device and an RTS device. Note that the touchless image better captures the ridge-valley pattern of the fingerprint; however, the image contrast and ridge frequency vary across the image due to differences in the distances between the light sources and the finger.

6.2.1 3D Touchless Sensing

3D touchless sensing technology for law enforcement and government applications has been proposed to meet the challenge of fast capture (in less than 15 seconds) of rolled-equivalent fingerprints, improving the operational efficiency and accuracy of tenprint acquisition [4].

Figure 6.1: The same fingerprint acquired using (a) a touch-based optical device (Crossmatch) and (b) an RTS device (TBS) (reproduced from [105]). Here the finger skin is very dry, so touch-based sensing yields a poor image representation.

Figure 6.2: The Surround Imager, a multi-camera system for 3D touchless fingerprint acquisition [105].
Figure 6.3: 3D touchless fingerprint reconstruction: (a) five stereoscopic views of the same finger captured by the Surround Imager; (b) the reconstructed 3D touchless fingerprint (courtesy: TBS Inc. [17]).

Figure 6.4: A detailed view of the ridge-valley structure of a 3D touchless fingerprint.

Figure 6.2 shows a schematic diagram of a 3D touchless sensing device developed by TBS Inc. [17]. This multi-camera system, known as the Surround Imager, uses five cameras to capture the surface of a finger. During acquisition, the five cameras synchronously capture snapshots of the finger from different viewpoints under specialized lighting conditions. The exact position and orientation of each camera and mirror within a chosen reference coordinate system are computed using calibration algorithms [125, 134]. This information, combined with the images from each view, is used to reconstruct the 3D fingerprint based on the principles of stereovision and photogrammetry [106]. Figure 6.3 shows five stereoscopic views captured by the five cameras inside the Surround Imager and the reconstructed 3D touchless fingerprint. As we can see, this device captures the full nail-to-nail representation of a finger (see Figure 6.3 (b)).

Compared to legacy rolls, the new 3D touchless fingerprints provide a less distorted and more consistent representation that is potentially "ideal" for government and law enforcement applications. However, 3D touchless fingerprints are not rolled-equivalent and hence cannot be directly matched against legacy rolls by an AFIS. In addition, touchless fingerprints have lower ridge-valley contrast compared to the binary-like touch-based fingerprints (see Figure 6.4). In the following sections, we propose several methods to address these interoperability issues between 3D touchless fingerprints and 2D touch-based rolls.

6.3 Interoperability

To achieve compatibility between 3D touchless fingerprints and 2D legacy rolls, we first present an unwrapping algorithm that simulates the "virtual rolling" of a 3D finger to obtain 2D touchless fingerprints. Next, quality analysis and enhancement algorithms are proposed to achieve compatible image quality between touchless and touch-based fingerprints. The effectiveness of these algorithms is evaluated in our experiments by matching a small database of converted 2D touchless fingerprints with the corresponding legacy rolls.

6.3.1 Virtual Rolling

The methodology for unwrapping 3D objects to 2D has been extensively studied and applied in the field of Geographic Information Systems (GIS). One such example is map projection, which focuses on how to unwrap the globe to match the inherently flat geographic maps on paper and film [118, 133]. Other fields, including medical imaging, surface recognition and industrial design, also utilize unwrapping of 3D object surfaces [87, 121, 64].

Figure 6.5: Globe unwrapping using (a) a cylindrical model and (b) a conic model (reproduced from [10]).

In general, there are two main types of unwrapping methods, parametric and non-parametric:

1. Parametric unwrapping refers to the projection of the 3D object onto a parametric model (e.g., a cylinder or a cone) and the unwrapping of that model. This method often involves simple and straightforward transformations. It also requires that the chosen parametric model fit the shape of the object; otherwise, large distortions may be introduced during unwrapping.
2. Non-parametric unwrapping involves no projection and is applied directly to an object by mapping points from 3D to 2D while preserving their relative distances or angular relations. This method is often employed for irregularly shaped objects.

Figure 6.5 shows the unwrapping of the globe using two different parametric models: cylindrical and conic. Note that it is not possible to unwrap a 3D sphere onto a 2D plane without introducing some distortion. One can only try to minimize the distortion, e.g., by using multiple models for different portions of the object to best approximate the shape locally, as shown in Figure 6.5. This limitation also applies to 3D fingerprint unwrapping: although the human finger can be approximated by a cylinder or a cone, distortion is still unavoidable, especially at the fingertip. In the next two sections, we give examples of unwrapping 3D fingerprints using a parametric cylindrical model and a non-parametric method based on equidistance. We will compare the two methods and show that the distortion introduced by the parametric method can be noticeably large, whereas the non-parametric method yields a more faithful representation of the "ground truth" of the fingerprint.

Parametric Based Virtual Rolling

Although human fingers vary in shape (for example, the middle finger is often more cylindrical than the thumb), it is generally true that a finger can be closely approximated by a cylinder. We therefore adopt the cylindrical model for parametric unwrapping of 3D fingerprints. A simple way to picture cylindrical unwrapping is to imagine projecting the fingerprint texture onto a cylinder surrounding the finger, and then unwrapping (flattening) the cylinder to obtain the 2D fingerprint. Mathematically, let the origin be positioned at the bottom of the finger, centered on the principal axis of the finger. Let T = (x, y, z)^T be a point on the surface of the 3D finger. This 3D point is projected (transformed) onto the cylindrical surface to obtain the corresponding 2D coordinates S = (θ, z)^T, where θ = tan^{-1}(y/x).

Figure 6.6: Parametric unwrapping using a cylindrical model (top-down view). Point (x, y, z) on the 3D finger is projected to (θ, z) on the 2D plane.

A top-down view of the unwrapping model is shown in Figure 6.6, where the z axis points outward from the origin. Note that the finger is represented as a triangular mesh after 3D reconstruction, and each vertex of a triangle is directly projected using the above transformation. As a result, each triangle on the 3D finger is mapped to a triangle on the cylinder, and points in between the vertices of a triangle are mapped using linear interpolation.

Parametric fingerprint unwrapping using the cylindrical model is efficient and straightforward, but it does not preserve the relative distances between points on the finger surface. Figure 6.7 provides a visual illustration of the problem. For example, the surface distance d(A, B) between points A and B at the fingertip is much smaller than the distance d(C, D) between points C and D near the middle of the finger. However, since both pairs subtend the same angle θ, the unwrapped distances d(A′, B′) and d(C′, D′) become equal.

Figure 6.7: Fingerprint unwrapping using the cylindrical model. Relative distances between points on the finger surface are not preserved by the unwrapping procedure.

In general, each cross section of the finger, big or small, is projected onto a fixed-length row in the projected 2D image. As a result, horizontal distortion is introduced: the fingerprint is noticeably stretched, especially at the fingertip, as shown in Figure 6.11(a).
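The cylindrical projection above is compact enough to sketch directly. The following is a minimal illustration, assuming the mesh vertices are already expressed in a coordinate system whose z axis is the principal axis of the finger; the scale factors are hypothetical choices, not device parameters.

    # Minimal sketch of parametric (cylindrical) unwrapping: each 3D vertex
    # (x, y, z) maps to (theta, z), with theta = atan2(y, x). Scale factors
    # below are placeholders.
    import numpy as np

    def unwrap_cylindrical(vertices, theta_scale=200.0, z_scale=1.0):
        """vertices: (N, 3) array of finger-surface points, z = principal axis.

        Returns an (N, 2) array of 2D coordinates on the unwrapped plane.
        """
        x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
        theta = np.arctan2(y, x)        # angle around the principal axis
        u = theta * theta_scale         # fixed-length row: the source of stretching
        v = z * z_scale
        return np.stack([u, v], axis=1)

Because every cross section spans the same angular range, this mapping makes the fixed-length-row behavior, and hence the fingertip stretching, immediately visible.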
In addition to the stretching effects, parametric unwrapping also has limitations in preserving the size of the finger. Using the cylindrical model as an example, the mapping in the horizontal direction is based on the angle rather than the surface distance; hence, size differences in the horizontal direction between different fingers need to be compensated for after the unwrapping.

Non-parametric Based Virtual Rolling

In the non-parametric approach, an object of arbitrary shape is unwrapped directly, without being projected onto a model. The objective is to preserve the geodesic distance between any two points in a local region of the object surface. This property is desirable in fingerprint unwrapping because fingerprint matching often relies on distances between feature points. If such distances can be preserved during unwrapping, the interoperability problem between touchless and touch-based fingerprints is reduced to accounting for the skin deformation caused by contact. The non-parametric approach also guarantees that the variability in finger shape and size is preserved.

The essential idea of the proposed non-parametric method is to "locally unfold" the finger surface such that both inter-point surface distances and scale are preserved to the maximum degree. More specifically, for a given 3D finger, we divide it into thin parallel slices orthogonal to the principal axis of the finger, and unfold each slice without stretching. Because human fingers have a very smooth structure, as long as each slice is sufficiently thin, the resulting unwrapped fingerprint texture will be smooth.

Figure 6.8: 3D representation of a finger (viewed from the bottom to the tip of the finger). Vertices of the triangular mesh are represented in slices.

Figure 6.8 shows the triangular mesh representation of a 3D finger, where only the vertices (no edges) of the triangles are shown. Note that these vertices are organized in slices at different heights of the finger. However, these slices are not dense enough to obtain a smooth unwrapping. Therefore, linear interpolation is applied between vertices to obtain a denser representation.

Figure 6.9: Slice interpolation. We interpolate between the given slices with a step size h to obtain a denser representation in the vertical direction (z-axis) for unwrapping.

Let S_i and S_{i+1} be two adjacent given slices from the triangular mesh and h be the step size (the distance between slices in the dense representation) for interpolation. Figure 6.9 illustrates the procedure. Let S_i.P_j, S_i.P_{j+1} and S_{i+1}.P_k be the three vertices of a given triangle. The position of the interpolated point S_{i,1}.P_a is obtained as follows:

    S_{i,1}.P_a.x = t × S_{i+1}.P_k.x + (1 − t) × S_i.P_j.x,   (6.1)
    S_{i,1}.P_a.y = t × S_{i+1}.P_k.y + (1 − t) × S_i.P_j.y,   (6.2)
    S_{i,1}.P_a.z = S_i.P_j.z + h,   (6.3)

where

    t = (S_{i,1}.P_a.z − S_i.P_j.z) / (S_{i+1}.P_k.z − S_i.P_j.z)   (6.4)

is the proportion parameter. This procedure is repeated at every sampling distance h along the z-axis. Each slice in the dense representation corresponds to a row in the final unwrapped fingerprint image.
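A direct transcription of Equations 6.1 to 6.4 is sketched below; the data structures are simplified placeholders, and the routine interpolates a single point between two slice vertices.

    # Sketch of the slice interpolation in Equations 6.1-6.4. Vertices are
    # (x, y, z) tuples; p_j lies on slice S_i and p_k on slice S_{i+1}.
    def interpolate_point(p_j, p_k, h):
        """Return the point at height p_j.z + h on the mesh edge from p_j to p_k."""
        xj, yj, zj = p_j
        xk, yk, zk = p_k
        z_new = zj + h                      # Equation 6.3
        t = (z_new - zj) / (zk - zj)        # Equation 6.4 (proportion parameter)
        x_new = t * xk + (1 - t) * xj       # Equation 6.1
        y_new = t * yk + (1 - t) * yj       # Equation 6.2
        return (x_new, y_new, z_new)

    # Repeating this for every step h along the z-axis yields the dense slice
    # representation; each dense slice becomes one row of the unwrapped image.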
Figure 6.10: Equidistant sampling. We sample points on each slice with equal spacing h to obtain a finer representation in the horizontal direction for unwrapping. The baseline defines the central column of the 2D image onto which the fingerprint is mapped.

Once a dense representation in the vertical direction is established, we apply a similar interpolation horizontally across each slice, sampling points equidistantly with the same sampling distance h so as to preserve the scale of the finger. The value of h is chosen such that the resulting image has the resolution specified by the device. This procedure can be considered equivalent to virtually rolling a finger on a surface. The jth sample point on the ith slice corresponds to the pixel at the ith row and jth column of the 2D image. A baseline, from which the unfolding of each slice starts, is defined as the intersection line (curve) between the 3D finger and a plane passing through the principal axis in the center of the finger. In other words, the direction of unwrapping goes from the center of the finger toward each nail side, to minimize rolling distortion. We will show that the baseline can also be reset at either nail side to simulate rolling from left to right and vice versa. Figure 6.10 illustrates the virtual rolling procedure; the algorithm is described in detail below, where Q_0 denotes the sample on the baseline and Q_j (Q_{-j}) the jth sample to its right (left):

    for i = 1 : n                      (iterate through all slices)
        for j = 1 : m                  (sample to the right of the baseline)
            dist = ||S_i.P_{a+j} - S_i.Q_{j-1}||
            if (dist > h)
                t = h / dist
                S_i.Q_j.x = t * S_i.P_{a+j}.x + (1 - t) * S_i.Q_{j-1}.x
                S_i.Q_j.y = t * S_i.P_{a+j}.y + (1 - t) * S_i.Q_{j-1}.y
            else
                t = (h - dist) / ||S_i.P_{a+j+1} - S_i.P_{a+j}||
                S_i.Q_j.x = t * S_i.P_{a+j+1}.x + (1 - t) * S_i.P_{a+j}.x
                S_i.Q_j.y = t * S_i.P_{a+j+1}.y + (1 - t) * S_i.P_{a+j}.y
        for j = 1 : l                  (sample to the left of the baseline)
            dist = ||S_i.P_{a-j+1} - S_i.Q_{-j+1}||
            if (dist > h)
                t = h / dist
                S_i.Q_{-j}.x = t * S_i.P_{a-j+1}.x + (1 - t) * S_i.Q_{-j+1}.x
                S_i.Q_{-j}.y = t * S_i.P_{a-j+1}.y + (1 - t) * S_i.Q_{-j+1}.y
            else
                t = (h - dist) / ||S_i.P_{a-j+1} - S_i.P_{a-j}||
                S_i.Q_{-j}.x = t * S_i.P_{a-j}.x + (1 - t) * S_i.P_{a-j+1}.x
                S_i.Q_{-j}.y = t * S_i.P_{a-j}.y + (1 - t) * S_i.P_{a-j+1}.y

Figures 6.11 (a) and (b) show unwrapped touchless fingerprint images obtained using the cylinder-based parametric model and the proposed non-parametric method, respectively. It can be seen that the proposed unwrapping method better preserves the surface distances than the cylinder-based parametric method, especially at the fingertip.

Figure 6.11: Unwrapping a 3D fingerprint (ridge enhanced) using (a) the cylindrical-based parametric method and (b) the proposed non-parametric method. Note that the stretching effects at the fingertip in (a) are more severe than those in (b).

Figure 6.12: 3D finger "virtual rolling" from (a) left to right, and (b) right to left.
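For concreteness, a runnable counterpart of the per-slice resampling loop above is sketched here for one slice: it walks along the slice polyline to the right of a baseline index, emitting a sample every h units of arc length. The data structures and baseline handling are simplified relative to the full algorithm.

    # Sketch of equidistant sampling along one slice polyline (right of baseline).
    # `points` is the ordered list of slice vertices; `a` indexes the baseline point.
    import math

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def resample_right(points, a, h):
        """Emit a sample every h units of arc length along points[a], points[a+1], ...
        (the Q_j points of the algorithm above)."""
        samples = [points[a]]            # Q_0 lies on the baseline
        prev = points[a]
        i = a + 1
        carry = 0.0                      # arc length already consumed on this segment
        while i < len(points):
            seg = dist(prev, points[i])
            if carry + seg >= h:
                t = (h - carry) / seg    # fraction of the current segment
                q = (prev[0] + t * (points[i][0] - prev[0]),
                     prev[1] + t * (points[i][1] - prev[1]))
                samples.append(q)
                prev, carry = q, 0.0     # restart the arc-length count at the sample
            else:
                carry += seg
                prev = points[i]
                i += 1
        return samples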
If such a protocol is to be strictly followed, the 3D touchless fingerprints should be unwrapped accordingly to produce compatible distortions caused by rolling. (a) (b) Figure 6.13: Directional “virtual rolling” by setting the baseline at (a) the left nail side, and (b) the right nail side. Each simulates the skin distortion during rolling from left to right, and from right to left, respectively. Two sets of minutiae points and their relative distances are marked to show the effects of distortion caused by directional rolling. 6.3.2 Image Enhancement In touch-based sensing, fingerprints are captured based on the presence/ absence of the ridges/valleys in contact with the sensor surface, resulting in binary-like images with ridges having gray values close to 0 and valleys close to 255. In touchless 157 (b) Figure 6.14: Directional distortion in legacy rolled fingerprints: (a) rolling from left to right; (b) rolling from right to left. Similar skin distortion patterns are illustrated with two sets of minutiae points. sensing, however, fingerprints are captured as images at a distance. As a result, ridges and valleys are not separated during acquisition and the resulting touchless fingerprint images often present lower contrast compared to the legacy fingerprint images. Therefore, image enhancement is necessary to increase the image contrast. I'(-r..v) Figure 6.15: Homomorphic filtering procedure. We propose to use homomorphic filters for touchless fingerprint enhancement. Homomorphic filtering has been commonly used to increase the image contrast by sharpening features and flattening lighting variations in an image [65]. The image formation model assumed for homomorphic filters is often characterized by two com- ponents: illumination L and reflectance R. Both components can be combined to 158 Figure 6.16: Touchless fingerprint enhancement: (a) an unwrapped touchless finger- print image; (b) homomorphic enhancement of (a); (c) Gabor filtering of (b). 159 give the image function F: f(x, 9) = 144.9) ° Maw), (65) where 0 < L(:1:,y) < co and 0 < R(a:, y) < 1. The illumination component captures the lighting conditions present during image acquisition, while the reflectance com- ponent represents the way the object reflects light, which is an intrinsic property of the object. The essential idea of homomorphic filtering is to separate the two compo- nents using a log transform and then a high-pass filter to enhance reflectance while at the same time reducing the contribution of illumination. Specifically, homomorphic filtering consists of the following five steps (see Figure 6.15): 0 Use natural log to transform equation (6.5) from multiplicative to additive: Z6619) = 1111109, 31)] =1n[L($,3/) - 13059)] = 11111413 31)] + 1an($, 9)] (66) 0 Apply Fourier transform: 921511) = 31111144. y>1 + 311118151111 , (6.7) 0 Apply a high pass filter H (u, v) in the fi'equency domain: S(u,v) = H(u, v)-Z(u,v) = H(u, v) -8‘ln[L(x,y)] +H(u,v)-8‘ln[R(a:,y)] (6.8) 0 Apply Inverse Fourier transform: E(2:, y) = (Cf—1501,12) = 8’1H(u, v) - 81n[L(x,y)] + 3_1H(u, v) - 8‘1n[R(:r, 11)] (6.9) 160 0 Obtain the final enhanced image by exponential operation: I’(a:, y) = exp E(2:, y) (6.10) The high pass filter H (u, u) used in the above transformation is defined as H(u, v) = (6-11) 1 1+ [$124 where D(u, v) is the Euclidean distance from the origin of the centered transform and Do = 0.1 and n = 1 are the cutofl frequency and order of the filter, respectively. 
To obtain a normalized image in the range [0, 255], we also apply adaptive histogram equalization after homomorphic filtering. Figure 6.16 shows the enhancement of an unwrapped touchless fingerprint image. The proposed enhancement algorithm increases the image contrast and facilitates the subsequent extraction of the ridge/valley pattern in the fingerprint image.

6.3.3 Image Quality

Due to imaging artifacts caused by diffuse illumination, self-occlusion and perspective distortion, the quality of a touchless fingerprint image varies across local regions. This effect becomes very significant when the different absorption properties of the tissues and blood vessels in the finger cause non-uniform illumination during image acquisition. As a result, it is desirable to develop a quality measure for touchless fingerprint images that takes local quality variations into consideration.

We apply the coherence-based local quality measure proposed in Section 3.3.2 to estimate the quality of touchless fingerprints. This quality index measures the contrast level as well as the strength of the dominant orientation in a local region. The cascade implementation proposed in Section 3.3.2 is also adopted for efficiency. Figure 6.17 shows four local regions (80 × 80 pixels) of Figure 6.16 (a) together with their quality estimates.

Figure 6.17: Cascade quality assessment in local regions (80 × 80 pixels) with (a) excellent quality, (b) good quality, (c) medium quality, and (d) poor quality. The first and second rows show the fingerprint regions and their corresponding quality estimates, respectively. The darker a region in the second row, the lower the image quality in that region.

Next, we compare the estimated quality of an unwrapped touchless fingerprint image, its enhanced version and the corresponding rolled fingerprint (see Figure 6.18). The overall quality of each of the three images, computed as the average of the local quality indices, is 0.28, 0.76 and 0.44, respectively. As we can see, the proposed enhancement method greatly improves the image quality of the unwrapped touchless fingerprint.

Figure 6.18: Comparing image quality: (a) an unwrapped touchless fingerprint image; (b) local quality map of (a); (c) enhanced image of (a); (d) local quality map of (c); (e) corresponding rolled fingerprint; (f) local quality map of (e). White pixels denote good quality and black pixels denote poor quality. The global quality of (a), (c) and (e) is 0.27, 0.68, and 0.51, respectively.
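Since Section 3.3.2 is not reproduced here, the following is only a generic sketch of a coherence-based local quality index computed from the gradient structure tensor; the block size and the normalization are assumptions, not the thesis implementation.

    # Generic sketch of a coherence-based local quality index (structure tensor).
    # Block size and normalization are assumptions; see Section 3.3.2 for the
    # actual measure used in this thesis.
    import numpy as np

    def local_coherence_quality(img, block=16):
        gy, gx = np.gradient(img.astype(float))
        h, w = img.shape
        qmap = np.zeros((h // block, w // block))
        for bi in range(h // block):
            for bj in range(w // block):
                sl = (slice(bi * block, (bi + 1) * block),
                      slice(bj * block, (bj + 1) * block))
                gxx = np.sum(gx[sl] ** 2)
                gyy = np.sum(gy[sl] ** 2)
                gxy = np.sum(gx[sl] * gy[sl])
                num = np.sqrt((gxx - gyy) ** 2 + 4 * gxy ** 2)
                den = gxx + gyy
                qmap[bi, bj] = num / den if den > 1e-9 else 0.0  # coherence in [0, 1]
        return qmap  # a global quality can be taken as qmap.mean()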
6.4 Interoperability Experiments

To demonstrate the effectiveness of the proposed unwrapping and enhancement algorithms, TBS [17] kindly provided us with a database containing four touchless impressions, captured using the Surround Imager, and one mated ink-on-paper rolled impression for each of 102 fingers (see Figure 6.19).

Figure 6.19: Sample images from the second database: (a) an unwrapped touchless fingerprint and (b) the corresponding rolled fingerprint.

The touchless fingerprints are first converted from 3D to 2D using the proposed non-parametric "virtual rolling" algorithm, and the quality of each image is estimated before and after enhancement using the proposed algorithms. The distributions of the global image quality of the touchless fingerprints, with and without enhancement, are shown as boxplots in Figure 6.20, together with the distribution of the global image quality estimated from the mated rolled fingerprints. The figure shows that the enhancement algorithm effectively improves the overall image quality of the touchless fingerprints to a level comparable with legacy fingerprints. Next, a commercial matcher [8] is used to evaluate the matching performance on all possible pairs of touchless impressions and rolled fingerprints. The verification ROCs are plotted in Figure 6.21. As we can see, the enhancement algorithm significantly improves the matching performance, resulting in higher compatibility between touchless and ink-on-paper rolled fingerprints. However, further investigation is needed (in both hardware and software) to address the remaining differences in image quality and distortion before full compatibility between touchless and legacy rolled fingerprints can be achieved.

Figure 6.20: Box plots of the estimated global quality indices of touchless fingerprints (with and without enhancement) and their corresponding rolled fingerprints.

Figure 6.21: Verification ROCs for matching touchless and mated rolled fingerprints on the TBS database: touchless vs. rolled (without enhancement), touchless vs. rolled (with enhancement), and touchless vs. touchless (with enhancement).

Note in the same plot that matching among the touchless fingerprints themselves shows almost perfect accuracy. This demonstrates that, by avoiding contact, touchless imaging produces highly consistent and reproducible fingerprint images that effectively reduce the intra-class variation in matching.

Finally, an example of matching the two genuine fingerprints in Figure 6.19 is shown in Figure 6.22. There are 130 minutiae (marked as blue dots) extracted from the touchless fingerprint and 64 minutiae (marked as green dots) extracted from the corresponding ink-on-paper rolled fingerprint. During matching, 49 pairs of minutiae are found to be in correspondence (marked as red circles).

Figure 6.22: Matching an enhanced touchless fingerprint and its corresponding rolled fingerprint shown in Figure 6.19. The two fingerprints are aligned, with the matched minutiae marked by red circles.

6.5 Summary

3D touchless sensing is a novel fingerprint sensing technology. Because of its contact-free, full-area imaging technique, the resulting images are likely to lead to a more consistent and accurate fingerprint representation, and hence are potentially "ideal" for many government and law enforcement applications. However, due to the intrinsic differences between touchless and touch-based sensing, fingerprints generated by the two technologies are not fully compatible. We have proposed a "virtual rolling" process that unwraps 3D touchless fingerprints into 2D images using a non-parametric equidistant approach. We have also proposed an image enhancement algorithm to optimize the contrast between ridges and valleys in touchless fingerprints. The effectiveness of the proposed algorithms in achieving interoperability between touchless and legacy rolled fingerprints was successfully demonstrated in our experiments.
Note that touchless imaging is potentially beneficial for latent matching, as it provides a truly complete and close-to-ground-truth representation of fingerprints. However, touchless fingerprints lack the large amount of skin distortion that is often seen in latents; further studies are needed to characterize this difference and determine the interoperability between touchless fingerprints and latents.

Chapter 7

Summary and Future Work

Automatic fingerprint matching systems have played a major role in forensics and criminal investigations. However, these systems have not yet completely eliminated the need for manual examination, especially in matching latent prints. Automatic systems utilize a limited feature set compared to forensic experts and cannot easily adapt to variations in image quality and resolution. In this thesis, we have addressed these issues by introducing extended features and touchless imaging into automatic fingerprint matching. A summary of the thesis is given below.

Chapter 1 provided a brief background on fingerprint matching, including fingerprint formation, acquisition and representation. The manual and automatic fingerprint matching procedures and their differences were presented, and the implications of these differences for matching performance were discussed. Major challenges in semi-automatic and automatic latent matching were also presented.

Chapter 2 gave a brief introduction to fingerprint features and their categorization (e.g., Level 1, Level 2 and Level 3). We emphasized that fingerprint features do not exist in isolation but are highly correlated. A wide range of extended features, and the interrelationship information used in manual identification, were described in detail. Fingerprint feature standards for both minutiae and extended features were presented and discussed.

Chapter 3 proposed algorithms to automatically extract a set of extended features (i.e., ridge skeletons, pores and dots/incipients) and their interrelationship information. The accuracy of the proposed extended feature extraction was evaluated by comparison with manually extracted features. Experiments showed that the proposed algorithm for automatic extraction of Level 3 features (pores and dots) is less accurate than minutiae extraction; however, the large quantity of pores may possess sufficient discriminative power in matching to offset the extraction errors.

Chapter 4 investigated the matching of extended features. In particular, ridge-based matching establishes ridge correspondences while correcting for non-linear distortion, so that Level 3 features can be better aligned and matched. A hierarchical fusion framework that combines the scores or ranks of each feature was also proposed. Experimental results demonstrated that i) all the proposed extended features (ridge skeletons, pores, dots/incipients) are effective in improving the verification performance on both the MSU-full and MSU-partial databases; ii) ridge matching is more effective than pores and dots/incipients in improving the latent identification performance on the NIST-27 database, especially for latents of poor quality or small size.

In Chapter 5, studies on fingerprint individuality were surveyed, and the use of both minutiae and extended features in fingerprint individuality modeling was investigated.
To utilize extended features in assessing fingerprint individuality, we separately modeled minutiae as well as the ridge and pore features in their neighborhoods for fingerprints of the five major classes. Experimental results based on both theoretical and empirical estimates on the NIST-4 and NIST-30 databases demonstrated that (i) fingerprint individuality is dependent on the fingerprint class, and (ii) extended features make significant contributions to fingerprint individuality.

In Chapter 6, we studied the interoperability between 3D touchless and 2D legacy rolled fingerprints. Algorithms to unwrap 3D touchless fingerprints by virtually "rolling" the 3D finger were described. Methods to evaluate and enhance the image quality of the unwrapped touchless fingerprints were also proposed. Experimental results on the TBS database demonstrated that the proposed unwrapping algorithm is capable of generating "rolled-equivalent" touchless fingerprints and that the enhancement algorithm is effective in improving the clarity of the unwrapped fingerprints across different image qualities.

In conclusion, the main contributions of this thesis are (i) algorithms for the extraction and matching of extended features, (ii) an individuality model of fingerprints using fingerprint class, minutiae and extended features, and (iii) interoperability of a new generation of touchless fingerprint acquisition systems with legacy fingerprint databases. Some additional topics for future research on extended features and sensor interoperability are proposed below.

- Besides the extended features studied in this thesis, additional extended features such as minutiae shape and ridge width should be investigated.

- As 1000 ppi latent and tenprint databases become available in the public domain, the proposed extraction and matching of extended features should be reevaluated.

- Real-time algorithms for extended feature extraction and matching, as well as reduction of the template size for efficient storage, should be developed.

- Fingerprint individuality is the fundamental basis of fingerprint matching. Current mathematical models assume that fingerprints are of ideal quality and large in size (rolled prints), with a large number of minutiae. In practice, however, captured fingerprint images can be of varying quality and size, and these factors have a tremendous impact on the individuality of fingerprints. As a result, studying the effects of image size and quality (e.g., for latents) on the individuality of fingerprints is essential.

- The interoperability between 3D touchless fingerprints and 2D legacy rolled fingerprints should be evaluated on a larger database.

APPENDICES

Appendix A

Numerical Implementation of Thin-Plate Spline with Tension

The thin-plate spline (TPS) [40] is a useful tool for surface interpolation over scattered data; it is the mathematical analogue of the physical bending of a thin metal plate under point constraints. Because of its second-order property, the TPS lacks the elasticity needed to model fingerprint distortion. To address this, first-order terms are introduced as tension parameters into the TPS model.

Given two sets of corresponding points U = (u_1, u_2, ..., u_l)^T and V = (v_1, v_2, ..., v_l)^T, let u_k and v_k denote the spatial coordinates of the kth corresponding pair, where l is the total number of correspondences. The deformation F can be expressed as a mapping function from point set U to V:

    F(U) = c + A · U + W^T S(U),   (A.1)

where the parameters c and A represent the affine transform and W defines the non-linear distortion. The distance measure S(U) is the vector (σ(U − u_1), σ(U − u_2), ..., σ(U − u_l))^T with

    σ(U) = (1 / (2τ^3)) (e^{−τ‖U‖} + τ‖U‖),   (A.2)

where τ is a tension parameter that controls the degree of tension (elasticity) [41].

In Equation (A.1), there are a total of (6 + 2l) parameters to be estimated: c is a 2 × 1 vector, A is a 2 × 2 matrix and W is an l × 2 matrix. Since Equation (A.1) imposes 2l constraints in the spatial domain, the number of degrees of freedom is reduced by 2l. To further reduce the complexity of the problem, we assume that the coefficients W satisfy (i) 1_l^T W = 0 (2 restrictions) and (ii) U^T W = 0 (4 restrictions), where 1_l is an l × 1 vector of ones. Finally, the coefficients c, A and W of the TPS-with-tension model can be obtained numerically by solving the following linear system:

    [ S      1_l   U ]   [ W   ]     [ V ]
    [ 1_l^T  0     0 ] · [ c^T ]  =  [ 0 ]
    [ U^T    0     0 ]   [ A^T ]     [ 0 ].   (A.3)

The matrix in Equation A.3 gives rise to a TPS-with-tension model that minimizes the total bending energy subject to the perfect-alignment constraints in Equation A.1. In order to achieve robust distortion estimation in the presence of noise, the model can be further relaxed by replacing S with (S + λI_N) in Equation A.3, where I_N is the N × N identity matrix.
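Setting up and solving the linear system in Equation A.3 is straightforward with a dense solver. The sketch below does so for 2D point sets; the regularization parameter corresponds to the (S + λI) relaxation, and its default value here is an arbitrary placeholder.

    # Sketch of fitting the TPS-with-tension model (Equations A.1-A.3).
    import numpy as np

    def sigma(r, tau):
        """Tension kernel of Equation A.2, applied to distances r >= 0."""
        return (np.exp(-tau * r) + tau * r) / (2.0 * tau ** 3)

    def fit_tps_tension(U, V, tau=0.5, lam=0.0):
        """U, V: (l, 2) arrays of corresponding points. Returns (W, c, A)."""
        l = U.shape[0]
        r = np.linalg.norm(U[:, None, :] - U[None, :, :], axis=2)
        S = sigma(r, tau) + lam * np.eye(l)       # (S + lambda*I) relaxation
        ones = np.ones((l, 1))
        # Block system of Equation A.3.
        top = np.hstack([S, ones, U])             # l x (l + 3)
        mid = np.hstack([ones.T, np.zeros((1, 3))])
        bot = np.hstack([U.T, np.zeros((2, 3))])
        M = np.vstack([top, mid, bot])            # (l + 3) x (l + 3)
        rhs = np.vstack([V, np.zeros((3, 2))])
        sol = np.linalg.solve(M, rhs)
        W, cT, AT = sol[:l], sol[l:l + 1], sol[l + 1:]
        return W, cT.ravel(), AT.T

    def transform(points, U, W, c, A, tau=0.5):
        """Apply F(u) = c + A u + W^T S(u) of Equation A.1 to each point."""
        r = np.linalg.norm(points[:, None, :] - U[None, :, :], axis=2)
        return c + points @ A.T + sigma(r, tau) @ W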
Appendix B

Estimating Mixture Density using the EM Algorithm

A  Incorporating Minutiae

The objective is to model the minutiae positions and orientations as a mixture of Gaussian and Von-Mises distributions, respectively. Specifically, we model a class of fingerprints using a K-cluster mixture density with parameters α = (p_k, μ_k, Σ_k, ν_k, κ_k), given as

    f(x, o | α) = Σ_{k=1}^{K} p_k g(x | μ_k, Σ_k) h(o | ν_k, κ_k),   (B.1)

where

    g(x | μ_k, Σ_k) = (1 / ((2π)^{D/2} |Σ_k|^{1/2})) exp{−(1/2)(x − μ_k)^T Σ_k^{−1} (x − μ_k)},   (B.2)

    h(o | ν_k, κ_k) = (κ_k^{D/2−1} / ((2π)^{D/2} I_{D/2−1}(κ_k))) exp{κ_k ν_k^T o},   (B.3)

with dimension D = 2, under the constraints

    Σ_{k=1}^{K} p_k = 1,   (B.4)
    p_k ≥ 0,   (B.5)
    ‖ν_k‖ = 1,   (B.6)
    κ_k ≥ 0.   (B.7)

The function g(·) is a multivariate Gaussian, h(·) is a multivariate Von-Mises density, and I_r(·) is the modified Bessel function of the first kind and order r. Given a set of N points X and their corresponding orientations O, the likelihood function of our model is

    Λ(X, O | α) = Π_{n=1}^{N} f(x_n, o_n | α)   (B.8)
                = Π_{n=1}^{N} Σ_{k=1}^{K} p_k g(x_n | μ_k, Σ_k) h(o_n | ν_k, κ_k),   (B.9)

under the assumption that the samples of X and O are independent and identically distributed. Given this likelihood function, the density estimation problem is

    α̂ = arg max_α Λ(X, O | α).   (B.10)

To compute the maximum likelihood estimates of the parameters α, we first define the log likelihood

    λ(X, O | α) = Σ_{n=1}^{N} log Σ_{k=1}^{K} p_k g(x_n | μ_k, Σ_k) h(o_n | ν_k, κ_k).   (B.11)

Computing partial derivatives with respect to each of the parameters, and introducing Lagrangian multipliers where necessary for the constraints, results in

    μ_k = Σ_{n=1}^{N} p(k | x_n, o_n, α) x_n / Σ_{n=1}^{N} p(k | x_n, o_n, α),   (B.12)

    Σ_k = Σ_{n=1}^{N} p(k | x_n, o_n, α) (x_n − μ_k)(x_n − μ_k)^T / Σ_{n=1}^{N} p(k | x_n, o_n, α),   (B.13)

    ν_k = Σ_{n=1}^{N} p(k | x_n, o_n, α) o_n / ‖Σ_{n=1}^{N} p(k | x_n, o_n, α) o_n‖,   (B.14)

    κ_k ≈ (r̄_k D − r̄_k^3) / (1 − r̄_k^2),  with  r̄_k = ‖Σ_{n=1}^{N} p(k | x_n, o_n, α) o_n‖ / Σ_{n=1}^{N} p(k | x_n, o_n, α),   (B.15)

    p_k = (1/N) Σ_{n=1}^{N} p(k | x_n, o_n, α),   (B.16)

where

    p(k | x_n, o_n, α) = p_k f(x_n, o_n | α_k) / Σ_{k′=1}^{K} p_{k′} f(x_n, o_n | α_{k′}).   (B.18)

The estimate for κ_k given above is only an approximation, derived in [38]; computing the exact estimate involves an implicit equation in a ratio of Bessel functions, for which no closed-form solution exists. The above equations are closely coupled, because the terms p(k | x_n, o_n, α) on the right-hand sides depend on the parameters α on the left-hand sides. This is where the EM algorithm is utilized: starting with an initial guess for the parameters α = (p_k, μ_k, Σ_k, ν_k, κ_k), the E-step is performed, updating the distribution of the hidden variables p(k | x_n, o_n, α), followed by the M-step, where each parameter is updated. This process is iterated until convergence. Specifically, the E-step computes Equations B.1 and B.18, while the M-step computes Equations B.12, B.13, B.14, B.15, and B.16. The convergence criterion is that the change in log likelihood from one iteration to the next be less than a threshold (0.001).
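The E/M iteration above can be sketched compactly. The code below implements one EM loop for the Gaussian and Von-Mises mixture with D = 2, representing orientations as 2D unit vectors; the 0.001 stopping threshold follows the text, while the random initialization and iteration cap are illustrative choices, not the thesis implementation.

    # Illustrative EM loop for the Gaussian x Von-Mises mixture
    # (Equations B.1, B.12-B.16, B.18). Orientations o_n are 2D unit vectors.
    import numpy as np
    from scipy.stats import multivariate_normal
    from scipy.special import iv  # modified Bessel function I_r

    def vmf_pdf(O, nu, kappa):
        """Von-Mises density on the unit circle, Equation B.3 with D = 2."""
        return np.exp(kappa * (O @ nu)) / (2.0 * np.pi * iv(0, kappa))

    def em_fit(X, O, K, n_iter=200, tol=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        N = X.shape[0]
        p = np.full(K, 1.0 / K)
        mu = X[rng.choice(N, K, replace=False)]
        cov = np.array([np.cov(X.T) for _ in range(K)])
        nu = O[rng.choice(N, K, replace=False)]
        kappa = np.ones(K)
        prev_ll = -np.inf
        for _ in range(n_iter):
            # E-step: responsibilities p(k | x_n, o_n, alpha)  (Equation B.18)
            dens = np.stack([p[k] * multivariate_normal.pdf(X, mu[k], cov[k])
                             * vmf_pdf(O, nu[k], kappa[k]) for k in range(K)],
                            axis=1)
            ll = np.sum(np.log(dens.sum(axis=1)))
            resp = dens / dens.sum(axis=1, keepdims=True)
            # M-step: Equations B.12-B.16
            Nk = resp.sum(axis=0)
            p = Nk / N
            for k in range(K):
                mu[k] = resp[:, k] @ X / Nk[k]
                d = X - mu[k]
                cov[k] = (resp[:, k, None] * d).T @ d / Nk[k]
                s = resp[:, k] @ O
                r_bar = np.linalg.norm(s) / Nk[k]
                nu[k] = s / np.linalg.norm(s)
                kappa[k] = (2 * r_bar - r_bar ** 3) / (1 - r_bar ** 2)  # approx. [38]
            if ll - prev_ll < tol:
                break
            prev_ll = ll
        return p, mu, cov, nu, kappa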
B Incorporating Ridge Period

In order to incorporate local ridge period into the EM algorithm, a Gaussian function (used to model the frequency) is introduced into the model:

    f(x, o, r | α) = Σ_{k=1}^{K} p_k g(x | μ_k, Σ_k) h(o | ν_k, κ_k) i(r | ω_k, σ_k),   (B.19)

where g(x | μ_k, Σ_k) and h(o | ν_k, κ_k) are the Gaussian and Von-Mises probability density functions defined previously, and i(r | ω_k, σ_k) is a univariate Gaussian with density function

    i(r | ω_k, σ_k) = (1 / sqrt(2π σ_k²)) exp( −(r − ω_k)² / (2σ_k²) ).   (B.20)

Again, the derivation of the equations for EM remains exactly the same, except for the incorporation of Equation B.19 instead of B.1 and the addition of the update equations for ω_k and σ_k:

    ω_k = Σ_{n=1}^{N} p(k | x_n, o_n, r_n, α) r_n / Σ_{n=1}^{N} p(k | x_n, o_n, r_n, α),         (B.21)
    σ_k² = Σ_{n=1}^{N} p(k | x_n, o_n, r_n, α) (r_n − ω_k)² / Σ_{n=1}^{N} p(k | x_n, o_n, r_n, α).   (B.22)

C Incorporating Ridge Curvature

Finally, in order to incorporate local ridge curvature into the EM algorithm described above, an additional Poisson function (used to model the curvature) is introduced into the model probability function:

    f(x, o, r, c | α) = Σ_{k=1}^{K} p_k g(x | μ_k, Σ_k) h(o | ν_k, κ_k) i(r | ω_k, σ_k) j(c | λ_k),   (B.23)

where j(c | λ_k) is a Poisson with density function

    j(c | λ_k) = e^{−λ_k} λ_k^c / c!.                                  (B.24)

The derivation of the equations for EM remains exactly the same, except for the incorporation of Equation B.23 instead of B.1 and the addition of the update equation for λ_k:

    λ_k = Σ_{n=1}^{N} p(k | x_n, o_n, r_n, c_n, α) c_n / Σ_{n=1}^{N} p(k | x_n, o_n, r_n, c_n, α).   (B.25)
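A minimal sketch of how these two extensions change the EM iteration is given below: the extra per-sample factors enter the E-step responsibilities, and the new parameters receive closed-form M-step updates. The helper names are our own, and the curvature values are assumed to be discretized to non-negative integers so that the Poisson pmf applies.

    import numpy as np
    from scipy.stats import norm, poisson

    def period_curvature_factors(R, C, omega_k, sigma_k, lam_k):
        """Extra per-sample factors i(r | omega_k, sigma_k) * j(c | lam_k) that
        multiply the Gaussian x Von-Mises term inside the E-step (Eqs. B.19, B.23)."""
        return norm.pdf(R, loc=omega_k, scale=sigma_k) * poisson.pmf(C, lam_k)

    def period_curvature_m_step(resp, R, C):
        """M-step updates for the added parameters (Eqs. B.21, B.22, B.25).

        resp : (N, K) responsibilities from the E-step
        R    : (N,) local ridge periods
        C    : (N,) local ridge curvatures (non-negative integers)
        """
        s = resp.sum(axis=0)                                          # per-cluster mass
        omega = (resp.T @ R) / s                                      # Eq. (B.21)
        sigma2 = (resp * (R[:, None] - omega) ** 2).sum(axis=0) / s   # Eq. (B.22)
        lam = (resp.T @ C) / s                                        # Eq. (B.25)
        return omega, np.sqrt(sigma2), lam

Because the component density factorizes, the responsibilities are simply the minutiae-only responsibilities reweighted by these factors and renormalized, so the base EM loop above needs only minor changes.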
BIBLIOGRAPHY

Bibliography

[1] Cogent AFIS Products. http://www.cogentsystems.com/cogentAFIS.asp.

[2] Darby Wallace Analysis. http://clpex.com/images/Darby-Wallace-Analysis/Erroneous-Match.htm.

[3] Data Format for Extended Fingerprint and Palmprint Features, An Addendum to ANSI/NIST-ITL 1-2007 Data Format for the Interchange of Fingerprint, Facial, & Other Biometric Information. http://fingerprint.nist.gov/standard/cdeffs/Docs/CDEFFS_DraftStd_v04_2009-06-12.pdf.

[4] Fast Capture Fingerprint/Palm Print Technology. http://www.ncjrs.gov/pdffiles1/nij/31000673.pdf.

[5] FBI Electronic Fingerprint Transmission Specification. http://www.fbi.gov/hq/cjisd/iafis/efts71/efts71.pdf.

[6] Fingerprint Ink Strips. http://www.digi-ids.com/fingerprint-strip-refills.html.

[7] FVC2006: Fourth Fingerprint Verification Competition. http://bias.csr.unibo.it/fvc2006/.

[8] Innovatrics. http://www.innovatrics.com/.

[9] Lumidigm Multispectral Fingerprint Imaging. http://www.lumidigm.com/.

[10] Map Projections: From Spherical Earth to Flat Map. http://nationalatlas.gov/articles/mapping/a_projections.html.

[11] Mitsubishi Touchless Fingerprint Sensor. http://global.mitsubishielectric.com.

[12] Neurotechnology VeriFinger SDK. http://www.neurotechnology.com/.

[13] NIST Special Database 27 (NIST SD 27): Fingerprint Minutiae from Latent and Matching Tenprint Images. http://www.nist.gov/srd/nistsd27.htm.

[14] NIST Special Database 30 (NIST SD 30): Dual Resolution Images from Paired Fingerprint Cards. http://www.nist.gov/srd/nistsd30.htm.

[15] NIST Special Database 4 (NIST SD 4): NIST 8-Bit Gray Scale Images of Fingerprint Image Groups. http://www.nist.gov/srd/nistsd4.htm.

[16] Pennsylvania Department of Aging. http://www.aging.state.pa.us/psonlinetraining/cwp.

[17] TBS Touchless Fingerprint Imaging. http://www.tbsinc.com.

[18] The Thin Blue Line. http://www.policensw.com/info/fingerprints.

[19] Tips for Rolling Fingerprints. http://apps.mentoring.org/safetynet/fingtips.pdf.

[20] Ultra-Scan Ultrasound Fingerprint Sensing. http://www.ultra-scan.com/.

[21] Wikipedia: The Free Online Encyclopedia. http://en.wikipedia.org/.

[22] U.S. v. Mitchell. Cr. No. 96-407-1, 1999.

[23] U.S. v. Plaza, et al. 179 F. Supp. 2d 492, E.D. PA, 2002.

[24] U.S. v. Mitchell. 365 F. 3d 215, 3d Cir., 2004.

[25] NIST Proprietary Fingerprint Template (PFT) Testing. http://fingerprint.nist.gov/SDK/, 2005.

[26] Custody Unit Opens. http://www.bbc.co.uk/cumbria/content/image_galleries/custody_suite_carlisle-20061130_gallery.shtml?7, 2006.

[27] FBI IAFIS Overview. http://fingerprint.nist.gov/standard/presentations/archives/IAFISoverview_Feb_2005.pdf, 2006.

[28] Obtaining and Recording Physical Evidence of Fingerprints. http://www.free-ed.net/sweethaven/CrimeJustice/CSI/default.asp?iNum=07, 2006.

[29] Effect of Image Size and Compression on One-to-One Fingerprint Matching. ftp://sequoyah.nist.gov/pub/nist_internal_reports/ir_7201.pdf, 2007.

[30] Hearing Document of State of Maryland vs. Bryan Rose, Case No.: K06-0545. In the Circuit Court for Baltimore County, 2007.

[31] NIST Minutiae Interoperability Exchange Test II. http://fingerprint.nist.gov/minexII/, 2007.

[32] Requirements Overview and Analysis Project, Next Generation Integrated Automated Fingerprint Identification System (NGI). http://fingerprint.nist.gov/standard/presentations/archives/NGI-Overview_Feb_2005.pdf, 2007.

[33] American National Standard ANSI/NIST-CSL 1. Data Format for the Interchange of Fingerprint Information. http://www.itl.nist.gov/div893/biometrics/standards.html, 1993.

[34] ANSI/NIST-ITL 1-2000 Standard Workshop. Extended Fingerprint Feature Set. http://fingerprint.nist.gov/standard/cdeffs/index.html/, 2005.

[35] A. Anthonioz, N. Egli, and C. Champod. Level 3 Details and Their Role in Fingerprint Identification: A Survey Among Practitioners. Journal of Forensic Identification, 58(5):562–589, 2005.

[36] D. R. Ashbaugh. Quantitative-Qualitative Friction Ridge Analysis: An Introduction to Basic and Advanced Ridgeology. CRC Press, 1999.

[37] V. Balthazard. De L'identification Par Les Empreintes Digitales. Comptes Rendus des Académies des Sciences, 152:1862, 1911.

[38] A. Banerjee, I. Dhillon, and J. Ghosh. Clustering on the Unit Hypersphere Using von Mises-Fisher Distributions. Journal of Machine Learning Research, 6:1345–1382, 2005.

[39] R. Bolle, N. Ratha, A. Senior, and S. Pankanti. Minutia Template Exchange Format. In Proc. IEEE Workshop on Automatic Identification Advanced Technologies, pages 74–77, 1999.

[40] F. L. Bookstein. Principal Warps: Thin-Plate Splines and the Decomposition of Deformations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(6):567–585, 1989.

[41] A. Bouhamidi and A. Mehaute. Radial Basis Functions under Tension. Journal of Approximation Theory, 127:135–154, 2004.

[42] C. Burrus, R. Gopinath, and H. Guo. Introduction to Wavelets and Wavelet Transforms. Prentice Hall, 1998.
[43] T. A. Busey and J. R. Vanderkolk. Behavioral and Electrophysiological Evidence for Configural Processing in Fingerprint Experts. Vision Research, 45:431–448, 2005.

[44] R. Cappelli, D. Maio, and D. Maltoni. Synthetic Fingerprint-Database Generation. In Proc. International Conference on Pattern Recognition (ICPR), pages 744–747, Quebec City, Canada, 2002.

[45] CDEFFS. ANSI/NIST Committee to Define an Extended Fingerprint Feature Set. http://fingerprint.nist.gov/standard/cdeffs/index.html/, 2009.

[46] C. Champod, C. Lennard, P. Margot, and M. Stoilovic. Fingerprints and Other Ridge Skin Impressions. CRC Press, 2004.

[47] C. Champod and P. Margot. Computer Assisted Analysis of Minutiae Occurrences on Fingerprints. In Proc. International Symposium on Fingerprint Detection and Identification, page 305, Jerusalem, 1996.

[48] S. Chandra. Genetics of Epidermal Ridge Minutiae. International Journal of Anthropology, 20(1-2):51–62, January 2005.

[49] Y. Chen, S. C. Dass, and A. K. Jain. Fingerprint Quality Indices for Predicting Authentication Performance. In Proc. International Conference on Audio- and Video-Based Biometric Person Authentication (AVBPA), pages 160–170, Rye Brook, NY, July 2005.

[50] D. Cotton. The FBI's Tenprint and Criminal History Record 'Error Policy'. In Proc. International Association for Identification's (IAI) 92nd International Educational Conference, San Diego, CA, 2007.

[51] H. Cummins and M. Midlo. Finger Prints, Palms and Soles: An Introduction to Dermatoglyphics. Dover Publications, 1961.

[52] J. Dankmeijer. Frequency of Epidermal Ridge Minutiae in the Calcar Area of Japanese Twins. American Journal of Human Genetics, 19:660–673, 1967.

[53] I. Dror and D. Charlton. Why Experts Make Errors. Journal of Forensic Identification, 56(4):600–616, 2006.

[54] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. John Wiley & Sons, 2001.

[55] V. N. Dvornychenko and M. D. Garris. Summary of NIST Latent Fingerprint Testing Workshop. Technical Report NISTIR 7377, National Institute of Standards and Technology (NIST), 2006.

[56] G. Fang, S. Srihari, and H. Srinivasan. Generative Models for Fingerprint Individuality Using Ridge Types. In Proc. International Workshop on Computational Forensics, pages 423–428, Manchester, UK, 2007.

[57] H. Faulds. Poroscopy: The Scrutiny of Sweat Pores for Identification. Nature, page 635, 1913.

[58] J. Feng, Z. Ouyang, and A. Cai. Fingerprint Matching Using Ridges. Pattern Recognition, 39:2131–2140, 2006.

[59] M. Figueiredo and A. K. Jain. Unsupervised Learning of Finite Mixture Models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(3):381–396, 2002.

[60] R. Franke. Thin Plate Splines with Tension. Computer Aided Geometric Design, 2:87–95, 1985.

[61] F. Galton. Personal Identification and Description. Nature, 38:173–177, June 1888.

[62] F. Galton. Fingerprints (reprint). Da Capo Press, 1965.

[63] E. German. Latent Print Examination. http://www.onin.com/fp, 2007.

[64] A. S. Glassner. Graphics Gems. Elsevier, 1998.

[65] R. C. Gonzalez and R. E. Woods. Digital Image Processing. Prentice Hall, Upper Saddle River, NJ, 2002.

[66] P. Grother. Performance and Interoperability of the INCITS 378 Fingerprint Template. Technical Report NISTIR 7296, National Institute of Standards and Technology (NIST), March 2006.

[67] S. Gungadin. Sex Determination from Fingerprint Ridge Density. Internet Journal of Medical Update, 2(2):4–7, 2007.
[68] S. Gupta. Statistical Survey of Ridge Characteristics. Journal of International Criminal Police Review, 218(130):130–134, 1968.

[69] E. Henry. Classification and Uses of Fingerprints. Routledge and Sons, London, 1900.

[70] T. K. Ho, J. J. Hull, and S. N. Srihari. Decision Combination in Multiple Classifier Systems. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(1):66–75, January 1994.

[71] L. Hong. Automatic Personal Identification Using Fingerprints. PhD thesis, Department of Computer Science and Engineering, Michigan State University, 1998.

[72] L. Hong, Y. Wan, and A. K. Jain. Fingerprint Image Enhancement: Algorithms and Performance Evaluation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(8):777–789, 1998.

[73] R. A. Huber. The Philosophy of Identification. RCMP Gazette Magazine, 1972.

[74] P. Hymer. Extraction and Application of Secondary Crease Information in Fingerprint Recognition Systems. Master's thesis, Department of Science and Technology, Linköpings University, Sweden, March 2005.

[75] M. Indovina, V. Dvornychenko, E. Tabassi, G. Quinn, P. Grother, S. Meagher, and M. Garris. ELFT Phase II: An Evaluation of Automated Latent Fingerprint Identification Technologies. Technical Report NISTIR 7577, National Institute of Standards and Technology (NIST), April 2009.

[76] A. K. Jain, Y. Chen, and M. Demirkus. Pores and Ridges: High-Resolution Fingerprint Matching Using Level 3 Features. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(1):15–27, 2007.

[77] A. K. Jain and F. Farrokhnia. Unsupervised Texture Segmentation Using Gabor Filters. Pattern Recognition, 24(12):1167–1186, 1991.

[78] A. K. Jain and J. Feng. Latent Fingerprint Matching. Technical Report MSU-CSE-09-10, Michigan State University, 2009.

[79] A. K. Jain, L. Hong, and R. Bolle. On-line Fingerprint Verification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4):302–314, April 1997.

[80] A. K. Jain, A. Nagar, and K. Nandakumar. Latent Fingerprint Matching. Technical Report MSU-CSE-07-203, Michigan State University, 2007.

[81] A. K. Jain, K. Nandakumar, and A. Ross. Score Normalization in Multimodal Biometric Systems. Pattern Recognition, 38(12):2270–2285, December 2005.

[82] A. K. Jain, S. Prabhakar, L. Hong, and S. Pankanti. Filterbank-Based Fingerprint Matching. IEEE Transactions on Image Processing, 9(5):846–859, May 2000.

[83] A. K. Jain, S. Prabhakar, and S. Pankanti. On the Similarity of Identical Twin Fingerprints. Pattern Recognition, 35(8):2653–2663, 2002.

[84] A. Joliat. Third Level Characteristics: Study of the Variability Within an Individual. Master's thesis, Institut de Police Scientifique et de Criminologie, University of Lausanne, 1999.

[85] M. I. Jordan and R. A. Jacobs. Hierarchical Mixtures of Experts and the EM Algorithm. Neural Computation, 6(2):181–214, March 1994.

[86] C. Kingston. Probabilistic Analysis of Partial Fingerprint Patterns. PhD thesis, University of California, Berkeley, 1964.

[87] Y. Kita, D. L. Wilson, and J. A. Noble. Real-time Registration of 3D Cerebral Vessels to X-ray Angiograms. In Proc. International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 1125–1133, Cambridge, MA, October 1998.

[88] P. Komarinski. Automated Fingerprint Identification Systems (AFIS). Elsevier Academic Press, 2005.

[89] P. Kovesi. Symmetry and Asymmetry from Local Phase. In Proc. Australian Joint Conference on Artificial Intelligence, pages 185–191, 1997.
[90] K. Kryszczuk, A. Drygajlo, and P. Morier. Extraction of Level 2 and Level 3 Features for Fragmentary Fingerprints. In Proc. COST Action 275 Workshop, pages 83–88, Vigo, Spain, 2004.

[91] M. Kücken and A. Newell. Fingerprint Formation. Journal of Theoretical Biology, 235:71–83, 2005.

[92] D. Maio and D. Maltoni. Direct Gray-Scale Minutiae Detection in Fingerprints. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(1):27–40, 1997.

[93] D. Maio, D. Maltoni, R. Cappelli, J. L. Wayman, and A. K. Jain. FVC2004: Third Fingerprint Verification Competition. In Proc. International Conference on Biometric Authentication (ICBA), pages 1–7, Hong Kong, China, July 2004.

[94] D. Maio, D. Maltoni, J. L. Wayman, and A. K. Jain. FVC2002: Second Fingerprint Verification Competition. In Proc. International Conference on Pattern Recognition (ICPR), pages 811–814, Quebec City, Canada, August 2002.

[95] D. Maltoni, D. Maio, A. K. Jain, and S. Prabhakar. Handbook of Fingerprint Recognition. Springer-Verlag, 2003.

[96] K. V. Mardia. Statistics of Directional Data. Academic Press, New York, 1972.

[97] K. Nandakumar. Multibiometric Systems: Fusion Strategies and Template Security. PhD thesis, Department of Computer Science and Engineering, Michigan State University, 2008.

[98] K. Nandakumar, Y. Chen, S. C. Dass, and A. K. Jain. Likelihood Ratio-Based Biometric Score Fusion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(2):342–347, 2008.

[99] National Institute of Standards and Technology. ANSI/NIST-ITL Standards for Fingerprint and Other Biometric Information. http://fingerprint.nist.gov/standard/, 2007.

[100] H. V. D. Nieuwendijk. Fingerprints. http://www.clpex.com/Articles/TheDetail/100-199/TheDetail172.htm, 2004.

[101] J. Osterburg. Development of a Mathematical Formula for the Calculation of Fingerprint Probabilities Based on Individual Characteristics. Journal of the American Statistical Association, 72:772, 1977.

[102] S. Pankanti, S. Prabhakar, and A. K. Jain. On the Individuality of Fingerprints. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(8):1010–1025, 2002.

[103] C. H. Papadimitriou and K. Steiglitz. Combinatorial Optimization: Algorithms and Complexity. Prentice-Hall, 1982.

[104] N. R. Parsons, J. Q. Smith, E. Thonnes, L. Wang, and R. G. Wilson. Rotationally Invariant Statistics for Examining the Evidence from the Pores in Fingerprints. Technical report, Centre for Research in Statistical Methodology (CRiSM), 2007.

[105] G. Parziale. Touchless Fingerprint Technology. In N. K. Ratha and V. Govindaraju, editors, Advances in Biometrics, pages 25–48. Springer, 2007.

[106] G. Parziale, E. Diaz-Santana, and R. Hauke. The Surround Imager: A Multi-Camera Touchless Device to Acquire 3D Rolled-Equivalent Fingerprints. In Proc. IAPR International Conference on Biometrics (ICB), pages 244–250, Hong Kong, China, January 2006.

[107] F. L. Podio, J. S. Dunn, L. Reinert, C. J. Tilton, L. O'Gorman, P. Collier, M. Jerde, and B. Wirtz. Common Biometric Exchange File Format (CBEFF). Technical Report NISTIR 6529, National Institute of Standards and Technology (NIST), January 2001.

[108] S. Prabhakar, S. Pankanti, and A. K. Jain. Biometric Recognition: Security and Privacy Concerns. IEEE Security and Privacy Magazine, 1(2):33–42, 2003.

[109] N. Ratha, S. Y. Chen, and A. K. Jain. Adaptive Flow Orientation-Based Feature Extraction in Fingerprint Images. Pattern Recognition, 28(11):1657–1672, 1995.
[110] A. R. Roddy and J. D. Stosz. Fingerprint Features: Statistical Analysis and System Performance Estimates. Proceedings of the IEEE, 85(9):1390–1421, 1997.

[111] A. Ross, S. C. Dass, and A. K. Jain. Fingerprint Warping Using Ridge Curve Correspondences. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(1):19–30, 2006.

[112] A. Ross and A. K. Jain. Biometric Sensor Interoperability: A Case Study in Fingerprints. In Proc. ECCV International Workshop on Biometric Authentication (BioAW), volume 3087, pages 134–145, Prague, Czech Republic, May 2004. Springer.

[113] A. Ross, K. Nandakumar, and A. K. Jain. Handbook of Multibiometrics. Springer, 2006.

[114] R. K. Rowe, S. P. Corcoran, K. A. Nixon, and R. E. Ostrom. Multispectral Imaging for Biometrics. In Proc. SPIE Conference on Spectral Imaging: Instrumentation, Applications, and Analysis, volume 5694, pages 90–99, Orlando, March 2005.

[115] T. Roxburgh. On Evidential Value of Fingerprints. Indian Journal of Statistics, 1:189–214, 1933.

[116] S. Sangiorgi, A. Manelli, T. Congiu, A. Bini, G. Pilato, M. Reguzzoni, and M. Raspanti. Microvascularization of the Human Digit as Studied by Corrosion Casting. Journal of Anatomy, 204:123–131, 2004.

[117] J. G. Skellam. The Frequency Distribution of the Difference Between Two Poisson Variates Belonging to Different Populations. Journal of the Royal Statistical Society, 109(3):294, 1946.

[118] J. P. Snyder. Flattening the Earth: Two Thousand Years of Map Projections. The University of Chicago Press, 1993.

[119] D. Stoney and J. Thornton. A Critical Analysis of Quantitative Fingerprint Individuality Models. Journal of Forensic Sciences, 31:1187–1216, 1986.

[120] J. D. Stosz and L. A. Alyea. Automated System for Fingerprint Authentication Using Pores and Ridge Structure. In Proc. SPIE Conference on Automatic Systems for the Identification and Inspection of Humans, volume 2277, pages 210–223, San Diego, 1994.

[121] D. Summers. Texturing: Concepts and Techniques. Charles River Media Inc., 2004.

[122] SWGFAST. Scientific Working Group on Friction Ridge Analysis, Study and Technology. http://www.swgfast.org/, 2006.

[123] J. Thornton. Latent Fingerprints, Setting Standards in the Comparison and Identification. In Proc. 84th Annual Training Conference of the California State Division of IAI, Nevada, May 2000.

[124] M. Trauring. Automatic Comparison of Finger-ridge Patterns. Nature, 197:938–940, 1963.

[125] R. Tsai. An Efficient and Accurate Camera Calibration Technique for 3D Machine Vision. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 364–374, Florida, USA, 1986.

[126] M. Vatsa, R. Singh, and A. Noore. Unification of Evidence Theoretic Fusion Algorithms: A Case Study in Level-2 and Level-3 Fingerprint Features. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 39(1):47–56, January 2009.

[127] C. I. Watson and C. L. Wilson. Effect of Image Size and Compression on One-to-One Fingerprint Matching. Technical Report NISTIR 7201, National Institute of Standards and Technology (NIST), February 2005.

[128] B. Wentworth and H. Wilder. Personal Identification. Cooke T.G. Press, 1932.

[129] C. Wilson, G. Candela, and C. Watson. Neural Network Fingerprint Classification. Journal of Artificial Neural Networks, 1(2):203–228, 1993.
[130] C. Wilson, A. R. Hicklin, M. Bone, H. Korves, P. Grother, B. Ulery, R. Micheals, M. Zoepfl, S. Otto, and C. Watson. Fingerprint Vendor Technology Evaluation 2003: Summary of Results and Analysis Report. Technical Report NISTIR 7123, National Institute of Standards and Technology (NIST), June 2004.

[131] C. Wu, J. Zhou, Z. Bian, and G. Rong. Robust Crease Detection in Fingerprint Images. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 505–510, Madison, Wisconsin, June 2003.

[132] N. Yager and A. Amin. Fingerprint Classification: A Review. Pattern Analysis and Applications, 7:77–93, 2004.

[133] Q. Yang, W. Tobler, J. Snyder, and Q. H. Yang. Map Projection Transformation. Taylor & Francis, 2000.

[134] Z. Zhang. Flexible Camera Calibration by Viewing a Plane from Unknown Orientations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11):1330–1334, 2000.

[135] J. Zhou, C. Wu, and D. Zhang. Improving Fingerprint Recognition Based on Crease Detection. In Proc. International Conference on Biometric Authentication (ICBA), pages 287–293, Hong Kong, China, July 2004.

[136] Y. Zhu, S. Dass, and A. K. Jain. Statistical Models for Assessing the Individuality of Fingerprints. IEEE Transactions on Information Forensics and Security, 2:391–401, 2007.

[137] Y. F. Zhu. Statistical Models for Fingerprint Individuality. PhD thesis, Department of Computer Science and Engineering, Michigan State University, 2008.