AN ACCURATE, EFFICIENT, AND ROBUST FINGERPRINT PRESENTATION ATTACK DETECTOR

By

Tarang Chugh

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

Computer Science - Doctor of Philosophy

2020

ABSTRACT

AN ACCURATE, EFFICIENT, AND ROBUST FINGERPRINT PRESENTATION ATTACK DETECTOR

By

Tarang Chugh

The individuality and persistence of fingerprints is being leveraged for a plethora of day-to-day automated person recognition applications, ranging from social benefits disbursements and unlocking smartphones to law enforcement and border security. While the primary purpose of a fingerprint recognition system is to ensure reliable and accurate user recognition, the security of the recognition system itself can be jeopardized by the use of presentation attacks (PAs). A PA is defined^1 as a presentation of a spoof (fake), altered, or cadaver finger to the fingerprint data capture system (reader), intended to interfere with the recording of the true fingerprint sample/identity and thereby preventing correct user recognition.

In this thesis, we present an automated, accurate, and reliable software-only presentation attack detector (PAD), called Fingerprint Spoof Buster. Specifically, we propose a deep convolutional neural network (CNN) based approach utilizing local patches centered and aligned using fingerprint minutiae. The proposed PAD achieves state-of-the-art performance on publicly available liveness detection databases (LivDet) and large-scale government controlled tests as part of the IARPA ODIN program.^2 Additionally, we present a graphical user interface that highlights local regions of the fingerprint image as bonafide^3 or PA for visual examination. This offers a significant advantage over existing PAD solutions that rely on a single spoof score for the entire image.

^1 ISO standard IEC 30107-1:2016, https://www.iso.org/standard/53227.html
^2 ODIN, "IARPA-BAA-16-04 (Thor)", https://www.iarpa.gov/index.php/research-programs/odin/odin-baa, 2016.
^3 In the literature, the term live has been primarily used to refer to a bonafide finger juxtaposed to a spoof finger. However, in the context of all forms of presentation attacks, bonafide is a more appropriate term, as some PAs, such as altered fingers, also exhibit characteristics of liveness [107].

Deep learning-based solutions are infamously resource intensive (both memory and processing) and require special hardware such as graphical processing units (GPUs). With the goal of real-time inference in low-resource environments, such as smartphones and embedded devices, we propose a series of optimizations, including simplifying the network architecture and quantizing model weights (for byte computations instead of floating point arithmetic). These optimizations enabled us to develop a light-weight version of the PAD, called Fingerprint Spoof Buster Lite, as an Android application, which can execute on a commodity smartphone (Samsung Galaxy S8) with a minimal drop in PAD performance (from TDR = 95.7% to 95.3% @ FDR = 0.2%) in under 100 ms.

Typically, deep learning-based solutions are considered as "black-box" systems due to the lack of interpretability of their decisions. One of the major limitations of the existing PAD solutions is their poor generalization against PA materials not seen during training. While it is observed that some materials are easier to detect (e.g., EcoFlex) compared to others (e.g., Silgum) when left out from training, the underlying reasons are unknown. We present a framework to understand and interpret the generalization (cross-material) performance of the proposed PAD by investigating the material properties and visualizing the bonafide and PA samples in the multidimensional feature space learned by deep networks. Furthermore, we present two different approaches to improve the generalization performance: (i) a style transfer-based wrapper, called Universal Material Generator (UMG), and (ii) a dynamic approach utilizing temporal analysis of a sequence of fingerprint image frames. The two proposed approaches are shown to significantly improve the generalization performance evaluated on large databases of bonafide and PA samples.
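To make the patch-based scoring and the TDR/FDR operating point mentioned above concrete, the following is a minimal sketch, not the dissertation's implementation: it assumes a generic patch-level classifier (the patch_scorer placeholder), averages per-patch spoofness scores into a global score, and picks the decision threshold so that a target False Detection Rate is met on bonafide scores. All names, sizes, and values are illustrative.

```python
# Minimal sketch (not the thesis code) of minutiae-centered patch scoring:
# each 96 x 96 patch gets a "spoofness" score, the scores are averaged into a
# global score, and the threshold is set for a target FDR on bonafide samples.
import numpy as np

PATCH_SIZE = 96  # local patches are 96 x 96 pixels, centered on minutiae


def extract_patch(image: np.ndarray, x: int, y: int, size: int = PATCH_SIZE) -> np.ndarray:
    """Crop a size x size patch centered at a minutia location (zero-padded at borders)."""
    half = size // 2
    padded = np.pad(image, half, mode="constant")
    return padded[y:y + size, x:x + size]


def global_spoofness(image: np.ndarray, minutiae, patch_scorer) -> float:
    """Average the per-patch spoofness scores returned by a patch-level classifier."""
    scores = [patch_scorer(extract_patch(image, x, y)) for (x, y) in minutiae]
    return float(np.mean(scores)) if scores else 0.0


def threshold_at_fdr(bonafide_scores: np.ndarray, target_fdr: float = 0.002) -> float:
    """Pick the threshold so that at most target_fdr of bonafide samples are flagged as PA."""
    return float(np.quantile(bonafide_scores, 1.0 - target_fdr))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.random((512, 512))                  # stand-in fingerprint image
    minutiae = [(100, 120), (250, 300), (400, 90)]  # stand-in minutiae locations
    patch_scorer = lambda p: float(p.mean())        # stand-in for the patch CNN
    score = global_spoofness(image, minutiae, patch_scorer)
    tau = threshold_at_fdr(rng.random(10_000), target_fdr=0.002)
    print("global spoofness:", round(score, 3), "decision:", "PA" if score >= tau else "bonafide")
```

The same thresholding logic underlies operating points such as TDR @ FDR = 0.2% that are reported throughout the thesis: the threshold fixes the false detection rate on bonafide samples, and the TDR is then measured on the PA samples.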
Lastly, fingerprint readers based on conventional imaging technologies, such as optical, capacitive, and thermal, only image the 2D surface fingerprint, making them an easy target for presentation attacks. In contrast, Optical Coherent Tomography (OCT) imaging technology provides rich depth information, including the internal fingerprint and eccrine (sweat) glands, as well as PA instruments (spoofs) placed over the finger skin. As a final contribution, we present an automated PAD approach utilizing cross-sectional OCT depth profile scans, which is shown to achieve a TDR of 99.73% @ FDR of 0.2% on a database of 3,413 bonafide and 357 PA OCT scans, fabricated using 8 different PA materials. We also identify the crucial regions in the OCT scans necessary for PA detection.

Copyright by
TARANG CHUGH
2020

To my loving parents, sister, and my love

ACKNOWLEDGMENTS

As my Ph.D. approaches its culmination with this dissertation, I would like to acknowledge the roles of several individuals who were instrumental in the successful completion of my Ph.D. research. Foremost, I want to express my deepest gratitude to my advisor, Prof. Anil K. Jain, for his unwavering support and encouragement to strive for excellence. His scientific intuition, rigor, and passion for research have always inspired me to give my best in all my endeavors. His ability to explain complex things through simple examples, systematic investigation of a problem, and attention to detail are some of the things that I will always look up to for the rest of my life. Apart from being a great scientist, he is an extraordinary human being with a humble nature and a caring heart. I also want to thank my undergraduate advisors, Prof. Mayank Vatsa and Prof. Richa Singh, for believing in me and encouraging me to pursue higher studies.

I would also like to express my sincere gratitude to my Ph.D. committee, Prof. Arun Ross, Prof. Xiaoming Liu, and Prof. Vidyadhar Mandrekar, for evaluating my work and providing valuable comments and suggestions. I am grateful to Dr. Kai Cao, whose willingness to answer my numerous questions with enthusiasm nourished my intellectual maturity during the initial years of my Ph.D. I would also like to thank Prof. Jiayu Zhou, Elham Tabassi, and Nicholas G. Paulter Jr. for their invaluable guidance that proved monumental towards the success of several studies related to latent fingerprints, altered fingerprints, and minutiae extractors. I would also like to thank Prof. Philip Eisenlohr, Dr. Aritra Chakraborty, and Geeta Kumari from the Dept. of Chemical Engineering and Material Science for providing their insights on investigating presentation attack material characteristics; a special thanks to Natalia Pajares for her immense help with the experiments.

Every day during my Ph.D. studies has been a great opportunity for learning, thanks to my colleagues in PRIP Lab, CV Lab, and iPRoBe Lab. Our weekly meetings, valuable discussions, and feedback shaped my approach to research. A big thank you to Sunpreet, Radha, Lacey, Charles, Inci, Keyur, Debayan, Josh, Sixue, Yichun, Steven, Divyansh, Vishesh, Joel, Yaojie, Amin, Sudipta, Anurag, Shivangi, Renu, Cunjian, and Thomas. A special thanks to Chris Perry for managing integration of our solutions, assisting with assembling hardware, and being a cheerful person.

I would like to express my sincere gratitude to Dr. Srimat Chakradhar and Dr. Yi Yang for giving me the opportunity to intern at NEC Labs America, Princeton, New Jersey. It was a wonderful industry experience, where I worked on a very important problem of automated tattoo detection and recognition.

Last but not the least, the invaluable role played by my parents and sister is beyond words. I am blessed with a wonderful family and will always be indebted to them for their everlasting support and encouragement. Over the past five years, mi amor, Swati, has stood by me through thick and thin, bringing out the best in me. Being far away from family was not easy, but I am grateful for the friendships I made here in Michigan, who have become my family now; especially, Yashesh, Vikram, Prakash, Sabya, Kanchan, Garima, Kokil, Mayank, Abhinav, Kamla, Aritra, Sap Da, Preetam, Sayali, Rahul, and many more.
This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA R&D Contract No. 2017-17020200004. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
LIST OF ALGORITHMS

Chapter 1  Introduction
  1.1 Morphology and Development of Friction Ridges
    1.1.1 Fundamental Tenets of Fingerprint Recognition
  1.2 Fingerprint Recognition Milestones
    1.2.1 Early Developments
    1.2.2 Seminal Studies
    1.2.3 Landmarks in Law Enforcement Applications
    1.2.4 Notable Use in Civil and Commercial Applications
  1.3 Design of Automated Fingerprint Recognition Systems
    1.3.1 Fingerprint Acquisition
      1.3.1.1 Sensing Technologies
    1.3.2 Feature Extraction
    1.3.3 Template Database
    1.3.4 Fingerprint Matching
  1.4 Challenges in Fingerprint Recognition
    1.4.1 Automatic Latent Fingerprint Recognition
    1.4.2 Interoperability of Fingerprint Readers
    1.4.3 Vulnerabilities of an AFIS
      1.4.3.1 Presentation Attack Detection
      1.4.3.2 Template Protection
  1.5 Dissertation Contributions

Chapter 2  Fingerprint Presentation Attack Detection
  2.1 Introduction
  2.2 Related Work
    2.2.1 Studies on Fingerprint Spoof Detection
    2.2.2 Studies on Altered Fingerprint Detection
  2.3 Fingerprint Spoof Buster
    2.3.1 Minutiae Extraction
    2.3.2 Local Patch Extraction
    2.3.3 MobileNet CNN
    2.3.4 Fine-grained Fingerprint Image Representation
    2.3.5 Spoofness Score
    2.3.6 On Robustness of Patch-based Representation
    2.3.7 Graphical User Interface (GUI)
  2.4 Altered Fingerprints: Detection and Localization
    2.4.1 Altered Fingerprint Detection
    2.4.2 Localization of Altered Regions
    2.4.3 Alteration Score
  2.5 End-to-End Presentation Attack Detection
  2.6 Experimental Results
    2.6.1 Performance Evaluation Metrics
    2.6.2 Presentation Attack Datasets
      2.6.2.1 LivDet Datasets
      2.6.2.2 MSU Fingerprint Presentation Attack Dataset
      2.6.2.3 Precise Biometrics Spoof-Kit Dataset
      2.6.2.4 Government Evaluation Datasets (GCT-I, II, and III)
      2.6.2.5 Altered Fingerprint Dataset
    2.6.3 Spoof Detection Results
      2.6.3.1 Intra-Sensor, Known Spoof Materials
      2.6.3.2 Intra-Sensor, Cross-Material
      2.6.3.3 Cross-Sensor Evaluation
      2.6.3.4 Cross-Dataset Evaluation
      2.6.3.5 Government Controlled Tests
    2.6.4 Altered Fingerprint Detection and Localization
  2.7 Visualizing CNN Learnings
  2.8 Computing Times
  2.9 Fingerprint Spoof Buster Lite
    2.9.1 Proposed Optimizations
    2.9.2 Android Application
  2.10 Summary

Chapter 3  Fingerprint PAD Generalization
  3.1 Introduction
  3.2 Databases used to investigate Fingerprint PAD Generalization
  3.3 Understanding PAD Generalization
    3.3.1 Performance against Unknown Materials
    3.3.2 PA Material Characteristics
      3.3.2.1 Optical Properties
      3.3.2.2 Mechanical Properties
    3.3.3 3D t-SNE Visualization of Bonafides and PAs
    3.3.4 Representative Set of PA Materials
  3.4 Improving PAD Generalization
    3.4.1 Universal Material Generator
      3.4.1.1 Related Work
      3.4.1.2 Proposed Approach
      3.4.1.3 UMG-Wrapper for PAD Generalization
      3.4.1.4 Experiments and Results
      3.4.1.5 Computational Requirements
      3.4.1.6 Fabricating Unknown PAs
    3.4.2 Temporal Analysis for PAD Generalization
      3.4.2.1 Proposed Approach
      3.4.2.2 Network Architecture
      3.4.2.3 Implementation Details
      3.4.2.4 Experimental Results
      3.4.2.5 Processing Times
  3.5 Summary

Chapter 4  Presentation Attack Detection for OCT Fingerprint Images
  4.1 Introduction
    4.1.1 Related Work
  4.2 Proposed Approach
    4.2.1 Preprocessing
    4.2.2 Otsu's Binarization
    4.2.3 Local Patch Extraction
    4.2.4 Convolution Neural Networks
  4.3 Experimental Results
    4.3.1 OCT Presentation Attack Database
    4.3.2 Results
    4.3.3 Visualizing CNN Learnings
  4.4 Summary

Chapter 5  Summary
  5.1 Contributions
  5.2 Suggestions for Future Work

BIBLIOGRAPHY
LIST OF TABLES

Table 2.1  Performance comparison (Average Classification Error [%]) of software-based spoof detection studies on LivDet 2011, 2013, 2015, and 2017 competition datasets. Since different competition databases utilize different fingerprint readers (optical/thermal/capacitive), spoof materials, and modes of data collection (cooperative/uncooperative), a direct performance comparison between different databases will not be a fair comparison.

Table 2.2  Related work on altered fingerprint detection. There is no public-domain altered fingerprint database available in the literature.

Table 2.3  Network hyper-parameters utilized in training CNN models for altered fingerprint detection and localization.

Table 2.4  Summary of the Liveness Detection (LivDet) datasets (LivDet 2011 and LivDet 2013) utilized in this study.

Table 2.5  Summary of the Liveness Detection (LivDet) datasets (LivDet 2015 and LivDet 2017) utilized in this study.

Table 2.6  Summary of the MSU Fingerprint Presentation Attack Dataset (MSU-FPAD) and Precise Biometrics Spoof-Kit Dataset (PBSKD).

Table 2.7  Summary of the datasets collected during Government Controlled Tests (GCT) I, II, and III as part of the IARPA ODIN program [123].

Table 2.8  Performance comparison between the proposed approach (bottom) and state-of-the-art (top) reported on LivDet 2015 dataset [113]. Separate networks are trained on the training images captured by each of the four fingerprint readers. Ferrfake_known and Ferrfake_unknown correspond to Known Spoof Materials and Cross-Material scenarios, respectively.

Table 2.9  Performance comparison between the proposed approach and state-of-the-art results reported on LivDet 2011 and LivDet 2013 datasets for intra-sensor experiments in terms of Average Classification Error (ACE) and Ferrfake @ Ferrlive = 1%.

Table 2.10  Average Classification Error (ACE), Ferrfake @ Ferrlive = 0.1% and Ferrlive = 1% on the MSU Fingerprint Presentation Attack Dataset (MSU-FPAD) and Precise Biometrics Spoof-Kit Dataset (PBSKD) for intra-sensor experiments.

Table 2.11  Performance comparison between the proposed approach and state-of-the-art results [114] reported on LivDet 2017 dataset for cross-material experiments in terms of Average Classification Error (ACE) and Ferrfake @ Ferrlive = 1%.

Table 2.12  Performance comparison between the proposed approach and state-of-the-art results reported on LivDet 2011 and LivDet 2013 datasets for cross-material experiments, in terms of Average Classification Error (ACE) and Ferrfake @ Ferrlive = 1%.

Table 2.13  Performance comparison between the proposed approach and state-of-the-art results [119] reported on LivDet 2011 and LivDet 2013 datasets for cross-sensor experiments, in terms of Average Classification Error (ACE) and Ferrfake @ Ferrlive = 1%.

Table 2.14  Performance comparison between the proposed approach and state-of-the-art results [126] reported on LivDet 2011 and LivDet 2013 datasets for cross-dataset experiments, in terms of Average Classification Error (ACE) and Ferrfake @ Ferrlive = 1%.

Table 2.15  True Detection Rate (%) @ False Detection Rate = 0.2% on the GCT-I, GCT-II, and GCT-III evaluation datasets.

Table 2.16  Detection time and PAD performance (TDR @ FDR = 0.2%) of Fingerprint Spoof Buster Lite.

Table 3.1  Summary of the studies primarily focused on fingerprint spoof generalization. The performance metrics utilized in different studies include ACE = Average Classification Error; EER = Equal Error Rate; and TDR = True Detection Rate (spoofs) @ a fixed FDR = False Detection Rate (spoofs).

Table 3.2  Summary of the MSU-FPAD-v2 and LivDet 2017 datasets. Spoof fingerprint images included in the test set of LivDet 2017 are fabricated using new materials
that are not used in the training set.

Table 3.3  Summary of the SilkID Fast Frame Rate fingerprint database collected at GCT-III as part of IARPA ODIN Program [123].

Table 3.4  Summary of the dataset and generalization performance (TDR (%) @ FDR = 0.2%) with leave-one-out method. A total of twelve models are trained where the material left-out from training is taken as the new material for evaluating the model.

Table 3.5  Generalization performance (TDR (%) @ FDR = 0.2%) of state-of-the-art spoof detectors, i.e., Slim-ResCNN [172] and Fingerprint Spoof Buster (FSB) [24], with leave-one-out method on MSU-FPADv2 dataset. A total of twelve experiments are performed where the material left-out from training is taken as the "unknown" material for evaluation.

Table 3.6  Performance comparison between the proposed approach and state-of-the-art CNN-only results [24, 172] on LivDet 2017 dataset for cross-material experiments in terms of Average Classification Accuracy (ACA) and TDR @ FDR = 1.0%.

Table 3.7  Cross-sensor fingerprint spoof generalization performance on LivDet 2017 dataset in terms of Average Classification Accuracy and TDR @ FDR = 1.0%.

Table 3.8  Studies primarily focused on fingerprint presentation attack detection using temporal analysis.

Table 3.9  Performance comparison (TDR (%) @ FDR = 0.2% and 1.0%) between the proposed approach and two state-of-the-art methods [24, 172] for the known-material scenario, where the spoof materials used in testing are also known during training.

Table 3.10  Performance comparison (TDR (%) @ FDR = 0.2% and 1.0%) between the proposed approach and two state-of-the-art methods [24, 172] for three cross-material scenarios, where the spoof materials used in testing are unknown during training.

Table 4.1  Existing studies on Optical Coherent Tomography (OCT) based fingerprint presentation attack detection.

Table 4.2  Summary of the Optical Coherent Tomography (OCT) fingerprint database collected at GCT-II as part of IARPA ODIN Program [123].

Table 4.3  Summary of the five-fold cross-validation and the performance achieved using Inception-v3 model.

LIST OF FIGURES

Figure 1.1  Fingerprint recognition based authentication systems used in day-to-day applications. (a) India's Aadhar Program [159], (b) Apple Pay [3], (c) International Border Crossing, US Visit (OBIM) [34], and (d) Access Control [149]. Image Source: Google Images.

Figure 1.2  Illustration of the morphological structure of the friction ridge skin. Image reproduced from [120].

Figure 1.3  Illustration of the fingerprint formation process. (a) Volar pads begin forming during weeks 6-7 of gestation, (b)-(c) localized ridge units appear, and (d)-(g) ridge units merge to form ridges with unique characteristics during weeks 10-11, (h) whole volar surface is ridged by 14 weeks, (i) sweat glands and pores begin forming during weeks 14-15, and (j) secondary ridges begin to form in weeks 15-17 and are fully matured by 24 weeks of gestation. Images reproduced from [77].

Figure 1.4  Fingerprints of William J. Herschel's son (A. E. H. Herschel) at ages (a) 7, (b) 17, and (c) 40 years. Images reproduced from [69].

Figure 1.5  Fingerprints of a subject at ages 34, 40, 42, 43, 44, and 45 years old from the longitudinal database used in [171].

Figure 1.6  Timeline illustrating some of the major milestones in the history of fingerprint recognition.

Figure 1.7  India's Aadhaar is the largest biometrics based identification system in the world, with more than 1.25 billion enrollments [159] (March, 2020). (a) A sample Aadhaar ID card containing a 12-digit unique number which is linked to an individual's
demographic and biometric information. (b) Some of the applications which utilize Aadhaar ID include electronic-Know Your Client (e-KYC) service, distribution of government subsidies, and processing of income tax and employee provident funds.

Figure 1.8  Fingerprint-based authentication is used in many commercial applications, including executing financial transactions, unlocking devices, access control, etc. (a) A user enrolling their fingerprint in Samsung Galaxy S10 with an in-display ultrasound-based fingerprint sensor, (b) fingerprint-based user authentication in ATM transactions, and (c) biometric-enabled payment cards with embedded fingerprint sensor and on-card storage for the fingerprint template.

Figure 1.9  The two major stages of a fingerprint recognition system, (a) enrollment and (b) recognition (verification or identification), are presented. These stages use the following modules: fingerprint capture, feature extraction, template creation, matching, and template database. Image adapted from [104].

Figure 1.10  Tenprint card used by the FBI to collect impressions of all ten fingers. The top two rows present the rolled impressions of all ten fingers and the bottom row presents the plain/slap impressions in 4-4-2 pattern. Image reproduced from [83].

Figure 1.11  Two types of cooperative fingerprint acquisition methods: (i) off-line method using ink-on-paper technique, and (ii) live-scan method using an electronic fingerprint reader to capture a digital friction ridge impression.

Figure 1.12  Different types of fingerprint impressions: (a) Plain/Flat, (b) Rolled, (c) Slap, and (d) Latent fingerprints.

Figure 1.13  Setup of optical fingerprint readers utilizing (a) a glass prism for Frustrated Total Internal Reflection (FTIR) of the incident light imaged using CCD or CMOS sensor, (b) direct-view multi-spectral setup employing polarized illumination of different wavelengths, and (c) an in-display optical sensing system for smartphones [65, 104, 138].

Figure 1.14  Optical fingerprint sensors utilized in our experiments, namely CrossMatch Guardian 200, SilkID SLK20R, and Lumidigm V302.

Figure 1.15  (a) Optical coherence tomography (OCT) scanner can be used to image the internal fingerprint structure as (b) 2D and (c) 3D depth profile. Images reproduced from (a) [154], (b) IARPA ODIN Program (GCT-II) [123] and (c) [33].

Figure 1.16  Fingerprint features are classified into three levels: (i) Level-1 features based on global ridge flow pattern, (ii) Level-2 features based on local ridge characteristics, such as ridge endings, bifurcations, etc., and (iii) Level-3 features including fine details like sweat pores, incipient ridges and creases. Images reproduced from [104].

Figure 1.17  Different components in a fingerprint recognition system are vulnerable to various types of attacks shown in red. This thesis contributes towards addressing some of the challenges pertaining to presentation attack detection.

Figure 1.18  Fingerprint presentation attacks can be realized using (a) gummy fingers [57, 108], (b) 2D or 3D printed fingerprint targets [4, 5, 14], (c) altered fingers [170], or (d) cadaver fingers [105].

Figure 1.19  Example procedure to create a gummy finger directly from a live finger. Plastic is used to create the mold and gelatin is used as the casting material.
Image reproduced from [105].

Figure 1.20  Example images of altered fingerprints: (a) Transplanted friction ridge skin from sole, and (b) fingers that have been bitten. Image source: [170].

Figure 2.1  Fingerprint spoof attacks can be realized using various readily available fabrication materials, such as PlayDoh, Wood Glue, Gelatin, etc. For each of the image pairs, the left image presents the actual spoof specimen while the right image presents the grayscale fingerprint impression captured of that spoof on a CrossMatch Guardian 200 fingerprint reader.

Figure 2.2  Visual comparison between (a) a live fingerprint and (b) the corresponding spoofs (of the same finger) made with different materials. Images are taken from LivDet 2011 dataset (Biometrika sensor) [167]. Our method can successfully distinguish between live and spoof fingerprints. The spoofness score for the live fingerprint is 0.00, and for spoof fingerprints the scores are 0.95, 0.97, 0.99, 0.99, and 0.95 for EcoFlex, Gelatin, Latex, Silgum, and Wood Glue, respectively.

Figure 2.3  A live fingerprint image (from LivDet 2015 dataset) captured using CrossMatch L Scan Guardian in its (a) original dimensions (800 x 750), and (b) resized to 227 x 227. A direct downsizing of the fingerprint image may result in the friction ridge area occupying less than 10% of the original image size, leading to significant loss of discriminatory information. Instead, local patches (96 x 96 upscaled to 227 x 227), as shown in (c), provide salient cues to differentiate a spoof from live fingerprints.

Figure 2.4  (a) Example of a live fingerprint and the corresponding spoof fingerprint with the artifacts introduced in the spoofs highlighted in red. (b) Local regions highlighted as green (live) and red (spoof) by evaluating all minutiae-centered local patches (96 x 96). (c) A subset of minutiae-based local patches along with their individual spoofness scores. The images are taken from MSU Fingerprint Presentation Attack Dataset (MSU-FPAD), CrossMatch sensor, and the spoof material used is Silicone (EcoFlex). The spoofness scores output by the proposed approach for the live and spoof fingerprints are 0.06 and 0.99, respectively. (Best viewed in color)

Figure 2.5  An overview of the proposed Fingerprint Spoof Buster [24], a state-of-the-art PAD, utilizing CNNs trained on local patches centered and aligned using minutiae location and orientation, respectively. A total number of 30 minutiae are detected in the input fingerprint image.

Figure 2.6  Local patches extracted around the fingerprint minutiae for (a) real fingerprint, and (b) spoof fingerprint (gelatin), and (c) aligned using minutiae orientation.
The spoofness score for each patch is in the range [0, 1]; the higher the score, the more likely the patch is extracted from a spoof fingerprint. For a given input test image, the spoofness scores corresponding to the local patches are averaged to give a global spoofness score. The classification decision is made based on a threshold learned from the training dataset; an image with a global spoofness score below the threshold is classified as live, otherwise as spoof. Only a subset of detected fingerprint minutiae are shown for illustrative purposes.

Figure 2.7  The proposed approach provides a fine-grained representation for fingerprint spoof detection by using minutiae-based local patches. A spoof fabricated using silicone which conceals only a partial region of the live finger is shown in (a) and the fingerprint imaged in (b) (enclosed in red). The proposed approach extracts and evaluates the minutiae-based local patches, and highlights the local regions as live (in green) or spoof (in red) as shown in (c) and (d). It can also highlight the regions of alterations as shown for a "Z" cut altered fingerprint in (e), (f) and (g). The proposed approach detected (b) and (e) as spoofs with the spoofness scores of 0.78 and 0.65, respectively. (Best viewed in color)

Figure 2.8  Illustrating the embeddings of minutiae-based local patches (96 x 96), for (a) live patch, (b) spoof patch, and (c) spoof patch (retouched to remove visible artifacts), in 1024-dimensional feature space from MobileNet-v1 bottleneck layer, transformed to 32 x 32 heatmaps, (d), (e), and (f), respectively, for visualization. A high spoofness score for the modified spoof patch is achieved, despite removal of artifacts, indicating the robustness of the proposed approach. (Best viewed in color)

Figure 2.9  Interface of the proposed Fingerprint Spoof Buster. It allows selection of the fingerprint reader and CNN model. (Best viewed in color)

Figure 2.10  Types of fingerprint alterations: (i) Obliteration, such as scars, or mutilations, (ii) Distortion, i.e., friction ridge transplantation to distort friction ridge area, and (iii) Imitation, i.e.
, transplantation or removal of friction ridge skin while still preserving a fingerprint-like pattern.

Figure 2.11  Examples of altered fingerprints and corresponding manually marked regions of interest (ROI) circumscribing the areas of alterations. Local patches overlapping with manually marked ROI are labeled as altered patches, while the rest are labelled as valid. The test phase is fully automatic and does not require any manual markup.

Figure 2.12  Examples of altered fingerprint localization by our proposed method. Local regions highlighted in red represent the altered portion of the fingerprint whereas regions highlighted in green represent the valid friction ridge area. (Best viewed in color)

Figure 2.13  An overview of the proposed approach for detection and localization of altered fingerprints. We trained two convolutional neural networks (Inception-v3 and MobileNet-v1) using full fingerprint images and local patches of fingerprint images where patches are centered on minutiae locations.

Figure 2.14  An overview of the proposed end-to-end presentation attack detection. (Best viewed in color)

Figure 2.15  Example images from MSU Fingerprint Presentation Attack Dataset (MSU-FPAD) acquired using (a) CrossMatch Guardian 200, and (b) Lumidigm Venus 302 fingerprint readers. Note that Lumidigm reader does not image PlayDoh (orange) spoofs.

Figure 2.16  Example images from Precise Biometrics Spoof-Kit Dataset (PBSKD) acquired using (a) CrossMatch Guardian 200, and (b) Lumidigm Venus 302 fingerprint readers. Note that Lumidigm reader does not image Silicone (EcoFlex) spoofs with NanoTips and BarePaint coatings.

Figure 2.17  Illustration of the timeline of IARPA ODIN Program [123]. The Phase-III will be completed in March 2021.

Figure 2.18  Histogram of NFIQ 2.0 quality scores for valid (green) and altered (red) fingerprint images. Approximately 75% of altered fingerprint images have a NFIQ 2.0 score of 40 or lower, and only 10% of the altered dataset has a NFIQ 2.0 score of larger than 50. The median NFIQ 2.0 score for altered fingerprint images is 23, while the median NFIQ 2.0 score for valid fingerprint images is 48. This suggests NFIQ 2.0's suitability for detecting altered fingerprints, particularly for cases of obliteration. (Best viewed in color)

Figure 2.19  Example of altered and valid fingerprint images used for training and testing in one of the five folds. The altered region is highlighted in red. The NFIQ 2.0 quality scores are also presented for each image; the larger the NFIQ 2.0 score, the higher the fingerprint quality. The NFIQ 2.0 quality scores range between [0, 100].

Figure 2.20  Example live and spoof fingerprints for Biometrika sensor from LivDet 2015 dataset, correctly and incorrectly classified by our proposed approach. (Best viewed in color)

Figure 2.21  ROC curves for live v. spoof classification of fingerprint images from LivDet 2011 Dataset (Biometrika sensor) utilizing (i) whole fingerprint image, (ii) randomly selected patches [96 x 96], (iii) minutiae-based patches of size [p x p], p ∈ {64, 96, 128}, (iv) score-level fusion of multi-resolution patches. (Best viewed in color)

Figure 2.22  Performance curves for the proposed altered fingerprint detection approach utilizing Inception-v3 and MobileNet-v1 CNN models. Yoon et al. [170] (baseline) achieved a TDR of 70% @ FDR = 2% on 4,433 altered fingerprints, while the proposed approach achieves a TDR (over five folds) of 99.24% ± 0.58% @ FDR = 2% on 4,815 altered fingerprints. (Best viewed in color)

Figure 2.23  Alteration score histograms for valid and altered fingerprints obtained by the proposed approach using the best performing Inception-v3 model. The small overlap between the valid and altered score distributions is an indication of the high discrimination power of the model. Note that the Y-axis is presented in log scale.
(Best viewed in color)

Figure 2.24  Example detections and their alteration scores output by the proposed approach. (a) and (d) present correctly classified images, while (b) and (c) present incorrect classifications. (b) is a valid fingerprint that receives a high alteration score primarily due to the noisy region on the right. (c) contains a small region of alteration which is similar to the noise present in valid fingerprints.

Figure 2.25  Example images with possible ground truth labeling error. (a) Incorrectly labeled as altered, and (b) incorrectly labeled as valid. The Inception-v3 model outputs an alteration score of 0.20 and 0.97 for (a) and (b), respectively, indicating (a) as valid and (b) as altered.

Figure 2.26  A confusion matrix of correct and incorrect classifications of bonafide and PA patches. The crucial regions that are responsible for the prediction made by the CNN architecture (CNN-Fixations) and the corresponding density heatmaps are illustrated on each local patch.

Figure 2.27  Examples of misclassified bonafide and PA fingerprint images along with the spoofness score (SS) output by the CNN architecture. Density heatmaps of the CNN-Fixations are also presented.

Figure 2.28  Illustration of the filter outputs, for a live and a spoof fingerprint patch, after the first and third convolution layers in the CNN architecture (Inception-v3). Different filters focus on different features such as location of sweat pores, noise artifacts, friction ridge flow, valley noise, etc.

Figure 2.29  Minutiae clustering. (a) fingerprint image; (b) extracted minutiae overlaid on (a); (c) 96 x 96 patches centered at each minutiae; (d) minutiae clustering using k-means (k is set to 10 here). The clusters, highlighted as yellow circles of same size, are shown only for illustrative purposes. In practice, the cluster sizes may vary based on the minutiae distribution.

Figure 2.30  User interface of the Android application, Fingerprint Spoof Buster Lite, shown in (a). It allows selection of an inference model as shown in (b). The user can load a fingerprint image from phone storage or capture a live scan from a fingerprint reader as shown in (c). The app executes PAD and displays the decision along with highlighted local patches on the screen shown in (d) and (e).

Figure 3.1  Light absorbance property of twelve PA materials in the 200 nm - 800 nm wavelength spectrum computed using a PerkinElmer Lambda 900 UV/Vis/NIR spectrometer [130].

Figure 3.2  Fourier Transform Infrared Spectroscopy [148] of twelve PA materials in the 260-375 wavenumber range.

Figure 3.3  Representation of bonafide fingerprints and presentation attack instruments fabricated with different materials in the 3D t-SNE feature space. The original representation is 1024-dimensional, obtained from the trained CNN model. (Best viewed in color). Available in 3D at https://plot.ly/~icbsubmission/0/livepa-feature-space/

Figure 3.4  Representation of bonafide fingerprints and different subsets of PA materials in 3D t-SNE feature space from different angles selected to provide the best view. The bonafide (dark green) and silicone (navy blue) are included in all graphs for perspective. (Best viewed in color)

Figure 3.5  Average Pearson correlation values between 12 PA materials based on the material characteristics (two optical and two physical).

Figure 3.6  A complete-link dendrogram representing the hierarchical (agglomerative) clustering of PAs based on the shared material characteristics.

Figure 3.7  3D t-SNE visualization of feature embeddings learned by Fingerprint Spoof Buster [24] of (a) live (dark green) and eleven known PA materials (red) (2D printed paper, 3D universal targets, conductive ink on paper, dragon skin, gold fingers, latex body paint, monster liquid latex, playdoh, silicone, transparency, and wood glue) used in training, and unknown PA, gelatin (yellow). A large overlap
between unknown PA (gelatin) and live feature embeddings indicates poor generalization performance of state-of-the-art PA detectors. (b) Synthetic live (bright green) and synthetic PA (orange) images generated by the proposed Universal Material Generator (UMG) wrapper improve the separation between real live and real PA. 3D t-SNE visualizations are available at http://tarangchugh.me/posts/umg/index.html (Best viewed in color)

Figure 3.8  Proposed approach for (a) synthesizing PA and live fingerprint patches, and (b) design of the proposed Universal Material Generator (UMG) wrapper. An AdaIN module is used for performing the style transfer in the encoded feature space. The same VGG-19 [147] encoder is used for computing content loss and style loss. A discriminator similar to the one used in DC-GAN [133] is used for computing the adversarial loss. The synthesized patches can be used to train any PA detector. Hence, our approach is referred to as a wrapper which can be used in conjunction with any PA detector.

Figure 3.9  Style transfer between real PA patches fabricated with latex body paint and silicone to generate synthetic PA patches using the proposed Universal Material Generator (UMG) wrapper. The extent of style transfer can be controlled by the parameter α ∈ [0, 1].

Figure 3.10  Synthesized PA patches (96 x 96) by the proposed Universal Material Generator using patches of a known (source) material (first column) conditioned on style (α = 0.5) of another (target) known material (first row).

Figure 3.11  Synthetic live images generated by the proposed Universal Material Generator. (a) Source style images, (c) target style images, and (b) synthesized live images.

Figure 3.12  Example fingerprint images from LivDet 2017 database captured using three different fingerprint readers, namely Digital Persona, GreenBit, and Orcanthus. The unique characteristics of fingerprints from Orcanthus reader explain the performance drop in the cross-sensor scenario when Orcanthus is used as either the source or the target sensor.

Figure 3.13  UMG wrapper used to transfer style from (b) a real live patch from Orcanthus reader, to (a) a real live patch from Digital Persona, to generate (c) a synthesized patch.

Figure 3.14  Fingerprint patches fabricated with real PAs (a) silicone, (b) latex body paint, (c) their mixture (in 1:1 ratio), and (d) synthesized using UMG wrapper with style transfer between silicone and latex body paint.

Figure 3.15  3D t-SNE visualization of feature embeddings of real live fingerprints, PA fingerprints fabricated using silicone, latex body paint, and their mixture (1:1 ratio), and synthesized PA fingerprints using style-transfer between silicone and latex body paint PA fingerprints. The 3D embeddings are available at http://tarangchugh.me/posts/umg/index.html (Best viewed in color)

Figure 3.16  A sequence of ten color frames are captured by a SilkID SLK20R fingerprint reader in quick succession (8 fps). The first, fifth, and tenth frames from a live finger (a)-(c), and a PA (tan pigmented third degree) (d)-(f) are shown here. Unlike PAs, in the case of live fingers, appearance of sweat near pores (highlighted in yellow boxes) and changes in skin color (pinkish red to pale yellow) along the frames can be observed.

Figure 3.17  Examples of (i) live and (ii) PA fingerprint images. (a) Grayscale 1000 ppi fingerprint image, and (c)-(g) the five (colored) frames captured by SilkID SLK20R Fast Frame Rate reader. Live frames exhibit the phenomenon of blanching of the skin, i.e.
, displacement of blood when a live finger is pressed on the glass platen, changing the color from red/pink to pale white. (Best viewed in color)

Figure 3.18  A Bayer color filter array consists of alternating rows of red-green and green-blue filters. Bilinear interpolation of each channel is utilized to construct the RGB image.

Figure 3.19  An overview of the proposed approach utilizing a CNN-LSTM model trained end-to-end on sequences of minutiae-centered local patches for fingerprint PA detection.

Figure 4.1  Different layers of finger skin (stratum corneum, epidermis, papillary junction, and dermis) are distinctly visible in a fingertip OCT scan, along with helical shaped eccrine sweat glands, in (a) 3-D OCT volume and (b) 2-D OCT depth profile. Note that (a) and (b) are OCT scans of different fingers. Image (a) is captured using THORLabs Telesto series (TEL1325LV2) SD-OCT scanner [154] and (b) is reproduced from [33].

Figure 4.2  A schematic diagram of a spectral-domain optical coherent tomography (SD-OCT) scanner. The source light is emitted by a superluminescent diode (SLD) which is split into a sample arm and a reference arm. A high-resolution tomography image of the internal microstructure of the biological tissue is obtained by measuring the interference signal of the sample backscattered light. Image reproduced from [100].

Figure 4.3  Direct view images with red arrows presenting the scanned line and the corresponding cross-sectional B-scan for (a) a bonafide finger and (b) a pigmented ecoflex presentation attack.

Figure 4.4  An overview of the proposed fingerprint presentation attack detection approach utilizing local patches extracted from the segmented depth profile from OCT B-scans.

Figure 4.5  Depth profile of a bonafide finger manifests a layered tissue anatomy quite distinguishable from the depth profile of a presentation attack without any such structure.

Figure 4.6  Examples of bonafide and presentation attack samples from the OCT fingerprint database utilized in this study.

Figure 4.7  Setup of a THORLabs Telesto series Spectral-domain OCT scanner (TEL1325LV2). Image taken from [154].

Figure 4.8  ROC curves for the five-fold cross-validation experiments. The red curve represents the average performance with the grayed region representing the confidence interval of one standard deviation.

Figure 4.9  Patches (150 x 150) from bonafide and PA OCT B-scans input to the model are presented. The detected CNN-Fixations and a heatmap presenting the density of CNN-Fixations are also shown. A high density of fixations is observed along the stratum corneum (finger surface) and at the papillary junction in both bonafide and PA patches. (Best viewed in color)

LIST OF ALGORITHMS

Algorithm 1  Training UMG wrapper
Algorithm 2  Presentation Attack Detection for OCT Fingerprint Images

Chapter 1

Introduction

Over 125 years ago, the pioneering work done by Sir Francis Galton brought together and strengthened the evidence essential to the validation of fingerprints as a means of personal identification: permanence of the fingerprint characteristics, uniqueness of ridge details, variability and identifiability of friction ridge patterns. In his 1892 book titled "Finger Prints" [52], he judiciously commented on the potential of friction ridges:

"Let no one despise the ridges on account of their smallness, for they are in some respects the most important of all anthropological data. We shall see that they form patterns, considerable in size and of a curious variety of shape, whose boundaries can be firmly outlined, and which are little worlds in themselves. They have the unique merit of retaining all their peculiarities unchanged throughout life, and afford in consequence an incomparably surer criterion of identity than any other bodily feature."
Fingerprints have a long history of use as a means of reliably identifying individuals. The earliest recorded use of fingerprints dates back to 1955-1913 BC, when clay tablets with fingerprints were used to seal business contracts in ancient Babylon. In China, fingerprints were used to sign legal documents by persons without writing skills in 600-700 AD [63, 104]. Such historical records indicate an inquisitiveness and perhaps purposeful focus on fingerprints. However, the scientific study of fingerprints as a tool of human identification emerged only in the late 19th century [47, 51, 68].

Figure 1.1  Fingerprint recognition based authentication systems used in day-to-day applications. (a) India's Aadhar Program [159], (b) Apple Pay [3], (c) International Border Crossing, US Visit (OBIM) [34], and (d) Access Control [149]. Image Source: Google Images.

With the advances in science and technology over the last few decades, fingerprint recognition systems have become ubiquitous, with their footprint in a plethora of different applications such as mobile payments [3], access control [149], international border crossing [34], and national ID [159] (see Figure 1.1). Although the research community has made significant advances over the last few decades, there remain certain challenging avenues in fingerprint recognition where further advances are required.

In this chapter, we describe the morphology and development process of the friction ridge skin. We then present the fundamental tenets of fingerprints, highlighting the two of them which validate their use for personal identification: uniqueness and permanence. We then discuss the major milestones in the history of fingerprint recognition. Next, we describe the architecture of modern-day automated fingerprint identification systems (AFIS) and discuss the vulnerabilities and research avenues in fingerprint recognition. Finally, we conclude the chapter by presenting the contributions of this dissertation.

Figure 1.2  Illustration of the morphological structure of the friction ridge skin. Image reproduced from [120].

1.1 Morphology and Development of Friction Ridges

The friction ridge skin is a layered tissue with the outermost layer known as epidermis; the external-facing sublayer of the epidermis, where the fingerprint surface exists, is known as stratum corneum [96]. The layer below the epidermis is known as dermis, and the junction between the epidermis and dermis layers is known as the papillary junction. There are helically shaped ducts in the epidermis layer connecting the eccrine (sweat) glands in the dermis to the sweat pores on the finger surface. See Figure 1.2.

Biological evidence suggests that the development of friction ridges begins in the late embryological and early fetal development periods and that they are physiologically present at birth [163]. At 7-8 weeks of estimated gestational age (EGA), swollen mesenchyme tissue^1 under the epidermis layer on the palmar surface of the hands and soles of the feet, called Volar Pads, are formed. See Figure 1.3(a).

^1 Mesenchyme tissue is a part of the embryo which develops into connective tissue, cartilage, bone, etc. [163]
^2 Basal cells are a type of cell within the skin that produces new skin cells as old ones die off [7].
Duringweeks 14 15 ofgestation,theprimaryfrictionridgesexperienceproliferationintwo directions:theupwardpushofnewcellgrowthandthedownwardpenetrationofthesweatglands. Typically,thewholevolarsurfaceisridgedby 14 weeksofEGA(Figure1.3(h)).Betweenweeks 15 17 ofEGA,sweatporesbeginformingandsecondaryridgesappearbetweentheprimary ridgesandtheundersideoftheepidermis(Figure1.3(i)-(j)).Duringweeks 17 24 secondary ridgesbecomecompletelymature.Thesecondaryridges(orsurfacefrictionridgepattern)scanned bytraditional(opticalandcapacitive)readersaremerelyaninstanceoraprojectionof theprimaryridges,amasterprintexistingontheintersectionofepidermisanddermislayers( i.e. , papillaryjunction). Duringthedevelopmentofprimaryfrictionridge,thecentralnervousandcardiovascularsys- temsalsoundergoacrucialperiodofdevelopment.Dispositionofcapillary-nervepairsbeneaththe dermislayerproducesanidenticalvascularwiththesameindividualarchitecture[141]. Theseobservationssuggestthepermanenceofminorcutsandbruisesonthe donotchangepatternsbecausenewskincellsaregeneratedbeneaththeepidermisand facilitatethereformulationofpatternsontheepidermis. 4 Figure1.3Illustrationoftheformationprocess.(a)Volarpadsbeginformingduring weeks 6 7 ofgestation,(b)-(c)localizedridgeunitsappear,and(d)-(g)ridgeunitsmergetoform ridgeswithuniquecharacteristicsduringweeks 10 11 ,(h)wholevolarsurfaceisridgedby 14 weeks,(i)sweatglandsandporesbeginformingduringweeks 14 15 ,and(j)secondaryridges begintoforminweeks 15 17 andarefullymaturedby 24 weeksofgestation.Imagesreproduced from[77]. 5 Figure1.4FingerprintsofWilliamJ.Herschel'sson(A.E.H.Herschel)atages(a)7,(b)17,and (c)40years.Imagesreproducedfrom[69]. 1.1.1FundamentalTenetsofFingerprintRecognition Inprinciple,anyphysiological,behavioral,oranatomicalcharacteristicofanindividualcanbe usedasabiometrictraitforpersonalHowever,therearetwofundamentaltenetsof thatunderlietheirwideuseforrecognizingindividuals: (i) Uniqueness :Duetotherandomforcesinplayduringtheformationoffrictionridgedetails, notwoevenforthesameindividual,areidentical.IndividualssharingthesameDNA, suchasmonozygatictwins,alsohaveunique[84].Severalstudieshaveattemptedto assesstheindividualityof[127],however,thesestudiesareeitherbasedonrelatively simplestatisticalmodelsofcharacteristicsorrelyonempiricalevaluationinvolvinga smallnumberofsubjects. (ii) Permanence :Frictionridgepatternsarebelievedtobepersistentduringthelifetimeof anindividual.WilliamHerschel,aGerman-bornBritishastronomer,wasthetodemonstrate thepermanenceofinhis 1916 booktitled TheOriginofFinger-Printing [69].He collectedlongitudinalinkedimpressionsofhisson'sattheagesof 7 , 17 ,and 40 yearsold andconcludedthatntsremainedconstantovertime(seeFigure1.4).In 2015 ,Yoonand Jain[171]conductedthelargestformalstudytilldateinvolvinglongitudinalrecords 6 Figure1.5Fingerprintsofasubjectatages 34 , 40 , 42 , 43 , 44 ,and 45 yearsoldfromthelongitudinal databaseusedin[171]. of 15 ; 597 subjectstoassessthepermanenceof(seeFigure1.5).Theyutilizedmulti- levelstatisticalmodelsandastate-of-the-artAFISandconcludedthattherecognition accuracyoftheAFISdidnotdegradewithtime(over12yearsforwhichdataisavailable).This observationassertedthatrecognitionaccuracydoesnotchangeoverthelifetimeofan individual,despiteminorchangesintheridgestructureduetocutsandbruises. 
Inadditiontouniquenessandpermanence,thesuccessofntsasabiometrictraitisalso attributedtohowwellitseveralkeyprinciples:(i)universality,(ii)performance,(iii)user acceptance,(iv)collectability,(v)throughput,(vi)templatesize,(vii)easeofsystemintegration, and(viii)resistancetospoofandtemplateattacks[86]. 1.2FingerprintRecognitionMilestones 1.2.1EarlyDevelopments Thebook AchaeologyintheHolyLand byKenyonreportsthediscoveryofthumbprintsfoundin NeolithicbricksfromtheancientcityofJericho,StateofPalestine,ca. 7000 BC[89].Similar ancientartifactswithcarvingsoffrictionridgepatternshavebeenfoundinmanyplacesaround theworld.However,theearliestrecordedauthenticationapplicationofdatesbackto 1955 1913 BC,whenclaytabletswithwereusedtosealbusinesscontractsinancient Babylon.In 600 700 ADChina,wereusedtosigncontractsandlegaldocumentsin theTangperiod[63]. 7 1.2.2SeminalStudies Whilemanyremnantsofhavebeenfoundinhistory,thestudyof asatoolofhumanemergedonlyinthe 19 th century.In 1858 ,SirWilliamHerschel astheBritishchiefadministrativeofinBengal,India,mandateduseofhandprintsforcivil contractsforpayrolldistributiontolaborers.In 1869 Britain,theHabitualCriminalsActwas passedtodevelopameanstoclassifytherecordsofhabitualcriminals(orrepeatoffenders),such asbodymeasurement,marks,orphotograph,toreadilyre-identifythemwithcertainty[129].In 1880 ,Dr.HenryFauldspublishedaseminalarticleinNaturesuggestingtheuseoffor criminalinvestigations[47].In 1882 ,AlphonseBertillon,aclerkintheParisPolice Bureau,devisedasystemofrecordingbodymeasurements(knownas Bertillonage ),whichwas lateradoptedthroughoutFrance.TheusinghissystemwasmadeinFebruary 1883 .Hisanthropometrycardsweresupplementedwithonthebackside,whichled tomorecomparedtoanyotherbodymeasurements[63]. ItwasthestudiesbySirFrancisGalton,cousinofCharlesDarwin,thatbroughttogetherand strengthenedtheevidenceessentialtothevalidationofasmeansofpersonal cation.In 1892 ,inhisseminalbook FingerPrints [52],hepointedoutridgecharacteristicswhich purportedlymakeeachunique,suchasridgeendingsandbifurcationsandmadethe statementthatremainunchangedthroughoutthelifetime.Inhonorofhiscontribu- tions,theridgecharacteristics(nowwidelyknownas minutiae points)arealsocalled fiGaltonfl details. In 1900 ,SirEdwardHenryintroducedaclasssystem[67],which waslaterpopularlyknownas HenrySystemof .In 1901 ,itwasofintroduced atNewScotlandYardforcriminal[63].In 1963 ,MitchellTrauringproposedthe algorithmicapproachforcomparingfrictionridgepatternsbasedonminutiaedetails[157]. TheAutomatedFingerprintSystems(AFIS)becamearealityin 1974 ,avoiding tediousandtimeconsumingmanualapproachtocomparing 3 . 3 https://www.secureidnews.com/news-item/a-history-of- 8 Figure1.6Timelineillustratingsomeofthemajormilestonesinthehistoryofrecogni- tion. 9 1.2.3LandmarksinLawEnforcementApplications In 1880 ,Dr.HenryFauldssuggestedtheuseofnotonlyforbutalsofor criminalinvestigations[47].Thirteenyearslaterin 1893 ,wereusedforthetime tosolveamurdercaseoftwochildreninArgentina[63].In 1897 inBengal,India,anothermurder casewassolvedusingtwobrownsmudgesoffoundonanalmanac.SirEdwardHenry, Herschel'ssuccessorinIndia,foundtheprintstomatchwithanex-convictKangaliCharan,whose thumbprintwasalreadyintherecordsduetoapriortheftconviction[63]. 
In 1901 ,useofwasofintroducedatNewScotlandYardbySirEdward HenryforcriminalreplacingtherelativelyinaccurateBertillonsystem.The large-scalesystematicmethodofiwasadoptedinUnitedStatesof Americain 1902 .Dr.HenryForestinstalledthenewsystemtoinhibitapplicantsfromcheatingthe NewYorkCivilServiceCommission[63].Inthefollowingyears,authentication wasadoptedintheNewYorkStatePrison( 1903 )andtheU.S.Army( 1906 ).Subsequently,a youngwomannamedMaryHolland,studyingtheHenrysystem,wentthroughouttheUnitedStates teachingthesystemtovariouslawenforcementagencies. Amajordevelopmenthappenedintheyear 1924 ,whentheUnitedStatesCongressmandated thecollectionofntsofcriminals.Consequently,anewdivisionwasinsti- tutedattheFederalBureauofInvestigation(FBI).In 1933 ,aunitspecializingintechnicalanalysis oflatent i.e. ,noisymarksunintentionallyleftatacrimescene,wasalsoestab- lishedattheFBI[120].Withtheincreasingloadtomaintainalargerepositoryandperformmanual oftherewasaneedtoautomatetheprocess. AreportcompiledbytheRANDCorporation[62]highlightedtheopportunitiesformuchmore effectiveuseofphysicalevidencesuchastoimprovecrimesolvingperformance. Recognizingthepotentialofemergingtechnologytogetherwithelectronicsrevolutionhappening in1970s,agenciesincludingtheFBI,theUKHomeOfandtheJapaneseandFrenchpolice departmentsundertookresearchinitiativesthatledtodevelopmentofAutomatedFingerprintIden- Systems(AFIS)[92].Lawenforcementagenciesatthestateandlocallevelalsobegan 10 installingsuchsystemsknownasStateAFIS(SAFIS).,in1984,astateAFISsupplied totheauthoritiesinSanFransisco,withacompletelynewficrimescenetocourtroomflphilosophy, proveditsworthintherealworld 4 . In 1999 ,theFBIlaunchedanIntegratedAFIS(IAFIS)whichallowedelectronicrecordsubmis- sionfromstateandlocalauthoritiestothenationaldatabaseandsupportedcapabilitiestoperform directlarge-scalesearchesinthenationalrepository[92].Italsosupportedautomatedtenprint andlatentsearches,electronicexchangesofandresponses,andtext-based searchesbasedondescriptiveinformation.In 2011 ,IAFISwasupgradedtotheNextGeneration (NGI)system,withthelargestcollectionofcriminalrecordsandenhanced- printrecognitioncapabilitiesimprovingmatchingaccuracyfrom92%to99.6%with fasterresponsetimes 5 .ItismaintainedbytheFBICriminalJusticeInformationService(CJIS)and containsofmorethan 145 : 7 millioncriminalandcivilindividualsasofJune2019 6 . 1.2.4NotableUseinCivilandCommercialApplications Inadditiontolongstandingapplicationsinlawenforcementandforensics,anumber ofcivilianapplicationsareutilizingtheindividualizationpropertyofThishasbeen possibleduetotheavailabilityoflow-costacquisitiondevices,efandrobust recognitionalgorithms,andincreaseinprocessingpowerandmemorycapacityatlow prices.Forexample,asolidstatereaderwithmatchingalgorithminamobile phonecostsunderUS$2perdevice. 
NationalID :In 2009 ,theUniqueAuthorityofIndia(UIDAI)launchedana- tionalIDsystemknownas Aadhaar 7 fortheresidentsofIndia.Anyindividual,irrespective ofageandgender,cansubmittheirdemographicandbiometricinformation(ten twoiris,andfacephotograph)toenrollinthesystemandobtaina12-digituniqueID.Itis 4 https://www.gemalto.com/gohistory 5 https://www.fbi.goand-other-biometrics/ngi 6 https://www.fbi.gorepository/ngi-monthly-fact-sheet 7 https://uidai.gov.in/what-is-aadhaar.html 11 Figure1.7India'sAadhaaristhelargestbiometricsbasedsystemintheworld,with morethan 1 : 25 billionenrollments[159](March,2020).(a)AsampleAadhaarIDcardcontaining a12-digituniquenumberwhichislinkedtoanindividual'sdemographicandbiometricinforma- tion.(b)SomeoftheapplicationswhichutilizeAadhaarIDincludeselectronic-KnowYourClient (e-KYC)service,distributionofgovernmentsubsidies,processingincometaxandemployeeprov- identfunds. designedasastrategicpolicytoolforsocialandinclusion,corruption-freedelivery ofpublicsectorreforms,managingbudgets,increasingconvenienceandpromoting hassle-freepeople-centricgovernance(seeFigure1.7).Biometricinformationallowsthe authoritiestoperformde-duplicationatenrollmentandonlineauthenticationintheto preventanymisuse.Itisbyfarthelargestbiometricsbasedidentsysteminthe world,withmorethan 1 : 25 billionenrollments[159](March,2020). InfantFingerprinting :AsofDecember2019,thereareover 677 millionchildrenworld- wideintheagegroupof 0 4 yearsold 8 andover 370 ; 000 areborneveryday 9 .Given thatamajorityofthesechildbirthsoccurindevelopingcountries,wheretheinfant 10 mortal- 8 UNDataProject:https://bit.ly/2MF9FNs 9 https://www.indexmundi.com/world/birth rate.html 10 Thetermfiinfantflistypicallyappliedtoyoungchildrenunderoneyearofage. 12 Figure1.8Fingerprint-basedauthenticationisusedinmanycommercialapplications,including executingtransactions,unlockingdevices,accesscontrol,etc.(a)Auserenrollingtheir inSamsungGalaxyS10withanin-displayultrasound-basedsensor,(b) userauthenticationinATMtransactions,and(c)biometric-enabledpaymentcardswithembedded sensorandon-cardstoragefortemplate. ityratecanbeashighas 180 deathsper 1000 livebirths 11 ,based canprovideaformofidentityforhealthcareapplicationssuchastrackingvaccinationand improvingnourishment[78].Alow-costreaderdesignedtocapture infantintheisshowntoachieveanaccuracyofTAR= 90% @FAR= 0 : 1% [45]. CommercialApplications :Duetotherisingconcernsaboutdatasecurityandial fraud,coupledwiththeadventofcompactandinexpensivesensors,manycommercialorga- nizationshaveinitiatedtheirowndeploymentofconsumerauthentication, especiallyforaccesscontrolandsecuretransactions.Manyconsumerdevices,such aslaptopsandsmartphones,utilizesolid-statereadersfordeviceunlockingand makingonlinepurchases 12 .In 2018 ,theglobalpenetrationofsmartphoneswith sensorsreached 67% comparedtoonly 19% in2014 13 .Mastercard 14 andVisa 15 areconduct- ingpilotprogramsofutilizingbiometricpaymentcardswithembeddedsensors, 11 https://www.infoplease.com/world/health-and-social-statistics/infant-mortality-rates-countries 12 https://support.apple.com/en-us/HT207054 13 https://www.statista.com/statistics/522058/global-smartphone-penetration/ 14 https://www.mastercard.us/en-us/merchants/safety-security/biometric-card.html 15 https://usa.visa.com/visa-everywhere/security/biometric-payment-card.html 13 developedbyFingerprintCards 16 andGemalto 17 ,toreplacePIN/signaturebaseduserau- thenticationandprovideuserconvenience.Theenrolledtemplateisstoredonthe cardinasecureenvironmentforadditionalsecurity.SeeFigure1.8. 
1.3 Design of Automated Fingerprint Recognition Systems

In the early days of fingerprint use, primarily in law enforcement agencies, fingerprint impressions were collected using off-line methods, i.e., printer's ink applied to the subject's fingers and then obtaining the impressions on ten-print cards (see Figure 1.10), which were then manually compared to a query fingerprint. These cards contain both plain and rolled impressions of all ten fingers^18. While ten-print cards are still in use by some law enforcement agencies, most have moved to digital acquisition via slap scanners^19,20.

With the advancements in both fingerprint sensing technology and automated matching algorithms, ten-print recognition has become extremely accurate and efficient. A typical fingerprint recognition system contains the following two stages: enrollment and recognition (see Figure 1.9).

1. Enrollment: During this stage, an individual's fingerprint, acquired using a fingerprint reader, is processed to extract salient features and generate a fingerprint template. The template is then tagged with a unique user identifier for retrieval and is stored with associated metadata in a database, known as the reference, background, gallery, or enrollment database.

2. Recognition: Depending on the application context, the recognition of an individual can be done either to validate the claimed identity (verification) or to establish the identity of an unknown individual (identification). In both cases, a fingerprint is acquired and processed to generate a template, known as the query or probe template.

Figure 1.9: The two major stages of a fingerprint recognition system, (a) enrollment and (b) recognition (verification or identification), are presented. These stages use the following modules: fingerprint capture, feature extraction, template creation, matching, and template database. Image adapted from [104].

Verification: In the verification scenario, the query template is accompanied by a user identifier (claim of identity) which is used to retrieve the enrolled fingerprint template from the reference database. The system either accepts or rejects the submitted claim of identity by performing a one-to-one comparison between the query template and the retrieved reference template. Popular examples of this scenario include fingerprint-based access control and large-scale civil ID systems (e.g., Aadhaar), where the user provides a unique ID (e.g., an employee RFID card or the Aadhaar 12-digit unique ID) and a fingerprint impression for authentication.

Identification: In the identification scenario, no claim of identity is made. The goal of the system is to establish an identity of a subject by searching the entire reference database for a match. Therefore, a biometric system operating in the identification mode performs one-to-many comparisons to establish if the user is already enrolled in the database, and if so, returns the user identifier that matched. The system may also determine that the subject is not enrolled in the reference database. A common use-case of this scenario is a criminal investigation, where a fingerprint left at the crime scene is used to identify if the perpetrator is already enrolled in the database.

The enrollment, verification, and identification processes involved in fingerprint recognition make use of the following modules: (i) Fingerprint Acquisition, (ii) Feature Extraction, (iii) Template Database, and (iv) Matching.

1.3.1 Fingerprint Acquisition

The process of capturing the friction ridge details as a fingerprint impression for enrollment or recognition is known as fingerprint acquisition. It can be carried out in either a controlled or an uncontrolled manner. There are two controlled acquisition methods: (i) off-line methods, such as applying ink on the finger and creating an inked impression by pressing (i.e., plain/slap fingerprints) or rolling the finger (i.e., rolled fingerprints) on paper, and (ii) live-scan methods which utilize electronic fingerprint sensors^21 to acquire digital friction ridge impressions (see Figure 1.11). In both of these methods, the capture conditions are favorable with a cooperative subject, resulting in noise-free impressions on a clear background. Such impressions are known as exemplar fingerprints. On the other hand, in the case of uncontrolled (or non-attended) acquisition, there is no guarantee of the quality of the acquired fingerprint image. This is especially true for latent fingerprints at crime scenes, which are routinely used by forensics agencies to identify the culprit. Extremely important in forensic applications, latent fingerprints (also known as marks) are the friction ridge impressions unintentionally left on a surface touched by the finger. The oil secreted from the sebaceous glands in the skin gets deposited on a surface, such as glass, a currency note, etc., touched by the finger. Depending on the characteristics of the surface, latents are enhanced and "lifted" (acquired) using physical (e.g., dusting with powder), chemical (e.g., ninhydrin treatment), and/or photographical (e.g., ultraviolet imaging) methods. Figure 1.12 presents the different types of fingerprint impressions, namely plain, slap, rolled, and latent fingerprints.

Figure 1.10: Ten-print card used by the FBI to collect impressions of all ten fingers. The top two rows present the rolled impressions of all ten fingers and the bottom row presents the plain/slap impressions in a 4-4-2 pattern. Image reproduced from [83].

Figure 1.11: Two types of cooperative fingerprint acquisition methods: (i) off-line method using the ink-on-paper technique, and (ii) live-scan method using an electronic fingerprint reader to capture a digital friction ridge impression.

The most widely used form of fingerprint acquisition is using live-scan devices to acquire a digital fingerprint. The main parameters characterizing a digital fingerprint image are: resolution, area, number of pixels, geometric accuracy, contrast, and geometric distortion [104]. To ensure good quality of the acquired fingerprint impression and interoperability between various AFIS, the US Criminal Justice Information Services (CJIS) released a set of specifications^22 that regulate the quality and the format of both fingerprint images and FBI-compliant off-line/live-scan scanners, called Appendix F. Another, less-stringent standard designed to support one-to-one verification for fingerprint capture devices in civilian applications, specifically for the Personal Identity Verification program, is PIV-071006.

1.3.1.1 Sensing Technologies

The ubiquitous use of fingerprint recognition in many consumer and government applications has led to the development of compact, high-resolution, and low-cost fingerprint sensing technologies. There are a number of live-scan sensing mechanisms (e.g., optical, solid-state, ultrasound, optical coherence tomography, etc.) that can be used to detect the ridges and valleys present on the finger.

Figure 1.12: Different types of fingerprint impressions: (a) Plain/Flat, (b) Rolled, (c) Slap, and (d) Latent.

Optical: Fingerprint readers utilizing optical imaging are one of the most widely used readers in the commercial sector. Most optical readers operate either on the principle of Frustrated Total Internal Reflection (FTIR) or in a direct-view setup, where the camera/sensor directly captures the image of the finger. In the case of FTIR, the reader is typically an assembly of a glass prism, visible or infrared spectrum LEDs, and a CMOS or CCD sensor. The acquisition of a fingerprint involves the following steps: (i) the finger is placed on a glass prism, (ii) the surface is illuminated with LEDs, and (iii) the incident light on the ridges is absorbed while that on the valleys undergoes frustrated total internal reflection between the faces of the glass prism to reach the sensor where the fingerprint is imaged [104]. In the case of direct-view imaging, the finger is placed on a glass platen, illuminated with LEDs, and the image is captured using a sensor placed below the platen.

^16 https://www
^17 https://wwwbiometric-card
^18 A plain (or slap) fingerprint refers to an impression made by pressing a finger on a surface, and a rolled fingerprint is an impression made by rolling a finger from nail-to-nail in order to capture all friction ridge details, including the sides.
^19 https://www.edo.cjis.gov/artifacts/standard-form-fd-258-1.pdf
^20 https://www.fbi.goand-other-biometrics/recording-legible-
^21 A fingerprint reader is a "black box" device, sold "as-is" by a commercial vendor, which typically contains an imaging sensor that acquires digital fingerprint images. However, in the literature, the term fingerprint sensor is used interchangeably to imply a fingerprint reader.
^22 https://www.fbibiospecs.cjis.goAQ
Figure 1.13: Setup of optical fingerprint readers utilizing (a) a glass prism for Frustrated Total Internal Reflection (FTIR) of the incident light, imaged using a CCD or CMOS sensor, (b) a direct-view multi-spectral setup employing polarized illumination of different wavelengths, and (c) an in-display optical sensing system for smartphones [65, 104, 138].

The image is processed to enhance the ridge-valley contrast. Some optical readers capture multiple images of the same finger, using different wavelengths (visible and near infrared) and different polarized conditions, which are fused together to produce a multi-spectral composite image. These images are robust to sub-optimal skin and ambient conditions [138]. However, one of the major limitations of optical readers is their bigger form factor, unlike solid-state capacitive readers, which has inhibited their use in small electronic devices such as smartphones. Recent advancements have led to the development of an in-display optical reader that is placed under the smartphone touchscreen [65] (see Figure 1.13). Figure 1.14 presents the different optical fingerprint sensors utilized in this thesis.

Solid-state: Solid-state sensing technology utilizes an array of mini-sensors to measure one of the following properties: (i) capacitance difference between ridges and valleys, (ii) pressure variations as the finger interacts with the sensor, or (iii) current generated on a pyro-electric sensor bed because of temperature differentials. Solid-state readers, because of their low cost and small size, are easily embeddable in hand-held devices such as laptops, tablets, and smartphones [104].

Ultrasound: The ultrasound sensing technology is based on sending acoustic signals towards the finger and sensing the echo response. The sensed echo response is processed to generate a depth profile of the fingerprint, thereby providing the friction ridge structure. Ultrasound technology is robust to oil, dirt, moisture, and other factors which may degrade the fingerprint image quality. Until recently, fingerprint readers utilizing ultrasound were expensive and large, which inhibited their use in commercial applications. However, Qualcomm Inc. introduced an in-display ultrasound sensor [37] which is now widely deployed in the Samsung smartphone series (Galaxy S10 onwards).

Figure 1.14: Optical fingerprint sensors utilized in our experiments, namely CrossMatch Guardian 200, SilkID SLK20R, and Lumidigm V302.

Optical coherence tomography (OCT): OCT [72] technology allows non-invasive, high-resolution, cross-sectional imaging of internal tissue microstructures by measuring their optical reflections. An optical analogue to ultrasound [164], it utilizes low-coherence interferometry of near-infrared light (900 nm to 1325 nm). In an OCT scanner, a beam of light is split into a sample arm, i.e., a unit containing the object of interest, and a reference arm, i.e., a unit containing a mirror to reflect back light without any alteration. If the light reflected from the two arms is within the coherence distance, it gives rise to an interference pattern representing the depth profile at a single point, also known as an A-scan. Laterally combining a series of A-scans along a line can provide a cross-sectional scan, also known as a B-scan. Stacking multiple B-scans together can provide a 3D volumetric representation of the scanned object, or the object of our interest, i.e., the internal structure of a finger (see Figure 1.15).

Figure 1.15: (a) An optical coherence tomography (OCT) scanner can be used to image the internal finger structure as (b) 2D and (c) 3D depth profiles. Images reproduced from (a) [154], (b) IARPA ODIN Program (GCT-II) [123], and (c) [33].
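To make the A-scan/B-scan/volume terminology concrete, the toy sketch below (using synthetic data; the array sizes are illustrative assumptions, not scanner parameters) shows how individual depth profiles are combined laterally into a cross-sectional B-scan and how B-scans are stacked into a 3D volume.

```python
import numpy as np

depth_points = 512                        # samples along the depth (axial) direction
a_scan = np.random.rand(depth_points)     # one depth profile at a single lateral point

# Laterally combining a series of A-scans along a line gives a cross-sectional B-scan.
n_lateral = 256
b_scan = np.stack([np.random.rand(depth_points) for _ in range(n_lateral)], axis=1)
print(b_scan.shape)   # (512, 256): depth x lateral

# Stacking multiple B-scans gives a 3D volumetric representation of the scanned finger.
n_slices = 64
volume = np.stack([b_scan for _ in range(n_slices)], axis=2)
print(volume.shape)   # (512, 256, 64)
```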
1.3.2 Feature Extraction

The most evident characteristic of a fingerprint is its assemblage of interleaved ridges and valleys, where, typically, ridges are dark and valleys are bright. The fingerprint features (see Figure 1.16) are usually classified in a hierarchical order:

Level-1: These global features include the pattern type (arch, loop, whorl), singular points (cores, deltas), ridge orientation, and ridge spacing. These features are commonly used for fingerprint indexing and alignment; however, they cannot identify a fingerprint uniquely [104]. These features can be extracted by employing image processing techniques, detection of ridges with maximum curvature, or deep learning approaches [118, 134].

Level-2: These local features refer to the salient points where ridges exhibit some discontinuity, such as ridge endings and bifurcations, also known as minutiae points. In a rolled fingerprint, there can be over 100 minutiae; however, the spatial and angular coincidence of a small number of minutiae (12-15) can be used to successfully match two fingerprints with high confidence [85]. The minimum recommended fingerprint image resolution to successfully extract minutiae points is 500 ppi. These features can be extracted using Gabor filters, dictionary-based methods, or CNN-based approaches [15, 22, 118].

Level-3: These features include characteristics at a very fine level of granularity, such as sweat pores, incipient ridges, scars, creases, dots between the ridges, etc. These features provide additional uniqueness to a fingerprint but require a minimum scanning resolution of 1000 ppi for successful extraction [79]. Primarily used by latent fingerprint examiners for manual comparison, these features are not commonly used in AFIS due to lack of robustness and high processing time requirements. However, recent developments in low-cost, high-resolution readers have led to the development of algorithms that utilize level-3 features for matching [17].

Prior to any feature extraction, all fingerprint images typically undergo a preprocessing step (foreground extraction, enhancement, and/or alignment). In the case of latent fingerprints, where the image quality is poor, preprocessing is a crucial step. State-of-the-art commercial-off-the-shelf (COTS) matchers may utilize CNN-based methods for feature extraction, similar to DeepPrint [44], and additional textural features at different scales [15].

1.3.3 Template Database

A fingerprint template is a set of features extracted from the fingerprint image of a user [96], such as variable-length minutiae-based features and fixed-length representations [44]. It is typically much smaller in size compared to the actual fingerprint image, providing faster processing time. International Standards Organization (ISO) standard template formats, such as the minutiae-based template standard ISO/IEC 19794-2 (2005) [22], provide high interoperability; however, some commercial vendors may utilize a proprietary template format for higher performance. The templates are associated with a unique user ID for retrieval and are stored in a database, referred to as the template database.

Figure 1.16: Fingerprint features are classified into three levels: (i) Level-1 features based on the global ridge pattern, (ii) Level-2 features based on local ridge characteristics, such as ridge endings, bifurcations, etc., and (iii) Level-3 features including fine details like sweat pores, incipient ridges, and creases. Images reproduced from [104].
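To make the notion of a variable-length minutiae template concrete, the minimal sketch below shows one plausible in-memory representation; the field names and helper are illustrative assumptions, not the ISO/IEC 19794-2 record layout or any vendor's proprietary format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Minutia:
    x: int        # column (pixels) in a 500 ppi image
    y: int        # row (pixels)
    theta: float  # ridge orientation at the minutia, in radians
    kind: str     # "ending" or "bifurcation"

@dataclass
class FingerprintTemplate:
    user_id: str             # unique identifier used to retrieve the template
    minutiae: List[Minutia]  # variable-length Level-2 feature set

    def size(self) -> int:
        """Number of minutiae; a rolled print may contain over 100."""
        return len(self.minutiae)

# Example: a toy template with two minutiae enrolled under user "A123".
template = FingerprintTemplate(
    user_id="A123",
    minutiae=[Minutia(120, 310, 1.05, "ending"),
              Minutia(198, 255, 2.40, "bifurcation")],
)
print(template.size())  # -> 2
```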
1.3.4 Fingerprint Matching

A fingerprint matching algorithm compares two given fingerprint templates and, typically, returns a similarity score, say a value between 0 and 1, where a value close to 0 implies no similarity and a value close to 1 means very high similarity. Any match score above a pre-defined threshold (t) is deemed a successful match. A strict threshold (close to 1) provides high security (low false accepts) but results in poor user experience (due to high false rejects). Due to the large variability between different impressions of the same finger (intra-class variability), fingerprint matching is a difficult problem. Some of the main factors resulting in intra-class variations between fingerprints include rotation, non-linear distortion, noise, displacement, partial overlap, pressure, and skin conditions [104]. There are essentially three broad categories of fingerprint matching approaches:

Correlation-based matching: This technique involves superimposing two fingerprint images and computing the correlation between the corresponding pixels for different alignments, rotations, and displacements. Due to the resource-intensive matching process, these techniques are not widely used.

Minutiae-based matching: This is the most popular and widely deployed technique for fingerprint matching, by both automated algorithms as well as fingerprint examiners. It involves finding the alignment between the reference minutiae set and the input query minutiae set that results in the maximum number of paired minutiae.

Non-minutiae feature based matching: In the case of low-quality images, such as latent fingerprints, minutiae extraction is extremely difficult. This family of matching approaches may utilize either ridge pattern characteristics (e.g., local ridge frequency and orientation) or texture information using hand-crafted or deep learning methods [15, 17]. A fusion of minutiae-based and texture-based features can successfully improve the matching performance of latent fingerprints, including state-of-the-art deep-learning based methods with fixed-length representations [44].

1.4 Challenges in Fingerprint Recognition

Fingerprint recognition is one of the most widely used methods for person recognition, achieving a high level of matching accuracy and throughput in large-scale operational applications [161]. Despite tremendous improvements in the state-of-the-art [83, 104], fingerprint recognition encounters many remaining challenges and vulnerabilities.

1.4.1 Automatic Latent Fingerprint Recognition

Inadvertently left at crime scenes, latent fingerprints are one of the most crucial forms of evidence to identify or exclude a suspect in criminal investigations. In the current practice of matching latents, fingerprint examiners are expected to follow the Analysis, Comparison, Evaluation, and Verification (ACE-V) methodology [6]. In the "analysis" phase, latent prints are manually examined to perform a triage by assigning one of the following three values to a query latent: Value for Individualization (VID), Value for Exclusion Only (VEO), or No Value (NV). In the case of latents deemed to be "of value" (VID and VEO), the features in the latent are marked to search for their mates using an AFIS. In the "comparison" phase, the latent is manually compared side-by-side with the candidate mates retrieved from the exemplar database. In the "evaluation" phase, one of the following decisions is made about the latent in question: individualization, exclusion, or inconclusive^23. Finally, in the "verification" phase, the decision made by the examiner is verified by having a second examiner analyze the results independently.
Although the ACE-V methodology is widely accepted in the forensic community, the human subjectivity in the ACE-V process has raised concerns about its reliability and reproducibility. A notable case is the false accusation of Brandon Mayfield in the 2004 Madrid train bombing incident based on the incorrect match between Mayfield's exemplar fingerprint and the latent fingerprint lifted from the bomb site [124]. Along with the efforts to understand the human factors in latent fingerprint examination [25], standards and guidelines for latent fingerprint examiners' practices have also been set up. As an example, the Scientific Working Group on Friction Ridge Analysis, Study and Technology (SWGFAST) published standards to alleviate the subjectivity involved in feature markups and decision making among examiners [143]. Furthermore, with the growing caseload faced by forensic agencies, there is a need to develop methods for automatic and objective value assignment and matching for latents [17, 25].

1.4.2 Interoperability of Fingerprint Readers

Consider a fingerprint matching system that acquires fingerprint images using an optical reader during enrollment and a solid-state capacitive sensor during verification. Due to the variations in imaging technology, image resolution, sensor area, position of the sensor with respect to the user, and so on, the raw images obtained from the two sensors will be different. This directly impacts the feature set extracted from the acquired images, and consequently, the match scores generated by the system.

Especially in the deployment of large-scale biometric projects, such as Aadhaar, one cannot operate under the assumption that the fingerprint images to be compared will be obtained using the same sensor, as this would restrict our ability to match fingerprint images originating from different sensors. Although progress has been made in the development of common data exchange formats and image quality standards^24 to facilitate the exchange of feature sets between vendors, very little effort has been invested in the actual development of algorithms and techniques to match these feature sets [137].

1.4.3 Vulnerabilities of an AFIS

While fingerprint recognition systems are deployed to protect an application from unauthorized access, the security of the system itself can be jeopardized, implying no guarantee that the system will be completely secure. The fingerprint recognition system, like any other security system, is susceptible to a number of security threats, as shown in Fig. 1.17. These system vulnerabilities may have adverse consequences such as intrusion by unauthorized users, denial-of-service to legitimate users, erosion of user privacy, or even identity theft. It must be emphasized that biometric system security and user privacy concerns are important public perception issues, which can potentially derail the success of a biometric system deployment unless they are addressed comprehensively. While some of the typical security threats, such as replay and man-in-the-middle attacks, can be addressed by employing counter-measures taken from secure password-based authentication paradigms, the two main challenges specific to the domain of fingerprint recognition systems are (i) presentation attack detection (or liveness detection), and (ii) template protection.

Figure 1.17: Different components in a fingerprint recognition system are vulnerable to various types of attacks, shown in red. This thesis contributes towards addressing some of the challenges pertaining to presentation attack detection.

^23 Individualization refers to the decision on a pair consisting of a latent and an exemplar print indicating that the pair originates from the same finger, based on a sufficient agreement between the two ridge patterns. Exclusion, on the other hand, is based on a sufficient disagreement between the two ridge patterns, concluding that the pair did not originate from the same finger. An inconclusive decision is made when an examiner cannot make a decision of either individualization or exclusion due to insufficient ridge details or a small corresponding area between the latent and exemplar print [143].
^24 The ISO/IEC 19794-4 (2005) standard describes the manner in which a fingerprint image must be acquired and stored to maximize interoperability.
1.4.3.1 Presentation Attack Detection

The ISO standard IEC 30107-1:2016(E) [74] defines presentation attacks as the "presentation to the biometric data capture subsystem with the goal of interfering with the operation of the biometric system". These attacks can be realized through a number of methods including, but not limited to, the use of (i) gummy fingers [108], i.e., fabricated finger-like objects with an accurate imitation of another individual's fingerprint ridge-valley structures, (ii) 2D or 3D printed fingerprint targets [4, 5, 14], (iii) altered fingerprints [170], i.e., intentionally tampered or damaged real fingerprint patterns to avoid identification, and (iv) cadaver fingers [105] (see Figure 1.18). Among these, spoof attacks (i.e., gummy fingers and printed targets) are the most common form of presentation attacks, with a multitude of fabrication processes ranging from basic molding and casting to utilizing sophisticated 2D and 3D printing techniques [4, 5, 14, 42, 108]. Figure 1.19 illustrates a simple molding and casting procedure to create a presentation attack instrument using gelatin.

Figure 1.18: Fingerprint presentation attacks can be realized using (a) gummy fingers [57, 108], (b) 2D or 3D printed targets [4, 5, 14], (c) altered fingers [170], or (d) cadaver fingers [105].

Unlike gummy fingers, altered or obfuscated fingerprints are real fingers whose ridge structure has been severely altered by abrading, burning, cutting, or performing surgery on the fingertips (see Figure 1.20). The purpose of fingerprint obfuscation is to conceal one's identity in order to evade AFIS, especially for criminal identification and international border crossing [117, 170]. To be useful in practice, presentation attack detection schemes must recognize such attempts in real-time and with high accuracy, without causing too much inconvenience to legitimate users.

Figure 1.19: Example procedure to create a spoof finger directly from a live finger. Plastic is used to create the mold and gelatin is used as the casting material. Image reproduced from [105].

1.4.3.2 Template Protection

The other major challenge is the system security and user privacy issues arising from the leakage of fingerprint template information due to attacks on the template database. It has been shown that a fingerprint image can be reconstructed given the minutiae template [13]. Additionally, with the growing number of hacking attempts on large-scale central repositories containing biometric templates, such as law enforcement and national ID databases^25, there is an urgent need to prevent leakage of personal user information. With more than 1.24 billion enrollments in India's national ID program, Aadhaar, the central repository houses more than 12.4 billion fingerprint templates [159]. Keeping the biometric templates in a centralized repository makes it prone to Distributed Denial-of-Service (DDOS) attacks, affecting availability during valid authentication attempts. In January 2018, it was reported that for Rs. 500 (under $10) one can illegally obtain access to any person enrolled in the Aadhaar database within 10 minutes^26.

Figure 1.20: Example images of altered fingerprints: (a) Transplanted friction ridge skin from the sole, and (b) fingers that have been bitten. Image source: [170].

In an operational scenario, typically, fingerprint templates are secured by using standard encryption techniques, e.g., AES, where the security of the template lies in the knowledge of the decryption key. During authentication, templates are decrypted, leaving them vulnerable to attacks. To overcome this, the templates are either stored and matched on-device in a secure environment^27, or matched in the encrypted domain by employing homomorphic encryption [9, 44]. In the literature, many template protection approaches have been proposed that aim to ensure non-invertibility, revocability, and non-linkability of templates while affording high recognition performance [82, 104].

^25 https://www.thenewsminute.com/article/aadhaar-data-stolen-i-t-grids-proves-uidais-main-database-can-be-breached-experts-100215
^26 https://www.tribuneindia.com/news/nation/rs-500-10-minutes-and-you-have-access-to-billion-aadhaar-details/523361.html
However, there is still a need to bridge the gap between the theoretical proofs and the practical application of these approaches [44, 115].

1.5 Dissertation Contributions

The main contributions of this dissertation are as follows:

1. An accurate and robust deep learning-based fingerprint presentation attack detector (PAD), called Fingerprint Spoof Buster, utilizing local patches centered and aligned along fingerprint minutiae. Experimental results on publicly available datasets (LivDet 2011-2017), including intra-sensor, cross-material, cross-sensor, and cross-dataset scenarios, show that the proposed approach outperforms the state-of-the-art results published on these datasets. For example, in LivDet 2015 (2017), our algorithm achieves 99.03% (95.91%) average accuracy over all sensors compared to 95.51% (95.25%) achieved by the LivDet 2015 (2017) winner [113, 172].

2. A graphical user interface which highlights the local regions of the fingerprint image as bonafide (live) or PA (spoof) for visual inspection. This is more informative than a single spoof score output by the traditional approaches for the entire image.

3. An algorithm for detection and localization of fingerprint alterations (obfuscation). The proposed approach achieves a state-of-the-art True Detection Rate (TDR) of 99.24% @ False Detection Rate (FDR) of 2% on an operational altered fingerprint database from a law enforcement agency.

4. A light-weight version of the PAD, called Fingerprint Spoof Buster Lite, as an Android application that can run on a commodity smartphone (Samsung Galaxy S8) without a significant drop in performance (from TDR = 95.7% to 95.3% @ FDR = 0.2%) in under 100 ms.

5. An interpretation of the cross-material (generalization) performance of the proposed PAD by (i) evaluating Fingerprint Spoof Buster against unknown PAs by adopting a leave-one-out protocol, where one material is left out from the training set and is then utilized for testing, (ii) utilizing 3D t-SNE visualizations of the bonafide and PA samples in the deep feature space, and (iii) investigating the PA material characteristics (two optical and two physical properties) and correlating them with their cross-material performances, to identify a representative set of PA materials that should be included during training to ensure high generalization performance.

6. A style transfer-based wrapper, called Universal Material Generator (UMG), to improve the generalization performance of any PA detector against novel PA fabrication materials that are unknown to the system during training. The proposed wrapper is shown to improve the average generalization performance of Fingerprint Spoof Buster from a TDR of 75.24% to 91.78% @ FDR = 0.2% when evaluated on a large-scale dataset of 5,743 live and 4,912 PA images fabricated using 12 materials. It is also shown to improve the average cross-sensor performance from 67.60% to 80.63% when tested on the LivDet 2017 dataset, alleviating the time and resources required to generate large-scale PA datasets for new sensors.

7. A dynamic PAD solution utilizing a sequence of local patches centered at detected minutiae from ten color frames captured in quick succession (8 fps) as the finger is presented on the sensor. We posit that the dynamics involved in the presentation of a finger, such as skin blanching, distortion, and perspiration, provide discriminating cues to distinguish live fingers from spoofs. The proposed approach improves the spoof detection performance from a TDR of 99.11% to 99.25% @ FDR = 0.2% in known-material scenarios, and from a TDR of 81.65% to 86.20% @ FDR = 0.2% in cross-material scenarios.

8. A PAD solution utilizing the ridge-valley depth information of finger skin, including the internal fingerprint (papillary junction) and sweat (eccrine) glands, sensed by optical coherence tomography (OCT) technology. Our proposed solution achieves a TDR of 99.73% @ FDR of 0.2% on a database of 3,413 bonafide and 357 PA OCT scans captured using a THORLabs Telesto series spectral-domain reader. We also identify the regions in the OCT scan patches that are crucial for PA detection.

^27 https://support.apple.com/en-sg/HT204587
Chapter 2

Fingerprint Presentation Attack Detection

This chapter addresses the problem of developing an accurate, robust, and efficient solution for detecting fingerprint presentation attacks. Specifically, we propose a deep learning-based approach, called Fingerprint Spoof Buster, utilizing local patches centered and aligned using fingerprint minutiae to train deep convolutional neural networks (CNNs). Experimental results on publicly-available LivDet datasets, an operational altered fingerprint database, three large-scale government controlled evaluations as part of the IARPA ODIN project, and two in-house collected PA datasets containing more than 20,000 images (12 PA materials) show that the proposed approach achieves state-of-the-art performance in fingerprint presentation attack detection for intra-sensor, cross-material, cross-sensor, and cross-dataset testing scenarios.

In order to understand the decision made by the CNN, we have developed a graphical user interface that allows the operator to visually examine the local regions of the fingerprint image highlighted as bonafide (live) or PA (spoof/altered), instead of relying on a single spoof score as output by competing PAD approaches. We also present a light-weight version of the proposed PAD, called Fingerprint Spoof Buster Lite, as an Android app that can run on a commodity smartphone (Samsung Galaxy S8) without a significant drop in PAD performance (from TDR = 95.7% to 95.3% @ FDR = 0.2%) in under 100 ms.

2.1 Introduction

With the proliferation of automated fingerprint recognition systems in many applications, including mobile payments, international border security, and national ID, the vulnerability of the system security to presentation attacks is of growing concern [30, 107, 123]. These attacks can be realized through a number of methods including, but not limited to, (i) gummy fingers [108], i.e., fabricated finger-like objects with an accurate imitation of one's fingerprint to steal their identity, (ii) 2D or 3D printed fingerprint targets [5, 14, 42], (iii) altered fingerprints [152, 170], i.e., intentionally tampered or damaged real fingerprint patterns to avoid identification, and (iv) cadaver fingers [105]. Among these, spoof attacks (i.e., gummy fingers and printed targets) are the most common and easiest-to-launch form of presentation attacks, with a multitude of fabrication processes ranging from basic molding and casting to utilizing sophisticated 2D and 3D printing techniques [4, 5, 14, 42, 108].

It has been reported that commonly available and inexpensive materials, such as gelatin, silicone, play-doh, etc., have been utilized to fabricate high-fidelity spoofs which are capable of bypassing a fingerprint recognition system. See Figs. 2.1 and 2.2. In March 2013, a Brazilian doctor was arrested for using spoof fingers made of silicone to fool the biometric attendance system at a hospital in Sao Paulo^1. In another incident, in Sept. 2013, shortly after Apple released the iPhone 5s with inbuilt TouchID technology, Germany's Chaos Computer Club^2 hacked its capacitive sensor by utilizing a high-resolution photograph of the enrolled user's fingerprint to fabricate a spoof finger with wood glue. In July 2016, researchers at Michigan State University unlocked a fingerprint-secured smartphone using a 2D printed spoof fingerprint to help police with a homicide case^3 [14]. In March 2018, a gang in Rajasthan, India, was arrested for spoofing the biometric attendance system, using glue casted in wax molds, to provide proxies for a police entrance exam^4. As recently as April 2019, a Galaxy S10 owner, with the assistance of a 3D printer and a photo of his own fingerprint, was able to spoof the ultrasonic in-display fingerprint sensor on his smartphone^5. Other similar successful spoof attacks have been reported, showing the vulnerabilities of fingerprint biometric systems deployed in various applications^6,7. It is likely that a large number of these attacks are never detected and hence not reported.

Figure 2.1: Fingerprint spoof attacks can be realized using various readily available fabrication materials, such as PlayDoh, Wood Glue, Gelatin, etc. For each of the image pairs, the left image presents the actual spoof specimen while the right image presents the grayscale fingerprint impression of that spoof captured on a CrossMatch Guardian 200 reader.

Figure 2.2: Visual comparison between (a) a live fingerprint and (b) the corresponding spoof fingerprints (of the same finger) made with different materials. Images are taken from the LivDet 2011 dataset (Biometrika sensor) [167]. Our method can successfully distinguish between live and spoof fingerprints. The spoofness score for the live fingerprint is 0.00, and for the spoofs the scores are 0.95, 0.97, 0.99, 0.99, and 0.95 for EcoFlex, Gelatin, Latex, Silgum, and Wood Glue, respectively.

Another form of presentation attack includes intentional fingerprint alteration, known as altered fingerprints (see Figs. 1.20 and 2.10), in an attempt to obfuscate the true identity to evade law enforcement AFIS [36]. Cases of tampering with fingerprints to evade detection in criminal cases were reported as early as 1935. Cummins [31] reported 3 cases of fingerprint alterations and presented images of fingerprints before and after alterations. In recent years, border crossing applications have been targeted by altered fingerprint attacks. In 2009, ABC News reported that Japanese officials arrested a Chinese woman who took "a particularly extreme measure" to evade detection [117]. The Chinese woman had paid a plastic surgeon to swap fingerprints between her right and left hands. Patches of skin from her thumbs and index fingers were reportedly removed and then grafted onto the ends of fingers on the opposite hand. As a result, her identity was not detected when she re-entered Japan illegally. In 2014, the FBI identified 412 records in its IAFIS which indicated deliberate fingerprint alterations [121]. In 2018, Business Insider reported that Eduardo Ravelo, who was added to the FBI's 10 Most Wanted list in October 2009, was believed to have had plastic surgery to alter his fingerprints to evade authorities [122]. Therefore, presentation attack detection (PAD) is of utmost importance, especially in an unsupervised scenario (e.g., authentication on a smartphone, secure facility access, self check-in kiosks at airports) where the fingerprint presentation by a user is typically not monitored.

Table 2.1: Performance comparison (Average Classification Error [%]) of software-based fingerprint spoof detection studies on the LivDet 2011, 2013, 2015, and 2017 competition datasets. Since different competition databases utilize different fingerprint readers (optical/thermal/capacitive), spoof materials, and modes of data collection (cooperative/uncooperative), a direct performance comparison between different databases will not be a fair comparison.

Study | Approach | LivDet 2011 | LivDet 2013* | LivDet 2015 | LivDet 2017
Approaches utilizing hand-engineered features:
Ghiani et al., 2012 [56] | Local Phase Quantization (LPQ) | 11.1 | 3.0 | N/A | N/A
Gragnaniello et al., 2013 [60] | Weber Local Descriptor (WLD) | 7.9 | N/A | N/A | N/A
Ghiani et al., 2013 [55] | Binarized Statistical Image Features (BSIF) | 7.2 | 2.1 | N/A | N/A
Gragnaniello et al., 2015 [61] | Local Contrast-Phase Descriptor (LCPD) | 5.7 | 1.3 | N/A | N/A
Deep learning-based approaches:
Nogueira et al., 2016 [119] | Transfer learning + CNN-VGG + whole image | 4.5 | 1.1 | 4.5 | N/A
Pala et al., 2017 [126] | Custom CNN with triplet loss + randomly selected local patches | 3.33 | 0.58 | N/A | N/A
Zhang et al., 2019 [172] | CNN with residual blocks + center-of-gravity-based local patches | N/A | 1.74 | 3.18 | 4.75
Proposed Approach | CNN-MobileNet-v1 + minutiae-based local patches | 1.67 | 0.25 | 0.97 | 4.56

*LivDet 2013 includes results for the Biometrika and Italdata sensors.

^1 http://www.bbc.com/news/world-latin-america-21756709
^2 http://www.ccc.de/en/updates/2013/ccc-breaks-apple-touchid
^3 http://statenews.com/article/2016/08/how-msu-researchers-unlocked-a-secure-smartphone-to-help-police-with-homicide-case
^4 https://www.medianama.com/2018/03/223-cloned-thumb-prints-used-to-spoof-biometrics-and-allow-proxies-to-answer-online-rajasthan-police-exam/
^5 https://www.reddit.com/r/galaxys10/comments/b97ur8/i_attempted_to_fool_the_new_samsung_galaxy_s10s/
^6 http://fortune.com/2016/04/07/guy-unlocked-iphone-play-doh/
2.2 Related Work

2.2.1 Studies on Fingerprint Spoof Detection

The various spoof detection approaches proposed in the biometrics literature can be broadly classified into (i) hardware-based and (ii) software-based solutions [107]. In the case of hardware-based approaches, the fingerprint readers are augmented with sensor(s) which detect characteristics of vitality, such as blood flow, thermal output, heartbeat, odor, and skin distortion [2, 8, 94]. Additionally, special types of sensing technologies have been developed for imaging the sub-dermal friction ridge surface based on multi-spectral [136, 138], short-wave infrared [156], and optical coherence tomography (OCT) [29, 111] imaging. A low-cost "Build-It-Yourself" open-source fingerprint reader, called RaspiReader, uses two cameras to provide complementary streams (direct-view and FTIR) of images for spoof detection [43]. Ultrasound-based in-display readers developed for smartphones by Qualcomm Inc. [1] utilize acoustic response characteristics for spoof detection.

Table 2.2: Related work on altered fingerprint detection. There is no public-domain altered fingerprint database available in the literature.

Source | Method | Dataset | Performance
Feng, Jain and Ross [48] | Orientation field | 1,976 simulated altered fingerprints | 92% detection rate at a false positive rate of 7%
Tiribuzi et al. [155] | Minutiae density maps and orientation entropies | 1,000 genuine and synthetic altered fingerprints | 90.4% classification accuracy
Yoon et al. [170] | Orientation field and minutiae distribution | 4,433 operational altered fingerprints from 270 subjects | 70.2% detection rate at a false positive rate of 2.1%
Ellingsgaard and Busch [40, 41] | Orientation field and minutia orientation | 116 altered and 180 unaltered fingerprints from various sources | 92.0% detection rate at a false positive rate of 2.3%
Proposed Approach | Whole input image and minutiae-based patches; CNN models | 4,815 altered and 4,815 valid fingerprints from 270 subjects | 99.24% detection rate at a false positive rate of 2%

In contrast, software-based solutions extract salient features from the captured fingerprint image (or a sequence of frames) for separating live and spoof images. The software-based approaches in the literature are typically based on (i) anatomical features (e.g., pore locations and their distribution [142]), (ii) physiological features (e.g., perspiration [106]), and (iii) texture-based features (e.g., Weber Local Binary Descriptor (WLBD) [165], SIFT [59]). Most state-of-the-art approaches are learning-based, where the features are learned by training convolutional neural networks (CNNs) [23, 24, 26, 87, 119, 126, 156, 172]. See Table 2.1.
2.2.2 Studies on Altered Fingerprint Detection

Detection of altered fingerprints is of high value to law enforcement and homeland security agencies to prevent known criminals (in the government watchlist) from evading the AFIS at border crossings and illegally entering the country. Existing approaches for detecting fingerprint alteration have primarily explored handcrafted features to distinguish between altered and valid fingerprints. Feng et al. [48] trained an SVM classifier to detect irregularities in ridge orientation and reported a 92% detection rate at a false positive rate of 7% on a database of 1,976 simulated altered fingerprints. Tiribuzi et al. [155] combined the minutiae density maps and the orientation entropies of the orientation flow to identify the altered fingerprints. They reported a 90.4% classification accuracy on a dataset of 1,000 genuine and synthetic altered fingerprints. Yoon et al. [170] utilized the orientation field and minutiae distribution to detect altered fingerprints. Their method was tested on a database of 4,433 altered fingerprints from 270 subjects, providing 70.2% correctly classified altered fingerprints at a false positive rate of 2.1%. Ellingsgaard and Busch [40, 41] discuss methods for automatically detecting altered fingerprints based on analyses of two different local characteristics of a fingerprint image: identifying irregularities in the pixel-wise orientations, and examining minutia orientations in local patches. They further suggest that alteration detection should be included in standard quality measures of fingerprints. Beyond detection of altered fingerprints, Yoon et al. [170] investigated the feasibility of an AFIS to link altered fingerprints to their pre-altered mates. Table 2.2 summarizes previous work in altered fingerprint detection. All the existing methods are based on examining irregularities in orientation flow or minutia maps based on hand-engineered features.

The proposed approach (Section 2.4) uses a deep learning technique to learn and evaluate salient features in the altered fingerprints and classify input fingerprint images into two classes: valid or altered fingerprint. In the case of altered fingerprints, the proposed approach localizes the regions of a fingerprint that are altered. This can be utilized to assess the fingerprintness of an input image [170], such that valid fingerprints (or regions) produce a high score and altered fingerprints (or altered regions) produce a low score.

2.3 Fingerprint Spoof Buster

A series of fingerprint Liveness Detection (LivDet) competitions have been held since 2009 to advance the state-of-the-art and benchmark the proposed spoof detection solutions [57]. The best performer in LivDet 2015 [113], Nogueira et al. [119], utilized transfer learning, where deep CNNs originally designed for object recognition and pre-trained on the ImageNet database [140] were fine-tuned on fingerprint images to differentiate between live and spoof fingerprints. In their approach, the networks were trained on whole fingerprint images resized to 227 × 227 pixels for VGG [147] and 224 × 224 pixels for AlexNet [93], as required by these networks. However, there are three disadvantages of using this approach: (i) fingerprint images from some of the sensors used in the LivDet datasets, such as the CrossMatch L Scan Guardian (800 × 750), have a large blank area (about 50%) surrounding the friction ridge region. Directly resizing these images, from 800 × 750 to 227 × 227, eventually results in the friction ridge area occupying less than 10% of the original image size (see Figure 2.3); (ii) resizing a rectangular image of size, say w × h, to a square image, say p × p, leads to different amounts of information retained in the two spatial image dimensions; and (iii) downsizing an image, in general, leads to a loss of discriminatory information.

Figure 2.3: A live fingerprint image (from the LivDet 2015 dataset) captured using a CrossMatch L Scan Guardian in its (a) original dimensions (800 × 750), and (b) resized to 227 × 227. A direct downsizing of the fingerprint image may result in the friction ridge area occupying less than 10% of the original image size, leading to loss of discriminatory information. Instead, local patches (96 × 96, upscaled to 227 × 227), as shown in (c), provide salient cues to differentiate a spoof from a live fingerprint.
It is important to consider various sources of noise involved in the spoof fabrication process itself that can introduce artifacts, such as missing friction ridge regions, cracks, air bubbles, etc., in the spoofs. The primary consequence of such artifacts is the creation of spurious minutiae in the fingerprint images sensed from spoofs. The local regions around these spurious minutiae can, therefore, provide salient cues to differentiate a spoof fingerprint from a live fingerprint (see Fig. 2.4). We utilize this observation to train a two-class CNN using local patches around the extracted minutiae, as opposed to whole fingerprint images or randomly selected local patches, to design a spoof detector.

Figure 2.4: (a) Example of a live fingerprint and the corresponding spoof fingerprint with the artifacts introduced in the spoofs highlighted in red. (b) Local regions highlighted as green (live) and red (spoof) by evaluating all minutiae-centered local patches (96 × 96). (c) A subset of minutiae-based local patches along with their individual spoofness scores. The images are taken from the MSU Fingerprint Presentation Attack Dataset (MSU-FPAD), CrossMatch sensor, and the spoof material used is Silicone (EcoFlex). The spoofness scores output by the proposed approach for the live and spoof fingerprints are 0.06 and 0.99, respectively. (Best viewed in color)

In this section, we will show that the proposed approach, called Fingerprint Spoof Buster, is more robust to novel fabrication materials than earlier approaches that utilize the whole image [119] or randomly selected local patches [126].

The proposed approach for spoof detection, utilizing local patches of size p × p (p = 96) centered at minutiae, (i) circumvents the previously mentioned drawbacks of downsizing whole fingerprint images to train the CNNs, (ii) provides a large amount of data (an average of 48 patches per fingerprint image) to train the deep CNN architectures without overfitting, (iii) learns salient textural information from local regions, robust enough to differentiate between spoof and live fingerprints, and (iv) provides a fine-grained analysis of the fingerprint images by localizing and highlighting spoof regions. The output of the CNN is a score in the range [0, 1], defined as the Spoofness Score; the higher the spoofness score, the more likely the image patch is extracted from a spoof fingerprint. For a given fingerprint image, the spoofness scores corresponding to the minutiae-based local patches are averaged to generate the global spoofness score for the input image. A fusion of CNN models trained on multi-scale patches (ranging in size from 64 × 64 to 128 × 128), centered and aligned using minutiae, is shown to further boost the spoof detection performance.

We also optimize Fingerprint Spoof Buster to reduce memory and computation requirements by (i) K-means clustering of minutiae points followed by weighted fusion to reduce the required number of local patches to be evaluated, and (ii) modifying the MobileNet-v1 network architecture and quantizing model weights to reduce the required computations and perform byte computations instead of floating-point arithmetic. Consequently, a light-weight version of the PAD (3.2 MB), called Fingerprint Spoof Buster Lite, is developed as an Android application that can run on a commodity smartphone without a significant drop in PAD performance in under 100 ms. The main contributions of this chapter are enumerated below:

• Utilized domain knowledge to design a robust fingerprint spoof detector, called Fingerprint Spoof Buster, where local patches centered and aligned using minutiae are utilized for training a CNN model. This differs from other existing approaches, which have generally used the whole input image for spoof detection.

• Experimental results on publicly available datasets (LivDet 2011, 2013, 2015, and 2017), including intra-sensor, cross-material, cross-sensor, and cross-dataset scenarios, show that the proposed approach outperforms the state-of-the-art results published on these datasets. For example, for the LivDet 2015 (2017) dataset, our algorithm achieves 99.03% (95.91%) average accuracy over all fingerprint readers compared to 95.51% (95.25%) achieved by the LivDet 2015 (2017) winner [113, 172].
• Collected two new fingerprint presentation attack datasets containing more than 20,000 fingerprint (live and spoof) images, using two different fingerprint readers and over 12 different spoof fabrication materials. Experimental results on these two new datasets and three large-scale government test datasets as part of the IARPA ODIN project are also presented. IARPA considers these results to be state-of-the-art^8.

• Developed a graphical user interface (GUI) for real-time fingerprint spoof detection which allows a visual examination of the local regions of the fingerprint highlighted as bonafide (live) or PA (spoof/altered).

• Optimized Fingerprint Spoof Buster by K-means (K = 10) clustering of minutiae followed by weighted fusion to reduce the required number of inferences (typically a 70%-80% reduction). Further, network architecture optimizations and quantization of model weights enabled the development of a light-weight version of the proposed PAD, called Fingerprint Spoof Buster Lite^9, as an Android application which accepts a live-scan fingerprint and makes a bonafide vs. PA decision in under 100 ms on a commodity smartphone (Samsung Galaxy S8).

Figure 2.5: An overview of the proposed Fingerprint Spoof Buster [24], a state-of-the-art fingerprint PAD, utilizing CNNs trained on local patches centered and aligned using minutiae location and orientation, respectively. A total number of 30 minutiae are detected in the input fingerprint image.

Figure 2.6: Local patches extracted around the fingerprint minutiae for (a) a real fingerprint and (b) a spoof fingerprint (gelatin), and (c) aligned using minutiae orientation. The spoofness score for each patch is in the range [0, 1]; the higher the score, the more likely the patch is extracted from a spoof fingerprint. For a given input test image, the spoofness scores corresponding to the local patches are averaged to give a global spoofness score. The decision is made based on a threshold learned from the training dataset; an image with a global spoofness score below the threshold is classified as live, otherwise as spoof. Only a subset of detected minutiae are shown for illustrative purposes.

Fingerprint Spoof Buster consists of two stages, an offline training stage and an online testing stage. The offline training stage involves (i) detecting minutiae in the sensed fingerprint image, (ii) extracting local patches centered and aligned using minutiae location and orientation, respectively, and (iii) training MobileNet models on the aligned local patches. During the testing stage, the spoof detection decision is made based on the average of spoofness scores for individual patches output from the MobileNet model. An overview of the proposed approach is presented in Fig. 2.5.

2.3.1 Minutiae Extraction

The minutiae are extracted using the algorithm from [16]. The four LivDet datasets (LivDet 2011, 2013, 2015, and 2017) comprise fingerprint images captured at varying resolutions, ranging from 500 dpi to 1000 dpi. Since the minutiae detector in [16] was designed for 500 dpi images, all fingerprint images are resized to ensure a standard resolution of 500 dpi. A standard resolution for all the images is also crucial to ensure a similar amount of friction ridge area in each local patch, irrespective of the fingerprint reader used. An average of 46 minutiae (std. dev. = 6.2) and 50 minutiae (std. dev. = 6.9) are detected per live image and spoof image, respectively, for these LivDet datasets.

2.3.2 Local Patch Extraction

For a given fingerprint image I with k detected minutiae points M = {m_1, m_2, ..., m_k}, where m_i = {x_i, y_i, θ_i}, i.e., the minutia m_i is defined in terms of its spatial coordinates (x_i, y_i) and orientation (θ_i), a corresponding set of k local patches L = {l_1, l_2, ..., l_k}, each of size [q × q] where q = √2 · p, is extracted. Each local patch l_i, centered at the corresponding minutia location (x_i, y_i), is aligned^10 based on the minutia orientation (θ_i). After alignment, the central region of size [p × p] (p = 96) is cropped from the rotated patch and used for training the CNN model. The size of the larger patch is fixed to [√2·p × √2·p] to prevent any loss of information during patch alignment. Fig. 2.6 presents examples of real and spoof fingerprint images and the corresponding local patches centered and aligned using minutiae location and orientation, respectively.

For evaluating the impact of local patch size on the spoof detection performance, we also explore the use of multi-resolution patches of size p ∈ {64, 96, 128} for training independent CNN models and their fusion. All the local patches are resized^11 to 224 × 224 as required by the MobileNet-v1 model.

2.3.3 MobileNet CNN

Since the success of AlexNet [93] for object detection in ILSVRC-2012 [140], different CNN architectures have been proposed in the literature, such as VGG, GoogleNet (Inception v1-v4), ResNets, MobileNet, etc. Nogueira et al. [119], winner of LivDet 2015, utilized a pre-trained VGG architecture [147] to achieve the best performance in LivDet 2015 [113]. In this study, we utilize the MobileNet-v1 architecture [71] because it offers the following advantages over other network architectures (such as VGG and Inception-v3): (i) MobileNet-v1 is designed using depth-wise separable convolutions, originally introduced in [21], providing a drastic decrease in model size and training/evaluation times while providing better spoof detection performance; (ii) it is a low-latency network requiring only 6 ms to classify an input patch as live or spoof, compared to 50 ms required by the Inception-v3 network [23] using an Nvidia 1080Ti GPU; and (iii) the number of model parameters to be trained in MobileNet-v1 (4.24M) is smaller than the number of model parameters in Inception-v3 (23.2M) and VGG (138M), requiring lower effort in terms of regularization and data augmentation to prevent overfitting [71].

We utilized the TF-Slim library^12 implementation of the MobileNet-v1 architecture. The last layer of the architecture, a 1000-unit softmax layer (originally designed to predict the 1,000 classes of the ImageNet dataset), was replaced with a 2-unit softmax layer for the two-class problem, i.e., live vs. spoof. The optimizer used to train the network was RMSProp with asynchronous gradient descent and a batch size of 100. Data augmentation techniques, such as brightness adjustment, random cropping, and vertical flipping, are employed to ensure the trained model is robust to the possible variations in fingerprint images. For the multi-resolution local patches, a separate network is trained for each patch size with the same parameters as mentioned above.

^8 Based on verbal communication.
^9 We use the term lite to indicate a light version of the PAD as we utilize the TensorFlow Lite framework for the proposed model optimizations. https://wwww.org/lite
^10 MATLAB's imrotate function with bilinear interpolation is used to rotate the local patch for alignment.
^11 TensorFlow's resize utility with bilinear interpolation was used; available at https://wwww.org/api_docs/python/tf/image/resize_images
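As a concrete illustration of the patch-extraction step described above, the minimal sketch below crops a √2·p × √2·p window around each minutia, rotates it by the minutia orientation, and keeps the central p × p region. It is only an assumed NumPy/SciPy rendering of the procedure (the thesis itself uses MATLAB's imrotate); the function and variable names are hypothetical.

```python
import numpy as np
from scipy.ndimage import rotate  # bilinear interpolation with order=1

def extract_aligned_patches(image, minutiae, p=96):
    """Extract p x p local patches centered at minutiae and aligned by
    minutia orientation. `minutiae` is a list of (x, y, theta) tuples,
    with theta in radians. Returns a list of p x p numpy arrays."""
    q = int(np.ceil(np.sqrt(2) * p))   # larger crop so rotation loses no content
    half_q, half_p = q // 2, p // 2
    patches = []
    for (x, y, theta) in minutiae:
        x, y = int(round(x)), int(round(y))
        # Crop the larger q x q window (skip minutiae too close to the border).
        if (y - half_q < 0 or x - half_q < 0 or
                y + half_q > image.shape[0] or x + half_q > image.shape[1]):
            continue
        window = image[y - half_q:y + half_q, x - half_q:x + half_q]
        # Rotate so that the minutia orientation maps to a canonical direction.
        aligned = rotate(window, angle=np.degrees(theta), reshape=False, order=1)
        # Keep only the central p x p region of the rotated window.
        c = aligned.shape[0] // 2
        patches.append(aligned[c - half_p:c + half_p, c - half_p:c + half_p])
    return patches

# Usage (hypothetical 500 dpi grayscale image and minutiae list):
# patches = extract_aligned_patches(img, [(120, 310, 1.05), (198, 255, 2.40)], p=96)
```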
2.3.4 Fine-grained Fingerprint Image Representation

Partial spoofs and fingerprint alterations are meant to avoid identification^13 by masking the true identity from a biometric system [23, 170]. Spoof detectors trained on whole fingerprint images are ineffective at detecting and localizing partial spoofs that conceal only a limited region of the live finger. Moreover, in many smartphones and other embedded systems that only sense a partial region (friction ridge area) of the finger due to the small sensor area (typically 150 × 150 pixels), it is very crucial to have a detailed representation of the sensed region.

One of the key advantages of employing a patch-based approach is the fine-grained representation of the input fingerprint image for spoof detection. Fig. 2.7 (a) presents an example of a spoof fabricated using silicone, concealing only a partial region of the live finger, and Fig. 2.7 (b) presents the imaged partial spoof using a CrossMatch Guardian 200 fingerprint reader. The proposed approach, utilizing minutiae-based local patches, highlights the local regions as live or spoof (shown in Figs. 2.7 (c) and (d) in green and red, respectively), providing a fine-grained representation of the fingerprint image. Fingerprint alterations, such as cuts, mutilations, stitches, etc., performed using surgical and chemical procedures (see Fig. 2.7 (e)), create spurious minutiae, as shown in Figs. 2.7 (f) and (g). The proposed approach is able to highlight the regions of fingerprint alterations despite not being trained on an altered fingerprint database, indicating the generalizability of the proposed approach. The proposed approach detected both fingerprint images in Figs. 2.7 (b) and (e) as spoofs, with spoofness scores of 0.78 and 0.65, respectively.

Figure 2.7: The proposed approach provides a fine-grained representation for spoof detection by using minutiae-based local patches. A spoof fabricated using silicone, which conceals only a partial region of the live finger, is shown in (a) and the imaged fingerprint in (b) (enclosed in red). The proposed approach extracts and evaluates the minutiae-based local patches, and highlights the local regions as live (in green) or spoof (in red), as shown in (c) and (d). It can also highlight the regions of fingerprint alterations, as shown for a "Z" cut altered fingerprint in (e), (f), and (g). The proposed approach detected (b) and (e) as spoofs with spoofness scores of 0.78 and 0.65, respectively. (Best viewed in color)

2.3.5 Spoofness Score

The output from the softmax layer of the trained MobileNet-v1 model is a spoof probability score, called the Spoofness Score, in the range [0, 1]. The larger the spoofness score (close to 1), the higher the support that the input local patch belongs to the spoof class (see Fig. 2.6). For an input test image I, the spoofness scores s_i^I, i ∈ {1, 2, ..., k}, corresponding to the k minutiae-based local patches of size p × p extracted from the input image, are averaged to give a global spoofness score S^I. In the case of multi-resolution local patches, the global spoofness scores S^I_{p_i} based on each local patch size, p_i ∈ {64, 96, 128}, are averaged to produce a final spoofness score. The classification threshold that minimizes the average classification error on the training dataset is learned and utilized as the decision threshold. An image with a spoofness score below the threshold is classified as live, otherwise as spoof. The learned threshold performed slightly better in spoof detection than selecting a fixed threshold of 0.5.

^12 https://githubw/models/tree/master/research/slim
^13 http://abcnews.go.com/Technology/GadgetGuide/surgically-altered-woman-evade-immigration/story?id=9302505
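The score aggregation just described is a simple averaging cascade; the snippet below is a minimal sketch of it, assuming a hypothetical `patch_scores(image, p)` function that returns the per-patch softmax spoof probabilities for patch size p (the CNN inference itself is omitted).

```python
import numpy as np

def global_spoofness(image, patch_scores, patch_sizes=(64, 96, 128)):
    """Average per-patch spoofness scores for each patch size, then average
    across patch sizes to obtain the final spoofness score in [0, 1]."""
    per_size_scores = []
    for p in patch_sizes:
        scores = patch_scores(image, p)   # k spoofness scores, one per minutia patch
        per_size_scores.append(np.mean(scores))
    return float(np.mean(per_size_scores))

def classify(image, patch_scores, threshold):
    """Threshold learned on the training set (minimizes average classification
    error); a score below the threshold -> live, otherwise -> spoof."""
    score = global_spoofness(image, patch_scores)
    return "live" if score < threshold else "spoof"
```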
2.3.6 On Robustness of Patch-based Representation

While the proposed approach is based on the premise that it is capable of capturing discriminatory information from local patches (presence of artifacts, such as valley noise, broken ridges, air bubbles, etc.) from spoof fingerprints, we also examine the robustness of the patch-based representation by evaluating it in the absence of such artifacts. Figs. 2.8 (a) and (b) present minutiae-based local patches from a live fingerprint and the corresponding spoof fingerprint (fabricated using EcoFlex), respectively, for the same minutia point, and Figs. 2.8 (d) and (e) present their feature representations, respectively, obtained from the bottleneck layer of the MobileNet-v1 architecture. The 1024-dimensional feature representation is transformed to a 32 × 32 heatmap for visualization. The spoofness scores for the two patches, live and spoof, are 0.00 (Fig. 2.8 (b)) and 0.99 (Fig. 2.8 (d)), respectively. The spoof patch (Fig. 2.8 (b)) is retouched by the authors, using an open-source photo-editing utility called GIMP^14, to remove the visible artifacts and produce the modified spoof patch shown in Fig. 2.8 (c). The feature representation for the modified patch is shown in Fig. 2.8 (f). A high spoofness score for the modified spoof patch (0.94), despite removal of artifacts, indicates the robustness of the proposed approach.

Figure 2.8: Illustrating the embeddings of minutiae-based local patches (96 × 96), for (a) a live patch, (b) a spoof patch, and (c) the spoof patch retouched to remove visible artifacts, in the 1024-dimensional feature space from the MobileNet-v1 bottleneck layer, transformed to 32 × 32 heatmaps, (d), (e), and (f), respectively, for visualization. A high spoofness score for the spoof patch is achieved, despite removal of artifacts, indicating the robustness of the proposed approach. (Best viewed in color)

2.3.7 Graphical User Interface (GUI)

A graphical user interface for Fingerprint Spoof Buster allows the operator to select a fingerprint reader and a trained MobileNet-v1 model for evaluation. The operator can perform the evaluation in either online or batch mode. In the online mode, a fingerprint is imaged using the selected fingerprint reader and displayed on the interface (see Fig. 2.9). The extracted minutiae and the corresponding local patches are presented and color-coded based on their respective spoofness scores (green for live and red for spoof). The global spoofness score and the decision for the input image are also presented on the interface. In the batch mode, all fingerprint images within a directory are evaluated, and the global spoofness scores for each fingerprint are output together in a score file. The graphical user interface allows the operator to visually examine the local regions of the fingerprint highlighted as live or spoof, instead of relying on only a single score output by the traditional approaches.

Figure 2.9: Interface of the proposed Fingerprint Spoof Buster. It allows selection of the fingerprint reader and the CNN model. (Best viewed in color)

Figure 2.10: Types of fingerprint alterations: (i) Obliteration, such as scars or mutilations, (ii) Distortion, i.e., friction ridge transplantation to distort the friction ridge area, and (iii) Imitation, i.e., transplantation or removal of friction ridge skin while still preserving a fingerprint-like pattern.

2.4 Altered Fingerprints: Detection and Localization

2.4.1 Altered Fingerprint Detection

The goal of detecting altered fingerprint images can be formulated as a binary classification problem with two classes: altered and valid fingerprints. While some cuts and bruises could be due to unintentional accidents, our interest here is to detect any fingerprint where the ridge structure is deliberately altered. As shown in Figure 2.10, different types of alteration procedures would result in different degradation. Different types of alteration procedures and their effect on friction ridge patterns are discussed in [40, 170]. Based on the changes made to friction ridge patterns, they categorized altered fingerprints into three types: obliteration, distortion, and imitation.

^14 https://www.gimp.org/
Figure 2.11: Examples of altered fingerprints and corresponding manually marked regions of interest (ROI) circumscribing the areas of alterations. Local patches overlapping with the manually marked ROI are labeled as altered patches, while the rest are labelled as valid. The test phase is fully automatic and does not require any manual markup.

Obliteration consists of abrading, cutting, burning, applying strong chemicals, or transplanting friction ridge skin. Skin disease or side effects of drugs can also obliterate fingerprints. Distortion comprises cases of using plastic surgery to convert a normal friction ridge pattern into an unusual ridge pattern. Some portions of skin are removed from the finger and grafted back onto a different position, causing an unusual pattern. Imitation is when a surgical procedure is performed in such a way that the altered fingerprints appear as natural fingerprints, for example, by grafting skin from the other hand or a toe such that the friction ridge pattern is still preserved. Despite Yoon and Jain's [170] suggestion to develop different models for different alteration types, we propose to utilize a single model for the following two reasons: (a) insufficient data for each alteration type for training deep networks, and (b) manual labeling of the alteration type would be subjective because an image can suffer from more than one alteration type. We trained a Convolutional Neural Network (CNN) to classify an input fingerprint image into one of the two classes of valid or altered fingerprint. Data augmentation techniques, such as mirroring, random cropping, and rotation, have been employed to increase the size of the training data.

2.4.2 Localization of Altered Regions

To localize and highlight the altered regions of a fingerprint, we augment our whole-image based altered fingerprint detection with a patch-based approach (a minimal sketch of the labeling rule follows below). Our approach is as follows: First, a region of interest (ROI) is manually marked for 1,182 randomly selected altered fingerprints from our database of 4,815 altered fingerprints. See Figure 2.11. Next, local patches of size 96 × 96 centered around each extracted minutia are cropped. Local patches with more than 50% overlap with the manually marked ROI are labeled as altered patches, and the remaining patches are labeled as valid. Because a majority of alterations generate discontinuities and noisy regions in the friction ridge pattern, a much higher number of spurious minutiae are generated in altered fingerprints compared to valid fingerprints of the same size [170]. As discussed earlier, local patches centered around minutiae provide superior performance in spoof detection compared to patches extracted in a raster scan or random manner. A total of 81,969 valid and 89,979 altered fingerprint patches are extracted and utilized for training two different networks: Inception-v3 [150] and MobileNet-v1 [71]. Fig. 2.12 presents examples of altered fingerprint localization output by the proposed approach. An overview of the proposed approach to detect and localize altered fingerprints is presented in Figure 2.13.

Figure 2.12: Examples of altered fingerprint localization by our proposed method. Local regions highlighted in red represent the altered portion of the fingerprint, whereas regions highlighted in green represent the valid friction ridge area. (Best viewed in color)

Figure 2.13: An overview of the proposed approach for detection and localization of altered fingerprints. We trained two convolutional neural networks (Inception-v3 and MobileNet-v1) using full fingerprint images and local patches of fingerprint images, where patches are centered on minutiae locations.

Table 2.3: Network hyper-parameters utilized in training CNN models for altered fingerprint detection and localization.

Hyper-parameters | Inception-v3 | MobileNet-v1
Batch Size | 32 | 100
Optimizer | RMSProp | RMSProp
Learning Rate | [0.01-0.0001]; exp. decay 0.94 | [0.01-0.0001]; exp. decay 0.94
Momentum | 0.9 | 0.9
Iterations | 75,000 | 25,000
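As referenced above, the following minimal sketch illustrates the training-time labeling rule; the helper name is hypothetical and the ROI is assumed to be a binary mask of the same size as the fingerprint image.

```python
import numpy as np

def label_patch(roi_mask, x, y, p=96):
    """Label the p x p patch centered at minutia (x, y) as 'altered' if more
    than 50% of its pixels fall inside the manually marked ROI (binary mask),
    otherwise as 'valid'. Used only at training time; testing needs no markup."""
    half = p // 2
    y0, y1 = max(0, y - half), min(roi_mask.shape[0], y + half)
    x0, x1 = max(0, x - half), min(roi_mask.shape[1], x + half)
    overlap = roi_mask[y0:y1, x0:x1].mean()   # fraction of the patch inside the ROI
    return "altered" if overlap > 0.5 else "valid"

# Usage with a toy 512 x 512 mask whose top-left quadrant is marked as altered:
mask = np.zeros((512, 512), dtype=np.uint8)
mask[:256, :256] = 1
print(label_patch(mask, x=100, y=100))   # -> "altered"
print(label_patch(mask, x=400, y=400))   # -> "valid"
```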
2.4.3 Alteration Score

We train MobileNet-v1 [71] and Inception-v3 [150] networks, using the TF-Slim library [145], as binary classifiers (altered vs. valid fingerprints). The input is a full fingerprint image and the output is a probability (or score) of the image belonging to the Altered or Valid class, referred to as the alteration score. A valid fingerprint image should result in an alteration score close to 0, whereas an altered fingerprint image should result in an alteration score close to 1. The network hyper-parameters used to train the CNN models are presented in Table 2.3.

2.5 End-to-End Presentation Attack Detection

The proposed modules for altered fingerprint detection and fingerprint spoof detection can be combined in a cascaded manner as shown in Figure 2.14. First, the whole fingerprint image is fed to the altered fingerprint detector. If the input image is classified as an altered fingerprint, we output the alteration score and evaluate minutiae-based local patches to localize the altered regions. Otherwise, if the image is classified as valid, it is fed to Fingerprint Spoof Buster for spoof detection, which evaluates the whole image and the minutiae-based local patches and performs average score fusion to generate a global spoofness score. It also outputs a heatmap overlaid on the input image highlighting the spoof and live regions. The score thresholds for altered fingerprint detection and spoof detection are set to 0.15 and 0.50, respectively.

Figure 2.14 An overview of the proposed end-to-end presentation attack detection. (Best viewed in color)

2.6 Experimental Results

2.6.1 Performance Evaluation Metrics

The performance of the proposed approach is evaluated following the metrics used in LivDet [57]:

Ferrlive: Percentage of live fingerprints misclassified as spoofs.

Ferrfake 15: Percentage of spoof fingerprints misclassified as live.

The average classification error (ACE) is defined as:

ACE = (Ferrlive + Ferrfake) / 2    (2.6.1)

Additionally, we also report Ferrfake @ Ferrlive = 1.0% for each of the experiments, as reported in [57]. This value represents the percentage of spoofs able to breach the biometric system security when the reject rate of legitimate users is 1.0%.

15 When all the spoof fabrication materials are known during training, this metric is referred to as Ferrfake-known; in case all the spoof fabrication materials to be encountered during testing are not known during training, this metric is referred to as Ferrfake-unknown.

Table 2.4 Summary of the Liveness Detection (LivDet) datasets (LivDet 2011 and LivDet 2013) utilized in this study.

Dataset: LivDet 2011 [167] (Biometrika, ItalData, Digital Persona, Sagem) | LivDet 2013 [58] (Biometrika, ItalData)
Model: FX2000, ET10, 4000B, MSO300 | FX2000, ET10
Image Size: 315 x 372, 640 x 480, 355 x 391, 352 x 384 | 315 x 372, 640 x 480
Resolution (dpi): 500, 500, 500, 500 | 569, 500
#Live Images (Train/Test): 1000/1000 for each reader | 1000/1000 for each reader
#Spoof Images (Train/Test): 1000/1000 for each reader | 1000/1000 for each reader
Cooperative Subject: Yes (all four readers) | No (both readers)
Spoof Materials: LivDet 2011 (Biometrika, ItalData): EcoFlex, Gelatine, Latex, Silgum, Wood Glue; LivDet 2011 (Digital Persona, Sagem): Gelatine, Latex, PlayDoh, Silicone, Wood Glue | LivDet 2013 (Biometrika, ItalData): EcoFlex, Gelatine, Latex, Modasil, Wood Glue

2.6.2 Presentation Attack Datasets

The following datasets have been utilized to evaluate the proposed approach:

2.6.2.1 LivDet Datasets

In order to evaluate the performance of the proposed approach, we utilized the LivDet 2011 [167], LivDet 2013 [58], LivDet 2015 [113], and LivDet 2017 [114] datasets. Each of these datasets contains over 16,000 fingerprint images, acquired from three or more different fingerprint readers, with comparable numbers of live and spoof fingerprints. However, the CrossMatch and Swipe readers from the LivDet 2013 dataset were not utilized for evaluation purposes because (a) the LivDet compe-

Table 2.5 Summary of the Liveness Detection (LivDet) datasets (LivDet 2015 and LivDet 2017) utilized in this study.
Dataset LivDet2015[113] LivDet2017[114] Fingerprint Reader GreenBit Biometrika Digital Persona CrossMatch GreenBit Orcanthus Digital Persona Model Dacty Scan26 HiScan- PRO U.are.U 5160 LScan Guardian Dacty Scan84C Certis2 Image U.are.U 5160 ImageSize 500 500 1000 1000 252 324 800 750 500 500 300 n 252 324 Resolution(dpi) 500 1000 500 500 569 500 500 #LiveImages 1000 = 1000 1000 = 1000 1000 = 1000 1510 = 1500 1000 = 1700 1000 = 1700 999 = 1692 Train/Test #SpoofImages 1000 = 1500 1000 = 1500 1000 = 1500 1473 = 1448 1200 = 2040 1200 = 2018 1199 = 2028 Train/Test Cooperative Subject Yes Yes Yes Yes Yes Yes Yes SpoofMaterials x,Gelatine,Latex,Wood Glue,Liquidx,RTV BodyDouble, x,Play Doh,OOMOO, Gelatin WoodGlue,x,BodyDouble, Gelatine,Latex,Liquidx titionorganizersfoundanomaliesinthedatafromCrossMatchreaderanddiscouraged itsuseforcomparativeevaluations[57],and(b)theresolutionofimagesoutputfrom Swipereaderisverylow, i.e. , 96 dpi.UnlikeotherLivDetdatasets,spoofimages fromBiometrikaandItaldatareadersinLivDet2013dataset[58]arefabricatedusingthe non- cooperativemethod , i.e. ,withoutusercooperation.ItshouldbenotedthatinLivDet2015and LivDet2017,thetestingsetincludedspoofsfabricatedusingnewmaterials,thatwerenotknown inthetrainingset.InthecaseofLivDet2015,thesenewmaterialsincludedliquidxandRTV forBiometrika,DigitalPersona,andGreenBitreaders,andOOMOOandgelatinforCrossmatch reader.InthecaseofLivDet2017,thetestingsetcontainedmaterials,namelyGelatine,Latex, andLiquidx,completelydifferentfromtrainingwhichcontainedWoodGlue,x,and BodyDoublematerials.Tables2.4and2.5presentsasummaryoftheLivDetdatasetsusedinthis study. 2.6.2.2MSUFingerprintPresentationAttackDataset InadditiontoutilizingLivDetDatasets,wecollectedalargedataset,calledtheMSUFingerprint PresentationAttackDataset(MSU-FPAD),usingtwodifferentreaders,namely,Cross- MatchGuardian200andLumidigmVenus302.Thereareatotalof 9 ; 000 liveimagesand 10 ; 500 58 Table2.6SummaryoftheMSUFingerprintPresentationAttackDataset(MSU-FPAD)andPrecise BiometricsSpoof-KitDataset(PBSKD). Dataset MSU-FPAD PreciseBiometricsSpoof-Kit Fingerprint Reader CrossMatch Lumidigm CrossMatch Lumidigm Model Guardian200 Venus302 Guardian200 Venus302 ImageSize 750 800 400 272 750 800 400 272 Resolution(dpi) 500 500 500 500 #LiveImages 2 ; 250 / 2 ; 250 2 ; 250 / 2 ; 250 250 / 250 y 250 / 250 y Train/Test #SpoofImages 3 ; 000 / 3 ; 000 2 ; 250 / 2 ; 250 250 / 250 200 / 200 z Train/Test Cooperative* Yes Yes Yes Yes SpoofMaterials x,PlayDoh,2DPrint(Matte Paper),2DPrint(Transparency) x,Gelatin,Latexbodypaint,xwithsilver colloidalinkcoating,xwithBarePaintcoating, xwithNanotipscoating,CrayolaModelMagic, Woodglue,MonsterLiquidLatex,and2Dprinted onofpaper y 1000randomlysampledliveimagesfromMSU-FPADareselectedforPreciseBiometricsSpoof-Kit Dataset. z LumidigmreaderdoesnotimageSilicone(EcoFlex)spoofswithNanoTipsandBarePaintcoatings. Figure2.15ExampleimagesfromMSUFingerprintPresentationAttackDataset(MSU-FPAD) acquiredusing(a)CrossMatchGuardian200,and(b)LumidigmVenus302readers. NotethatLumidigmreaderdoesnotimagePlayDoh(orange)spoofs. spoofimagescapturedusingthesetworeadersand4differentspooffabricationmaterials,namely, x,PlayDoh,2Dprintedonmattepaper,and2DprintedontransparencyTheselectionof thereadersandthespoofmaterialsisbasedontherequirementsofIARPAODINpro- gram[123]evaluation.Fig.2.15presentssomeexampleimages,andTable2.6presents asummaryoftheMSUFingerprintPresentationAttackDataset. 
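All of the experiments on these datasets are reported with the metrics of Section 2.6.1. For reference, the following is a minimal sketch of how ACE and Ferrfake @ Ferrlive = 1.0% can be computed from per-image spoofness scores; the helper names are hypothetical and the sketch assumes higher scores indicate spoofs.

import numpy as np

def ferrlive_ferrfake(live_scores, spoof_scores, threshold):
    ferrlive = np.mean(np.asarray(live_scores) >= threshold) * 100   # live misclassified as spoof
    ferrfake = np.mean(np.asarray(spoof_scores) < threshold) * 100   # spoof misclassified as live
    return ferrlive, ferrfake

def ace(live_scores, spoof_scores, threshold=0.5):
    fl, ff = ferrlive_ferrfake(live_scores, spoof_scores, threshold)
    return (fl + ff) / 2.0                                           # Eq. (2.6.1)

def ferrfake_at_ferrlive(live_scores, spoof_scores, target_ferrlive=1.0):
    # Sweep thresholds; report Ferrfake at the operating point where Ferrlive <= target.
    thresholds = np.unique(np.concatenate([live_scores, spoof_scores, [0.0, 1.0]]))
    best = 100.0
    for t in thresholds:
        fl, ff = ferrlive_ferrfake(live_scores, spoof_scores, t)
        if fl <= target_ferrlive:
            best = min(best, ff)
    return best

# Synthetic scores: live prints skewed toward 0, spoofs toward 1.
live = np.random.beta(2, 8, 1000)
spoof = np.random.beta(8, 2, 1000)
print(ace(live, spoof), ferrfake_at_ferrlive(live, spoof, 1.0))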
59 Figure2.16ExampleimagesfromPreciseBiometricsSpoof-KitDataset(PBSKD)acquiredusing (a)CrossMatchGuardian200,and(b)LumidigmVenus302readers.NotethatLu- midigmreaderdoesnotimageSilicone(EcoFlex)spoofswithNanoTipsandBarePaintcoatings. 2.6.2.3PreciseBiometricsSpoof-KitDataset Wealsocollectedanotherdatasetcontaining 900 highqualityspoofimagesfabricated using 10 differenttypesofspoofmaterials,namely,(i)x,(ii)Gelatin,(iii)Latexbodypaint, (iv)xwithsilvercolloidalinkcoating,(v)xwithBarePaintcoating,(vi)xwith Nanotipscoating,(vii)CrayolaModelMagic,(viii)Woodglue,(ix)MonsterLiquidLatex,and (x)2Dprintedonofpaper.Thespoofspecimensusedforthisdatasetaretaken fromPreciseBiometrics 16 Spoof-Kitcontaining 10 specimensperspooftype,foratotalof 100 spoofspecimens.Eachspoofspecimenisimaged 5 timesusingtworeaders,namely, CrossMatchGuardian200andLumidigmVenus302.NotethatLumidigmreaderdoesnotimage Silicone(EcoFlex)spoofswithNanoTipsandBarePaintcoatings.Anadditional 900 randomly sampledlivefromMSU-FPADareselectedforatotalof 1 ; 800 imagesin PreciseBiometricsSpoof-KitDataset.Fig.2.16presentssomeexampleimages,and Table2.6presentsasummaryofthePreciseBiometricsSpoof-KitDataset. 16 https://precisebiometrics.com/ 60 Figure2.17IllustrationofthetimelineofIARPAODINProgram[123].ThePhase-IIIwillbe completedinMarch2021. 2.6.2.4GovernmentEvaluationDatasets(GCT-I,II,andIII) DuringMay14-May25,2018,theGovernmentControlledTest-I(GCT-I),aspartofthe IARPAODINprogram[123],wasorganized.Atotalof 13 ; 062 imageswerecollected usingtwoopticalreaders,CrossMatchGuardian200andLumidigmV302,from 340 subjectsin aspanof 2 weeksatJohnsHopkinsUniversityAppliedPhysicsLab(JHUAPL),Laurel,MD. Subjectspresentedeitherorpresentationattacksforatotalof 20 impressions persensorpersubject.FourdifferentPAtypeswereused,namely,Transparency,DragonSkin, YellowPigmentedSilicone,andVeroBlackplus. Inthefollowingyear,duringMay8-May17,2019,GovernmentControlledTest-II(GCT-II) wasconductedatJHUfacilityinColumbia,Maryland.Atotalof 8 ; 598 imagesfrom around 400 subjectswerecollectedonCrossMatchGuardian200,including 7 ; 852 and 746 PAimagesfabricatedwithmorethan8PAtypes.EightPAtypeswereknown( i.e. ,seenby SpoofBusterduringtraining),namely,BallisticGelatin,Clearx,Tanx,YellowPig- mentedSilicone,FleshPigmentedx,NusilR-2631ConductiveSilicone,FleshPigmented 61 Table2.7SummaryofthedatasetscollectedduringGovernmentControlledTest(GCT)I,II,and IIIaspartoftheIARPAODINprogram[123]. GCT-I GCT-II GCT-III CrossMatchLumidigm CrossMatch CrossMatch #Subjects 340340 400 685 #PATypes 44 8+ 12 #Samples 6,7815,842 7,852 13,241 #PASamples 232207 746 1,049 Total 7,0136,049 8,598 14,290 PDMS,andElmer'sGlue.Afewpresentationsobfuscatedwithbandaidswerealso labeledasPA. Morerecently,duringOct.28-Nov.15,2019,GovernmentControlledTest-III(GCT-III) wasconductedatJHUfacilityinColumbia,Maryland.Atotalof 14 ; 290 imagesfrom 685 subjectswerecollectedonCrossMatchGuardian200,including 13 ; 241 and 1 ; 049 PAimagesfabricatedwithmorethan12PAtypes.Figure2.17presentsthetimelineoftheIARPA ODINProgram.Table2.7summarizesthenumberofandPAsamplescollectedinthe threegovernmentevaluationdatasets. 
2.6.2.5AlteredFingerprintDataset Anoperationaldatasetof 4 ; 815 alteredfrom 635 tenprintcardsof 270 subjects[170], acquiredfromlawenforcementagenciesisutilizedtoevaluatetheproposedapproach.Thenumber oftenprintcardspersubjectvariesfrom1to16duetomultipleencounters.However,notall10 imagesinatenprintcardmaybealtered.Thenumberofalteredinstancesper subjectvariesfrom1to137.Anotheroperationaldatasetof 4 ; 815 rolledimagesisused fors[15].Fingerprintimagesinbothsetsofalteredandareimages collectedaspartoflawenforcementoperations.Allimagesare8-bitsgrayscale.Figure2.18 showsdistributionofNFIQ2.0[75,151]scoresforthealteredandimages usedinthisstudy 17 .Ave-foldcross-validationisemployedwhereineachofthevefolds,the trainingsetcontains 3 ; 852 alteredand 3 ; 852 Thetestingsetineachfold 17 NFIQ2.0softwarereadsaimage,computesasetofqualityfeaturesfromtheimage,andusesthese featurestopredicttheutilityoftheimageasanintegerscorebetween0and100. 62 Figure2.18HistogramofNFIQ2.0qualityscoresforalid(green)andaltered(red) images.Approximately,75%ofalteredimageshaveaNFIQ2.0scoreof 40orlower,andonly10%ofaltereddatasethasaNFIQ2.0scoreoflargerthan50.Themedian NFIQ2.0scoreforalteredimagesis23,whilemedianNFIQ2.0scorefor imagesis48.ThissuggestsNFIQ2.0'ssuitabilityfordetectingaltered particularlyforcasesofobliteration.(Bestviewedincolor) containstheremaining 963 alteredand 963 suchthatthetrainandtestsets aredisjoint.Figure2.19showssamplealteredandimagesusedfortrainingandtestingin oneofthevefolds. 2.6.3SpoofDetectionResults Theproposedapproachisevaluatedunderthefollowingfourscenariosofspoofdetec- tion,whichanalgorithm'srobustnessagainstnewspoofmaterials,useofdifferentsensors and/ordifferentenvironments. 63 Figure2.19Exampleofalteredandimagesusedfortrainingandtestinginone ofthevefolds.Thealteredregionishighlightedinred.TheNFIQ2.0qualityscoresarealso presentedforeachimage;thelargerNFIQ2.0score,thehigherquality.TheNFIQ2.0 qualityscoresrangesbetween[0,100]. 2.6.3.1Intra-Sensor,KnownSpoofMaterials Inthissetting,allthetrainingandtestingimagesarecapturedusingthesamesensor,andallspoof fabricationmaterialsutilizedinthetestsetareknownapriori.Ourexperimentalresultsshowthat trainingtheMobileNet-v1modelfromscratch,usingminutiae-basedlocalpatches,performsbetter thane-tuningapre-trainednetwork,asreportedin[119].Thelargeamountofavailabledata, intheformoflocalpatches,issuftotrainthedeeparchitectureofMobileNet-v1 modelwithoutover Itwasreportedin[57]thatmostofthealgorithmssubmittedtoLivDet2015didnotperform wellonDigitalPersonasensorduetothesmallimagesize.Ourapproachbasedonlocalpatches doesnotsufferfromthislimitation.Tables2.8and2.9presenttheperformancecomparisonbe- tweentheproposedapproachandthestate-of-the-artresultsfortheLivDetdatasetsutilizedin 64 Table2.8Performancecomparisonbetweentheproposedapproach(bottom)andstate-of-the-art (top)reportedonLivDet2015dataset[113].Separatenetworksaretrainedonthetrainingimages capturedbyeachofthefourreaders. Ferrfakeknown and Ferrfakeunknown correspond toKnownSpoofMaterialsandCross-Materialscenarios,respectively. 
State-of-the-Art[113] LivDet2015 Ferrlive (%) Ferrfake y (%) Ferrfake known(%) Ferrfake unknown*(%) ACE (%) Ferrfake(%)@ Ferrlive=1%[57] GreenBit 3.50 5.33 4.30 7.40 4.60 17.90 Biometrika 8.50 3.73 2.70 5.80 5.64 15.20 DigitalPersona 8.10 5.07 4.60 6.00 6.28 19.10 Crossmatch 0.93 2.90 2.12 4.02 1.90 2.66 Average 4.78 4.27 3.48 5.72 4.49 13.24 ProposedApproach LivDet2015 Ferrlive (%) Ferrfake y (%) Ferrfake known(%) Ferrfake unknown*(%) ACE (%) Ferrfake(%)@ Ferrlive=1% GreenBit 0.50 0.80 0.30 1.80 0.68 0.53 Biometrika 0.90 1.27 0.60 2.60 1.12 1.20 DigitalPersona 1.97 1.17 0.85 1.80 1.48 1.96 Crossmatch 0.80 0.48 0.82 0.00 0.64 0.28 Average 1.02 0.93 0.64 1.48 0.97 0.96 y Ferrfakeincludesspoofsfabricatedusingbothknownandpreviouslyunseenmaterials.Itisanaverageof Ferrfake-knownandFerrfake-unknown,weightedbythenumberofsamplesineachcategory. *TheunknownspoofmaterialsinLivDet2015testdatasetincludeLiquidxandRTVforGreenBit, Biometrika,andDigitalPersonasensors,andOOMOOandGelatinforCrossmatchsensor. thisstudy.Table2.10presentstheperformanceoftheproposedapproachonMSUFingerprint PresentationAttackDataset(MSU-FPAD)andPreciseBiometricsSpoof-KitDataset(PBSKD). IndependentMobileNet-v1networksaretrainedforeachevaluation.NotethatinLivDet2015 (Table2.8),thisscenarioisrepresentedbythe Ferrfakeknown .ForLivDet2011and2013,MSU- FPAD,andPBSKDdatasets(Table2.9),allspoofmaterialsinthetestsetwereknownduring training.Fig.2.20presentsexampleimagesforBiometrikasensorfromLivDet2015 datasetthatwerecorrectlyandincorrectlybytheproposedapproach. Wealsoevaluatetheimpactoflocalpatchsizeontheperformanceoftheproposedapproach, bycomparingtheperformanceofthreeCNNmodelstrainedonminutiae-centeredlocalpatchesof size [ p p ] where p = f 64 ; 96 ; 128 g ,extractedfromtheimagescapturedbyBiometrika sensorforLivDet2011dataset.Amongthesethreemodels,theonetrainedonlocalpatchesofsize [ 96 96 ]performedthebest.However,ascore-levelfusion,usingaverage-rule,ofthethreemodels reducedtheaverageerror(ACE)from 1 : 24% to 0 : 88% ,andFerrfakefrom 1 : 41% to 65 Table2.9Performancecomparisonbetweentheproposedapproachandstate-of-the-artresultsre- portedonLivDet2011andLivDet2013datasetsforintra-sensorexperimentsintermsofAverage Error(ACE)andFerrfake@Ferrlive=1%. Dataset State-of-the-Art ProposedApproach LivDet2011 ACE(%) ACE(%) Ferrfake@Ferrlive=1% Biometrika 4.90[61] 1.24 1.41 DigitalPersona 1.85[126] 1.61 3.25 ItalData 5.10[126] 2.45 7.21 Sagem 1.23 [126] 1.39 4.33 Average 3.27 1.67 4.05 LivDet2013 Biometrika 0.65[126] 0.20 0.00 ItalData 0.40[119] 0.30 0.10 Average 0.53 0.25 0.05 Table2.10AverageError(ACE),Ferrfake@Ferrlive=0.1%andFerrlive=1%on theMSUFingerprintPresentationAttackDataset(MSU-FPAD)andPreciseBiometricsSpoof-Kit Dataset(PBSKD)forintra-sensorexperiments. Dataset ProposedApproach MSU-FPAD ACE(%) Ferrfake@Ferrlive=0.1% Ferrfake@Ferrlive=1% CrossMatchGuardian200 0.08 0.11 0.00 LumidigmVenus302 3.94 10.03 1.30 Average 2.01 5.07 0.65 PBSKD CrossMatchGuardian200 2.02 5.32 0.65 LumidigmVenus302 1.93 3.84 0.33 Average 1.98 4.66 0.51 0 : 58% @Ferrlive =1% .Similarperformancegainswereobservedforothersensors,butthereisa tradeoffbetweentheperformancegainandthecomputationalrequirementsforthespoofdetector. 
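A minimal sketch of the average-rule, score-level fusion over the three patch resolutions discussed above is given below; the per-patch scores are assumed to come from three separately trained models, and all names are illustrative.

import numpy as np

PATCH_SIZES = [64, 96, 128]

def image_spoofness(patch_scores_per_model):
    """patch_scores_per_model: {patch_size: [per-patch scores]} for one fingerprint image."""
    # Each model's image-level score is the mean over its minutiae-centered patches.
    per_model = [np.mean(scores) for scores in patch_scores_per_model.values()]
    # Average-rule fusion across the three patch resolutions.
    return float(np.mean(per_model))

# Example: scores from three hypothetical models for a single fingerprint image.
scores = {64: [0.10, 0.22, 0.05], 96: [0.08, 0.15, 0.02], 128: [0.30, 0.12, 0.09]}
print(round(image_spoofness(scores), 3))   # fused global spoofness score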
Inordertoevaluatetheofutilizingminutiaelocationsforextractinglocalpatches, wetrainedindependentMobileNet-v1modelsonasimilarnumberoflocalpatches,extractedran- domlyfromLivDet2015datasets.Itwasobservedthatthemodelstrainedonminutiae-centered localpatchesachievedasignhigherreduction( 78% )inaverageerror,com- paredtothereduction( 33% )achievedbythemodelstrainedonrandomlysampledlocalpatches. Fig.2.21illustratesthat(i)featuresextractedfromlocalpatchesprovidebetterspoofdetection accuracythanthewholeimage,(ii)patchesselectedaroundminutiaeperformbetterthanrandom patchesofthesamesize,(iii) 96 96 patchperformsthebestamongthethreepatchsizescon- 66 Figure2.20ExampleliveandspoofforBiometrikasensorfromLivDet2015dataset, correctlyandincorrectlybyourproposedapproach.(Bestviewedincolor) sidered,and(iv)score-levelfusionofmulti-resolutionlocalpatchesbooststhespoofdetection performance. 2.6.3.2Intra-Sensor,Cross-Material Inthissetting,thesamesensorisusedtocapturealltrainingandtestingimages,butthespoofim- agesinthetestingsetarefabricatedusingnewmaterialsthatwerenotseenduringtraining.Forthe setofcross-materialexperiments,weutilize(i)theLivDet2017datasetwhichcontainsthree completelydifferentspoofmaterialsinthetestingforeachsensor, i.e. ,Gelatine,Latex,andLiquid x,and(ii)theLivDet2015datasetwhichcontainstwonewspoofmaterialsinthetesting setforeachsensor, i.e. ,LiquidEcxandRTVforGreenBit,Biometrika,andDigitalPersona sensors,andOOMOOandGelatinforCrossmatchsensor.Theperformanceoftheproposedap- 67 Figure2.21ROCcurvesforlivev.spoofofimagesfromLivDet2011 Dataset(Biometrikasensor)utilizing(i)wholeimage,(ii)randomlyselectedpatches[ 96 96 ], (iii)minutiae-basedpatchesofsize[ p p ], p 2f 64 ; 96 ; 128 g ,(iv)score-levelfusionofmulti- resolutionpatches.(Bestviewedincolor) proachoncross-materialexperimentsforLivDet2017andLivDet2015datasetsarepresentedin Table2.11andTable2.8(column F errfake unknown ),respectively,andiscomparedwiththestate- of-the-artperformancereportedin[113,114].Areductionintheerrorrateisachieved bytheproposedmethod.Forbettergeneralizability,asecondsetofcross-materialexperiments areperformedonLivDet2011andLivDet2013datasets,followingtheprotocoladoptedbythe winnerofLivDet2015[119].Table2.12presentstheachievederrorratesontheseexperiments, alongwiththespooffabricationmaterialsusedintrainingandtestingsets. 2.6.3.3Cross-SensorEvaluation Inthisevaluation,thetrainingandthetestingimagesareobtainedfromtwodifferentsensorsbut fromthesamedataset.Thissettingthealgorithm'sstrengthinlearningthecommonchar- acteristicsusedtodistinguishliveandspoofacrossacquisitiondevices.For 68 Table2.11Performancecomparisonbetweentheproposedapproachandstate-of-the-artre- sults[114]reportedonLivDet2017datasetforcross-materialexperimentsintermsofAverage Error(ACE)andFerrfake@Ferrlive=1%. LivDet2017Dataset[114] State-of-the-Art ProposedApproach ACE(%) ACE(%) Ferrfake@Ferrlive=1% GreenBit 2.94 2.33 6.57 Orcanthus 5.84 7.04 26.05 DigitalPersona 4.41 2.90 20.32 Average(LivDet2017Winner) 4.40(4.75) 4.09 17.65 Table2.12Performancecomparisonbetweentheproposedapproachandstate-of-the-artresults reportedonLivDet2011andLivDet2013datasetsforcross-materialexperiments,intermsof AverageError(ACE)andFerrfake@Ferrlive=1%. 
Dataset SpoofMaterials State-of- the-Art ProposedApproach Materials-Training Materials-Testing ACE(%) ACE(%) Ferrfake@ Ferrlive=1% Biometrika2011 EcoFlex,Gelatine,Latex Silgum,WoodGlue 10.10[119] 4.60 8.15 Biometrika2013 Modasil,WoodGlue EcoFlex,Gelatine,Latex 2.10[126] 1.30 0.34 ItalData2011 EcoFlex,Gelatine,Latex Silgum,WoodGlue,Other 7.00[126] 5.20 7.80 ItalData2013 Modasil,WoodGlue EcoFlex,Gelatine,Latex 1.25[126] 0.60 0.68 Average 5.11 2.93 4.24 instance,usingtheLivDet2011dataset,imagesfromtheBiometrikasensorareusedfortraining, andtheimagesfromItalDatasensorareusedfortesting.Wefollowtheprotocolforselectionof trainingandtestingsetsforcross-sensorandcross-datasetexperimentsasadoptedbyNogueiraet al.[119].Table3.7comparestheaverageclaerrorandFerrfake@Ferrlive=1%for theproposedapproachwiththestate-of-the-artresultsobtainedby[119]and[126]oncross-sensor experiments. 2.6.3.4Cross-DatasetEvaluation Inthisscenario,thetrainingandthetestingimagesareobtainedusingthesamesensor,butfrom twodifferentdatasets,( i.e. ,onlythecaptureenvironmentsaredifferent).Forinstance,training imagesareacquiredusingtheBiometrikasensorfromLivDet2011datasetandthetestingimages areacquiredusingtheBiometrikasensorfromLivDet2013.Thissetofexperimentscaptures thealgorithm'sinvariancetothechangesinenvironmentfordatacollection.Table2.14presents theaverageerrorandFerrfake@Ferrlive=1%.ResultsinTable2.14showthatthe 69 Table2.13Performancecomparisonbetweentheproposedapproachandstate-of-the-artre- sults[119]reportedonLivDet2011andLivDet2013datasetsforcross-sensorexperiments,in termsofAverageError(ACE),andFerrfake@Ferrlive=1%. TrainingDataset(TestingDataset) State-of-the-Art ProposedApproach ACE(%) ACE(%) Ferrfake(%)@ Ferrlive=1% Biometrika2011(ItalData2011) 29.35[126] 25.35 50.81 ItalData2011(Biometrika2011) 27.65[126] 25.21 76.20 Biometrika2013(ItalData2013) 1.50 [126] 4.30 12.73 ItalData2013(Biometrika2013) 2.30 [119] 3.50 70.35 Average 15.20 14.59 52.52 Table2.14Performancecomparisonbetweentheproposedapproachandstate-of-the-artre- sults[126]reportedonLivDet2011andLivDet2013datasetsforcross-datasetexperiments,in termsofAverageError(ACE)andFerrfake@Ferrlive=1%. TrainingDataset(TestingDataset) State-of-the-Art ProposedApproach ACE(%) ACE(%) Ferrfake(%)@ Ferrlive=1% Biometrika2011(Biometrika2013) 14.00[126] 7.60 89.60 Biometrika2013(Biometrika2011) 34.05[126] 31.16 78.84 ItalData2011(ItalData2013) 8.30[126] 6.70 16.70 ItalData2013(ItalData2011) 44.65[126] 26.16 75.09 Average 25.25 17.91 65.06 proposedlocalpatchbasedapproachachievesareductionof29%intheaverageerror from25.25%in[126]to17.91%inourapproach.However,theaverageFerrfake@Ferrlive=1% thatwereportis52.52%and65.06%forcross-sensorandcross-datasetscenariosrespectively, indicatingthechallenges,especiallyinapplicationswhereahighlevelofspoofdetectionaccuracy isneeded. 2.6.3.5GovernmentControlledTests TheevaluationscenarioofGCT-I,GCT-II,andGCT-IIIissimilartoacross-datasetevaluation, asweutilizethesamereaderforcollectingtrainingandtestingdata,butindifferent environments.ThetrainingdataiscollectedinalabenvironmentatMSUwhereastestingdataset arecollectedinasimulatedoperationalsettingatJHUfacilitiesinMaryland.Table2.15presents theachievedPATrueDetectionRate(%)@FalseDetectionRate= 0 : 2% .Theselectionofthis 70 Table2.15TrueDetectionRate(%)@FalseDetectionRate=0.2%ontheGCT-I,GCT-II,and GCT-IIIevaluationdatasets. 
Dataset ProposedApproach GCT-I TDR(%)@FDR=0.2% CrossMatchGuardian200 99.60 LumidigmVenus302 97.44 GCT-II CrossMatchGuardian200 99.20 GCT-III CrossMatchGuardian200 99.81 metricisbasedontherequirementsofIARPAODINprogram[123]andrepresentsthepercentage ofPAsabletobreachthebiometricsystemsecuritywhentherejectrateoflegitimateusers 0 : 2% . Figure2.22Performancecurvesfortheproposedaltereddetectionapproachutilizing Inception-v3andMobileNet-v1CNNmodels.Yoonetal.[170](baseline)achievedaTDRof70% @ FDR=2%on4,433alteredwhiletheproposedapproachachievesaTDR(overve folds)of99.24% 0.58% @ FDR=2%on4,815altered(Bestviewedincolor) 71 Figure2.23Alterationscorehistogramsforandalteredobtainedbythepro- posedapproachusingthebestperformingInception-v3model.Thesmalloverlapbetweenthe andalteredscoredistributionsisanindicationofhighdiscriminationpowerofthemodel. NotethattheY-axisispresentedinlogscale.(Bestviewedincolor) 2.6.4AlteredFingerprintDetectionandLocalization Figure2.22showstheReceiverOperatingCharacteristic(ROC)curvesfortheproposedal- tereddetectionapproach(Inception-v3andMobileNet-v1)comparedwithstate-of-the- art[170].TheredcurveshowstheaccuracyoftheInception-v3implementationandthebluecurve showstheaccuracyoftheMobileNet-v1implementation.Inception-v3outperformsMobileNet-v1 architecture( ˘ 99% to ˘ 92% ),whilethecomputationalrequirement 18 forMobileNet-v1(6ms) isalmost10timeslowercomparedtotimerequiredbytheInception-v3architecture(50ms).The superiorperformanceofInception-v3overMobilenet-v1networkcanbeattributedto(i)thedeeper convolutionalnetworkprovidinghigherdiscriminationpowerand(ii)thelargerinputimagesize, 299 299 forInception-v3comparedto 224 224 forMobilenet-v1.Bothnetworkmodelsshow betterdetectionperformancethanYoonandJain[170]whichhadatruedetectionrateofonly 70 : 2% atafalsepositiverateof 2% . 18 WeutilizedNVIDIAGTX1080TiGPUtorunourimplementationofInception-v3andMobileNet-V1based altereddetection. 72 Figure2.24Exampledetectionsandtheiralterationscoresoutputbytheproposedapproach.(a) and(d)presentcorrectlyimages,while(b)and(c)presentincorrect(b) athatreceivesahighalterationscoreprimarilyduetothenoisyregiononthe right.(c)containsasmallregionofalterationwhichissimilartothenoisepresentin 73 Figure2.25Exampleimageswithpossiblegroundtruthlabelingerror.(a)Incorrectlylabeledas altered,and(b)incorrectlylabeledasTheInception-v3modeloutputsanalterationscore of0.20and0.97for(a)and(b),respectively,indicating(a)asand(b)asaltered. Figure2.23showsthehistogramsofscoresproducedbyourInception-v3modelfor andalteredimages.Theverysmalloverlapofthetwodistributionsisanindicationof thehighaccuracyofourmodel.Wefurtherinvestigatedtheimagesthatwereincorrectlylabeledby ourmodelaccordingtothegroundtruthlabelsgivenatthetimeoftraining.Ourvisualinspection oftheseimagessuggeststhatsomeofimageslabeledaslooklikealtered Thiscouldbeduetointentionalalterationorcasesofpoorqualitywherecharacteristics aredegradedbecauseofageoroccupation(bricklayers,forexample,areknowntohavepoor qualitybecausetheirskinisseverelydamaged).Ontheotherhand,someoftheimages labeledasaltered,havearelativelysmallportionoftheimageasalteredandmostpartsoftheimage lookInotherwords,mostofthefailurecasesareduetothesubjectivityofthelabeling process.ExampleimagesofcorrectandincorrectbytheInception-v3modelare showninFigure2.24alongwiththescoresgeneratedbyourmodel.Examplesofincorrectground truthlabelsareshowninFigure2.25. 74 Figure2.26AconfusionmatrixofcorrectandincorrectofandPApatches. 
ThecrucialregionsthatareresponsibleforthepredictionmadebytheCNNarchitecture(CNN- Fixations)andthecorrespondingdensityheatmapsareillustratedoneachlocalpatch. Toevaluatethelocalizationofalterations,atwo-foldcrossvalidationisperformed. TwoInception-v3networksaretrainedusing 81 ; 969 and 89 ; 979 alteredpatches,achiev- inganaverageEERof 8 : 5% . 2.7VisualizingCNNLearnings Theuseofconvolutionalneuralnetworks(CNNs)hasrevolutionizedcomputervisionandma- chinelearningresearchachievingunprecedentedperformanceinmanytasks.Butsuchsolutions areusuallyconsideredasfiblackboxesflsheddinglittlelightonhowtheyachievehighperformance. OnewaytogaininsightsintowhatCNNslearnisthroughvisualexploration, i.e. ,toidentifythe imageregionsthatareresponsibleforthepredictions.Towardsthisgoal,visualizationtech- niques[112,144,146]havebeenproposedtosupplementtheclasslabelspredictedbyCNN,in ourcaseorPA,withthediscriminatedimageregions(orsaliencymaps)exhibitingclass- patternslearnedbyCNNarchitectures.Thevisualizationtechniqueproposedin[112] exploitsthelearnedfeaturedependenciesbetweenconsecutivelayersofaCNNtoidentifythedis- 75 Figure2.27ExamplesofandPAimagesalongwiththespoof- nessscore(SS)outputbytheCNNarchitecture.Densityheatmapsofthearealso presented. criminativepixels,called CNN-Fixations ,intheinputimagethatareresponsibleforthepredicted label.WeutilizethisvisualizationtechniquetounderstandtherepresentationlearningofourCNN modelsandidentifythecrucialregionsinpatchesresponsibleforpredictions. Figure2.26presentsaconfusionmatrixofcorrectandincorrectonsofand PApatchesillustratingCNN-Fixationsandthecorrespondingdensityheatmaps.Weobservethat thereisahighdensityofalongfrictionridgelinesandatporelocations,suggestingthat thesearecrucialregionsindistinguishingvsPApatches.Figure2.27presents additionalexamplesofandPAimagesalongwithacoupleof localpatches.InthecaseofwholeimageasPA,weobserveahighdensity ofpointsontherightedgeoftheimagewherethefrictionridgelinesarecollapsedduetohigh moistureresultinginnarrowvalleys.InthecaseofPAimage,the exhibitamulti-modaldistributionwheretherightregionisdominatingresultingintheaverage spoofnessscoreof 0 : 39 . Adeepconvolutionalneuralnetwork(CNN)isshowntobeuniversal,implyingthatitcanbe usedtoapproximateanycontinuousfunctiontoanarbitraryaccuracygiventhedepthoftheneural networkislargeenough[173].fiInsteadofusingageneralbank,aneuralnetworkistrained 76 Figure2.28Illustrationoftheoutputs,foraliveandaspoofpatch,afterthe andthirdconvolutionlayersintheCNNarchitecture(Inception-v3).Differentfocuson differentfeaturessuchaslocationofsweatpores,noiseartifacts,frictionridge,valleynoise,etc. 77 to aminimalsetofsothatboththefeatureextractionandtasks areperformedbythesamenetworkfl[81].AftertrainingtheFingerprintSpoofBuster,we usethesameliveandspoofpatchesusedinFigure2.8tovisualizetheoutputs aftertheandthirdconvolutionlayersintheCNNarchitecture,showninFigure2.28.We observethatdifferentfocusondifferentfeaturessuchaslocationofsweatpores,noise artifacts,frictionridge,valleynoise,etc.,markedinred.TheCNN-architecturelearnsthenon- linearcomplexrelationshipbetweenthedifferentfeaturesextractedatvariousscalesfromthe inputimagetoachievethehighperformance.Itishoweverstillanon-goingresearch problemtounderstandandvisualizethefeatureslearnedbytheCNNarchitectures. 
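The layer-wise responses illustrated in Figure 2.28 can be inspected with a few lines of code. The sketch below is not the CNN-Fixations algorithm of [112]; it simply extracts intermediate convolutional activations from a Keras model, assuming TensorFlow 2.x is available, and uses a toy randomly initialized network in place of the trained PAD model.

import numpy as np
import tensorflow as tf

def intermediate_activations(model, patch, layer_names):
    """Return {layer_name: activation tensor} for one 96x96 grayscale patch."""
    probe = tf.keras.Model(inputs=model.input,
                           outputs=[model.get_layer(n).output for n in layer_names])
    x = patch.reshape(1, 96, 96, 1).astype("float32") / 255.0
    outs = probe.predict(x, verbose=0)
    if len(layer_names) == 1:          # predict returns a single array for one output
        outs = [outs]
    return dict(zip(layer_names, outs))

# Toy stand-in for the trained PAD network (random weights, for illustration only).
inp = tf.keras.Input(shape=(96, 96, 1))
x = tf.keras.layers.Conv2D(8, 3, activation="relu", name="conv1")(inp)
x = tf.keras.layers.Conv2D(16, 3, activation="relu", name="conv2")(x)
x = tf.keras.layers.Conv2D(32, 3, activation="relu", name="conv3")(x)
model = tf.keras.Model(inp, x)

patch = np.random.randint(0, 255, (96, 96))
acts = intermediate_activations(model, patch, ["conv1", "conv3"])
for name, a in acts.items():
    print(name, a.shape)   # per-filter response maps after the first and third conv layers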
2.8 Computing Times

The MobileNet-v1 CNN model takes around 6-8 hours to converge using a single Nvidia GTX 1080 Ti GPU with approximately 96,000 local patches from 2,000 fingerprint images (2,000 images x ~48 patches/image) in the training set. The average spoof detection time for an input fingerprint image, including minutiae detection, local patch extraction and alignment, inference of spoofness scores for the local patches, and producing the final spoof detection decision, is 100 ms using a Nvidia 1080 Ti GPU and 1,500 ms on a commodity smartphone.

2.9 Fingerprint Spoof Buster Lite

Fingerprint Spoof Buster evaluates all local patches corresponding to the detected minutiae. The individual scores output by the CNN model for each of the local patches are averaged to produce a global spoofness score. The time required to evaluate a single patch using the MobileNet-v1 CNN model on a commodity smartphone, such as the Samsung Galaxy S8 19 (Qualcomm Snapdragon 835 64-bit Octa Core 2.35 GHz processor and 4 GB RAM), is around 48 ms. This results in an average execution time of 1.5 seconds per image (with an average of 35 minutiae/image). Moreover, a trained MobileNet-v1 model in ProtoBuf (.pb) format takes around 13 MB. These computation and memory requirements are too large to yield an acceptable "real-time" spoof detection time of a fraction of a second.

19 https://www.gsmarena.com/samsung_galaxy_s8-8161.php

Figure 2.29 Minutiae clustering. (a) Fingerprint image; (b) extracted minutiae overlaid on (a); (c) 96 x 96 patches centered at each minutia; (d) minutiae clustering using k-means (k is set to 10 here). The clusters, highlighted as yellow circles of the same size, are shown only for illustrative purposes. In practice, the cluster sizes may vary based on the minutiae distribution.

Table 2.16 Detection time and PAD performance (TDR @ FDR = 0.2%) of Fingerprint Spoof Buster Lite.

#Minutiae Clusters       | Time Required (in ms) (Avg. +/- s.d.) | TDR (%) @ FDR = 0.2%
5                        | 53 +/- 10                             | 93.9 +/- 1.1
10                       | 98 +/- 8                              | 95.3 +/- 0.5
15                       | 151 +/- 11                            | 95.3 +/- 0.5
20                       | 202 +/- 10                            | 95.3 +/- 0.6
25                       | 247 +/- 24                            | 95.7 +/- 0.5
30                       | 301 +/- 25                            | 95.7 +/- 0.4
All Minutiae (avg. = 35) | 510 +/- 26                            | 95.7 +/- 0.1
Note: The Samsung Galaxy S8 smartphone (Qualcomm Snapdragon 835 64-bit Octa Core 2.35 GHz processor and 4 GB RAM) costs $349.

2.9.1 Proposed Optimizations

In order to reduce the memory and computation requirements for real-time operation on a commodity smartphone, we propose the following two optimizations:

Model Quantization: TensorFlow Lite 20 is used to convert the MobileNet-v1 (.pb) model to the TensorFlow Lite (.tflite) format, resulting in a light-weight and low-latency model with weights quantized to perform byte computations instead of floating-point arithmetic. The resulting model takes only 3.2 MB of memory and can execute PAD for a single patch on a Samsung Galaxy S8 smartphone in around 10 ms, an approximately 80% reduction in computation and memory requirements.

20 https://www.tensorflow.org/lite/

Reduce the required number of inferences: It has been observed that minutiae points in a fingerprint image are distributed in a non-uniform manner [127]. This obviates the need for evaluating all minutiae-centered patches. We cluster the minutiae using K-means clustering [80] (see Figure 2.29 (d)), extract a single patch (96 x 96) centered at the centroid of each cluster, and assign a weight to each cluster based on the number of members (minutiae points) that belong to that cluster. A cluster with a large number of (minutiae) members is given a higher weight. The final spoofness score is computed as a weighted average of the spoofness scores of the centroid-based local patches. The weighted score fusion is utilized to achieve a global spoofness score similar to the one obtained when no clustering is performed.
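The following is a minimal sketch of the cluster-and-weight scoring just described: minutiae are grouped with k-means, one patch per cluster centroid is scored, and the scores are fused with weights proportional to cluster size. The patch-scoring function is a placeholder for the quantized MobileNet-v1 model, and k = 10 follows Table 2.16.

import numpy as np
from sklearn.cluster import KMeans

def clustered_spoofness(minutiae_xy, score_patch, k=10):
    """minutiae_xy: (n, 2) array of minutiae coordinates; score_patch(x, y) -> spoofness."""
    k = min(k, len(minutiae_xy))                        # handle images with few minutiae
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(minutiae_xy)
    weights = np.bincount(km.labels_, minlength=k).astype(float)
    weights /= weights.sum()                            # weight = fraction of minutiae in the cluster
    scores = np.array([score_patch(x, y) for (x, y) in km.cluster_centers_])
    return float(np.dot(weights, scores))               # weighted global spoofness score

# Example with synthetic minutiae and a dummy scorer standing in for the CNN.
rng = np.random.default_rng(0)
minutiae = rng.uniform(0, 500, size=(35, 2))            # ~35 minutiae per image on average
print(clustered_spoofness(minutiae, lambda x, y: float(rng.uniform())))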
Apartfromtheabovetwooptimizations,wemodifytheMobileNet-v1networksuchthatthe inputimagesizeis 96 96 ,thesamesizeastheminutiaepatch.Correspondingly,thekernelsize usedinthelastaveragepoollayerisreducedfrom 7 7 to 3 3 .Thisreducesthetimerequired totrainthenetworkonadatasetwitharound 100 ; 000 patchesfrom 6 - 8 hoursto 2 - 2 : 5 hoursusing asingleNVIDIAGTX1080TiGPU,withoutanydropinPADperformance.Weutilizedthe w-slimlibrary 21 forourexperiments. Table2.16presentstheaccuracyofFingerprintSpoofBusterLite(TDR(%)@FDR=0.2%) andtheaveragetimerequiredtoevaluateminutiae-basedpatchesonGalaxyS8.Sincetheout- putclustersfromK-meansclusteringdependontheclusterinitialization,weuse5-foldcross- validationandreportaverage std.forboththeevaluationtimeandPADperformance.Table2.16 showsthatatotalof10minutiaeclustersaresuitabletomaintainPADperformance(TDR= 95 : 3% 21 https://githubw/models/tree/master/research/slim 80 Figure2.30UserinterfaceoftheAndroidapplication, FingerprintSpoofBusterLite shownin(a). Itallowsselectionofaninferencemodelasshownin(b).Theusercanloadaimage fromphonestorageorcapturealivescanfromareaderasshownin(c).Theapp executesPADanddisplaysthedecisionalongwithhighlightedlocalpatchesonthescreen shownin(d)and(e). comparedto 95 : 7% @FDR= 0 : 2% ),whilereducingthecomputationalrequirementbyalmost 80% . 2.9.2AndroidApplication Giventhereductioninresourcerequirements,anAndroid-basedapplication(app)forFingerprint SpoofBuster,called FingerpintSpoofBusterLite ,wasdeveloped.Theappprovidesanoptionto selectaninferencemodeltrainedonimagesfromdifferentreaderssuchasCrossMatch, SilkID 22 ,etc.,asshowninFigure2.30(b).Theappcanevaluateaimageinputbya gerprintreaderconnectedtothemobilephoneviaanOTG(on-the-go)cable.Italsoallowsloading andevaluatinganimagefromthephonestorage/gallery(seeFigure2.30(c)).Theappdisplays thecapturedimagewithextractedminutiaeoverlaidontheimage.Local patchescenteredaroundthecentroidofminutiaeclustersareevaluatedandhighlightedbasedon thespoofnessscore.Afterevaluation,theapppresentsthedecision(Live/Spoof),spoofness score,andPAdetectiontime(seeFigures2.30(d)and(e)). 22 http://www.silkid.com/products/ 81 2.10Summary Arobustandaccuratemethodforpresentationattackdetectioniscriticaltoensure thereliabilityandsecurityofauthenticationsystems.Inthischapter,wehaveutilized domainknowledgebyextractinglocalpatchescenteredandalignedusingminutiaein theinputimageforpresentationattackdetection.Thelocalpatchbased approach,calledFingerprintSpoofBuster,providessalientcuestodifferentiatePA fromSpoofbusterisabletoachieveareductionintheerrorrates forintra-sensor(63%),cross-material(43%),cross-sensor(4%),andcross-datasetscenarios(29%) comparedtostate-of-the-artonpublicdomaindatasets.AGUIisdevelopedtoallowanoperator orsystemdesignertoanalyzetheinputimagesforandPAregions.Wealso trainedaCNNmodelusingoperationaldatasetsof 4 ; 815 alteredand 4 ; 815 imagesforalteredprintdetectionandlocalization.Ouraltereddetectionmodel achievesaTrueDetectionRate(TDR)of 99 : 24%@ FalseDetectionRate(FDR)of 2% ,compared tothepreviousstate-of-the-artresultofTDR=70%atFDR= 2% whichusedasmalleroperational dataset.Finally,wepresentedalight-weightversionoftheproposedPADasanAndroidappthat canrunonacommoditysmartphone(SamsungGalaxyS8)withoutdropinperformance andmakeaPAdetectioninreal-time(under 100 ms). 
82 Chapter3 FingerprintPADGeneralization Inthepreviouschapter,wetackledpresentationattackdetection(PAD)byutilizinglo- calpatches( 96 96 )centeredandalignedusingminutiaetotrainMobileNet-v1and Inception-v3models.Thisfusionofdomainknowledge(minutiae)anddeep-learning basedapproachesprovidedstate-of-the-artperformanceforPAD.Inthischapter,we addressoneofthemajorchallengesofdeep-learningbasedPADapproaches,namely, erprint PADgeneralization .Ourmainfocusistoimprove cross-material and cross-sensor PADgeneral- izationperformance,whilemaintaininghighperformanceintheknown-materialandknown-sensor scenarios. 3.1Introduction NewapproachestoPADhaveproposedconvolutionalneuralnetwork(CNN)basedso- lutionswhichhavebeenshowntooutperformhand-craftedfeaturesonpubliclyavailableLivDet databases[23,24,87,110,119,126,156].However,oneofthemajorlimitationsofexistingPAD approachesistheirpoorgeneralizationperformanceacrossfiunknownflPAmaterials,thatwere notusedduringtraining.Togeneralizeanalgorithm'seffectivenessacrossPAfabricationmate- rials,called cross-material performance,PAdetectionhasbeenreferredtoasan open-setprob- 83 Table3.1Summaryofthestudiesprimarilyfocusedonspoofgeneralization.Theper- formancemetricsutilizedindifferentstudiesincludeACE=AverageError;EER= EqualErrorRate;andTDR=TrueDetectionRate(spoofs)@aedFDR=FalseDetectionRate (spoofs). StudyApproachDatabasePerformance Rattanietal.[135]Weibull-calibratedSVMLivDet2011EER=19.70% Ding&Ross[35]Ensembleofmultipleone-class SVMs LivDet2011EER=17.60% Chugh&Jain[24]MobileNettrainedon minutiae-centeredlocalpatches LivDet2011-2015ACE=1.48%(LivDet2015), 2.93%(LivDet2011,2013) Chugh&Jain[26]Identifyarepresentativesetof spoofmaterialstocoverthedeep featurespace MSU-FPADv2.0,12 spoofmaterials TDR=75.24%@FDR=0.2% Engelsma&Jain[46]Ensembleofgenerative adversarialnetworks(GANs) Customdatabasewith liveand12spoof materials TDR=49.80%@FDR=0.2% Gonzlez-Soleretal.[59]Featureencodingofdense-SIFT features LivDet2011-2015TDR=7.03%@FDR=1% (LivDet2015),ACE=1.01% (LivDet2011,2013) Tolosanaetal.[156]FusionoftwoCNNarchitectures trainedonSWIRimages Customdatabasewith liveand8spoof materials EER=1.35% Gajawadaetal.[50]Styletransferfromspooftolive imagestoimprovegeneralization; requiresfewsamplesoftarget material LivDet2015, CrossMatchsensor TDR=78.04%@FDR=0.1% Zhangetal.[172]Slim-ResCNN+Centerof Gravitypatches LivDet2017Avg.Accuracy=95.25% ProposedApproach Styletransferbetweenknown spoofmaterialstoimprove generalizabilityagainst completelyunknownmaterials MSU-FPADv2.0,12 spoofmaterials& LivDet2017 TDR= 91.78% @FDR= 0.2%(MSU-FPADv2.0);Avg. Accuracy= 95.88% (LivDet 2017) lem 1 [135].Table3.1presentsasummaryofthestudiesprimarilyfocusedoncross-materialPAD generalization.EngelsmaandJain[46]proposedusinganensembleofgenerativeadversarialnet- works(GANs)onliveimageswiththehypothesesthatfeatureslearnedbyadiscrimi- natortodistinguishbetweenrealliveandsynthesizedlivecanbeusedtoseparatelive fromPAaswell.Onelimitationofthisapproachisthatthediscriminator intheGANarchitecturemaylearnmanyfeaturesrelatedtostructuralnoiseaddedbythegenerative process.SuchfeaturesarelikelynotpresentinthePAsfabricatedwithunknownmaterials. 1 Open-setproblemsaddressthepossibilityofnewclassesduringtesting,thatwerenotseenduringtraining. Closed-setproblems,ontheotherhand,evaluateonlythoseclassesthatthesystemwastrainedon. 
84 Although,ithasbeenshownthatsomePAmaterials 2 areeasiertodetect(e.g.EcoFlex,Gelatin, Latex)thanothers(e.g.WoodGlue,Silgum)whenleftoutfromtraining[24],theunderlyingrea- sonsareunknown.Tounderstandandinterpretthegeneralizationperformanceagainstunknown PAs,weinvestigatematerialcharacteristics(twoopticalandtwophysicalproperties)correlated withcross-materialperformanceand3Dt-SNE 3 featureembeddings[103]toidentifyarepresen- tativesetofmaterialsthatshouldbeincludedtotrainarobustPAD.Wealsoproposetwodifferent approachestoimprovethegeneralizationperformance.Themaincontributionsofthischapterare: 1.EvaluatedthegeneralizationperformanceofFingerprintSpoofBuster,astate-of-the-art CNN-basedPADapproach,byemployingleave-one-outapproachonalargedatasetof 5 ; 743 and 4 ; 912 PAimagesusing12differentPAmaterials. 2.Investigatedthe3Dt-SNEvisualizationandmaterialcharacteristics(twophysicalandtwo optical)toidentifyafirepresentativesetflofmaterials(Silicone,2Dpaper,PlayDoh,Gelatin, LatexBodyPaint,andMonsterLiquidLatex)thatcouldalmostcovertheentirePAfeature space. 3.Designedastyletransfer-basedwrapper,calledUniversalMaterialGenerator(UMG),toim- provethegeneralizationperformanceofanyspoofdetectoragainstspoofsmade frommaterialsnotseenduringtraining.Itattemptstosynthesizeimpressionswithstyle (texture)characteristicspotentiallysimilartounknownspoofmaterialsbyinterpolatingthe stylesfromknownspoofmaterials. 4.Improvedthecross-sensorspoofdetectionperformancebysynthesizinglarge-scaleliveand spoofdatasetsusingonly100liveimagesfromanewtargetsensor.Ourapproachfor improvingcross-materialperformancealsoimprovesthecross-sensorperformanceoftwo state-of-the-artspoofdetectors. 2 Fig.2.1illustratesthetwelvedifferentPAmaterialsusedinthisstudy. 3 TheapproachT-distributedStochasticNeighborEmbedding(t-SNE)modelseachhigh-dimensionalobjectbya twoorthree-dimensionalpointinsuchawaythatsimilarobjectsaremodeledbynearbypointsanddissimilarobjects aremodeledbydistantpointswithhighprobability[103]. 85 5.Fabricatedphysicalspoofartifactsusingamixtureofknownspoofmaterialstoshowthatthe syntheticallygeneratedimagesusingimagesofthesamesetofspoofmaterials correspondtoanunknownmaterialwithsimilarstyle(texture)characteristics. 6.AdynamicPADsolutionutilizingsequencesofminutiae-basedlocalpatchestotrainaCNN- LSTMarchitecturewiththegoaloflearningdiscriminativespatio-temporalfeaturesfor gerprintPAdetection.Theproposedapproachimprovesthespoofdetectionperformance fromTDRof81.65%to86.20%@FDR=0.2%incross-materialscenariousingadataset of 26 ; 650 livecapturesfrom 685 subjects( 1333 uniqueand 32 ; 930 PAframesfrom 7PAmaterials(with14variants). 3.2DatabasesusedtoinvestigateFingerprintGeneralization Thefollowingdatasetshavebeenutilizedinthisstudy: MSUFingerprintPresentationAttackDatabase(FPAD)v2.0 Adatabaseof 5 ; 743 liveand 4 ; 912 spoofimagescapturedonCrossMatchGuardian200 4 , oneofthemostpopularslapreaders.Thedatabaseisconstructedbycombiningthepublicly availableMSUFingerprintPresentationAttackDatasetv1.0(MSU-FPADv1.0)[24]and PreciseBiometricsSpoof-KitDataset(PBSKD).Tables3.2and3.4presentsthedetailsof thisdatabaseincludingthesensorsused, 12 spoofmaterials,totalnumberof impressions,andthenumberofminutiae-basedlocalpatchesforeachmaterialtype.Fig.2.1 presentssamplespoofimagesfabricatedusingthe 12 materials. 
LivDet2017 LivDet2017[114]datasetisthemostrecent 5 publicly-availableLivDetdataset,containing over 17 ; 500 images.Theseimagesareacquiredusingthreedifferent readers,namely,GreenBit,Orcanthus,andDigitalPersona.UnlikeotherLivDetdatasets, 4 https://www.crossmatch.com/wp-content/uploads/2017/05/20160726-DS-En-Guardian-200.pdf 5 ThetestingsetofLivDet2019databasehasnotyetbeenmadepublic. 86 spoofimagesincludedinthetestsetarefabricatedusingnewmaterials(Wood Glue,x,andBodyDouble),thatarenotusedinthetrainingset(WoodGlue,x, andBodyDouble).Table3.2presentsasummaryoftheLivDet2017dataset. SilkIDFastFrameRateDataset Alarge-scaledatabaseof 26 ; 650 liveframes from 685 subjects,and 32 ; 930 PAframesof 7 materials(14variants)collectedonSilkID SLK20Rreaderisutilizedintheevaluationoftheproposeddynamicapproach. Thisdatabaseisconstructedbycombiningimagescollectedfromtwosources. First,aspartoftheIARPAODINprogram[123],alarge-scaleGovernmentControlledTest (GCT-3)wasconductedatJohnsHopkinsUniversityAppliedPhysicsLaboratory(JHUAPL) facilityinNov.2019,whereatotalof 685 subjectswithdiversedemographics(interms ofage,profession,gender,andrace)wererecruitedtopresenttheirreal(live)aswellas PAbiometricdataface,andiris).ThePAwerefabricatedusing 5differentmaterials(11variants)andavarietyoffabricationtechniques,includinguseof dentaland3Dprintedmolds.ForabalancedliveandPAdatadistribution,weutilizeonly rightthumbandrightindeximagesforthelivedata.Second,wecollectedPA datainalabsettingusingdentalmoldscastedwiththreedifferentmaterials,namely,x (withtonepigment),CrayolaModelMagic(redandwhitecolors),andDragonSkin (withtonepigment).ThedetailsofthecombineddatabasearesummarizedinTable3.3. 3.3UnderstandingPADGeneralization Weadopttheleave-one-outprotocoltosimulatethescenarioofencountering unknown materials withthegoalofevaluatingthegeneralizationperformanceofFingerprintSpoofBuster.OnePA materialoutofthe12typesisleftoutfromthetrainingsetwhichisthenutilizedduringtesting. Thisrequirestrainingatotalof 12 differentMobileNet-v1modelseachtimeleavingoutoneofthe 12 differentPAtypes.The 5 ; 743 imagesarepartitionedintotrainingandtestingsuch 87 Table3.2SummaryoftheMSU-FPAD-v2andLivDet2017datasets.Spoofimages includedinthetestsetofLivDet2017arefabricatedusingnewmaterialsthatarenotusedinthe trainingset. DatasetMSU-FPADv2[26]LivDet2017[114] FingerprintReader CrossMatchGuardian200 GreenBitOrcanthusDigitalPersona DactyScan84CCertis2ImageU.are.U5160 ImageSize( px: )( w h ) 800 750500 500300 n y 252 324 Resolution( dpi ) 500569500500 #Live(Train/Test) 4 ; 743 / 1 ; 0001 ; 000 / 1 ; 7001 ; 000 / 1 ; 700999 / 1 ; 692 #Spoof(Train/Test) 4 ; 912 (leave-one-out) 1 ; 200 / 2 ; 0401 ; 180 / 2 ; 0181 ; 199 / 2 ; 028 KnownSpoofMateri- als(Training) Leave-one-out :2DPrintedPaper,3DUniversalTar- gets,ConductiveInkonPaper,DragonSkin,Gelatin, GoldFingers,LatexBodyPaint,MonsterLiquidLatex, PlayDoh,Silicone,Transparency,WoodGlue WoodGlue,x,BodyDouble UnknownSpoofMate- rials(Testing) Gelatine,Latex,Liquidx y FingerprintimagescapturedusingOrcanthusreaderhaveavariableheight( 350 450 px )dependingon thefrictionridgecontent. *Asetof 20 LatexspooffoundinthetrainingsetofOrcanthusreaderwere excludedinourexperiments.OnlyWoodGlue,x,andBodyDoubleareexpectedtobeinthe trainingdataset. thatthereare 1 ; 000 randomlyselectedimagesinthetestingsetandtheremaining 4 ; 743 imagesareutilizedinthetrainingset. 
3.3.1PerformanceagainstUnknownMaterials Table3.4presentstheperformanceofFingerprintSpoofBusteragainstunknownpresentationat- tacksintermsofTDR@FDR= 0 : 2% .Theweightedaveragegeneralizationperformanceachieved bythePADwiththeleave-one-outmethodisTDR= 75 : 24% ,comparedtoTDR= 97 : 20% @FDR = 0 : 2% whenallPAmaterialtypesareknownduringtraining.ThePAmaterialsDragonSkin, MonsterLiquidLatex,Transparency,3DUniversalTargets,andConductiveInkonPaperareeas- ilydetectedwithaTDR 90% @FDR= 0 : 2% evenwhenthesematerialsarenotseenbythe modelsduringtraining.Ontheotherhand,PAmaterialssuchasPlayDoh,Gelatin,2DPrintedPa- per,andSiliconearethemostaffectedwhennotseenduringtrainingachievingaTDR 70% @ FDR= 0 : 2% .Tounderstandthereasonsforthisdifferenceinperformancefordifferentmaterials, westudythematerialcharacteristicsinthenextsection. 88 Table3.3SummaryoftheSilkIDFastFrameRatedatabasecollectedatGCT-IIIaspart ofIARPAODINProgram[123]. PAMaterialMoldType#Presentations#Frames silicone x00-35,tonepigmentDental 7577 ; 570 x00-50,tonepigment3DPrinted 1381 ; 380 x00-50,tanpigment3DPrinted 1301 ; 300 Gelatin Ballisticgelatin,tonedye3DPrinted 50500 Knoxgelatin,clear3DPrinted 84840 Thirddegreesilicone LighttonepigmentDental 1311 ; 310 TanpigmentDental 98980 BeigesuedepowderDental 43430 MediumtonepigmentDental 36360 CrayolaModelMagic WhitecolorDental 9109 ; 100 RedcolorDental 3083 ; 080 PigmentedDragonSkintone) Dental 4524 ; 520 ConductiveSilicone 3DPrinted 87870 UnknownPA(JHU-APL) 3DPrinted 67670 TotalPAs3,29132,910 TotalLives ( 685 subjects) 2,66526,650 3.3.2PAMaterialCharacteristics Table3.4showsthatsomeofthePAmaterialsareeasiertodetectthanothers,evenwhenleftout fromtraining.Tounderstandthereasonforthis,itiscrucialtoidentifytherelationshipbetween differentPAtypesintermsoftheirmaterialcharacteristics.IfwecangroupthePAmaterialsbased onsharedcharacteristics,itcanpotentiallybeusedtoidentifyasetofrepresentativematerialsto trainarobustandgeneralizablemodel.Forthegivendatasetofimagesfrom 12 different spoofmaterials,wemeasuredthefollowingcharacteristics:(i) opticalproperties :UltraViolet -Visible(UV-Vis)spectroscopyresponseandFourierTransformInfrared(FT/IR)Spectroscopy response,and(ii) mechanicalproperties :materialelasticityandmoisturecontent.Thesematerial characteristicswereselectedbasedonourdiscussionswithmaterialscienceexperts 6 . 6 Materialresistivitywouldbeanimportantcharacteristicwhenperformingasimilaranalysisforcapacitive- printreaders. 89 Table3.4Summaryofthedatasetandgeneralizationperformance(TDR(%)@FDR= 0 : 2% ) withleave-one-outmethod.Atotaloftwelvemodelsaretrainedwherethematerialleft-outfrom trainingistakenasthenewmaterialforevaluatingthemodel. FingerprintPresentationAttack Material #Images #LocalPatches GeneralizationPerformance (TDR(%)@FDR=0.2%) Silicone 1 ; 160 38 ; 145 67 : 62 MonsterLiquidLatex 882 27 ; 458 94 : 77 PlayDoh 715 17 ; 602 58 : 42 2DPrintedPaper 481 7 ; 381 55 : 44 WoodGlue 397 12 ; 681 86 : 38 GoldFingers 295 9 ; 402 88 : 22 Gelatin 294 10 ; 508 54 : 95 DragonSkin 285 7 ; 700 97 : 48 LatexBodyPaint 176 6 ; 366 76 : 35 Transparency 137 3 ; 846 95 : 83 ConductiveInkonPaper 50 2 ; 205 90 : 00 3DUniversalTargets 40 1 ; 085 95 : 00 TotalPAs 4,912 144,379 Weighted* Total 5,743 228,143 Average: 75 : 24 *Theperformanceisweightedbythenumberofimagesforeachmaterial(similartoasperformedforpublicly-availableLivDetDatasets). 
3.3.2.1OpticalProperties UltraViolet-Visible(UV-Vis)spectroscopy :TheUV-Visresponserepresentstheabsorptionof monochromaticradiationsbythematerialatdifferentwavelengths(ultraviolet(200-400nm)to visiblespectrum(400-750nm)).ApeakintheUV-Visresponseindicatesthatthematerialhashigh absorbanceofthelightatthatgivenwavelength[130].APerkinElmarLambda900UV/Vis/NIR spectrometer 7 wasusedtomeasurethelightabsorbancepropertyofmaterialsshowninFigure3.1. FourierTransformInfrared(FT/IR)Spectroscopy :TheFT/IRresponseofagivenmaterialis asignatureofitsmolecularstructure.Themoleculesabsorbfrequenciesthatarecharacteristicof theirstructure,calledresonantfrequencies,i.e.,thefrequencyoftheabsorbedradiationmatches withthevibrationalfrequency[148].AnFT/IRsignatureisagraphofinfraredlightabsorbance (ortransmittance)ontheY-axisvs.frequencyontheX-axis(measuredinreciprocalcentimeters, i.e., cm 1 orwavenumbers).Figure3.2presentstheFT/IRresponseof12differentPAmaterials measuredbyJascoFT/IR-4600spectrometer 8 .TheFT/IRspectrometerprovidedmaterialresponse 7 http://www.perkinelmer.com/category/uv-vis-spectroscopy-uv 8 https://jascoinc.com/products/spectroscopy/ftir-spectrometers/models/ftir-4000-series/ 90 Figure3.1LightabsorbancepropertyoftwelvePAmaterialsin200nm-800nmwavelengthspec- trumcomputedusingaPerkinElmarLambda900UV/Vis/NIRspectrometer[130]. Figure3.2FourierTransformInfraredSpectroscopy[148]oftwelvePAmaterialsinthe260-375 wavenumberrange. intherange 250 6 ; 000 wavenumbers,butallthematerialsexhibitednon-zerotransmittanceonly intherange 250 375 wavenumbers. 3.3.2.2MechanicalProperties MaterialElasticity :Aspooffabricatedusinganelasticmaterialundergoeshigherdefor- mation,resultinginlargefrictionridgedistortionwhenthespoofispressedagainstthe reader'sglassplaten,comparedtolesselasticmaterials.Weclassifythe12differentPAmaterials intothreeclassesbasedontheirobservedelasticity:(i) Highelasticity :Silicone,MonsterLiquid Latex,DragonSkin,WoodGlue,Gelatin,(ii) Mediumelasticity :PlayDoh,LatexBodyPaint,3D 91 Figure3.3Representationofandpresentationattackinstrumentsfabricated withdifferentmaterialsinthe3Dt-SNEfeaturespace.Theoriginalrepresentationis1024- dimensionalobtainedformthetrainedCNNmodel.(Bestviewedincolor).Availablein3Dat https://plot.ly/ ˘ icbsubmission/0/livepa-feature-space/. UniversalTargets,and(iii) Lowelasticity :2DPaper,GoldFingers,Transparency,andConductive InkonPaper. MoistureContent :Anothercrucialmaterialpropertyistheamountofmoisturecontent,which leadstovaryingdegreesofcontrastinthecorrespondingimage.PAmaterialswithhigh moisturecontent(e.g.Silicone)producehighcontrastimagescomparedtomaterialswithlow moisturecontent(e.g.2DPaper)onCrossMatchreader.Weclassifythe12differentPAmaterials intothreeclassesofmoisturecontentlevelbasedontheobservedimagecontrast:(i) HighMoisture Level :Silicone,PlayDoh,DragonSkin,(ii) MediumMoistureLevel :MonsterLiquidlatex,Wood Glue,GoldFingers,Gelatin,3DUniversalTargets,and(iii) LowMoistureLevel :2DPaper,Latex BodyPaint,Transparency,ConductiveInkonPaper. 92 Figure3.4RepresentationofanddifferentsubsetsofPAmaterialsin3Dt-SNEfeature spacefromdifferentanglesselectedtoprovidethebestview.The(darkgreen)and silicone(navyblue)areincludedinallgraphsforperspective.(Bestviewedincolor) 3.3.33Dt-SNEVisualizationofandPAs ToexploretherelationshipbetweenanddifferentPAmaterials,wetrainasinglemulti- classMobileNet-v1modeltodistinguishbetween 13 classes, i.e. ,and 12 PAmaterials. 
Thetrainingsplitincludesasetof 100 randomlyselectedimagesorhalfthenumberoftotalimages (whicheverislower)fromeachoftheandPAmaterialsforatotalof 1 ; 102 images.In asimilarmanner,atestsplitisconstructedfromtheremainingsetofimagesforatotalof 1 ; 101 images.Thisprotocolisadoptedtoreducethebiasduetounbalancednatureofthetrainingdataset. Weextractthe 1024 -dimensionalfeaturevectorfromthebottlenecklayeroftheMobileNet-v1 network[71]andprojectitto3dimensionsusingthet-SNEapproach[103](seeFigure3.3). Figures3.4(a)-(f)presenttherepresentationofanddifferentsubsetsofPAmaterialsinthe 3Dt-SNEfeaturespacefromdifferentanglesselectedtoprovideacompleteview.The (darkgreen)andSilicone(navyblue)areincludedinallgraphsforperspective.The3Dgraphis generatedusingplotlylibraryandisaccessibleatthelink:https://plot.ly/ ˘ icbsubmission/0/livepa- feature-space/. 93 Figure3.5AveragePearsoncorrelationvaluesbetween12PAmaterialsbasedonthematerial characteristics(twoopticalandtwophysical). 3.3.4RepresentativeSetofPAMaterials Weutilizematerialcharacteristicsand3Dt-SNEvisualizationtoidentifyasetofrepresentative materialstotrainarobustandgeneralizablemodel.Fromthefourmaterialcharacteristics,two continuous( i.e. ,opticalcharacteristics)andtwocategorical( i.e. ,mechanicalcharacteristics),we computefour 12 12 correlationmatrices.Forthetwocontinuousvariables,wecomputethe Pearsoncorrelation 9 betweenallpairsofmaterialstogeneratetwocorrelationmatrices C uvvis and C ftir .Forthetwocategoricalvariables,iftwoPAmaterials m i and m j belongtothesamecategory, weassign C i;j =1 ,else C i;j =0 ,togenerate C elastic and C moisture .Thefourcorrelationmatrices correspondingtoeachofthefourindividualmaterialcharacteristics,areaveragedtogeneratethe nalcorrelationmatrix C material ,suchthat C material i;j =( C uvvis i;j + C ftir i;j + C elastic i;j + C moisture i;j ) = 4 ,(see 9 MATLAB's corr functionisusedtocomputethePearsoncorrelation.https://www.mathworks.com/help/stats/ corr.html 94 Figure3.6Acomplete-linkdendrogramrepresentingthehierarchical(agglomerative)clusteringof PAsbasedonthesharedmaterialcharacteristics. Figure3.5)whichisutilizedtoperformcomplete-linkhierarchical(agglomerative)clustering 10 of the12PAmaterials.Figure3.6showsacomplete-linkdendrogramrepresentingthehierarchical groupingofthe12PAmaterialsbasedon C material .Basedonthe3Dt-SNEvisualizationandthe hierarchicalclusteringofthe12PAmaterials,weobservethat: PAmaterialsSilicone,PlayDoh,Gelatin,and2DPrintedPaperareclosesttoLive- printsinthe3Dt-SNEfeaturespacecomparedtoothermaterials.Thisexplainswhyexclud- inganyoneofthesematerialsinthetrainingsetresultedinpoorgeneralizationperformance whentestedagainstthem.ThesePAmaterialsappearindifferentclustersinthedendrogram (seeFigures3.4(a)and3.6). PAmaterialDragonSkiniseasilydetectedwhenSiliconeisincludedintrainingsetassili- coneislocatedbetweenandDragonSkininthe3Dt-SNEfeaturespace(seeFig- 10 WeutilizeMATLAB's linkage and dendrogram functionswithparametersmethod=`complete'andmet- ric=`correlation'. 95 ures3.4(b)and(d)).Thesematerials,DragonSkinandSilicone,alsolieinthesamecluster indicatingsharedmaterialcharacteristics. PAmaterialTransparencyiseasilydistinguishablewhen2DPrintedPaperisincludedin training.Inthet-SNEvisualization,weobservethat2DPrintedPaperappearsintwodif- ferentclusters,whereoneoftheclustersisco-locatedwithtransparency(seeFigures3.4(a) and(e)). 
PAmaterialsWoodGlueandGelatinareclosetoeachotherin3Dt-SNEfeaturespace, potentiallyassistingeachotherifincludedintraining(seeFigure3.4(c));whereasGelatin isclosertowhichexplainsitsworseperformancecomparedtoWoodGlue.These materialsalsoformasecondlevelclusterinthedendrogram. PAmaterialLatexBodyPaintislocatedbetweenandConductiveInkonPaper, andPAmaterialMonsterLiquidLatexliesbetweenand3DUniversalTargetsin 3Dt-SNEvisualization,whichcouldexplainthehighdetectionforConductiveInkonPaper and3DUniversalTargets(seeFigure3.4(f)).However,thesematerialsdonotformacluster untilthelastagglomerationstep,indicatingpossibilityofothermaterialcharacteristicsthat couldbefurtherexplored. Basedontheseobservations,weinferthatasetof 6 PAmaterials(Silicone,2DPaper,Play Doh,Gelatin,LatexBodyPaint,andMonsterLiquidLatex)almostcoverstheentirefeature spacearound(seeFigure3.4).Amodeltrainedusingandthese 6 PAmate- rialsachievedanaverageTDR= 89 : 76% 6 : 97% @FDR= 0 : 2% whentestedoneachofthe remaining 6 materials.ThisperformanceiscomparabletotheaverageTDR= 90 : 97% 7 : 27% @ FDR= 0 : 2% when 11 PAmaterialsandareusedfortraining,indicatingno contributionprovidedbyincludingallthe 11 PAmaterialsintraining.WepositthatthePADper- formanceagainstanewmaterialcanbeestimatedbyanalyzingitsmaterialcharacteristicsinstead ofcollectinglargedatasetsforeachofthenewmaterials. 96 3.4ImprovingPADGeneralization IthasbeenshownthattheselectionofPAmaterialsusedintraining(knownPAs)directlyim- pactstheperformanceagainstunknownPAs.Intheprevioussection,weanalyzedthematerial characteristicsof 12 differentspoofmaterialstoidentifyarepresentativesetofsixmaterialsthat covermostofthePAfeaturespace.Although,thisapproachcanbeusedtoidentifyifincluding anewPAmaterialintrainingdatasetwouldbeal,itdoesnotimprovethegeneralization performanceagainstmaterialsthatareunknownduringtraining.Moreover,withtheincreasing popularityofauthenticationsystems,hackersareconstantlydevisingnewfabrication techniquesandnovelmaterialstoattackthem.Also,itisprohibitivelyexpensivetoincludeallPA fabricationmaterialsintrainingaPAdetector. Additionally,imagescapturedusingdifferentsensorstypicallyhave uniquecharacteristicsduetodifferentsensingtechnologies,sensornoise,andvaryingresolution. Asaresult,PAdetectorsareknowntosufferfrompoorgeneralizationperformancein thecross-sensorscenario,wherethePADistrainedonimagescapturedusingonesensorandtested onimagesfromanother.Improvingcross-sensorPAdetectionperformanceisimportantinorderto alleviatethetimeandresourcesinvolvedincollectinglarge-scaledatasetswiththeintroductionof newsensors.Next,wepresenttwodifferentapproachestoimprovethegeneralizationperformance ofexistingPADsolutions. 
3.4.1UniversalMaterialGenerator Inthissection,weproposeastyle-transferbasedwrapper,called UniversalMaterialGenerator (UMG),toimprovethecross-materialandcross-sensorgeneralizationperformanceof PAdetectorsagainstPAsmadefrommaterialsnotseenduringtraining[28].Inparticular,forthe cross-materialscenario,wehypothesizethatthetexture(style)informationfromtheknownPA typescanbetransferredfromonetypetoanothertypetosynthesizeimagespotentiallysimilarto novelPAsfabricatedfrommaterials,notseeninthetrainingset.WepositthatthesynthesizedPA 97 Figure3.73Dt-SNEvisualizationoffeatureembeddingslearnedbyFingerprintSpoofBuster[24] of(a)live(darkgreen)andelevenknownPAmaterials(red)(2Dprintedpaper,3Duniversaltar- gets,conductiveinkonpaper,dragonskin,goldlatexbodypaint,monsterliquidlatex, playdoh,silicone,transparency,andwoodglue)usedintraining,andunknownPA,gelatin(yel- low).AlargeoverlapbetweenunknownPA(gelatin)andlivefeatureembeddingsindicatepoor generalizationperformanceofstateoftheartPAdetectors.(b)Syntheticlive(brightgreen)and syntheticPA(orange)imagesgeneratedbytheproposedUniversalMaterialGenerator(UMG) wrapperimprovetheseparationbetweenrealliveandrealPA.3Dt-SNEvisualizationsareavail- ableathttp://tarangchugh.me/posts/umg/index.html(Bestviewedincolor) imagesmayoccupythespacebetweentheimagesfromknownPAmaterialsinthedeepfeature space.SyntheticliveimagesarealsoaddedtothetrainingdatasettoforcetheCNNto learngenerative-noiseinvariantfeatureswhichdiscriminatebetweenlivesandPAs.Inthecross- sensorscenario,weutilizeasmallsetofonly 100 imagesfromthetarget sensor,sayGreenBit,andtransferitssensorstylecharacteristicstolarge-scaleliveand PAdatasetsavailablefromasourcesensor,sayDigitalPersona.Reusinglarge-scalePAdatasets fromexistingsensorswillreducethesteepcostassociatedwithcollectinglarge-scaleand spoofdatabasesforeverynewsensor. TheproposedUMGframeworkisusedtoaugmentCNN-basedPAdetectors,signi improvingtheirperformanceagainstnovelmaterials,whileretainingtheirperformanceonknown materials.SeeFigure3.10forexamplesofsomeofthestyletransferredimages. 98 3.4.1.1RelatedWork Realisticimagesynthesisisachallengingproblem.Earlynon-parametricmethodsfaceddif ingeneratingimageswithtexturesthatarenotknownduringtraining[18].Machinelearninghas beenveryeffectiveinthisregard,bothintermsofrealismandgenerality.Gatysetal.[53]perform artisticstyletransfer,combiningthecontentofanimagewiththestyleofanyotherbyminimizing thefeaturereconstructionlossandastylereconstructionlosswhicharebasedonfeaturesextracted fromapre-trainedCNN.Whilethisapproachgeneratesrealisticlookingimages,itiscomputation- allyexpensivesinceeachstepoftheoptimizationrequiresaforwardandbackwardpassthrough thepre-trainednetwork.Otherstudies[88,95,160]haveexploredtrainingafeed-forwardnetwork toapproximatesolutionstothisoptimizationproblem.Thereareothermethodsbasedonfeature statisticstoperformstyletransfer[73,158].Elgammaletal.[39]appliedGANstogenerateartistic images.Isolaetal.[76]usedconditionaladversarialnetworkstolearnthelossforimage-to-image translation.Xianetal.[166]learnedtosynthesizeobjectsconsistentwithtexturesuggestions.The proposedUniversalMaterialGeneratorbuildson[73]andiscapableofproducingrealistic gerprintimagescontainingstyle(texture)informationfromimagesoftwodifferentPAmaterials. Existingstyletransfermethodsconditionthesourceimagewithtargetmaterialstyle.However, inthecontextofsynthesis,thisresultsinalossinridge-valleyinformation ( i.e. ,content).Inordertopreservebothstyleandcontent,weuseadversarialsupervisiontoensure thatthesynthesizedimagesappearsimilartotherealimages. 
3.4.1.2 Proposed Approach

This approach includes three stages: (i) training the Universal Material Generator (UMG) wrapper using the PA images of known materials (with one material left out from training), (ii) generating synthetic PA images using randomly selected image pairs of different but known materials, and (iii) training a PA detector on the augmented dataset to evaluate its performance on the "unknown" material left out from training. In all our experiments, we utilize local image patches (96 x 96) centered and aligned using minutiae location and orientation, respectively [24]. During the evaluation stage, the PA detection decision is made based on the average of the spoofness scores output by the CNN model for the individual patches. An overview of the proposed approach is presented in Fig. 3.8.

Figure 3.8 Proposed approach for (a) synthesizing PA and live fingerprint patches, and (b) design of the proposed Universal Material Generator (UMG) wrapper. An AdaIN module is used for performing the style transfer in the encoded feature space. The same VGG-19 [147] encoder is used for computing the content loss and the style loss. A discriminator similar to the one used in DCGAN [133] is used for computing the adversarial loss. The synthesized patches can be used to train any PA detector; hence, our approach is referred to as a wrapper which can be used in conjunction with any fingerprint PA detector.

Figure 3.9 Style transfer between real PA patches fabricated with latex body paint and silicone to generate synthetic PA patches using the proposed Universal Material Generator (UMG) wrapper. The extent of style transfer can be controlled by the parameter \alpha \in [0, 1].

The primary goal of the UMG wrapper is to generate synthetic PA images corresponding to unknown PA materials by transferring the style (texture) characteristics between fingerprint images of known PA materials. Gatys et al. [54] were the first to show that deep neural networks (DNNs) can encode not only the content but also the style information of an image. They proposed an optimization-based style-transfer approach for arbitrary images, although it is prohibitively slow. In [158], Ulyanov et al. proposed the use of an InstanceNorm layer to normalize feature statistics across the spatial dimensions. An InstanceNorm layer is designed to perform the following operation:

IN(x) = \gamma \frac{x - \mu(x)}{\sigma(x)} + \beta    (3.4.1)

where x is the input feature map, and \mu(x) and \sigma(x) are the mean and standard deviation, respectively, computed across the spatial dimensions independently for each channel and each sample. It was observed that changing the affine parameters \gamma and \beta (while keeping the convolutional parameters fixed) leads to variations in the style of the image, and that the affine parameters can be learned for each particular style. This motivated an approach for artistic style transfer [38], which learns \gamma and \beta values for each feature map and style pair. However, this required retraining the network for each new style.
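To make the normalization operations concrete, the following is a minimal NumPy sketch of the InstanceNorm operation of Eq. (3.4.1) and of its adaptive variant (AdaIN, Eq. (3.4.2) below) that the UMG wrapper builds on. The scalar \gamma and \beta used here are a simplification (in practice they are learned per channel), and the array shapes are purely illustrative.

```python
import numpy as np

def instance_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Eq. (3.4.1): normalize each sample and channel using statistics
    computed over the spatial dimensions only.
    x has shape (batch, height, width, channels)."""
    mu = x.mean(axis=(1, 2), keepdims=True)
    sigma = x.std(axis=(1, 2), keepdims=True)
    return gamma * (x - mu) / (sigma + eps) + beta

def adain(x, y, eps=1e-5):
    """Eq. (3.4.2): align the channel-wise mean and standard deviation of
    the source (content) feature map x to those of the target (style)
    feature map y."""
    mu_x = x.mean(axis=(1, 2), keepdims=True)
    sigma_x = x.std(axis=(1, 2), keepdims=True)
    mu_y = y.mean(axis=(1, 2), keepdims=True)
    sigma_y = y.std(axis=(1, 2), keepdims=True)
    return sigma_y * (x - mu_x) / (sigma_x + eps) + mu_y

# Toy usage on random feature maps standing in for VGG-19 encodings.
f_c = np.random.rand(1, 24, 24, 512).astype(np.float32)   # source (content)
f_s = np.random.rand(1, 24, 24, 512).astype(np.float32)   # target (style)
t = adain(f_c, f_s)
```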
Huang and Belongie [73] replaced the InstanceNorm layer with an Adaptive Instance Normalization (AdaIN) layer, which directly computes the affine parameters from the style image instead of learning them, effectively transferring style by imparting second-order statistics from the target style image to the source content image through the affine parameters. We follow the same approach as described in [73] in the UMG wrapper for fusing the feature statistics of one known (source) PA material image (c), which provides the friction ridge (content) information and the source style, with those of another known, but different, (target style) PA material image (s) in the feature space. As described in AdaIN, we apply instance normalization on the source image's feature map, however, not with learnable affine parameters. The channel-wise mean and variance of the source image's feature map are aligned to match those of the target image's feature map. This is done by computing the affine parameters from the target material's PA feature map in the following manner:

AdaIN(x, y) = \sigma(y) \frac{x - \mu(x)}{\sigma(x)} + \mu(y)    (3.4.2)

where x is the source (c) feature map and y is the target (s) feature map. In this manner, x is normalized, scaled by \sigma(y), and shifted by \mu(y). Our synthetic PA generator G is composed of an encoder f(\cdot) and a decoder g(\cdot). For the encoder f(\cdot), we use the first few layers of a pre-trained VGG-19 network, similar to [88]. The weights of this network are frozen throughout all stages of the setup. For the source image (c) and the target image (s), x is f(c) and y is f(s). The desired feature map is obtained as:

t = AdaIN(f(c), f(s))    (3.4.3)

We use the decoder g(\cdot) to take t as input and produce T(c, s) = g(t), which is the synthesized image conditioned on the style of the target image. In order to ensure that our synthesized PA patches (i.e., g(t)) match the style statistics of the target material PA, we apply a style loss L_s, similar to [88, 98], given as:

L_s = \sum_{i=1}^{L} \| \mu(\phi_i(g(t))) - \mu(\phi_i(s)) \|_2 + \sum_{i=1}^{L} \| \sigma(\phi_i(g(t))) - \sigma(\phi_i(s)) \|_2    (3.4.4)

where each \phi_i denotes a layer in the encoder network (VGG-19). We pass g(t) and s through f(\cdot) and extract the outputs of the relu1_1, relu2_1, relu3_1, and relu4_1 layers for computing L_s. The extent of style transfer can be controlled by interpolating between the feature maps using:

T(c, s, \alpha) = g((1 - \alpha) f(c) + \alpha t)    (3.4.5)

where setting \alpha = 0 reconstructs the original content image and \alpha = 1 constructs the most stylized image. To combine the two known styles, we preserve the style of the source PA material while conditioning it with the target PA material by setting the value of \alpha to 0.5.

To ensure that the synthesized fingerprint images retain the friction ridge content of the real fingerprint image, we use a content loss L_c, computed as the Euclidean distance between the features of the synthesized image, i.e., f(g(t)), and the target features (t) obtained from the real image:

L_c = \| f(g(t)) - t \|_2    (3.4.6)

When performing the style transfer, simply using a content loss (L_c) to ensure that the content is retained is not enough to ensure that the synthesized images look like real fingerprint images. Fingerprints have many fine structural details due to the presence of landmarks, e.g., minutiae, ridges, and pores. With the aim of synthesizing fingerprints that look indistinguishable from real fingerprints, we use adversarial supervision. A typical generative adversarial network (GAN) setup consists of a generator G and a discriminator D playing a minimax game, where D tries to distinguish between synthesized and real images, and G tries to fool D by generating realistic-looking images.

Figure 3.10 Synthesized PA patches (96 x 96) produced by the proposed Universal Material Generator using patches of a known (source) material (first column) conditioned on the style (\alpha = 0.5) of another known (target) material (first row).
The adversarial objective functions for the generator (L_G^{adv}) and the discriminator (L_D^{adv}) are given as follows (here, x is an image sampled from the distribution of real fingerprints and t is the feature output by the AdaIN module):

L_G^{adv} = \mathbb{E}_t[\log(1 - D(G(t)))]    (3.4.7)

L_D^{adv} = \mathbb{E}_x[\log D(x)] + \mathbb{E}_t[\log(1 - D(G(t)))]    (3.4.8)

In our approach, we use a discriminator as used in [133], and the generator is the decoder function g(\cdot). We optimize the UMG wrapper in an end-to-end manner with the following objective functions:

\min_G L_G = \lambda_c L_c + \lambda_s L_s + L_G^{adv}    (3.4.9)

\max_D L_D = L_D^{adv}    (3.4.10)

where \lambda_c and \lambda_s are the weight parameters for the content loss (L_c) and the style loss (L_s), respectively. Algorithm 1 summarizes the steps involved in training a UMG wrapper.

Algorithm 1 Training the UMG wrapper
1: procedure
2: input
3: x: source image providing friction ridge content and known style A
4: y: target image providing known style B
5: f(\cdot): encoder network; first four layers of a VGG-19 network pre-trained on ImageNet, with weights frozen during training
6: g(\cdot): decoder network; mirrors f(\cdot) with pooling layers replaced by nearest-neighbor up-sampling layers
7: D(\cdot): discriminator function similar to [133]
8: A(x, y): AdaIN operation; transfers the style of y onto x (using Eq. 3.4.2)
9: \alpha = 0.5
10: \lambda_c = 0.001, \lambda_s = 0.002
11: output
12: UMG(\cdot): UMG wrapper trained on known materials
13: begin:
14: Encoding: f_x = f(x) and f_y = f(y)
15: Style transfer: t = A(f_x, f_y)
16: Stylized image: T(c, s, \alpha) = g((1 - \alpha) f_x + \alpha t)
17: Style loss: L_s using Eq. 3.4.4
18: Content loss: L_c using Eq. 3.4.6
19: Adversarial loss (generator): L_G^{adv} using Eq. 3.4.7
20: Adversarial loss (discriminator): L_D^{adv} using Eq. 3.4.8
21: Objective functions for training the UMG wrapper:
22: \min_G L_G = \lambda_c L_c + \lambda_s L_s + L_G^{adv}
23: \max_D L_D = L_D^{adv}
24: end

3.4.1.3 UMG Wrapper for PAD Generalization

Given a PA dataset of real images, S_real^m, fabricated using a set of m PA materials, we adopt a leave-one-out protocol to split the dataset such that the PA images fabricated using m - 1 materials are considered as "known" and used for training, and the images fabricated using the left-out m-th material are considered as "unknown" and used for computing the generalization performance. The images of the known materials (k = m - 1) are used to train the UMG wrapper (UMG_spoof) described in Section 3.4.1.2.

After we train UMG_spoof, we utilize a total of N_synth randomly selected pairs of images {I_{m_a}^i, I_{m_b}^i}, s.t. i \in {1, ..., N_synth}, from known but different materials m_a, m_b \in {m_1, ..., m_k}, a \neq b, to generate a dataset of synthesized PA images S_synth^k. For each synthesized image, the friction ridge (content) information and the source material (style) characteristics are provided by the first image, I_{m_a}, and the target material (style) characteristics are provided by the second image, I_{m_b}. See Figures 3.9 and 3.10. The real PA dataset is augmented with the synthesized PA data to create a final dataset that is used for training the fingerprint PA detector. Additionally, we augment the real live dataset with a total of N_synth synthesized live images using another UMG wrapper (UMG_live) trained only on live images. Adding the synthesized live data balances the data distribution and forces the PA detector to learn generative-noise-invariant features to distinguish between lives and PAs. Figure 3.11 presents examples of the synthesized live images.

Figure 3.11 Synthetic live fingerprint images generated by the proposed Universal Material Generator: (a) source style images, (c) target style images, and (b) synthesized live images.

The proposed Universal Material Generator approach acts like a wrapper on top of any existing PA detector to make it more robust to PAs not seen during training. In this study, we utilize two state-of-the-art spoof detectors, namely, Fingerprint Spoof Buster [24] and Slim-ResCNN [172]. Fingerprint Spoof Buster utilizes local patches (96 x 96) centered and aligned around
minutiaetotrainMobileNet-v1[71]architectureandachievedstate-of-the-artperformanceonpub- liclyavailableLivDetdatabases[168]andexceededtheIARPAOdinProject[123]requirementof TrueDetectionRate(TDR)of97.0%@FalseDetectionRate(FDR)=0.2%.Slim-ResCNNuti- lizescenterofgravity-basedlocalpatchestotrainacustomCNNarchitecturecontainingresidual blocksinspiredfromResNetarchitecture[64],andachievedthebestperformanceintheLivDet 2017competition[114]. 3.4.1.4ExperimentsandResults MinutiaeDetectionandPatchExtraction TheproposedUMGwrapperistrainedonlocalpatchesofsize 96 96 centeredandaligned usingminutiaepoints.Weextractminutiaeusingthealgorithmproposedin[17].Fora givenimage I with k detectedminutiaepoints, M = f m 1 ;m 2 ;:::;m k g ,where m i = 107 f x i ;y i ; i g , i.e. ,theminutiae m i isdintermsofspatialcoordinates( x i , y i )andorientation ( i ),acorrespondingsetof k localpatches L = f l 1 ;l 2 ;:::;l k g ,eachofsize [96 96] ,centered andalignedusingminutiaelocation( x i ;y i )andorientation( i ),areextractedasproposedin[24]. ImplementationDetails TheencoderoftheUMGwrapperisthefourconvolutionallayers( conv 1 1 , conv 2 1 , conv 3 1 , and conv 4 1 )ofaVGG-19network[147]asdiscussedinsection3.4.1.2.Weuseencoderweights pre-trainedonImageNet[140]databasewhicharefrozenduringtrainingoftheUMGwrapper. Thedecodermirrorstheencoderwithpoolinglayersreplacedwithnearestup-samplinglayers, andwithoutuseofanynormalizationlayersassuggestedin[73].Bothencoderanddecoderutilize paddingtoavoidborderartifacts.Thediscriminatorforcomputingtheadversarialloss issimilartotheoneusedin[133].Theweightsforstylelossandcontentlossaresetto s =0 : 002 and c =0 : 001 .WeusetheAdamoptimizer[90]withabatchsizeof 8 andalearningrateof 1 e 4 forbothgenerator(decoder)anddiscriminatorobjectivefunctions.Theinputlocalpatches areresizedfrom 96 96 to 256 256 asrequiredbythepre-trainedencoderbasedonVGG-19 network.AllexperimentsareperformedintheTensorFlowframework. Theproposedapproachisshowntoimprovethegeneralizationperformanceoftwostate-of-the- artspoofdetectors,namely,FingerprintSpoofBusterandSlim-ResCNN.WetrainaMobileNet- V1[71]fromscratchusingtheaugmenteddatasetforFingerprintSpoofBuster[24]. InthecaseofSlim-ResCNN,acustomarchitecture,consistingaseriesofoptimizedresidual blocks[64]isimplemented 12 asdescribedin[172]. ExperimentalProtocol ThePAgeneralizationperformanceagainstunknownmaterialsisevaluatedbyadopting aleave-one-outprotocol[26].InthecaseofMSUFPADv2.0dataset,oneoutofthetwelveknown PAmaterialsisleft-outandtheremainingelevenmaterialsareusedtotraintheproposedUMG 12 WewereunabletoobtainthesourcecodefortheSlim-ResCNNapproachfromtheauthors. 108 Figure3.12ExampleimagesfromLivDet2017databasecapturedusingthreedifferent readers,namelyDigitalPersona,GreenBit,andOrcanthus.Theuniquecharacteristics offromOrcanthusreaderexplaintheperformancedropincross-sensorscenariowhen Orcanthusisusedaseitherthesourceorthetargetsensor. wrapper.TherealPAdata(ofelevenknownmaterials)isaugmentedwiththesynthesizedPA datageneratedusingthetrainedUMGwrapper,whichisthenusedtotrainthePA detector, i.e. ,FingerprintSpoofBuster[24].ThisrequirestrainingatotaloftwelvedifferentUMG wrappersandPAdetectionmodelseachtimeleavingoutoneofthetwelvedifferentPAmaterials. 
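Returning to the minutiae-based patch extraction described under "Minutiae Detection and Patch Extraction" above, the following is a minimal sketch of extracting local patches centered at each minutia and aligned to its orientation. The minutiae detector of [17] is assumed to be available separately (patches whose window falls outside the image are simply skipped in this sketch).

```python
import cv2
import numpy as np

def extract_minutiae_patches(img, minutiae, patch_size=96):
    """Extract patch_size x patch_size local patches centered at each minutia
    (x, y) and aligned to its orientation theta (in degrees).
    `minutiae` is a list of (x, y, theta) tuples from a separate detector."""
    half = patch_size // 2
    patches = []
    for (x, y, theta) in minutiae:
        # Rotate the image about the minutia so the crop is orientation-aligned.
        rot = cv2.getRotationMatrix2D((float(x), float(y)), theta, 1.0)
        rotated = cv2.warpAffine(img, rot, (img.shape[1], img.shape[0]))
        y0, y1 = int(y) - half, int(y) + half
        x0, x1 = int(x) - half, int(x) + half
        if y0 >= 0 and x0 >= 0 and y1 <= rotated.shape[0] and x1 <= rotated.shape[1]:
            patches.append(rotated[y0:y1, x0:x1])
    return patches
```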
The 5 ; 743 liveimagesinMSUFPADv2.0arepartitionedintotrainingandtestingsuchthatthere are 1 ; 000 randomlyselectedliveimagesintestingsetandtheremaining 4 ; 743 imagesintraining suchthatthereisnosubjectoverlapbetweentrainingandtestingdatasplits.Thereallivedatais alsoaugmentedwithsynthesizedlivedatageneratedusinganotherUMGwrappertrainedonreal livedata. InthecaseofLivDet2017dataset,thePAmaterialsavailableinthetestset(Gelatin,Latex,and Liquidx)aredeemedasfiunknownflmaterialsbecausethesearedifferentfromthematerials includedinthetrainingset(WoodGlue,x,andBodyDouble).Toevaluatethegeneralization performance,weevaluatetheperformanceofFingerprintSpoofBusterwithandwithoutusingthe UMGwrapperandcomparewiththestate-of-the-artpublishedresults.AstheLivDet2017dataset containsimagesfromthreedifferentreaders,wetraintwoUMGwrapperspersensor, oneforeachoftheliveandthePAtrainingdatasets. 109 Table3.5Generalizationperformance(TDR(%)@FDR= 0 : 2% )ofstate-of-the-artspoofdetec- tors, i.e. ,Slim-ResCNN[172]andFingerprintSpoofBuster(FSB)[24],withleave-one-outmethod onMSU-FPADv2dataset.Atotaloftwelveexperimentsareperformedwherethematerialleft-out fromtrainingistakenasthefiunknownflmaterialforevaluation. UnknownSpoofMaterial #Images #LocalPatches GeneralizationPerformance(TDR(%)@FDR=0.2%) BaseCNN BaseCNN+UMGwrapper Slim- ResCNN[172] FingerprintSpoof Buster(FSB)[26] Slim-ResCNN +UMG FSB+ UMG Silicone 1 ; 160 38 ; 145 64 : 74 67 : 59 96.55 98.62 MonsterLiquidLatex 882 27 ; 458 90 : 25 94 : 78 95.35 96.26 PlayDoh 715 17 ; 602 58 : 18 58 : 46 71.05 72.31 2DPrintedPaper 481 7 ; 381 53 : 22 55 : 30 79.42 80.25 WoodGlue 397 12 ; 681 84 : 89 86 : 40 97.98 98.99 GoldFingers 295 9 ; 402 85 : 08 88 : 14 88.14 88.81 Gelatin 294 10 ; 508 55 : 78 55 : 10 98.30 97.96 DragonSkin 285 7 ; 700 96 : 14 97 : 54 99.30 100.00 LatexBodyPaint 176 6 ; 366 78 : 98 76 : 70 90.34 89.20 Transparency 137 3 ; 846 91 : 24 95 : 62 97.08 100.00 ConductiveInkonPaper 50 2 ; 205 88 : 00 90 : 00 96.00 100.00 3DUniversalTargets 40 1 ; 085 92 : 50 95 : 00 100.00 100.00 TotalSpoofs 4,912 144,379 Weightedmean*( weighteds.d.) TotalLives 5,743 228,143 73.09 15.66 75.24 16.60 90.63 10.19 91.78 10.29 *Thegeneralizationperformanceforeachspoofmaterialisweightedbythenumberofimagesto producetheweightedmeanandstandarddeviation. Cross-MaterialFingerprintPAGeneralization Table3.5presentsthegeneralizationperformanceoftheproposedapproachontheMSUFPAD v2.0dataset.Themeangeneralizationperformanceofthespoofdetectoragainstunknownspoof materialsimprovesfromTDRof 75 : 24% (73.09%)toTDRof 91 : 78% (90.63%)@FDR= 0 : 2% forFingerprintSpoofBuster(Slim-ResCNN),resultinginapproximately 67% decreaseintheerror rate,whenthespoofdetectoristrainedinconjunctionwiththeproposedUMGwrapper.Table3.6 presentsaperformancecomparisonoftheproposedapproachandthestate-of-the-artapproach whentestedonthepubliclyavailableLivDet2017dataset.TheproposedUMGwrapperim- provesthestate-of-the-artaveragecross-materialspoofdetectionperformancefromTDR= 73 : 32% ( 72 : 62% )to 80 : 74% ( 78 : 27% )@FDR= 1 : 0% forFingerprintSpoofBuster(Slim-ResCNN),re- spectively. 110 Table3.6Performancecomparisonbetweentheproposedapproachandstate-of-the-artCNN-only results[24,172]onLivDet2017datasetforcross-materialexperimentsintermsofAverageClas- Accuracy(ACA)andTDR@FDR=1.0%. 
All entries are Average Classification Accuracy, with TDR @ FDR = 1.0% in parentheses; the first two columns are the base CNNs and the last two are the base CNNs trained with the UMG wrapper.

| LivDet 2017 | Slim-ResCNN* [172] | FSB [26] | Slim-ResCNN + UMG | FSB + UMG |
| --- | --- | --- | --- | --- |
| GreenBit | 95.20 (90.22) | 96.68 (91.07) | 96.90 (91.95) | 97.42 (92.29) |
| Orcanthus | 93.93 (65.82) | 94.51 (66.59) | 94.45 (71.91) | 95.01 (74.45) |
| Digital Persona | 92.89 (61.81) | 95.12 (62.29) | 94.75 (70.96) | 95.20 (75.47) |
| Mean ± s.d. | 94.01 ± 1.16 (72.62 ± 15.38) | 95.44 ± 1.12 (73.32 ± 15.52) | 95.37 ± 1.34 (78.27 ± 11.85) | 95.88 ± 1.34 (80.74 ± 10.02) |

*We were unable to obtain the source code for the Slim-ResCNN approach from its authors. Best efforts were made to implement the approach based on the details provided in their manuscript [172]. In LivDet 2017 [114], Slim-ResCNN achieved an average accuracy of 95.25%, compared to 94.01% achieved by our implementation.

Table 3.7 Cross-sensor fingerprint spoof generalization performance on the LivDet 2017 dataset. All entries are Average Accuracy, with TDR @ FDR = 1.0% in parentheses.

| Training (Testing) Sensors | Slim-ResCNN [172] | FSB [26] | Slim-ResCNN + UMG | FSB + UMG |
| --- | --- | --- | --- | --- |
| GreenBit (Orcanthus) | 43.98 (0.00) | 49.43 (0.00) | 65.40 (20.60) | 66.05 (21.52) |
| GreenBit (Digital Persona) | 80.39 (48.28) | 89.37 (57.48) | 92.07 (69.55) | 94.81 (72.91) |
| Orcanthus (GreenBit) | 68.82 (8.02) | 69.93 (8.02) | 74.38 (29.90) | 81.75 (30.91) |
| Orcanthus (Digital Persona) | 62.30 (6.70) | 57.99 (4.97) | 72.33 (25.24) | 76.36 (28.46) |
| Digital Persona (GreenBit) | 87.90 (54.24) | 89.54 (57.06) | 95.28 (84.38) | 96.35 (85.21) |
| Digital Persona (Orcanthus) | 44.30 (0.00) | 49.32 (0.00) | 66.10 (18.25) | 68.44 (20.38) |
| Mean ± s.d. | 64.62 ± 18.18 (19.54 ± 24.86) | 67.60 ± 18.53 (21.26 ± 28.06) | 77.59 ± 12.97 (41.32 ± 28.29) | 80.63 ± 12.88 (43.23 ± 28.31) |

Cross-Sensor Fingerprint PA Generalization

To improve the cross-sensor performance, we employ the proposed UMG wrapper to synthetically generate large-scale live and PA datasets to train a PA detector for the target sensor. Given a real fingerprint database, D_A^{real}, collected on a source sensor F_A and containing a real live dataset L_A^{real} and a real PA dataset S_A^{real}, s.t. D_A^{real} = {L_A^{real} \cup S_A^{real}}, the proposed UMG wrapper is used to generate 50,000 synthetic live patches, L_B^{synth}, and 50,000 synthetic PA patches, S_B^{synth}, for a target sensor F_B. The UMG wrapper is trained only on the live fingerprint images collected with the target sensor F_B, and is then used for style transfer on L_A^{real} and S_A^{real} to generate L_B^{synth} and S_B^{synth}, respectively. We evaluate the cross-sensor generalization performance using the LivDet 2017 dataset, where the UMG wrapper trained on a source sensor, say GreenBit, is used to generate synthetic data for a target sensor, say Orcanthus, using only a small set of 100 live fingerprint images from the target sensor (see Footnote 13). The fingerprint PA detector is trained from scratch only on the synthetic dataset created for the target sensor using the UMG wrapper, and is tested on the real fingerprint test set of the target sensor. Table 3.7 presents the cross-sensor PA generalization performance of the fingerprint PA detector in terms of average accuracy and TDR (%) @ FDR = 1.0%. We note that the proposed UMG wrapper improves the average cross-sensor PA detection performance from 67.60% to 80.63%. Figure 3.12 presents example fingerprint images captured using the three sensors in LivDet 2017. The unique characteristics of fingerprints from the Orcanthus reader explain the performance drop in the cross-sensor scenario when it is used as either the source or the target sensor.

Figure 3.13 UMG wrapper used to transfer the style of (b) a real live patch from the Orcanthus reader onto (a) a real live patch from Digital Persona, to generate (c) a synthesized patch.
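The cross-material and cross-sensor results above are reported as TDR at a fixed FDR (0.2% or 1.0%). For reference, the following is a minimal sketch of one common way to compute this operating point from per-image spoofness scores; the exact thresholding protocol used for the reported numbers may differ slightly, and the random scores below merely stand in for detector outputs.

```python
import numpy as np

def tdr_at_fdr(live_scores, pa_scores, fdr=0.002):
    """Pick the score threshold so that at most `fdr` of the live (bonafide)
    samples are wrongly flagged as PA, then report the fraction of PA samples
    detected at that threshold. Scores are spoofness values in [0, 1];
    higher means more PA-like."""
    live_scores = np.asarray(live_scores, dtype=np.float64)
    pa_scores = np.asarray(pa_scores, dtype=np.float64)
    threshold = np.quantile(live_scores, 1.0 - fdr)
    return float(np.mean(pa_scores >= threshold))

# Toy usage.
rng = np.random.default_rng(0)
print(tdr_at_fdr(rng.beta(2, 8, 5000), rng.beta(8, 2, 5000), fdr=0.002))
```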
3.4.1.5ComputationalRequirements OfTrainingstage :TheproposedapproachincludesanofstageoftrainingtheUMG wrapperandsynthesisofstyle-transferredpatches.Ittakesaround 2 hourstotrain, andaround 1 hourtogenerate 100 ; 000 patchesonaNvidiaGTX1080TiGPU.The synthesizedpatchesareusedtoaugmentthetrainingdatausedtotraintheunderlying spoofdetector. OnlineTestingstage :Thereisnoincreaseinthespoofdetectiontimeoftheunderlyingspoof detectorwiththeadditionoftheUMGwrapper.Thespoofdetectiontimeremainsaround 100 ms forbothFingerprintSpoofBusterandSlim-ResCNN. 13 Anaverageof ˘ 3100 localpatchesareextractedfrom 100 liveimagesinLivDet2017experiments. 112 Figure3.14FingerprintpatchesfabricatedwithrealPAs(a)silicone,(b)latexbodypaint,(c)their mixture(in1:1ratio),and(d)synthesizedusingUMGwrapperwithstyletransferbetweensilicone andlatexbodypaint. 3.4.1.6FabricatingUnknownPAs Toexploretheroleofcross-materialstyletransferinimprovinggeneralizationperformance,we fabricatephysicalPAspecimensusingtwoPAmaterials,namelysiliconeandlatexbodypaint,and theirmixtureina1:1ratiobyvolume 14 .Wefabricateatotalof 24 physicalspecimens,including 8 specimensforeachofthetwomaterials,and 8 specimensusingtheirmixture.Atotalof 72 PA 3 impressions/specimen,arecapturedusingaCrossMatchGuardian200 reader.FingerprintSpoofBuster,trainedontwelveknownPAmaterialsincludingsiliconeand latexbodypaint,achievesTDRof 100% @FDR=0.2%onthetwoknownPAmaterials,and TDRof 83 : 33% @FDR=0.2%againstthemixture.Weutilizethetestingdatasetof 1 ; 000 live imagesfromMSUFPADv2.0fortheseexperiments. WeutilizetheproposedUMGwrappertogenerateadatasetof 5 ; 000 synthesizedPApatches 15 usingcross-materialstyletransferbetweenPAofsiliconeandlatexbodypaint.Fin- gerprintSpoofBuster,usingthesynthesizeddataset,improvestheTDRfrom 83 : 33% to 95 : 83% @FDR= 0 : 2% whentestedonthesiliconeandlatexbodypaintmixture,highlighting theroleofthestyle-transferredsynthesizeddatainimprovinggeneralizationperformance.Fig- 14 NotallPAmaterialscanbephysicallycombinedandmayresultinmixtureswithpoorphysicalpropertiesfor themtobeusedtofabricateanygoodqualityPAartefacts. 15 Around1,100minutiae-basedlocalpatchesareextractedfrom24imagescorrespondingtoeachmate- rial. 113 Figure3.153Dt-SNEvisualizationoffeatureembeddingsofreallivePA- printsfabricatedusingsilicone,latexbodypaint,andtheirmixture(1:1ratio),andsynthesizedPA usingstyle-transferbetweensiliconeandlatexbodypaintPAThe3D embeddingsareavailableathttp://tarangchugh.me/posts/umg/index.html(Bestviewedincolor) ure3.14presentssamplepatchesofthetwoPAmaterials,siliconeandlatexbodypaint, theirphysicalmixture,andsynthesizedusingstyle-transfer.Figure3.15presentsthe3Dt-SNE visualizationoffeatureembeddingsoflive(green),twomaterials,silicone(blue)and latexbodypaint(brown),theirmixture(purple),andsyntheticallygeneratedimages(orange).Al- thoughthemixtureembeddingsarenotlocatedinbetweentheembeddingsforthetwoknown materials,possiblyduetothelow-dimensionalt-SNErepresentation,theyareclosetotheembed- dingsofthesyntheticallygeneratedPAimages.Thisexplainstheimprovementinperformance 114 Figure3.16AsequenceoftencolorframesarecapturedbyaSilkIDSLK20Rreader inquicksuccession( 8 fps).Theandtenthframesfromalive(a)-(c),andPA(tan pigmentedthirddegree)(d)-(f)areshownhere.UnlikePAs,inthecaseoflive appearanceofsweatnearpores(highlightedinyellowboxes)andchangesinskincolor(pinkish redtopaleyellow)alongtheframescanbeobserved. againstthePAmixtureswhensynthesizedPAsareusedintraining.Therefore,theproposedUMG wrapperisabletogeneratePAimagesthatarepotentiallysimilartotheunknownPAs. 
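Figures 3.7 and 3.15 visualize the learned embeddings with 3D t-SNE. As an illustrative sketch (not the exact settings used to produce those figures), the 1024-dimensional bottleneck features extracted by the trained detector can be projected to three dimensions as follows; the random array here merely stands in for the real feature matrix, and the perplexity value is an assumption.

```python
import numpy as np
from sklearn.manifold import TSNE

# Placeholder for the 1024-d bottleneck features of live and PA patches
# (stacked row-wise) extracted by the trained PA detector.
features = np.random.rand(500, 1024).astype(np.float32)

embedding_3d = TSNE(n_components=3, perplexity=30, init="random",
                    random_state=0).fit_transform(features)
print(embedding_3d.shape)   # (500, 3); ready for a 3D scatter plot (e.g., plotly)
```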
3.4.2TemporalAnalysisforPADGeneralization Inthissection,wepresentadynamicapproachtoimprovethePADgeneralization[27].Wepro- posetoutilizethedynamicsinvolvedintheimagingofaonatouch-based reader,suchasperspiration,changesinskincolor(blanching),andskindistortion,todifferentiate fromPA,weutilizeadeeplearning-basedarchitecture(CNN- LSTM)trainedend-to-endusingsequencesofminutiae-centeredlocalpatchesextractedfromten colorframescapturedonaCOTSreader(SilkIDFastFrameRatesensor). Comparedtothestaticapproachesthatwerediscussedearlier,inthecaseofdynamicap- proaches,publishedstudiesutilizetemporalanalysistocapturethephysiologicalfeatures,such 115 Table3.8Studiesprimarilyfocusedonpresentationattackdetectionusingtemporal analysis. StudyApproachDatabasePerformance Parthasaradhiet al.[128] Temporalanalysisofperspiration patternalongfrictionridges 1 ; 840 livefrom 33 subjects and 1800 PAfrom 2 materials,and 700 cadaver from 14 Avg.Accuracy= 90% Kolbergetal.[91]Bloodwdetectionusinga sequenceof 40 LaserSpeckle ContrastImages 1 ; 635 livefrom 163 subjects and 675 PAimagesof8PA materials(32variants) TDR= 90 : 99% @FDR= 0 : 05% Pleshetal.[131]Fusionofstatic(LBPandCNN)and dynamic(changesincolorratio) featuresusingasequenceof 2 color frames 14 ; 892 liveand 21 ; 700 PA imagesof10materials TDR= 96 : 45% (known-material)@FDR= 0 : 2% ProposedApproach Temporalanalysisofminutiae-based localpatchsequencesfrom 10 color framesusingCNN+LSTMmodel 26 ; 650 livefrom 685 subjectsand 32 ; 910 PA imagesof 7 materials(14 variants) TDR= 99.15% (known-material)andTDR= 86.20% (cross-material)@ FDR= 0 : 2% asperspiration[106,128],bloodw[91,169],skindistortion[2],andcolorchange[131,169]. Table4.1summarizesthedynamicapproachesforPAdetectionreportedinthelitera- ture.Someofthelimitationsofthesestudiesincludelongcapturetime(2-5seconds),expensive hardware,and/orsmallnumberofframesinthesequence.Moreover,itislikelythatsomelive gersmaynotexhibitanyofthesedynamicphenomenonstoseparatethemfromPAs.Forinstance, somedrymaynotexhibitsignsofperspirationduringthepresentationoraPAmay producesimilardistortioncharacteristicsasthatofsomelive Wepositthatautomaticlearning,asopposedtohand-engineering,ofthedynamicfeatures involvedinthepresentationofacanprovidemorerobustandhighlydiscriminatingcuesto distinguishlivefromPAs.Inthissection,weproposetouseaCNN-LSTMarchitecture tolearnthespatio-temporalfeaturesacrossdifferentframesinasequence.Weutilizeasequenceof minutiae-centeredlocalpatchesextractedfromtencoloredframescapturedbyaCOTS reader,SilkIDSLK20R 16 ,at 8 fpstotrainthenetworkinanend-to-endmanner.Theuseof minutiae-basedlocalpatcheshasbeenshowntoachievestate-of-the-artPAdetectionperformance comparedtorandomlyselectedlocalpatchesinstaticimages.Additionally,usingminutiae-based 16 https://www.zkteco.com/en/product detail/SLK20R.html 116 localpatchesprovidesalargeamountoftrainingdata, 71 ; 530 minutiae-basedpatchsequences, comparedto 5 ; 956 whole-framesequences. 
3.4.2.1ProposedApproach Theproposedapproachconsistsof:(a)detectingminutiaefromeachoftheframesandselect- ingtheframewiththehighestnumberofminutiaeasthereferenceframe,(b)preprocessingthe sequenceofframestoconvertthemfromBayerpatterngrayscaleimagestoRGBimages,(c) extractinglocalpatches 17 fromalltenframesbasedonthelocationofdetectedminutiaeinthe referenceframe,and(c)end-to-endtrainingofaCNN-LSTMarchitectureusingthesequencesof minutiae-centeredpatchesextractedfromthetenframes.Whileatime-distributedCNNnetwork (MobileNet-v1)withsharedweightsextractsdeepfeaturesfromthelocalpatches,abidirectional LSTMlayerisutilizedtolearnthetemporalrelationshipbetweenthefeaturesextractedfromthe sequence.AnoverviewoftheproposedapproachispresentedinFigure3.19. MinutiaDetection Whena(orPA)ispresentedtotheSilkIDSLK20Rreader,itcapturesa sequenceoftencolorframes, F = f f 1 ;f 2 ;:::;f 10 g ,at8framespersecond 18 (fps)andaresolution of 1000 ppi.Whilethecompletesensingregion( h w )inaSilkIDreaderis 800 600 pixels,eachofthetencoloredframesarecapturedfromasmallercentralregionof 630 390 pixelstoensurethefastframerateof 8 fps.Thestartingandendingframesinthesequencemay havelittleornofrictionridgedetailsiftheisnotyetcompletelyplacedorquicklyremoved fromthereader.Therefore,weextractminutiaeinformationfromallofthetenframesusingthe algorithmproposedbyCaoetal.[17].Sincetheminutiaedetectorproposedin[17]isoptimized for 500 ppiimages,allframesareresizedbeforeextractingtheminutiae.Theframe 17 Earlier,wereportedthatfor 500 ppiimages,theminutiae-basedpatchesofsize 96 96 pixelsachieve thebestperformancecomparedtootherpatchsizes.SinceSilkIDimageshavearesolutionof 1000 ppi,we selectapatchsizeof 192 192 pixelstoensureasimilaramountoffrictionridgeareaineachpatch,ascontainedin a 96 96 pixelspatchsizefor 500 ppiimages. 18 Ittakesanaverageof1.25secondstocaptureasequenceoftenframes. 117 Figure3.17Examplesof(i)liveand(ii)PAimages.(a)Grayscale 1000 ppiimage,and (c)-(g)theve(colored)framescapturedbySilkIDSLK20RFastFrameRatereader.Live framesexhibitthephenomenonofblanchingoftheskin, i.e. ,displacementofbloodwhenalive ispressedontheglassplatenchangingthecolorfromred/pinktopalewhite.(Best viewedincolor) 118 Figure3.18ABayercolorarrayconsistsofalternatingrowsofred-greenandgreen-blue BilinearinterpolationofeachchannelisutilizedtoconstructtheRGBimage. withthemaximumnumberofdetectedminutiaeisselectedasthereferenceframe( f ref )andthe correspondingminutiaesetasthereferenceminutiaeset( M ref ). 
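As a minimal sketch of the reference-frame selection just described: each frame is downscaled from 1000 ppi to 500 ppi before minutiae detection, the frame with the most minutiae is kept as the reference, and the detected coordinates are doubled to map back to the original resolution. The minutiae extractor of Cao et al. [17] is assumed to be available as a callable; `detect_minutiae` below is only a placeholder for it.

```python
import cv2

def select_reference_frame(frames, detect_minutiae):
    """Return the index of the frame with the most minutiae and its minutiae
    set mapped back to 1000 ppi coordinates. `frames` is the list of ten
    captured frames; `detect_minutiae` stands in for the extractor of [17]."""
    best_idx, best = 0, []
    for i, frame in enumerate(frames):
        small = cv2.resize(frame, None, fx=0.5, fy=0.5)   # 1000 ppi -> 500 ppi
        minutiae = detect_minutiae(small)                  # list of (x, y, theta)
        if len(minutiae) > len(best):
            best_idx, best = i, minutiae
    m_ref = [(2 * x, 2 * y, theta) for (x, y, theta) in best]
    return best_idx, m_ref
```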
Pre-processing Adigitalsensor,containingalargearrayofphoto-sensitivesites(pixels),istypicallyusedincon- junctionwithacolorarraytopermitonlyparticularcolorsoflightateachpixel.TheSilkID readeremploysoneofthemostcommonarrays,knownas Bayererarray ,con- sistingofalternatingrowsofred-green(RG)andgreen-blue(GB) Bayerdemosaicing [97] (debayering)istheprocessofconvertingabayerpatternimagetoanimagewithcompleteRGB colorinformationateachpixel.Itutilizesbilinearinterpolation[153]toestimatethemissingpixels inthethreecolorplanesasshowninFigure3.18.TheoriginalsequenceofgrayscaleBayerpattern frames( 10 630 390 )isconvertedtotheRGBcolorspaceusinganOpenCV[11]function, cv2.cvtColor() ,withtheparameter flag = cv2.COLOR BAYER BG2RGB .Afterdebayering,the frameshavehighpixelintensityvaluesinthegreenchannel(seeFigure3.19)asSilkIDreadersare calibratedwithstronggainsongreenpixelsforgeneratinghighqualityFTIRimages.Weutilize theserawimagesforourexperiments.Forvisualizationpurposes,wereducethegreenchannel 119 Figure3.19AnoverviewoftheproposedapproachutilizingaCNN-LSTMmodeltrainedend-to- endonsequencesofminutiae-centeredlocalpatchesforPAdetection. intensityvaluesbyafactorof 0 : 58 andperformhistogramequalizationonintensityvaluesinthe HSVcolorspace 19 (seeFigures3.16and3.17). LocalPatchExtraction Foreachofthedetectedminutiaefromthereferenceframe, m i 2 M ref ,weextractasequenceof tenlocalpatches, P i = f p f 1 i ;p f 2 i ;:::;p f 10 i g ,ofsize 192 192 ,fromthetenframes ( F ) ,centered 19 Reducinggainingreenchannelandhistogramequalizationachievedsimilarorlowerperformancecomparedto usingrawcolorimages.Therefore,rawimageswereusedforallexperiments. 120 attheminutiaelocation 20 , i.e. , m i = f x i ;y i g .Thisresultsinatotalof k patchsequences,where k isequaltothenumberofdetectedminutiaeinthereferenceframe.Earlier,wereportedthat for 500 ppiimages,theminutiae-basedpatchesofsize 96 96 pixelsachievethebest performancecomparedtopatchsizesof 64 64 pixelsand 128 128 pixels.Therefore,for 1000 ppiimagesinourcase,weselectedthepatchsizeof 192 192 pixelstoensureasimilaramountof frictionridgeareaineachpatch,ascontainedina 96 96 pixelspatchsizefor 500 ppi images.Eachlocalpatchfromthereferenceframeiscenteredaroundtheminutiae.However, thismightnotholdtruefornon-referenceframeswheretheminutiaemayshiftduetonon-linear distortionofhumanskinandnon-rigidPAmaterials.Wehypothesizethattheproposedapproach canutilizethedifferencesinthenon-linearshiftalongthesequencesoflocalpatchesasasalient cuetodistinguishbetweenliveandPAs. 3.4.2.2NetworkArchitecture SeveraldeepConvolutionalNeuralNetwork(CNN)architectures,suchasVGG[147],Inception- v3[150],MobileNet-v1[71]etc.,havebeenshowntoachievestate-of-the-artperformancefor manyvision-basedtasks,includingPAdetection[23,119].Unliketraditionalap- proacheswherespatialarehand-engineered,CNNscanautomaticallylearnsalientfeatures fromthegivenimagedatabases.However,asCNNsarefeed-forwardnetworks,theyarenotwell- suitedtocapturethetemporaldynamicsinvolvedinasequenceofimages.Ontheotherhand,a RecurrentNeuralNetwork(RNN)architecturewithfeedbackconnectionscanprocessasequence ofdatatolearnthetemporalfeatures. 
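For concreteness, below is a minimal Keras sketch of a joint CNN-LSTM classifier of the kind used here, assuming the configuration detailed in Sections 3.4.2.2 and 3.4.2.3 (a time-distributed MobileNet-v1 feature extractor trained from scratch, a 256-unit bidirectional LSTM with dropout 0.25, a 2-unit softmax, and the Adam optimizer with learning rate 0.001). Categorical cross-entropy over one-hot labels is used here as the two-class equivalent of the binary cross-entropy loss; this is a sketch rather than the exact training code.

```python
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.applications import MobileNet

SEQ_LEN, H, W, C = 10, 224, 224, 3   # ten frames; patches resized to 224 x 224 x 3

# Shared MobileNet-v1 backbone applied to every frame via TimeDistributed;
# its 1024-d pooled features feed a bidirectional LSTM and a 2-unit softmax.
backbone = MobileNet(include_top=False, weights=None, pooling="avg",
                     input_shape=(H, W, C))

inputs = layers.Input(shape=(SEQ_LEN, H, W, C))
x = layers.TimeDistributed(backbone)(inputs)
x = layers.Bidirectional(layers.LSTM(256, dropout=0.25))(x)
outputs = layers.Dense(2, activation="softmax")(x)   # live vs. PA

model = models.Model(inputs, outputs)
model.compile(optimizer=optimizers.Adam(learning_rate=0.001),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```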
Withthegoaloflearninghighlydiscriminativeandgeneralizablespatio-temporalfeaturesfor PAdetection,weutilizeajointCNN-RNNarchitecturethatcanextractdeepspatialfea- turesfromeachframe,andlearnthetemporalrelationshipacrossthesequence.Oneofthemost popularRNNarchitecturesisLongShort-TermMemory[70]thatcanlearnlongrangedepen- denciesfromtheinputsequences.Theproposednetworkarchitectureutilizesatime-distributed 20 Minutiaecoordinatesextractedfromtheresized 500 ppiframesaredoubledtocorrespondtominutiaecoordinates intheoriginal 1000 ppiframes. 121 MobileNet-v1CNNarchitecturefollowedbyaBi-directionalLSTMlayer 21 anda2-unitsoftmax layerforthebinaryproblem, i.e. ,livevs.PA.SeeFigure3.19. MobileNet-v1isalow-latencynetworkwithonly 4 : 24 Mtrainableparameterscomparedto othernetworks,suchasInception-v3( 23 : 2 M)andVGG( 138 M),whichachievecomparableper- formanceinlarge-scalevisiontasks[140].Inlowresourcerequirementssuchassmartphones andembeddeddevices,MobileNet-v1iswell-suitedforreal-timePAdetection.Mostimportantly, ithasbeenshowntoachievestate-of-the-artperformanceforPAdetection[24]on publiclyavailabledatasets[57].Ittakesaninputimageofsize 224 224 3 ,andoutputsa 1024-dimensionalfeaturevector(bottlenecklayer).Weresizethelocalpatchesfrom 192 192 to 224 224 asrequiredbytheMobileNet-v1input.Forthepurposesofprocessingasequenceof images,weutilizeaKeras'TimeDistributedwrappertoutilizetheMobileNet-v1architectureasa featureextractorwithsharedparametersacrossdifferentframes(time-steps)inthesequence. 3.4.2.3ImplementationDetails ThenetworkarchitectureisdesignedintheKerasframework 22 andtrainedfromscratchona NvidiaGTX1080TiGPU.WeutilizetheMobileNet-v1architecturewithoutitslastlayerwrapped inaTime-Distributedlayer.TheBi-directionalLSTMlayercontains256unitsandhasadropout rateof 0 : 25 .WeutilizetheAdam[90]optimizerwithalearningrateof 0 : 001 andabinarycross entropylossfunction.Thenetworkistrainedend-to-endwithabatchsizeof 4 .Thenetworkis trainedfor 80 epochswithearly-stopping 23 . 122 Table3.9Performancecomparison(TDR(%)@FDR=0.2%and1.0%)betweentheproposed approachandtwostate-of-the-artmethods[24,172]forknown-materialscenario,wherethespoof materialsusedintestingarealsoknownduringtraining. StudyApproachArchitectureTDR( s.d.)(%) @FDR=0.2% TDR( s.d.)(%) @FDR=1.0% Baseline Static(WholeImage)CNN(MobileNet-v1) 96 : 90 0 : 7897 : 64 0 : 55 Zhangetal.[172] Static(CenterofGravityPatches)CNN(Slim-ResCNN) 98 : 05 0 : 3898 : 44 0 : 30 Chughetal.[24] Static(MinutiaePatches)CNN(MobileNet-v1) 99 : 11 0 : 2499 : 15 0 : 24 Proposed Dynamic(WholeFrames)CNN-LSTM(MobileNet-v1) 98 : 94 0 : 4499 : 04 0 : 43 Dynamic(CenterofGravityPatches)CNN-LSTM(Slim-ResCNN) 99 : 04 0 : 2699 : 30 0 : 28 Dynamic(MinutiaePatches)CNN-LSTM(MobileNet-v1) 99.25 0.2299.45 0.16 Table3.10Performancecomparison(TDR(%)@FDR=0.2%and1.0%)betweentheproposed approachandtwostate-of-the-artmethods[24,172]forthreecross-materialscenarios,wherethe spoofmaterialsusedintestingareunknownduringtraining. BaselineStaticApproaches(CNN) ProposedDynamicApproaches(CNN-LSTM) Unknown Material WholeImage (Grayscale) Slim- ResCNN[172] FingerprintSpoof Buster[24] Sequenceof WholeImages Sequenceof CoGPatches Sequenceof Minutiae-basedPatches TDR@FDR=0.2% ThirdDegree 43 : 8375 : 3279 : 20 80 : 4483 : 2284 : 50 Gelatin 50 : 7476 : 8476 : 52 73 : 8883 : 1082 : 81 77 : 3787 : 3989 : 23 87 : 5590 : 9491 : 28 Mean s.d. 
57.31 17.71 79 : 85 6 : 5781 : 65 6 : 70 80 : 62 6 : 8485 : 75 4 : 49 86.20 4.48 TDR@FDR=1.0% ThirdDegree 60 : 2586 : 1589 : 11 88 : 1094 : 2296 : 20 Gelatin 66 : 4090 : 1089 : 00 89 : 5096 : 3896 : 08 85 : 3193 : 2794 : 90 93 : 2798 : 0098 : 20 Mean s.d. 70 : 65 13 : 0689 : 84 3 : 5791 : 00 3 : 37 90 : 29 2 : 6796 : 20 1 : 90 96.83 1.19 3.4.2.4ExperimentalResults Todemonstratetherobustnessofourproposedapproach,weevaluateitusingtheSilkIDFast FrameRatedataset(Table3.3)undertwodifferentsettings: Known-Material and Cross-Material scenarios. Known-MaterialScenario Inthisscenario,thesamesetofPAmaterialsareincludedinthetrainandtestsets.Toevaluate this,weutilizeve-foldcrossvalidationsplittingtheliveandPAdatasetsfortrainingandtesting 21 Experimentswithuni-directionalLSTMlayerachievedlowerorsimilarperformancecomparedtowhenusing bi-directionallayer. 22 https://keras.io/ 23 Thepatienceparameterissetto20,whichmeansthatifthevalidationaccuracydoesnotimproveformorethan 20epochsthenetworktrainingisautomaticallystopped. 123 withnosubjectoverlap.Ineachofthevefolds,thereare 21 ; 320 liveand 26 ; 400 PAframes intrainingandtherestareintesting.Table3.9presentstheresultsachievedbytheproposed approachonknown-materialscomparedtoastate-of-the-artapproach[24]thatutilizesminutiae- basedlocalpatchesfromstaticgrayscaleimages.Theproposedapproachimprovesthespoof detectionperformancefromTDRof 99 : 11% ( 99 : 15% )to 99 : 25% ( 99 : 45% )@FDR=0.2%( 1 : 0% ). Cross-MaterialScenario Inthisscenario,thePAmaterialsusedinthetestsetwerenotincludedinthetrainingset.We simulatethisscenariobyadoptingaleave-one-outprotocol,whereonematerial(includingallits variants)isremovedfromtraining,andisthenusedforevaluatingthetrainedmodel.Itisamore challengingandpracticalsettingasitevaluatesthecross-materialgeneralizabilityofaPAdetector againstPAmaterialsthatareneverseenduringtraining.Forinstance,inoneofthecross-material experiments,weexcludeThirdDegreesiliconePAmaterial,includingitsallvariants(pigmented, tan,beigepowder,andmedium)fromtraining,andusethemfortesting.Thelivedataisrandomly dividedina80/20split,withnosubjectoverlap,fortrainingandtesting,respectively. Table3.10presentstheperformanceachievedbytheproposedapproach,onthreecross- materialexperiments,comparedtotwostate-of-the-art 24 methods[24,172].Weobservethatuti- lizingsequenceofwholeimagesimprovestheperformanceachievedbystaticwhole images(fromTDR= 57 : 31% ( 70 : 65% )toTDR= 80 : 62% ( 90 : 29% )@FDR= 0 : 2% ( 1 : 0% )).How- ever,itisslightlylowerthattheperformanceachievedbythestaticpatch-basedapproaches, i.e. , TDR= 81 : 65% ( 91 : 00 )@FDR= 0 : 2% ( 1 : 0% ).Thiscouldbeduetothedrawbacksofutilizing wholeimagescomparedtolocalpatches[24]fortrainingadeepneuralnetwork,namely,(i)whole imagesmayhavesomeblankareasurroundingthefrictionridgearea;directlyresizingtheseim- ages,from 630 390 to 224 224 ,resultsinthefrictionridgeareaoccupyinglessthan 20% ofthe originalimagesize,(ii)resizingarectangularimagetoasquareimageleadstodifferentamountsof informationretainedinthetwospatialdimensions,and(iii)downsizinganimagetypicallyleadsto lossofdiscriminatoryinformation.However,thesedrawbacksareaddressedbyusing 24 ThealgorithmbyZhangetal.[172],Slim-ResCNN,wasthewinneroftheLivDet2017competition[114]. 124 asequenceoflocalpatchesintheproposedapproach,whichisshowntoachieveasuperiorcross- materialPAdetectionperformanceofTDRs= 86 : 20% ( 96 : 83% )@FDR= 0 : 2% ( 1 : 0% ).Figure ?? 
presentsthreechallengingcasesinthecaseofcross-materialexperimentwheretheThirdDegree siliconePAmaterialisleftoutfromtraining,andisusedintesting. 3.4.2.5ProcessingTimes Theproposednetworkarchitecturetakesaround 4 6 hourstoconvergewhentrainedwithse- quencesofwholeframes,and 24 30 hourswithsequencesofminutiae-basedlocalpatches,using aNvidiaGTX1080TiGPU.Anaveragenumberof 11 and 13 sequencesofminutiae-basedlocal patchesareextractedfromtheliveandPAframes,respectively.Theaveragetimefor asinglepresentation,including:preprocessing,minutiae-detection,patchextraction,andsequence generationandinference,onaNvidiaGTX1080TiGPU,is 58 msforfullframe-basedsequences, and 393 msforminutiae-basedpatchsequences. 3.5Summary IntroductionofnewPAmaterialsandfabricationtechniquesposesacontinuousthreattothesecu- rityofrecognitionsystemsandrequiresdesignofrobustandgeneralizablePAdetectors. ItisobservedthattheselectionofPAmaterialsusedintraining(knownPAs)directlyimpactsthe performanceagainstunknownPAs,howevertheunderlyingreasonsforthisphenomenaareun- known.Inthisstudy,weinvestigatethePAmaterialcharacteristicsandcorrelatethemwiththe 3Dt-SNEembeddingsofPAmaterialsandtheircross-materialperformances.Thisenablesus toidentifyasubsetofPAmaterials,namelySilicone,2DPaper,PlayDoh,Gelatin,LatexBody Paint,andMonsterLiquidLatexessentialfortrainingarobustPAD.Wepositthatthisapproach canbeutilizedtoestimatethePADperformanceagainstnewmaterialsbyanalyzingitsmaterial characteristicsandt-SNEvisualizationofonlyfewsamplesinsteadofcollectinglargedatasetsfor eachofthenewmaterial. 125 Next,weproposeastyle-transferbasedwrapper,UniversalMaterialGenerator(UMG),toim- provethegeneralizationperformanceofanyPAdetectoragainstnovelPAfabricationmaterials thatareunknowntothesystemduringtraining.Theproposedapproachisshowntoimprove theaveragegeneralizationperformanceoftwostate-of-the-artPAdetectors,namelyFingerprint SpoofBuster(andSlim-ResCNN),fromTDRof 75 : 24% ( 73 : 09% )to 91 : 78% ( 90 : 63% )@FDR = 0 : 2% ,respectively,whenevaluatedonalarge-scaledatasetof 5 ; 743 liveand 4 ; 912 PAimages fabricatedusing12materials.Ourapproachalsoimprovestheaveragecross-sensorperformance from 67 : 60% ( 64 : 62% )to 80 : 63% ( 77 : 59% )forFingerprintSpoofBuster(Slim-ResCNN)when testedonLivDet2017dataset,alleviatingthetimeandresourcesrequiredtogeneratelarge-scale PAdatasetsforeverynewsensorandPAmaterial.WehavealsofabricatedphysicalPAspeci- mensusingamixtureofknownPAmaterialstoexploretheroleofcross-materialstyle-transferin improvinggeneralizationperformance. Finally,weutilizethedynamicsinvolvedinthepresentationofa,suchasskinblanching, distortion,andperspiration,tolearnarobustPAD.Thisapproachusesasequenceoflocalpatches centeredatdetectedminutiaefromtencolorframescapturedat 8 fpsastheispresentedon thesensor.TheproposedapproachimprovesthePAdetectionperformancefromTDRof 81 : 65% to 86 : 20% @FDR= 0 : 2% incross-materialscenarios,whileretaininghighperformanceinthe knownmaterialscenario. 126 Chapter4 PresentationAttackDetectionforOCT FingerprintImages Inthepreviouschapters,weaddressedtheproblemofpresentationattackdetectionanditsgener- alizationusingconventionalreaders, e.g. 
,opticalandcapacitivereaders,thatimagethe 2DsurfaceInthischapter,weexploretheuseofopticalcoherenttomography(OCT) technologywhichprovidesrichdepthinformation,includinginternal(pap- illaryjunction)andsweat(eccrine)glands,inadditiontoimaginganyfakelayers(presentation attacks)placedoverskin.Unlike2Dsurfacescans,additionaldepthinformation providedbythecross-sectionalOCTdepthscansarepurportedtothwartpre- sentationattacks.Wedevelopandevaluateapresentationattackdetector(PAD)basedonadeep convolutionalneuralnetwork(CNN).TheinputdatatoourCNNislocalpatchesextractedfrom thecross-sectionalOCTdepthscanscapturedusingTHORLabsTelestoseriesspectral- domainngerprintreader.TheproposedapproachachievesaTDRof 99 : 73% @FDRof 0 : 2% on adatabaseof 3 ; 413 and 357 PAOCTscans,fabricatedusing8differentPAmaterials.By employingavisualizationtechnique,knownas CNN-Fixations ,weareabletoidentifytheregions intheOCTscanpatchesthatarecrucialforPADdetection. 127 Figure4.1Differentlayersofa(stratumcorneum,epidermis,papillaryjunction,anddermis) aredistinctlyvisibleinaOCTscan,alongwithhelicalshapedeccrinesweatglandsin(a) 3-DOCTvolumeand(b)2-DOCTdepthNotethat(a)and(b)areOCT scansofdifferentImage(a)iscapturedusingTHORLabsTelestoseries(TEL1325LV2) SD-OCTscanner[154]and(b)isreproducedfrom[33]. 4.1Introduction Mostoftherecognitionsystemsbasedontraditionalreaders(e.g.,FTIRandcapacitive technology)relyuponthefrictionridgeinformationonthesurface( i.e. ,stratumcorneum). Thismakesthemhighlyvulnerabletobefooledbypresentationattacks.Ontheotherhand,op- ticalcoherencetomography(OCT)[72]technologyallowsnon-invasive,high-resolution,cross- sectionalimagingofinternaltissuemicrostructuresbymeasuringtheiropticalAnop- ticalanaloguetoUltrasound[164],itutilizeslow-coherenceinterferometryofnear-infraredlight ( 900 nm 1325 nm )andiswidelyusedinbiomedicalapplications,suchasophthalmology[132], oncology[66],dermatology[162]aswellasapplicationsinartconservation[99]and presentationattackdetection[111].InanOCTscanner,abeamoflightissplitintoa samplearm , i.e. ,aunitcontainingtheobjectofinterest,anda referencearm , i.e. ,aunitcontainingamirror tobacklightwithoutanyalteration(seeFig.4.2).Ifthelightfromthetwoarms arewithincoherencedistance,itgivesrisetoaninterferencepatternrepresentingthedepth atasinglepoint,alsoknownas A-scan .LaterallycombiningaseriesofA-scansalongalinecan provideacross-sectionalscan,alsoknownas B-scan (seeFigs.4.1(b)and4.3).Stackingmultiple 128 Figure4.2Aschematicdiagramofaspectral-domainopticalcoherenttomography(SD-OCT) scanner.Thesourcelightisemittedbyasuperluminescentdiode(SLD)whichissplitintoasample armandareferencearm.Ahigh-resolutiontomographyimageoftheinternalmicrostructureof thebiologicaltissueisperformedbymeasuringtheinterferencesignalofthesamplebackscattered light.Imagereproducedfrom[100]. B-scanstogethercanprovidea3Dvolumetricrepresentationofthescannedobject,ortheobject ofourinterest i.e. ,internalstructureofa(seeFigure4.1(a)). 
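As a trivial illustration of the relationship between the scan types: a B-scan is a 2D depth-by-lateral slice, and stacking a series of B-scans along a third axis yields the volumetric representation of Figure 4.1(a). The array sizes below (1024 x 1900 B-scans, 100 slices) are only example values; random arrays stand in for real scans.

```python
import numpy as np

b_scans = [np.random.rand(1024, 1900).astype(np.float32) for _ in range(100)]
volume = np.stack(b_scans, axis=0)   # shape: (num_slices, depth, lateral)
print(volume.shape)                  # (100, 1024, 1900)
```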
Thehumanskinisalayeredtissuewiththeoutermostlayerknownas epidermis andthe external-facingsublayerofepidermis,wherethefrictionridgestructureexists,isknownas stra- tumcorneum .Thelayerbelowepidermisisknownas dermis ,andthejunctionbetweenepidermis anddermislayersisknownas papillaryjunction .Thedevelopmentoffrictionridgepatternson papillaryjunction,whichstartsasearlyasinweeks10-12ofgestation,resultsintotheformation ofasurfaceonstratumcorneum[7].Thesurfacefrictionridgepattern,scannedby traditional(opticalandcapacitive)readers,ismerelyaninstanceoraprojectionofthe, sotosay,a masterprint existingonthepapillaryjunction.Therealsoexisthelicallyshapedducts inepidermislayerconnectingtheeccrine(sweat)glandsindermistothesweatporesonsurface. SeeFigure4.1. 129 Table4.1ExistingstudiesonOpticalCoherentTomography(OCT)basedpresentation attackdetection. Study Approach OCTTechnology Database Comments Cheng etal., 2006[19] AveragedB-scanslicestogenerate1Ddepth performedauto-correlationanalysis; B-scanis2.2mmindepthand2.4mmlaterally ImaluxCorp. Time-domainOCT; capturetime:3s 8(8 ofonesubject)and 10-20impressions perPA,fourPA materials Manualinspection ofauto-correlation response Cheng etal., 2007[20] Extended[19]bycombining100B-Scansto create3Drepresentation;anisotropicresolution (4762dpi,254dpi) ImaluxCorp. Time-domainOCT; capturetime:300s for100scans One, onePA Visualanalysisof 3Drepresentation Bosen etal., 2010[10] UsedCOTSformatching3DOCT scans;scannedvolume:14mmx14mmx 3mm;discusseddetectionofeccrineglandsfor PAD THORLabs Swept-sourceOCT (OCS1300SS); capturetime:20s for3Dvolume 153impressions from51for experiment;onePA material. Visualanalysisfor PAD; performance:FRR =5%@FAR= 0.01% Liuetal., 2010[102] Mappedsubsurfaceeccrineglandswithsweat poresonsurface;exhibitedrepeatable matchingofbasedonsweatpores; discussedabsenceofsweatporesfor PAD Custom Spectral-domain OCT;capturetime: 4minfor3Dvolume Nine impressionsfrom threetwo PAmaterials Visualanalysisof eccrineglandsfor PAD Nasiri- Avanaki etal., 2011[116] Usedadynamicfocus en-Face OCTtodetect anylayerplacedoverskin;discussed DopplerOCTtodetectbloodwandsweat productionforlivenessdetection Custom en-Face OCT;capturetime isnotreported One, onePA Visualanalysisof one andonesellotape PA Liuetal., 2013[101] Auto-correlationanalysisbetweenadjacent B-Scanstodeterminebloodwin micro-vascularpattern Swept-sourceOCT; capturetime:20s Onewith andw/oinhibited bloodw Exhibitedrepeatable signsofvitality Meissner etal., 2013[109] Detectednumberofhelicaleccrineglandducts todistinguishvsPA,scannedvolume: 4.5mmx4mmx2mm Swept-sourceOCT; capturetimeisnot reported 7 ; 458 images,cadavers: 330 images,PA: 2 ; 970 images ManualPAD: 100% ; automatedPAD: 93% and PA: 74% success rate Darlow etal., 2016[33] Detecteddoublebrightpeaksindepth forthinPAsandautocorrelationanalysisfor thickPAs;2differentresolutions;scanned volume:13mmx13mmx3mm(500dpi)and 15mmx15mmx3mm(867dpi) THORLabs Swept-sourceOCT (OCS1300SS); capturetime:20s for3Dvolume 540 scans from15subjects, PA:28scans;one PAmaterial+ sellotape PADaccuracy: 100% Darlow etal., 2016[32] Measuredridgefrequencyconsistencyofthe internalinnon-overlappingblocks; THORLabs Swept-sourceOCT (OCS1300SS) 20scans, PA20scans;onePA material PADaccuracy: 100% Liuetal., 2019[100] Analyzedorderandmagnitudeofbrightpeaks in1-DdepthsignalstodetectPAswith differentthickness;scannedvolume:15mmx 15mmx1.8mm Custom Spectral-domain OCT 30scans from15subjects, PA:60scans;four PAmaterials Contact-based(glass platen)OCT scanner;PAD accuracy:100% Proposed Approach 
TrainedadeepCNNmodelusingoverlapping patchesextractedfromdetecteddepth inB-Scans;B-scanis1.8mmindepth and14mmlaterally THORLabs Spectral-domain OCT (TEL1325LV2); capturetime: < 1 s 3,413 scansfrom415 subjects,PA:357 scans,eightPA materials Five-fold cross-validation; TDR=99.73%@ FDR=0.2% 130 Figure4.3Directviewimageswithredarrowspresentingthescannedlineandthecorresponding cross-sectionalB-scanfora(a)anda(b)pigmentedxpresentationattack. 4.1.1RelatedWork SinceOCTenablesimagingthe3Dvolumetricmorphologyoftheskintissue,includingthesubsur- faceandotherinternalstructures,ithasgreatpotentialindetectingpresen- tationattacks.ExistingPADstudiesintheliteraturehaveexploredvariousOCTtech- nologiessuchastime-domain,fourier-domain,andspectraldomain,anddevelopedhandcrafted featurestodetectbloodw,eccrineglands,andcorrelationbetweenthesurfaceandinternal gerprint. In2006,Chengetal.[19]utilizedatime-domainOCTscannertocaptureB-scansliceswhich wereaveragedtogenerate1Ddepthsignals.Theyusedauto-correlationof1Dsignalsto manuallydistinguishfromPA.Stacking100B-scanslicesallowedthemtocreatea3D representationoftheinternalstructureforbettervisualizationtodistinguishbetweenlive 131 andPA[20].In2010,Bosenetal.[10]utilizedaswept-sourceOCTtocollect153impressions from51andaCOTSmatchertoevaluatetheperformance.Theydiscussed theideaofdetectingeccrineglandsforPAD.Liuetal.[102]mappedsubsurfaceeccrineglands withsweatporesonsurfacecapturedusingaspectral-domainOCTanddiscussedtheidea ofusingabsenceofsweatporesforPAD.In2011,Nasiri-Avanakietal.[116]utilizedacustom dynamicfocus en-Face OCTcapabletocaptureanylayerplacedoverskinandalsodiscussed amethodtoutilizeDopplerOCTfordetectingbloodwandsweatproductionforPAD. In2013,Liuetal.[101]usedaswept-sourceOCTtocaptureB-scansandusedauto-correlation betweenadjacentscanstodeterminewinmicro-vascularpatterns.Theyutilizedone withandwithoutinhibitedbloodwtoshowtheantchangesinauto- correlationvalues.Meissneretal.[109]presentedthelarge-scaleOCT-basedPADevaluation with 7 ; 458 images, 330 cadaverimages,and 2 ; 970 PAimagescapturedusingaswept- sourceOCTscanner.Theyutilizeddetectionofhelicallyshapedeccrineglandductsandachieved 100%PADperformanceonmanualanalysis.However,thedetectionratesdroppedto93%and 74%forandPA,respectively,usinganautomatedalgorithm.In2016,Darlowetal.[33] utilizedswept-sourceOCT1DscansanddetecteddoublebrightpeaksforthinPAsandanalyzed auto-correlationforthickPAs.AperfectPADaccuracywasachievedwith28PAscansand540 scansfrom15subjects.However,theyonlyutilized1PAmaterialforthickPAand sellotapeforthinPA.In[32],Darlowetal.measuredridgefrequencyconsistencyofinternal innon-overlappingblocks.Theyused20and20PAscansfabricatedusing 1PAmaterialandachieved100%accuracy.Recently,in2019,Liuetal.[100]utilizedacustom spectral-domainOCTscannerandanalyzedorderandmagnitudeofbrightpeaksin1Ddepth signalstodetectPAs.ThesestudiesaresummarizedinTable4.1. Intheproposedapproach,weutilizelocalpatches( 150 150 )extractedfromtheautomati- callysegmenteddepthfrominputB-scanimagestotrainadeepconvolutionalneural network.Themaincontributionsofthischapterare: 132 Figure4.4Anoverviewoftheproposedpresentationattackdetectionapproachutilizing localpatchesextractedfromthesegmenteddepthfromOCTB-scans. 1.ProposedadeepconvolutionalneuralnetworkbasedPADapproachtrainedonlocalpatches containingdepthfromcross-sectionalB-Scans. 2.Evaluatedtheproposedapproachonadatabaseof3,413and357PAOCTB-scans fabricatedusing8differentPAmaterialsandachievedaTDRof99.73%@FDRof0.2% forPAD. 
3.dtheregionsintheOCTscanpatchesthatarecrucialforPADdetection byemployingavisualizationtechnique,knownas CNN-Fixations . 4.2ProposedApproach TheproposedPADapproachincludestwostages,anoftrainingstageandanonlinetesting stage.Theoftrainingstageinvolves(i)preprocessingtheOCTimages(noiseremovaland imageenhancement),(ii)detectingregion-of-interest( i.e. ,depth(iii)extracting localpatchesfromtheregion-of-interest(ROI),and(iv)trainingCNNmodelsontheextracted localpatches.Duringtheonlinetestingstage,thespoofdetectiondecisionismadebasedon theaverageofspoofnessscoresoutputfromtheCNNmodelforeachoftheextractedpatches.An overviewoftheproposedapproachispresentedinFigure4.4. 133 Figure4.5Depthofamanifestsalayeredtissueanatomyquitedistinguish- ablefromthedepthofapresentationattackwithoutanystructure. 4.2.1Preprocessing OpticalCoherentTomography(OCT)2Dscansaregrayscaleimageswithheight =1024 pixels andwidth =1900 pixels(seeFigs.4.5and4.6).Theseimagescontaingaussiannoisewhich makestheextractionofregion-of-interestdepthbysimplethresholdingproneto errors.WeemployNon-LocalMeansdenoising[12]thatremovesnoisebyreplacingtheintensity ofapixelwithanaverageintensityofthesimilarpixelsthatmaynotbepresentclosetoeachother (non-local)intheimage.Anoptimizedopencvpythonimplementation 1 ofNon-LocalMeans denoising, cv2.fastNlMeansDenoising() ,isusedwith ength = 20 , templateWindowSize = 7 , and searchWindowSize = 21 .Afterde-noising,amorphologicaloperationofimagedilation[49], withthekernelsizeof 5 5 ,isappliedtoenhancetheimage. 1 https://opencv-python-tutroals.readthedocs.io/en/latest/py tutorials/py photo/py non local means/py non local means.html 134 Figure4.6ExamplesofdeandpresentationattacksamplesfromtheOCT databaseutilizedinthisstudy. 4.2.2Otsu'sBinarization ThecharacteristicdifferencesbetweenaandapresentationattackOCTimagearepri- marilydiscernibleinthedepthregionasshowninFigure4.5.Thepixelintensity histogramsforthegrayscale2DOCTimagesarebimodal,withthepeak(highintensityval- ues)referringtothedepthregion,whilethesecondpeak(lowintensityvalues)refers tothebackgroundregion.InordertosegmentoutthedepthweapplyOtsu'sthresh- olding[125]whichanadaptivethreshold,inthemiddleofthetwopeaks,tosuccessfully binarizetheinputOCTimagesasshowninFigure4.4. 4.2.3LocalPatchExtraction ThebinarizedimagegeneratedafterOtsu'sbinarizationisrasterscanned,withastrideof 30 pixels (inboth x and y -axis),toidentifythepossiblecandidatesforpatchextraction.Ateachscanned pixel,awindowofsize 9 9 isevaluatedandifmorethan 25% ofthepixels( 20 outof 81 pixels) inthewindowhavenon-zerovalues,thepixelismarkedasacandidateforextractingalocalpatch. 135 Table4.2SummaryoftheOpticalCoherentTomography(OCT)databasecollectedatGCT-IIas partofIARPAODINProgram[123]. FingerprintPresentationAttackMaterial #Images BallisticGelatin 34 Clearx 7 Tanx 49 YellowPigmentedSilicone 57 FleshPigmentedx 36 NusilR-2631ConductiveSilicone 128 FleshPigmentedPDMS 42 Elmer'sGlue 1 Bandaid 3 TotalPAs 357 Total 3,413 Thisruleisappliedtoguaranteesufdepthinformationintheextractedpatches.Afterthe patchcandidatesareselected,amaximumof 60 localpatchesofsize 150 150 areextractedfrom theoriginalimagearoundthepatchcandidates.Iftherearemorethan 60 candidates,thetopmost candidatesfromeachcolumn( i.e. 
the points closest to the finger surface) are selected before moving to the next row. With the image width of 1900 pixels and a stride of 30 pixels, a maximum of 60 patches is sufficient to provide at least one pass over the stratum corneum. The patches are extracted such that the candidate is located at (50, 75) in the 150 × 150 patch. This ensures that the extracted patches cover the stratum corneum, epidermis, and papillary junction regions as shown in Figure 4.4.

4.2.4 Convolutional Neural Networks

With the success of AlexNet [93] in ILSVRC-2012 [140], different deep CNN architectures have been proposed in the literature, such as VGG, GoogleNet (Inception), Inception v2-v4, MobileNet, and ResNet. In this study, we utilize the Inception-v3 [150] architecture, which has exhibited state-of-the-art performance in patch-based fingerprint presentation attack detection [23, 24]. Our experimental results show that training the models from scratch, using local patches, performs better than a network pre-trained on fingerprint image patches from other domains (e.g., FTIR fingerprint images).

We utilized the TF-Slim library 2 implementation of the Inception-v3 architecture. The last layer of the architecture, a 1000-unit softmax layer (originally designed to classify a query image into one of the 1,000 classes of the ImageNet dataset), was replaced with a 2-unit softmax layer for the two-class problem, i.e., bonafide vs. PA. The output from the softmax layer is in the range [0, 1], defined as the Spoofness Score. The larger the spoofness score, the higher the likelihood that the input patch belongs to the PA class. For an input test image, the spoofness scores corresponding to each of the local patches extracted from the input image are averaged to give a Global Spoofness Score. The optimizer used to train the network was RMSProp, with a batch size of 32, and an adaptive learning rate with exponential decay, starting at 0.01 and ending at 0.0001. Data augmentation techniques, such as random cropping, brightness adjustment, and horizontal and vertical flipping, are employed to ensure the trained model is robust to the possible variations in fingerprint images. The proposed approach is presented in Algorithm 2.

2 https://github.com/tensorflow/models/tree/master/research/slim

4.3 Experimental Results

4.3.1 OCT Presentation Attack Database

A database of 3,413 bonafide and 357 presentation attack (PA) 2D OCT scans is utilized in this study. These scans are captured using a THORLabs Telesto series (TEL1325LV2) spectral-domain OCT scanner [154] (see Figure 4.7). Table 4.2 lists the eight PA materials and the corresponding number of scans for each material type. Figure 4.6 presents a few samples of bonafide and PA scans from this database. This dataset was collected at the Johns Hopkins University Applied Physics Lab 3 as part of a large-scale evaluation under the IARPA ODIN Project [123] on presentation attack detection.

Algorithm 2 Presentation Attack Detection for OCT Fingerprint Images
1: procedure
2: input
3:   I : 2D OCT fingerprint image
4: output
5:   S_I : predicted spoofness score for I
6: functions and parameters
7:   f(.) : OpenCV non-local means denoising function cv2.fastNlMeansDenoising()
8:   θ_f : filterStrength = 20, templateWindowSize = 7, and searchWindowSize = 21
9:   g(.) : image dilation function
10:  θ_g : kernelSize = 5
11:  h(.) : Otsu's binarization function
12:  p(.) : raster-scan local patch extractor with maximum number of patches = 60
13:  θ_p : h = w = 150, Stride_x = 30, Stride_y = 30, PatchCenter = (50, 75)
14:  c(.) : Inception-v3 CNN model trained on bonafide and PA OCT patch images; returns spoofness scores for input patches
15: begin:
16:  Preprocessing: I_p = g(f(I; θ_f); θ_g)
17:  Binarized image: I_b = h(I_p)
18:  Local patch extraction: Φ = p(I, I_b; θ_p)
19:  CNN evaluation of local patches: S_Φ = c(Φ)
20:  Spoofness score: S_I = average(S_Φ)
21: end
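To make the offline and online steps of Algorithm 2 concrete, the following is a minimal Python/OpenCV sketch of the denoising, dilation, Otsu's binarization, candidate selection, patch extraction, and score-averaging operations described above. It is an illustrative approximation under the stated parameter values rather than the exact implementation used in this thesis; the CNN scorer c(.) is left as a placeholder, and the file name in the usage comment is hypothetical.

import cv2
import numpy as np

def preprocess(img):
    # Non-Local Means denoising followed by 5 x 5 dilation (Section 4.2.1).
    denoised = cv2.fastNlMeansDenoising(img, None, h=20,
                                        templateWindowSize=7, searchWindowSize=21)
    return cv2.dilate(denoised, np.ones((5, 5), np.uint8))

def binarize(img):
    # Otsu's thresholding to segment the fingerprint depth region (Section 4.2.2).
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary

def extract_patches(img, binary, stride=30, win=9, size=150,
                    center=(50, 75), max_patches=60):
    # Raster-scan candidate selection and 150 x 150 patch extraction (Section 4.2.3).
    patches = []
    H, W = binary.shape
    for y in range(0, H - win, stride):        # top-to-bottom: topmost candidates first
        for x in range(0, W - win, stride):
            # Candidate rule: at least 20 of the 81 window pixels are foreground.
            if np.count_nonzero(binary[y:y + win, x:x + win]) >= 20:
                top, left = y - center[0], x - center[1]
                if 0 <= top <= H - size and 0 <= left <= W - size:
                    patches.append(img[top:top + size, left:left + size])
                    if len(patches) >= max_patches:
                        return patches
    return patches

def global_spoofness_score(patches, cnn_score_fn):
    # Average of per-patch spoofness scores; treating "no valid patches" as a PA
    # is an assumption made here, not a rule stated in the thesis.
    scores = [cnn_score_fn(p) for p in patches]
    return float(np.mean(scores)) if scores else 1.0

# Example usage with a placeholder scorer (a trained model would replace it):
# img = cv2.imread("oct_bscan.png", cv2.IMREAD_GRAYSCALE)
# pre = preprocess(img)
# score = global_spoofness_score(extract_patches(pre, binarize(pre)), lambda p: 0.5)

In the proposed approach, c(.) corresponds to the Inception-v3 model described in Section 4.2.4, applied to each extracted 150 × 150 patch.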
4.3.2 Results

The proposed approach is evaluated using five-fold cross-validation. Table 4.3 presents the training and testing set details for each fold 4, along with the achieved PA True Detection Rate (%) @ False Detection Rate = 0.2%. The selection of this metric is based on the requirements of the IARPA ODIN program [123]; it reflects the percentage of PAs that are correctly detected (and thereby prevented from breaching the biometric system security) when the reject rate of legitimate users is 0.2%. Note that the proposed approach achieves an avg. TDR = 99.73% (s.d. = 0.55) @ FDR = 0.2% for the five folds. Figure 4.8 presents the ROC curves for each of the five folds. In fold-II, only one bonafide scan was misclassified as PA due to incorrect segmentation.

3 https://www.jhuapl.edu/
4 Note that all PA types are uniformly distributed among the five folds without repetition; therefore, Elmer's Glue and Bandaid, which have less than five samples, are missing from some folds.

Figure 4.7 Setup of a THORLabs Telesto series spectral-domain OCT scanner (TEL1325LV2). Image taken from [154].

Table 4.3 Summary of the five-fold cross-validation and the performance achieved using the Inception-v3 model.

Fold | # Images (Bonafide/PA) Training | # Images (Bonafide/PA) Testing | TDR (%) @ FDR = 0.2%
I | (2,730/281) | (683/76) | 100.00
II | (2,730/283) | (683/74) | 98.63
III | (2,730/288) | (683/71) | 100.00
IV | (2,731/289) | (682/70) | 100.00
V | (2,731/288) | (682/71) | 100.00
Average | | | 99.73 (s.d. = 0.55)

Figure 4.8 ROC curves for the five-fold cross-validation experiments. The red curve represents the average performance with the grayed region representing the interval of one standard deviation.

4.3.3 Visualizing CNN Learnings

CNNs have revolutionized computer vision and machine learning research, achieving unprecedented performance in many tasks. But these are usually treated as "black boxes", shedding little light on their internal workings and without answering how they achieve high performance. One way to gain insights into what CNNs learn is through visual exploration, i.e., to identify the image regions that are responsible for the predictions. Towards this goal, visualization techniques [112, 144, 146] have been proposed to supplement the class labels predicted by a CNN, in our case bonafide or PA, with the discriminative image regions (or saliency maps) exhibiting class-specific patterns learned by CNN architectures. The visualization technique proposed in [112] exploits the learned feature dependencies between consecutive layers of a CNN to identify the discriminative pixels, called CNN-Fixations, in the input image that are responsible for the predicted label. We utilize this visualization technique to understand the representation learning of our CNN models and identify the crucial regions in OCT images responsible for the predictions. Figure 4.9 presents CNN-Fixations and the corresponding density heatmaps for two bonafide and two PA image patches that are correctly classified. We observe that there is a high density of fixations along the stratum corneum and at the papillary junction, suggesting that these are crucial regions in distinguishing bonafide vs. PA OCT patches. Note that the only bonafide sample misclassified in Fold-II was due to incorrect segmentation; otherwise it would be useful to observe the CNN-Fixations that led to an incorrect prediction.

Figure 4.9 Patches (150 × 150) from bonafide and PA OCT B-scans input to the model are presented. The detected CNN-Fixations and a heatmap presenting the density of CNN-Fixations are also shown. A high density of fixations is observed along the stratum corneum (surface fingerprint) and at the papillary junction in both bonafide and PA patches. (Best viewed in color)
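Since the results in this chapter are reported as TDR @ FDR = 0.2%, a short sketch of how such an operating point can be computed from per-image global spoofness scores may be useful. This is a generic illustration: the beta-distributed scores and the fold-I set sizes (683 bonafide, 76 PA) are hypothetical stand-ins, not the evaluation code behind Table 4.3.

import numpy as np

def tdr_at_fdr(bonafide_scores, pa_scores, target_fdr=0.002):
    # Choose the lowest spoofness threshold such that the fraction of bonafide
    # images flagged as PA (score > threshold) stays within target_fdr, then
    # report the fraction of PA images flagged at that threshold (the TDR).
    bona = np.sort(np.asarray(bonafide_scores))[::-1]   # descending spoofness
    allowed = int(np.floor(target_fdr * len(bona)))     # bonafide errors allowed
    threshold = bona[min(allowed, len(bona) - 1)]
    fdr = float(np.mean(bona > threshold))               # achieved false detection rate
    tdr = float(np.mean(np.asarray(pa_scores) > threshold))
    return tdr, fdr, threshold

# Hypothetical global spoofness scores (higher = more likely PA):
rng = np.random.default_rng(0)
bona = rng.beta(1, 20, size=683)   # bonafide scores cluster near 0
pa = rng.beta(20, 1, size=76)      # PA scores cluster near 1
tdr, fdr, thr = tdr_at_fdr(bona, pa, target_fdr=0.002)
print(f"TDR = {100 * tdr:.2f}% @ FDR = {100 * fdr:.2f}% (threshold = {thr:.3f})")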
4.4 Summary

The penetrative power of optical coherent tomography (OCT) to image the internal tissue structure of human finger skin in a non-invasive manner presents a great potential to investigate fingerprint robustness against presentation attacks. We propose and demonstrate a deep learning-based approach to differentiate between bonafide (live) fingers and eight different types of presentation attacks (spoofs). The proposed approach utilizes local patches automatically extracted from the depth information in 2D OCT B-scans to train an Inception-v3 network model. Our experimental results achieve a TDR of 99.73% @ FDR of 0.2% on a database of 3,413 bonafide and 357 PA scans. The crucial regions in the input images for PAD learned by the CNN models, namely the stratum corneum and the papillary junction, are identified using a visualization technique. In the future, we will evaluate the generalization ability of the proposed approach against novel materials that are not seen by the model during training.

Chapter 5

Summary

In this thesis, we address the challenges of fingerprint presentation attack detection by developing an accurate, efficient, interpretable, and generalizable solution to detect fake/gummy fingers (spoofs) and altered fingerprints. The proposed solution achieves state-of-the-art accuracy on publicly available liveness detection (LivDet) databases, large-scale government (IARPA ODIN program) evaluation databases, two new in-house self-collected databases, and an operational altered fingerprint database from a law enforcement agency. Fingerprints used in these datasets are captured using both traditional fingerprint readers, e.g., CrossMatch Guardian 200, Lumidigm V302, SilkID FastFrameRate, etc., as well as novel readers based on optical coherent tomography (OCT). The proposed solution is optimized, in terms of both memory and computational resources, for real-time inference and is ported as an efficient Android app that can make a PAD decision in under 100 ms on a commodity smartphone (Samsung Galaxy S8). Furthermore, we investigate the optical and physical characteristics of different spoof materials to understand and interpret the cross-material (generalization) performance achieved by the proposed approach. We also improve the PAD generalization performance by proposing two different approaches: (i) a style transfer-based wrapper to generate spoof fingerprint images of unknown styles and (ii) a temporal analysis of a sequence of fingerprint image frames.

5.1 Contributions

The main contributions of this thesis are summarized below:

1. An accurate deep learning-based fingerprint presentation attack detector (PAD), called Fingerprint Spoof Buster, utilizing local patches centered and aligned along fingerprint minutiae. The proposed approach, utilizing only grayscale fingerprint images, can be integrated as a software-only solution, without incurring any additional hardware cost, into a wide range of already deployed fingerprint matching systems. Our algorithm can be generalized to fingerprint images captured by any sensor with minimal retraining.

2. A graphical user interface for the Fingerprint Spoof Buster which highlights the local regions of the fingerprint image as bonafide (live) or PA (spoof) for visual inspection. This is more informative than a single spoof score output by the traditional approaches. We utilize visualization techniques to interpret the features learned by CNN models in order to understand the strengths and limitations of the proposed approach. In the same spirit, we also propose a method for detection and localization of fingerprint alterations utilizing whole images and minutia-centered patches to train CNN models, achieving state-of-the-art accuracy.

3. We tackle the high memory and computational requirements of Fingerprint Spoof Buster by (i) minutiae clustering, followed by weighted fusion to reduce the required number of local patch inferences, and (ii) optimizing the network architecture and quantization of model weight parameters to perform byte computations instead of floating point arithmetic. The proposed optimizations result in an approximately 80% reduction in computation and memory requirements. This has enabled us to develop a light-weight version of the PAD, called Fingerprint Spoof Buster Lite, as an Android application that can run on a commodity smartphone (Samsung Galaxy S8) without a significant drop in PAD performance (from TDR = 95.7% to 95.3% @ FDR = 0.2%), capable of detecting spoofs in under 100 ms.
4. An interpretation of the cross-material (generalization) performance of the proposed PAD by (i) evaluating Fingerprint Spoof Buster against unknown PAs by adopting a leave-one-out protocol where one material is left out from the training set and set aside for testing, (ii) utilizing 3D t-SNE visualizations of the bonafide and PA samples in the deep feature space, and (iii) investigating the PA material characteristics (two optical and two physical properties) and correlating them with their cross-material performances to identify a representative set of PA materials that should be included during training to ensure a high generalization performance.

5. A style transfer-based wrapper, Universal Material Generator (UMG), to improve the generalization performance of any PA detector against novel PA fabrication materials that are unknown to the system during training. The proposed wrapper is shown to improve the average generalization performance of Fingerprint Spoof Buster from a TDR of 75.24% to 91.78% @ FDR = 0.2% when evaluated on a large-scale dataset of 5,743 live and 4,912 PA images fabricated using 12 materials. It is also shown to improve the average cross-sensor performance from 67.60% to 80.63% when tested on the LivDet 2017 dataset, alleviating the time and resources required to generate large-scale PA datasets for new sensors.

6. A dynamic PAD solution utilizing a sequence of local patches centered at detected minutiae from ten color frames captured in quick succession (8 fps) as the finger is presented on the sensor. We posit that the dynamics involved in the presentation of a finger, such as skin blanching, distortion, and perspiration, provide discriminating cues to distinguish live fingers from spoofs. The proposed approach improves the spoof detection performance from a TDR of 99.11% to 99.25% @ FDR = 0.2% in known-material scenarios, and from a TDR of 81.65% to 86.20% @ FDR = 0.2% in cross-material scenarios.

7. A PAD solution utilizing the ridge-valley depth information of finger skin, including the internal fingerprint (papillary junction) and sweat (eccrine) glands, sensed by optical coherent tomography (OCT) technology. Our proposed solution achieves a TDR of 99.73% @ FDR of 0.2% on a database of 3,413 bonafide and 357 PA OCT scans captured using a THORLabs Telesto series spectral-domain reader. We also identify the regions in the OCT scan patches that are crucial for PA detection.

5.2 Suggestions for Future Work

The following are some of the possible future directions within the scope of fingerprint presentation attack detection:

PAD Generalization: Explore adversarial representation learning (ARL) based approaches [139] to learn material- and sensor-agnostic feature representations for generalized PAD.

Multi-Task Learning: In addition to detecting bonafide vs. PA, a PAD could be trained to predict the PA material type as an open-set problem. With one of the classes as "unknown" material, the system could be trained in a continuous (online) manner when the network is not able to predict the material type with high confidence.

Dynamic PAD Approaches: Learning a "mixture of PAD experts" where each expert module specializes in some sensor and/or some PA materials. The selection of the best module can be learned as an auxiliary task and this decision can be made dynamically at test time.

Altered Fingerprints: Explore GAN-based generative models and 3D printing of altered fingerprint targets to increase the availability of altered fingerprint databases in the literature for conducting a large-scale study.

BIBLIOGRAPHY

[1] Meir Agassy, Boaz Castro, Arye Lerner, Gal Rotem, Liran Galili, and Nathan Altman. Liveness and Spoof Detection for Ultrasonic Fingerprint Sensors, April 16 2019. US Patent 10,262,188.

[2] Athos Antonelli, Raffaele Cappelli, Dario Maio, and Davide Maltoni. Fake Finger Detection by Skin Distortion Analysis. IEEE Transactions on Information Forensics and Security, 1(3):360-373, 2006.

[3] Apple. Apple Pay: Payment authorization using Touch ID. https://www.apple.com/business/site/docs/iOS_Security_Guide.pdf, May 2019.
[4]SunpreetS.Arora,KaiCao,AnilK.Jain,andNicholasG.Paulter.DesignandFabrica- tionof3DFingerprintTargets. IEEETransactionsonInformationForensicsandSecurity , 11(10):2284Œ2297,2016. [5]SunpreetS.Arora,AnilK.Jain,andNicholasG.Paulter.GoldFingers:3DTargetsfor EvaluatingCapacitiveReaders. IEEETransactionsonInformationForensicsandSecurity , 12(9):2067Œ2077,2017. [6]DavidR.Ashbaugh. Quantitative-QualitativeFrictionRidgeAnalysis:AnIntroductionto BasicandAdvancedRidgeology .CRCpress,1999. [7]WilliamJ.Babler.EmbryologicDevelopmentofEpidermalRidgesandtheir tions. BirthDefectsOriginalArticleSeries ,27(2):95Œ112,1991. [8]DenisBaldisserra,AnnalisaFranco,DarioMaio,andDavideMaltoni.FakeFingerprint DetectionbyOdorAnalysis.In Proc.InternationalConferenceonBiometrics(ICB) ,pages 265Œ272.Springer,2006. [9]MauroBarnietal.APrivacy-compliantFingerprintRecognitionSystembasedonHo- momorphicEncryptionandFingercodeTemplates.In IEEEInternationalConferenceon Biometrics:Theory,ApplicationsandSystems(BTAS) ,pages1Œ7,2010. [10]AnkeBossen,RolandLehmann,andChristophMeier.InternalFingerprint withOpticalCoherenceTomography. IEEEPhotonicsTechnologyLetters ,22(7):507Œ509, 2010. [11]GaryBradskiandAdrianKaehler. LearningOpenCV:ComputervisionwiththeOpenCV library .O'ReillyMedia,Inc.,2008. [12]AntoniBuades,BartomeuColl,andJean-MichelMorel.Non-localMeansDenoising. Image ProcessingOnLine ,1:208Œ212,2011. [13]KaiCaoandAnilK.Jain.LearningFingerprintReconstruction:FromMinutiaetoImage. IEEETransactionsonInformationForensicsandSecurity ,10(1):104Œ117,2014. 148 [14]KaiCaoandAnilK.Jain.Hackingmobilephonesusing2DPrintedFingerprints,MSU Tech.report,MSU-CSE-16-2.https://www.youtube.com/watch?v=fZJI BrMZXU,2016. [15]KaiCaoandAnilK.Jain.AutomatedLatentFingerprintRecognition. IEEETransactions onPatternAnalysisandMachineIntelligence ,41(4):788Œ800,2018. [16]KaiCao,EryunLiu,LiaojunPang,JiminLiang,andJieTian.FingerprintMatchingby IncorporatingMinutiaeDiscriminability.In IEEEInternationalJointConferenceonBio- metrics(IJCB) ,pages1Œ6,2011. [17]KaiCao,Dinh-LuanNguyen,CoriTymoszek,andAnilK.Jain.End-to-EndLatentFin- gerprintSearch. IEEETransactionsonInformationForensicsandSecurity ,15:880Œ894, 2019. [18]TaoChen,Ming-MingCheng,PingTan,ArielShamir,andShi-MinHu.Sketch2photo: InternetImageMontage. ACMTransactionsonGraphics(TOG) ,28(5):124,2009. [19]YezengChengandKirillV.Larin.FingerprintRecognitionbyusingOpticalCo- herenceTomographywithAutocorrelationAnalysis. AppliedOptics ,45(36):9238Œ9245, 2006. [20]YezengChengandKirillV.Larin.InVivoTwo-andThree-dimensionalImagingof andRealFingerprintsWithOpticalCoherenceTomography. IEEEPhotonicsTechnology Letters ,19(20):1634Œ1636,2007. [21]Franc¸oisChollet.Xception:DeepLearningwithDepthwiseSeparableConvolutions. arXiv preprintarXiv:1610.02357 ,2016. [22]TarangChugh,SunpreetS.Arora,AnilK.Jain,andNicholasG.Paulter.BenchmarkingFin- gerprintMinutiaeExtractors.In IEEEInternationalConferenceoftheBiometricsSpecial InterestGroup(BIOSIG) ,pages1Œ8,2017. [23]TarangChugh,KaiCao,andAnilK.Jain.FingerprintSpoofDetectionusingMinutiae- basedLocalPatches.In Proc.IEEEInternationalJointConferenceonBiometrics(IJCB) , pages581Œ589,2017. [24]TarangChugh,KaiCao,andAnilK.Jain.FingerprintSpoofBuster:UseofMinutiae- centeredPatches. IEEETransactionsonInformationForensicsandSecurity ,13(9):2190Œ 2202,2018. [25]TarangChugh,KaiCao,JiayuZhou,ElhamTabassi,andAnilK.Jain.LatentFingerprint ValuePrediction:Crowd-basedLearning. IEEETransactionsonInformationForensicsand Security ,13(1):20Œ34,2017. 
[26]TarangChughandAnilK.Jain.FingerprintPresentationAttackDetection:Generalization andEfy.In IEEEInternationalConferenceonBiometrics(ICB) ,pages1Œ8,2019. [27]TarangChughandAnilK.Jain.FingerprintSpoofDetection:TemporalAnalysisofImage Sequence. arXivpreprintarXiv:1912.08240 ,2019. 149 [28]TarangChughandAnilK.Jain.FingerprintSpoofGeneralization. arXivpreprint arXiv:1912.02710 ,2019. [29]TarangChughandAnilK.Jain.OCTFingerprints:ResiliencetoPresentationAttacks. arXivpreprintarXiv:1908.00102 ,2019. [30]EuropeanCommision.TrustedBiometricsunderAttacks(TABULARASA).http: //www.tabularasa-euproject.org/,2013. [31]HaroldCummins.AttemptstoAlterandObliterateFinger-Prints. JournalofCriminalLaw andCriminology ,25(12),1935. [32]LukeN.Darlow,AnnSingh,Moolla,etal.DamageInvariantandHighSecurityAcquisition oftheInternalFingerprintusingOpticalCoherenceTomography.In WorldCongresson InternetSecurity ,2016. [33]LukeN.Darlow,LeandraWebb,andNatashaBotha.AutomatedSpoof-detectionforFin- gerprintsusingOpticalCoherenceTomography. AppliedOptics ,55(13):3387Œ3396,2016. [34]Dept.ofHomelandSecurity.OfofBiometricIdentityManagementSer- vices.https://www.dhs.gov/obim-biometric-services,2016. [35]YaohuiDingandArunRoss.AnEnsembleofOne-classSVMsforFingerprintSpoofDetec- tionacrossDifferentFabricationMaterials.In IEEEInternationalWorkshoponInformation ForensicsandSecurity(WIFS) ,pages1Œ6,2016. [36]FBICriminalJusticeInformationServicesDivision.AlteredFingerprints:AChal- lengetoLawEnforcementEfforts.www.crime-scene-investigator.net/altered- 2015. [37]KostadinD.Djordjev,LeonardE.Fennell,NicholasI.Buchan,DavidW.Burns,SamirK. Gupta,andSanghoonBae.DisplaywithPeripherallyUltrasonicBiometricSen- sor.USPatent9,323,393,2016. [38]VincentDumoulin,JonathonShlens,andManjunathKudlur.ALearnedRepresentationfor ArtisticStyle. arXivpreprintarXiv:1610.07629 ,2016. [39]AhmedElgammal,BingchenLiu,MohamedElhoseiny,andMarianMazzone.CAN:Cre- ativeAdversarialNetworks,GeneratingflartflbyLearningaboutStylesandDeviatingfrom StyleNorms. arXivpreprintarXiv:1706.07068 ,2017. [40]JohnEllingsgaardandChristophBusch.AlteredFingerprintDetection. HandbookofBio- metricsforForensicScience,Springer ,pages85Œ123,2017. [41]JohnEllingsgaard,CtiradSousedik,andChristophBusch.DetectingFingerprintAlterations byOrientationFieldandMinutiaeOrientationAnalysis.In 2ndInternationalWorkshopon BiometricsandForensics ,pages1Œ6,2014. 150 [42]JoshuaJ.Engelsma,SunpreetS.Arora,AnilK.Jain,andNicholasG.Paulter.Universal3D WearableFingerprintTargets:AdvancingFingerprintReaderEvaluations. IEEETransac- tionsonInformationForensicsandSecurity ,13(6):1564Œ1578,2018. [43]JoshuaJ.Engelsma,KaiCao,andAnilK.Jain.RaspiReader:OpenSourceFingerprint Reader. IEEETransactionsonPatternAnalysisandMachineIntelligence ,41(10):2511Œ 2524,2018. [44]JoshuaJ.Engelsma,KaiCao,andAnilK.Jain.LearningaFixed-LengthFingerprintRepre- sentation. IEEETransactionsonPatternAnalysisandMachineIntelligence(earlyaccess) , 2019. [45]JoshuaJ.Engelsma,DebayanDeb,AnilK.Jain,AnjooBhatnagar,andPremS.Sudhish. Infant-Prints:FingerprintsforReducingInfantMortality.In Proc.IEEEConferenceon ComputerVisionandPatternRecognitionWorkshops(CVPRW) ,pages67Œ74,2019. [46]JoshuaJ.EngelsmaandAnilK.Jain.GeneralizingFingerprintSpoofDetector:Learninga One-Class IEEEInternationalConferenceonBiometrics(ICB) ,2019. [47]HenryFaulds.OntheSkin-furrowsoftheHand. Nature ,22(574):605,1880. [48]JianjiangFeng,AnilK.Jain,andArunRoss.DetectingAlteredFingerprints.In IEEE InternationalConferenceonPatternRecognition(ICPR) ,pages1622Œ1625,2010. 
[49]DavidA.ForsythandJeanPonce. ComputerVision:AModernApproach .PrenticeHall, 2002. [50]RohitGajawada,AddityaPopli,TarangChugh,AnoopNamboodiri,andAnilK.Jain.Uni- versalMaterialTranslator:TowardsSpoofFingerprintGeneralization.In IEEEInterna- tionalConferenceonBiometrics(ICB) ,2019. [51]FrancisGalton.PersonalandDescription. JournalofAnthropologicalInstitute ofGreatBritainandIreland ,pages177Œ191,1889. [52]FrancisGalton. FingerPrints .MacmillanandCompany,1892. [53]LeonA.Gatys,AlexanderS.Ecker,andMatthiasBethge.ANeuralAlgorithmofArtistic Style. arXivpreprintarXiv:1508.06576 ,2015. [54]LeonA.Gatys,AlexanderS.Ecker,andMatthiasBethge.ImageStyleTransferusingCon- volutionalNeuralNetworks.In IEEEConferenceonComputerVisionandPatternRecog- nition(CVPR) ,pages2414Œ2423,2016. [55]LucaGhiani,AbdenourHadid,GianLucaMarcialis,andFabioRoli.FingerprintLiveness DetectionusingBinarizedStatisticalImageFeatures.In IEEEInternationalConferenceon Biometrics:Theory,ApplicationsandSystems(BTAS) ,pages1Œ6,2013. [56]LucaGhiani,GianLucaMarcialis,andFabioRoli.FingerprintLivenessDetectionbyLocal PhaseQuantization.In IEEEInternationalConferenceonPatternRecognition(ICPR) , pages537Œ540,2012. 151 [57]LucaGhiani,DavidYambay,ValerioMura,GianLucaMarcialis,FabioRoli,andStephanie Schuckers.ReviewoftheFingerprintLivenessDetection(LivDet)CompetitionSeries: 2009to2015. ImageandVisionComputing ,58:110Œ128,2017. [58]LucaGhiani,DavidYambay,ValerioMura,SimonaTocco,GianLucaMarcialis,Fabio Roli,andStephanieSchuckcrs.LivDet2013FingerprintLivenessDetectionCompetition 2013.In Proc.IAPRInternationalConferenceonBiometrics(ICB) ,pages1Œ6,2013. [59]L ´ azaroJGonz ´ alez-Soler,MartaGomez-Barrero,LeonardoChang,AirelP ´ erez-Su ´ arez,and ChristophBusch.FingerprintPresentationAttackDetectionBasedonLocalFeaturesEn- codingforUnknownAttacks. arXivpreprintarXiv:1908.10163 ,2019. [60]DiegoGragnaniello,GiovanniPoggi,CarloSansone,andLuisaVerdoliva.FingerprintLive- nessDetectionbasedonWeberLocalImageDescriptor.In Proc.IEEEWorkshoponBio- metricMeas.Syst.Secur.Med.Appl.(BIOMS) ,pages46Œ50,2013. [61]DiegoGragnaniello,GiovanniPoggi,CarloSansone,andLuisaVerdoliva.LocalContrast PhaseDescriptorforFingerprintLivenessDetection. PatternRecognition ,48(4):1050Œ 1058,2015. [62]PeterW.GreenwoodandJoanPetersilia. TheCriminalInvestigationProcessVolumeI: SummaryAndPolicyImplications .RandCorporation,1975. [63]MarkHawthorne. Fingerprints:AnalysisandUnderstanding .CRCPress,2017. [64]KaimingHe,XiangyuZhang,ShaoqingRen,andJianSun.DeepResidualLearningfor ImageRecognition.In ProceedingsoftheIEEEConferenceonComputerVisionandPattern Recognition ,pages770Œ778,2016. [65]YiHeandBoPi.Under-screenOpticalSensorModuleforOn-screenFingerprintSensing. USPatentApp.15/421,249,2017. [66]MichaelR.Hee,CarmenA.CarltonWong,JayS.Duker,EliasReichel,Bryan Rutledge,JoelS.Schuman,EricA.Swanson,andJamesG.Fujimoto.QuantitativeAssess- mentofMacularEdemawithOpticalCoherenceTomography. ArchivesofOphthalmology , 113(8):1019Œ1029,1995. [67]EdwardR.Henry. andUsesofFingerPrints .GeorgeRoutledgeandSons, 1900. [68]WilliamJamesHerschel.Finger-Prints. Nature ,51(1308):77,1894. [69]WilliamJamesHerschel. TheOriginofFinger-printing .OxfordUniversityPress,1916. [70]SeppHochreiterandJ ¨ urgenSchmidhuber.LongShort-TermMemory. NeuralComputation, MITPress ,9(8):1735Œ1780,1997. [71]AndrewG.Howard,MenglongZhu,BoChen,DmitryKalenichenko,WeijunWang,Tobias Weyand,MarcoAndreetto,andHartwigAdam.Mobilenets:EfConvolutionalNeural NetworksforMobileVisionApplications. arXivpreprintarXiv:1704.04861 ,2017. 
152 [72]DavidHuang,EricA.Swanson,CharlesP.Lin,JoelS.Schuman,WilliamG.Stinson,War- renChang,MichaelR.Hee,ThomasFlotte,KentonGregory,CarmenA.o,etal. OpticalCoherenceTomography. Science ,254(5035):1178Œ1181,1991. [73]XunHuangandSergeBelongie.ArbitraryStyleTransferinReal-timewithAdaptiveIn- stanceNormalization.In IEEEInternationalConferenceonComputerVision(ICCV) ,pages 1501Œ1510,2017. [74]InternationalStandardsOrganization.ISO/IEC30107-1:2016,Information TechnologyŠBiometricPresentationAttackDetectionŠPart1:Framework. https://www.iso.org/standard/53227.html,2016. [75]InternationalStandardsOrganization.InformationTechnologyŒBiometricSampleQuality ŒPart4:FingerImageData.https://www.iso.org/standard/62791.html,2017. [76]PhillipIsola,Jun-YanZhu,TinghuiZhou,andAlexeiA.Efros.Image-to-ImageTransla- tionwithConditionalAdversarialNetworks.In IEEEConferenceonComputerVisionand PatternRecognition(CVPR) ,pages1125Œ1134,2017. [77]AnilKJain.Fingerprints:ProvingGroundforPatternRecognition.In IEEEInternational ConferenceonPatternRecognition(ICPR) ,2006. [78]AnilKJain,SunpreetSArora,KaiCao,LaceyBest-Rowden,andAnjooBhatnagar.Fin- gerprintRecognitionofYoungChildren. IEEETransactionsonInformationForensicsand Security ,12(7):1501Œ1514,2016. [79]AnilKJain,YiChen,andMeltemDemirkus.PoresandRidges:High-resolutionFingerprint MatchingusingLevel3Features. IEEETransactionsonPatternAnalysisandMachine Intelligence ,29(1):15Œ27,2006. [80]AnilKJainandRichardCDubes. AlgorithmsforClusteringData .Prentice-Hall,Inc., 1988. [81]AnilK.JainandKalleKaru.LearningTextureDiscriminationMasks. IEEETransactions onPatternAnalysisandMachineIntelligence ,18(2):195Œ205,1996s. [82]AnilKJain,KarthikNandakumar,andAbhishekNagar.BiometricTemplateSecurity. EURASIPJournalonAdvancesinSignalProcessing ,page113,2008. [83]AnilKJain,KarthikNandakumar,andArunRoss.50YearsofBiometricResearch:Ac- complishments,Challenges,andOpportunities. PatternRecognitionLetters ,79:80Œ105, 2016. [84]AnilKJain,SalilPrabhakar,andSharathPankanti.OntheSimilarityofIdenticalTwin Fingerprints. PatternRecognition ,35(11):2653Œ2663,2002. [85]AnilK.Jain,ArunRoss,andSalilPrabhakar.FingerprintMatchingusingMinutiaeand TextureFeatures.In Proc.IEEEInternation aalConferenceonImageProcessing(ICIP) , volume3,pages282Œ285,2001. 153 [86]AnilK.Jain,ArunA.Ross,andKarthikNandakumar. IntroductiontoBiometrics .Springer Science&BusinessMedia,2011. [87]Han-UlJang,Hak-YeolChoi,DongkyuKim,JeonghoSon,andHeung-KyuLee.Finger- printSpoofDetectionusingContrastEnhancementandConvolutionalNeuralNetworks. In InternationalConferenceonInformationScienceandApplications ,pages331Œ338. Springer,2017. [88]JustinJohnson,AlexandreAlahi,andLiFei-Fei.PerceptualLossesforReal-timeStyle TransferandSuper-resolution.In EuropeanConferenceonComputerVision(ECCV) ,pages 694Œ711.Springer,2016. [89]DameKathleenMaryKenyon. ArchaeologyintheHolyLand .E.Benn,1960. [90]DiederikPKingmaandJimmyBa.Adam:AMethodforStochasticOptimization. arXiv preprintarXiv:1412.6980 ,2014. [91]JaschaKolberg,MartaGomez-Barrero,andChristophBusch.Multi-algorithmbenchmark forpresentationattackdetectionwithlaserspecklecontrastimaging.In IEEE InternationalConferenceoftheBiometricsSpecialInterestGroup(BIOSIG) ,pages1Œ5, 2019. [92]PeterKomarinski. AutomatedFingerprintSystems(AFIS) .Elsevier,2005. [93]AlexKrizhevsky,IlyaSutskever,andGeoffreyEHinton.ImageNetwithDeep ConvolutionalNeuralNetworks.In Proc.ConferenceonNeuralInformationProcessing Systems(NIPS) ,pages1097Œ1105,2012. 
[94]PhilipDeanLapsley,JonathanAlexanderLee,DavidFerrinPareJr,andNedHoffman.Anti- fraudBiometricScannerthatAccuratelyDetectsBloodFlow,1998.USPatent5,737,439, 1998. [95]ChuanLiandMichaelWand.PrecomputedReal-timeTextureSynthesiswithMarkovian GenerativeAdversarialNetworks.In EuropeanConferenceonComputerVision ,pages 702Œ716.Springer,2016. [96]StanZ.LiandAnilK.Jain,editors. EncyclopediaofBiometrics .Springer,2015. [97]XinLi,BahadirGunturk,andLeiZhang.Imagedemosaicing:Asystematicsurvey.In Vi- sualCommunicationsandImageProcessing ,volume6822.InternationalSocietyforOptics andPhotonics,2008. [98]YanghaoLi,NaiyanWang,JiayingLiu,andXiaodiHou.DemystifyingNeuralStyleTrans- fer. arXivpreprintarXiv:1701.01036 ,2017. [99]HaidaLiang,MartaGomezCid,RaduGCucu,GMDobre,AGhPodoleanu,JustinPedro, andDavidSaunders.En-faceOpticalCoherenceTomography-aNovelApplicationofNon- invasiveImagingtoArtConservation. OpticsExpress ,13(16):6133Œ6144,2005. 154 [100]FengLiu,GuojieLiu,andXingzhengWang.High-accurateandRobustFingerprintAnti- SystemusingOpticalCoherenceTomography. ExpertSystemswithApplications , 130:31Œ44,2019. [101]GangjunLiuandZhongpingChen.CapturingtheVitalVascularFingerprintwithOptical CoherenceTomography. AppliedOptics ,52(22):5473Œ5477,2013. [102]MengyangLiuandTakashiBuma.BiometricMappingofFingertipEccrineGlandswith OpticalCoherenceTomography. IEEEPhotonicsTechnologyLetters ,22(22):1677Œ1679, 2010. [103]LaurensVanDerMaatenandGeoffreyHinton.VisualizingDatausingt-SNE. Journalof MachineLearningResearch ,9(Nov):2579Œ2605,2008. [104]DavideMaltoni,DarioMaio,AnilKJain,andSalilPrabhakar. HandbookofFingerprint Recognition .SpringerScience&BusinessMedia,secondedition,2009. [105]EmanuelaMarascoandArunRoss.ASurveyonAntiSchemesforFingerprint RecognitionSystems. ACMComputingSurveys ,47(2):28,2015. [106]EmanuelaMarascoandCarloSansone.CombiningPerspiration-andMorphology-based StaticFeaturesforFingerprintLivenessDetection. PatternRecognitionLetters ,33(9):1148Œ 1156,2012. [107]S ´ ebastienMarcel,MarkS.Nixon,JulianFierrez,andNicholasEvans,editors. Handbook ofBiometricPresentationAttackDetection .Springer,secondedition,2019. [108]TsutomuMatsumoto,HiroyukiMatsumoto,KojiYamada,andSatoshiHoshino.Impact ofGummyFingersonFingerprintSystems.In Proc.SPIE ,volume4677,pages 275Œ289,2012. [109]SvenMeissner,RalphBreithaupt,andEdmundKoch.DefenseofFakeFingerprintAttacks usingaSweptSourceLaserOpticalCoherenceTomographySetup.In FrontiersinUltrafast Optics:Biomedical,andIndustrialApplicationsXIII ,volume8611.SPIE,2013. [110]DavidMenotti,GiovaniChiachia,AllanPinto,WilliamRobsonSchwartz,HelioPedrini, AlexandreXavierFalcao,andAndersonRocha.DeepRepresentationsforIris,Face,and FingerprintSpDetection. IEEETransactionsonInformationForensicsandSecurity , 10(4):864Œ879,2015. [111]YaseenMoolla,LukeDarlow,AmeethSharma,AnnSingh,andJohanVanDerMerwe. OpticalCoherenceTomographyforFingerprintPresentationAttackDetection.InS ´ ebastien Marcel,MarkS.Nixon,andStanZ.Li,editors, HandbookofBiometric ,pages 49Œ70.Springer,2019. [112]KondaReddyMopuri,UtsavGarg,andRVenkateshBabu.CNNFixations:AnUnraveling ApproachtoVisualizetheDiscriminativeImageRegions. IEEETransactionsonImage Processing ,28(5):2116Œ2125,2018. 155 [113]ValerioMura,LucaGhiani,GianLucaMarcialis,FabioRoli,DavidAYambay,and StephanieASchuckers.LivDet2015-FingerprintLivenessDetectionCompetition2015. In Proc.IEEEInternationalConferenceonBiometrics:Theory,ApplicationsandSystems (BTAS) ,pages1Œ6,2015. 
[114]ValerioMura,GiuliaOrr ˚ u,RobertoCasula,AlessandraSibiriu,GiuliaLoi,PierluigiTuveri, LucaGhiani,andGianLucaMarcialis.LivDet2017FingerprintLivenessDetectionCom- petition2017.In Proc.IAPRInternationalConferenceonBiometrics(ICB) ,pages297Œ302, 2018. [115]KarthikNandakumarandAnilKJain.BiometricTemplateProtection:BridgingthePer- formanceGapBetweenTheoryandPractice. IEEESignalProcessingMagazine ,32(5):88Œ 100,2015. [116]Mohammad-RezaNasiri-Avanakietal.Anti-spoofReliableBiometryofFingerprintsusing En-face OpticalCoherenceTomography. OpticsandPhotonicsJournal ,1(03):91Œ96,2011. [117]ABCNews.SurgicallyAlteredFingerprintsHelpWomanEvadeImmigration, 2009.abcnews.go.com/Technology/GadgetGuide/surgically-altered-woman- evade-immigration/story?id=9302505. [118]Dinh-LuanNguyen,KaiCao,andAnilKJain.RobustMinutiaeExtractor:IntegratingDeep NetworksandFingerprintDomainKnowledge.In Proc.IAPRInternationalConferenceon Biometrics(ICB) ,pages9Œ16,2018. [119]RodrigoFrassettoNogueira,RobertodeAlencarLotufo,andRubensCamposMachado. FingerprintLivenessDetectionUsingConvolutionalNeuralNetworks. IEEETransactions onInformationForensicsandSecurity ,11(6):1206Œ1213,2016. [120]FederalBureauofInvestigation. TheScienceofFingerprints:andUses,Rev 12-84. U.S.GovernmentPrintingOfWashington,DC,1984. [121]FederalBureauofInvestigation.Fbiwarnsaboutalteredwww.forensicmag. com/article/2015/05/fbi-warns-about-altered-2015. [122]FederalBureauofInvestigation.FugitivesontheFBI's10MostWantedList.http://www. businessinsider.com/fbi-10-most-wanted-criminals-list-2017-11,2018. [123]OfoftheDirectionofNationalIntelligence(ODNI),IARPA.IARPA-BAA-16-04 (Thor).https://www.iarpa.gov/index.php/research-programs/odin/odin-baa,2016. [124]AOIG.ReviewoftheFBI'sHandlingoftheBrandonCase. OfoftheInspector General,OversightandReviewDivision,USDepartmentofJustice ,pages1Œ330,2006. [125]NobuyukiOtsu.AThresholdSelectionMethodfromGray-levelHistograms. IEEETrans- actionsonSystems,Man,andCybernetics ,9(1):62Œ66,1979. [126]FedericoPalaandBirBhanu.DeepTripletEmbeddingRepresentationsforLivenessDetec- tion.In DeepLearningforBiometrics ,pages287Œ307.Springer,2017. 156 [127]SharathPankanti,SalilPrabhakar,andAnilKJain.OntheIndividualityofFingerprints. IEEETransactionsonPatternAnalysisandMachineIntelligence ,24(8):1010Œ1025,2002. [128]SujanParthasaradhi,RezaDerakhshani,LarryHornak,andStephanieSchuckers.Time- seriesDetectionofPerspirationasaLivenessTestinFingerprintDevices. IEEETransac- tionsonSystems,Man,andCybernetics,PartC(ApplicationsandReviews) ,35(3):335Œ343, 2005. [129]GeorgePavlich.Theemergenceofhabitualcriminalsin19thcenturybritain:Implications forcriminology. JournalofTheoretical&PhilosophicalCriminology ,2(1),2010. [130]Heinz-HelmutPerkampus. UV-VISSpectroscopyanditsApplications .SpringerScience& BusinessMedia,2013. [131]RichardPlesh,KeivanBahmani,GangheeJang,DavidYambay,KenBrownlee,Timothy Swyka,PreciseBiometrics,PeterJohnson,ArunRoss,andStephanieSchuckers.Finger- printPresentationAttackDetectionutilizingTime-Series,ColorFingerprintCaptures.In IEEEInternationalConferenceonBiometrics(ICB) ,2019. [132]CarmenAMichaelRHee,CharlesPLin,EliasReichel,JoelSSchuman,JayS Duker,JosephAIzatt,EricASwanson,andJamesGFujimoto.ImagingofMacularDis- easesWithOpticalCoherenceTomography. Ophthalmology ,102(2):217Œ229,1995. [133]AlecRadford,LukeMetz,andSoumithChintala.Unsupervisedrepresentationlearning withdeepconvolutionalgenerativeadversarialnetworks. arXivpreprintarXiv:1511.06434 , 2015. 
[134]NaliniKRatha,ShaoyunChen,andAnilKJain.AdaptiveFlowOrientation-basedFeature ExtractioninFingerprintImages. PatternRecognition ,28(11):1657Œ1672,1995. [135]AjitaRattani,WalterJScheirer,andArunRoss.OpenSetFingerprintSpoofDetection AcrossNovelFabricationMaterials. IEEETransactionsonInformationForensicsandSe- curity ,10(11):2447Œ2460,2015. [136]CharlesD.RobisonandMaxwellS.Andrews.SystemandMethodofFingerprintAnti- ProtectionusingMulti-spectralOpticalSensorArray,March262019.USPatent 10,242,245. [137]ArunRossandAnilJain.BiometricSensorInteroperability:ACaseStudyinFingerprints. In InternationalWorkshoponBiometricAuthentication ,pages134Œ145.Springer,2004. [138]RobertKRoweandDavidPSidlauskas.MultispectralBiometricSensor.USPatent 7,147,153,2006. [139]ProteekChandanRoyandVishnuNareshBoddeti.Mitigatinginformationleakageinimage representations:Amaximumentropyapproach.In ProceedingsoftheIEEEConferenceon ComputerVisionandPatternRecognition ,pages2586Œ2594,2019. 157 [140]OlgaRussakovsky,JiaDeng,HaoSu,JonathanKrause,SanjeevSatheesh,SeanMa,Zhi- hengHuang,AndrejKarpathy,AdityaKhosla,MichaelBernstein,etal.ImagenetLarge ScaleVisualRecognitionChallenge. Proc.InternationalJournalofComputerVision (IJCV) ,115(3):211Œ252,2015. [141]SSangiorgi,AManelli,TCongiu,ABini,GPilato,MReguzzoni,andMRaspanti.Mi- crovascularizationoftheHumanDigitasstudiedbyCorrosionCasting. JournalofAnatomy , 204(2):123Œ131,2004. [142]StephanieSchuckersandPeterJohnson.FingerprintPoreAnalysisforLivenessDetection, November142017.USPatent9,818,020. [143]StudyWorkingGrouponFrictionRidgeAnalysisandTechnology(SWGFAST). StandardsforExaminingFrictionRidgeImpressionsandResultingConclusionsversion 1.0,2011. [144]RamprasaathRSelvaraju,MichaelCogswell,AbhishekDas,RamakrishnaVedantam,Devi Parikh,andDhruvBatra.Grad-CAM:VisualExplanationsfromDeepNetworksvia Gradient-basedLocalization.In Proc.IEEEInternationalConferenceonComputerVision (ICCV) ,pages618Œ626,2017. [145]NathanSilbermanandSergioGuadarrama.TensorFlow-SlimImageModel Library.https://githubw/models/tree/master/research/slim. [146]KarenSimonyan,AndreaVedaldi,andAndrewZisserman.DeepInsideConvolutional Networks:VisualisingImageModelsandSaliencyMaps. arXivpreprint arXiv:1312.6034 ,2013. [147]KarenSimonyanandAndrewZisserman.VeryDeepConvolutionalNetworksforLarge- scaleImageRecognition. arXivpreprintarXiv:1409.1556 ,2014. [148]BrianCSmith. FundamentalsofFourierTransformInfraredSpectroscopy .CRCpress, 2011. [149]ClaronWSwonger,DanMBowers,andRobertMStock.Fingerprint-basedAccessControl andApparatus.USPatent4,210,899,1980. [150]ChristianSzegedy,VincentVanhoucke,SergeyIoffe,JonShlens,andZbigniewWojna.Re- thinkingtheInceptionArchitectureforComputerVision.In Proc.IEEEConferenceon ComputerVisionandPatternRecognition(CVPR) ,pages2818Œ2826,2016. [151]ElhamTabassi.NISTFingerprintImageQuality,NFIQ2.0,2016. [152]ElhamTabassi,TarangChugh,DebayanDeb,andAnilK.Jain.AlteredFingerprints:De- tectionandLocalization.In IEEE9thInternationalConferenceonBiometricsTheory,Ap- plicationsandSystems(BTAS) ,pages1Œ9,2018. [153]PhilippeTh ´ evenaz,ThierryBlu,andMichaelUnser.ImageInterpolationandResampling. HandbookofMedicalImaging,ProcessingandAnalysis ,1(1):393Œ420,2000. 158 [154]THORLabs.Telestoseries(TEL1325LV2)Spectral-domainOCTscanner.https://www. thorlabs.com/catalogpages/Obsolete/2017/TEL1325LV2-BU.pdf,2017. [155]MichelaTiribuzi,MarcoPastorelli,PaoloValigi,andElisaRicci.Amultiplekernellearning frameworkfordetectingalteredIn 21stInternationalConferenceonPattern Recognition(ICPR) ,pages3402Œ3405,2012. 
[156]RubenTolosana,MartaGomez-Barrero,ChristophBusch,andJavierOrtega-Garcia.Bio- metricPresentationAttackDetection:BeyondtheVisibleSpectrum. IEEETransactionson InformationForensicsandSecurity ,2019. [157]MitchellTrauring.AutomaticComparisonofFinger-ridgePatterns. Nature ,197(4871):938, 1963. [158]DmitryUlyanov,AndreaVedaldi,andVictorLempitsky.ImprovedTextureNetworks:Max- imizingQualityandDiversityinFeed-forwardStylizationandTextureSynthesis.In Proc. IEEEConferenceonComputerVisionandPatternRecognition(CVPR) ,pages6924Œ6932, 2017. [159]UniqueAuthorityofIndia:Govt.ofIndia.AadhaarDashboard.https://uidai. gov.in/aadhaar dashboard/,2019. [160]XinWang,GeoffreyOxholm,DaZhang,andYuan-FangWang.MultimodalTransfer:A HierarchicalDeepConvolutionalNeuralNetworkforFastArtisticStyleTransfer.In Proc. IEEEConferenceonComputerVisionandPatternRecognition(CVPR) ,pages5239Œ5247, 2017. [161]C.Watson,G.Fiumara,E.Tabassi,S.L.Chang,P.Flanagan,andW.Salamon.Fingerprint VendorTechnologyEvaluation(FpVTE).NISTInteragencyReport8034,2015. [162]JuliaWelzel.OpticalCoherenceTomographyinDermatology:AReview. SkinResearch andTechnology:Reviewarticle ,7(1):1Œ9,2001. [163]KaseyWertheim.EmbryologyandMorphologyofFrictionRidgeSkin. TheFingerprint Sourcebook ,pages1Œ26,2011. [164]JohnJWildandJohnMReid.ApplicationofEcho-rangingTechniquestotheDetermination ofStructureofBiologicalTissues. Science ,115(2983):226Œ230,1952. [165]ZhihuaXia,ChengshengYuan,RuiLv,XingmingSun,NealNXiong,andYun-QingShi. ANovelWeberLocalBinaryDescriptorforFingerprintLivenessDetection. IEEETrans- actionsonSystems,Man,andCybernetics:Systems ,2018. [166]WenqiXianetal.TextureGAN:ControllingDeepImageSynthesiswithTexturePatches.In Proc.IEEEConferenceonComputerVisionandPatternRecognition(CVPR) ,pages8456Œ 8465,2018. [167]DavidYambay,LucaGhiani,PaoloDenti,GianLucaMarcialis,FabioRoli,andSSchuck- ers.LivDet2011-FingerprintLivenessDetectionCompetition2011.In Proc.IAPRInter- nationalConferenceonBiometrics(ICB) ,pages208Œ215,2012. 159 [168]DavidYambay,LucaGhiani,GianLucaMarcialis,FabioRoli,andStephanieSchuckers. ReviewofFingerprintPresentationAttackDetectionCompetitions.InS ´ ebastienMarcel, MarkSNixon,JulianFierrez,andNicholasEvans,editors, HandbookofBiometricAnti- .Springer,2019. [169]Wei-YunYau,Hoang-ThanhTran,Eam-KhwangTeoh,andJian-GangWang.Fake detectionbycolorchangeanalysis.In InternationalConferenceonBiometrics ,pages 888Œ896.Springer,2007. [170]SoweonYoon,JianjiangFeng,andAnilKJain.AlteredFingerprints:AnalysisandDe- tection. IEEETransactionsonPatternAnalysisandMachineIntelligence ,34(3):451Œ464, 2012. [171]SoweonYoonandAnilKJain.LongitudinalStudyofFingerprintRecognition. Proceedings oftheNationalAcademyofSciences ,112(28):8555Œ8560,2015. [172]YongliangZhang,DaqiongShi,XiaosiZhan,DiCao,KeyiZhu,andZhiweiLi.Slim- ResCNN:ADeepResidualConvolutionalNeuralNetworkforFingerprintLivenessDetec- tion. IEEEAccess ,7:91476Œ91487,2019. [173]Ding-XuanZhou.UniversalityofDeepConvolutionalNeuralNetworks. AppliedandCom- putationalHarmonicAnalysis ,48(2):787Œ794,2020. 160