v a ‘ A 3 l l O ;. o o I . . .70 L. ..... ..b‘xlf.....’)§ofu§§.3ll 09‘...D‘.’OI’II. 1 st . O .i .0 . I ... ot . . . 1‘. .! Dc‘ t ‘. ‘0‘ . Ito». .03. I ... ... “Hurt!” .4“ ’I .fl.".n I295.~ral.zlfli.t. .l.'.. -.. .Il.c .0. is o. .. o W. 0. v3.1. 5- t... .v a! ... 1.7.0.... 3.0 . Wur!‘“‘ot.€.0fi.li|.3‘n ObD|h.v§lv}| o. . 0 I . .... . . . I. . 3... 0 . . . .. . . . .. qr.iu’ p’o'u o)..$|vlo...1'.I}.AIOPIoI I,“ z o . . ‘0 43.. '_ . " | .I'OI .0! 3.! Or. I. . I or. . ... .‘ah§.g..~ ...-2!.“ursuro’ _ ....§‘.} 3|- .bl oi i‘ .50 o ‘;Is'-Ol. 03....) 0.! '60. ..Dulv)’ '- a . 0"l 0“... 8|! . ’00 l I n‘. IDI\|Q\‘9. " .0 O J .2 19 I... (”It . .\ {It .. 5...»! . .L?!. ‘V' 7 o n». Zuni-«.40....Jloht. HRH-(.... 4...... 3.319.?" 3! an. .5: a..3\a.tl§.al.."3.oboov.3 I)! b .....oopv’..la Mu aIuv 00...!. .0. J}.lo.-a|v..8.‘4:|t.l ...-53"}: .dui.0|‘.flIA.l-... “IOONVQYOOIJ. 0,000.1!!!"1’ I! 03:0. .3 I . I 3 c It. 0.. Inn-’3' I e A. . 3!... II. .J. I “...-U. . . to u (it. 5......t-‘vtd-rv 3|. .3 .II. a... .....-WIQO v 10L. 06...,nl'.§oo.3333"¢..-.lz . a!‘>llv.’\)...‘3f‘ . . (ID .. I . v . ‘ . . . . . . . . I . ‘I . o. I . . I l 0 C. 0 sin} o .lo. 5.. a! 0 . . . sh bio-I'd. v1.0 O .....‘i ‘03‘1I’to'v brunW-l‘vlldwti’rh‘g 307’s. .... , ’31..) ovluo [.7335er :9 cons; . 75 It .92.. .l ‘v 0"! 011.0!14... .0. l c. 23:}... t .- l.u~.hs?‘.$6.fio.!u”\ .....ll’uolt-‘IIIOI Iv: . It! I. II. Ill 0 iii! I c . a. . u merit" ‘t : J 1. . common-«0 I . . . ‘0' 0". a 3-0. 9 .. 00.! £6: ...... 'c‘.6 ......I“, 0.15,. ‘ .. t.‘ ts f ....‘ovo . .9 u. .4! ... ... {In ..Ut .0: I. _. n'at' I I“. \vld.‘ uo.oalflo.o"§‘h..lotco‘vs ,. 30.: pt... ...-..R-Olv...\lgfi‘. . . -0 s . ‘0 .. . .3 II ... - ... :..o..)l n A1 a, ’I‘O!‘9. .‘o. . . a w J ' I . . _ . 13V Vt: ..- t. I... o. ...-0‘ 21¢; It. ‘ I....O.“.Pavu.hflduuv7ul.‘t'lo 1.0.. (.86.; org—gust: .t'f .4 ...!g... .‘On... 6. 1.. S'b‘3IQ‘ceo .3. 6’?.I€.l.‘a. .1 'lt‘ .l’ "V. n?’ Iozififlduufl‘ on.“ ‘ifigl’o‘It l'\ .0 . I III. I I .n P; I o. . 3|. furl- I» I'- I. s ..IL...-- nt....-...,r . - . ..1...-...I...fc....a.z..€ $0.135; 88.92.503.21) .Eo.‘h.air.pl..l: .p ...!» -.I....e ....n .u (.3?.o....-t0.£. ...... 5.... . .EIst...et!- 9.. til-No.3. 5.. 03...... 6.08.52!!! ...... . flgnmg” 3:. ' 3'} :II E5 13. ‘HE'Sts ICHIGAN STATE UNIVERSITY LIBRARIE Illillllllllllll Ill llllllllllllllllll 3 1293 00881 0917 ll This is to certify that the thesis entitled Mobile Robot Localization Using Pattern Classification Techniques presented by Jonathan D. Courtney has been accepted towards fulfillment of the requirements for M.S. degree in Computer Science er/ Major professor Date 0-7639 MS U is an Affirmative Action/Equal Opportunity Institution LIBRARY Michigan State 1 University PLACE IN RETURN BOX to remove this checkout from your record. TO AVOID FINES return on or before date due. DATE DUE DATE DUE DATE DUE 12.4 L “*7 l MSU is An Affirmative Action/Equal Opportunity institution chS—pn MOBILE ROBOT LOCALIZATION USING PATTERN CLASSIFICATION TECHNIQUES By Jonathan D. Courtney A THESIS Submitted to Michigan State University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE Computer Science Department 1993 ABSTRACT MOBILE ROBOT LOCALIZATION USING PATTERN CLASSIFICATION TECHNIQUES By Jonathan D. Courtney This thesis describes a methodology for coarse position estimation of a mobile robot within an indoor setting. The approach is divided into two phases: exploration and navigation. 
In the exploration phase, the mobile robot is allowed to sense the environment for the purpose of mapping and thus "learning" its unknown surroundings. In the navigation phase, the robot senses the surroundings and compares this information with its learned maps for the purpose of locating itself in the workspace. We solve the task of coarse-level mobile robot localization via pattern classification of grid-based maps of important or interesting workspace regions. Using datasets representing 10 different rooms and doorways, we estimate a 94% recognition rate of the rooms and a 98% recognition rate of the doorways. We conclude that coarse position estimation is possible through classification of grid-based maps in indoor domains.

Now to the King eternal, immortal, invisible, the only God, be honor and glory for ever and ever. Amen.
— 1 Timothy 1:17

ACKNOWLEDGMENTS

First, I want to express my deep love and gratitude to my wife Mickie. Without her prayers, encouragement, and sacrifice, my graduate school studies would not be possible. In many ways, this thesis should be hers.

I would also like to thank my advisor, Professor Anil K. Jain. He provided insight, direction, and encouragement that were crucial for this thesis. He also taught me more about professionalism and self-discipline than he realizes. Thanks as well to my committee, Professors Charles MacCluer and Abdol-Hossein Esfahanian, for their critique of my research and for their personal support. Professor George Stockman and Dr. Mihran Tüceryan were also instrumental in the birth of robotics research in our laboratory.

The work of Dr. Alberto Elfes had a significant impact upon my earliest thinking regarding mobile robotics. Additionally, the advice and insight he has offered in our personal communications has been a source of direction and motivation for me.

No doubt when I look back upon my days at Michigan State some of my fondest memories will be of the PRIP Lab and the PRIPpies that inhabit it. To all of them (faculty included): thanks for the encouragement, stimulating conversations, dinners from around the world, and regular excursions to El Azteco.

Brian Wright and Roxanne Fuller provided invaluable assistance when my electronic abilities failed me (or turned up completely absent). Thanks also goes to John Lees for his friendship and willingness to get his hands dirty in custom-tailoring ROME's operating system and hardware. National Science Foundation infrastructure grant CDA-88-06599 provided the funding for construction of our robot.

My time in graduate school has been seasoned by the love of many fellow Christian students; thanks especially to Steve Walsh, Hans Dulimarta, Tim Newman, Barb Czerny, and Jon Engelsma for their fellowship during my studies. Finally, thanks to Dave Reynolds, Rick Hallgren, and the Bible Study: Scott and Michelle Sochay, Shawn and Stacy Dry, Todd and Susan Luter, David and Lisanne Reed, and Rick and Clarissa Ross for their support, prayers and patience during the writing of my thesis. For the record, they are easily the best friends I have ever had.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES

1 Introduction
  1.1 Mobile Robot Navigation
  1.2 Spatial Representations
  1.3 Localization Using Pattern Classification Techniques
  1.4 Thesis Outline

2 Mobile Robot Localization
  2.1 Topological Localization
  2.2 Coarse-level Localization
  2.3 Navigation Using Coarse-level Localization
  2.4 Related Work
  2.5 Research Methodology

3 Grid-based Maps
  3.1 Experimental Platform
  3.2 HIMM Grid Map Creation
    3.2.1 Ultrasonic Range Sensors
    3.2.2 Stereo Vision
    3.2.3 Lateral Motion Vision
    3.2.4 Forward Motion Vision
    3.2.5 Infrared Proximity Detectors
    3.2.6 Tactile Sensors
  3.3 Experimental Datasets

4 Grid Map Processing
  4.1 Preprocessing
  4.2 Grid Map Descriptions
    4.2.1 Fourier Descriptors
    4.2.2 Standard Moments
    4.2.3 Moment Invariants
  4.3 Experimental Data

5 Grid Map Classification
  5.1 Pattern Classification
    5.1.1 Bayes Classifier
    5.1.2 Minimum Mahalanobis Distance Classifier
    5.1.3 K-Nearest-Neighbor Classifier
  5.2 Error Estimation
  5.3 Feature Selection
  5.4 Experimental Results

6 Conclusions
  6.1 Contributions
  6.2 Conclusions
  6.3 Future Work

A Experimental Datasets
  A.1 Room Locales
  A.2 Door Locales

B ROME Cartographer
  B.1 Mouse Buttons
  B.2 Main Window
  B.3 Sonar Subwindow
  B.4 Lateral Motion Vision Subwindow
  B.5 IR Subwindow
  B.6 Camera Subwindows

BIBLIOGRAPHY

LIST OF TABLES

2.1 Locale types for an example navigation system.
3.1 Comparison of operations on grid-based maps and intensity images. Adapted from [26].
4.1 Ordering of the features in the experimental dataset. Note that the IR features are not included for the room locales.
5.1 Feature selection and classification error of room locales: (a) 3-NN classifier, (b) MMD classifier.
5.2 Confusion matrix of room locale data resulting from the MMD classifier using the five selected features. The misclassifications are underlined.
5.3 The numbering of the locales, used for both room and door confusion matrices.
5.4 Feature selection and classification error of door locales: (a) 3-NN classifier, (b) MMD classifier.
5.5 Confusion matrix of door locale data resulting from the 3-NN classifier using the five selected features. The misclassifications are underlined.
5.6 Bootstrap error estimation for the best classifier of each dataset: (a) room locales, (b) door locales.

LIST OF FIGURES

2.1 An example distinctive place. The DP is at the intersection of the dashed lines. Adapted from a figure in [40].
2.2 Locating DP's using hill-climbing search. Adapted from a figure in [40].
2.3 An indoor domain where the locale criteria are "any region not more than 25 meters long or wide, and bounded on all sides". The shaded regions qualify as locales.
2.4 The workspace of a delivery robot, with "door" and "room" locales shaded. The "X" marks the initial position of the robot.
2.5 Specifying a local coordinate system.
2.6 Navigation via coarse-level localization.
2.7 Door and room locales on the third floor of the Engineering Building.
3.1 Cross section of a conical sonar signal. Any object along arc A will yield a sensed distance of d. Adapted from a figure in [9].
3.2 Uncertainty profile for a single sonar sensor firing from (x = 0, y = 0) toward (x = 5, y = 0), with a sensed distance of d = 5. The bivariate Gaussian function p(r, θ | d) = (1 / (2π σ_r σ_θ)) exp[−((r − d)²/σ_r² + θ²/σ_θ²) / 2] models sensing uncertainty in range, r, and angle, θ, for cell occupancy and vacancy. Plot (a) shows p[r, θ | VAC], the probability density function for vacancy, and plot (b) shows p[r, θ | OCC], the p.d.f. for occupancy.
3.3 A sonar occupancy grid map after a single sensor reading. The sensor fired from (x = 0, y = 0) towards (x = 5, y = 0), with a sensed distance of d = 5.
3.4 ROME.
3.5 Labmate mechanical design. Reproduced from [14]; used by permission.
3.6 A functional diagram of ROME. Ellipses indicate robot sensors/actuators.
3.7 Building a sonar histogram grid. The cell at distance d and lying on the acoustic axis is incremented by 3. Other cells on the acoustic axis with distances 0 to d are decremented by 1. A probability distribution is obtained by continuous and rapid sampling of the sensors while the robot is moving. (Adapted from a figure in [9].)
3.8 A sonar map of the PRIP Laboratory.
3.9 The PRIP Laboratory, from three views.
3.10 The sonar map of the PRIP lab. The arrows indicate the location of the views shown in Figure 3.9. The dashed line indicates the path taken by the robot.
3.11 Sources of ultrasonic ranging anomalies.
3.12 Four sonar grid maps of the PRIP lab, taken at different times under differing exploration paths of the robot.
3.13 The stereo field of view and the range profile form a "wedge" of information. The shaded region is sensed as empty, and the cells along the solid line are sensed as occupied. Adapted from [46].
3.14 The stereo depth profile with uncertainty band shaded.
3.15 A diagram of the one-dimensional camera. Adapted from a figure in [11].
3.16 A side-looking camera tracking a point under constant forward velocity.
3.17 Constructing an HIMM grid map using lateral motion vision.
3.18 A lateral-motion epipolar plane image. Time advances from top to bottom, and each row is a one-dimensional image captured from the side-looking camera.
3.19 First, middle and last images of the lateral motion sequence from which the EPI of Figure 3.18 was obtained. The horizontal lines indicate the scanlines averaged in each image. Averaging these scanlines in image (a) gives the first row of the EPI; likewise, image (c) provides the last row of the EPI.
3.20 The set of lateral-motion EPI traces.
3.21 An LMV map of the PRIP Laboratory. Cell certainty values have been mapped inversely to intensity.
3.22 The PRIP Laboratory, from three views.
3.23 The LMV map of the PRIP lab. The arrows indicate the location of the views shown in Figure 3.22. The dashed line indicates the path taken by the robot.
3.24 Characteristics of lateral motion vision data. LMV detects the high-contrast surface markings on wall posters and the air conditioner in the lab, but cannot sense the whiteboard. It also lacks the range precision of ultrasound.
3.25 Four LMV grid maps of the PRIP lab, taken at different times under differing exploration paths of the robot.
3.26 A camera undergoing translation in the direction of robot motion. Points to the right of the center of expansion move to the right in the image; points to the left of the center of expansion move to the left in the image. Adapted from a figure in [11].
3.27 The camera system tracking a point under slight misalignment and constant forward velocity. Adapted from a figure in [11].
3.28 A forward-motion EPI.
3.29 First, middle and last images of the forward motion sequence from which the EPI of Figure 3.28 was obtained. Image (a) provides the first row of the EPI; likewise, image (c) provides the last row of the EPI.
3.30 Occupancy grid profile for an infrared proximity detector located at (x = 0, y = 0) and facing toward (x = 0.5, y = 0). A bivariate Gaussian function p(x, y | d) = (1 / (2π σ_x σ_y)) exp[−(x²/σ_x² + y²/σ_y²) / 2] models sensing uncertainty. This profile is used for probability of occupancy or vacancy, depending on whether or not the sensor detected an object.
3.31 Building an HIMM grid map with infrared proximity sensors.
3.32 An IR map of the PRIP lab. The light gray regions indicate unoccupied sensed cells, and thus reveal the path taken by the robot through the workspace.
3.33 Four IR grid maps of the PRIP lab, taken at different times under differing exploration paths of the robot.
3.34 Building an HIMM grid map with tactile sensors.
3.35 Sonar, Vision, and Infrared sensor grid maps of a locale.
4.1 Image processing operations on a sample grid map: (a) the original map, (b) occupied regions, (c) disconnected free space and sonar anomalies, (d) the free space of the locale, (e) morphological dilation, (f) morphological erosion.
4.2 Calculating φ(s) from a boundary.
4.3 A star plot of the class centroids for room locale maps.
4.4 A star plot of the class centroids for door locale maps.
5.1 Pair-wise feature scatter plots of room locale data. Red denotes EB 312; green, EB 323; yellow, EB 325; orange, EB 327; yellow-green, EB 330; pink, EB 335; light blue, EB 350; dark blue, EB 352; orchid, EB 354; peru, EB 360.
5.2 Pair-wise feature scatter plots of door locale data. Colors are assigned identically as in Figure 5.1.
6.1 Sliding windows around a mobile robot navigating in a hallway environment. Both the doors and the turn in the hallway could be considered generic hallway locales.
6.2 Coarse-level localization and its relation to other components of a robot navigation system. The arrows indicate flow of information in the logical system.
A.1 Example sensor grid maps of room locale EB 312.
A.2 Example sensor grid maps of room locales EB 323, EB 325, and EB 327.
A.3 Example sensor grid maps of room locales EB 330, EB 335, and EB 350.
A.4 Example sensor grid maps of room locales EB 352, EB 354, and EB 360.
A.5 Example sensor grid maps of door locale EB 312.
A.6 Example sensor grid maps of door locales EB 323, EB 325, and EB 327.
A.7 Example sensor grid maps of door locales EB 330, EB 335, and EB 350.
A.8 Example sensor grid maps of door locales EB 352, EB 354, and EB 360.
B.1 ROME Cartographer interface.

CHAPTER 1

Introduction

Robots and automated mechanisms have long fascinated both scientists and the public. The prospect of endowing machines with the ability to perform commonplace or even mundane tasks has preoccupied researchers and tinkerers for centuries. In the last twenty years, however, there has been particular interest in equipping robots with autonomous navigation capabilities. With advances in computation and sensing technologies, mobile robots are considered candidates for the routine transportation tasks performed by humans. Elfes identifies five areas in which mobile robots may soon find practical application [26]:

• Factory automation projects.
• Operation in hazardous environments, such as mine excavation, nuclear facilities inspection, and deep-sea surveying.
• Planetary and space exploration.
• Handicapper assistance.
• Military battlefield projects.

1.1 Mobile Robot Navigation

Unfortunately, the robotic navigation task is not an easy one. According to Leonard, the problem of navigation can be broken down into three questions [43]: "Where am I?", "Where am I going?", and "How should I get there?". The first question has been termed the localization problem, that is, how can the navigator determine its location in the environment, given its current view of the world and what it has encountered while traveling thus far? The second and third questions are those of specifying a goal and finding an appropriate path to that goal. Robotic research in path planning and obstacle avoidance is concerned with these last two questions. However, the localization problem is central to the task of robotic navigation. Without some idea of its current location, the navigator will be unable to follow a proper path to reach its goal.

For example, consider the case of a hypothetical robot delivery vehicle. The robot must navigate through busy city streets to reach its destination while avoiding obstacles in its path (the other cars) and obeying traffic laws. But in order to do so, the robot must have accurate information of its location: its position in the traffic lane, its location in the city, whether or not it has reached the goal, etc. Furthermore, the task may be assisted by the ability to recognize landmarks or distinctive areas on the streets of interest.

1.2 Spatial Representations

In order for a robot to have a precise assessment of its location, some kind of internal spatial representation of the environment must be employed.
These representations can be divided into three different paradigms:

• Geometric
• Sensor-based
• Topological

The geometric paradigm has been the traditional approach to spatial representation; see [42, 28, 1, 18, 48] for examples in mobile robotics. Here, low-level geometric primitives are extracted from sensor data to build up high-level abstractions of the environment. Thus, the world might be modeled using wire-frame, surface boundary, or volumetric representations, for example. This approach is appealing because it provides a concise representation that is easily manipulated and displayed by computers and is readily understood by humans. However, the geometric paradigm has drawbacks in that it leads to world models that are potentially fragile and unrepresentative of the sensor data because it relies heavily upon a priori environment assumptions and heuristics [26]. Elfes concludes:

    As a consequence of these shortcomings, the Geometric Paradigm implicitly creates a wide gap between two informational layers: the layer that corresponds to the imprecise and limited information actually provided by the sensor data, and the layer of abstract geometric and symbolic world models manipulated by the sensing and world modelling [sic] processes. As a result, geometric approaches may operate successfully in highly structured domains, but are of limited applicability in more complex scenarios, such as those posed by mobile robots. [26]

An alternative to the geometric paradigm is the sensor-based paradigm. Here, the environment is represented as a collection of the sensor readings, obtained through sensor models. No a priori assumptions or models of the world are employed. Most typical of this approach are cellular representations such as Elfes' occupancy grid [25, 27] and other grid-based mapping schemes [10, 9, 6, 53], wherein cells in a 2- or 3-D map are iteratively updated to reflect sensor readings. These methods make weaker assumptions about the environment than in the geometric paradigm, and therefore have a wider domain of application [43]. Furthermore, they usually have a high "sensing to computation ratio" since no high-level abstractions are generated.

The geometric and sensor-based paradigms are both metric approaches, i.e., the resulting environment representation depends heavily upon quantitative sensor data. Therefore, their strengths can be greatly limited by the accuracy of the sensors employed. For example, if unaccounted for, cumulative errors in movement sensors (used by "dead reckoning" position estimation) can cause fatal inconsistencies in the world representation. To alleviate this problem, some mobile robotics researchers have resorted to a topological representation paradigm [40, 41, 4, 45, 52, 23]. This approach is often considered "qualitative" since exact distances between world landmarks are not used, but adjacency and coarse directional information is stored instead (typically in a graph-like structure). The topological paradigm is supported by evidence from cognitive science which indicates that humans may use such an approach for mental representations of space. Thus, this paradigm is considered "intuitive" to humans. Unfortunately, it does not readily lend itself to implementation in practical sensor systems which typically yield metric data.

1.3 Localization Using Pattern Classification Techniques

This thesis describes a paradigm wherein pattern classification techniques are applied to the task of mobile robot localization.
In our approach, the robot workspace is represented as a hybrid of two spatial representation schemes: metric grid-based maps interconnected via topological relations. We feel that this maximizes the strengths of the sensor-based and topological paradigms.

Mobile robot localization is then divided into two phases: exploration and navigation. This two-part approach bears resemblance to that described by Kuipers [40] and is similar to Leonard and Durrant-Whyte's "weekend experiment" [43]. In this scenario, the robot is assumed to have enough time to adequately learn about its environment before it must begin work—as if it were "turned loose" on the weekend with no a priori knowledge, but with several hours to wander about and explore. After this stage is completed (on Monday morning perhaps), the robot may be employed for tasks in which it uses its new-found knowledge to maneuver about.

In the exploration phase of our system, the robot is engaged in creating grid maps of the various regions (which we call locales) that it encounters in the workspace. This phase is essentially performed "offline" and the robot may make as many maps in as much detail as necessary. Thus, this phase corresponds to the training phase of pattern classification, and locales correspond to pattern classes.

In the navigation phase of our system, the robot localizes itself by matching its map of the current locale with the salient features of the locale maps learned during exploration. This is similar to the testing phase of pattern classification. Once the system has recognized the current locale, metric position information is available with respect to the local coordinate system of the locale. We call this method of localization via locale recognition coarse-level localization.

We pose mobile robot localization as a pattern classification problem, and then investigate the map recognition process within this context. Major questions addressed in this thesis are listed below.

• What features should be calculated from the maps for the purpose of recognition? What pre-processing of the maps will be necessary to extract these features? How many features should be calculated from a locale to adequately differentiate it from other locales?
• How many maps of each locale should be collected during the exploration phase, in order to adequately represent the locale?
• What classification method is appropriate for grid map recognition?
• How can multiple sensing sources (sonar, vision, proximity, etc.) facilitate the matching process?
• How successful are grid-based maps as a representation of the sensed environment, and how useful are they for mobile robot localization?

1.4 Thesis Outline

The remainder of this thesis is organized as follows:

• Chapter 2 discusses issues involved in mobile robot localization, introduces the concept of coarse-level localization, reviews the related literature, and outlines our research methodology.
• Chapter 3 introduces grid-based maps, describes the experimental platform, and discusses the various sensing devices and techniques used in grid map creation.
• Chapter 4 discusses image processing and feature extraction methods applicable to grid-based maps.
• Chapter 5 presents grid-map recognition techniques and results.
• Chapter 6 outlines applications, future work, contributions, and conclusions of this thesis.

CHAPTER 2

Mobile Robot Localization

An important subproblem in the robot navigation task is that of localization.
That is, given accumulated knowledge about the world and current sensor readings, what is the position of the robot within the workspace? This would be a trivial problem if the navigation system could rely upon its velocity history (i.e., employ "dead reckoning") to calculate its global coordinates. Unfortunately, distance and orientation measurements made from dead reckoning are highly prone to error due to shaft encoder imprecision, wheel slippage, etc. which accumulate over time. Dead reckoning thus requires supplemental information in order to periodically "flush" the metric error from the system and re-determine the robot's position in the environment.

2.1 Topological Localization

Much work has been done in establishing the instantaneous absolute coordinates of the robot from its sensor data (sometimes called continuous localization) in the presence of metric error; see [20, 38, 42, 43] for example. Another approach is to avoid metric representations (such as geometric models or grid-based maps), and to rely upon topological descriptions of the environment instead. Such is the case with Kuipers' "qualitative spatial learning and navigation" [40] based on his TOUR model [39]. Kuipers' work is motivated by evidence from cognitive science which indicates that humans primarily depend upon topological, rather than metric information for navigation. Here, the topological model is organized as an attributed graph, with the nodes representing landmarks (called distinctive places or DP's), and the arcs representing connecting routes (called local travel edges). In addition, procedural knowledge regarding how to detect distinctive places and follow local travel edges is incorporated into the model. Finally, metric information is associated with each node and arc for local geometric accuracy.

Spatial learning proceeds in two phases: exploration and navigation. In the exploration phase, the robot searches its workspace, locating DP's and travel edges, and organizing the topological model. In the navigation phase, the robot utilizes the results of the exploration phase in order to move efficiently from one DP to another along the defined local travel edges.

Kuipers' paradigm depends heavily upon the notion of a distinctive place. A DP is defined to be a point in the workspace that is locally distinctive according to some geometric criterion. It is identified by its signature, a subset of low-level features ("distinctiveness measures") which are maximized at the DP. The following examples of distinctiveness measures are given in [40]:

• Extent of distance differences to near objects.
• Extent and quality of symmetry in a region.
• Temporal discontinuity in one or more sensors, given a small movement.
• Number of directions to nearby open spaces.
• Temporal change in the number of directions to nearby open spaces, given a small movement.
• Position of extreme distance readings.

Figure 2.1 shows a DP defined from a subset of the preceding distinctiveness measures.

Figure 2.1. An example distinctive place. The DP is at the intersection of the dashed lines. Adapted from a figure in [40].

To locate a DP in the exploration or navigation phases, Kuipers' system must determine which distinctiveness measures are appropriate for the current region, and then maximize them using a hill-climbing search. The point at which the signature is maximized is the DP. See Figure 2.2.

Figure 2.2. Locating DP's using hill-climbing search. Adapted from a figure in [40].
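To make the search concrete, the sketch below hill-climbs a generic distinctiveness signature over a small neighborhood of candidate positions. The particular signature (a stand-in combining two of the measures listed above), the step size, and the `sense` hook are illustrative assumptions, not details taken from Kuipers' system or this thesis.

```python
# Hedged sketch of locating a DP by hill climbing a distinctiveness signature.
# The signature definition, 0.1 m step, and sense() interface are assumptions.
import itertools

def signature(sense, x, y):
    """Combine a subset of distinctiveness measures at position (x, y).

    sense(x, y) is assumed to return a list of range readings taken
    around the robot at that position.
    """
    ranges = sense(x, y)
    spread = max(ranges) - min(ranges)                 # extent of distance differences
    symmetry = -sum(abs(a - b) for a, b in
                    zip(ranges, reversed(ranges)))     # quality of symmetry
    return spread + symmetry

def locate_dp(sense, x, y, step=0.1, max_moves=100):
    """Hill-climb the signature with small moves until a local maximum."""
    best = signature(sense, x, y)
    for _ in range(max_moves):
        moves = [(dx, dy)
                 for dx, dy in itertools.product((-step, 0.0, step), repeat=2)
                 if (dx, dy) != (0.0, 0.0)]
        scored = [(signature(sense, x + dx, y + dy), x + dx, y + dy)
                  for dx, dy in moves]
        top, nx, ny = max(scored)
        if top <= best:        # no neighboring position improves the signature: a DP
            return x, y
        best, x, y = top, nx, ny
    return x, y
```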
Kuipers' work in qualitative spatial learning is intuitively appealing due to the use of a topological environment representation. His simulation results seem successful, and the claim is that the system can navigate while avoiding cumulative metric error. However, the definition of a distinctive place may be too vague for a real-world implementation. First, it is questionable whether Kuipers' suggested distinctiveness measures could be obtained from a practical system. Second, systematic sensor errors may make the distinctiveness measures depend upon the orientation of the robot. Thus, during the hill-climbing search, a hysteresis effect in the sensor data would make locating the true local maximum impossible. Finally, one cannot specify an arbitrary location in the workspace as a DP, allowing the robot to navigate to and from the site of a task, for example.

2.2 Coarse-level Localization

Instead of a topological model supplemented with metric data, we present an approach to robot localization that uses a metric model supplemented with topological information. In this paradigm, dead reckoning methods are used for robot position estimation within the coordinate system of a local metric map, and this local map is, in turn, topologically related to the other maps in the environment. This spatial representation is employed in what we call coarse-level localization.

Coarse-level localization solves the localization problem by matching the current local map (of what we call a locale) with the salient features of previously learned locales stored in the system database. A locale is defined to be a connected region of the robot workspace that can be reliably delimited via sensing, given predefined criteria. The criteria for a locale depend upon the domain of the robot, but we limit our consideration to notions of proximity and containment. For example, in an indoor setting, appropriate criteria for a locale might be "any region not more than 25 meters long or wide, and bounded on all sides". In this case, most rooms would qualify as locales; hallways, in general, would not (see Figure 2.3). For outdoor domains, an appropriate locale criterion might be "the region within a 10 meter radius of a local elevation maximum".

Figure 2.3. An indoor domain where the locale criteria are "any region not more than 25 meters long or wide, and bounded on all sides". The shaded regions qualify as locales.

The purpose of defining locales in this manner is to guarantee that the robot segments the environment into maps of identical regions during each pass through the workspace. Such an approach is not uncommon; for example, Leonard and Durrant-Whyte [42] restrict their landmarks of interest to those which can be reliably detected during repeated sensing.

Our proposed system is similar to Kuipers' model in that the workspace is represented as several topologically related local metric maps, rather than one global metric map. Thus, locales are an analog to DP's that represent regions, instead of points, in the workspace. As in Kuipers' model, it is assumed that the interesting regions of the environment are contained in the set of locales and that the remaining areas can be excluded from detailed mapping. Furthermore, dead reckoning is only employed within the context of a locale; no global coordinate system is used. Instead, the robot's metric position is specified relative to the local coordinate frame of the locale, if needed.
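One way to picture the hybrid representation just described is as a small data structure: each locale carries its own grid map, local coordinate frame, and recognition features, while the topological layer records only adjacency and coarse direction between locales. The sketch below is an illustration of that idea, not code from the thesis; all class and field names, and the example locale names, are assumptions.

```python
# Illustrative sketch of the hybrid spatial representation: metric grid maps
# connected by topological relations.  Names and fields are assumptions.
from dataclasses import dataclass, field

@dataclass
class Locale:
    name: str                   # e.g. "doorway A" or "Room 3"
    grid: list                  # 2-D occupancy grid in the locale's own frame
    frame: tuple                # (x0, y0, theta) of the local coordinate frame
    features: list = field(default_factory=list)   # shape features for recognition

@dataclass
class WorkspaceModel:
    locales: dict = field(default_factory=dict)     # name -> Locale
    relations: dict = field(default_factory=dict)   # name -> {direction: neighbor}

    def add_locale(self, locale):
        self.locales[locale.name] = locale
        self.relations.setdefault(locale.name, {})

    def connect(self, a, b, direction):
        """Record that locale b lies roughly `direction` of locale a."""
        self.relations[a][direction] = b

# Dead reckoning is interpreted only within one locale's frame; travel between
# locales relies on the coarse relations rather than a global coordinate system.
model = WorkspaceModel()
model.add_locale(Locale("doorway A", grid=[], frame=(0.0, 0.0, 0.0)))
model.add_locale(Locale("Room 3", grid=[], frame=(0.0, 0.0, 0.0)))
model.connect("doorway A", "Room 3", "east")
```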
Fennema et al. [28] use a similar approach in their spatial representation scheme in order to circumvent global inaccuracies arising from dead reckoning error. Likewise, we assert that if locales are of limited size, the robot will not travel long distances within the local coordinate system, and cumulative metric errors are avoided.

There are robotic applications that require finer coordinate information and global position control than our approach will allow. However, we believe that the proposed scheme is sufficient for many circumstances; consider that human beings generally navigate with no precise metric information whatsoever. Furthermore, coarse-level localization can be viewed as an important sub-problem in the navigation task—a first step before performing more detailed localization or locale-dependent procedures.

2.3 Navigation Using Coarse-level Localization

Consider, for example, a package-delivery robot that relies upon the two locale types described in Table 2.1. We will demonstrate how this paradigm can be used for navigation in an indoor setting.

Table 2.1. Locale types for an example navigation system.

Locale Type | Locale Criteria
Door        | Region within a 3 meter radius of the center of a doorway.
Room        | Region not more than 25 meters long or wide, and bounded on all sides.

Figure 2.4 depicts the workspace of the mobile robot, where the regions that meet either of the two locale criteria are shaded. Assume that for each "door" locale a right-handed coordinate system is specified whose origin is at the midpoint of the doorway and y-axis is perpendicular to the doorway, pointing into the room (see Figure 2.5). Likewise, each "room" locale uses the coordinate system of one of its doors for reference.

Figure 2.4. The workspace of a delivery robot, with "door" and "room" locales shaded. The "X" marks the initial position of the robot.

Figure 2.5. Specifying a local coordinate system.

Suppose the robot, at the position indicated by the "X", is given the command to deliver its cargo to Room 3. If the robot moves to the north, it will first encounter and recognize the door locale corresponding to doorway A. Suppose further that the topological relations between locales provide directional information. The robot would then know that doorways D and F (of Room 3) lie to the east of its current location. Moving eastward down the hallway, the robot next encounters the locales for doorways B, C and D. Recognizing the last locale as one of the desired doorways, it enters the door and begins to map Room 3 to verify its location. Once the locale corresponding to Room 3 is identified, the robot has reached its goal, and may deposit its cargo appropriately (see Figure 2.6). To do so, it can employ the map just constructed and the local coordinate frame of the locale.

Figure 2.6. Navigation via coarse-level localization.
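The control loop implied by this scenario can be sketched as follows. This is purely an illustration of the narrative above; the locale names, the set of goal doorways, and the recognize/move/verify hooks are assumptions, and none of this code appears in the thesis.

```python
# Purely illustrative sketch of the Room 3 delivery scenario described above.
GOAL_DOORS = {"doorway D", "doorway F"}      # the two doors of Room 3

def deliver_to_room3(recognize_locale, move, map_and_verify):
    """Use coarse-level localization plus coarse directions to reach Room 3.

    recognize_locale() classifies the robot's current locale map,
    move(direction) advances the robot, and map_and_verify(room) maps the
    entered room and confirms its identity against the learned locale.
    """
    move("north")                            # from "X" toward the hallway
    current = recognize_locale()             # expected: doorway A
    while current not in GOAL_DOORS:
        move("east")                         # topological relations say Room 3's
        current = recognize_locale()         # doors lie east; pass B, C, ...
    move("through doorway")                  # enter Room 3 via doorway D (or F)
    return map_and_verify("Room 3")          # confirm the locale, then deliver
```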
2.4 Related Work

Here we describe some of the previous work done in mobile robot localization.

Hung et al. [33] perform robot localization in a complex indoor environment by pre-locating several unique marks on planar surfaces in the robot workspace such that at least one mark is visible from any location. Once a mark is detected, pattern recognition techniques are used to determine which one is being imaged, and the global location of the robot can be calculated relative to the location of that mark stored in a database. In their experiments, marks labeled with one of the characters "X", "Y", or "Z" were employed. Identification of a mark requires extraction of a series of shape descriptors to be used as features in a classification tree. The position of the robot relative to a mark is computed using a geometric method similar to that proposed by Fischler and Bolles [29]. The disadvantage of this approach is that it requires the addition of artificial landmarks to the workspace; these landmarks must be precisely located and cannot be moved without updating the database. Furthermore, it makes no use at all of dead reckoning measures, which can supplement the localization process.

Moravec [50, 25] developed a method to perform robot localization by multi-resolution matching of grid maps. A hierarchy of reduced-resolution maps is employed; coarser maps are produced from finer ones by converting two-by-two subarrays of cells to single cells by averaging. The matching process then proceeds from coarse to fine resolution, searching in (x, y, θ) parameter space with two translational and one rotational degrees of freedom. The match itself is implemented as a correlation operation between two maps, and the best match obtained at a low resolution level is used to constrain the subsequent search at the next highest resolution. With this method, alignment of two spatially "close" maps of 3000 cells each takes about 1 second of VAX 11/750 computation time. However, an unconstrained search of an entire global grid map would take much longer. Note that despite the recent interest in grid-based mapping systems, Moravec's work seems to be the only attempt to apply grid maps to the task of robot localization.

Dudek et al. [23] pose the robot navigation and localization task as a theoretical graph-construction problem. The environment is modeled topologically with an attributed graph structure similar to that used by Kuipers [40, 41]. It is assumed that the robot has no sensors for dead reckoning, but that it can reliably recognize locations that correspond to graph vertices and paths that correspond to graph edges. Furthermore, the robot is assumed to be equipped with multiple distinct markers that it can deposit, recognize, and pick up at any of the vertices that it encounters. The authors present and prove the correctness of an algorithm for robotic exploration, and show that the use of portable markers is necessary for this theoretical problem. The authors concede that the abilities of the robot used in their model are particularly primitive, but argue that these capabilities should be the "lowest common denominator" of robotic systems and that more advanced perceptual mechanisms may not be reliable.

Cox [17] performs continuous localization within a single room using an a priori map composed of straight-line segments. Data from an optical range finder is registered with the map using a matching algorithm that attempts to minimize the total squared distance between range data points and line segments in the map. The results are then combined with the robot's dead reckoning measurements to update the robot's position.

Fennema et al. [28] describe a vision-guided mobile robot system that performs localization using an a priori geometric spatial model. Coincidentally, the model is very similar to the spatial representation we propose: it is composed of a hierarchically organized system of neighborhoods (also called "locales"), each with a local coordinate system. In this model, the locales are topologically related by containment; for example, a particular room is contained within a floor which is contained within a building. The location of a point within that room is given in terms of the room's local coordinate system. Thus, localization is performed by matching visual data to geometric models of the locales to identify the local neighborhood of the robot. Then, precise position estimation can be performed relative to the locale's local coordinate system. The authors maintain (as we do) that this topological approach avoids problems with global inaccuracies.

Drumheller [20] describes a methodology for localization within a given room using a ring of ultrasound sensors. The room of interest is modeled a priori as a series of straight-line segments representing walls and other obstacles. To perform localization, line segments are first extracted from the "sonar contour" generated by the ring of ultrasound sensors, and then registered to the room model using an adaptation of a two-dimensional matching algorithm proposed by Grimson and Lozano-Pérez [31]. This produces a list of "interpretations" representing possible positions and orientations of the robot in the room. Each interpretation must then pass a "sonar barrier test" which ensures that individual sonar readings for an interpretation do not violate physical constraints—i.e., given a small enough incident angle, the ultrasonic signal will not appear to pass through solid objects. Finally, a robot position and orientation is chosen from the remaining interpretations that places the greatest amount of the sonar contour in contact with the line segments of the map.

Leonard and Durrant-Whyte [42] present a method of continuous localization using the extended Kalman filter (EKF) for a robot equipped with ultrasound sensors. The system employs an a priori map of "geometric beacons"—targets that can be reliably observed in successive sensor measurements and that can be represented using a concise geometric description. Examples of geometric beacons are planes, corners, and cylinders; thus, the approach is best suited for an indoor domain. The authors formulate an EKF to produce an estimate of the location of the robot based on the most recent position estimate, the navigation input, and the current beacon observations. This results in a cyclic algorithm with four steps: prediction, observation, matching, and estimation. Instead of extracting line segments from the sonar data and matching them to the environment map (as does Drumheller [20]), this approach predicts the sensor readings that the map will produce, and compares this to the observed readings. The authors contend that their method is more reliable because line segment models are not representative of sonar data.

Krotkov [38] extends the work of Sugihara [54] by developing a robot localization system that relies upon an a priori map of the environment and a single video image. The map marks the positions of vertically oriented parts of objects such as doors, desks, wall corners, etc. Upon acquiring an image, the system extracts the dominant vertical edges and computes the angular directions to these landmarks. The position of the robot is then determined by establishing the correspondence between landmark directions in the image and landmark locations in the map.
The position of the robot is then determined by establishing the correspondence between landmark directions in the image and landmark locations in the map. This is done by posing the problem as an interpretation tree search; the author presents an algorithm for efficiently searching the tree while accounting for errors in the landmark directions extracted by image processing techniques. Experiments verify that the system can determine the coarse location of the robot in a given room using a map generated from architectural drawings. The advantage of this approach is that it uses structures 18 commonly found in indoor scenes as beacons, and that it requires no sophisticated computer vision or image processing algorithms. Many commercial robots rely upon artificial beacons or landmarks for localization. Leonard [42] reports that the CBC-Caterpillar AGV uses a rotating laser to locate itself with respect to a set of bar codes placed at known locations in the environment. He also reports that “the TRC Corporation has developed a system for localization that uses retro-reflective strips and ceiling lights as beacons that are observed by T) vision and active infrared sensors. Artificial landmarks provide a straight-forward solution to the mobile robot localization problem, but their disadvantage is that they require modification of the environment, which may not always be feasible. A system which relies instead upon techniques applicable to a wide range of environments is therefore desirable. 2.5 Research Methodology Our research concerns the application of pattern classification techniques to locale recognition for the purpose of achieving coarse—level localization. We restrict our work to the solution of the “where-am-I” problem rather than navigation per se. Thus, the remainder of this thesis will consider the issues involved in metric map recognition within the framework of a topological spatial representation. Due to its broad applicability, ease of sensor integration, and straight-forward implementation, we chose a grid-based mapping scheme for locale representation. Many common image processing techniques are applicable to grid maps, therefore, we find it reasonable to exploit this duality by using well-founded methods for pattern recognition in gray-scale images. We assume that the robot localization process is divided into two phases: explo- ration and navigation. In the exploration phase, the robot is engaged in creating grid 19 maps of the locales that it encounters in its workspace, and extracting features from these maps. We allow that the robot may make as many maps as is necessary for later recognition. In the navigation phase, the robot performs coarse-level localization by comparing features extracted from its map of the current locale with representative features of locales of interest in the environment. We simulate the exploration phase by generating sets of grid maps under multi- ple sensing modalities for various real-world indoor scenes. Navigation is performed manually via teleoperation, so problems with obstacle avoidance and path planning are handled by the human operator. This effectively separates the sensing task from that of navigation. By computing spatial features on the collected maps, a dataset is created wherein each pattern represents a particular map and each pattern class repre- sents a particular locale in the workspace. This dataset is then split into training and testing datasets, and a classification system is constructed using the training dataset. 
Coarse-level localization in the navigation phase is simulated by classifying patterns from the testing dataset, resulting in a class designation for each pattern. The accu- racy of the coarse-level localization system is given by the percentage of correct class designations returned by the classifier. Furthermore, the classifier may be aided by additional a priori class probability information obtained from the topological spatial model. We consider two locale types for our experiments, as follows: Room An enclosed region not more than 20 meters long or wide. Door The region sensed by the robot moving from the center of a doorway 2 meters into the room. To generate the experimental datasets, we created grid maps of several room and door locales on the third floor of the Engineering Building. Figure 2.7 shows the spatial relationship of these locales to one another—we use the notation “EB 11” to indicate 20 “room n, Engineering Building”. There are ten locales of each type, and each locale was mapped ten times. This resulted in a dataset of 100 patterns for each locale type that we used in two separate (i.e., door and room) classifiers. The next chapter describes the methods used to create the grid maps in our experimental data. Figure 2.7. Door and room locales on the third floor of the Engineering Building. CHAPTER 3 Grid-based Maps Moravec and Elfes introduced the concept of grid-based maps, which they called the certainty grid [50, 25]. In the certainty grid system, the robot’s three-dimensional environment is projected into a two-dimensional “floorplan” and tessellated into an array of square cells. As readings are received from the robot’s sensors, these cells are updated to reflect the probability that the corresponding region of three-dimensional space is occupied by an object. Ultrasonic (a.k.a. “sonar”) sensors have often been used in implementations of the certainty grid system. A typical ultrasonic sensor using time-of-flight measurements yields precise range readings for object location, but lacks angular resolution. Figure 3.1 shows the cross section of an ultrasonic signal intercepting an object in the envi- ronment. Since the sensor uses merely the first echo it receives for range calculations, it is impossible to determine the exact angular location of the reflecting object with respect to the acoustic axis; only its radial distance can be measured. The are labeled “A” depicts the angular uncertainty of the ultrasonic sensor—it is bounded by the width of the ultrasonic cone (typically 30 degrees ) [13]. In his occupancy grid system, Elfes [27, 26] accounts for this uncertainty with a Gaussian probability function that models the error of the ultrasonic sensor. Here, cells close to the sensed distance from the sensor and near to the acoustic axis have a 21 Figure 3.1. Cross section of a conical sonar signal. Any object along arc A will yield a sensed distance of d. Adapted from a figure in [9]. high likelihood of occupancy. Likewise, cells near to the ultrasound sensor and close to the acoustic axis have a high likelihood of vacancy. (See Figure 3.2.) After each sensor reading, Bayes’ theorem (Equation 3.1) is applied to update the probability of occupancy of each of the cells intersecting the ultrasonic cone. A Gaussian model is used as the evidence of occupancy (OCC) or vacancy (VAC) and the current value of the cell provides the prior probability of occupancy. 
$$P[\mathrm{OCC} \mid R_{k+1}] = \frac{p[R_{k+1} \mid \mathrm{OCC}]\, P[\mathrm{OCC} \mid R_k]}{p[R_{k+1} \mid \mathrm{OCC}]\, P[\mathrm{OCC} \mid R_k] + p[R_{k+1} \mid \mathrm{VAC}]\,(1 - P[\mathrm{OCC} \mid R_k])} \qquad (3.1)$$

Here, P[OCC | R_k] is the current value of the cell (after k sensor readings), P[OCC | R_{k+1}] will be the new cell value, and p[R_{k+1} | OCC] and p[R_{k+1} | VAC] are obtained from the sensor model. Note that P[VAC | R_k] = 1 - P[OCC | R_k]. Initially, the occupancy status of a cell is unknown, thus we set P[OCC | R_0] = 0.5 for all cells. After several readings by multiple sensors at different locations in the environment, a map representing the objects and empty space of the environment is constructed. Figure 3.3 shows an occupancy grid after one sensor reading.

Figure 3.2. Uncertainty profile for a single sonar sensor firing from (x = 0, y = 0) toward (x = 5, y = 0), with a sensed distance of d = 5. The bivariate Gaussian function $p(r, \theta \mid d) = \frac{1}{2\pi\sigma_r\sigma_\theta} \exp\left[-\frac{1}{2}\left(\frac{(r - d)^2}{\sigma_r^2} + \frac{\theta^2}{\sigma_\theta^2}\right)\right]$ models sensing uncertainty in range, r, and angle, θ, for cell occupancy and vacancy. Plot (a) shows p[R_{k+1} | VAC], the probability density function for vacancy, and plot (b) shows p[R_{k+1} | OCC], the p.d.f. for occupancy.

Figure 3.3. A sonar occupancy grid map after a single sensor reading. The sensor fired from (x = 0, y = 0) towards (x = 5, y = 0), with a sensed distance of d = 5.

To reduce the computational cost of updating cell probabilities, other researchers have employed different approaches to handling sensor uncertainty. Beckerman and Oblow [6] developed a system to handle the systematic errors resulting from the accumulation of ultrasonic sensor readings. Instead of a probability value, their discrete-valued grid cells contain one of four labels: Unknown, Empty, Occupied or Conflict. A logic function is used to update the cells when new sensor readings are taken.

Borenstein and Koren [10, 9] handle the sensor uncertainty by exploiting a trade-off between precise sensor models (as in Elfes' work) and frequent sensing. By rapidly and continuously sensing the environment, their "Histogramic In-Motion Mapping" (HIMM) system builds a sampling histogram grid of the sonar data. Thus, the ultrasonic sensor errors are accounted for as the readings accumulate. The authors contend that this approach is not an oversimplification because "a probability distribution is actually obtained by continuously and rapidly sampling each sensor while the vehicle is moving."

Grid-based representations have proved useful in several areas of mobile robot research, including range-based mapping [49, 27], multiple-sensor integration [46, 49], path planning and navigation [10], and handling of sensor position uncertainty due to robot motion [26]. In addition, grid-based maps have been employed in many projects, including indoor and outdoor navigation, hazardous environment navigation, underwater navigation, and 3D mapping. The popularity of grid-based maps is due to several reasons:

Intuition. Two-dimensional world projections are familiar to humans as "floorplans".

Efficiency. Grid-based representations allow for swift creation and revision of maps as new sensor readings are taken.

Sensor independence. The framework is amenable to sensors other than ultrasonic range devices; scanline stereo and proximity sensors have also been used [49].

Data abstraction. Grid maps are a good intermediate step between raw sensor data and geometric models, providing automatic data abstraction by representing the data in a Cartesian projection.

Correspondence to intensity images.
Elfes notes the similarity of operations on occupancy grids to those for intensity images [27, 26]. One may draw upon knowledge of image processing procedures when working with grid-based representations. A comparison of a few of these operations is shown in Table 3.1.

We chose grid-based maps as the spatial representation for locales due to these attractive features and the recent interest in this approach. In particular, we found the HIMM scheme to be a flexible, efficient method of map creation. Furthermore, we exploit the relationship between grid maps and intensity images when computing features for our classifier.

Table 3.1. Comparison of operations on grid-based maps and intensity images. Adapted from [26].

    Grid Maps                                       Intensity Images
    Labeling cells as Occupied, Empty or Unknown    Thresholding
    Handling position uncertainty                   Blurring / Convolution
    Removing spurious spatial readings              Low-pass filtering
    Map matching / Motion solving                   Multi-resolution correlation
    Obstacle growing for path planning              Region growing
    Path planning                                   Edge tracking
    Labeling objects                                Segmentation
    Determining object boundaries                   Edge detection

3.1 Experimental Platform

The Pattern Recognition and Image Processing (PRIP) Laboratory has a mobile robot base for research in vision-guided mobile robots. This robot, ROME (RObotic Mobile Experiment), is used for investigations in scene recognition, path planning, pose estimation, and sensor fusion (see Figure 3.4). We employ ROME as a platform to collect grid maps for our research in coarse-level robot localization.

Figure 3.4. ROME.

ROME is built upon a Labmate mobile robot base from Transitions Research Corporation (TRC), and is equipped for experiments utilizing stereo vision, ultrasonic range finding, and infrared object detection. The TRC Labmate robot base is "a battery-powered vehicle . . . which can carry up to 200 pounds of computers, cameras, arms, communications gear, or other application payloads" [14]. Positional feedback (i.e., dead reckoning) is provided by encoders mounted on the two drive wheels. The mechanical design of the Labmate base is shown in Figure 3.5.

Figure 3.5. Labmate mechanical design. Reproduced from [14]; used by permission.

Our Labmate base carries a Sun SPARCstation 1 running SunOS Unix 4.1.3, a TRC Proximity Subsystem (with sonar and IR sensors), and two Panasonic color CCD cameras. Figure 3.6 is a diagram of the functional components of the mobile robot system. The Sun workstation controls I/O to and from the Labmate, Proximity Subsystem, and cameras, and is capable of performing all image processing, mapping, and navigation calculations on-board. In addition, the robot can be operated over an Ethernet communication line, thus providing remote operating and computing capability. The TRC Proximity Subsystem consists of a horizontal ring of 24 Polaroid ultrasound transducers which measure distances up to 10 meters, and 8 infrared sensors for object detection up to 1 meter. Images from the stereo cameras are captured by two Sun VideoPix frame grabbers on board the workstation. Finally, the stereo head is computer-controlled using a Rhino Robot pan-tilt carousel.

Figure 3.6. A functional diagram of ROME. Ellipses indicate robot sensors / actuators.
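Before turning to map creation, the probabilistic cell update of Equation (3.1) can be made concrete. The short Python sketch below is illustrative only and is not part of the robot software; the likelihood values are assumed to come from a sensor model such as the Gaussian profile of Figure 3.2, and the function name is our own.

    def bayes_cell_update(p_occ_prior, p_r_given_occ, p_r_given_vac):
        """One application of Equation (3.1) to a single grid cell.

        p_occ_prior   -- current P[OCC | R_1..k] stored in the cell (0.5 when unknown)
        p_r_given_occ -- sensor-model likelihood p(R_k+1 | OCC) for this cell
        p_r_given_vac -- sensor-model likelihood p(R_k+1 | VAC) for this cell
        """
        num = p_r_given_occ * p_occ_prior
        den = num + p_r_given_vac * (1.0 - p_occ_prior)
        return num / den

    # A cell near the sensed range, where the occupancy evidence outweighs the
    # vacancy evidence, is driven toward 1 by repeated consistent readings.
    p = 0.5
    for _ in range(3):
        p = bayes_cell_update(p, p_r_given_occ=0.6, p_r_given_vac=0.2)

Repeated application of the update drives a cell toward occupancy or vacancy as consistent evidence accumulates, which is the behavior exploited by the mapping schemes described next.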
We have devised a computer program that performs ultrasound, vision, and in- frared grid-map acquisition and remote control of ROME. The grid maps consist of 100 by 100 square cells each representing 20cm by 20cm regions in the environment. Therefore, the program can map any locale not more than 20 meters long or wide. Navigation is operator-controlled, thereby separating the obstacle-avoidance task from that of map building. Furthermore, a graphical user interface allows the user to view the grid map creation process as the robot acquires sensor readings, as shown in Appendix B. One may extend this interface to allow for additional sensing modalities as the need arises. 29 3.2 HIMM Grid Map Creation We use Borenstein and Koren’s HIMM scheme to gather grid maps from indoor lo- cales for use as experimental data in our localization system. HIMM collects data on a two-dimensional Cartesian histogram grid, recording each sensor “sample” by incre- menting a single cell for each reading. The result is an array of cells each containing a “certainty value” (CV) related to its probability of occupancy by an object. HIMM was specifically developed for use with ultrasonic range data. However, Moravec claims that “the readings of almost any kind of sensor can be incorporated into a [grid map] if they can be expressed in geometric terms” [49]. We investigated the usefulness of applying sensing sources in addition to sonar to the HIMM scheme and the coarse-level localization paradigm. Here, we present some sensing modalities that are most amenable to a grid-based mapping scheme and how we incorporated them into the HIMM framework. 3.2.1 Ultrasonic Range Sensors Figure 3.7 shows the application of HIMM to ultrasonic sensor data. For each sensor reading, the cell lying on the acoustic axis at sensed distance d from the robot is up- dated. In order to “grow” obstacles quickly, this cell is incremented by 3 (rather than 1). Furthermore, the free space of the environment is determined by decrementing (by 1) the cells between distance 0 and d and lying along the acoustic axis of the sensor—this also removes spurious readings from the grid. CV’s are constrained to be between 1 and 15; cells for which no sensor readings have been made have value 0. Experimental Data Collection We implemented and tested a sonar-based HIMM scheme on ROME. The mapping results show consistency between maps of identical locales and little ill effects from Figure 3.7. Building a sonar histogram grid. The cell at distance d and lying on the acoustic axis is incremented by 3. Other cells on the acoustic axis with distances 0 to d are decremented by 1. A probability distribution is obtained by continuous and rapid sampling of the sensors while the robot is moving. (Adapted from a figure in [9]-) dead reckoning error. Figure 3.8 shows a sonar map of the PRIP Laboratory ob- tained from a continuously-moving robot making a circuit around the room. (For comparison, Figure 3.9 shows three views of our laboratory, as indicated by the ar- rows in Figure 3.10.) Here, the CV’s have been mapped inversely to intensity; thus, the darker the cell, the more likely it is to be occupied. Ultrasonic ranging creates a few anomalies in the map that are worthy of comment (see Figure 3.11). First, instead of depicting sharp angles at the corners of the room, the corners are rounded due to lack of angular resolution in the sensor. Second, ultrasonic signals are prone to specular reflections if they intercept hard, smooth surfaces. 
A whiteboard on the wall at the bottom of the map provides such a “clean” reflection of the sonar signal that the sensor can not reliably detect an echo. Likewise, a large air conditioning (AC) unit in the corner of the room is undetectable due to 31 Figure 3.8. A sonar map of the PRIP Laboratory. Figure 3.10. The sonar map of the PRIP lab. The arrows indicate the location of the views shown in Figure 3.9. The dashed line indicates the path taken by the robot. 32 its metallic surface. This accounts for the “streaks” where the ultrasonic ranging “misses” the wall. Third, the sonar sensors, mounted on the robot at a height of approximately 5 feet, tend to “miss” objects that are significantly lower in height than themselves. The noise in the middle of the right “lobe” of the map corresponds to a large conference table in the center of the room! Instead of detecting the table, most of the sonar signals pass over it without creating an echo. Figure 3.11. Sources of ultrasonic ranging anomalies. Note that for our purposes, it is important that the grid maps uniquely identify each locale in the workspace, not necessarily that they precisely represent the spatial layout of each locale. Similarity between successive maps is most important—Figure 3.12 demonstrates that this has been attained using the HIMM scheme. 3.2.2 Stereo Vision Matthies and Elfes [46] applied scanline stereo in combination with sonar to generate detailed occupancy grid maps. By joining the set of points found by scanline stereo with line segments, a “depth profile” is formed to approximate the surface structure of the environment. This depth profile creates a wedge (with the profile at the base 33 Figure 3.12. Four sonar grid maps of the PRIP lab, taken at different times under differing exploration paths of the robot. 34 and the cameras at the apex) that can be used to make inferences about the occupied and empty regions of the environment (see Figure 3.13). Figure 3.13. The stereo field of view and the range profile form a “wedge” of infor- mation. The shaded region is sensed as empty, and the cells along the solid line are sensed as occupied. Adapted from [46]. The authors remark: Stereo range information is complementary to sonar. Stereo detects and localizes surface boundaries and surface markings, but misses the interior of unmarked surfaces. On the other hand, wide-angle sonar gives better detection of broad surface structure and indicates large empty areas between the robot and nearby objects. However, it gives less definition to surface boundaries and fine surface structure. With our current systems, stereo can see through clutter but sonar cannot. Furthermore, stereo vision provides good angular resolution, whereas ultrasound does not. Conversely, ultrasound yields greater precision in range than does stereo vision. To model the range uncertainty inherent to scanline stereo vision, Matthies and Elfes define an occupancy grid profile for the vision sensor, as was done for ultrasonic sensors. They note that it has been shown that for a stereo disparity of d correspond- ing to a depth of Z, an error of (id in disparity will yield (approximately) an error of 6Z = Z (id/d in the depth estimate. Therefore, there is a band of uncertainty around the stereo depth profile that is wide where the profile is far from the cameras and narrow where the profile is near (see Figure 3.14). 
The width of this band defines the probability that cells underlying the band are occupied according to the formula

$$P[\mathrm{OCC} \mid R] = P_{unk} + \frac{P_{occ} - P_{unk}}{N} \times a,$$

where P_unk = 0.5 is the UNKNOWN probability level, P_occ ∈ (0.5, 1.0) is a constant indicating the highest probability that will be assigned, and N is the width of the uncertainty band in units of cells, measured along the line of sight from the cameras. The factor a ∈ (0.0, 1.0) gives greater weight to the profile near to the measured points, and less weight to the profile between these points.

Figure 3.14. The stereo depth profile with uncertainty band shaded.

A more refined stereo error estimate was provided by Matthies and Shafer [47] that models the uncertainty of projected points as multivariate Gaussian distributions. Yet both of these uncertainty models can be accounted for in the HIMM approach: if the stereo algorithm allows for rapid in-motion sampling, the histogram cells corresponding to the depth profile are simply incremented for each reading. Likewise, the cells within the free space of the stereo wedge are decremented. After many readings, the resulting map is an array of CV's indicating the probability of occupancy by visual features.

3.2.3 Lateral Motion Vision

Stereo vision has drawbacks in that it requires precise camera calibration and suffers from the correspondence problem of matching points from two different views. As an alternative to stereo, we have investigated the application of lateral motion vision (LMV) to grid-based maps. Lateral motion vision provides depth information from straight-line, constant motion of a single camera. It does not require precise calibration or knowledge of camera geometries, but provides the same ultrasound-complementing features as does stereo vision. By tracking objects from a side-looking camera while the robot is in motion, LMV delivers a continuous update of range information from the environment. Because this approach depends upon constant camera motion, it is especially suited for the HIMM scheme. In addition, the correspondence problem found in stereo vision systems is greatly simplified by use of a high sampling rate; features are easier to track since the images are taken close together.

In order to apply this method to two-dimensional grid maps, the camera is assumed to be one-dimensional. One-dimensional images can be obtained from regular CCD cameras by way of extracting single scanlines, optical averaging using a plano-concave cylindrical lens, or through arithmetic averaging of scanlines. The latter two methods highlight strong vertical edges and suppress noise.

Figure 3.15 depicts the LMV camera model. The image "plane" is a 1D strip of P pixels, numbered 0 through P - 1. The optical center of the camera is at a distance f from the image plane; f is called the focal ratio. Vertical lines in the environment are projected as points onto the image plane along lines that run through the optical center. The optical center plane is parallel to the image plane and runs through the optical center. The optical axis intersects the optical center at right angles to the image plane. It is assumed that the plane formed by the optical axis and the 1D image plane is parallel to the ground. The point of intersection of the optical axis and the image plane is called the center of view; it divides the image plane into segments of equal length.
Figure 3.15. A diagram of the one-dimensional camera. Adapted from a figure in [11].

The field of view of the camera is φ, the maximum angle subtended by any two visible points. The focal ratio and field of view are related by

$$f = \frac{P}{2\tan(\phi/2)},$$

which gives f in units of pixels. For the cameras used in our experiments, the field of view is approximately 55°; if each row is 256 pixels wide, f is approximately 246 pixels.

Figure 3.16 is an overhead view of a 1D camera mounted parallel to the ground and perpendicular to the direction of motion on a robot moving with constant known velocity v. Note that the motion vector, the optical axis and the image line are all coplanar. The system is tracking a point P in the environment as it moves to the left in the image plane.

Figure 3.16. A side-looking camera tracking a point under constant forward velocity.

By similar triangles, we have

$$\frac{x}{y} = \frac{r}{f},$$

where y is the distance of P from the optical center plane, x is the distance of P from the optical axis, and r is the distance of the image of P from the optical axis. Differentiating with regard to time and solving for y, we obtain

$$\frac{1}{y}\frac{dx}{dt} = \frac{1}{f}\frac{dr}{dt}, \qquad y = f\,\frac{dx/dt}{dr/dt}.$$

We know that dx/dt is the velocity v of the robot, thus the distance of the point from the camera is

$$y = \frac{f\,v}{dr/dt}. \qquad (3.2)$$

Since y, f and v are each constant, dr/dt must be a constant as well. By tracking a point over time and measuring dr/dt, the coordinates of environment features relative to the robot can be calculated from Equation (3.2) and

$$x = \frac{y\,r}{f}.$$

Since the images are sampled rapidly as the robot moves forward, scene features will only move by small, predictable amounts from one image to the next, and the correspondence problem is alleviated. Furthermore, lateral motion vision is easily incorporated into the HIMM scheme (see Figure 3.17). A histogram is rapidly generated by incrementing the grid cells corresponding to the coordinates of the detected edges calculated for each sampled image. Also, since we know that an unobstructed line of sight should exist between the camera and the object, the cells along the line from the optical center to the detected edge are decremented.

Figure 3.17. Constructing an HIMM grid map using lateral motion vision.

Experimental Data Collection

We implemented an LMV algorithm on ROME that tracks points through a series of arithmetically-averaged 1D images and plots the coordinates of strong vertical edges in an HIMM grid map. To track edges and estimate dr/dt, we use a spatiotemporal image defined by Bolles et al. called the epipolar plane image or EPI [8, 2]. An EPI is created by collecting a series of one-dimensional (spatial) images separated by Δt in time and "stacking" them vertically. Figure 3.18 shows an EPI obtained from a side-looking camera on board a mobile robot traveling with constant velocity through our laboratory. Figure 3.19 shows three images from the lateral motion sequence used to construct the EPI.

Figure 3.18.
A lateral-motion epipolar plane image. Time advances from top to bottom, and each row is a one-dimensional image captured from the side-looking camera. Note that since dr/dt is constant, features in the environment result in linear paths through the EPI. These paths can be tracked over time by means of a simple 1D edge-detection procedure. Each row of the EPI is convolved with a 1 x 3 Prewitt edge operator [3] in the horizontal direction, thresholded, and then thinned. The resulting image consists of a series of pixel-wide traces of edges through the EPI (see Figure 3.20). We smooth out noise in the EPI by fitting a line to each trace. The quantity dr/dt of a feature is then the inverse slope of its trace. The farther a detected object is from the camera, the greater the slope of its trace will be. 41 Figure 3.19. First, middle and last images of the lateral motion sequence from which the EPI of Figure 3.18 was obtained. The horizontal lines indicate the scanlines averaged in each image. Averaging these scanlines in image (a) gives the first row of the EPI; likewise, image (c) provides the last row of the EPI. Figure 3.20. The set of lateral-motion EPI traces. 42 Figure 3.21 shows a lateral motion vision map of our laboratory generated using the HIMM method and dual side-looking cameras. By comparing this map to the sonar map of Figure 3.8, we see that lateral motion vision detects surface detail and object boundaries that ultrasound cannot. However, LMV lacks range precision, as is indicated by the high CV’s radiating from the center of the room along the line of sight from the camera. Also, LMV cannot detect objects that are visually uniform, such as the laboratory whiteboard, as shown in Figure 3.24. However, maps generated using LMV are consistent despite varying sensing times and paths, as shown for sonar grid maps (see Figure 3.25). a “ ”it": ’1' Ha I: ‘ flat (.5: ‘9' i I ' a. b .. a. ~ it“ . " I I ‘ t. _. .- A 9 '6 gt ' ’5 is, Figure 3.21. An LMV map of the PRIP Laboratory. Cell certainty values have been mapped inversely to intensity. 3.2.4 Forward Motion Vision An algorithm for forward motion vision (FMV) is given by Brooks [11]. It uses the same ID camera model as for lateral motion vision (Figure 3.15), except that the camera is assumed to be forward-looking. Figure 3.26 is an overhead view of the camera mounted parallel to the ground on a mobile robot which is moving forward 43 Figure 3.22. The PRIP Laboratory, from three views. . "L \ s . ”I. J 5“, Q I, - - *1 , I I II (b) \ .. ,a‘ ' ‘ '7 0* :r I i In I. II I~ ‘ 3 , 7,, _ _&. ...1 ~. ..I _ r ,. _ . (.7 - - - a. / " s. i (a) Figure 3.23. The LMV map of the PRIP lab. The arrows indicate the location of the views shown in Figure 3.22. The dashed line indicates the path taken by the robot. 1' Figure 3.24. Characteristics of lateral motion vision data. LMV detects the high- contrast surface markings on wall posters and the air conditioner in the lab, but cannot sense the whiteboard. It also lacks the range precision of ultrasound. 44 .- a, as ”,i P J 1 .. fl .4; , x .3. . , -. d" . -. ‘- “Pa, ‘ F .5 "r. ’1 ‘ If . a r. “a . ~~-~ “a “ ’2' ' a 1:, at ‘ ,H 1. . . - “m ’3. s JI‘T‘} ”a . ‘ e ‘ ~ v .- . 3 w .h 5?. .~ ‘W i .5 I . x ’ q' t i l'. «A . i“ L x» ”F i r". N: i. f ‘ x: , 5.. ’ ”‘5 a. xlr :- .I: _ ... F , "" ”a, a I ’ " (3‘ a I' a: .. ' ~ t .\ ‘~ . if ‘I" J” I a. ‘a ' .. ' ,3» .. e a... I; ‘ ‘7‘ 1 " ! Figure 3.25. Four LMV grid maps of the PRIP lab, taken at different times under differing exploration paths of the robot. 
45 with constant velocity. The motion vector, optical axis, and image line are all copla- nar. Only hand-alignment of the camera is required by this algorithm, so it is assumed that the camera will be turned somewhat from the direction of motion. The amount of misalignment is denoted by A and is assumed to be limited to :l:5°. Direction of motion Optical axis A PR Appaent motion PL Im e lane Apparent motion as p Ce Optical center Figure 3.26. A camera undergoing translation in the direction of robot motion. Points to the right of the center of expansion move to the right in the image; points to the left of CE move to the left in the image. Adapted from a figure in [11]. Figure 3.26 also shows the motion of projected points on the image plane as the camera is translated along the motion vector. The image of any point P3 which is to the right of the direction of motion will move rightwards in the image plane. The image of any point PL which is to the left of the direction of motion will move leftwards in the image plane. Points directly ahead will not move at all in the image. The projection of such points is called the center of expansion; CE is its coordinate in pixels. The trajectories of image points under these conditions are described by Bolles et al. [8]; Brooks then derived equations describing the distance to objects (in time-to-collision) and center of expansion under this camera system. From these, one 46 can compute the coordinates of tracked objects. Figure 3.27 shows the camera system tracking a point in the environment. Here we see that the location of the point can be described by two non-orthogonal coordinates: x2, its distance from the camera motion vector parallel to the image plane, and yg, its distance from the optical center plane parallel to the motion vector. Note that the derivative of y; with respect to time is the constant —v, where v is the magnitude of velocity of the robot. Direction of motion Opticalaxis A x P 3”}' "mil ” x2 I I I I : y2 Imageplane I y/ I A d I / . , Optical center plane ' I I ”’ f2 I .1I’ \ Ir ----El.......ll , ’ Optical center Figure 3.27. The camera system tracking a point under slight misalignment and constant forward velocity. Adapted from a figure in [11]. By similar triangles we see that the distance, r, of the image of point P from the center of expansion parallel to the image plane is given by __ f2 132 ._ y. (3.3) 47 where f; is the distance from the optical center to the center of expansion. Since :5; is constant, the derivative of r with respect to time is rt _ f 2 32 y! '- _— 2 y? and we observe that r y; y; distance —:——:—=——. =lme r’ y; v veloc1ty thus, r 312 = p ' v is the distance from the optical center plane to the object feature whose image is being observed. Since C5, the center of expansion, should remain constant, we can calculate r=R—CE, r'=RI where R is the distance of the image of point P from the left end of the image plane. Therefore, R — C E 3/2 = —R’ v. (3.4) Clearly, the accuracy of forward motion analysis is very sensitive to the estimate made of CE. Suppose one can measure R and R’ at two distinct times to and t1. Then the estimate of the distance from the image plane to P should decrease by precisely v (t1 - to). 
Writing the estimates as functions of time, we obtain R (to) — CE 3' (10) v - (t1 — to)v : R(It£’)(t_l)CEv, 48 and therefore, _ R’ (to) R(t1) —- R’(t1)R(to) + R’ (t1) R’ (t0) (t1 — to) _ R’ (to) — R' (t1) ' 013 One can estimate CE and then compute the distance from the image plane to P using Equation (3.4). However, to determine the location of P in the workspace, one needs its coordinates (x, y) in the coordinate system of the robot. From Figure 3.27 we see that y=y2+$281nA. If e is the distance in pixels along the image plane from the center of view to the center of expansion, then , c Sln A = If? Therefore, 3/ = 312 + '62 2 and by Equation (3.3) y = 312 + 6732. 2 Finally, given f2 = V C2 + f2, we have C1'3/2 y=y2+;2—+—F- Then by the Pythagorean theorem, x: xi-(y-y2)2=\I(%) -(y-y2)2- As with lateral motion vision, we can track points through a forward-motion 49 EPI, and use the preceding equations to calculate the relative coordinates of sensed features, as 2 Nb 2 f2 ( 2) cry; 3’ = 312 + W' Figure 3.28 shows an epipolar plane image obtained from a forward-looking camera on board ROME. Figure 3.29 shows three images from the forward-motion sequence used to construct the EPI. Figure 3.28. A forward-motion EPI. Equation (3.3) indicates that traces of features through a forward-motion EPI will be hyperbolas. By fitting a hyperbolic curve to each trace, noise is smoothed out of the motion sequence. Unfortunately, forward motion vision has an inherent drawback when applied to HIMM mapping. Since the camera is forward-looking, the amount of the workspace that is sensed is limited by the field of view of the camera lens, and thus very narrow. This is exemplified in Figure 3.29, where the area projected onto the image plane becomes progressively smaller as the robot advances. Therefore, F MV mapping is best used in conjunction with dual side-looking cameras using lateral motion vision, in order to sense the maximum amount of the workspace as possible. 50 Figure 3.29. First, middle and last images of the forward motion sequence from which the EPI of Figure 3.28 was obtained. Image (a) provides the first row of the EPI; likewise, image (c) provides the last row of the EPI. 51 3.2.5 Infrared Proximity Detectors Moravec [49] reports that infrared (IR) proximity detectors would also be appropriate in a grid-based representation. Unlike ultrasonic or vision sensors, IR proximity detectors return a binary value indicating when an object is detected in its field of view. For this sensor, Moravec suggests that the uncertainty profile take an elliptical shape, with high certainty values along the major axis, tapering to zero at the edges of the envelope. A typical IR proximity detector has a very narrow field of view and a range of only about half a meter. Figure 3.30 depicts a possible occupancy grid profile for this sensor using a bivariate Gaussian distribution for the uncertainty model. Figure 3.30. Occupancy grid profile for an infrared proximity detector located at (x = O,y = 0) and facing toward (x = 0.5,y = 0). A bivariate Gaussian function p(x,y I d) = 1 exp I% (g- + 13):” models sensing uncertainty. This profile 3 u 210,0, is used for probability of occupancy or vacancy, depending on whether or not the sensor detected an object. Because of the binary nature of IR data, this sensor is not easily incorporated 52 into the HIMM framework. One approach is to increment all the cells along a line in front of the sensor when the reading is “on”, and to decrement the same cells if the reading is “off”. 
3.2.5 Infrared Proximity Detectors

Moravec [49] reports that infrared (IR) proximity detectors would also be appropriate in a grid-based representation. Unlike ultrasonic or vision sensors, IR proximity detectors return a binary value indicating when an object is detected in the field of view. For this sensor, Moravec suggests that the uncertainty profile take an elliptical shape, with high certainty values along the major axis, tapering to zero at the edges of the envelope. A typical IR proximity detector has a very narrow field of view and a range of only about half a meter. Figure 3.30 depicts a possible occupancy grid profile for this sensor using a bivariate Gaussian distribution for the uncertainty model.

Figure 3.30. Occupancy grid profile for an infrared proximity detector located at (x = 0, y = 0) and facing toward (x = 0.5, y = 0). A bivariate Gaussian function $p(x, y \mid d) = \frac{1}{2\pi\sigma_x\sigma_y} \exp\left[-\frac{1}{2}\left(\frac{x^2}{\sigma_x^2} + \frac{y^2}{\sigma_y^2}\right)\right]$ models sensing uncertainty. This profile is used for probability of occupancy or vacancy, depending on whether or not the sensor detected an object.

Because of the binary nature of IR data, this sensor is not easily incorporated into the HIMM framework. One approach is to increment all the cells along a line in front of the sensor when the reading is "on", and to decrement the same cells if the reading is "off". See Figure 3.31.

Figure 3.31. Building an HIMM grid map with infrared proximity sensors.

Experimental Data Collection

Figure 3.32 shows an IR HIMM grid map of our laboratory created using the above method. Since this sensor only makes local measurements of the environment, the resulting maps are highly dependent upon the path taken by the robot during sensing. This sensor is most useful for coarse-level localization within cluttered locales. See Figure 3.33.

Figure 3.32. An IR map of the PRIP lab. The light gray regions indicate unoccupied sensed cells, and thus reveal the path taken by the robot through the workspace.

Figure 3.33. Four IR grid maps of the PRIP lab, taken at different times under differing exploration paths of the robot.

3.2.6 Tactile Sensors

A final sensor amenable to HIMM grid maps is the tactile sensor. Tactile sensors are typically implemented on mobile robots as bumpers. Like infrared proximity detectors, they are local, binary-output devices. Because of this, they are also most appropriate for mapping within highly cluttered domains. Figure 3.34 shows an approach to HIMM grid map construction using tactile sensors.

Figure 3.34. Building an HIMM grid map with tactile sensors.

3.3 Experimental Datasets

We employed ultrasonic range sensing, lateral motion vision, and infrared object detection in the HIMM framework to collect grid maps of "door" and "room" locales for the coarse-level localization classifier. When mapping a locale, three independent "sensor" grid maps (one for each sensing source) are simultaneously created (see Figure 3.35). The result is three different but corroborating representations of the same locale. Each sensor map reflects different spatial structures; together they provide a robust model of the workspace. Appendix A shows examples of the three sensor grid maps of the ten locales mapped in the Engineering Building. Ten maps were collected from each locale at different times (sometimes many days apart) and under differing exploration paths.

Figure 3.35. Sonar, Vision, and Infrared sensor grid maps of a locale.

CHAPTER 4

Grid Map Processing

Since grid maps consist of a two-dimensional array of bytes, they can be subjected to many of the same operations appropriate for intensity images. In the following sections we examine some potentially useful techniques for preprocessing and description of grid-based maps.

4.1 Preprocessing

Common image processing operations are useful for identifying spatial structures in grid maps.
For example, a threshold operation applied to the map of Figure 3.8 results in a map of the occupied regions of the locale. Combining this map with the original through a logical XOR operation shows the (disconnected) free space regions and sonar anomalies. Selecting the largest of these regions through a connected component analysis yields a (noisy) map of the free space of the room. Applying morphological dilation and erosion operators then results in a smoothed map of the free space, from which shape features may be computed. Figure 4.1 shows the result of each of these operations.

Figure 4.1. Image processing operations on a sample grid map: (a) the original map, (b) occupied regions, (c) disconnected free space and sonar anomalies, (d) the free space of the locale, (e) morphological dilation, (f) morphological erosion.

4.2 Grid Map Descriptions

In order to classify a given map as corresponding to a particular known locale, appropriate region descriptions must be chosen to represent the grid maps and provide feature vectors for the classifier. Note that since no global coordinate system is used in coarse-level localization, the grid maps can be expected to be translation and rotation variant, which must be handled by the chosen descriptions. The following sections present a few descriptions that we considered for use with our locale classifiers.

4.2.1 Fourier Descriptors

Using this method, the frequency-domain description of the (closed) boundary of the free space of a locale (as in Figure 4.1(f)) is used for shape features [3]. The discrete Fourier series expansion of the "intrinsic function" φ(k) of a boundary of length L is given as

$$\phi(k) = \frac{1}{L} \sum_{n=0}^{L-1} X(n)\, e^{j 2\pi n k / L},$$

where the coefficients X(n) are given by

$$X(n) = \sum_{k=0}^{L-1} \phi(k)\, e^{-j 2\pi n k / L},$$

and the intrinsic function is

$$\phi(s) = \psi(s) - 2\pi s / L, \qquad (4.1)$$

where ψ(s) is the tangent angle of the boundary with respect to arc length. The term 2πs/L in Equation (4.1) subtracts out the rising component of ψ(s). Figure 4.2 depicts how the intrinsic function is computed from a boundary.

Figure 4.2. Calculating ψ(s) from a boundary.

The magnitudes of the lower-order Fourier descriptors (X(n) for small n) provide useful features related to the "lobedness" of the shape that could be used to identify a locale. Furthermore, the Fast Fourier Transform (FFT) provides a computationally efficient means of calculating these features [12]. However, these features cannot account for structures internal to the map; only the boundary is considered and other information is lost.
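As a concrete illustration of the boundary description above, the following Python sketch (our own, not part of the thesis software) computes the intrinsic function of a closed boundary and the magnitudes of its lower-order Fourier descriptors. The boundary is assumed to be an ordered list of free-space boundary cells, with unit arc length between successive cells.

    import numpy as np

    def fourier_descriptors(boundary, n_coeffs=8):
        """Magnitudes |X(1)| .. |X(n_coeffs)| of the intrinsic function of a closed boundary.

        boundary -- sequence of (row, col) cell coordinates in order around the free space
        """
        pts = np.asarray(boundary, dtype=float)
        steps = np.diff(np.vstack([pts, pts[:1]]), axis=0)        # wrap around to close the boundary
        psi = np.unwrap(np.arctan2(steps[:, 0], steps[:, 1]))     # tangent angle along the boundary
        L = len(psi)
        phi = psi - 2.0 * np.pi * np.arange(L) / L                # intrinsic function, Equation (4.1)
        X = np.fft.fft(phi)                                       # Fourier coefficients X(n)
        return np.abs(X[1:n_coeffs + 1])

    # Example on a small square boundary (hypothetical data).
    square = ([(0, c) for c in range(5)] + [(r, 4) for r in range(1, 5)]
              + [(4, c) for c in range(3, -1, -1)] + [(r, 0) for r in range(3, 0, -1)])
    print(fourier_descriptors(square, n_coeffs=4))

The magnitudes for small n summarize the gross "lobedness" of the free space, in keeping with the discussion above.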
4.2.2 Standard Moments

The two-dimensional (p + q)th order moments of a gray-level image I(i, j) are defined as [44]:

$$m_{pq} = \sum_i \sum_j i^p j^q\, I(i, j), \qquad p, q = 0, 1, 2, \ldots$$

where I(i, j) is typically the intensity value of a digitized image at row i, column j. For our purposes, it is the certainty value of the grid map at coordinate (i, j). If instead of I(i, j) we use its binary representation B(i, j),

$$B(i, j) = \begin{cases} 1 & \text{if cell } (i, j) \text{ has been sensed} \\ 0 & \text{otherwise,} \end{cases}$$

the centroid of the grid map is given as

$$\bar{x} = \frac{m_{10}}{m_{00}}, \qquad \bar{y} = \frac{m_{01}}{m_{00}},$$

where m_00 is the area of the sensed region of the map. By use of the centroid of the map, translation-invariant central moments are formulated:

$$\mu_{pq} = \sum_i \sum_j (i - \bar{x})^p (j - \bar{y})^q\, I(i, j). \qquad (4.2)$$

From this, rotation-invariant central moments are derived:

$$\mu'_{pq} = \sum_{r=0}^{p} \sum_{s=0}^{q} (-1)^{q-s}\, C^p_r\, C^q_s\, (\cos\theta)^{p-r+s} (\sin\theta)^{q+r-s}\, \mu_{p-r+s,\, q+r-s}, \qquad (4.3)$$

where

$$C^p_r = \frac{p!}{r!\,(p - r)!}$$

and θ is the orientation of the major axis of the map, calculated using B(i, j) in place of I(i, j) in Equation (4.2):

$$\theta = \frac{1}{2} \arctan\left[\frac{2\mu_{11}}{\mu_{20} - \mu_{02}}\right].$$

Finally, standard moments can be calculated, which are invariant under translation, rotation, and scale change:

$$N_{pq} = \frac{\mu'_{pq}}{\mu_{00}^{\gamma}}, \qquad (4.4)$$

where

$$\gamma = \frac{1}{2}(p + q) + 1. \qquad (4.5)$$

Note that the zeroth- and first-order standard moments for any I(i, j) will be N_00 = 1, N_01 = N_10 = 0, and N_11 = 0. Thus, six standard moments of second and third order are available for use as features. (Higher-order moments suffer from a wide dynamic range and noise from digital sampling errors.)

The standard moments technique is appealing due to its translation and rotation invariance. However, the scale of a map may be a significant discriminating feature in locale recognition, so the non-scale-invariant moments of Equation (4.3) should be considered as well. We investigated the application of these six standard moments as feature vectors in locale classification. Unfortunately, we observed that it is often difficult to reliably extract the major axis from the data, which greatly affects the resulting features. This observation is also noted by Reeves [51].

4.2.3 Moment Invariants

From the second- and third-order central moments, Hu proposed seven moment invariants that are independent of translation, rotation and scale [32]. First, normalized (scale-invariant) central moments are derived from Equation (4.2) as

$$\eta_{pq} = \mu_{pq} / \mu_{00}^{\gamma},$$

where γ is defined in Equation (4.5). The seven moment invariants are then given as

$$\varphi_1 = \eta_{20} + \eta_{02},$$

$$\varphi_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2,$$

$$\varphi_3 = (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2,$$

$$\varphi_4 = (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2,$$

$$\varphi_5 = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})\left[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2\right] + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})\left[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\right],$$

$$\varphi_6 = (\eta_{20} - \eta_{02})\left[(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\right] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03}),$$

$$\varphi_7 = (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})\left[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2\right] + (3\eta_{12} - \eta_{30})(\eta_{21} + \eta_{03})\left[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\right].$$

The first six moment invariants (φ1 to φ6) are invariant under rotation and reflection. Moment invariant φ7, however, changes its sign for reflected images of an object, so it is useful in distinguishing between mirror-image patterns. A drawback to the moment invariant technique is that there is no intuitive meaning for any of the seven invariants. Thus, it is difficult to judge which of the invariants would be most appropriate to use with a given set of data.

4.3 Experimental Data

After some experimentation, we decided to use Hu's seven moment invariants as grid map features, since this approach could account for internal structures and also provided us with the most discriminative features. Seven moment invariants can be calculated for each of the three sensor grid maps (i.e., sonar, vision, and IR); thus, twenty-one features are available for locale description. However, the moment invariants guarantee scale invariance, and this eliminates some potential discriminatory information from the feature space. So we included μ00, the zeroth-order central moment of each sensor map, in our feature set. This provides three additional features related to the size of the locale, resulting in twenty-four candidate features. We calculated μ00 and φ1, φ2, ..., φ7 for each locale map in our database.
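The per-map feature computation can be summarized by the following Python sketch, which is ours and only illustrative of the formulas above. For simplicity it takes the centroid from the certainty values themselves rather than from the binary mask B(i, j), and writes the invariants out directly; equivalent routines exist in common image-processing libraries.

    import numpy as np

    def locale_features(grid):
        """mu_00 plus Hu's seven moment invariants for one sensor grid map.

        grid -- 2-D array of certainty values (the map is treated as a gray-level image)
        """
        grid = np.asarray(grid, dtype=float)
        i, j = np.indices(grid.shape)
        m00 = grid.sum()
        ibar, jbar = (i * grid).sum() / m00, (j * grid).sum() / m00

        def mu(p, q):                      # central moments, Equation (4.2)
            return ((i - ibar) ** p * (j - jbar) ** q * grid).sum()

        def eta(p, q):                     # normalized (scale-invariant) central moments
            return mu(p, q) / m00 ** (0.5 * (p + q) + 1.0)

        e20, e02, e11 = eta(2, 0), eta(0, 2), eta(1, 1)
        e30, e03, e21, e12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
        phi1 = e20 + e02
        phi2 = (e20 - e02) ** 2 + 4 * e11 ** 2
        phi3 = (e30 - 3 * e12) ** 2 + (3 * e21 - e03) ** 2
        phi4 = (e30 + e12) ** 2 + (e21 + e03) ** 2
        phi5 = ((e30 - 3 * e12) * (e30 + e12) * ((e30 + e12) ** 2 - 3 * (e21 + e03) ** 2)
                + (3 * e21 - e03) * (e21 + e03) * (3 * (e30 + e12) ** 2 - (e21 + e03) ** 2))
        phi6 = ((e20 - e02) * ((e30 + e12) ** 2 - (e21 + e03) ** 2)
                + 4 * e11 * (e30 + e12) * (e21 + e03))
        phi7 = ((3 * e21 - e03) * (e30 + e12) * ((e30 + e12) ** 2 - 3 * (e21 + e03) ** 2)
                + (3 * e12 - e30) * (e21 + e03) * (3 * (e30 + e12) ** 2 - (e21 + e03) ** 2))
        return [m00, phi1, phi2, phi3, phi4, phi5, phi6, phi7]

A 16- or 24-dimensional pattern is then formed by concatenating these eight values for each of the sensor maps of one mapping run.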
For simplicity, we excluded the infrared proximity maps of room locales from consideration; by inspection it was evident that these maps provided no useful information in these wide-open regions. However, we did process IR grid maps for the door locales, as they are more useful in constrained environments. Thus, the room locale dataset had 100 patterns in 16 dimensions, and the door locale dataset had 100 patterns in 24 dimensions. Table 4.1 shows how the features are ordered for each pattern. We also use the notation $v^R$ to denote feature v calculated from sensor map R; e.g., $\varphi_2^V$ indicates the second moment invariant calculated on the vision grid map. Of course, not all of these features are useful for recognition, as we will see in the next chapter.

Table 4.1. Ordering of the features in the experimental dataset. Note that the IR features are not included for the room locales.

              μ00   φ1   φ2   φ3   φ4   φ5   φ6   φ7
    Sonar       1    2    3    4    5    6    7    8
    Vision      9   10   11   12   13   14   15   16
    IR         17   18   19   20   21   22   23   24

Figure 4.3 shows a star plot [5] of the class centroids in the room locale dataset. Class centroids are calculated by finding the mean feature values of all patterns in the class. The star plot depicts the data by mapping the feature values of the centroids onto the length of the radial lines in each star, beginning with the right-hand side and continuing counter-clockwise. Thus, each star in this plot is composed of 16 radial lines (not all of the lines may be visible due to their length). Conceptually, the more similar that two class centroid stars appear, the more similar the pattern classes are in the data. Note the similarity between the stars for "EB 323" and "EB 360". One would expect the classifier to have difficulty discriminating between these two locales.

Figure 4.3. A star plot of the class centroids for room locale maps.

Figure 4.4 shows a star plot of the door locale class centroids (in 24 features). The dissimilarity of some of the classes is a favorable indication that map recognition will be feasible using this dataset.

Figure 4.4. A star plot of the class centroids for door locale maps.

CHAPTER 5

Grid Map Classification

We have investigated locale recognition through grid map classification. Having generated room and door locale datasets, we experimented with several commonly used classifiers to find the one which gives the best recognition performance. We reason that this classifier could be built into a robot navigation system to perform on-the-fly coarse-level localization.

5.1 Pattern Classification

The following model is assumed: A given pattern, represented as a feature vector $x = (x_1, x_2, \ldots, x_d)$, is presented to the classifier. The classifier then assigns the pattern to one of c classes $\omega_1, \omega_2, \ldots, \omega_c \in \Omega$ based on the decision rule δ(·) and the observed feature values. Consider the classifier as a machine that selects the class corresponding to the discriminant function $g_i(x)$, i = 1, ..., c, of greatest value. Thus,

$$\delta(x) = \alpha_i \iff g_i(x) \geq g_j(x) \quad \forall j \neq i,$$

where α_i corresponds to the action "choose class ω_i". We denote the set of training patterns as $\mathcal{X} = \{(x_1, \theta_1), (x_2, \theta_2), \ldots, (x_n, \theta_n)\}$, where each x_i is a d-dimensional feature vector and each θ_i ∈ Ω is the class membership of x_i.
Likewise, the training patterns belonging to class w,- are rep- resented as X; = {21,232, . . . , 23;.“ . Except where indicated, the material for the following sections is taken primarily from [22] and [21]. 5.1 .1 Bayes Classifier The Bayes classifier for minimizing the probability of error assigns test pattern a: to class w,- according to the decision rule 63(23) = a.- 9 943) 2 93(3) Vi # i, where 94‘”) = P(wi l 13)- Thus, the discriminant function is the class a posteriori density, given by Bayes’ formula PUB |w.-)P(w.-) 5:1 11(23 le)P(wj) P(Wi I3) = Here, p(a: | w.) is the class-conditional density, and P(w,-) is the a priori probability of class cog. It is a well-known fact that if the class-conditional densities and a priori proba- bilities are exactly known, the Bayes classifier will be optimal in terms of minimum probability of error. However, in the case that the class conditional densities are estimated, the Bayes classifier is still powerful in that it accounts for class a priori probabilities. Consider the case where the class-conditional densities are multivariate Gaussian, i.e., —-1—(a: - Mar-Wm — M], (51) 1 . “(3 Iw') _ (2w)d/2|2,|1/26Xpl 2 68 where u.- and 2; are the d-component mean vector and d x d covariance matrix of the class-conditional density of w;, respectively. Dropping the denominator (which is independent of 0.2;) from the class a posteriori density, we can rewrite the discriminant function as 943) = 12(2 IWi)P(wa), or equivalently, g;(a:) = lnp(:c |w,-) + 1n P(w,-). Then, since p(:c lwg) ~ N (11,-, 23,-), we have are) = is: - u.)‘2:‘(a= — u.) —§ 1 2 ln27r— 21nl2i| +ln P(w,-). Finally, we can ignore gln 21r as an additive constant, and obtain 94:) = —;:,-(:c — u.)‘E.-“(a= — u.) - élnlzil +1n P(w.-). (5.2) 5.1.2 Minimum Mahalanobis Distance Classifier The minimum Mahalanobis distance (MMD) classifier assigns patterns to the class whose centroid p..- has the smallest Mahalanobis distance from the test pattern 2:. The squared Mahalanobis distance between two vectors a: and p,- is given as M2($,ui) = (m - MYZFIGB - M), which is twice the first term on the right hand side of Equation (5.2). Thus, this dis- tance measure implicitly models the class-conditional densities as multivariate Gaus- sian distributions (as in Equation (5.1)), but does not account for a priori class probabilities. 69 The Mahalanobis distance measure effectively scales the distance between two vectors as if each feature had unit variance. Thus, a point of equal Euclidean distance from two class centroids will have greater Mahalanobis distance from the centroid of the class with smaller overall variance. The decision rule of this classifier is then defined as 5MMD(=) = a.- 3 95(3) 2 91(3) W 751' where 942:) = —M(z,u.-). In practice, the parameters ,1; and 2,- must be estimated from the training samples for each class in order to calculate M(:c, [1,“). A straightforward approach is to apply maximum likelihood parameter estimation; here, u..- and E.- are estimated by the class w,- sample mean and covariance given by . 1'“ - fli=—Z‘Bi nf k=1 and . 1 "i . . i . xi = a“ 2031 — ”0(3). — I‘d)" ' k=l To obtain a reasonable estimate of the covariance matrix 53,-, the number of train- ing samples must be 5 to 10 times d, the dimensionality of the data [34]. Furthermore, if there are fewer than d training samples in a class, 2, will be singular and inversion of the matrix is impossible. A common solution to this dilemma is to assume uncor— 2 related features; thus, if (if, 62, . . . 
5.1.3 K-Nearest-Neighbor Classifier

The nearest-neighbor (NN) classifier assigns a test pattern x to the same class ω_i as the training pattern $x_r^i \in \mathcal{X}_i$ nearest to x in the feature space. In discriminant function formulation,

$$\delta_{NN}(x) = \alpha_i \iff g_i(x) \geq g_j(x) \quad \forall j \neq i,$$

where

$$g_i(x) = -\|x - x_r^i\| \quad \text{and} \quad \|x - x_r^i\| \leq \|x - x_k^i\| \quad \forall x_k^i \in \mathcal{X}_i,\ k \neq r.$$

It can be shown that as the number of training samples, n, goes to infinity, the error rate of the nearest-neighbor classifier is bounded by twice the optimal (Bayes) error rate.

The nearest-neighbor classifier can be extended to the k-nearest-neighbor (K-NN) classifier. In this case, the test pattern x is assigned to the class most frequently represented among the k training samples nearest to x. Let K_i(x, k) be the number of patterns from class ω_i among the k nearest training patterns to x. Then,

$$\delta_{KNN}(x) = \alpha_i \iff g_i(x) > g_j(x) \quad \forall j \neq i,$$

where

$$g_i(x) = K_i(x, k).$$

Ties must be handled or avoided by proper choice of k, randomization, rejection, or defaulting to the 1-NN case. It can be shown that as k and n go to infinity (with k/n approaching zero), the error rate of the k-nearest-neighbor classifier approaches the Bayes error rate.

The k-nearest-neighbor rule is appealing due to its simple, intuitive approach to classification. Unfortunately, this technique requires the storage and search of all the training patterns, which is expensive to implement as n becomes large. Although parametric classifiers such as the Bayes and minimum Mahalanobis distance classifiers are powerful tools, in finite-sample-size cases nonparametric classifiers can outperform them. Fix and Hodges [30] concluded that if the parametric model of the class distributions is incorrect or if insufficient training samples are available to properly estimate the parameters for the model, then the k-nearest-neighbor classifier may outperform a parametric classifier.

5.2 Error Estimation

In order to evaluate the performance of a particular classifier, it is desirable to have an estimate of its probability of misclassification or "error rate". This is typically done by presenting the classifier with a test dataset; the probability of misclassification is
5.2 Error Estimation

In order to evaluate the performance of a particular classifier, it is desirable to have an estimate of its probability of misclassification, or "error rate". This is typically done by presenting the classifier with a test dataset; the probability of misclassification is then estimated as the percentage of samples that were misclassified.

Suppose that $r$ is the number of samples out of $n$ independent, identically distributed test patterns that were misclassified. Then the presentation of each test pattern is a Bernoulli trial and $r$ is binomially distributed:
\[ P(r \mid \epsilon) = \binom{n}{r}\,\epsilon^r (1 - \epsilon)^{n - r}, \]
where $\epsilon$ is the true but unknown error rate of the classifier. The maximum likelihood estimate $\hat{\epsilon}$ of $\epsilon$ is given by
\[ \hat{\epsilon} = \frac{r}{n}. \]
In the case where $n$ is large ($n\hat{\epsilon}(1 - \hat{\epsilon}) > 5$), we know by the DeMoivre-Laplace theorem [7] that
\[ \frac{\hat{\epsilon} - \epsilon}{\sqrt{\hat{\epsilon}(1 - \hat{\epsilon})/n}} \sim N(0, 1), \]
which can be used to compute a confidence interval for $\epsilon$.

A problem in classifier design is that one may have only a finite number of samples with which to both design and evaluate the classifier. With the resubstitution error estimate (also called the apparent error rate), the same set of samples is used to train the classifier as well as to test it. It is well known that the resulting error estimate is optimistically biased.

In the holdout method of error estimation, the available data is divided into training and testing datasets; the training patterns are used to design the classifier, and the percentage of misclassified test patterns is taken as the error rate of the classifier. This estimate is pessimistically biased.

Unfortunately, when the number of available samples is limited, we encounter a dilemma in dividing the data into test and training samples. If we reserve most of the data for the design of the classifier, we will lack confidence in the resulting error rate estimate. If we reserve most of the data for testing the classifier, we cannot obtain a good classifier design.

The leave-one-out error estimate is especially useful in the case of small sample sizes. Suppose a total of $n$ samples are available to design as well as test the classifier. Here, the classifier is designed with $(n - 1)$ samples and tested on the remaining sample. This is repeated $n$ times, leaving out a different pattern each time. The error estimate is then the total number of misclassified patterns divided by $n$. Although the leave-one-out estimate makes efficient use of the available samples and is unbiased, it has a large variance and requires the design of $n$ different classifiers.

Bootstrap error estimation techniques [19, 36] provide a more accurate estimate of classifier performance when the number of available samples is limited. By resampling the original data, these methods create "fake" data sets with which to train and test the classifier. We define bootstrap sample $\mathcal{X}^{*b}$ as a set $\{(x_1^*, \theta_1^*), (x_2^*, \theta_2^*), \ldots, (x_n^*, \theta_n^*)\}$ randomly selected from the training set $\mathcal{X}$ with replacement; $\mathcal{X}^{*b}$ is the $b$-th of these sets. Furthermore, let $A(x, \mathcal{X}^{*b})$ be the classifier decision rule based on training set $\mathcal{X}^{*b}$. We then define the indicator function $Q[\cdot,\cdot]$ as
\[ Q\!\left[\theta, A(x, \mathcal{X}^{*b})\right] = \begin{cases} 0 & \text{if } A(x, \mathcal{X}^{*b}) = \theta, \\ 1 & \text{otherwise.} \end{cases} \]

The E0 bootstrap estimator counts the number of misclassified training patterns which do not appear in the bootstrap sample. This estimate is obtained by summing the misclassified samples over all bootstrap samples and dividing by the total number of training patterns not appearing in the bootstrap samples. Let $A_b = \{i \mid (x_i, \theta_i) \notin \mathcal{X}^{*b}\}$ denote the index set of training patterns which do not appear in the $b$-th bootstrap sample. Then we define the E0 estimator as
\[ E0 = \frac{\sum_{b=1}^{B} \sum_{i \in A_b} Q\!\left[\theta_i, A(x_i, \mathcal{X}^{*b})\right]}{\sum_{b=1}^{B} |A_b|}, \]
where $|A_b|$ denotes the cardinality of set $A_b$. Efron [24] suggests that $B$ need not be greater than 200.

To compensate for the pessimistic bias of the E0 estimator, Efron proposes the (unbiased) E632 error estimator, defined as
\[ E632 = 0.368 \cdot App + 0.632 \cdot E0, \]
where $App$ is the apparent error rate of the classifier. This estimator has smaller variance than leave-one-out error estimates [36].
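A minimal sketch of the E0 and E632 computations is shown below, assuming NumPy arrays and a generic train_and_classify routine; this interface is hypothetical and only illustrates the bookkeeping described above.

```python
import numpy as np

def bootstrap_e632(X, y, train_and_classify, B=200, seed=0):
    """E0 and E632 bootstrap error estimates.
    train_and_classify(X_tr, y_tr, X_te) returns predicted labels for X_te."""
    rng = np.random.default_rng(seed)
    n = len(y)
    miss = 0          # misclassified left-out patterns, summed over bootstrap samples
    left_out = 0      # total number of left-out patterns, summed over bootstrap samples
    for _ in range(B):
        idx = rng.integers(0, n, size=n)           # draw n patterns with replacement
        out = np.setdiff1d(np.arange(n), idx)      # A_b: patterns not in the bootstrap sample
        if out.size == 0:
            continue
        pred = train_and_classify(X[idx], y[idx], X[out])
        miss += int(np.sum(pred != y[out]))
        left_out += out.size
    e0 = miss / left_out
    app = float(np.mean(train_and_classify(X, y, X) != y))   # apparent error rate
    return e0, 0.368 * app + 0.632 * e0                      # (E0, E632)
```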
5.3 Feature Selection

A fundamental issue in pattern recognition is to determine which features should be employed to obtain the smallest classification error for a given problem. Jain and Dubes [35] identify three reasons why dimensionality reduction is necessary:

1. If only a finite number of training samples is available, classifier performance is known to deteriorate when superfluous features are included. This phenomenon is called the curse of dimensionality [34].

2. Measurement and classification cost is reduced if fewer features are used, resulting in greater efficiency.

3. Successful estimation of class-conditional density parameters requires a large ratio of training patterns to dimensionality. For example, Kalayeh and Landgrebe [37] recommend that there be five to ten times as many training patterns as there are features in the data. If additional training samples are unavailable, one must resort to limiting the number of features considered.

In order to evaluate the effectiveness of a chosen feature subset, some criterion function must be used. Typically this is the estimated error rate of the classifier or some measure of class separation in the feature space. Here we discuss a few of the common approaches to feature selection.

Cover and Van Campenhout [16] have shown that to guarantee the selection of the best subset of $p$ features out of $d$ measurements, one must examine all $\binom{d}{p}$ possible subsets of size $p$. Of course, this exhaustive search explodes combinatorially for problems with even moderate dimensionality. In the likely event that exhaustive search is infeasible, other suboptimal feature selection methods have been proposed.

One approach is to select the $p$ individually best features from the available $d$ measurements. However, Cover [15] has shown that the best subset of size two may not even include the best individual feature. Thus, this technique will probably never lead to an optimal subset, because it does not account for the statistical dependence among the measurements.

Another method, called sequential forward selection, begins with the best individual feature and then enlarges the subset by adding, one at a time, the measurement which in combination with the already chosen features maximizes (or minimizes) the criterion function. Although still suboptimal, this straightforward approach is able to account in part for the statistical dependence among the measurements. Whitney proposed the leave-one-out error rate of a 1-NN classifier as a criterion function for sequential forward feature selection; this is known as Whitney's method [58].
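The greedy search just described can be sketched as follows; the criterion callable stands in for whatever error estimate or separability measure is used (the leave-one-out 1-NN error in Whitney's method), and the interface is an illustrative assumption rather than the original implementation.

```python
def sequential_forward_selection(candidates, criterion, n_select):
    """Greedy forward selection of feature subsets.
    candidates: iterable of feature indices.
    criterion(subset): error estimate for that subset (lower is better)."""
    selected, remaining, history = [], list(candidates), []
    while remaining and len(selected) < n_select:
        # add the single feature that, combined with those already chosen,
        # minimizes the criterion function
        best = min(remaining, key=lambda f: criterion(selected + [f]))
        selected.append(best)
        remaining.remove(best)
        history.append((tuple(selected), criterion(selected)))
    return selected, history
```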
5.4 Experimental Results

We investigated the application of the various classifiers and techniques described above to recognize grid-based maps of locales. Using our two datasets of 100 patterns, we designed and tested both K-NN and MMD classifiers for room and door locale recognition.

We considered the room locales first. Table 5.1(a) shows the result of sequential forward feature selection using a 3-NN classifier on the 16 possible room locale features. (In order to ensure that features with large variance would not dominate the feature space, we normalized the data to have zero mean and unit variance.) The first row of the table lists the single best feature found; each subsequent row adds one feature to the subset of the preceding row. We evaluate the performance of the classifier under each feature subset using the leave-one-out error estimate, denoted $\hat{\epsilon}_L$; the number of patterns misclassified out of 100 is denoted as $r$. Likewise, Table 5.1(b) shows the feature subsets selected for use with the room locale dataset and the MMD classifier. Since the number of available training samples was limited, we assumed uncorrelated features to obtain diagonal covariance matrices for each class.

Note the effect of the curse of dimensionality in the results. For both classifiers, the performance initially improves with added features, but subsequently deteriorates as the feature subset grows. Since we collected only 10 maps of each locale, we limit the feature space to the five best features as determined by sequential forward feature selection.

Table 5.1. Feature selection and classification error of room locales: (a) 3-NN classifier, (b) MMD classifier.

              (a) 3-NN                        (b) MMD
 No.  Feature    r   $\hat{\epsilon}_L$   Feature    r   $\hat{\epsilon}_L$
  1   pg,       29   0.29                 pg,       26   0.26
  2   #30        7   0.07                 pg,        8   0.08
  3   9059       5   0.05                 7,9,3      6   0.06
  4   so:        5   0.05                 7p?        5   0.05
  5   (pf        7   0.07                 up;        4   0.04
  6   (pg        8   0.08                 (p539      6   0.06
  7   7,0,3      8   0.08                 (pg        7   0.07
  8   7p;        7   0.07                 (pg        8   0.08
  9   «p?        7   0.07                 7p;       10   0.10
 10   (pg        6   0.06                 901'       9   0.09
 11   so?        6   0.06                 «pg       11   0.11
 12   cp¥        7   0.07                 1,999     13   0.13
 13   so};       6   0.06                 502V      15   0.15
 14   90;"       6   0.06                 35V       17   0.17
 15   7p}:       7   0.07                 ,9,V      17   0.17
 16   w],        8   0.08                 90K       19   0.19

We conclude that the MMD classifier outperforms the 3-NN classifier in room locale recognition. Using the five-feature subset selected in Table 5.1(b), the MMD classifier yields a leave-one-out estimated recognition rate of 96%. Note that the zeroth-order central moments were chosen as the two best discriminating features. We infer that one of these moments is related to the size of the room, and the other to the amount of "detail" in the room. Additionally, the moment invariants calculated from the sonar grid maps were ranked as more significant than those from the vision grid maps. Figure 5.1 shows pair-wise feature projections of the data using this feature set.

Figure 5.1. Pair-wise feature scatter plots of room locale data. Red denotes EB 312; green, EB 323; yellow, EB 325; orange, EB 327; yellow-green, EB 330; pink, EB 335; light blue, EB 350; dark blue, EB 352; orchid, EB 354; peru, EB 360.

Table 5.2 shows the confusion matrix resulting from the MMD classifier and the 5-dimensional room locale feature space; the locales are numbered as indicated in Table 5.3.

Table 5.2. Confusion matrix of room locale data resulting from the MMD classifier using the five features selected in Table 5.1(b). Misclassifications appear off the main diagonal.

        1    2    3    4    5    6    7    8    9   10
  1    10    0    0    0    0    0    0    0    0    0
  2     0   10    0    0    0    0    0    0    0    0
  3     0    0   10    0    0    0    0    0    0    0
  4     0    0    0   10    0    0    0    0    0    0
  5     0    0    0    0    9    0    1    0    0    0
  6     0    0    0    0    0   10    0    0    0    0
  7     0    0    0    0    1    0    9    0    0    0
  8     0    0    1    0    0    0    0    9    0    0
  9     0    0    0    0    0    0    0    0   10    0
 10     0    1    0    0    0    0    0    0    0    9

Table 5.3. The numbering of the locales, used for both room and door confusion matrices.

  1  EB 312    3  EB 325    5  EB 330    7  EB 350    9  EB 354
  2  EB 323    4  EB 327    6  EB 335    8  EB 352   10  EB 360

Here we see that room EB 330 was misclassified once as room EB 350 and vice versa. Likewise, EB 352 was misclassified once as EB 325, and EB 360 as EB 323. The sensor grid maps shown in Appendix A indicate that the misclassifications are a result of the similar size and structure of these rooms.

We also applied the 3-NN and MMD classifiers to the door locale dataset. Table 5.4(a) reports the results of sequential forward feature selection using the 3-NN classifier and the 24 available door locale features. Again, the data was normalized beforehand. Table 5.4(b) gives the feature subsets selected under the MMD classifier; we assumed uncorrelated features as before. Note that features calculated from the IR grid maps play a significant role in both classifiers, while those calculated from the vision maps are not ranked as important. We reason that this is due to the "confined" nature of these locales.
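The leave-one-out recognition rates and confusion matrices reported in this section can be produced with a routine along the following lines; the integer class labels and the train_and_classify interface are assumptions for illustration, not the original code.

```python
import numpy as np

def leave_one_out_confusion(X, y, train_and_classify, n_classes):
    """Hold out each pattern in turn, train on the remaining n-1 patterns,
    and tally the decisions in a confusion matrix (rows = true class)."""
    n = len(y)
    confusion = np.zeros((n_classes, n_classes), dtype=int)
    for i in range(n):
        keep = np.arange(n) != i                     # every pattern except the i-th
        pred = train_and_classify(X[keep], y[keep], X[i:i + 1])[0]
        confusion[y[i], pred] += 1
    error_rate = 1.0 - np.trace(confusion) / n
    return confusion, error_rate
```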
Considering the best subsets of five features reported in Table 5.4, we see that the 3-NN classifier performs best for door locale recognition. With its selected five-feature set, the leave-one-out method gives an estimated recognition rate of 99%. Thus, the classifier can identify the robot's location based on the doorway locale alone! Although the area of the mapped regions is small, the scale-invariance property of the moment invariants has the effect of "magnifying" the structures in each sensor map. Furthermore, based on the inferior results of the MMD classifier for this dataset, we conclude that either an insufficient number of patterns was available to estimate the class-conditional density parameters, or that the door locale data is not well represented by a Gaussian distribution. Projections of the data using this feature set are shown in Figure 5.2.

As before, the confusion matrix resulting from the 3-NN classifier under the 5-dimensional door locale feature space is given in Table 5.5. The door locale for EB 354 was misclassified once as the door to EB 325. Comparing this confusion matrix with the room locale confusion matrix of Table 5.2, we see that the location of the robot, as decided by the two classifiers, is in agreement 95 times out of 100. There were no cases where the door and room locale classifiers both gave the location of the robot incorrectly (i.e., for doorway and interior maps of the same room).

Finally, we used bootstrap error estimation techniques to obtain a more reliable evaluation of classifier performance for the two datasets. Table 5.6(a) reports the E632 error estimate for the MMD classifier with room locale data as 6.26%. Likewise, Table 5.6(b) gives E632 for the 3-NN classifier with door locale data as 2.50%. These correspond to recognition rates of 93.74% for room locales and 97.50% for door locales.

Table 5.4. Feature selection and classification error of door locales: (a) 3-NN classifier, (b) MMD classifier.

              (a) 3-NN                        (b) MMD
 No.  Feature    r   $\hat{\epsilon}_L$   Feature    r   $\hat{\epsilon}_L$
  1   cpl,      39   0.39                 (pg       44   0.44
  2   7p;       19   0.19                 7p;       21   0.21
  3   p30        9   0.09                 1p]       12   0.12
  4   (pf        2   0.02                 1,01?      9   0.09
  5   173',      1   0.01                 99%        5   0.05
  6   «pf        1   0.01                 of         5   0.05
  7   7p;        0   0.00                 pg,        7   0.07
  8   cpg        0   0.00                 cpf        6   0.06
  9   of         0   0.00                 pg,        5   0.05
 10   ,0:        0   0.00                 (pg        6   0.06
 11   1730       0   0.00                 so:        6   0.06
 12   pg"        0   0.00                 (pg        7   0.07
 13   (,0,       0   0.00                 (pg        7   0.07
 14   «pg        0   0.00                 go},       8   0.08
 15   «p:        0   0.00                 «p;        8   0.08
 16   gag        0   0.00                 35         9   0.09
 17   «pg        0   0.00                 p31,      10   0.10
 18   «pl'       1   0.01                 cpl"      10   0.10
 19   7,03       1   0.01                 «p:       10   0.10
 20   9p}:       1   0.01                 90X       12   0.12
 21