This is to certify that the thesis entitled TOWARD AN AUTOMATED RIVET INSPECTION SYSTEM FOR AGING AIRCRAFT presented by Unsang Park has been accepted towards fulfillment of the requirements for the Master of Science degree in the Department of Computer Science and Engineering.

Major Professor's Signature

Date

MSU is an Affirmative Action/Equal Opportunity Institution

LIBRARY, Michigan State University

TOWARD AN AUTOMATED RIVET INSPECTION SYSTEM FOR AGING AIRCRAFT

By Unsang Park

A THESIS Submitted to Michigan State University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE

Department of Computer Science and Engineering

2004

ABSTRACT

TOWARD AN AUTOMATED RIVET INSPECTION SYSTEM FOR AGING AIRCRAFT

By Unsang Park

This thesis describes work on an automated rivet inspection system for aging aircraft using magneto-optic imaging (MOI). MOI is a non-destructive evaluation technique that is used increasingly in aircraft inspection. Although MOI offers high efficiency in non-destructive inspection, the large area of material that needs periodic inspection has created a need for more efficient data interpretation methods: an automated inspection system. The proposed inspection algorithm focuses on rivets, which are one of the common places where cracks originate. Motion-Based Filtering (MBF) is developed as an effective filtering method for MOI images. MBF extracts only "moving objects" in a sequence of images and suppresses the stationary background by using a multiple frame subtraction method.
The filtered images are processed with rivet detection algorithms to properly locate rivets. Two rivet detection algorithms are developed, based on the Hough transformation and on morphological operations. The detected rivets are classified by classification algorithms implemented with the Hough transformation or the Bayes decision rule. An off-line test of the prototype automated rivet inspection system on 245 MOI rivet images showed up to 98% accuracy, although more data is needed for testing. Work is also shown on speeding up the algorithms for possible real-time use. A proof-of-concept inspection system demonstrated the capability of processing 3 to 5 images per second.

Dedicated to my sweetheart, Jung-Bun Lee, and my parents, Se-Bong Park and Kyung-Shin Lee.

ACKNOWLEDGEMENTS

I would like to thank Dr. Lalita Udpa for giving me the opportunity to do research on aircraft inspection systems. She always inspired me with new challenges and motivation during all my graduate school years. I would like to thank Dr. George Stockman for his invaluable advice on many challenging problems. He always gave me warm and kind advice regarding my research and course work. I would like to thank Dr. John Weng, who taught me many useful techniques in computer vision class. I also appreciate his comments on my thesis. I would like to thank Dr. Pradeep Ramuhalli for his advice and help in weekly meetings. I would like to thank Dr. Anil K. Jain for his advice on the sample correlation problem. I would like to thank all MOI project members, Fan Yuan, Zhiwei Zeng, and Xin Liu, for their support on the project.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES

Chapter 1: Introduction
1.1 Nondestructive Evaluation Techniques
1.1.1 Visual Inspection
1.1.2 Ultrasonic Inspection
1.1.3 Radiography
1.1.4 Eddy Current Inspection
1.2 Magneto-optic Imaging (MOI)
1.2.1 Eddy Current Excitation
1.2.2 Magneto-Optic Sensing
1.2.3 Imaging
1.3 Problem Statement
1.4 Thesis Organization

Chapter 2: Image Processing of Magneto-optic Images
2.1 Introduction
2.2 Analysis of MOI Images
2.3 Motion Segmentation
2.3.1 Background Subtraction
2.3.2 Frame Subtraction
2.3.3 Optical Flow
2.4 Motion-based Filtering (MBF)
2.4.1 Frame Subtraction with MOI Images
2.4.2 Additive Frame Subtraction (AFS)
2.4.3 Choice of w in AFS
2.4.4 Post-processing of MBF
2.4.5 Overall Algorithm of MBF
2.5 Results of MBF
2.6 Conclusion

Chapter 3: Algorithms for An Automated Rivet Inspection System With MOI
3.1 Introduction
3.2 MOI Images
3.3 Overall Approach of Automated Rivet Inspection
3.3.1 Automated Rivet Detection
3.3.2 Hough Transformation Technique
3.3.3 Morphological Operation Technique
3.4 Automated Rivet Classification
3.4.1 Two-pass Hough Transformation Classifier
3.4.2 Bayesian Classifier
3.4.3 Feature Selection
3.4.4 Distributions of the Selected Features
3.5 Results of Automated Rivet Inspection
3.6 Conclusion

Chapter 4: REAL-TIME IMPLEMENTATION OF AUTOMATED RIVET INSPECTION SYSTEM
4.1 Introduction
4.2 Optimization of Motion-Based Filtering
4.2.1 Frame Grabbing
4.2.2 RGB to Grayscale Conversion
4.2.3 Frame Subtraction and Combining
4.2.4 Thresholding
4.2.5 Median Filtering
4.2.6 Contrast Stretching
4.3 Results of the Optimized MBF
4.4 Optimizing Rivet Detection and Classification
4.5 Personal Computer-Based Proof-of-Concept System
4.6 Digital Signal Processor (DSP)-Based Prototype System
4.7 Results of Real-Time Automated Rivet Inspection System
4.8 Conclusion

Chapter 5: CONCLUSION AND FUTURE WORK
5.1 Conclusion
5.2 Future Work

BIBLIOGRAPHY

LIST OF TABLES

Table 2-1: Contrast of objects in MOI images before and after filtering.
Table 3-1: Accuracy of two rivet detection algorithms.
Table 3-2: Inspection accuracies of three different inspection algorithms.
Table 4-1: Processing time for filtering one image with the MBF algorithm using ten past images.
Table 4-2: Time for converting a color image to grayscale.
Table 4-3: Processing time of median filtering with various algorithms.
Table 4-4: Measured time for each step of MBF before and after optimization.
Table 4-5: Execution times of Hough transformation-based and morphological operation-based rivet detection.
Table 4-6: Execution time of the automated rivet inspection.
LIST OF FIGURES

Figure 1.1: Schematic of (a) ultrasonic inspection (b) oscilloscope display of the reflected waves.
Figure 1.2: Schematic of radiographic inspection.
Figure 1.3: Eddy current induction in a specimen.
Figure 1.4: Uniform sheet-type eddy current excitation.
Figure 1.5: The effects of rotating sheet-type eddy current in MOI images of rivets. (a) Linear eddy current excitation (b) Rotating linear eddy current excitation [10].
Figure 1.6: (a) Eddy current and induced B field on a flat specimen without any abnormality (b) Eddy current and induced B field on a flat specimen with an abnormality.
Figure 1.7: Magnetization of magneto-optic sensor [8].
Figure 1.8: Schematic of the MOI instrument.
Figure 1.9: MOI system including handheld sensor (left), excitation unit (center), and wearable monitor (right).
Figure 2.1: Two consecutive MOI images; dark disks represent rivets, which are moving to the left.
Figure 2.2: Average pixel intensity changes in a selected area that encloses (a) only background, and (b) both background and object.
Figure 2.3: A binary difference image from two temporally consecutive images.
Figure 2.4: The typical behavior of the white area of D_i(x, y) in a sequence of difference images. The area of D_i reaches its maximum at i = 4 in this case.
Figure 2.5: The typical behavior of the white area of D_i(x, y) in a sequence of difference images.
Figure 2.6: (a) Combined difference image (b) after stretching (c) after thresholding.
Figure 2.7: Algorithm for motion-based filtering. w denotes the number of difference images.
Figure 2.8: (a),(b),(c),(d) MOI images in time sequence. Rivets are moving to the left while the sensor is moving to the right; (e) Motion-based filtered image of the original image (d).
Figure 2.9: (a),(b),(c),(d) Synthetic MOI images in time sequence with uni-directional scan. Rivets are moving to the left while the sensor is moving to the right; (e) Motion-based filtered image with uni-directional scan.
Figure 2.10: (a),(b),(c),(d) Synthetic MOI images in time sequence, multi-directional scan. Rivets are moving to the left while the sensor is moving to the right with a small sinusoidal up-down motion; (e) Motion-based filtered image with multi-directional scan.
Figure 2.11: Results of MBF. Rectangular box shows selected areas for measuring contrast. (a)(b) Image 1, (c)(d) Image 2, (e)(f) Image 3.
Figure 3.1: (a) Two normal rivets (b) Right rivet has a radial crack (c) A crack between two rivets.
Figure 3.2: (a) Seam and two normal rivets (b) A crack along the seam.
Figure 3.3: (a) (b) Normal seam and two rivets (c) (d) Crack along the seam.
Figure 3.4: Sequence of images with corrosion dome. Corrosion is moving from upper left to lower right, as the MOI sensor is moving.
Figure 3.5: Images of normal objects appear differently because of the wobbling of the MOI instrument.
Figure 3.6: Images of defective objects appear differently because of the wobbling of the MOI instrument.
Figure 3.7: (a) A schematic rivet with a radial crack (b) A schematic MOI image of the rivet with a radial crack (c) A schematic rivet with a circumferential crack (d) A schematic MOI image of the rivet with a circumferential crack.
Figure 3.8: Typical rivet images in MOI inspection after MBF filtering: (a) (b) normal rivets (c) (d) defective rivets.
Figure 3.9: Overall approach of the automated rivet inspection algorithm.
Figure 3.10: Schematic of voting in a 3-dimensional accumulator in Hough transformation. (a) Each edge votes for its center coordinate (b) Accumulator collects the votes.
Figure 3.11: (a) Original MOI image (b) Rivet detection by morphological operation.
Figure 3.12: Schematic of connected components and bounding rectangles in an image.
Figure 3.13: Morphology of structuring element. (a) Square shape (b) Circular shape.
Figure 3.14: Example results of intermediate steps of morphological operation for rivet detection with intermediate variables: (a) initial rivet image (b) after iterative erosion operation (c) after rivet detection.
Figure 3.15: Rivet detection algorithm using morphological operation.
Figure 3.16: (a) Original image. (b) Rivet detection by morphological operation.
Figure 3.17: Second-pass Hough transformation for rivet classification. E' represents the image of the rivet edge that is outside of the circle detected in the first Hough transformation.
Figure 3.18: Two-pass Hough transformation method: (a) raw image (b) after first Hough transformation (c) after second Hough transformation. Right rivet is classified as defective.
Figure 3.19: Graphical representation of the feature f for the Bayesian classifier (a) Hough transformation-based rivet detection (b) Morphological operation-based rivet detection.
Figure 3.20: The distributions of feature f from (a) Hough transformation-based rivet detection (b) Morphological operation-based rivet detection.
Figure 4.1: Effect of median filtering in MBF. (a) Before median filtering. (b) After median filtering with 5 by 5 kernel. Bright objects represent rivets and corrosion.
Figure 4.2: A proof-of-concept system for real-time automated rivet inspection.
Figure 4.3: A prototype system for the real-time automated rivet inspection system.
Figure 4.4: Results of the C++ proof-of-concept system. (a) Raw and processed MOI image without defect. (b) Raw and processed MOI image with a defective rivet; the raw image is boxed to highlight the defective rivet.

Chapter 1: Introduction

Nondestructive evaluation (NDE) is widely used in inspecting aircraft structures to detect surface and subsurface defects. The Magneto-Optic Imager (MOI) is a powerful device for the Nondestructive Inspection (NDI) of aging aircraft. MOI produces analog images of the magnetic flux leakage associated with the eddy current distribution around surface and subsurface structures. The main advantages of using MOI are its fast inspection speed and easy interpretation compared with conventional eddy current NDI instruments. However, due to the magnetic domain wall structures of the sensor, MOI images are corrupted by serpentine pattern noise, which degrades the MOI inspection capabilities. The noise also poses difficulties in the quantitative analysis of MOI images, which is one of the crucial requirements for automatic inspection.

This thesis advances the state of rivet inspection from MOI along several lines. The proposed automated rivet inspection algorithm is composed of three major steps: Motion-based Filtering (MBF), rivet detection, and rivet classification. MBF is developed to preprocess the MOI images and remove the background noise using a multiple frame subtraction method. Objects, such as rivets and seams, persist from frame to frame and appear to move, as shown in Figure 1.1. From the processed image, rivets are detected using a Hough transformation or a morphological operation. Finally, detected rivets are classified as normal or defective by an additional Hough transformation method or a Bayesian classifier.
Algorithms shown to be successful are transformed in order to speed up the processing for real-time application.

Figure 1.1: A sequence of sample MOI images, (a), (b), (c). Rivets and seam appear to move as the MOI scans over the sample. (Text in the image came with the original data from Boeing and is not clearly legible. Throughout this thesis some original images have been enhanced for acceptable printing.)

1.1 Nondestructive Evaluation Techniques

Nondestructive Evaluation (NDE) is the inspection of materials and components for determination of their integrity or detection of any abnormality, without impairing their usefulness [1]. NDE is used for quality control of products in manufacturing and for monitoring the product during operation to assess its remaining life. Based on the physical principles they rely on, NDE methods are categorized as visual inspection, ultrasonic inspection, radiographic inspection, eddy current inspection, etc. A brief description of some of the widely used NDE techniques is provided in the following sections.

1.1.1 Visual Inspection

Visual inspection is the oldest and most widely used of all the NDE techniques. It is usually performed as a first step of the evaluation process because of its simplicity and low cost. The specimen is illuminated with light and inspected by the human eye. Using optical instruments such as microscopes, telescopes, and holography may increase the visual inspection capability [1]. Visual inspection can detect and analyze only surface abnormalities; more sophisticated techniques are needed for subsurface inspection.

1.1.2 Ultrasonic Inspection

Ultrasonic inspection is a versatile NDE technique applicable to most materials, metallic or non-metallic. It utilizes high-frequency acoustic waves to detect surface and subsurface defects [1]. The ultrasonic energy travels through a test sample and reflects when it strikes a discontinuity. The reflected ultrasonic energy provides information related to subsurface defects.
From the propagation distance of the ultrasonic energy, derived from propagation velocity and time, the specimen thickness and the location of defects are calculated [2]. An estimate of the shape of the defect is also obtained by moving the probe around the defect and measuring the defect echo. Figure 1.2 shows the geometry of a specimen with an internal defect and its corresponding pulse-echo response.

Figure 1.2: Schematic of (a) ultrasonic inspection (b) oscilloscope display of the reflected waves (crack echo and back wall echo).

If L is the thickness of the specimen and v is the corresponding velocity, echoes from the specimen back wall and from a defect at depth d appear on the oscilloscope at times 2L/v and 2d/v, respectively. Ultrasonic energy may propagate in any material that behaves in an elastic manner and is therefore applicable to many types of materials, including metal, rubber, and plastics.

Ultrasonic inspection has several advantages. First, it is portable and needs only one accessible surface of the specimen. Second, it can be used for inspecting large thicknesses of material (a few meters). Third, it is relatively inexpensive and, finally, it allows rapid and automated inspection. However, ultrasonic inspection has a couple of limitations. First, it needs a coupling medium to couple the ultrasonic energy into the specimen; the test specimen is either immersed in water or a coupling gel is applied. Second, with a coarse-grained specimen, the penetration depth may be reduced to as little as 50 or 100 mm at high frequencies, and reflections from grain boundaries result in significant noise in the signal. Third, highly skilled operators are required for data interpretation.

1.1.3 Radiography

Radiography is one of the most widely used NDE methods for the detection of internal defects such as porosity and voids [1]. Planar defects can also be detected with proper orientation.
Radiography uses short-wavelength electromagnetic radiation, such as X-rays or gamma rays, that can penetrate materials. As the radiation enters the material, it is absorbed, and the amount of absorption depends on the density and thickness of the material [3]. Internal defects can be detected by observing the variation in the amount of absorption of the radiation. The degree of absorption is usually recorded on a film which is sensitive to the radiation source, as in Figure 1.3. The location and shape of the defect are directly reflected on the film after irradiation.

Figure 1.3: Schematic of radiographic inspection (specimen with internal defect, film, and defect image).

Radiography can be used for most types of solid materials, both ferrous and nonferrous alloys, as well as nonmetallic materials and composites. However, radiographic inspection has several limitations. First, it is not portable, and it is often difficult or impossible to position the film and the source of radiation to obtain a radiograph of the desired area. Second, the depth of inspection is limited by the degree of penetration of the radiation in the specimen; the degree of penetration varies depending on the material. Third, the defects must be parallel to the radiation beam and must be at least 2% of the thickness of the specimen. Fourth, the equipment, including the radiation source generator and safety facility, is rather expensive.

1.1.4 Eddy Current Inspection

Eddy current inspection has been used for over four decades as a leading in-service NDE method. It can be applied to irregularly shaped conductive material for detecting surface and subsurface abnormalities such as cracks and corrosion. Eddy current testing uses alternating current flowing in a coil to produce an alternating magnetic field in accordance with Ampere's Law:

\nabla \times H = J \qquad (1\text{-}1)

where H is the magnetic field in Ampere/meter and J is the current density in Ampere/meter².
Faraday's Law of Induction provides the basis for electromagnetic induction and is given by the equation:

\nabla \times E = -\frac{\partial B}{\partial t} \qquad (1\text{-}2)

where E is the time-varying electric field intensity in Volt/meter and B is the time-varying magnetic flux density in Weber/meter². This equation illustrates that a time-varying magnetic field B will induce an emf in a conducting path. When the coil is brought close to an electrically conducting material, an eddy current is induced in the material due to electromagnetic induction. The induced eddy current in the material generates a secondary magnetic field opposite in direction to the primary field. Figure 1.4 shows the schematic of electromagnetic induction in the specimen by the eddy current probe coil [2].

Figure 1.4: Eddy current induction in a specimen (primary magnetic field, secondary magnetic field, and secondary eddy currents).

The secondary magnetic field may be detected either as a voltage change across a second coil or by the perturbation of the impedance of the original coil. This impedance change is correlated with the surface and subsurface structure of the material, and hence is used for detecting defects in materials. The eddy currents in an infinitely large planar sample decay exponentially from the surface of the sample according to the equation [4]:

J = J_0 \exp\left(-\frac{y}{\delta}\right) \qquad (1\text{-}3)

where y is the distance below the surface of the sample, J_0 is the current density at the surface of the sample, and \delta is the depth of penetration. The depth of penetration is defined as [5]:

\delta = \frac{1}{\sqrt{\pi f \mu \sigma}} \qquad (1\text{-}4)

where \sigma is the conductivity, \mu is the relative permeability, and f is the inspection frequency. As shown in equation (1-4), the frequency should be carefully chosen to keep the depth of penetration large enough to reach the subsurface defects.

Eddy current inspection has several advantages. First, it is portable and needs access to only one side of the specimen. Second, the equipment is inexpensive relative to radiographic instruments.
Third, it allows rapid and automated inspection. The major drawback of eddy current testing is that it can be used only for electrically conductive materials. Another drawback is that the depth of inspection is limited to a few millimeters [1].

1.2 Magneto-optic Imaging (MOI)

Magneto-optic imaging (MOI) is a variant of the eddy current inspection method, and it is becoming increasingly practical for defect assessment in aircraft structures. This relatively new nondestructive testing method gives the inspector the ability to quickly generate real-time images of defects in large surface areas [6][7][8][9]. MOI is based on a combination of the principles of magneto-optic effects and eddy current induction. The theory and principles of an MOI system are presented in the following sections.

1.2.1 Eddy Current Excitation

MO imaging differs in its operation from conventional eddy current testing in that MOI generates uniform sheet eddy currents in the specimen over the area of the magneto-optic sensor, as shown in Figure 1.5 [8].

Figure 1.5: Uniform sheet-type eddy current excitation (primary and secondary linear eddy currents).

The rectangular gray area on the specimen in Figure 1.5 is the active area of inspection. Alternatively, MOI uses a rotating eddy current to eliminate the necessity of aligning the induced current perpendicular to the long axis of cracks for successful detection. The rotating eddy current is provided using an alternate configuration with two separate primary coils connected to a single foil by two single-turn secondary windings. The two source coils must produce currents perpendicular to each other, and they must be out of phase by ninety degrees [7]. The current density in the specimen is represented by the equation:

J = J_0 \sin(\omega t + \theta)

…

> 0}: positive intensity. To simplify the problem, we replace the pixel intensities for D_i^3 by zero.
As a result, the difference image pixels in D_i^1, D_i^2, and D_i^3 have zero intensities, and only the region D_i^4 has positive intensity values. Therefore the resulting difference image can be divided into two distinct regions, with zero and non-zero intensities, and the non-zero intensity area, D_i^4, belongs to the object, O_n, in I_n(x, y), as illustrated in Figure 2.3.

2.4.2 Additive Frame Subtraction (AFS)

In Figure 2.3, part of the object is also lost while the background noise is substantially removed. The part of the object that is lost by subtraction belongs to D_i^1.

Figure 2.3: A binary difference image from two temporally consecutive images.

The entire extent of moving objects, including D_i^1, can be recovered using multiple difference images. This is done by subtracting the current image from several past images and combining the difference images using an OR operation, to get the binary image S_{n,OR}, or a MAX operation, to get the grayscale image S_{n,MAX}, as described below:

S_{n,OR}(x, y) = \mathrm{OR}_{1 \le i \le w}\{D_i(x, y)\} \qquad (2\text{-}6)

S_{n,MAX}(x, y) = \mathrm{MAX}_{1 \le i \le w}\{D_i(x, y)\} \qquad (2\text{-}7)

where S_{n,OR} or S_{n,MAX} is the integrated difference image with the OR or MAX operation, and w represents the number of difference images. In the OR operation, each pixel value S_{n,OR}(x, y) is the result of an OR operation on {D_1(x, y), ..., D_w(x, y)}, which is zero if all D_i(x, y) are zero, and 1 otherwise. In the MAX operation, each pixel value S_{n,MAX}(x, y) is the result of a MAX operation on {D_1(x, y), ..., D_w(x, y)}, which is the maximum value of D_i(x, y), 1 ≤ i ≤ w. The method described above is called Additive Frame Subtraction. In practice, S_{n,OR} or S_{n,MAX} can be computed recursively as described below:

S_{k,OR}(x, y) = \mathrm{OR}\{S_{k-1,OR}(x, y), D_k(x, y)\}, \quad S_1 = D_1, \quad 2 \le k \le w \qquad (2\text{-}8)

S_{k,MAX}(x, y) = \mathrm{MAX}\{S_{k-1,MAX}(x, y), D_k(x, y)\}, \quad S_1 = D_1, \quad 2 \le k \le w \qquad (2\text{-}9)

where S_{k,OR} is S_{n,OR} and S_{k,MAX} is S_{n,MAX} when k = w. Selecting the optimal value of w is an issue in Additive Frame Subtraction. Two possible techniques for choosing w are presented below.
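Before turning to the choice of w, the combination step itself can be sketched in a few lines. This is only an illustration in Python with images as nested lists, not the thesis's C++ implementation; it assumes each difference image D_i is formed as the positive part of subtracting a past frame from the current frame, so that the stationary background cancels:

```python
def frame_difference(curr, past):
    """D_i(x, y): positive part of subtracting a past frame from the
    current frame (negative values are clipped to zero)."""
    return [[max(c - p, 0) for c, p in zip(cr, pr)]
            for cr, pr in zip(curr, past)]

def additive_frame_subtraction(frames, w):
    """Combine the current frame's differences against the w most recent
    past frames, recursively as in (2-8)/(2-9).
    Returns (S_OR, S_MAX): the binary and grayscale combined images."""
    curr, past = frames[-1], frames[-w - 1:-1]
    s_max = frame_difference(curr, past[-1])      # S_1 = D_1
    for i in range(2, w + 1):                     # S_k from S_{k-1}
        d_i = frame_difference(curr, past[-i])
        s_max = [[max(s, d) for s, d in zip(sr, dr)]
                 for sr, dr in zip(s_max, d_i)]
    s_or = [[1 if v > 0 else 0 for v in row] for row in s_max]
    return s_or, s_max
```

For a bright object three pixels wide moving one pixel per frame, w = 3 difference images are enough for S_MAX to recover the full object extent, whereas a single difference image recovers only the object's leading edge, which is exactly the D_i^1 loss discussed above.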
2.4.3 Choice of w in AFS

The value w, the number of difference images, needs to be chosen with care. In general, w is a function of the object size and the scanning velocity. If w is too small, the algorithm will fail to extract the entire object from the MOI sequence; if w is too large, it will result in a large computation time. The optimal value of w can be obtained in two ways.

(i) Static method: Let v be the scanning velocity (inches/sec), s the object size along the scanning direction (inches), and f the frame rate (frames/sec). The relationship between w, v, s, and f is expressed as

w = s·f / v    (2-10)

In equation (2-10), we can predefine w only if we set the scanning velocity to a constant, because s and f are constant.

(ii) Dynamic method: An alternative approach is to calculate the area of the region with non-zero intensity, i.e., the number of pixels in D_i^4 in the difference image D_i(x, y). As illustrated in Figures 2.4 and 2.5, the area of D_i(x, y) typically increases with i and reaches a maximum when the objects in I_n and I_{n-i} are completely separated. An optimal estimate of w can be chosen as the i at which the area reaches its maximum.

Figure 2.4: The typical behavior of the white area of D_i(x, y) in a sequence of difference images. The area of D_i reaches its maximum at i = 4 in this case.

Figure 2.5: The typical behavior of the white area of D_i(x, y) plotted against i.

2.4.4 Post-processing of MBF

Post-processing is performed to enhance the image contrast and remove arbitrary isolated noise pixels. As discussed in Section 2.3.3, the output of Additive Frame Subtraction can be binary or grayscale. The grayscale filtered images are processed in a few post-processing steps, while the binary filtered images are directly used as initial data for further analysis such as automatic rivet classification.
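Before turning to the post-processing steps themselves, the dynamic method for choosing w (Section 2.4.3) can be sketched in Python with NumPy. This is our reading of the text, not the thesis code: the function name, frame-list convention, and the stopping rule (stop growing once the area A_i no longer increases) are assumptions.

```python
import numpy as np

def choose_w_dynamic(frames, max_w=10):
    """Dynamic choice of w: grow i while the non-zero area A_i of the
    difference image D_i = I_{n-i} - I_n keeps increasing; return the i
    at which the area reaches its maximum (object fully separated)."""
    current = frames[-1].astype(np.int16)
    prev_area = -1
    limit = min(max_w, len(frames) - 1)
    for i in range(1, limit + 1):
        d = np.clip(frames[-1 - i].astype(np.int16) - current, 0, 255)
        area = int(np.count_nonzero(d))     # A_i, the size of D_i^4
        if area <= prev_area:
            return i - 1                    # area saturated at i - 1
        prev_area = area
    return limit
```

For an object of width s pixels moving one pixel per frame, the area grows by one column per step and saturates at i = s, matching the static estimate w = s·f/v.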
The first post-processing step performed on both binary and grayscale output images is median filtering. The median filter is defined as:

J(x, y) = median{ I(x', y') | x−w ≤ x' ≤ x+w, y−w ≤ y' ≤ y+w }    (2-11)
w = ⌊W/2⌋,  W an odd number

where J(x, y) is the filtered image, I(x, y) is the original image, and W is the square kernel window size.

The next operation is contrast enhancement. Contrast enhancement increases the contrast of the objects in the combined difference image so that they become more visible. A grayscale stretching is first applied to map the intensity distribution of the combined difference image to the range from 0 to 255. The mapping function is:

S_n'(x, y) = 255 · (S_n(x, y) − min(S_n(x, y))) / (max(S_n(x, y)) − min(S_n(x, y)))    (2-12)

A threshold operation is then performed to remove small arbitrary noise pixels in the stretched MAX image. The threshold operation is expressed as:

S_n,MAX'(x, y) = S_n,MAX(x, y), if S_n,MAX(x, y) ≥ threshold    (2-13)
S_n,MAX'(x, y) = 0, otherwise

The threshold value can be chosen by Otsu's method [22], which finds a value that separates the pixel intensity distribution into two classes. The threshold value minimizes the within-class variance in equation (2-14) and at the same time maximizes the between-class variance:

σ_w²(t) = q_1(t)·σ_1²(t) + q_2(t)·σ_2²(t)    (2-14)

where σ_w²(t) is the within-class variance, σ_1²(t) and σ_2²(t) are the variances of the two classes, q_1(t) is the probability that a pixel has an intensity value less than or equal to t, and q_2(t) is the probability that a pixel has an intensity value greater than t. The value t can be found by a sequential search through all possible values or through a recursive relationship described in [20]. Otsu's threshold values for typical combined difference MOI images were seen to lie between 50 and 80. The resulting images after each post-processing step are shown in Figure 2.6.
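The stretch-and-threshold chain of equations (2-12)-(2-14) can be sketched in Python with NumPy. This is illustrative only: Otsu's criterion is evaluated by exhaustive search over t (the sequential-search option mentioned above, not the recursive form of [20]), and the function names are ours.

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method (eq. 2-14): pick t minimizing the within-class
    variance q1(t)*var1(t) + q2(t)*var2(t) by exhaustive search."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    levels = np.arange(256, dtype=np.float64)
    best_t, best_var = 0, np.inf
    for t in range(256):
        q1, q2 = p[: t + 1].sum(), p[t + 1:].sum()
        if q1 == 0 or q2 == 0:
            continue                         # one class empty: skip split
        m1 = (levels[: t + 1] * p[: t + 1]).sum() / q1
        m2 = (levels[t + 1:] * p[t + 1:]).sum() / q2
        v1 = (((levels[: t + 1] - m1) ** 2) * p[: t + 1]).sum() / q1
        v2 = (((levels[t + 1:] - m2) ** 2) * p[t + 1:]).sum() / q2
        wv = q1 * v1 + q2 * v2               # within-class variance
        if wv < best_var:
            best_var, best_t = wv, t
    return best_t

def postprocess(s):
    """Grayscale stretch (eq. 2-12) followed by thresholding (eq. 2-13).
    The minimum is subtracted first; for MBF output min(S_n) = 0, so this
    agrees with eq. (2-12)."""
    s = s.astype(np.float64)
    stretched = 255.0 * (s - s.min()) / max(s.max() - s.min(), 1e-9)
    stretched = stretched.astype(np.uint8)
    t = otsu_threshold(stretched)
    return np.where(stretched >= t, stretched, 0).astype(np.uint8), t
```

On a strongly bimodal image the stretch pushes the two modes toward 0 and 255 and the threshold removes the low mode entirely.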
The overall motion-based filtering algorithm, summarized in the flowchart in Figure 2.7, is applied to the sequence of images to generate the sequence of filtered images. These results are presented next.

Figure 2.6: (a) Combined difference image (b) after stretching (c) after thresholding.

2.4.5 Overall Algorithm of MBF

The overall procedure for motion-based filtering is shown below and in Figure 2.7.

1. Let D_1 = I_{n-1} − I_n, S_1 = D_1, i = 2.
2. Calculate D_i = I_{n-i} − I_n.
3. Calculate S_i with the OR operation (binary) or the MAX operation (grayscale) on S_{i-1} and D_i.
4. For dynamic selection of w, calculate A_i, the area of D_i.
5. If i ≤ w (static method) or A_i > A_{i-1} (dynamic method), then set i = i + 1 and go to step 2.
6. Post-processing.

Figure 2.7: Algorithm for motion-based filtering. w denotes the number of difference images.

2.5 Results of MBF

Figure 2.8(e) shows the resulting grayscale image obtained by applying the motion-based filtering algorithm to the MOI images in Figure 2.8(a)-(d). Three difference images are obtained by subtracting the image in Figure 2.8(d) from those in Figure 2.8(a)-(c); the MAX operation is then performed on the difference images, followed by post-processing. As seen in Figure 2.8, background noise is reduced to nearly zero while moving objects are retained in their original shape.

Figure 2.8: (a),(b),(c),(d) MOI images in time sequence. Rivets are moving to the left while the sensor is moving to the right; (e) Motion-based filtered image of the original image (d).

The seam, across the width of the image, is almost lost in the filtered image in Figure 2.8(e). There are two reasons for this. First, the original image of the seam does not have enough contrast to be separated from the background.
Second, if an object extends across the image parallel to the scan direction, it appears stationary in the video sequence and hence is lost in the filtered image, as illustrated in Figure 2.9.

Figure 2.9: (a),(b),(c),(d) Synthetic MOI images in time sequence with uni-directional scan. Rivets are moving to the left while the sensor is moving to the right; (e) Motion-based filtered image with uni-directional scan.

One possible way to overcome this problem is to perform a scan orthogonal to the seam, or a multi-directional scan, so that the seam does not appear stationary. Figure 2.10 shows a synthetic video sequence obtained using a scan path that is sinusoidal about the seam direction.

Figure 2.10: (a),(b),(c),(d) Synthetic MOI images in time sequence, multi-directional scan. Rivets are moving to the left while the sensor is moving to the right with a small sinusoidal up-down motion; (e) Motion-based filtered image with multi-directional scan.

The effectiveness of the motion-based filtering was measured quantitatively on the raw and filtered grayscale images using image contrast as a criterion. Contrast is one measure of object visibility [23]. Image contrast is defined as:

C = c_o / c_b    (2-15)

where c_o and c_b are the average intensities of the object and background, respectively, determined from a sample window of each region. However, equation (2-15) cannot be used if the background intensity is equal to zero or the object intensity is lower than the background intensity. Therefore a modified contrast function is defined as:

C' = |c_o − c_b| / 255    (2-16)

The contrast of an object will be 1 in the ideal case, when the difference of intensities between the object and background is 255. Three images are selected to measure the effectiveness of the MBF in terms of contrast, as in Figure 2.11.

Figure 2.11: Results of MBF. Rectangular boxes show the selected areas for measuring contrast. (a)(b) Image 1, (c)(d) Image 2, (e)(f) Image 3.
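The modified contrast of equation (2-16) can be computed as follows. This is an illustrative sketch; passing the sample windows as (row0, row1, col0, col1) tuples is a convention of ours, not from the thesis.

```python
import numpy as np

def contrast(img, obj_box, bg_box):
    """Modified contrast C' = |c_o - c_b| / 255 (eq. 2-16), where c_o and
    c_b are mean intensities inside the object and background windows.
    Boxes are (row0, row1, col0, col1), half-open as in NumPy slicing."""
    r0, r1, c0, c1 = obj_box
    c_o = img[r0:r1, c0:c1].mean()
    r0, r1, c0, c1 = bg_box
    c_b = img[r0:r1, c0:c1].mean()
    return abs(c_o - c_b) / 255.0
```

For a bright object (255) on a zero background the measure reaches its ideal value of 1.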
A rectangular box of size 30x30 pixels is selected in the background and in the object of each image to measure the contrast. The measured contrasts of objects in the MOI images of Figure 2.11, before and after filtering, are shown in Table 2.1.

Table 2-1: Contrast of objects in MOI images before and after filtering.

            Original    Filtered
Image 1       0.14        0.71
Image 2       0.17        0.62
Image 3       0.19        0.49
Average       0.17        0.61

2.6 Conclusion

In this chapter, a new filtering method for enhancing MOI images was presented. The MBF separates moving parts from stationary parts in a sequence of images. The proposed method is shown to effectively remove the serpentine background noise associated with domains in MOI images. This can help the human operator interpret the MOI data more accurately. The result of MBF also provides a binary image output for automated rivet inspection. The following chapter studies automated aircraft inspection methods based on the MB-filtered MOI images.

Chapter 3: Algorithms for An Automated Rivet Inspection System With MOI

3.1 Introduction

This chapter introduces signal processing algorithms for automated rivet inspection in airframe structures. Among the various types of problems in aircraft inspection, the detection of cracks under rivets is one of the major challenges facing the aviation industry. In the inspection of the aircraft skin, the large number of rivets makes manual inspection time-consuming and laborious. The human operator needs to scan a large area around every rivet and analyze the acquired images. In practice, the majority of the rivets are good and only a few are defective. This causes the human operator to become accustomed to normal rivets and to tend to overlook defective ones. An automated rivet inspection system should increase the speed, accuracy, consistency, and hence reliability of the inspection.
The rivet inspection algorithm can be used in manual scanning to assist the human operator or in a fully automated robotic inspection system. The rivet inspection algorithm developed in this thesis comprises three major steps: preprocessing, rivet detection, and rivet classification. This chapter explains the implementation details of each stage of the inspection algorithm and presents the inspection results on MOI data.

3.2 MOI Images

Some examples of MOI images of cracks and corrosion are shown in this section. Distinctive features of the defect images will be used as the basis for classification. Sample MOI images of cracks around rivets are shown in Figure 3.1.

Figure 3.1: (a) Two normal rivets (b) Right rivet has a radial crack (c) A crack between two rivets.

Figures 3.2 and 3.3 show examples of MOI images of a crack around a seam.

Figure 3.2: (a) Seam and two normal rivets (b) A crack along the seam.

Figure 3.3: (a)(b) Normal seam and two rivets (c)(d) Crack along the seam.

In Figure 3.2, the crack is seen as a large object (indicated by an arrow), whereas in Figure 3.3, the crack is seen as a breakage in the seam. This difference is due to the geometrical configurations of the crack and seam. Figure 3.4 presents sample MOI images of a corrosion dome.

Figure 3.4: Sequence of images with a corrosion dome. The corrosion is moving from upper left to lower right as the MOI sensor is moving.

The serpentine pattern noise is particularly dominant in the corrosion image, and severely degrades the inspection capability of MOI. As observed in the previous examples, MOI images of defects do not represent the exact shape of the defects, and the defect images appear differently depending on the airframe structure around them. The defect images are also validated by the finite element modeling technique [24]. Another challenge for MOI is that the images are affected by variations in the scanning procedure, as shown in Figure 3.5.
Figure 3.5: Images of normal objects appear differently because of the wobbling of the MOI instrument.

The sensor wobble causes a variation in the distance between the test sample and the MOI instrument, which in turn causes variations in the induced magnetic fields and hence in the MOI images of the same objects. Such variation in the data from the same object presents a significant challenge to the development of automated rivet recognition and classification. The variation of MOI images with defective rivets is shown in Figure 3.6.

Figure 3.6: Images of defective objects appear differently because of the wobbling of the MOI instrument.

Assuming some prior knowledge about the test geometry and defect images, algorithms for automated rivet inspection are discussed in the next section.

3.3 Overall Approach of Automated Rivet Inspection

With the wide variation in the shapes of defects in MOI images, different inspection algorithms need to be developed for each type of defect. Among the various problems, rivet inspection is one of the most important because rivets are common sites where cracks develop. The cracks around a rivet are classified into two categories, radial and circumferential, according to their configuration relative to the rivet, as described in Figure 3.7.

Figure 3.7: (a) A schematic rivet with a radial crack (b) A schematic MOI image of the rivet with a radial crack (c) A schematic rivet with a circumferential crack (d) A schematic MOI image of the rivet with a circumferential crack.

A rivet inspection algorithm for detecting radial cracks around a rivet is discussed in the remainder of this chapter. A typical rivet has a roughly circular or oval shape, with an additional protruding blob when there is a radial crack, as shown in Figure 3.8.

Figure 3.8: Typical rivet images in MOI inspection after MBF filtering: (a)(b) normal rivets (c)(d) defective rivets.
The overall approach of automated rivet inspection is a three-step procedure, as depicted in Figure 3.9.

Figure 3.9: Overall approach of the automated rivet inspection algorithm: motion-based filtering, rivet detection, and rivet classification.

The raw image obtained by MOI is fed to the MBF module described in Chapter 2 to generate a binary image. Ideally, the binary image is devoid of background noise and contains all information about objects in the raw image such as rivets, defects, and corrosion. The binary image is used in the subsequent rivet detection and rivet classification modules. MBF was discussed in Chapter 2, so the following sections discuss issues related to rivet detection and rivet classification.

3.3.1 Automated Rivet Detection

We introduce two different approaches for rivet detection. The first approach is based on the circular Hough transformation and the second is based on a morphological image processing method. Both rivet detection methods identify all circular objects in the images. Both algorithms detect a circle that encloses the rivet but not the blob associated with the defect.

3.3.2 Hough Transformation Technique

The Hough transformation was originally proposed to detect straight lines in a given image. Later, the circular Hough transformation was proposed as an efficient method for detecting circles in an image [25][26]. Rivet detection based on the Hough transformation is composed of four steps, as follows.

i) The MOI images processed with MBF have zero values for the background and non-zero values for objects. Therefore, edge detection is performed by detecting object-background discontinuities as:

I'(x, y) = 1, if I(x, y) ≠ 0 and {I(x−1, y) = 0 or I(x+1, y) = 0 or I(x, y−1) = 0 or I(x, y+1) = 0}    (3.1)
I'(x, y) = 0, otherwise

where I(x, y) is the input image and I'(x, y) is the edge image.
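Step i) above can be sketched in Python with NumPy, applied to the whole image at once. This vectorized formulation is ours; the zero padding at the border is an assumption (the thesis does not say how border pixels are handled).

```python
import numpy as np

def binary_edges(img):
    """Edge map of an MBF-filtered image (eq. 3.1): a non-zero pixel is
    an edge pixel if at least one of its four neighbors is zero."""
    padded = np.pad(img, 1, mode="constant")   # zero border (assumption)
    nz = padded != 0
    core = nz[1:-1, 1:-1]                      # the pixel itself
    up, down = nz[:-2, 1:-1], nz[2:, 1:-1]     # 4-neighborhood
    left, right = nz[1:-1, :-2], nz[1:-1, 2:]
    # edge: non-zero pixel whose 4-neighborhood is not entirely non-zero
    edge = core & ~(up & down & left & right)
    return edge.astype(np.uint8)
```

For a filled 5x5 square the result is its 16-pixel perimeter, with the 3x3 interior suppressed.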
ii) The gradient and its magnitude are computed for each edge pixel detected in step i) using the Sobel operators:

Sobel_x = [−1 0 1; −2 0 2; −1 0 1],   Sobel_y = [−1 −2 −1; 0 0 0; 1 2 1]    (3.2)

where Sobel_x and Sobel_y are the Sobel operators for the x-axis and y-axis respectively.

g_x = Σ_{i=1..3} Σ_{j=1..3} Sobel_x(i, j) · I(r+i−2, c+j−2)
g_y = Σ_{i=1..3} Σ_{j=1..3} Sobel_y(i, j) · I(r+i−2, c+j−2)    (3.3)
g = √(g_x² + g_y²)

where g_x and g_y are the gradients with respect to the x-axis and y-axis respectively, and g is the gradient magnitude.

iii) Transform the coordinates of each edge pixel into the three-dimensional (x_c, y_c, radius) accumulator. The possible center coordinate is calculated for a predefined range of radius r as:

x_c = x − r·cosθ   (cosθ = g_x / g)
y_c = y − r·sinθ   (sinθ = g_y / g)    (3.4)

where x_c and y_c are the possible center coordinates and r is the given radius. Based on equation (3.4), the accumulator collects the votes for candidate circles as:

A(x_c, y_c, r) = A(x_c, y_c, r) + 1,  for x_c = x − r·cosθ, y_c = y − r·sinθ    (3.5)

where A(x_c, y_c, r) is the 3-dimensional accumulator, and x and y are the coordinates of edge pixels in the input image. The schematic of the voting and accumulator is shown in Figure 3.10.

Figure 3.10: Schematic of voting in the 3-dimensional accumulator in the Hough transformation. (a) Each edge pixel votes for its center coordinate (b) The accumulator collects the votes.

iv) Determine the Hough circle of the rivet by choosing peak values in the accumulator:

if A(x_c, y_c, r_c) ≥ T, then (x_c, y_c) is the center of a circle with radius r_c    (3.6)
else, skip A(x_c, y_c, r_c)

where T is the threshold value. If no accumulator element has a value larger than or equal to the threshold T, the given image is decided not to have a circular object. For correct results of the Hough transformation, some parameters need to be predefined, such as the range of r (r_a ≤ r ≤ r_b) and the threshold value of the votes, T.
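Steps ii)-iv) can be sketched as follows. This is illustrative only: the gradient maps are taken as inputs rather than recomputed with the Sobel masks of equation (3.2), the function name is ours, and the default parameters echo the ranges quoted in the text.

```python
import numpy as np

def hough_circles(edge, grad_x, grad_y, r_min=15, r_max=45, vote_thresh=150):
    """Circular Hough transform of steps ii)-iv): each edge pixel votes,
    along its gradient direction, for candidate centers (x_c, y_c) at every
    radius in [r_min, r_max] (eqs. 3.4-3.5); accumulator cells reaching
    vote_thresh are reported as circles (x_c, y_c, r) (eq. 3.6)."""
    h, w = edge.shape
    acc = np.zeros((h, w, r_max - r_min + 1), dtype=np.int32)
    ys, xs = np.nonzero(edge)
    for x, y in zip(xs, ys):
        g = np.hypot(grad_x[y, x], grad_y[y, x])   # gradient magnitude (eq. 3.3)
        if g == 0:
            continue
        cos_t, sin_t = grad_x[y, x] / g, grad_y[y, x] / g
        for ri, r in enumerate(range(r_min, r_max + 1)):
            xc = int(round(x - r * cos_t))         # candidate center (eq. 3.4)
            yc = int(round(y - r * sin_t))
            if 0 <= xc < w and 0 <= yc < h:
                acc[yc, xc, ri] += 1               # cast a vote (eq. 3.5)
    hits = np.argwhere(acc >= vote_thresh)         # peak selection (eq. 3.6)
    return [(int(x), int(y), r_min + int(ri)) for y, x, ri in hits]
```

On a synthetic circle whose gradients point radially outward, the votes for the true radius pile up at the center, while votes for other radii spread over rings and stay below threshold.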
In our rivet detection algorithm, the range of radius is between 15 and 45 pixels and the threshold value is chosen as 150 experimentally. Typical results of the Hough transformation-based rivet detection are shown in Figure 3.11.

Figure 3.11: (a) and (c) Original MOI images (b) and (d) Rivet detection by Hough transformation.

3.3.3 Morphological Operation Technique

In morphological operation-based rivet detection, the MBF-filtered image is first segmented using connected component analysis to determine the bounding rectangle of each object. Objects broken by a few pixels' distance need to be treated as a single connected object, so a closing operation is performed first. A closing operation is a dilation followed by an erosion. The dilation of a binary image B by structuring element S, denoted B ⊕ S, is defined by [20]:

B ⊕ S = ∪_{b∈B} S_b    (3.5)

where S_b represents a translation of the set of pixels S by the position vector b. The erosion, denoted B ⊖ S, is defined by [20]:

B ⊖ S = {b | b + s ∈ B, ∀ s ∈ S}    (3.6)

Connected component analysis finds objects that are surrounded by background pixels. Two connected components and their bounding rectangles are shown in Figure 3.12.

Figure 3.12: Schematic of connected components and bounding rectangles in an image.

An examination of the width, height, and aspect ratio of the bounding rectangle can quickly determine potential segments of a rivet. The initial center c and radius r are calculated from the bounding rectangle by:

c = I(c_x, c_y)
c_x = (x_1 + x_2) / 2
c_y = (y_1 + y_2) / 2
r = min(c_x − x_1, x_2 − c_x, y_1 − c_y, c_y − y_2)    (3.7)

Then a morphological erosion operation is performed.
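The segmentation stage of the morphological technique (connected component analysis, bounding rectangles, and the initial circle of equation (3.7)) can be sketched as follows. This is an illustrative sketch, not the thesis code: the closing step is omitted for brevity, and the radius formula is written with the usual image convention of y increasing downward, so it reads as the minimum over all four margins.

```python
import numpy as np
from collections import deque

def bounding_boxes(binary):
    """4-connected component analysis on a binary image; returns one
    bounding rectangle (x1, y1, x2, y2) per connected object."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    boxes = []
    for sy, sx in zip(*np.nonzero(binary)):
        if seen[sy, sx]:
            continue
        queue = deque([(sy, sx)])
        seen[sy, sx] = True
        x1 = x2 = sx
        y1 = y2 = sy
        while queue:                       # breadth-first flood fill
            y, x = queue.popleft()
            x1, x2 = min(x1, x), max(x2, x)
            y1, y2 = min(y1, y), max(y2, y)
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    queue.append((ny, nx))
        boxes.append((x1, y1, x2, y2))
    return boxes

def initial_circle(box):
    """Initial center and radius from a bounding rectangle (eq. 3.7)."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    r = min(cx - x1, x2 - cx, cy - y1, y2 - cy)
    return (cx, cy), r
```

The width, height, and aspect ratio checks described in the text would then filter the returned boxes before the erosion stage.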
The erosion operation is performed in each bounding rectangle with a structuring element of size W. The size W is chosen as the largest value for which the eroded image is not empty. The structuring element is described in Figure 3.13.

Figure 3.13: Morphology of the structuring element. (a) Square shape (b) Circular shape.

Either square or circular structuring elements can be used for rivet detection. The square-shaped structuring element is easier to implement than the circular one. The unbiased center c' and radius r' are calculated after the erosion according to:

c' = I(c_x', c_y')
c_x' = (x_1' + x_2') / 2
c_y' = (y_1' + y_2') / 2
r' = min(c_x' − x_1', x_2' − c_x', y_1' − c_y', c_y' − y_2')    (3.6)

The intermediate results of this procedure are shown in Figure 3.14.

Figure 3.14: Example results of intermediate steps of the morphological operation for rivet detection with intermediate variables: (a) initial rivet image (b) after iterative erosion (c) after rivet detection.

The procedure for rivet detection using the morphological operation is summarized in Figure 3.15.

Figure 3.15: Rivet detection via morphological operation: closing, segmenting out connected components, and iterative erosion until no object remains.

rivet ∈ ω_1 if P(ω_1 | x) > P(ω_2 | x); otherwise decide ω_2    (3.7)

where P(ω_1 | x) and P(ω_2 | x) represent posterior probabilities, and x represents the MOI rivet data. The posterior probability P(ω_i | x) is the probability that the data belongs to class ω_i, given data x. From Bayes rule we have:

P(ω_i | x) = p(x | ω_i) P(ω_i) / p(x)    (3.8)

where p(x | ω_i) is the conditional probability density function of class ω_i, P(ω_i) is the prior probability, and p(x) is a normalization factor that makes the posterior probabilities sum to 1. Using Bayes rule, the decision rule with probability density functions and prior probabilities is obtained as:

rivet ∈ ω_1 if p(x | ω_1) P(ω_1) > p(x | ω_2) P(ω_2); otherwise decide ω_2    (3.9)

where p(·) represents a probability density function.
Assuming equal prior probabilities, a decision rule using the conditional probability density functions is obtained as:

rivet ∈ ω_1 if p(x | ω_1) > p(x | ω_2); otherwise decide ω_2    (3.10)

The probability density function p(x | ω_i), assumed to be Gaussian, is expressed in equation (3.11) for the univariate case with estimated mean μ_i and variance σ_i²:

p(x | ω_i) = (1 / √(2πσ_i²)) exp(−(x − μ_i)² / (2σ_i²))    (3.11)

The multivariate probability density function, with estimated mean vector μ̄_i and covariance matrix Σ_i for a feature vector of dimension d, can be expressed as:

p(x̄ | ω_i) = (1 / ((2π)^(d/2) |Σ_i|^(1/2))) exp(−(1/2)(x̄ − μ̄_i)^t Σ_i^(−1) (x̄ − μ̄_i))    (3.12)

3.4.3 Feature Selection

For the Bayesian rivet classifier, a variable d is defined as:

d = √((x − c_x)² + (y − c_y)²) − r    (3.13)

where (x, y) is the coordinate of a point on the edge outside the circle obtained by the rivet detection algorithm. A set of d values is obtained, one for each edge pixel (x, y):

D = {d_i | i = 1, ..., n_e},  where n_e is the number of edge pixels    (3.14)

The negative values of d are discarded from consideration because they do not give information about the defect. A feature f is then computed as:

f = n-th largest element in D    (3.15)

In quicksort, a list S is partitioned about a pivot into

P_1 = {s ∈ S | s ≤ pivot},  P_2 = {s ∈ S | s > pivot}

where P_1 and P_2 are partitions. The partitioning operation is performed on P_1 and P_2 recursively until every partition contains a single element. All partitions are combined at the last step to obtain the sorted list S. In the modified quicksort, each partitioning is performed on only one of the two partitions, the one that contains the k-th largest element. The operation stops when the pivot is the k-th largest element, in which case the pivot is the median value. The modified quicksort-based median search is faster because it minimizes the sorting operation: it runs in Θ(n) expected time, whereas quicksort is Θ(n log n). In moving median with sort, the median value is obtained at the first kernel window by sorting S.
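The modified quicksort (quickselect) search for the k-th largest element can be sketched as follows. This is our iterative formulation with Lomuto partitioning, not the thesis code; the median is obtained by choosing k = (n + 1) / 2 for an odd-length list.

```python
def kth_largest(values, k):
    """Quickselect: recurse into only the partition that holds the k-th
    largest element -- expected O(n), versus O(n log n) for a full sort."""
    vals = list(values)
    target = len(vals) - k          # index of the answer in ascending order
    lo, hi = 0, len(vals) - 1
    while True:
        pivot = vals[hi]            # Lomuto partition about the last element
        i = lo
        for j in range(lo, hi):
            if vals[j] < pivot:
                vals[i], vals[j] = vals[j], vals[i]
                i += 1
        vals[i], vals[hi] = vals[hi], vals[i]   # pivot lands at index i
        if i == target:
            return vals[i]
        if i < target:
            lo = i + 1              # answer lies in the right partition
        else:
            hi = i - 1              # answer lies in the left partition
```

Only one of the two partitions is ever revisited, which is what makes the expected cost linear.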
After moving the window to the next position, w elements are removed from S and w elements are added, so the operation is fast because the pixels that overlap between consecutive windows need not be re-sorted. In moving median with histogram, a histogram is built and maintained during the median filtering. The histogram holds the counts of the values in the current window, so the median value can be found by accumulating the histogram counts from 0 to 255: the first bin at which the accumulated count is larger than or equal to k is the median. This median bin is remembered at each move of the window; as elements are added to and removed from the histogram, the previous median bin is adjusted and the new median is quickly found. A more formal description of the moving median with histogram follows. A histogram of size 256 is defined as:

H(i) = 0,  0 ≤ i ≤ 255    (4-8)

The moving median with histogram initializes the variables as:

H(I(x+i, y+j)) = H(I(x+i, y+j)) + 1,  −w ≤ i, j ≤ w    (4-9)
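A one-dimensional sketch of the moving median with histogram is given below. It is a simplification of the scheme described above: it rescans the histogram from bin 0 at every step instead of adjusting the remembered median bin, and the function name and row-wise setting are ours.

```python
import numpy as np

def moving_median_row(row, W):
    """Slide a window of odd size W along one image row, keeping a
    256-bin histogram of the window contents; the median is the first
    bin where the running count reaches k = (W + 1) / 2."""
    w = W // 2
    k = (W + 1) // 2
    hist = np.zeros(256, dtype=int)
    out = np.empty(len(row) - 2 * w, dtype=row.dtype)
    for v in row[: 2 * w + 1]:          # initialize with the first window
        hist[v] += 1

    def median_from_hist():
        count = 0
        for b in range(256):
            count += hist[b]
            if count >= k:
                return b

    out[0] = median_from_hist()
    for pos in range(1, len(out)):
        hist[row[pos - 1]] -= 1         # element leaving the window
        hist[row[pos + 2 * w]] += 1     # element entering the window
        out[pos] = median_from_hist()
    return out
```

Because only one element enters and one leaves per step, the histogram update is O(1); the incremental median-bin adjustment described in the text would also remove the O(256) rescan.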